Merge android-4.19.36 (10f41ccfc) into msm-4.19
* refs/heads/tmp-10f41ccfc:
  Linux 4.19.36
  appletalk: Fix compile regression
  mm: hide incomplete nr_indirectly_reclaimable in sysfs
  mm: hide incomplete nr_indirectly_reclaimable in /proc/zoneinfo
  IB/hfi1: Failed to drain send queue when QP is put into error state
  bpf: fix use after free in bpf_evict_inode
  include/linux/swap.h: use offsetof() instead of custom __swapoffset macro
  f2fs: fix to dirty inode for i_mode recovery
  rxrpc: Fix client call connect/disconnect race
  lib/div64.c: off by one in shift
  appletalk: Fix use-after-free in atalk_proc_exit
  drm/amdkfd: use init_mqd function to allocate object for hid_mqd (CI)
  ARM: 8839/1: kprobe: make patch_lock a raw_spinlock_t
  drm/nouveau/volt/gf117: fix speedo readout register
  PCI: Blacklist power management of Gigabyte X299 DESIGNARE EX PCIe ports
  coresight: cpu-debug: Support for CA73 CPUs
  Revert "ACPI / EC: Remove old CLEAR_ON_RESUME quirk"
  crypto: axis - fix for recursive locking from bottom half
  drm/panel: panel-innolux: set display off in innolux_panel_unprepare
  lkdtm: Add tests for NULL pointer dereference
  lkdtm: Print real addresses
  soc/tegra: pmc: Drop locking from tegra_powergate_is_powered()
  scsi: core: Avoid that system resume triggers a kernel warning
  iommu/dmar: Fix buffer overflow during PCI bus notification
  net: ip6_gre: fix possible NULL pointer dereference in ip6erspan_set_version
  crypto: sha512/arm - fix crash bug in Thumb2 build
  crypto: sha256/arm - fix crash bug in Thumb2 build
  xfrm: destroy xfrm_state synchronously on net exit path
  net/rds: fix warn in rds_message_alloc_sgs
  ACPI: EC / PM: Disable non-wakeup GPEs for suspend-to-idle
  ALSA: hda: fix front speakers on Huawei MBXP
  drm/ttm: Fix bo_global and mem_global kfree error
  platform/x86: Add Intel AtomISP2 dummy / power-management driver
  kernel: hung_task.c: disable on suspend
  cifs: fallback to older infolevels on findfirst queryinfo retry
  net: stmmac: Set OWN bit for jumbo frames
  f2fs: cleanup dirty pages if recover failed
  netfilter: nf_flow_table: remove flowtable hook flush routine in netns exit routine
  compiler.h: update definition of unreachable()
  KVM: nVMX: restore host state in nested_vmx_vmexit for VMFail
  HID: usbhid: Add quirk for Redragon/Dragonrise Seymur 2
  ACPI / SBS: Fix GPE storm on recent MacBookPro's
  usbip: fix vhci_hcd controller counting
  ARM: samsung: Limit SAMSUNG_PM_CHECK config option to non-Exynos platforms
  pinctrl: core: make sure strcmp() doesn't get a null parameter
  HID: i2c-hid: override HID descriptors for certain devices
  Bluetooth: Fix debugfs NULL pointer dereference
  media: au0828: cannot kfree dev before usb disconnect
  powerpc/pseries: Remove prrn_work workqueue
  serial: uartps: console_setup() can't be placed to init section
  netfilter: xt_cgroup: shrink size of v2 path
  f2fs: fix to do sanity check with current segment number
  ASoC: Fix UBSAN warning at snd_soc_get/put_volsw_sx()
  9p locks: add mount option for lock retry interval
  9p: do not trust pdu content for stat item size
  f2fs: fix to avoid NULL pointer dereference on se->discard_map
  rsi: improve kernel thread handling to fix kernel panic
  gpio: pxa: handle corner case of unprobed device
  drm/cirrus: Use drm_framebuffer_put to avoid kernel oops in clean-up
  ext4: prohibit fstrim in norecovery mode
  x86/gart: Exclude GART aperture from kcore
  fix incorrect error code mapping for OBJECTID_NOT_FOUND
  x86/hw_breakpoints: Make default case in hw_breakpoint_arch_parse() return an error
  iommu/vt-d: Check capability before disabling protected memory
  drm/nouveau/debugfs: Fix check of pm_runtime_get_sync failure
  x86/cpu/cyrix: Use correct macros for Cyrix calls on Geode processors
  x86/hyperv: Prevent potential NULL pointer dereference
  x86/hpet: Prevent potential NULL pointer dereference
  irqchip/mbigen: Don't clear eventid when freeing an MSI
  irqchip/stm32: Don't clear rising/falling config registers at init
  drm/exynos/mixer: fix MIXER shadow registry synchronisation code
  blk-iolatency: #include "blk.h"
  PM / Domains: Avoid a potential deadlock
  ACPI / utils: Drop reference in test for device presence
  perf tests: Fix a memory leak in test__perf_evsel__tp_sched_test()
  perf tests: Fix memory leak by expr__find_other() in test__expr()
  perf tests: Fix a memory leak of cpu_map object in the openat_syscall_event_on_all_cpus test
  perf evsel: Free evsel->counts in perf_evsel__exit()
  perf hist: Add missing map__put() in error case
  perf top: Fix error handling in cmd_top()
  perf build-id: Fix memory leak in print_sdt_events()
  perf config: Fix a memory leak in collect_config()
  perf config: Fix an error in the config template documentation
  perf list: Don't forget to drop the reference to the allocated thread_map
  tools/power turbostat: return the exit status of a command
  x86/mm: Don't leak kernel addresses
  sched/core: Fix buffer overflow in cgroup2 property cpu.max
  sched/cpufreq: Fix 32-bit math overflow
  scsi: iscsi: flush running unbind operations when removing a session
  thermal/intel_powerclamp: fix truncated kthread name
  thermal/int340x_thermal: fix mode setting
  thermal/int340x_thermal: Add additional UUIDs
  thermal: bcm2835: Fix crash in bcm2835_thermal_debugfs
  thermal: samsung: Fix incorrect check after code merge
  thermal/intel_powerclamp: fix __percpu declaration of worker_data
  ALSA: opl3: fix mismatch between snd_opl3_drum_switch definition and declaration
  mmc: davinci: remove extraneous __init annotation
  i40iw: Avoid panic when handling the inetdev event
  IB/mlx4: Fix race condition between catas error reset and aliasguid flows
  drm/udl: use drm_gem_object_put_unlocked.
  auxdisplay: hd44780: Fix memory leak on ->remove()
  ALSA: sb8: add a check for request_region
  ALSA: echoaudio: add a check for ioremap_nocache
  ext4: report real fs size after failed resize
  ext4: add missing brelse() in add_new_gdb_meta_bg()
  ext4: avoid panic during forced reboot
  perf/core: Restore mmap record type correctly
  inotify: Fix fsnotify_mark refcount leak in inotify_update_existing_watch()
  arc: hsdk_defconfig: Enable CONFIG_BLK_DEV_RAM
  ARC: u-boot args: check that magic number is correct
  ANDROID: cuttlefish_defconfig: Enable L2TP/PPTP
  ANDROID: Makefile: Properly resolve 4.19.35 merge
  Make arm64 serial port config compatible with crosvm

Conflicts:
	kernel/sched/cpufreq_schedutil.c

Change-Id: I8f049eb34344f72bf2d202c5e360f448771c78f4
Signed-off-by: Ivaylo Georgiev <irgeorgiev@codeaurora.org>
This commit is contained in: commit 1b74ac0833

139 changed files with 1778 additions and 487 deletions
@@ -7328,6 +7328,12 @@ L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/intel/
 
+INTEL ATOMISP2 DUMMY / POWER-MANAGEMENT DRIVER
+M:	Hans de Goede <hdegoede@redhat.com>
+L:	platform-driver-x86@vger.kernel.org
+S:	Maintained
+F:	drivers/platform/x86/intel_atomisp2_pm.c
+
 INTEL C600 SERIES SAS CONTROLLER DRIVER
 M:	Intel SCU Linux support <intel-linux-scu@intel.com>
 M:	Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Makefile

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 35
+SUBLEVEL = 36
 EXTRAVERSION =
 NAME = "People's Front"
 
@@ -486,7 +486,11 @@ endif
 
 ifeq ($(cc-name),clang)
 ifneq ($(CROSS_COMPILE),)
-CLANG_FLAGS	:= --target=$(notdir $(CROSS_COMPILE:%-=%))
+CLANG_TRIPLE	?= $(CROSS_COMPILE)
+CLANG_FLAGS	:= --target=$(notdir $(CLANG_TRIPLE:%-=%))
+ifeq ($(shell $(srctree)/scripts/clang-android.sh $(CC) $(CLANG_FLAGS)), y)
+$(error "Clang with Android --target detected. Did you specify CLANG_TRIPLE?")
+endif
 GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
 CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)
 GCC_TOOLCHAIN	:= $(realpath $(GCC_TOOLCHAIN_DIR)/..)
@@ -8,6 +8,7 @@ CONFIG_NAMESPACES=y
 # CONFIG_UTS_NS is not set
 # CONFIG_PID_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_BLK_DEV_RAM=y
 CONFIG_EMBEDDED=y
 CONFIG_PERF_EVENTS=y
 # CONFIG_VM_EVENT_COUNTERS is not set
@@ -107,6 +107,7 @@ ENTRY(stext)
 	; r2 = pointer to uboot provided cmdline or external DTB in mem
 	; These are handled later in handle_uboot_args()
 	st	r0, [@uboot_tag]
+	st	r1, [@uboot_magic]
 	st	r2, [@uboot_arg]
 #endif
 
@@ -35,6 +35,7 @@ unsigned int intr_to_DE_cnt;
 
 /* Part of U-boot ABI: see head.S */
 int __initdata uboot_tag;
+int __initdata uboot_magic;
 char __initdata *uboot_arg;
 
 const struct machine_desc *machine_desc;
@@ -484,6 +485,8 @@ static inline bool uboot_arg_invalid(unsigned long addr)
 #define UBOOT_TAG_NONE		0
 #define UBOOT_TAG_CMDLINE	1
 #define UBOOT_TAG_DTB		2
+/* We always pass 0 as magic from U-boot */
+#define UBOOT_MAGIC_VALUE	0
 
 void __init handle_uboot_args(void)
 {
@@ -499,6 +502,11 @@ void __init handle_uboot_args(void)
 		goto ignore_uboot_args;
 	}
 
+	if (uboot_magic != UBOOT_MAGIC_VALUE) {
+		pr_warn(IGNORE_ARGS "non zero uboot magic\n");
+		goto ignore_uboot_args;
+	}
+
 	if (uboot_tag != UBOOT_TAG_NONE &&
 	    uboot_arg_invalid((unsigned long)uboot_arg)) {
 		pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
@@ -212,10 +212,11 @@ K256:
 .global	sha256_block_data_order
 .type	sha256_block_data_order,%function
 sha256_block_data_order:
+.Lsha256_block_data_order:
 #if __ARM_ARCH__<7
 	sub	r3,pc,#8		@ sha256_block_data_order
 #else
-	adr	r3,sha256_block_data_order
+	adr	r3,.Lsha256_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 	ldr	r12,.LOPENSSL_armcap
@@ -93,10 +93,11 @@ K256:
 .global	sha256_block_data_order
 .type	sha256_block_data_order,%function
 sha256_block_data_order:
+.Lsha256_block_data_order:
 #if __ARM_ARCH__<7
 	sub	r3,pc,#8		@ sha256_block_data_order
 #else
-	adr	r3,sha256_block_data_order
+	adr	r3,.Lsha256_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 	ldr	r12,.LOPENSSL_armcap
@@ -274,10 +274,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
 .global	sha512_block_data_order
 .type	sha512_block_data_order,%function
 sha512_block_data_order:
+.Lsha512_block_data_order:
 #if __ARM_ARCH__<7
 	sub	r3,pc,#8		@ sha512_block_data_order
 #else
-	adr	r3,sha512_block_data_order
+	adr	r3,.Lsha512_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 	ldr	r12,.LOPENSSL_armcap
@@ -141,10 +141,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
 .global	sha512_block_data_order
 .type	sha512_block_data_order,%function
 sha512_block_data_order:
+.Lsha512_block_data_order:
 #if __ARM_ARCH__<7
 	sub	r3,pc,#8		@ sha512_block_data_order
 #else
-	adr	r3,sha512_block_data_order
+	adr	r3,.Lsha512_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 	ldr	r12,.LOPENSSL_armcap
@@ -16,7 +16,7 @@ struct patch {
 	unsigned int insn;
 };
 
-static DEFINE_SPINLOCK(patch_lock);
+static DEFINE_RAW_SPINLOCK(patch_lock);
 
 static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
 	__acquires(&patch_lock)
@@ -33,7 +33,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
 		return addr;
 
 	if (flags)
-		spin_lock_irqsave(&patch_lock, *flags);
+		raw_spin_lock_irqsave(&patch_lock, *flags);
 	else
 		__acquire(&patch_lock);
 
@@ -48,7 +48,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
 	clear_fixmap(fixmap);
 
 	if (flags)
-		spin_unlock_irqrestore(&patch_lock, *flags);
+		raw_spin_unlock_irqrestore(&patch_lock, *flags);
 	else
 		__release(&patch_lock);
 }
@@ -256,7 +256,7 @@ config S3C_PM_DEBUG_LED_SMDK
 
 config SAMSUNG_PM_CHECK
 	bool "S3C2410 PM Suspend Memory CRC"
-	depends on PM
+	depends on PM && (PLAT_S3C24XX || ARCH_S3C64XX || ARCH_S5PV210)
 	select CRC32
 	help
 	  Enable the PM code's memory area checksum over sleep. This option
@@ -47,8 +47,6 @@ CONFIG_CP15_BARRIER_EMULATION=y
 CONFIG_SETEND_EMULATION=y
 CONFIG_ARM64_SW_TTBR0_PAN=y
 CONFIG_RANDOMIZE_BASE=y
-CONFIG_CMDLINE="console=ttyAMA0"
-CONFIG_CMDLINE_EXTEND=y
 # CONFIG_EFI is not set
 CONFIG_COMPAT=y
 CONFIG_PM_WAKELOCKS=y
@@ -286,6 +284,7 @@ CONFIG_SERIAL_8250_NR_UARTS=48
 CONFIG_SERIAL_8250_EXTENDED=y
 CONFIG_SERIAL_8250_MANY_PORTS=y
 CONFIG_SERIAL_8250_SHARE_IRQ=y
+CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_AMBA_PL011=y
 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
 CONFIG_VIRTIO_CONSOLE=y
@@ -385,6 +384,7 @@ CONFIG_MMC=y
 # CONFIG_MMC_BLOCK is not set
 CONFIG_RTC_CLASS=y
 # CONFIG_RTC_SYSTOHC is not set
+CONFIG_RTC_DRV_PL030=y
 CONFIG_RTC_DRV_PL031=y
 CONFIG_VIRTIO_PCI=y
 # CONFIG_VIRTIO_PCI_LEGACY is not set
@@ -274,27 +274,16 @@ void pSeries_log_error(char *buf, unsigned int err_type, int fatal)
 }
 
 #ifdef CONFIG_PPC_PSERIES
-static s32 prrn_update_scope;
-
-static void prrn_work_fn(struct work_struct *work)
+static void handle_prrn_event(s32 scope)
 {
 	/*
 	 * For PRRN, we must pass the negative of the scope value in
 	 * the RTAS event.
 	 */
-	pseries_devicetree_update(-prrn_update_scope);
+	pseries_devicetree_update(-scope);
 	numa_update_cpu_topology(false);
 }
 
-static DECLARE_WORK(prrn_work, prrn_work_fn);
-
-static void prrn_schedule_update(u32 scope)
-{
-	flush_work(&prrn_work);
-	prrn_update_scope = scope;
-	schedule_work(&prrn_work);
-}
-
 static void handle_rtas_event(const struct rtas_error_log *log)
 {
 	if (rtas_error_type(log) != RTAS_TYPE_PRRN || !prrn_is_enabled())
@@ -303,7 +292,7 @@ static void handle_rtas_event(const struct rtas_error_log *log)
 	/* For PRRN Events the extended log length is used to denote
 	 * the scope for calling rtas update-nodes.
 	 */
-	prrn_schedule_update(rtas_error_extended_log_length(log));
+	handle_prrn_event(rtas_error_extended_log_length(log));
 }
 
 #else
@@ -91,6 +91,7 @@ CONFIG_IP_ADVANCED_ROUTER=y
 CONFIG_IP_MULTIPLE_TABLES=y
 CONFIG_IP_ROUTE_MULTIPATH=y
 CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_NET_IPGRE_DEMUX=y
 CONFIG_IP_MROUTE=y
 CONFIG_IP_PIMSM_V1=y
 CONFIG_IP_PIMSM_V2=y
@@ -148,6 +149,7 @@ CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
 CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
 CONFIG_NETFILTER_XT_MATCH_HELPER=y
 CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
 CONFIG_NETFILTER_XT_MATCH_LENGTH=y
 CONFIG_NETFILTER_XT_MATCH_LIMIT=y
 CONFIG_NETFILTER_XT_MATCH_MAC=y
@@ -186,6 +188,7 @@ CONFIG_IP6_NF_FILTER=y
 CONFIG_IP6_NF_TARGET_REJECT=y
 CONFIG_IP6_NF_MANGLE=y
 CONFIG_IP6_NF_RAW=y
+CONFIG_L2TP=y
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_HTB=y
 CONFIG_NET_SCH_NETEM=y
@@ -240,6 +243,8 @@ CONFIG_PPP=y
 CONFIG_PPP_BSDCOMP=y
 CONFIG_PPP_DEFLATE=y
 CONFIG_PPP_MPPE=y
+CONFIG_PPTP=y
+CONFIG_PPPOL2TP=y
 CONFIG_USB_RTL8152=y
 CONFIG_USB_USBNET=y
 # CONFIG_USB_NET_AX8817X is not set
@@ -101,9 +101,13 @@ static int hv_cpu_init(unsigned int cpu)
 	u64 msr_vp_index;
 	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
 	void **input_arg;
+	struct page *pg;
 
 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	*input_arg = page_address(alloc_page(GFP_KERNEL));
+	pg = alloc_page(GFP_KERNEL);
+	if (unlikely(!pg))
+		return -ENOMEM;
+	*input_arg = page_address(pg);
 
 	hv_get_vp_index(msr_vp_index);
 
@@ -14,6 +14,7 @@
 #define pr_fmt(fmt) "AGP: " fmt
 
 #include <linux/kernel.h>
+#include <linux/kcore.h>
 #include <linux/types.h>
 #include <linux/init.h>
 #include <linux/memblock.h>
@@ -57,7 +58,7 @@ int fallback_aper_force __initdata;
 
 int fix_aperture __initdata = 1;
 
-#ifdef CONFIG_PROC_VMCORE
+#if defined(CONFIG_PROC_VMCORE) || defined(CONFIG_PROC_KCORE)
 /*
  * If the first kernel maps the aperture over e820 RAM, the kdump kernel will
  * use the same range because it will remain configured in the northbridge.
@@ -66,20 +67,25 @@ int fix_aperture __initdata = 1;
  */
 static unsigned long aperture_pfn_start, aperture_page_count;
 
-static int gart_oldmem_pfn_is_ram(unsigned long pfn)
+static int gart_mem_pfn_is_ram(unsigned long pfn)
 {
 	return likely((pfn < aperture_pfn_start) ||
 		      (pfn >= aperture_pfn_start + aperture_page_count));
 }
 
-static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
+static void __init exclude_from_core(u64 aper_base, u32 aper_order)
 {
 	aperture_pfn_start = aper_base >> PAGE_SHIFT;
 	aperture_page_count = (32 * 1024 * 1024) << aper_order >> PAGE_SHIFT;
-	WARN_ON(register_oldmem_pfn_is_ram(&gart_oldmem_pfn_is_ram));
+#ifdef CONFIG_PROC_VMCORE
+	WARN_ON(register_oldmem_pfn_is_ram(&gart_mem_pfn_is_ram));
+#endif
+#ifdef CONFIG_PROC_KCORE
+	WARN_ON(register_mem_pfn_is_ram(&gart_mem_pfn_is_ram));
+#endif
 }
 #else
-static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
+static void exclude_from_core(u64 aper_base, u32 aper_order)
 {
 }
 #endif
@@ -469,7 +475,7 @@ int __init gart_iommu_hole_init(void)
 			 * may have allocated the range over its e820 RAM
 			 * and fixed up the northbridge
 			 */
-			exclude_from_vmcore(last_aper_base, last_aper_order);
+			exclude_from_core(last_aper_base, last_aper_order);
 
 			return 1;
 		}
@@ -515,7 +521,7 @@ int __init gart_iommu_hole_init(void)
 	 * overlap with the first kernel's memory. We can't access the
 	 * range through vmcore even though it should be part of the dump.
 	 */
-	exclude_from_vmcore(aper_alloc, aper_order);
+	exclude_from_core(aper_alloc, aper_order);
 
 	/* Fix up the north bridges */
 	for (i = 0; i < amd_nb_bus_dev_ranges[i].dev_limit; i++) {
@@ -124,7 +124,7 @@ static void set_cx86_reorder(void)
 	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
 
 	/* Load/Store Serialize to mem access disable (=reorder it) */
-	setCx86_old(CX86_PCR0, getCx86_old(CX86_PCR0) & ~0x80);
+	setCx86(CX86_PCR0, getCx86(CX86_PCR0) & ~0x80);
 	/* set load/store serialize from 1GB to 4GB */
 	ccr3 |= 0xe0;
 	setCx86(CX86_CCR3, ccr3);
@@ -135,11 +135,11 @@ static void set_cx86_memwb(void)
 	pr_info("Enable Memory-Write-back mode on Cyrix/NSC processor.\n");
 
 	/* CCR2 bit 2: unlock NW bit */
-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) & ~0x04);
+	setCx86(CX86_CCR2, getCx86(CX86_CCR2) & ~0x04);
 	/* set 'Not Write-through' */
 	write_cr0(read_cr0() | X86_CR0_NW);
 	/* CCR2 bit 2: lock NW bit and set WT1 */
-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) | 0x14);
+	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x14);
 }
 
 /*
@@ -153,14 +153,14 @@ static void geode_configure(void)
 	local_irq_save(flags);
 
 	/* Suspend on halt power saving and enable #SUSP pin */
-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) | 0x88);
+	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
 
 	ccr3 = getCx86(CX86_CCR3);
 	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10);	/* enable MAPEN */
 
 
 	/* FPU fast, DTE cache, Mem bypass */
-	setCx86_old(CX86_CCR4, getCx86_old(CX86_CCR4) | 0x38);
+	setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x38);
 	setCx86(CX86_CCR3, ccr3);	/* disable MAPEN */
 
 	set_cx86_memwb();
@@ -296,7 +296,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
 		/* GXm supports extended cpuid levels 'ala' AMD */
 		if (c->cpuid_level == 2) {
 			/* Enable cxMMX extensions (GX1 Datasheet 54) */
-			setCx86_old(CX86_CCR7, getCx86_old(CX86_CCR7) | 1);
+			setCx86(CX86_CCR7, getCx86(CX86_CCR7) | 1);
 
 			/*
 			 * GXm : 0x30 ... 0x5f GXm datasheet 51
@@ -319,7 +319,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
 		if (dir1 > 7) {
 			dir0_msn++;  /* M II */
 			/* Enable MMX extensions (App note 108) */
-			setCx86_old(CX86_CCR7, getCx86_old(CX86_CCR7)|1);
+			setCx86(CX86_CCR7, getCx86(CX86_CCR7)|1);
 		} else {
 			/* A 6x86MX - it has the bug. */
 			set_cpu_bug(c, X86_BUG_COMA);
@@ -909,6 +909,8 @@ int __init hpet_enable(void)
 		return 0;
 
 	hpet_set_mapping();
+	if (!hpet_virt_address)
+		return 0;
 
 	/*
	 * Read the period and check for a sane value:
@@ -357,6 +357,7 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
 #endif
 	default:
 		WARN_ON_ONCE(1);
+		return -EINVAL;
 	}
 
 	/*
@@ -599,8 +599,8 @@ static int __init smp_scan_config(unsigned long base, unsigned long length)
 			mpf_base = base;
 			mpf_found = true;
 
-			pr_info("found SMP MP-table at [mem %#010lx-%#010lx] mapped at [%p]\n",
-				base, base + sizeof(*mpf) - 1, mpf);
+			pr_info("found SMP MP-table at [mem %#010lx-%#010lx]\n",
+				base, base + sizeof(*mpf) - 1);
 
 			memblock_reserve(base, sizeof(*mpf));
 			if (mpf->physptr)
@@ -13181,24 +13181,6 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	kvm_clear_interrupt_queue(vcpu);
 }
 
-static void load_vmcs12_mmu_host_state(struct kvm_vcpu *vcpu,
-			struct vmcs12 *vmcs12)
-{
-	u32 entry_failure_code;
-
-	nested_ept_uninit_mmu_context(vcpu);
-
-	/*
-	 * Only PDPTE load can fail as the value of cr3 was checked on entry and
-	 * couldn't have changed.
-	 */
-	if (nested_vmx_load_cr3(vcpu, vmcs12->host_cr3, false, &entry_failure_code))
-		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_PDPTE_FAIL);
-
-	if (!enable_ept)
-		vcpu->arch.walk_mmu->inject_page_fault = kvm_inject_page_fault;
-}
-
 /*
  * A part of what we need to when the nested L2 guest exits and we want to
  * run its L1 parent, is to reset L1's guest state to the host state specified
@@ -13212,6 +13194,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 				   struct vmcs12 *vmcs12)
 {
 	struct kvm_segment seg;
+	u32 entry_failure_code;
 
 	if (vmcs12->vm_exit_controls & VM_EXIT_LOAD_IA32_EFER)
 		vcpu->arch.efer = vmcs12->host_ia32_efer;
@@ -13238,7 +13221,17 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
 	vmx_set_cr4(vcpu, vmcs12->host_cr4);
 
-	load_vmcs12_mmu_host_state(vcpu, vmcs12);
+	nested_ept_uninit_mmu_context(vcpu);
+
+	/*
+	 * Only PDPTE load can fail as the value of cr3 was checked on entry and
+	 * couldn't have changed.
+	 */
+	if (nested_vmx_load_cr3(vcpu, vmcs12->host_cr3, false, &entry_failure_code))
+		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_PDPTE_FAIL);
+
+	if (!enable_ept)
+		vcpu->arch.walk_mmu->inject_page_fault = kvm_inject_page_fault;
 
 	/*
 	 * If vmcs01 don't use VPID, CPU flushes TLB on every
@@ -13334,6 +13327,140 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
 }
 
+static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
+{
+	struct shared_msr_entry *efer_msr;
+	unsigned int i;
+
+	if (vm_entry_controls_get(vmx) & VM_ENTRY_LOAD_IA32_EFER)
+		return vmcs_read64(GUEST_IA32_EFER);
+
+	if (cpu_has_load_ia32_efer)
+		return host_efer;
+
+	for (i = 0; i < vmx->msr_autoload.guest.nr; ++i) {
+		if (vmx->msr_autoload.guest.val[i].index == MSR_EFER)
+			return vmx->msr_autoload.guest.val[i].value;
+	}
+
+	efer_msr = find_msr_entry(vmx, MSR_EFER);
+	if (efer_msr)
+		return efer_msr->data;
+
+	return host_efer;
+}
+
+static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
+{
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+	struct vmx_msr_entry g, h;
+	struct msr_data msr;
+	gpa_t gpa;
+	u32 i, j;
+
+	vcpu->arch.pat = vmcs_read64(GUEST_IA32_PAT);
+
+	if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) {
+		/*
+		 * L1's host DR7 is lost if KVM_GUESTDBG_USE_HW_BP is set
+		 * as vmcs01.GUEST_DR7 contains a userspace defined value
+		 * and vcpu->arch.dr7 is not squirreled away before the
+		 * nested VMENTER (not worth adding a variable in nested_vmx).
+		 */
+		if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)
+			kvm_set_dr(vcpu, 7, DR7_FIXED_1);
+		else
+			WARN_ON(kvm_set_dr(vcpu, 7, vmcs_readl(GUEST_DR7)));
+	}
+
+	/*
+	 * Note that calling vmx_set_{efer,cr0,cr4} is important as they
+	 * handle a variety of side effects to KVM's software model.
+	 */
+	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
+
+	vcpu->arch.cr0_guest_owned_bits = X86_CR0_TS;
+	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
+
+	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
+	vmx_set_cr4(vcpu, vmcs_readl(CR4_READ_SHADOW));
+
+	nested_ept_uninit_mmu_context(vcpu);
+	vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
+	__set_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail);
+
+	/*
+	 * Use ept_save_pdptrs(vcpu) to load the MMU's cached PDPTRs
+	 * from vmcs01 (if necessary).  The PDPTRs are not loaded on
+	 * VMFail, like everything else we just need to ensure our
+	 * software model is up-to-date.
+	 */
+	ept_save_pdptrs(vcpu);
+
+	kvm_mmu_reset_context(vcpu);
+
+	if (cpu_has_vmx_msr_bitmap())
+		vmx_update_msr_bitmap(vcpu);
+
+	/*
+	 * This nasty bit of open coding is a compromise between blindly
+	 * loading L1's MSRs using the exit load lists (incorrect emulation
+	 * of VMFail), leaving the nested VM's MSRs in the software model
+	 * (incorrect behavior) and snapshotting the modified MSRs (too
+	 * expensive since the lists are unbound by hardware).  For each
+	 * MSR that was (prematurely) loaded from the nested VMEntry load
+	 * list, reload it from the exit load list if it exists and differs
+	 * from the guest value.  The intent is to stuff host state as
+	 * silently as possible, not to fully process the exit load list.
+	 */
+	msr.host_initiated = false;
+	for (i = 0; i < vmcs12->vm_entry_msr_load_count; i++) {
+		gpa = vmcs12->vm_entry_msr_load_addr + (i * sizeof(g));
+		if (kvm_vcpu_read_guest(vcpu, gpa, &g, sizeof(g))) {
+			pr_debug_ratelimited(
+				"%s read MSR index failed (%u, 0x%08llx)\n",
+				__func__, i, gpa);
+			goto vmabort;
+		}
+
+		for (j = 0; j < vmcs12->vm_exit_msr_load_count; j++) {
+			gpa = vmcs12->vm_exit_msr_load_addr + (j * sizeof(h));
+			if (kvm_vcpu_read_guest(vcpu, gpa, &h, sizeof(h))) {
+				pr_debug_ratelimited(
+					"%s read MSR failed (%u, 0x%08llx)\n",
+					__func__, j, gpa);
+				goto vmabort;
+			}
+			if (h.index != g.index)
+				continue;
+			if (h.value == g.value)
+				break;
+
+			if (nested_vmx_load_msr_check(vcpu, &h)) {
+				pr_debug_ratelimited(
+					"%s check failed (%u, 0x%x, 0x%x)\n",
+					__func__, j, h.index, h.reserved);
+				goto vmabort;
+			}
+
+			msr.index = h.index;
+			msr.data = h.value;
+			if (kvm_set_msr(vcpu, &msr)) {
+				pr_debug_ratelimited(
+					"%s WRMSR failed (%u, 0x%x, 0x%llx)\n",
+					__func__, j, h.index, h.value);
+				goto vmabort;
+			}
+			break;
+		}
+	}
+
+	return;
+
+vmabort:
+	nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
+}
+
 /*
  * Emulate an exit from nested guest (L2) to L1, i.e., prepare to run L1
  * and modify vmcs12 to make it see what it would expect to see there if
@@ -13478,7 +13605,13 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 	 */
 	nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
 
-	load_vmcs12_mmu_host_state(vcpu, vmcs12);
+	/*
+	 * Restore L1's host state to KVM's software model.  We're here
+	 * because a consistency check was caught by hardware, which
+	 * means some amount of guest state has been propagated to KVM's
+	 * model and needs to be unwound to the host's state.
+	 */
+	nested_vmx_restore_host_state(vcpu);
 
 	/*
 	 * The emulated instruction was already skipped in
@@ -75,6 +75,7 @@
 #include <linux/blk-mq.h>
 #include "blk-rq-qos.h"
 #include "blk-stat.h"
+#include "blk.h"
 
 #define DEFAULT_SCALE_COOKIE 1000000U
 
|
@@ -194,6 +194,7 @@ static struct workqueue_struct *ec_query_wq;
static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */
static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */

/* --------------------------------------------------------------------------
 * Logging/Debugging

@@ -499,6 +500,26 @@ static inline void __acpi_ec_disable_event(struct acpi_ec *ec)
	ec_log_drv("event blocked");
}

/*
 * Process _Q events that might have accumulated in the EC.
 * Run with locked ec mutex.
 */
static void acpi_ec_clear(struct acpi_ec *ec)
{
	int i, status;
	u8 value = 0;

	for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {
		status = acpi_ec_query(ec, &value);
		if (status || !value)
			break;
	}
	if (unlikely(i == ACPI_EC_CLEAR_MAX))
		pr_warn("Warning: Maximum of %d stale EC events cleared\n", i);
	else
		pr_info("%d stale EC events cleared\n", i);
}

static void acpi_ec_enable_event(struct acpi_ec *ec)
{
	unsigned long flags;

@@ -507,6 +528,10 @@ static void acpi_ec_enable_event(struct acpi_ec *ec)
	if (acpi_ec_started(ec))
		__acpi_ec_enable_event(ec);
	spin_unlock_irqrestore(&ec->lock, flags);

	/* Drain additional events if hardware requires that */
	if (EC_FLAGS_CLEAR_ON_RESUME)
		acpi_ec_clear(ec);
}

#ifdef CONFIG_PM_SLEEP

@@ -1034,6 +1059,18 @@ void acpi_ec_unblock_transactions(void)
	acpi_ec_start(first_ec, true);
}

void acpi_ec_mark_gpe_for_wake(void)
{
	if (first_ec && !ec_no_wakeup)
		acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
}

void acpi_ec_set_gpe_wake_mask(u8 action)
{
	if (first_ec && !ec_no_wakeup)
		acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}

void acpi_ec_dispatch_gpe(void)
{
	if (first_ec)

@@ -1808,6 +1845,31 @@ static int ec_flag_query_handshake(const struct dmi_system_id *id)
}
#endif

/*
 * On some hardware it is necessary to clear events accumulated by the EC during
 * sleep. These ECs stop reporting GPEs until they are manually polled, if too
 * many events are accumulated. (e.g. Samsung Series 5/9 notebooks)
 *
 * https://bugzilla.kernel.org/show_bug.cgi?id=44161
 *
 * Ideally, the EC should also be instructed NOT to accumulate events during
 * sleep (which Windows seems to do somehow), but the interface to control this
 * behaviour is not known at this time.
 *
 * Models known to be affected are Samsung 530Uxx/535Uxx/540Uxx/550Pxx/900Xxx,
 * however it is very likely that other Samsung models are affected.
 *
 * On systems which don't accumulate _Q events during sleep, this extra check
 * should be harmless.
 */
static int ec_clear_on_resume(const struct dmi_system_id *id)
{
	pr_debug("Detected system needing EC poll on resume.\n");
	EC_FLAGS_CLEAR_ON_RESUME = 1;
	ec_event_clearing = ACPI_EC_EVT_TIMING_STATUS;
	return 0;
}

/*
 * Some ECDTs contain wrong register addresses.
 * MSI MS-171F

@@ -1857,6 +1919,9 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
	ec_honor_ecdt_gpe, "ASUS X580VD", {
	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
	DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
	{
	ec_clear_on_resume, "Samsung hardware", {
	DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
	{},
};
@@ -188,6 +188,8 @@ int acpi_ec_ecdt_probe(void);
int acpi_ec_dsdt_probe(void);
void acpi_ec_block_transactions(void);
void acpi_ec_unblock_transactions(void);
void acpi_ec_mark_gpe_for_wake(void);
void acpi_ec_set_gpe_wake_mask(u8 action);
void acpi_ec_dispatch_gpe(void);
int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
			      acpi_handle handle, acpi_ec_query_func func,
@@ -441,9 +441,13 @@ static int acpi_ac_get_present(struct acpi_sbs *sbs)

	/*
	 * The spec requires that bit 4 always be 1. If it's not set, assume
	 * that the implementation doesn't support an SBS charger
	 * that the implementation doesn't support an SBS charger.
	 *
	 * And on some MacBooks a status of 0xffff is always returned, no
	 * matter whether the charger is plugged in or not, which is also
	 * wrong, so ignore the SBS charger for those too.
	 */
	if (!((status >> 4) & 0x1))
	if (!((status >> 4) & 0x1) || status == 0xffff)
		return -ENODEV;

	sbs->charger_present = (status >> 15) & 0x1;
@@ -940,6 +940,8 @@ static int lps0_device_attach(struct acpi_device *adev,

		acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
				  bitmask);

		acpi_ec_mark_gpe_for_wake();
	} else {
		acpi_handle_debug(adev->handle,
				  "_DSM function 0 evaluation failed\n");

@@ -968,11 +970,16 @@ static int acpi_s2idle_prepare(void)
	if (lps0_device_handle) {
		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
		acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);

		acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE);
	}

	if (acpi_sci_irq_valid())
		enable_irq_wake(acpi_sci_irq);

	/* Change the configuration of GPEs to avoid spurious wakeup. */
	acpi_enable_all_wakeup_gpes();
	acpi_os_wait_events_complete();
	return 0;
}

@@ -1017,10 +1024,14 @@ static void acpi_s2idle_sync(void)

static void acpi_s2idle_restore(void)
{
	acpi_enable_all_runtime_gpes();

	if (acpi_sci_irq_valid())
		disable_irq_wake(acpi_sci_irq);

	if (lps0_device_handle) {
		acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE);

		acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
		acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
	}
@@ -800,6 +800,7 @@ bool acpi_dev_present(const char *hid, const char *uid, s64 hrv)
	match.hrv = hrv;

	dev = bus_find_device(&acpi_bus_type, NULL, &match, acpi_dev_match_cb);
	put_device(dev);
	return !!dev;
}
EXPORT_SYMBOL(acpi_dev_present);
@@ -299,6 +299,8 @@ static int hd44780_remove(struct platform_device *pdev)
	struct charlcd *lcd = platform_get_drvdata(pdev);

	charlcd_unregister(lcd);

	kfree(lcd);
	return 0;
}
@@ -197,11 +197,16 @@ static ssize_t node_read_vmstat(struct device *dev,
			     sum_zone_numa_state(nid, i));
#endif

	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
		/* Skip hidden vmstat items. */
		if (*vmstat_text[i + NR_VM_ZONE_STAT_ITEMS +
				 NR_VM_NUMA_STAT_ITEMS] == '\0')
			continue;
		n += sprintf(buf+n, "%s %lu\n",
			     vmstat_text[i + NR_VM_ZONE_STAT_ITEMS +
			     NR_VM_NUMA_STAT_ITEMS],
			     node_page_state(pgdat, i));
	}

	return n;
}
@@ -1388,12 +1388,12 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
	if (IS_ERR(gpd_data))
		return PTR_ERR(gpd_data);

	genpd_lock(genpd);

	ret = genpd->attach_dev ? genpd->attach_dev(genpd, dev) : 0;
	if (ret)
		goto out;

	genpd_lock(genpd);

	dev_pm_domain_set(dev, &genpd->domain);

	genpd->device_count++;

@@ -1401,9 +1401,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,

	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);

out:
	genpd_unlock(genpd);

out:
	if (ret)
		genpd_free_dev_data(dev, gpd_data);
	else

@@ -1452,15 +1451,15 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
	genpd->device_count--;
	genpd->max_off_time_changed = true;

	if (genpd->detach_dev)
		genpd->detach_dev(genpd, dev);

	dev_pm_domain_set(dev, NULL);

	list_del_init(&pdd->list_node);

	genpd_unlock(genpd);

	if (genpd->detach_dev)
		genpd->detach_dev(genpd, dev);

	genpd_free_dev_data(dev, gpd_data);

	return 0;
@@ -284,6 +284,7 @@ enum artpec6_crypto_hash_flags {

struct artpec6_crypto_req_common {
	struct list_head list;
	struct list_head complete_in_progress;
	struct artpec6_crypto_dma_descriptors *dma;
	struct crypto_async_request *req;
	void (*complete)(struct crypto_async_request *req);

@@ -2046,7 +2047,8 @@ static int artpec6_crypto_prepare_aead(struct aead_request *areq)
	return artpec6_crypto_dma_map_descs(common);
}

static void artpec6_crypto_process_queue(struct artpec6_crypto *ac)
static void artpec6_crypto_process_queue(struct artpec6_crypto *ac,
					 struct list_head *completions)
{
	struct artpec6_crypto_req_common *req;

@@ -2057,7 +2059,7 @@ static void artpec6_crypto_process_queue(struct artpec6_crypto *ac)
		list_move_tail(&req->list, &ac->pending);
		artpec6_crypto_start_dma(req);

		req->req->complete(req->req, -EINPROGRESS);
		list_add_tail(&req->complete_in_progress, completions);
	}

	/*

@@ -2087,6 +2089,11 @@ static void artpec6_crypto_task(unsigned long data)
	struct artpec6_crypto *ac = (struct artpec6_crypto *)data;
	struct artpec6_crypto_req_common *req;
	struct artpec6_crypto_req_common *n;
	struct list_head complete_done;
	struct list_head complete_in_progress;

	INIT_LIST_HEAD(&complete_done);
	INIT_LIST_HEAD(&complete_in_progress);

	if (list_empty(&ac->pending)) {
		pr_debug("Spurious IRQ\n");

@@ -2120,19 +2127,30 @@ static void artpec6_crypto_task(unsigned long data)

		pr_debug("Completing request %p\n", req);

		list_del(&req->list);
		list_move_tail(&req->list, &complete_done);

		artpec6_crypto_dma_unmap_all(req);
		artpec6_crypto_copy_bounce_buffers(req);

		ac->pending_count--;
		artpec6_crypto_common_destroy(req);
	}

	artpec6_crypto_process_queue(ac, &complete_in_progress);

	spin_unlock_bh(&ac->queue_lock);

	/* Perform the completion callbacks without holding the queue lock
	 * to allow new request submissions from the callbacks.
	 */
	list_for_each_entry_safe(req, n, &complete_done, list) {
		req->complete(req->req);
	}

	artpec6_crypto_process_queue(ac);

	spin_unlock_bh(&ac->queue_lock);
	list_for_each_entry_safe(req, n, &complete_in_progress,
				 complete_in_progress) {
		req->req->complete(req->req, -EINPROGRESS);
	}
}

static void artpec6_crypto_complete_crypto(struct crypto_async_request *req)
@@ -777,6 +777,9 @@ static int pxa_gpio_suspend(void)
	struct pxa_gpio_bank *c;
	int gpio;

	if (!pchip)
		return 0;

	for_each_gpio_bank(gpio, c, pchip) {
		c->saved_gplr = readl_relaxed(c->regbase + GPLR_OFFSET);
		c->saved_gpdr = readl_relaxed(c->regbase + GPDR_OFFSET);

@@ -795,6 +798,9 @@ static void pxa_gpio_resume(void)
	struct pxa_gpio_bank *c;
	int gpio;

	if (!pchip)
		return;

	for_each_gpio_bank(gpio, c, pchip) {
		/* restore level with set/clear */
		writel_relaxed(c->saved_gplr, c->regbase + GPSR_OFFSET);
@@ -323,57 +323,7 @@ static int init_mqd_hiq(struct mqd_manager *mm, void **mqd,
		struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
		struct queue_properties *q)
{
	uint64_t addr;
	struct cik_mqd *m;
	int retval;

	retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct cik_mqd),
			mqd_mem_obj);

	if (retval != 0)
		return -ENOMEM;

	m = (struct cik_mqd *) (*mqd_mem_obj)->cpu_ptr;
	addr = (*mqd_mem_obj)->gpu_addr;

	memset(m, 0, ALIGN(sizeof(struct cik_mqd), 256));

	m->header = 0xC0310800;
	m->compute_pipelinestat_enable = 1;
	m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
	m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
	m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
	m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;

	m->cp_hqd_persistent_state = DEFAULT_CP_HQD_PERSISTENT_STATE |
					PRELOAD_REQ;
	m->cp_hqd_quantum = QUANTUM_EN | QUANTUM_SCALE_1MS |
				QUANTUM_DURATION(10);

	m->cp_mqd_control = MQD_CONTROL_PRIV_STATE_EN;
	m->cp_mqd_base_addr_lo = lower_32_bits(addr);
	m->cp_mqd_base_addr_hi = upper_32_bits(addr);

	m->cp_hqd_ib_control = DEFAULT_MIN_IB_AVAIL_SIZE;

	/*
	 * Pipe Priority
	 * Identifies the pipe relative priority when this queue is connected
	 * to the pipeline. The pipe priority is against the GFX pipe and HP3D.
	 * In KFD we are using a fixed pipe priority set to CS_MEDIUM.
	 * 0 = CS_LOW (typically below GFX)
	 * 1 = CS_MEDIUM (typically between HP3D and GFX
	 * 2 = CS_HIGH (typically above HP3D)
	 */
	m->cp_hqd_pipe_priority = 1;
	m->cp_hqd_queue_priority = 15;

	*mqd = m;
	if (gart_addr)
		*gart_addr = addr;
	retval = mm->update_mqd(mm, m, q);

	return retval;
	return init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
}

static int update_mqd_hiq(struct mqd_manager *mm, void *mqd,
@@ -146,7 +146,7 @@ struct cirrus_device {

struct cirrus_fbdev {
	struct drm_fb_helper helper;
	struct drm_framebuffer gfb;
	struct drm_framebuffer *gfb;
	void *sysram;
	int size;
	int x1, y1, x2, y2; /* dirty rect */
@@ -22,14 +22,14 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
	struct drm_gem_object *obj;
	struct cirrus_bo *bo;
	int src_offset, dst_offset;
	int bpp = afbdev->gfb.format->cpp[0];
	int bpp = afbdev->gfb->format->cpp[0];
	int ret = -EBUSY;
	bool unmap = false;
	bool store_for_later = false;
	int x2, y2;
	unsigned long flags;

	obj = afbdev->gfb.obj[0];
	obj = afbdev->gfb->obj[0];
	bo = gem_to_cirrus_bo(obj);

	/*

@@ -82,7 +82,7 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
	}
	for (i = y; i < y + height; i++) {
		/* assume equal stride for now */
		src_offset = dst_offset = i * afbdev->gfb.pitches[0] + (x * bpp);
		src_offset = dst_offset = i * afbdev->gfb->pitches[0] + (x * bpp);
		memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, width * bpp);

	}

@@ -192,23 +192,26 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
		return -ENOMEM;

	info = drm_fb_helper_alloc_fbi(helper);
	if (IS_ERR(info))
		return PTR_ERR(info);
	if (IS_ERR(info)) {
		ret = PTR_ERR(info);
		goto err_vfree;
	}

	info->par = gfbdev;

	ret = cirrus_framebuffer_init(cdev->dev, &gfbdev->gfb, &mode_cmd, gobj);
	fb = kzalloc(sizeof(*fb), GFP_KERNEL);
	if (!fb) {
		ret = -ENOMEM;
		goto err_drm_gem_object_put_unlocked;
	}

	ret = cirrus_framebuffer_init(cdev->dev, fb, &mode_cmd, gobj);
	if (ret)
		return ret;
		goto err_kfree;

	gfbdev->sysram = sysram;
	gfbdev->size = size;

	fb = &gfbdev->gfb;
	if (!fb) {
		DRM_INFO("fb is NULL\n");
		return -EINVAL;
	}
	gfbdev->gfb = fb;

	/* setup helper */
	gfbdev->helper.fb = fb;

@@ -241,24 +244,27 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
	DRM_INFO(" pitch is %d\n", fb->pitches[0]);

	return 0;

err_kfree:
	kfree(fb);
err_drm_gem_object_put_unlocked:
	drm_gem_object_put_unlocked(gobj);
err_vfree:
	vfree(sysram);
	return ret;
}

static int cirrus_fbdev_destroy(struct drm_device *dev,
				struct cirrus_fbdev *gfbdev)
{
	struct drm_framebuffer *gfb = &gfbdev->gfb;
	struct drm_framebuffer *gfb = gfbdev->gfb;

	drm_fb_helper_unregister_fbi(&gfbdev->helper);

	if (gfb->obj[0]) {
		drm_gem_object_put_unlocked(gfb->obj[0]);
		gfb->obj[0] = NULL;
	}

	vfree(gfbdev->sysram);
	drm_fb_helper_fini(&gfbdev->helper);
	drm_framebuffer_unregister_private(gfb);
	drm_framebuffer_cleanup(gfb);
	if (gfb)
		drm_framebuffer_put(gfb);

	return 0;
}
@@ -127,7 +127,7 @@ static int cirrus_crtc_do_set_base(struct drm_crtc *crtc,
		return ret;
	}

	if (&cdev->mode_info.gfbdev->gfb == crtc->primary->fb) {
	if (cdev->mode_info.gfbdev->gfb == crtc->primary->fb) {
		/* if pushing console in kmap it */
		ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
		if (ret)
@@ -20,6 +20,7 @@
#include "regs-vp.h"

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/i2c.h>

@@ -337,15 +338,62 @@ static void mixer_cfg_vp_blend(struct mixer_context *ctx)
	mixer_reg_write(ctx, MXR_VIDEO_CFG, val);
}

static void mixer_vsync_set_update(struct mixer_context *ctx, bool enable)
static bool mixer_is_synced(struct mixer_context *ctx)
{
	/* block update on vsync */
	mixer_reg_writemask(ctx, MXR_STATUS, enable ?
		MXR_STATUS_SYNC_ENABLE : 0, MXR_STATUS_SYNC_ENABLE);
	u32 base, shadow;

	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
	    ctx->mxr_ver == MXR_VER_128_0_0_184)
		return !(mixer_reg_read(ctx, MXR_CFG) &
			 MXR_CFG_LAYER_UPDATE_COUNT_MASK);

	if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags) &&
	    vp_reg_read(ctx, VP_SHADOW_UPDATE))
		return false;

	base = mixer_reg_read(ctx, MXR_CFG);
	shadow = mixer_reg_read(ctx, MXR_CFG_S);
	if (base != shadow)
		return false;

	base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(0));
	shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(0));
	if (base != shadow)
		return false;

	base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(1));
	shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(1));
	if (base != shadow)
		return false;

	return true;
}

static int mixer_wait_for_sync(struct mixer_context *ctx)
{
	ktime_t timeout = ktime_add_us(ktime_get(), 100000);

	while (!mixer_is_synced(ctx)) {
		usleep_range(1000, 2000);
		if (ktime_compare(ktime_get(), timeout) > 0)
			return -ETIMEDOUT;
	}
	return 0;
}

static void mixer_disable_sync(struct mixer_context *ctx)
{
	mixer_reg_writemask(ctx, MXR_STATUS, 0, MXR_STATUS_SYNC_ENABLE);
}

static void mixer_enable_sync(struct mixer_context *ctx)
{
	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
	    ctx->mxr_ver == MXR_VER_128_0_0_184)
		mixer_reg_writemask(ctx, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
	mixer_reg_writemask(ctx, MXR_STATUS, ~0, MXR_STATUS_SYNC_ENABLE);
	if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags))
		vp_reg_write(ctx, VP_SHADOW_UPDATE, enable ?
			VP_SHADOW_UPDATE_ENABLE : 0);
		vp_reg_write(ctx, VP_SHADOW_UPDATE, VP_SHADOW_UPDATE_ENABLE);
}

static void mixer_cfg_scan(struct mixer_context *ctx, int width, int height)

@@ -482,7 +530,6 @@ static void vp_video_buffer(struct mixer_context *ctx,

	spin_lock_irqsave(&ctx->reg_slock, flags);

	vp_reg_write(ctx, VP_SHADOW_UPDATE, 1);
	/* interlace or progressive scan mode */
	val = (test_bit(MXR_BIT_INTERLACE, &ctx->flags) ? ~0 : 0);
	vp_reg_writemask(ctx, VP_MODE, val, VP_MODE_LINE_SKIP);

@@ -537,11 +584,6 @@ static void vp_video_buffer(struct mixer_context *ctx,
	vp_regs_dump(ctx);
}

static void mixer_layer_update(struct mixer_context *ctx)
{
	mixer_reg_writemask(ctx, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
}

static void mixer_graph_buffer(struct mixer_context *ctx,
			       struct exynos_drm_plane *plane)
{

@@ -618,11 +660,6 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
	mixer_cfg_layer(ctx, win, priority, true);
	mixer_cfg_gfx_blend(ctx, win, fb->format->has_alpha);

	/* layer update mandatory for mixer 16.0.33.0 */
	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
	    ctx->mxr_ver == MXR_VER_128_0_0_184)
		mixer_layer_update(ctx);

	spin_unlock_irqrestore(&ctx->reg_slock, flags);

	mixer_regs_dump(ctx);

@@ -687,7 +724,7 @@ static void mixer_win_reset(struct mixer_context *ctx)
static irqreturn_t mixer_irq_handler(int irq, void *arg)
{
	struct mixer_context *ctx = arg;
	u32 val, base, shadow;
	u32 val;

	spin_lock(&ctx->reg_slock);

@@ -701,26 +738,9 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
		val &= ~MXR_INT_STATUS_VSYNC;

		/* interlace scan need to check shadow register */
		if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)) {
			if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags) &&
			    vp_reg_read(ctx, VP_SHADOW_UPDATE))
				goto out;

			base = mixer_reg_read(ctx, MXR_CFG);
			shadow = mixer_reg_read(ctx, MXR_CFG_S);
			if (base != shadow)
				goto out;

			base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(0));
			shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(0));
			if (base != shadow)
				goto out;

			base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(1));
			shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(1));
			if (base != shadow)
				goto out;
		}
		if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)
		    && !mixer_is_synced(ctx))
			goto out;

		drm_crtc_handle_vblank(&ctx->crtc->base);
	}

@@ -895,12 +915,14 @@ static void mixer_disable_vblank(struct exynos_drm_crtc *crtc)

static void mixer_atomic_begin(struct exynos_drm_crtc *crtc)
{
	struct mixer_context *mixer_ctx = crtc->ctx;
	struct mixer_context *ctx = crtc->ctx;

	if (!test_bit(MXR_BIT_POWERED, &mixer_ctx->flags))
	if (!test_bit(MXR_BIT_POWERED, &ctx->flags))
		return;

	mixer_vsync_set_update(mixer_ctx, false);
	if (mixer_wait_for_sync(ctx))
		dev_err(ctx->dev, "timeout waiting for VSYNC\n");
	mixer_disable_sync(ctx);
}

static void mixer_update_plane(struct exynos_drm_crtc *crtc,

@@ -942,7 +964,7 @@ static void mixer_atomic_flush(struct exynos_drm_crtc *crtc)
	if (!test_bit(MXR_BIT_POWERED, &mixer_ctx->flags))
		return;

	mixer_vsync_set_update(mixer_ctx, true);
	mixer_enable_sync(mixer_ctx);
	exynos_crtc_handle_event(crtc);
}

@@ -957,7 +979,7 @@ static void mixer_enable(struct exynos_drm_crtc *crtc)

	exynos_drm_pipe_clk_enable(crtc, true);

	mixer_vsync_set_update(ctx, false);
	mixer_disable_sync(ctx);

	mixer_reg_writemask(ctx, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET);

@@ -970,7 +992,7 @@ static void mixer_enable(struct exynos_drm_crtc *crtc)

	mixer_commit(ctx);

	mixer_vsync_set_update(ctx, true);
	mixer_enable_sync(ctx);

	set_bit(MXR_BIT_POWERED, &ctx->flags);
}
@@ -38,6 +38,7 @@ int nvkm_volt_set_id(struct nvkm_volt *, u8 id, u8 min_id, u8 temp,

int nv40_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
int gf100_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
int gf117_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
int gk104_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
int gk20a_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
int gm20b_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
@@ -161,7 +161,7 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
	}

	ret = pm_runtime_get_sync(drm->dev);
	if (IS_ERR_VALUE(ret) && ret != -EACCES)
	if (ret < 0 && ret != -EACCES)
		return ret;
	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
	pm_runtime_put_autosuspend(drm->dev);
@@ -1613,7 +1613,7 @@ nvd7_chipset = {
	.pci = gf106_pci_new,
	.therm = gf119_therm_new,
	.timer = nv41_timer_new,
	.volt = gf100_volt_new,
	.volt = gf117_volt_new,
	.ce[0] = gf100_ce_new,
	.disp = gf119_disp_new,
	.dma = gf119_dma_new,
@@ -2,6 +2,7 @@ nvkm-y += nvkm/subdev/volt/base.o
nvkm-y += nvkm/subdev/volt/gpio.o
nvkm-y += nvkm/subdev/volt/nv40.o
nvkm-y += nvkm/subdev/volt/gf100.o
nvkm-y += nvkm/subdev/volt/gf117.o
nvkm-y += nvkm/subdev/volt/gk104.o
nvkm-y += nvkm/subdev/volt/gk20a.o
nvkm-y += nvkm/subdev/volt/gm20b.o
drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf117.c (new file, 60 lines)
@@ -0,0 +1,60 @@
/*
 * Copyright 2019 Ilia Mirkin
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Ilia Mirkin
 */
#include "priv.h"

#include <subdev/fuse.h>

static int
gf117_volt_speedo_read(struct nvkm_volt *volt)
{
	struct nvkm_device *device = volt->subdev.device;
	struct nvkm_fuse *fuse = device->fuse;

	if (!fuse)
		return -EINVAL;

	return nvkm_fuse_read(fuse, 0x3a8);
}

static const struct nvkm_volt_func
gf117_volt = {
	.oneinit = gf100_volt_oneinit,
	.vid_get = nvkm_voltgpio_get,
	.vid_set = nvkm_voltgpio_set,
	.speedo_read = gf117_volt_speedo_read,
};

int
gf117_volt_new(struct nvkm_device *device, int index, struct nvkm_volt **pvolt)
{
	struct nvkm_volt *volt;
	int ret;

	ret = nvkm_volt_new_(&gf117_volt, device, index, &volt);
	*pvolt = volt;
	if (ret)
		return ret;

	return nvkm_voltgpio_init(volt);
}
@@ -70,18 +70,12 @@ static inline struct innolux_panel *to_innolux_panel(struct drm_panel *panel)
static int innolux_panel_disable(struct drm_panel *panel)
{
	struct innolux_panel *innolux = to_innolux_panel(panel);
	int err;

	if (!innolux->enabled)
		return 0;

	backlight_disable(innolux->backlight);

	err = mipi_dsi_dcs_set_display_off(innolux->link);
	if (err < 0)
		DRM_DEV_ERROR(panel->dev, "failed to set display off: %d\n",
			      err);

	innolux->enabled = false;

	return 0;

@@ -95,6 +89,11 @@ static int innolux_panel_unprepare(struct drm_panel *panel)
	if (!innolux->prepared)
		return 0;

	err = mipi_dsi_dcs_set_display_off(innolux->link);
	if (err < 0)
		DRM_DEV_ERROR(panel->dev, "failed to set display off: %d\n",
			      err);

	err = mipi_dsi_dcs_enter_sleep_mode(innolux->link);
	if (err < 0) {
		DRM_DEV_ERROR(panel->dev, "failed to enter sleep mode: %d\n",
@@ -1445,7 +1445,6 @@ static void ttm_bo_global_kobj_release(struct kobject *kobj)
		container_of(kobj, struct ttm_bo_global, kobj);

	__free_page(glob->dummy_read_page);
	kfree(glob);
}

void ttm_bo_global_release(struct drm_global_reference *ref)
@@ -216,14 +216,6 @@ static ssize_t ttm_mem_global_store(struct kobject *kobj,
	return size;
}

static void ttm_mem_global_kobj_release(struct kobject *kobj)
{
	struct ttm_mem_global *glob =
		container_of(kobj, struct ttm_mem_global, kobj);

	kfree(glob);
}

static struct attribute *ttm_mem_global_attrs[] = {
	&ttm_mem_global_lower_mem_limit,
	NULL

@@ -235,7 +227,6 @@ static const struct sysfs_ops ttm_mem_global_ops = {
};

static struct kobj_type ttm_mem_glob_kobj_type = {
	.release = &ttm_mem_global_kobj_release,
	.sysfs_ops = &ttm_mem_global_ops,
	.default_attrs = ttm_mem_global_attrs,
};
@@ -224,7 +224,7 @@ int udl_gem_mmap(struct drm_file *file, struct drm_device *dev,
	*offset = drm_vma_node_offset_addr(&gobj->base.vma_node);

out:
	drm_gem_object_put(&gobj->base);
	drm_gem_object_put_unlocked(&gobj->base);
unlock:
	mutex_unlock(&udl->gem_lock);
	return ret;
@@ -348,6 +348,7 @@
#define USB_DEVICE_ID_DMI_ENC 0x5fab

#define USB_VENDOR_ID_DRAGONRISE 0x0079
#define USB_DEVICE_ID_REDRAGON_SEYMUR2 0x0006
#define USB_DEVICE_ID_DRAGONRISE_WIIU 0x1800
#define USB_DEVICE_ID_DRAGONRISE_PS3 0x1801
#define USB_DEVICE_ID_DRAGONRISE_DOLPHINBAR 0x1803
@@ -70,6 +70,7 @@ static const struct hid_device_id hid_quirks[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC), HID_QUIRK_NOGET },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRACAL_RAPHNET, USB_DEVICE_ID_RAPHNET_2NES2SNES), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRACAL_RAPHNET, USB_DEVICE_ID_RAPHNET_4NES4SNES), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_REDRAGON_SEYMUR2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_DOLPHINBAR), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_GAMECUBE1), HID_QUIRK_MULTI_INPUT },
	{ HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_PS3), HID_QUIRK_MULTI_INPUT },
@@ -3,3 +3,6 @@
#

obj-$(CONFIG_I2C_HID) += i2c-hid.o

i2c-hid-objs = i2c-hid-core.o
i2c-hid-$(CONFIG_DMI) += i2c-hid-dmi-quirks.o
@ -43,6 +43,7 @@
|
|||
#include <linux/platform_data/i2c-hid.h>
|
||||
|
||||
#include "../hid-ids.h"
|
||||
#include "i2c-hid.h"
|
||||
|
||||
/* quirks to control the device */
|
||||
#define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV BIT(0)
|
||||
|
@ -687,6 +688,7 @@ static int i2c_hid_parse(struct hid_device *hid)
|
|||
char *rdesc;
|
||||
int ret;
|
||||
int tries = 3;
|
||||
char *use_override;
|
||||
|
||||
i2c_hid_dbg(ihid, "entering %s\n", __func__);
|
||||
|
||||
|
@@ -705,26 +707,37 @@ static int i2c_hid_parse(struct hid_device *hid)
 	if (ret)
 		return ret;
 
-	rdesc = kzalloc(rsize, GFP_KERNEL);
+	use_override = i2c_hid_get_dmi_hid_report_desc_override(client->name,
+								&rsize);
 
-	if (!rdesc) {
-		dbg_hid("couldn't allocate rdesc memory\n");
-		return -ENOMEM;
-	}
-
-	i2c_hid_dbg(ihid, "asking HID report descriptor\n");
-
-	ret = i2c_hid_command(client, &hid_report_descr_cmd, rdesc, rsize);
-	if (ret) {
-		hid_err(hid, "reading report descriptor failed\n");
-		kfree(rdesc);
-		return -EIO;
+	if (use_override) {
+		rdesc = use_override;
+		i2c_hid_dbg(ihid, "Using a HID report descriptor override\n");
+	} else {
+		rdesc = kzalloc(rsize, GFP_KERNEL);
+
+		if (!rdesc) {
+			dbg_hid("couldn't allocate rdesc memory\n");
+			return -ENOMEM;
+		}
+
+		i2c_hid_dbg(ihid, "asking HID report descriptor\n");
+
+		ret = i2c_hid_command(client, &hid_report_descr_cmd,
+				      rdesc, rsize);
+		if (ret) {
+			hid_err(hid, "reading report descriptor failed\n");
+			kfree(rdesc);
+			return -EIO;
+		}
 	}
 
 	i2c_hid_dbg(ihid, "Report Descriptor: %*ph\n", rsize, rdesc);
 
 	ret = hid_parse_report(hid, rdesc, rsize);
-	kfree(rdesc);
+
+	if (!use_override)
+		kfree(rdesc);
+
 	if (ret) {
 		dbg_hid("parsing report descriptor failed\n");
 		return ret;
@@ -851,12 +864,19 @@ static int i2c_hid_fetch_hid_descriptor(struct i2c_hid *ihid)
 	int ret;
 
 	/* i2c hid fetch using a fixed descriptor size (30 bytes) */
-	i2c_hid_dbg(ihid, "Fetching the HID descriptor\n");
-	ret = i2c_hid_command(client, &hid_descr_cmd, ihid->hdesc_buffer,
-			      sizeof(struct i2c_hid_desc));
-	if (ret) {
-		dev_err(&client->dev, "hid_descr_cmd failed\n");
-		return -ENODEV;
+	if (i2c_hid_get_dmi_i2c_hid_desc_override(client->name)) {
+		i2c_hid_dbg(ihid, "Using a HID descriptor override\n");
+		ihid->hdesc =
+			*i2c_hid_get_dmi_i2c_hid_desc_override(client->name);
+	} else {
+		i2c_hid_dbg(ihid, "Fetching the HID descriptor\n");
+		ret = i2c_hid_command(client, &hid_descr_cmd,
+				      ihid->hdesc_buffer,
+				      sizeof(struct i2c_hid_desc));
+		if (ret) {
+			dev_err(&client->dev, "hid_descr_cmd failed\n");
+			return -ENODEV;
+		}
 	}
 
 	/* Validate the length of HID descriptor, the 4 first bytes:
new file: drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c (376 lines)
@@ -0,0 +1,376 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+/*
+ * Quirks for I2C-HID devices that do not supply proper descriptors
+ *
+ * Copyright (c) 2018 Julian Sax <jsbc@gmx.de>
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/dmi.h>
+#include <linux/mod_devicetable.h>
+
+#include "i2c-hid.h"
+
+
+struct i2c_hid_desc_override {
+	union {
+		struct i2c_hid_desc *i2c_hid_desc;
+		uint8_t *i2c_hid_desc_buffer;
+	};
+	uint8_t *hid_report_desc;
+	unsigned int hid_report_desc_size;
+	uint8_t *i2c_name;
+};
+
+
+/*
+ * descriptors for the SIPODEV SP1064 touchpad
+ *
+ * This device does not supply any descriptors and on windows a filter
+ * driver operates between the i2c-hid layer and the device and injects
+ * these descriptors when the device is prompted. The descriptors were
+ * extracted by listening to the i2c-hid traffic that occurs between the
+ * windows filter driver and the windows i2c-hid driver.
+ */
+
+static const struct i2c_hid_desc_override sipodev_desc = {
+	.i2c_hid_desc_buffer = (uint8_t [])
+	{0x1e, 0x00,                  /* Length of descriptor          */
+	 0x00, 0x01,                  /* Version of descriptor         */
+	 0xdb, 0x01,                  /* Length of report descriptor   */
+	 0x21, 0x00,                  /* Location of report descriptor */
+	 0x24, 0x00,                  /* Location of input report      */
+	 0x1b, 0x00,                  /* Max input report length       */
+	 0x25, 0x00,                  /* Location of output report     */
+	 0x11, 0x00,                  /* Max output report length      */
+	 0x22, 0x00,                  /* Location of command register  */
+	 0x23, 0x00,                  /* Location of data register     */
+	 0x11, 0x09,                  /* Vendor ID                     */
+	 0x88, 0x52,                  /* Product ID                    */
+	 0x06, 0x00,                  /* Version ID                    */
+	 0x00, 0x00, 0x00, 0x00      /* Reserved                      */
+	},
+
+	.hid_report_desc = (uint8_t [])
+	{0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x09, 0x02,                  /* Usage (Mouse),                      */
+	 0xA1, 0x01,                  /* Collection (Application),           */
+	 0x85, 0x01,                  /* Report ID (1),                      */
+	 0x09, 0x01,                  /* Usage (Pointer),                    */
+	 0xA1, 0x00,                  /* Collection (Physical),              */
+	 0x05, 0x09,                  /* Usage Page (Button),                */
+	 0x19, 0x01,                  /* Usage Minimum (01h),                */
+	 0x29, 0x02,                  /* Usage Maximum (02h),                */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x06,                  /* Report Count (6),                   */
+	 0x81, 0x01,                  /* Input (Constant),                   */
+	 0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x09, 0x30,                  /* Usage (X),                          */
+	 0x09, 0x31,                  /* Usage (Y),                          */
+	 0x15, 0x81,                  /* Logical Minimum (-127),             */
+	 0x25, 0x7F,                  /* Logical Maximum (127),              */
+	 0x75, 0x08,                  /* Report Size (8),                    */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x81, 0x06,                  /* Input (Variable, Relative),         */
+	 0xC0,                        /* End Collection,                     */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x05,                  /* Usage (Touchpad),                   */
+	 0xA1, 0x01,                  /* Collection (Application),           */
+	 0x85, 0x04,                  /* Report ID (4),                      */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x02,                  /* Collection (Logical),               */
+	 0x15, 0x00,                  /* Logical Minimum (0),                */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x09, 0x47,                  /* Usage (Touch Valid),                */
+	 0x09, 0x42,                  /* Usage (Tip Switch),                 */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x75, 0x03,                  /* Report Size (3),                    */
+	 0x25, 0x05,                  /* Logical Maximum (5),                */
+	 0x09, 0x51,                  /* Usage (Contact Identifier),         */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x03,                  /* Report Count (3),                   */
+	 0x81, 0x03,                  /* Input (Constant, Variable),         */
+	 0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x26, 0x44, 0x0A,            /* Logical Maximum (2628),             */
+	 0x75, 0x10,                  /* Report Size (16),                   */
+	 0x55, 0x0E,                  /* Unit Exponent (14),                 */
+	 0x65, 0x11,                  /* Unit (Centimeter),                  */
+	 0x09, 0x30,                  /* Usage (X),                          */
+	 0x46, 0x1A, 0x04,            /* Physical Maximum (1050),            */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x46, 0xBC, 0x02,            /* Physical Maximum (700),             */
+	 0x26, 0x34, 0x05,            /* Logical Maximum (1332),             */
+	 0x09, 0x31,                  /* Usage (Y),                          */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x02,                  /* Collection (Logical),               */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x09, 0x47,                  /* Usage (Touch Valid),                */
+	 0x09, 0x42,                  /* Usage (Tip Switch),                 */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x75, 0x03,                  /* Report Size (3),                    */
+	 0x25, 0x05,                  /* Logical Maximum (5),                */
+	 0x09, 0x51,                  /* Usage (Contact Identifier),         */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x03,                  /* Report Count (3),                   */
+	 0x81, 0x03,                  /* Input (Constant, Variable),         */
+	 0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x26, 0x44, 0x0A,            /* Logical Maximum (2628),             */
+	 0x75, 0x10,                  /* Report Size (16),                   */
+	 0x09, 0x30,                  /* Usage (X),                          */
+	 0x46, 0x1A, 0x04,            /* Physical Maximum (1050),            */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x46, 0xBC, 0x02,            /* Physical Maximum (700),             */
+	 0x26, 0x34, 0x05,            /* Logical Maximum (1332),             */
+	 0x09, 0x31,                  /* Usage (Y),                          */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x02,                  /* Collection (Logical),               */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x09, 0x47,                  /* Usage (Touch Valid),                */
+	 0x09, 0x42,                  /* Usage (Tip Switch),                 */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x75, 0x03,                  /* Report Size (3),                    */
+	 0x25, 0x05,                  /* Logical Maximum (5),                */
+	 0x09, 0x51,                  /* Usage (Contact Identifier),         */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x03,                  /* Report Count (3),                   */
+	 0x81, 0x03,                  /* Input (Constant, Variable),         */
+	 0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x26, 0x44, 0x0A,            /* Logical Maximum (2628),             */
+	 0x75, 0x10,                  /* Report Size (16),                   */
+	 0x09, 0x30,                  /* Usage (X),                          */
+	 0x46, 0x1A, 0x04,            /* Physical Maximum (1050),            */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x46, 0xBC, 0x02,            /* Physical Maximum (700),             */
+	 0x26, 0x34, 0x05,            /* Logical Maximum (1332),             */
+	 0x09, 0x31,                  /* Usage (Y),                          */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x02,                  /* Collection (Logical),               */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x09, 0x47,                  /* Usage (Touch Valid),                */
+	 0x09, 0x42,                  /* Usage (Tip Switch),                 */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x75, 0x03,                  /* Report Size (3),                    */
+	 0x25, 0x05,                  /* Logical Maximum (5),                */
+	 0x09, 0x51,                  /* Usage (Contact Identifier),         */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x03,                  /* Report Count (3),                   */
+	 0x81, 0x03,                  /* Input (Constant, Variable),         */
+	 0x05, 0x01,                  /* Usage Page (Desktop),               */
+	 0x26, 0x44, 0x0A,            /* Logical Maximum (2628),             */
+	 0x75, 0x10,                  /* Report Size (16),                   */
+	 0x09, 0x30,                  /* Usage (X),                          */
+	 0x46, 0x1A, 0x04,            /* Physical Maximum (1050),            */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x46, 0xBC, 0x02,            /* Physical Maximum (700),             */
+	 0x26, 0x34, 0x05,            /* Logical Maximum (1332),             */
+	 0x09, 0x31,                  /* Usage (Y),                          */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x55, 0x0C,                  /* Unit Exponent (12),                 */
+	 0x66, 0x01, 0x10,            /* Unit (Seconds),                     */
+	 0x47, 0xFF, 0xFF, 0x00, 0x00,/* Physical Maximum (65535),           */
+	 0x27, 0xFF, 0xFF, 0x00, 0x00,/* Logical Maximum (65535),            */
+	 0x75, 0x10,                  /* Report Size (16),                   */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x09, 0x56,                  /* Usage (Scan Time),                  */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x09, 0x54,                  /* Usage (Contact Count),              */
+	 0x25, 0x7F,                  /* Logical Maximum (127),              */
+	 0x75, 0x08,                  /* Report Size (8),                    */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x05, 0x09,                  /* Usage Page (Button),                */
+	 0x09, 0x01,                  /* Usage (01h),                        */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x81, 0x02,                  /* Input (Variable),                   */
+	 0x95, 0x07,                  /* Report Count (7),                   */
+	 0x81, 0x03,                  /* Input (Constant, Variable),         */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x85, 0x02,                  /* Report ID (2),                      */
+	 0x09, 0x55,                  /* Usage (Contact Count Maximum),      */
+	 0x09, 0x59,                  /* Usage (59h),                        */
+	 0x75, 0x04,                  /* Report Size (4),                    */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x25, 0x0F,                  /* Logical Maximum (15),               */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x85, 0x07,                  /* Report ID (7),                      */
+	 0x09, 0x60,                  /* Usage (60h),                        */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0x95, 0x07,                  /* Report Count (7),                   */
+	 0xB1, 0x03,                  /* Feature (Constant, Variable),       */
+	 0x85, 0x06,                  /* Report ID (6),                      */
+	 0x06, 0x00, 0xFF,            /* Usage Page (FF00h),                 */
+	 0x09, 0xC5,                  /* Usage (C5h),                        */
+	 0x26, 0xFF, 0x00,            /* Logical Maximum (255),              */
+	 0x75, 0x08,                  /* Report Size (8),                    */
+	 0x96, 0x00, 0x01,            /* Report Count (256),                 */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0xC0,                        /* End Collection,                     */
+	 0x06, 0x00, 0xFF,            /* Usage Page (FF00h),                 */
+	 0x09, 0x01,                  /* Usage (01h),                        */
+	 0xA1, 0x01,                  /* Collection (Application),           */
+	 0x85, 0x0D,                  /* Report ID (13),                     */
+	 0x26, 0xFF, 0x00,            /* Logical Maximum (255),              */
+	 0x19, 0x01,                  /* Usage Minimum (01h),                */
+	 0x29, 0x02,                  /* Usage Maximum (02h),                */
+	 0x75, 0x08,                  /* Report Size (8),                    */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0xC0,                        /* End Collection,                     */
+	 0x05, 0x0D,                  /* Usage Page (Digitizer),             */
+	 0x09, 0x0E,                  /* Usage (Configuration),              */
+	 0xA1, 0x01,                  /* Collection (Application),           */
+	 0x85, 0x03,                  /* Report ID (3),                      */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x02,                  /* Collection (Logical),               */
+	 0x09, 0x52,                  /* Usage (Device Mode),                */
+	 0x25, 0x0A,                  /* Logical Maximum (10),               */
+	 0x95, 0x01,                  /* Report Count (1),                   */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0xC0,                        /* End Collection,                     */
+	 0x09, 0x22,                  /* Usage (Finger),                     */
+	 0xA1, 0x00,                  /* Collection (Physical),              */
+	 0x85, 0x05,                  /* Report ID (5),                      */
+	 0x09, 0x57,                  /* Usage (57h),                        */
+	 0x09, 0x58,                  /* Usage (58h),                        */
+	 0x75, 0x01,                  /* Report Size (1),                    */
+	 0x95, 0x02,                  /* Report Count (2),                   */
+	 0x25, 0x01,                  /* Logical Maximum (1),                */
+	 0xB1, 0x02,                  /* Feature (Variable),                 */
+	 0x95, 0x06,                  /* Report Count (6),                   */
+	 0xB1, 0x03,                  /* Feature (Constant, Variable),       */
+	 0xC0,                        /* End Collection,                     */
+	 0xC0                         /* End Collection                      */
+	},
+	.hid_report_desc_size = 475,
+	.i2c_name = "SYNA3602:00"
+};
+
+
+static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+	{
+		.ident = "Teclast F6 Pro",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TECLAST"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "F6 Pro"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
+		.ident = "Teclast F7",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TECLAST"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "F7"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
+		.ident = "Trekstor Primebook C13",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Primebook C13"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
+		.ident = "Trekstor Primebook C11",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Primebook C11"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
+		.ident = "Direkt-Tek DTLAPY116-2",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Direkt-Tek"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "DTLAPY116-2"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
+		.ident = "Mediacom Flexbook Edge 11",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "MEDIACOM"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "FlexBook edge11 - M-FBE11"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	}
+};
+
+
+struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
+{
+	struct i2c_hid_desc_override *override;
+	const struct dmi_system_id *system_id;
+
+	system_id = dmi_first_match(i2c_hid_dmi_desc_override_table);
+	if (!system_id)
+		return NULL;
+
+	override = system_id->driver_data;
+	if (strcmp(override->i2c_name, i2c_name))
+		return NULL;
+
+	return override->i2c_hid_desc;
+}
+
+char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+					       unsigned int *size)
+{
+	struct i2c_hid_desc_override *override;
+	const struct dmi_system_id *system_id;
+
+	system_id = dmi_first_match(i2c_hid_dmi_desc_override_table);
+	if (!system_id)
+		return NULL;
+
+	override = system_id->driver_data;
+	if (strcmp(override->i2c_name, i2c_name))
+		return NULL;
+
+	*size = override->hid_report_desc_size;
+	return override->hid_report_desc;
+}
new file: drivers/hid/i2c-hid/i2c-hid.h (20 lines)
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef I2C_HID_H
+#define I2C_HID_H
+
+
+#ifdef CONFIG_DMI
+struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name);
+char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+					       unsigned int *size);
+#else
+static inline struct i2c_hid_desc
+		   *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
+{ return NULL; }
+static inline char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
+							     unsigned int *size)
+{ return NULL; }
+#endif
+
+#endif
@@ -668,6 +668,10 @@ static const struct amba_id debug_ids[] = {
 		.id	= 0x000bbd08,
 		.mask	= 0x000fffff,
 	},
+	{       /* Debug for Cortex-A73 */
+		.id	= 0x000bbd09,
+		.mask	= 0x000fffff,
+	},
 	{ 0, 0 },
 };
 
@@ -784,7 +784,7 @@ void notify_error_qp(struct rvt_qp *qp)
 	write_seqlock(lock);
 	if (!list_empty(&priv->s_iowait.list) &&
 	    !(qp->s_flags & RVT_S_BUSY)) {
-		qp->s_flags &= ~RVT_S_ANY_WAIT_IO;
+		qp->s_flags &= ~HFI1_S_ANY_WAIT_IO;
 		list_del_init(&priv->s_iowait.list);
 		priv->s_iowait.lock = NULL;
 		rvt_put_qp(qp);
@@ -173,7 +173,12 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
 
 		rcu_read_lock();
 		in = __in_dev_get_rcu(upper_dev);
-		local_ipaddr = ntohl(in->ifa_list->ifa_address);
+
+		if (!in->ifa_list)
+			local_ipaddr = 0;
+		else
+			local_ipaddr = ntohl(in->ifa_list->ifa_address);
+
 		rcu_read_unlock();
 	} else {
 		local_ipaddr = ntohl(ifa->ifa_address);
@@ -185,6 +190,11 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
 	case NETDEV_UP:
 		/* Fall through */
 	case NETDEV_CHANGEADDR:
+
+		/* Just skip if no need to handle ARP cache */
+		if (!local_ipaddr)
+			break;
+
 		i40iw_manage_arp_cache(iwdev,
 				       netdev->dev_addr,
 				       &local_ipaddr,
@@ -804,8 +804,8 @@ void mlx4_ib_destroy_alias_guid_service(struct mlx4_ib_dev *dev)
 	unsigned long flags;
 
 	for (i = 0 ; i < dev->num_ports; i++) {
-		cancel_delayed_work(&dev->sriov.alias_guid.ports_guid[i].alias_guid_work);
 		det = &sriov->alias_guid.ports_guid[i];
+		cancel_delayed_work_sync(&det->alias_guid_work);
 		spin_lock_irqsave(&sriov->alias_guid.ag_work_lock, flags);
 		while (!list_empty(&det->cb_list)) {
 			cb_ctx = list_entry(det->cb_list.next,
@@ -144,7 +144,7 @@ dmar_alloc_pci_notify_info(struct pci_dev *dev, unsigned long event)
 	for (tmp = dev; tmp; tmp = tmp->bus->self)
 		level++;
 
-	size = sizeof(*info) + level * sizeof(struct acpi_dmar_pci_path);
+	size = sizeof(*info) + level * sizeof(info->path[0]);
 	if (size <= sizeof(dmar_pci_notify_info_buf)) {
 		info = (struct dmar_pci_notify_info *)dmar_pci_notify_info_buf;
 	} else {
@@ -1624,6 +1624,9 @@ static void iommu_disable_protect_mem_regions(struct intel_iommu *iommu)
 	u32 pmen;
 	unsigned long flags;
 
+	if (!cap_plmr(iommu->cap) && !cap_phmr(iommu->cap))
+		return;
+
 	raw_spin_lock_irqsave(&iommu->register_lock, flags);
 	pmen = readl(iommu->reg + DMAR_PMEN_REG);
 	pmen &= ~DMA_PMEN_EPM;
@@ -161,6 +161,9 @@ static void mbigen_write_msg(struct msi_desc *desc, struct msi_msg *msg)
 	void __iomem *base = d->chip_data;
 	u32 val;
 
+	if (!msg->address_lo && !msg->address_hi)
+		return;
+
 	base += get_mbigen_vec_reg(d->hwirq);
 	val = readl_relaxed(base);
 
@@ -650,11 +650,6 @@ stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
 	 */
 	writel_relaxed(0, base + stm32_bank->imr_ofst);
 	writel_relaxed(0, base + stm32_bank->emr_ofst);
-	writel_relaxed(0, base + stm32_bank->rtsr_ofst);
-	writel_relaxed(0, base + stm32_bank->ftsr_ofst);
-	writel_relaxed(~0UL, base + stm32_bank->rpr_ofst);
-	if (stm32_bank->fpr_ofst != UNDEF_REG)
-		writel_relaxed(~0UL, base + stm32_bank->fpr_ofst);
 
 	pr_info("%s: bank%d, External IRQs available:%#x\n",
 		node->full_name, bank_idx, irqs_mask);
@@ -629,7 +629,6 @@ static int au0828_usb_probe(struct usb_interface *interface,
 		pr_err("%s() au0282_dev_register failed to register on V4L2\n",
 		       __func__);
 		mutex_unlock(&dev->lock);
-		kfree(dev);
 		goto done;
 	}
 
@@ -152,7 +152,9 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(EXEC_VMALLOC),
 	CRASHTYPE(EXEC_RODATA),
 	CRASHTYPE(EXEC_USERSPACE),
+	CRASHTYPE(EXEC_NULL),
 	CRASHTYPE(ACCESS_USERSPACE),
+	CRASHTYPE(ACCESS_NULL),
 	CRASHTYPE(WRITE_RO),
 	CRASHTYPE(WRITE_RO_AFTER_INIT),
 	CRASHTYPE(WRITE_KERN),
@@ -45,7 +45,9 @@ void lkdtm_EXEC_KMALLOC(void);
 void lkdtm_EXEC_VMALLOC(void);
 void lkdtm_EXEC_RODATA(void);
 void lkdtm_EXEC_USERSPACE(void);
+void lkdtm_EXEC_NULL(void);
 void lkdtm_ACCESS_USERSPACE(void);
+void lkdtm_ACCESS_NULL(void);
 
 /* lkdtm_refcount.c */
 void lkdtm_REFCOUNT_INC_OVERFLOW(void);
@@ -47,7 +47,7 @@ static noinline void execute_location(void *dst, bool write)
 {
 	void (*func)(void) = dst;
 
-	pr_info("attempting ok execution at %p\n", do_nothing);
+	pr_info("attempting ok execution at %px\n", do_nothing);
 	do_nothing();
 
 	if (write == CODE_WRITE) {
@@ -55,7 +55,7 @@ static noinline void execute_location(void *dst, bool write)
 		flush_icache_range((unsigned long)dst,
 				   (unsigned long)dst + EXEC_SIZE);
 	}
-	pr_info("attempting bad execution at %p\n", func);
+	pr_info("attempting bad execution at %px\n", func);
 	func();
 }
 
@@ -66,14 +66,14 @@ static void execute_user_location(void *dst)
 	/* Intentionally crossing kernel/user memory boundary. */
 	void (*func)(void) = dst;
 
-	pr_info("attempting ok execution at %p\n", do_nothing);
+	pr_info("attempting ok execution at %px\n", do_nothing);
 	do_nothing();
 
 	copied = access_process_vm(current, (unsigned long)dst, do_nothing,
 				   EXEC_SIZE, FOLL_WRITE);
 	if (copied < EXEC_SIZE)
 		return;
-	pr_info("attempting bad execution at %p\n", func);
+	pr_info("attempting bad execution at %px\n", func);
 	func();
 }
 
@@ -82,7 +82,7 @@ void lkdtm_WRITE_RO(void)
 	/* Explicitly cast away "const" for the test. */
 	unsigned long *ptr = (unsigned long *)&rodata;
 
-	pr_info("attempting bad rodata write at %p\n", ptr);
+	pr_info("attempting bad rodata write at %px\n", ptr);
 	*ptr ^= 0xabcd1234;
 }
 
@@ -100,7 +100,7 @@ void lkdtm_WRITE_RO_AFTER_INIT(void)
 		return;
 	}
 
-	pr_info("attempting bad ro_after_init write at %p\n", ptr);
+	pr_info("attempting bad ro_after_init write at %px\n", ptr);
 	*ptr ^= 0xabcd1234;
 }
 
@@ -112,7 +112,7 @@ void lkdtm_WRITE_KERN(void)
 	size = (unsigned long)do_overwritten - (unsigned long)do_nothing;
 	ptr = (unsigned char *)do_overwritten;
 
-	pr_info("attempting bad %zu byte write at %p\n", size, ptr);
+	pr_info("attempting bad %zu byte write at %px\n", size, ptr);
 	memcpy(ptr, (unsigned char *)do_nothing, size);
 	flush_icache_range((unsigned long)ptr, (unsigned long)(ptr + size));
 
@@ -164,6 +164,11 @@ void lkdtm_EXEC_USERSPACE(void)
 	vm_munmap(user_addr, PAGE_SIZE);
 }
 
+void lkdtm_EXEC_NULL(void)
+{
+	execute_location(NULL, CODE_AS_IS);
+}
+
 void lkdtm_ACCESS_USERSPACE(void)
 {
 	unsigned long user_addr, tmp = 0;
@@ -185,16 +190,29 @@ void lkdtm_ACCESS_USERSPACE(void)
 
 	ptr = (unsigned long *)user_addr;
 
-	pr_info("attempting bad read at %p\n", ptr);
+	pr_info("attempting bad read at %px\n", ptr);
 	tmp = *ptr;
 	tmp += 0xc0dec0de;
 
-	pr_info("attempting bad write at %p\n", ptr);
+	pr_info("attempting bad write at %px\n", ptr);
 	*ptr = tmp;
 
 	vm_munmap(user_addr, PAGE_SIZE);
 }
 
+void lkdtm_ACCESS_NULL(void)
+{
+	unsigned long tmp;
+	unsigned long *ptr = (unsigned long *)NULL;
+
+	pr_info("attempting bad read at %px\n", ptr);
+	tmp = *ptr;
+	tmp += 0xc0dec0de;
+
+	pr_info("attempting bad write at %px\n", ptr);
+	*ptr = tmp;
+}
+
 void __init lkdtm_perms_init(void)
 {
 	/* Make sure we can write to __ro_after_init values during __init */
@@ -1117,7 +1117,7 @@ static inline void mmc_davinci_cpufreq_deregister(struct mmc_davinci_host *host)
 {
 }
 #endif
-static void __init init_mmcsd_host(struct mmc_davinci_host *host)
+static void init_mmcsd_host(struct mmc_davinci_host *host)
 {
 	mmc_davinci_reset_ctrl(host, 1);
 
@@ -59,7 +59,7 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
 
 		desc->des3 = cpu_to_le32(des2 + BUF_SIZE_4KiB);
 		stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum,
-				STMMAC_RING_MODE, 0, false, skb->len);
+				STMMAC_RING_MODE, 1, false, skb->len);
 		tx_q->tx_skbuff[entry] = NULL;
 		entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
 
@@ -91,7 +91,7 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
 		tx_q->tx_skbuff_dma[entry].is_jumbo = true;
 		desc->des3 = cpu_to_le32(des2 + BUF_SIZE_4KiB);
 		stmmac_prepare_tx_desc(priv, desc, 1, nopaged_len, csum,
-				STMMAC_RING_MODE, 0, true, skb->len);
+				STMMAC_RING_MODE, 1, true, skb->len);
 	}
 
 	tx_q->cur_tx = entry;
@@ -75,7 +75,6 @@ static inline int rsi_kill_thread(struct rsi_thread *handle)
 	atomic_inc(&handle->thread_done);
 	rsi_set_event(&handle->event);
 
-	wait_for_completion(&handle->completion);
 	return kthread_stop(handle->task);
 }
 
@@ -2489,6 +2489,25 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev)
 		pm_runtime_put_sync(parent);
 }
 
+static const struct dmi_system_id bridge_d3_blacklist[] = {
+#ifdef CONFIG_X86
+	{
+		/*
+		 * Gigabyte X299 root port is not marked as hotplug capable
+		 * which allows Linux to power manage it. However, this
+		 * confuses the BIOS SMI handler so don't power manage root
+		 * ports on that system.
+		 */
+		.ident = "X299 DESIGNARE EX-CF",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+			DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+		},
+	},
+#endif
+	{ }
+};
+
 /**
  * pci_bridge_d3_possible - Is it possible to put the bridge into D3
  * @bridge: Bridge to check
@@ -2530,6 +2549,9 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge)
 		if (bridge->is_hotplug_bridge)
 			return false;
 
+		if (dmi_check_system(bridge_d3_blacklist))
+			return false;
+
 		/*
 		 * It should be safe to put PCIe ports from 2015 or newer
 		 * to D3.
@@ -627,7 +627,7 @@ static int pinctrl_generic_group_name_to_selector(struct pinctrl_dev *pctldev,
 	while (selector < ngroups) {
 		const char *gname = ops->get_group_name(pctldev, selector);
 
-		if (!strcmp(function, gname))
+		if (gname && !strcmp(function, gname))
 			return selector;
 
 		selector++;
@@ -743,7 +743,7 @@ int pinctrl_get_group_selector(struct pinctrl_dev *pctldev,
 	while (group_selector < ngroups) {
 		const char *gname = pctlops->get_group_name(pctldev,
 							    group_selector);
-		if (!strcmp(gname, pin_group)) {
+		if (gname && !strcmp(gname, pin_group)) {
 			dev_dbg(pctldev->dev,
 				"found group selector %u for %s\n",
 				group_selector,
@@ -1231,6 +1231,18 @@ config I2C_MULTI_INSTANTIATE
 	  To compile this driver as a module, choose M here: the module
 	  will be called i2c-multi-instantiate.
 
+config INTEL_ATOMISP2_PM
+	tristate "Intel AtomISP2 dummy / power-management driver"
+	depends on PCI && IOSF_MBI && PM
+	help
+	  Power-management driver for Intel's Image Signal Processor found on
+	  Bay and Cherry Trail devices. This dummy driver's sole purpose is to
+	  turn the ISP off (put it in D3) to save power and to allow entering
+	  of S0ix modes.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called intel_atomisp2_pm.
+
 endif # X86_PLATFORM_DEVICES
 
 config PMC_ATOM
@@ -92,3 +92,4 @@ obj-$(CONFIG_MLX_PLATFORM) += mlx-platform.o
 obj-$(CONFIG_INTEL_TURBO_MAX_3)		+= intel_turbo_max_3.o
 obj-$(CONFIG_INTEL_CHTDC_TI_PWRBTN)	+= intel_chtdc_ti_pwrbtn.o
 obj-$(CONFIG_I2C_MULTI_INSTANTIATE)	+= i2c-multi-instantiate.o
+obj-$(CONFIG_INTEL_ATOMISP2_PM)		+= intel_atomisp2_pm.o
119
drivers/platform/x86/intel_atomisp2_pm.c
Normal file
119
drivers/platform/x86/intel_atomisp2_pm.c
Normal file
|
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Dummy driver for Intel's Image Signal Processor found on Bay and Cherry
+ * Trail devices. The sole purpose of this driver is to allow the ISP to
+ * be put in D3.
+ *
+ * Copyright (C) 2018 Hans de Goede <hdegoede@redhat.com>
+ *
+ * Based on various non upstream patches for ISP support:
+ * Copyright (C) 2010-2017 Intel Corporation. All rights reserved.
+ * Copyright (c) 2010 Silicon Hive www.siliconhive.com.
+ */
+
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/mod_devicetable.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <asm/iosf_mbi.h>
+
+/* PCI configuration regs */
+#define PCI_INTERRUPT_CTRL		0x9c
+
+#define PCI_CSI_CONTROL			0xe8
+#define PCI_CSI_CONTROL_PORTS_OFF_MASK	0x7
+
+/* IOSF BT_MBI_UNIT_PMC regs */
+#define ISPSSPM0			0x39
+#define ISPSSPM0_ISPSSC_OFFSET		0
+#define ISPSSPM0_ISPSSC_MASK		0x00000003
+#define ISPSSPM0_ISPSSS_OFFSET		24
+#define ISPSSPM0_ISPSSS_MASK		0x03000000
+#define ISPSSPM0_IUNIT_POWER_ON		0x0
+#define ISPSSPM0_IUNIT_POWER_OFF	0x3
+
+static int isp_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+	unsigned long timeout;
+	u32 val;
+
+	pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, 0);
+
+	/*
+	 * MRFLD IUNIT DPHY is located in an always-power-on island
+	 * MRFLD HW design need all CSI ports are disabled before
+	 * powering down the IUNIT.
+	 */
+	pci_read_config_dword(dev, PCI_CSI_CONTROL, &val);
+	val |= PCI_CSI_CONTROL_PORTS_OFF_MASK;
+	pci_write_config_dword(dev, PCI_CSI_CONTROL, val);
+
+	/* Write 0x3 to ISPSSPM0 bit[1:0] to power off the IUNIT */
+	iosf_mbi_modify(BT_MBI_UNIT_PMC, MBI_REG_READ, ISPSSPM0,
+			ISPSSPM0_IUNIT_POWER_OFF, ISPSSPM0_ISPSSC_MASK);
+
+	/*
+	 * There should be no IUNIT access while power-down is
+	 * in progress HW sighting: 4567865
+	 * Wait up to 50 ms for the IUNIT to shut down.
+	 */
+	timeout = jiffies + msecs_to_jiffies(50);
+	while (1) {
+		/* Wait until ISPSSPM0 bit[25:24] shows 0x3 */
+		iosf_mbi_read(BT_MBI_UNIT_PMC, MBI_REG_READ, ISPSSPM0, &val);
+		val = (val & ISPSSPM0_ISPSSS_MASK) >> ISPSSPM0_ISPSSS_OFFSET;
+		if (val == ISPSSPM0_IUNIT_POWER_OFF)
+			break;
+
+		if (time_after(jiffies, timeout)) {
+			dev_err(&dev->dev, "IUNIT power-off timeout.\n");
+			return -EBUSY;
+		}
+		usleep_range(1000, 2000);
+	}
+
+	pm_runtime_allow(&dev->dev);
+	pm_runtime_put_sync_suspend(&dev->dev);
+
+	return 0;
+}
+
+static void isp_remove(struct pci_dev *dev)
+{
+	pm_runtime_get_sync(&dev->dev);
+	pm_runtime_forbid(&dev->dev);
+}
+
+static int isp_pci_suspend(struct device *dev)
+{
+	return 0;
+}
+
+static int isp_pci_resume(struct device *dev)
+{
+	return 0;
+}
+
+static UNIVERSAL_DEV_PM_OPS(isp_pm_ops, isp_pci_suspend,
+			    isp_pci_resume, NULL);
+
+static const struct pci_device_id isp_id_table[] = {
+	{ PCI_VDEVICE(INTEL, 0x22b8), },
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, isp_id_table);
+
+static struct pci_driver isp_pci_driver = {
+	.name = "intel_atomisp2_pm",
+	.id_table = isp_id_table,
+	.probe = isp_probe,
+	.remove = isp_remove,
+	.driver.pm = &isp_pm_ops,
+};
+
+module_pci_driver(isp_pci_driver);
+
+MODULE_DESCRIPTION("Intel AtomISP2 dummy / power-management drv (for suspend)");
+MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>");
+MODULE_LICENSE("GPL v2");
@@ -3128,7 +3128,6 @@ void scsi_device_resume(struct scsi_device *sdev)
 	 * device deleted during suspend)
 	 */
 	mutex_lock(&sdev->state_mutex);
-	WARN_ON_ONCE(!sdev->quiesced_by);
 	sdev->quiesced_by = NULL;
 	blk_clear_preempt_only(sdev->request_queue);
 	if (sdev->sdev_state == SDEV_QUIESCE)
@@ -2185,6 +2185,8 @@ void iscsi_remove_session(struct iscsi_cls_session *session)
 	scsi_target_unblock(&session->dev, SDEV_TRANSPORT_OFFLINE);
 	/* flush running scans then delete devices */
 	flush_work(&session->scan_work);
+	/* flush running unbind operations */
+	flush_work(&session->unbind_work);
 	__iscsi_unbind_session(&session->unbind_work);
 
 	/* hw iscsi may not have removed all connections from session */
@@ -524,16 +524,10 @@ EXPORT_SYMBOL(tegra_powergate_power_off);
  */
 int tegra_powergate_is_powered(unsigned int id)
 {
-	int status;
-
 	if (!tegra_powergate_is_valid(id))
 		return -EINVAL;
 
-	mutex_lock(&pmc->powergates_lock);
-	status = tegra_powergate_state(id);
-	mutex_unlock(&pmc->powergates_lock);
-
-	return status;
+	return tegra_powergate_state(id);
 }
 
 /**
@@ -128,8 +128,7 @@ static const struct debugfs_reg32 bcm2835_thermal_regs[] = {
 
 static void bcm2835_thermal_debugfs(struct platform_device *pdev)
 {
-	struct thermal_zone_device *tz = platform_get_drvdata(pdev);
-	struct bcm2835_thermal_data *data = tz->devdata;
+	struct bcm2835_thermal_data *data = platform_get_drvdata(pdev);
 	struct debugfs_regset32 *regset;
 
 	data->debugfsdir = debugfs_create_dir("bcm2835_thermal", NULL);
@@ -275,7 +274,7 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
 
 	data->tz = tz;
 
-	platform_set_drvdata(pdev, tz);
+	platform_set_drvdata(pdev, data);
 
 	/*
 	 * Thermal_zone doesn't enable hwmon as default,
@@ -299,8 +298,8 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
 
 static int bcm2835_thermal_remove(struct platform_device *pdev)
 {
-	struct thermal_zone_device *tz = platform_get_drvdata(pdev);
-	struct bcm2835_thermal_data *data = tz->devdata;
+	struct bcm2835_thermal_data *data = platform_get_drvdata(pdev);
+	struct thermal_zone_device *tz = data->tz;
 
 	debugfs_remove_recursive(data->debugfsdir);
 	thermal_zone_of_sensor_unregister(&pdev->dev, tz);
@@ -22,6 +22,13 @@ enum int3400_thermal_uuid {
 	INT3400_THERMAL_PASSIVE_1,
 	INT3400_THERMAL_ACTIVE,
 	INT3400_THERMAL_CRITICAL,
+	INT3400_THERMAL_ADAPTIVE_PERFORMANCE,
+	INT3400_THERMAL_EMERGENCY_CALL_MODE,
+	INT3400_THERMAL_PASSIVE_2,
+	INT3400_THERMAL_POWER_BOSS,
+	INT3400_THERMAL_VIRTUAL_SENSOR,
+	INT3400_THERMAL_COOLING_MODE,
+	INT3400_THERMAL_HARDWARE_DUTY_CYCLING,
 	INT3400_THERMAL_MAXIMUM_UUID,
 };
 
@@ -29,6 +36,13 @@ static char *int3400_thermal_uuids[INT3400_THERMAL_MAXIMUM_UUID] = {
 	"42A441D6-AE6A-462b-A84B-4A8CE79027D3",
 	"3A95C389-E4B8-4629-A526-C52C88626BAE",
 	"97C68AE7-15FA-499c-B8C9-5DA81D606E0A",
+	"63BE270F-1C11-48FD-A6F7-3AF253FF3E2D",
+	"5349962F-71E6-431D-9AE8-0A635B710AEE",
+	"9E04115A-AE87-4D1C-9500-0F3E340BFE75",
+	"F5A35014-C209-46A4-993A-EB56DE7530A1",
+	"6ED722A7-9240-48A5-B479-31EEF723D7CF",
+	"16CAF1B7-DD38-40ED-B1C1-1B8A1913D531",
+	"BE84BABF-C4D4-403D-B495-3128FD44dAC1",
 };
 
 struct int3400_thermal_priv {
@@ -302,10 +316,9 @@ static int int3400_thermal_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, priv);
 
-	if (priv->uuid_bitmap & 1 << INT3400_THERMAL_PASSIVE_1) {
-		int3400_thermal_ops.get_mode = int3400_thermal_get_mode;
-		int3400_thermal_ops.set_mode = int3400_thermal_set_mode;
-	}
+	int3400_thermal_ops.get_mode = int3400_thermal_get_mode;
+	int3400_thermal_ops.set_mode = int3400_thermal_set_mode;
 
 	priv->thermal = thermal_zone_device_register("INT3400 Thermal", 0, 0,
 						priv, &int3400_thermal_ops,
 						&int3400_thermal_params, 0, 0);
@@ -101,7 +101,7 @@ struct powerclamp_worker_data {
 	bool clamping;
 };
 
-static struct powerclamp_worker_data * __percpu worker_data;
+static struct powerclamp_worker_data __percpu *worker_data;
 static struct thermal_cooling_device *cooling_dev;
 static unsigned long *cpu_clamping_mask;  /* bit map for tracking per cpu
 					   * clamping kthread worker
@@ -494,7 +494,7 @@ static void start_power_clamp_worker(unsigned long cpu)
 	struct powerclamp_worker_data *w_data = per_cpu_ptr(worker_data, cpu);
 	struct kthread_worker *worker;
 
-	worker = kthread_create_worker_on_cpu(cpu, 0, "kidle_inject/%ld", cpu);
+	worker = kthread_create_worker_on_cpu(cpu, 0, "kidle_inj/%ld", cpu);
 	if (IS_ERR(worker))
 		return;
 
@@ -666,7 +666,7 @@ static int exynos_get_temp(void *p, int *temp)
 	struct exynos_tmu_data *data = p;
 	int value, ret = 0;
 
-	if (!data || !data->tmu_read || !data->enabled)
+	if (!data || !data->tmu_read)
 		return -EINVAL;
 	else if (!data->enabled)
 		/*
@@ -1223,7 +1223,7 @@ static void cdns_uart_console_write(struct console *co, const char *s,
  *
  * Return: 0 on success, negative errno otherwise.
  */
-static int __init cdns_uart_console_setup(struct console *co, char *options)
+static int cdns_uart_console_setup(struct console *co, char *options)
 {
 	struct uart_port *port = console_port;
 
21	fs/9p/v9fs.c
@@ -61,6 +61,8 @@ enum {
 	Opt_cache_loose, Opt_fscache, Opt_mmap,
 	/* Access options */
 	Opt_access, Opt_posixacl,
+	/* Lock timeout option */
+	Opt_locktimeout,
 	/* Error token */
 	Opt_err
 };
@@ -80,6 +82,7 @@ static const match_table_t tokens = {
 	{Opt_cachetag, "cachetag=%s"},
 	{Opt_access, "access=%s"},
 	{Opt_posixacl, "posixacl"},
+	{Opt_locktimeout, "locktimeout=%u"},
 	{Opt_err, NULL}
 };
 
@@ -187,6 +190,7 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
 #ifdef CONFIG_9P_FSCACHE
 	v9ses->cachetag = NULL;
 #endif
+	v9ses->session_lock_timeout = P9_LOCK_TIMEOUT;
 
 	if (!opts)
 		return 0;
@@ -359,6 +363,23 @@ static int v9fs_parse_options(struct v9fs_session_info *v9ses, char *opts)
 #endif
 			break;
 
+		case Opt_locktimeout:
+			r = match_int(&args[0], &option);
+			if (r < 0) {
+				p9_debug(P9_DEBUG_ERROR,
+					 "integer field, but no integer?\n");
+				ret = r;
+				continue;
+			}
+			if (option < 1) {
+				p9_debug(P9_DEBUG_ERROR,
+					 "locktimeout must be a greater than zero integer.\n");
+				ret = -EINVAL;
+				continue;
+			}
+			v9ses->session_lock_timeout = (long)option * HZ;
+			break;
+
 		default:
 			continue;
 		}
@@ -116,6 +116,7 @@ struct v9fs_session_info {
 	struct p9_client *clnt;	/* 9p client */
 	struct list_head slist; /* list of sessions registered with v9fs */
 	struct rw_semaphore rename_sem;
+	long session_lock_timeout; /* retry interval for blocking locks */
 };
 
 /* cache_validity flags */
@@ -105,7 +105,6 @@ static int v9fs_dir_readdir(struct file *file, struct dir_context *ctx)
 	int err = 0;
 	struct p9_fid *fid;
 	int buflen;
-	int reclen = 0;
 	struct p9_rdir *rdir;
 	struct kvec kvec;
 
@@ -138,11 +137,10 @@ static int v9fs_dir_readdir(struct file *file, struct dir_context *ctx)
 		while (rdir->head < rdir->tail) {
 			err = p9stat_read(fid->clnt, rdir->buf + rdir->head,
 					  rdir->tail - rdir->head, &st);
-			if (err) {
+			if (err <= 0) {
 				p9_debug(P9_DEBUG_VFS, "returned %d\n", err);
 				return -EIO;
 			}
-			reclen = st.size+2;
 
 			over = !dir_emit(ctx, st.name, strlen(st.name),
 					 v9fs_qid2ino(&st.qid), dt_type(&st));
@@ -150,8 +148,8 @@ static int v9fs_dir_readdir(struct file *file, struct dir_context *ctx)
 			if (over)
 				return 0;
 
-			rdir->head += reclen;
-			ctx->pos += reclen;
+			rdir->head += err;
+			ctx->pos += err;
 		}
 	}
 }
@@ -154,6 +154,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
 	uint8_t status = P9_LOCK_ERROR;
 	int res = 0;
 	unsigned char fl_type;
+	struct v9fs_session_info *v9ses;
 
 	fid = filp->private_data;
 	BUG_ON(fid == NULL);
@@ -189,6 +190,8 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
 	if (IS_SETLKW(cmd))
 		flock.flags = P9_LOCK_FLAGS_BLOCK;
 
+	v9ses = v9fs_inode2v9ses(file_inode(filp));
+
 	/*
 	 * if its a blocked request and we get P9_LOCK_BLOCKED as the status
 	 * for lock request, keep on trying
@@ -202,7 +205,8 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
 			break;
 		if (status == P9_LOCK_BLOCKED && !IS_SETLKW(cmd))
 			break;
-		if (schedule_timeout_interruptible(P9_LOCK_TIMEOUT) != 0)
+		if (schedule_timeout_interruptible(v9ses->session_lock_timeout)
+				!= 0)
 			break;
 		/*
 		 * p9_client_lock_dotl overwrites flock.client_id with the
@@ -780,43 +780,50 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
 	} else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
 		   (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
 		== 0)) {
-			/*
-			 * For SMB2 and later the backup intent flag is already
-			 * sent if needed on open and there is no path based
-			 * FindFirst operation to use to retry with
-			 */
+		/*
+		 * For SMB2 and later the backup intent flag is already
+		 * sent if needed on open and there is no path based
+		 * FindFirst operation to use to retry with
+		 */
 
-			srchinf = kzalloc(sizeof(struct cifs_search_info),
-						GFP_KERNEL);
-			if (srchinf == NULL) {
-				rc = -ENOMEM;
-				goto cgii_exit;
-			}
+		srchinf = kzalloc(sizeof(struct cifs_search_info),
+				  GFP_KERNEL);
+		if (srchinf == NULL) {
+			rc = -ENOMEM;
+			goto cgii_exit;
+		}
 
-			srchinf->endOfSearch = false;
+		srchinf->endOfSearch = false;
+		if (tcon->unix_ext)
+			srchinf->info_level = SMB_FIND_FILE_UNIX;
+		else if ((tcon->ses->capabilities &
+			 tcon->ses->server->vals->cap_nt_find) == 0)
+			srchinf->info_level = SMB_FIND_FILE_INFO_STANDARD;
+		else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)
+			srchinf->info_level = SMB_FIND_FILE_ID_FULL_DIR_INFO;
+		else /* no srvino useful for fallback to some netapp */
+			srchinf->info_level = SMB_FIND_FILE_DIRECTORY_INFO;
 
-			srchflgs = CIFS_SEARCH_CLOSE_ALWAYS |
-					CIFS_SEARCH_CLOSE_AT_END |
-					CIFS_SEARCH_BACKUP_SEARCH;
+		srchflgs = CIFS_SEARCH_CLOSE_ALWAYS |
+			   CIFS_SEARCH_CLOSE_AT_END |
+			   CIFS_SEARCH_BACKUP_SEARCH;
 
-			rc = CIFSFindFirst(xid, tcon, full_path,
-				cifs_sb, NULL, srchflgs, srchinf, false);
-			if (!rc) {
-				data =
-				(FILE_ALL_INFO *)srchinf->srch_entries_start;
+		rc = CIFSFindFirst(xid, tcon, full_path,
+				   cifs_sb, NULL, srchflgs, srchinf, false);
+		if (!rc) {
+			data = (FILE_ALL_INFO *)srchinf->srch_entries_start;
 
-				cifs_dir_info_to_fattr(&fattr,
-				(FILE_DIRECTORY_INFO *)data, cifs_sb);
-				fattr.cf_uniqueid = le64_to_cpu(
-				((SEARCH_ID_FULL_DIR_INFO *)data)->UniqueId);
-				validinum = true;
+			cifs_dir_info_to_fattr(&fattr,
+					       (FILE_DIRECTORY_INFO *)data,
+					       cifs_sb);
+			fattr.cf_uniqueid = le64_to_cpu(
+				((SEARCH_ID_FULL_DIR_INFO *)data)->UniqueId);
+			validinum = true;
 
-				cifs_buf_release(srchinf->ntwrk_buf_start);
-			}
-			kfree(srchinf);
-			if (rc)
-				goto cgii_exit;
+			cifs_buf_release(srchinf->ntwrk_buf_start);
+		}
+		kfree(srchinf);
+		if (rc)
+			goto cgii_exit;
 	} else
 		goto cgii_exit;
 
@@ -1036,7 +1036,8 @@ static const struct status_to_posix_error smb2_error_map_table[] = {
 	{STATUS_UNFINISHED_CONTEXT_DELETED, -EIO,
 	"STATUS_UNFINISHED_CONTEXT_DELETED"},
 	{STATUS_NO_TGT_REPLY, -EIO, "STATUS_NO_TGT_REPLY"},
-	{STATUS_OBJECTID_NOT_FOUND, -EIO, "STATUS_OBJECTID_NOT_FOUND"},
+	/* Note that ENOATTTR and ENODATA are the same errno */
+	{STATUS_OBJECTID_NOT_FOUND, -ENODATA, "STATUS_OBJECTID_NOT_FOUND"},
 	{STATUS_NO_IP_ADDRESSES, -EIO, "STATUS_NO_IP_ADDRESSES"},
 	{STATUS_WRONG_CREDENTIAL_HANDLE, -EIO,
 	"STATUS_WRONG_CREDENTIAL_HANDLE"},
@@ -999,6 +999,13 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		if (!blk_queue_discard(q))
 			return -EOPNOTSUPP;
 
+		/*
+		 * We haven't replayed the journal, so we cannot use our
+		 * block-bitmap-guided storage zapping commands.
+		 */
+		if (test_opt(sb, NOLOAD) && ext4_has_feature_journal(sb))
+			return -EROFS;
+
 		if (copy_from_user(&range, (struct fstrim_range __user *)arg,
 		    sizeof(range)))
 			return -EFAULT;
@@ -932,11 +932,18 @@ static int add_new_gdb_meta_bg(struct super_block *sb,
 	memcpy(n_group_desc, o_group_desc,
 	       EXT4_SB(sb)->s_gdb_count * sizeof(struct buffer_head *));
 	n_group_desc[gdb_num] = gdb_bh;
+
+	BUFFER_TRACE(gdb_bh, "get_write_access");
+	err = ext4_journal_get_write_access(handle, gdb_bh);
+	if (err) {
+		kvfree(n_group_desc);
+		brelse(gdb_bh);
+		return err;
+	}
+
 	EXT4_SB(sb)->s_group_desc = n_group_desc;
 	EXT4_SB(sb)->s_gdb_count++;
 	kvfree(o_group_desc);
-	BUFFER_TRACE(gdb_bh, "get_write_access");
-	err = ext4_journal_get_write_access(handle, gdb_bh);
 	return err;
 }
 
@@ -2073,6 +2080,10 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
 	free_flex_gd(flex_gd);
 	if (resize_inode != NULL)
 		iput(resize_inode);
-	ext4_msg(sb, KERN_INFO, "resized filesystem to %llu", n_blocks_count);
+	if (err)
+		ext4_warning(sb, "error (%d) occurred during "
+			     "file system resize", err);
+	ext4_msg(sb, KERN_INFO, "resized filesystem to %llu",
+		 ext4_blocks_count(es));
 	return err;
 }
 
@@ -431,6 +431,12 @@ static void ext4_journal_commit_callback(journal_t *journal, transaction_t *txn)
 	spin_unlock(&sbi->s_md_lock);
 }
 
+static bool system_going_down(void)
+{
+	return system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF
+		|| system_state == SYSTEM_RESTART;
+}
+
 /* Deal with the reporting of failure conditions on a filesystem such as
  * inconsistencies detected or read IO failures.
  *
@@ -461,7 +467,12 @@ static void ext4_handle_error(struct super_block *sb)
 		if (journal)
 			jbd2_journal_abort(journal, -EIO);
 	}
-	if (test_opt(sb, ERRORS_RO)) {
+	/*
+	 * We force ERRORS_RO behavior when system is rebooting. Otherwise we
+	 * could panic during 'reboot -f' as the underlying device got already
+	 * disabled.
+	 */
+	if (test_opt(sb, ERRORS_RO) || system_going_down()) {
 		ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
 		/*
 		 * Make sure updated value of ->s_mount_flags will be visible
@@ -469,8 +480,7 @@ static void ext4_handle_error(struct super_block *sb)
 		 */
 		smp_wmb();
 		sb->s_flags |= SB_RDONLY;
-	}
-	if (test_opt(sb, ERRORS_PANIC)) {
+	} else if (test_opt(sb, ERRORS_PANIC)) {
 		if (EXT4_SB(sb)->s_journal &&
 		    !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
 			return;
@@ -519,8 +519,10 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
 	fsn_mark = fsnotify_find_mark(&inode->i_fsnotify_marks, group);
 	if (!fsn_mark)
 		return -ENOENT;
-	else if (create)
-		return -EEXIST;
+	else if (create) {
+		ret = -EEXIST;
+		goto out;
+	}
 
 	i_mark = container_of(fsn_mark, struct inotify_inode_mark, fsn_mark);
 
@@ -548,6 +550,7 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
 	/* return the wd */
 	ret = i_mark->wd;
 
+out:
 	/* match the get from fsnotify_find_mark() */
 	fsnotify_put_mark(fsn_mark);
 
@@ -54,6 +54,28 @@ static LIST_HEAD(kclist_head);
 static DECLARE_RWSEM(kclist_lock);
 static int kcore_need_update = 1;
 
+/*
+ * Returns > 0 for RAM pages, 0 for non-RAM pages, < 0 on error
+ * Same as oldmem_pfn_is_ram in vmcore
+ */
+static int (*mem_pfn_is_ram)(unsigned long pfn);
+
+int __init register_mem_pfn_is_ram(int (*fn)(unsigned long pfn))
+{
+	if (mem_pfn_is_ram)
+		return -EBUSY;
+	mem_pfn_is_ram = fn;
+	return 0;
+}
+
+static int pfn_is_ram(unsigned long pfn)
+{
+	if (mem_pfn_is_ram)
+		return mem_pfn_is_ram(pfn);
+	else
+		return 1;
+}
+
 /* This doesn't grab kclist_lock, so it should only be used at init time. */
 void __init kclist_add(struct kcore_list *new, void *addr, size_t size,
 		       int type)
@@ -465,6 +487,11 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				goto out;
 			}
 			m = NULL;	/* skip the list anchor */
+		} else if (!pfn_is_ram(__pa(start) >> PAGE_SHIFT)) {
+			if (clear_user(buffer, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else if (m->type == KCORE_VMALLOC) {
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
@@ -158,19 +158,29 @@ extern int sysctl_aarp_retransmit_limit;
 extern int sysctl_aarp_resolve_time;
 
 #ifdef CONFIG_SYSCTL
-extern void atalk_register_sysctl(void);
+extern int atalk_register_sysctl(void);
 extern void atalk_unregister_sysctl(void);
 #else
-#define atalk_register_sysctl()		do { } while(0)
-#define atalk_unregister_sysctl()	do { } while(0)
+static inline int atalk_register_sysctl(void)
+{
+	return 0;
+}
+static inline void atalk_unregister_sysctl(void)
+{
+}
 #endif
 
 #ifdef CONFIG_PROC_FS
 extern int atalk_proc_init(void);
 extern void atalk_proc_exit(void);
 #else
-#define atalk_proc_init()	({ 0; })
-#define atalk_proc_exit()	do { } while(0)
+static inline int atalk_proc_init(void)
+{
+	return 0;
+}
+static inline void atalk_proc_exit(void)
+{
+}
 #endif /* CONFIG_PROC_FS */
 
 #endif /* __LINUX_ATALK_H__ */
@@ -124,7 +124,10 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 # define ASM_UNREACHABLE
 #endif
 #ifndef unreachable
-# define unreachable() do { annotate_reachable(); do { } while (1); } while (0)
+# define unreachable() do {		\
+	annotate_unreachable();		\
+	__builtin_unreachable();	\
+} while (0)
 #endif
 
 /*
|
@ -44,6 +44,8 @@ void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
|
|||
m->vaddr = (unsigned long)vaddr;
|
||||
kclist_add(m, addr, sz, KCORE_REMAP);
|
||||
}
|
||||
|
||||
extern int __init register_mem_pfn_is_ram(int (*fn)(unsigned long pfn));
|
||||
#else
|
||||
static inline
|
||||
void kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
|
||||
|
|
|
@@ -155,9 +155,9 @@ struct swap_extent {
 /*
  * Max bad pages in the new format..
  */
-#define __swapoffset(x) ((unsigned long)&((union swap_header *)0)->x)
 #define MAX_SWAP_BADPAGES \
-	((__swapoffset(magic.magic) - __swapoffset(info.badpages)) / sizeof(int))
+	((offsetof(union swap_header, magic.magic) - \
+	  offsetof(union swap_header, info.badpages)) / sizeof(int))
 
 enum {
 	SWP_USED	= (1 << 0),	/* is slot in swap_info[] used? */
@@ -259,6 +259,8 @@ struct hci_dev {
 	__u16		le_max_tx_time;
 	__u16		le_max_rx_len;
 	__u16		le_max_rx_time;
+	__u8		le_max_key_size;
+	__u8		le_min_key_size;
 	__u16		discov_interleaved_timeout;
 	__u16		conn_info_min_age;
 	__u16		conn_info_max_age;
@@ -850,7 +850,7 @@ static inline void xfrm_pols_put(struct xfrm_policy **pols, int npols)
 		xfrm_pol_put(pols[i]);
 }
 
-void __xfrm_state_destroy(struct xfrm_state *);
+void __xfrm_state_destroy(struct xfrm_state *, bool);
 
 static inline void __xfrm_state_put(struct xfrm_state *x)
 {
@@ -860,7 +860,13 @@ static inline void __xfrm_state_put(struct xfrm_state *x)
 static inline void xfrm_state_put(struct xfrm_state *x)
 {
 	if (refcount_dec_and_test(&x->refcnt))
-		__xfrm_state_destroy(x);
+		__xfrm_state_destroy(x, false);
+}
+
+static inline void xfrm_state_put_sync(struct xfrm_state *x)
+{
+	if (refcount_dec_and_test(&x->refcnt))
+		__xfrm_state_destroy(x, true);
 }
 
 static inline void xfrm_state_hold(struct xfrm_state *x)
@@ -1616,7 +1622,7 @@ struct xfrmk_spdinfo {
 
 struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
 int xfrm_state_delete(struct xfrm_state *x);
-int xfrm_state_flush(struct net *net, u8 proto, bool task_valid);
+int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
 void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
 void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
@@ -76,6 +76,7 @@ enum rxrpc_client_trace {
 	rxrpc_client_chan_disconnect,
 	rxrpc_client_chan_pass,
 	rxrpc_client_chan_unstarted,
+	rxrpc_client_chan_wait_failed,
 	rxrpc_client_cleanup,
 	rxrpc_client_count,
 	rxrpc_client_discard,
@@ -275,6 +276,7 @@ enum rxrpc_tx_point {
 	EM(rxrpc_client_chan_disconnect,	"ChDisc") \
 	EM(rxrpc_client_chan_pass,		"ChPass") \
 	EM(rxrpc_client_chan_unstarted,		"ChUnst") \
+	EM(rxrpc_client_chan_wait_failed,	"ChWtFl") \
 	EM(rxrpc_client_cleanup,		"Clean ") \
 	EM(rxrpc_client_count,			"Count ") \
 	EM(rxrpc_client_discard,		"Discar") \
@@ -22,4 +22,20 @@ struct xt_cgroup_info_v1 {
 	void		*priv __attribute__((aligned(8)));
 };
 
+#define XT_CGROUP_PATH_MAX	512
+
+struct xt_cgroup_info_v2 {
+	__u8		has_path;
+	__u8		has_classid;
+	__u8		invert_path;
+	__u8		invert_classid;
+	union {
+		char	path[XT_CGROUP_PATH_MAX];
+		__u32	classid;
+	};
+
+	/* kernel internal data */
+	void		*priv __attribute__((aligned(8)));
+};
+
 #endif /* _UAPI_XT_CGROUP_H */
@@ -554,19 +554,6 @@ struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type typ
 }
 EXPORT_SYMBOL(bpf_prog_get_type_path);
 
-static void bpf_evict_inode(struct inode *inode)
-{
-	enum bpf_type type;
-
-	truncate_inode_pages_final(&inode->i_data);
-	clear_inode(inode);
-
-	if (S_ISLNK(inode->i_mode))
-		kfree(inode->i_link);
-	if (!bpf_inode_type(inode, &type))
-		bpf_any_put(inode->i_private, type);
-}
-
 /*
  * Display the mount options in /proc/mounts.
  */
@@ -579,11 +566,28 @@ static int bpf_show_options(struct seq_file *m, struct dentry *root)
 	return 0;
 }
 
+static void bpf_destroy_inode_deferred(struct rcu_head *head)
+{
+	struct inode *inode = container_of(head, struct inode, i_rcu);
+	enum bpf_type type;
+
+	if (S_ISLNK(inode->i_mode))
+		kfree(inode->i_link);
+	if (!bpf_inode_type(inode, &type))
+		bpf_any_put(inode->i_private, type);
+	free_inode_nonrcu(inode);
+}
+
+static void bpf_destroy_inode(struct inode *inode)
+{
+	call_rcu(&inode->i_rcu, bpf_destroy_inode_deferred);
+}
+
 static const struct super_operations bpf_super_ops = {
 	.statfs		= simple_statfs,
 	.drop_inode	= generic_delete_inode,
 	.show_options	= bpf_show_options,
-	.evict_inode	= bpf_evict_inode,
+	.destroy_inode	= bpf_destroy_inode,
 };
 
 enum {
Some files were not shown because too many files have changed in this diff.