This is the 4.19.79 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl2grBgACgkQONu9yGCS
 aT6xRBAA0pTW2W/VvzBHBLeVlmNtwQZb8x7civVb72iZkltKR9tTPim90PULpz/P
 iO7kh8KqkgVUqdgBE0VzkHGWUSThggfSTQiqzCqOgTwV8WQWqSF8ET0HU8zbglYB
 5pXSojoRYmurGVznd4Ll6aWa5brXIKwf1mDSrFHagOyOLxQmyggHaTRSLx36BSfj
 gunE2ideB1oTaPmd/2aTI03CU3jRwXmowe8rZIDa8pJEpplZPFdk0YOPXg2t6uRI
 bjJGO8bhfR/14r/3h76IwsEiVVXIcCeEVm0fos/H6NUypedfi7jlT0Ldzg1/zZti
 mUMkbPGHcJbOWfBYPQq8xQzviCa+MFraA4Tek5h/Lf7kf3NpjE20AnH3pb9TaqQf
 mJYUGziCoOOOz8k+0eNtIjIZiCysOnf9sI5rGhMYb9qfZoZGG6RiitqyVYNa+rzJ
 wvIUQZ4vSnYmQMAXqxyayfSZvFbMxv6pAdeH0NrXVRgFF6dnKG9TSsCnIuQaJxAE
 OQRaYEJktMUBs81hS0IjnJNDFLW3r++s87xEYvCt4L7XGSrxMJ3jW6xLZlmET68G
 4UIddJ81zIuqpGY1qoWdWZAp3nfRfSX4ehOnoNmIDyC9pRhiCKc+N6j5rX8gBNO/
 SO8YOaNf9RTphhEG6Op7u4ZbU+UR4pYP+rjKveyT2HKPH6D/Tv0=
 =wt6H
 -----END PGP SIGNATURE-----

Merge 4.19.79 into android-4.19-q

Changes in 4.19.79
	s390/process: avoid potential reading of freed stack
	KVM: s390: Test for bad access register and size at the start of S390_MEM_OP
	s390/topology: avoid firing events before kobjs are created
	s390/cio: exclude subchannels with no parent from pseudo check
	KVM: PPC: Book3S HV: Fix race in re-enabling XIVE escalation interrupts
	KVM: PPC: Book3S HV: Check for MMU ready on piggybacked virtual cores
	KVM: PPC: Book3S HV: Don't lose pending doorbell request on migration on P9
	KVM: X86: Fix userspace set invalid CR4
	KVM: nVMX: handle page fault in vmread fix
	nbd: fix max number of supported devs
	PM / devfreq: tegra: Fix kHz to Hz conversion
	ASoC: Define a set of DAPM pre/post-up events
	ASoC: sgtl5000: Improve VAG power and mute control
	powerpc/mce: Fix MCE handling for huge pages
	powerpc/mce: Schedule work from irq_work
	powerpc/powernv: Restrict OPAL symbol map to only be readable by root
	powerpc/powernv/ioda: Fix race in TCE level allocation
	powerpc/book3s64/mm: Don't do tlbie fixup for some hardware revisions
	can: mcp251x: mcp251x_hw_reset(): allow more time after a reset
	tools lib traceevent: Fix "robust" test of do_generate_dynamic_list_file
	crypto: qat - Silence smp_processor_id() warning
	crypto: skcipher - Unmap pages after an external error
	crypto: cavium/zip - Add missing single_release()
	crypto: caam - fix concurrency issue in givencrypt descriptor
	crypto: ccree - account for TEE not ready to report
	crypto: ccree - use the full crypt length value
	MIPS: Treat Loongson Extensions as ASEs
	power: supply: sbs-battery: use correct flags field
	power: supply: sbs-battery: only return health when battery present
	tracing: Make sure variable reference alias has correct var_ref_idx
	usercopy: Avoid HIGHMEM pfn warning
	timer: Read jiffies once when forwarding base clk
	PCI: vmd: Fix shadow offsets to reflect spec changes
	PCI: Restore Resizable BAR size bits correctly for 1MB BARs
	watchdog: imx2_wdt: fix min() calculation in imx2_wdt_set_timeout
	perf stat: Fix a segmentation fault when using repeat forever
	drm/omap: fix max fclk divider for omap36xx
	drm/msm/dsi: Fix return value check for clk_get_parent
	drm/nouveau/kms/nv50-: Don't create MSTMs for eDP connectors
	drm/i915/gvt: update vgpu workload head pointer correctly
	mmc: sdhci: improve ADMA error reporting
	mmc: sdhci-of-esdhc: set DMA snooping based on DMA coherence
	Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"
	xen/xenbus: fix self-deadlock after killing user process
	ieee802154: atusb: fix use-after-free at disconnect
	s390/cio: avoid calling strlen on null pointer
	cfg80211: initialize on-stack chandefs
	arm64: cpufeature: Detect SSBS and advertise to userspace
	ima: always return negative code for error
	ima: fix freeing ongoing ahash_request
	fs: nfs: Fix possible null-pointer dereferences in encode_attrs()
	9p: Transport error uninitialized
	9p: avoid attaching writeback_fid on mmap with type PRIVATE
	xen/pci: reserve MCFG areas earlier
	ceph: fix directories inode i_blkbits initialization
	ceph: reconnect connection if session hang in opening state
	watchdog: aspeed: Add support for AST2600
	netfilter: nf_tables: allow lookups in dynamic sets
	drm/amdgpu: Fix KFD-related kernel oops on Hawaii
	drm/amdgpu: Check for valid number of registers to read
	pNFS: Ensure we do clear the return-on-close layout stateid on fatal errors
	pwm: stm32-lp: Add check in case requested period cannot be achieved
	x86/purgatory: Disable the stackleak GCC plugin for the purgatory
	ntb: point to right memory window index
	thermal: Fix use-after-free when unregistering thermal zone device
	thermal_hwmon: Sanitize thermal_zone type
	libnvdimm/region: Initialize bad block for volatile namespaces
	fuse: fix memleak in cuse_channel_open
	libnvdimm/nfit_test: Fix acpi_handle redefinition
	sched/membarrier: Call sync_core only before usermode for same mm
	sched/membarrier: Fix private expedited registration check
	sched/core: Fix migration to invalid CPU in __set_cpus_allowed_ptr()
	perf build: Add detection of java-11-openjdk-devel package
	kernel/elfcore.c: include proper prototypes
	perf unwind: Fix libunwind build failure on i386 systems
	nfp: flower: fix memory leak in nfp_flower_spawn_vnic_reprs
	drm/radeon: Bail earlier when radeon.cik_/si_support=0 is passed
	KVM: PPC: Book3S HV: XIVE: Free escalation interrupts before disabling the VP
	KVM: nVMX: Fix consistency check on injected exception error code
	nbd: fix crash when the blksize is zero
	powerpc/pseries: Fix cpu_hotplug_lock acquisition in resize_hpt()
	powerpc/book3s64/radix: Rename CPU_FTR_P9_TLBIE_BUG feature flag
	tools lib traceevent: Do not free tep->cmdlines in add_new_comm() on failure
	tick: broadcast-hrtimer: Fix a race in bc_set_next
	perf tools: Fix segfault in cpu_cache_level__read()
	perf stat: Reset previous counts on repeat with interval
	riscv: Avoid interrupts being erroneously enabled in handle_exception()
	arm64: ssbd: Add support for PSTATE.SSBS rather than trapping to EL3
	KVM: arm64: Set SCTLR_EL2.DSSBS if SSBD is forcefully disabled and !vhe
	arm64: docs: Document SSBS HWCAP
	arm64: fix SSBS sanitization
	arm64: Add sysfs vulnerability show for spectre-v1
	arm64: add sysfs vulnerability show for meltdown
	arm64: enable generic CPU vulnerabilites support
	arm64: Always enable ssb vulnerability detection
	arm64: Provide a command line to disable spectre_v2 mitigation
	arm64: Advertise mitigation of Spectre-v2, or lack thereof
	arm64: Always enable spectre-v2 vulnerability detection
	arm64: add sysfs vulnerability show for spectre-v2
	arm64: add sysfs vulnerability show for speculative store bypass
	arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
	arm64: Force SSBS on context switch
	arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
	arm64/speculation: Support 'mitigations=' cmdline option
	vfs: Fix EOVERFLOW testing in put_compat_statfs64
	coresight: etm4x: Use explicit barriers on enable/disable
	staging: erofs: fix an error handling in erofs_readdir()
	staging: erofs: some compressed cluster should be submitted for corrupted images
	staging: erofs: add two missing erofs_workgroup_put for corrupted images
	staging: erofs: detect potential multiref due to corrupted images
	cfg80211: add and use strongly typed element iteration macros
	cfg80211: Use const more consistently in for_each_element macros
	nl80211: validate beacon head
	Linux 4.19.79

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I360de3b953c9a43f595ff403e9821989f2966019
121 files changed, 1420 insertions(+), 472 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
@@ -2503,8 +2503,8 @@
http://repo.or.cz/w/linux-2.6/mini2440.git
mitigations=
[X86,PPC,S390] Control optional mitigations for CPU
vulnerabilities. This is a set of curated,
[X86,PPC,S390,ARM64] Control optional mitigations for
CPU vulnerabilities. This is a set of curated,
arch-independent options, each of which is an
aggregation of existing arch-specific options.
@@ -2513,12 +2513,14 @@
improves system performance, but it may also
expose users to several CPU vulnerabilities.
Equivalent to: nopti [X86,PPC]
kpti=0 [ARM64]
nospectre_v1 [PPC]
nobp=0 [S390]
nospectre_v1 [X86]
nospectre_v2 [X86,PPC,S390]
nospectre_v2 [X86,PPC,S390,ARM64]
spectre_v2_user=off [X86]
spec_store_bypass_disable=off [X86,PPC]
ssbd=force-off [ARM64]
l1tf=off [X86]
mds=off [X86]
@@ -2866,10 +2868,10 @@
(bounds check bypass). With this option data leaks
are possible in the system.
nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
(indirect branch prediction) vulnerability. System may
allow data leaks with this option, which is equivalent
to spectre_v2=off.
nospectre_v2 [X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
the Spectre variant 2 (indirect branch prediction)
vulnerability. System may allow data leaks with this
option.
nospec_store_bypass_disable
[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
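The mitigations= aggregation documented above is consumed by arch code through one generic helper. A minimal hedged sketch of that pattern, assuming only <linux/cpu.h> (the function name is illustrative; the real consumers are the arm64 hunks later in this commit, e.g. unmap_kernel_at_el0() and has_ssbd_mitigation()):

#include <linux/cpu.h>

/* Illustrative sketch: true unless the user booted with mitigations=off. */
static bool my_arch_mitigation_wanted(void)
{
	return !cpu_mitigations_off();
}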

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
@@ -178,3 +178,7 @@ HWCAP_ILRCPC
HWCAP_FLAGM
Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
HWCAP_SSBS
Functionality implied by ID_AA64PFR1_EL1.SSBS == 0b0010.

diff --git a/Makefile b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 78
SUBLEVEL = 79
EXTRAVERSION =
NAME = "People's Front"

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
@@ -85,6 +85,7 @@ config ARM64
select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST
select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IRQ_MULTI_HANDLER

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
#define ARM64_MISMATCHED_CACHE_TYPE 31
#define ARM64_HAS_STAGE2_FWB 32
#define ARM64_WORKAROUND_1463225 33
#define ARM64_SSBS 34
#define ARM64_NCAPS 34
#define ARM64_NCAPS 35
#endif /* __ASM_CPUCAPS_H */

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
@@ -525,11 +525,7 @@ static inline int arm64_get_ssbd_state(void)
#endif
}
#ifdef CONFIG_ARM64_SSBD
void arm64_set_ssbd_mitigation(bool state);
#else
static inline void arm64_set_ssbd_mitigation(bool state) {}
#endif
#endif /* __ASSEMBLY__ */

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
@@ -398,6 +398,8 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state);
void __kvm_enable_ssbs(void);
static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
unsigned long hyp_stack_ptr,
unsigned long vector_ptr)
@@ -418,6 +420,15 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
*/
BUG_ON(!static_branch_likely(&arm64_const_caps_ready));
__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);
/*
* Disabling SSBD on a non-VHE system requires us to enable SSBS
* at EL2.
*/
if (!has_vhe() && this_cpu_has_cap(ARM64_SSBS) &&
arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
kvm_call_hyp(__kvm_enable_ssbs);
}
}
static inline bool kvm_arch_check_sve_has_vhe(void)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
@@ -177,11 +177,25 @@ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
regs->pc = pc;
}
static inline void set_ssbs_bit(struct pt_regs *regs)
{
regs->pstate |= PSR_SSBS_BIT;
}
static inline void set_compat_ssbs_bit(struct pt_regs *regs)
{
regs->pstate |= PSR_AA32_SSBS_BIT;
}
static inline void start_thread(struct pt_regs *regs, unsigned long pc,
unsigned long sp)
{
start_thread_common(regs, pc);
regs->pstate = PSR_MODE_EL0t;
if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
set_ssbs_bit(regs);
regs->sp = sp;
}
@@ -198,6 +212,9 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
regs->pstate |= PSR_AA32_E_BIT;
#endif
if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
set_compat_ssbs_bit(regs);
regs->compat_sp = sp;
}
#endif

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
@@ -50,6 +50,7 @@
#define PSR_AA32_I_BIT 0x00000080
#define PSR_AA32_A_BIT 0x00000100
#define PSR_AA32_E_BIT 0x00000200
#define PSR_AA32_SSBS_BIT 0x00800000
#define PSR_AA32_DIT_BIT 0x01000000
#define PSR_AA32_Q_BIT 0x08000000
#define PSR_AA32_V_BIT 0x10000000

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
@@ -86,11 +86,14 @@
#define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4)
#define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3)
#define REG_PSTATE_SSBS_IMM sys_reg(0, 3, 4, 0, 1)
#define SET_PSTATE_PAN(x) __emit_inst(0xd5000000 | REG_PSTATE_PAN_IMM | \
(!!x)<<8 | 0x1f)
#define SET_PSTATE_UAO(x) __emit_inst(0xd5000000 | REG_PSTATE_UAO_IMM | \
(!!x)<<8 | 0x1f)
#define SET_PSTATE_SSBS(x) __emit_inst(0xd5000000 | REG_PSTATE_SSBS_IMM | \
(!!x)<<8 | 0x1f)
#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
#define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2)
@@ -419,6 +422,7 @@
#define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7)
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (1UL << 44)
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_IESB (1 << 21)
#define SCTLR_ELx_WXN (1 << 19)
@@ -439,7 +443,7 @@
(1 << 10) | (1 << 13) | (1 << 14) | (1 << 15) | \
(1 << 17) | (1 << 20) | (1 << 24) | (1 << 26) | \
(1 << 27) | (1 << 30) | (1 << 31) | \
(0xffffffffUL << 32))
(0xffffefffUL << 32))
#ifdef CONFIG_CPU_BIG_ENDIAN
#define ENDIAN_SET_EL2 SCTLR_ELx_EE
@@ -453,7 +457,7 @@
#define SCTLR_EL2_SET (SCTLR_ELx_IESB | ENDIAN_SET_EL2 | SCTLR_EL2_RES1)
#define SCTLR_EL2_CLEAR (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
SCTLR_ELx_SA | SCTLR_ELx_I | SCTLR_ELx_WXN | \
ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
SCTLR_ELx_DSSBS | ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffffffffffff
#error "Inconsistent SCTLR_EL2 set/clear bits"
@@ -477,7 +481,7 @@
(1 << 29))
#define SCTLR_EL1_RES0 ((1 << 6) | (1 << 10) | (1 << 13) | (1 << 17) | \
(1 << 27) | (1 << 30) | (1 << 31) | \
(0xffffffffUL << 32))
(0xffffefffUL << 32))
#ifdef CONFIG_CPU_BIG_ENDIAN
#define ENDIAN_SET_EL1 (SCTLR_EL1_E0E | SCTLR_ELx_EE)
@@ -494,7 +498,7 @@
ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_RES1)
#define SCTLR_EL1_CLEAR (SCTLR_ELx_A | SCTLR_EL1_CP15BEN | SCTLR_EL1_ITD |\
SCTLR_EL1_UMA | SCTLR_ELx_WXN | ENDIAN_CLEAR_EL1 |\
SCTLR_EL1_RES0)
SCTLR_ELx_DSSBS | SCTLR_EL1_RES0)
#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
#error "Inconsistent SCTLR_EL1 set/clear bits"
@@ -544,6 +548,13 @@
#define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL0_32BIT_64BIT 0x2
/* id_aa64pfr1 */
#define ID_AA64PFR1_SSBS_SHIFT 4
#define ID_AA64PFR1_SSBS_PSTATE_NI 0
#define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
/* id_aa64mmfr0 */
#define ID_AA64MMFR0_TGRAN4_SHIFT 28
#define ID_AA64MMFR0_TGRAN64_SHIFT 24

diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
@@ -48,5 +48,6 @@
#define HWCAP_USCAT (1 << 25)
#define HWCAP_ILRCPC (1 << 26)
#define HWCAP_FLAGM (1 << 27)
#define HWCAP_SSBS (1 << 28)
#endif /* _UAPI__ASM_HWCAP_H */
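Once this bit is exported and advertised (see the arm64_elf_hwcaps[] hunk in cpufeature.c below), user space can probe for SSBS through the auxiliary vector. A hedged user-space sketch; the fallback #define just mirrors the value above for older headers:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SSBS
#define HWCAP_SSBS	(1 << 28)	/* value from the hunk above */
#endif

int main(void)
{
	if (getauxval(AT_HWCAP) & HWCAP_SSBS)
		puts("ssbs: PSTATE.SSBS controls available");
	else
		puts("ssbs: not advertised by this kernel/CPU");
	return 0;
}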

diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
@@ -46,6 +46,7 @@
#define PSR_I_BIT 0x00000080
#define PSR_A_BIT 0x00000100
#define PSR_D_BIT 0x00000200
#define PSR_SSBS_BIT 0x00001000
#define PSR_PAN_BIT 0x00400000
#define PSR_UAO_BIT 0x00800000
#define PSR_V_BIT 0x10000000

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
@@ -19,6 +19,7 @@
#include <linux/arm-smccc.h>
#include <linux/psci.h>
#include <linux/types.h>
#include <linux/cpu.h>
#include <asm/cpu.h>
#include <asm/cputype.h>
#include <asm/cpufeature.h>
@@ -87,7 +88,6 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
@@ -109,7 +109,7 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
}
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
static void install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
@@ -138,7 +138,7 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
#define __smccc_workaround_1_smc_start NULL
#define __smccc_workaround_1_smc_end NULL
static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
static void install_bp_hardening_cb(bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
@@ -146,23 +146,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
}
#endif /* CONFIG_KVM_INDIRECT_VECTORS */
static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
bp_hardening_cb_t fn,
const char *hyp_vecs_start,
const char *hyp_vecs_end)
{
u64 pfr0;
if (!entry->matches(entry, SCOPE_LOCAL_CPU))
return;
pfr0 = read_cpuid(ID_AA64PFR0_EL1);
if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
return;
__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
}
#include <uapi/linux/psci.h>
#include <linux/arm-smccc.h>
#include <linux/psci.h>
@@ -189,60 +172,83 @@ static void qcom_link_stack_sanitization(void)
: "=&r" (tmp));
}
static void
enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
static bool __nospectre_v2;
static int __init parse_nospectre_v2(char *str)
{
__nospectre_v2 = true;
return 0;
}
early_param("nospectre_v2", parse_nospectre_v2);
/*
* -1: No workaround
* 0: No workaround required
* 1: Workaround installed
*/
static int detect_harden_bp_fw(void)
{
bp_hardening_cb_t cb;
void *smccc_start, *smccc_end;
struct arm_smccc_res res;
u32 midr = read_cpuid_id();
if (!entry->matches(entry, SCOPE_LOCAL_CPU))
return;
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
return;
return -1;
switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
return;
switch ((int)res.a0) {
case 1:
/* Firmware says we're just fine */
return 0;
case 0:
cb = call_hvc_arch_workaround_1;
/* This is a guest, no need to patch KVM vectors */
smccc_start = NULL;
smccc_end = NULL;
break;
default:
return -1;
}
break;
case PSCI_CONDUIT_SMC:
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
return;
switch ((int)res.a0) {
case 1:
/* Firmware says we're just fine */
return 0;
case 0:
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start;
smccc_end = __smccc_workaround_1_smc_end;
break;
default:
return -1;
}
break;
default:
return;
return -1;
}
if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
cb = qcom_link_stack_sanitization;
install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
install_bp_hardening_cb(cb, smccc_start, smccc_end);
return;
return 1;
}
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
#ifdef CONFIG_ARM64_SSBD
DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
static bool __ssb_safe = true;
static const struct ssbd_options {
const char *str;
@@ -312,6 +318,19 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
void arm64_set_ssbd_mitigation(bool state)
{
if (!IS_ENABLED(CONFIG_ARM64_SSBD)) {
pr_info_once("SSBD disabled by kernel configuration\n");
return;
}
if (this_cpu_has_cap(ARM64_SSBS)) {
if (state)
asm volatile(SET_PSTATE_SSBS(0));
else
asm volatile(SET_PSTATE_SSBS(1));
return;
}
switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
@@ -333,11 +352,28 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
struct arm_smccc_res res;
bool required = true;
s32 val;
bool this_cpu_safe = false;
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
if (cpu_mitigations_off())
ssbd_state = ARM64_SSBD_FORCE_DISABLE;
/* delay setting __ssb_safe until we get a firmware response */
if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list))
this_cpu_safe = true;
if (this_cpu_has_cap(ARM64_SSBS)) {
if (!this_cpu_safe)
__ssb_safe = false;
required = false;
goto out_printmsg;
}
if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
ssbd_state = ARM64_SSBD_UNKNOWN;
if (!this_cpu_safe)
__ssb_safe = false;
return false;
}
@@ -354,6 +390,8 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
default:
ssbd_state = ARM64_SSBD_UNKNOWN;
if (!this_cpu_safe)
__ssb_safe = false;
return false;
}
@@ -362,14 +400,18 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
switch (val) {
case SMCCC_RET_NOT_SUPPORTED:
ssbd_state = ARM64_SSBD_UNKNOWN;
if (!this_cpu_safe)
__ssb_safe = false;
return false;
/* machines with mixed mitigation requirements must not return this */
case SMCCC_RET_NOT_REQUIRED:
pr_info_once("%s mitigation not required\n", entry->desc);
ssbd_state = ARM64_SSBD_MITIGATED;
return false;
case SMCCC_RET_SUCCESS:
__ssb_safe = false;
required = true;
break;
@@ -379,12 +421,13 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
default:
WARN_ON(1);
if (!this_cpu_safe)
__ssb_safe = false;
return false;
}
switch (ssbd_state) {
case ARM64_SSBD_FORCE_DISABLE:
pr_info_once("%s disabled from command-line\n", entry->desc);
arm64_set_ssbd_mitigation(false);
required = false;
break;
@@ -397,7 +440,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
break;
case ARM64_SSBD_FORCE_ENABLE:
pr_info_once("%s forced from command-line\n", entry->desc);
arm64_set_ssbd_mitigation(true);
required = true;
break;
@@ -407,9 +449,27 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
break;
}
out_printmsg:
switch (ssbd_state) {
case ARM64_SSBD_FORCE_DISABLE:
pr_info_once("%s disabled from command-line\n", entry->desc);
break;
case ARM64_SSBD_FORCE_ENABLE:
pr_info_once("%s forced from command-line\n", entry->desc);
break;
}
return required;
}
#endif /* CONFIG_ARM64_SSBD */
/* known invulnerable cores */
static const struct midr_range arm64_ssb_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
{},
};
#ifdef CONFIG_ARM64_ERRATUM_1463225
DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
@@ -464,6 +524,10 @@ has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
CAP_MIDR_RANGE_LIST(midr_list)
/* Track overall mitigation state. We are only mitigated if all cores are ok */
static bool __hardenbp_enab = true;
static bool __spectrev2_safe = true;
/*
* Generic helper for handling capabilties with multiple (match,enable) pairs
* of call backs, sharing the same capability bit.
@@ -496,26 +560,63 @@ multi_entry_cap_cpu_enable(const struct arm64_cpu_capabilities *entry)
caps->cpu_enable(caps);
}
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
/*
* List of CPUs where we need to issue a psci call to
* harden the branch predictor.
* List of CPUs that do not need any Spectre-v2 mitigation at all.
*/
static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
{},
static const struct midr_range spectre_v2_safe_list[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
{ /* sentinel */ }
};
#endif
/*
* Track overall bp hardening for all heterogeneous cores in the machine.
* We are only considered "safe" if all booted cores are known safe.
*/
static bool __maybe_unused
check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
{
int need_wa;
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
/* If the CPU has CSV2 set, we're safe */
if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
ID_AA64PFR0_CSV2_SHIFT))
return false;
/* Alternatively, we have a list of unaffected CPUs */
if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
return false;
/* Fallback to firmware detection */
need_wa = detect_harden_bp_fw();
if (!need_wa)
return false;
__spectrev2_safe = false;
if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
pr_warn_once("spectrev2 mitigation disabled by kernel configuration\n");
__hardenbp_enab = false;
return false;
}
/* forced off */
if (__nospectre_v2 || cpu_mitigations_off()) {
pr_info_once("spectrev2 mitigation disabled by command line option\n");
__hardenbp_enab = false;
return false;
}
if (need_wa < 0) {
pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
__hardenbp_enab = false;
}
return (need_wa > 0);
}
#ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -674,13 +775,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
},
#endif
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
.cpu_enable = enable_smccc_arch_workaround_1,
ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = check_branch_predictor,
},
#endif
#ifdef CONFIG_HARDEN_EL2_VECTORS
{
.desc = "EL2 vector hardening",
@@ -688,14 +787,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
},
#endif
#ifdef CONFIG_ARM64_SSBD
{
.desc = "Speculative Store Bypass Disable",
.capability = ARM64_SSBD,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = has_ssbd_mitigation,
.midr_range_list = arm64_ssb_cpus,
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_1463225
{
.desc = "ARM erratum 1463225",
@@ -707,3 +805,38 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
{
}
};
ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
char *buf)
{
if (__spectrev2_safe)
return sprintf(buf, "Not affected\n");
if (__hardenbp_enab)
return sprintf(buf, "Mitigation: Branch predictor hardening\n");
return sprintf(buf, "Vulnerable\n");
}
ssize_t cpu_show_spec_store_bypass(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (__ssb_safe)
return sprintf(buf, "Not affected\n");
switch (ssbd_state) {
case ARM64_SSBD_KERNEL:
case ARM64_SSBD_FORCE_ENABLE:
if (IS_ENABLED(CONFIG_ARM64_SSBD))
return sprintf(buf,
"Mitigation: Speculative Store Bypass disabled via prctl\n");
}
return sprintf(buf, "Vulnerable\n");
}
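With GENERIC_CPU_VULNERABILITIES selected in the Kconfig hunk above, these cpu_show_*() implementations back the standard files under /sys/devices/system/cpu/vulnerabilities/. A hedged user-space sketch reading one of them:

#include <stdio.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

	if (!f)
		return 1;	/* absent on kernels without this series */
	if (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* e.g. "Mitigation: Branch predictor hardening" */
	fclose(f);
	return 0;
}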

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
@@ -24,6 +24,7 @@
#include <linux/stop_machine.h>
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/cpu.h>
#include <asm/cpu.h>
#include <asm/cpufeature.h>
#include <asm/cpu_ops.h>
@@ -164,6 +165,11 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
/*
* We already refuse to boot CPUs that don't support our configured
@@ -379,7 +385,7 @@ static const struct __ftr_reg_entry {
/* Op1 = 0, CRn = 0, CRm = 4 */
ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_raz),
ARM64_FTR_REG(SYS_ID_AA64PFR1_EL1, ftr_id_aa64pfr1),
ARM64_FTR_REG(SYS_ID_AA64ZFR0_EL1, ftr_raz),
/* Op1 = 0, CRn = 0, CRm = 5 */
@@ -669,7 +675,6 @@ void update_cpu_features(int cpu,
/*
* EL3 is not our concern.
* ID_AA64PFR1 is currently RES0.
*/
taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
@@ -885,7 +890,7 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
return ctr & BIT(CTR_DIC_SHIFT);
}
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
static bool __meltdown_safe = true;
static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -903,7 +908,17 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
{ /* sentinel */ }
};
char const *str = "command line option";
char const *str = "kpti command line option";
bool meltdown_safe;
meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
/* Defer to CPU feature registers */
if (has_cpuid_feature(entry, scope))
meltdown_safe = true;
if (!meltdown_safe)
__meltdown_safe = false;
/*
* For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -915,6 +930,24 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
__kpti_forced = -1;
}
/* Useful for KASLR robustness */
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
if (!__kpti_forced) {
str = "KASLR";
__kpti_forced = 1;
}
}
if (cpu_mitigations_off() && !__kpti_forced) {
str = "mitigations=off";
__kpti_forced = -1;
}
if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
pr_info_once("kernel page table isolation disabled by kernel configuration\n");
return false;
}
/* Forced? */
if (__kpti_forced) {
pr_info_once("kernel page table isolation forced %s by %s\n",
@@ -922,18 +955,10 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
return __kpti_forced > 0;
}
/* Useful for KASLR robustness */
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
return true;
/* Don't force KPTI for CPUs that are not vulnerable */
if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
return false;
/* Defer to CPU feature registers */
return !has_cpuid_feature(entry, scope);
return !meltdown_safe;
}
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
static void __nocfi
kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
{
@@ -958,6 +983,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
return;
}
#else
static void
kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
{
}
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
static int __init parse_kpti(char *str)
{
@@ -971,7 +1002,6 @@ static int __init parse_kpti(char *str)
return 0;
}
early_param("kpti", parse_kpti);
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
#ifdef CONFIG_ARM64_HW_AFDBM
static inline void __cpu_enable_hw_dbm(void)
@@ -1067,6 +1097,48 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
WARN_ON(val & (7 << 27 | 7 << 21));
}
#ifdef CONFIG_ARM64_SSBD
static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
{
if (user_mode(regs))
return 1;
if (instr & BIT(CRm_shift))
regs->pstate |= PSR_SSBS_BIT;
else
regs->pstate &= ~PSR_SSBS_BIT;
arm64_skip_faulting_instruction(regs, 4);
return 0;
}
static struct undef_hook ssbs_emulation_hook = {
.instr_mask = ~(1U << CRm_shift),
.instr_val = 0xd500001f | REG_PSTATE_SSBS_IMM,
.fn = ssbs_emulation_handler,
};
static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
{
static bool undef_hook_registered = false;
static DEFINE_SPINLOCK(hook_lock);
spin_lock(&hook_lock);
if (!undef_hook_registered) {
register_undef_hook(&ssbs_emulation_hook);
undef_hook_registered = true;
}
spin_unlock(&hook_lock);
if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
arm64_set_ssbd_mitigation(false);
} else {
arm64_set_ssbd_mitigation(true);
}
}
#endif /* CONFIG_ARM64_SSBD */
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
@@ -1150,7 +1222,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64PFR0_EL0_SHIFT,
.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
},
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
{
.desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1166,7 +1237,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = unmap_kernel_at_el0,
.cpu_enable = kpti_install_ng_mappings,
},
#endif
{
/* FP/SIMD is not implemented */
.capability = ARM64_HAS_NO_FPSIMD,
@@ -1253,6 +1323,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_hw_dbm,
.cpu_enable = cpu_enable_hw_dbm,
},
#endif
#ifdef CONFIG_ARM64_SSBD
{
.desc = "Speculative Store Bypassing Safe (SSBS)",
.capability = ARM64_SSBS,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_SSBS_SHIFT,
.sign = FTR_UNSIGNED,
.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
.cpu_enable = cpu_enable_ssbs,
},
#endif
{},
};
@@ -1299,6 +1382,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
#ifdef CONFIG_ARM64_SVE
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
#endif
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
{},
};
@@ -1793,3 +1877,15 @@ void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
/* Firmware may have left a deferred SError in this register. */
write_sysreg_s(0, SYS_DISR_EL1);
}
ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
char *buf)
{
if (__meltdown_safe)
return sprintf(buf, "Not affected\n");
if (arm64_kernel_unmapped_at_el0())
return sprintf(buf, "Mitigation: PTI\n");
return sprintf(buf, "Vulnerable\n");
}

diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
@@ -81,6 +81,7 @@ static const char *const hwcap_str[] = {
"uscat",
"ilrcpc",
"flagm",
"ssbs",
NULL
};

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
@@ -358,6 +358,10 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
if (IS_ENABLED(CONFIG_ARM64_UAO) &&
cpus_have_const_cap(ARM64_HAS_UAO))
childregs->pstate |= PSR_UAO_BIT;
if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
set_ssbs_bit(childregs);
p->thread.cpu_context.x19 = stack_start;
p->thread.cpu_context.x20 = stk_sz;
}
@@ -397,6 +401,32 @@ void uao_thread_switch(struct task_struct *next)
}
}
/*
* Force SSBS state on context-switch, since it may be lost after migrating
* from a CPU which treats the bit as RES0 in a heterogeneous system.
*/
static void ssbs_thread_switch(struct task_struct *next)
{
struct pt_regs *regs = task_pt_regs(next);
/*
* Nothing to do for kernel threads, but 'regs' may be junk
* (e.g. idle task) so check the flags and bail early.
*/
if (unlikely(next->flags & PF_KTHREAD))
return;
/* If the mitigation is enabled, then we leave SSBS clear. */
if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
test_tsk_thread_flag(next, TIF_SSBD))
return;
if (compat_user_mode(regs))
set_compat_ssbs_bit(regs);
else if (user_mode(regs))
set_ssbs_bit(regs);
}
/*
* We store our current task in sp_el0, which is clobbered by userspace. Keep a
* shadow copy so that we can restore this upon entry from userspace.
@@ -425,6 +455,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
contextidr_thread_switch(next);
entry_task_switch(next);
uao_thread_switch(next);
ssbs_thread_switch(next);
/*
* Complete any pending TLB or cache maintenance on this CPU in case

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
@@ -1666,19 +1666,20 @@ void syscall_trace_exit(struct pt_regs *regs)
}
/*
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
* We also take into account DIT (bit 24), which is not yet documented, and
* treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
* allocated an EL0 meaning in future.
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
* We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
* not described in ARM DDI 0487D.a.
* We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
* be allocated an EL0 meaning in future.
* Userspace cannot use these until they have an architectural meaning.
* Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
* We also reserve IL for the kernel; SS is handled dynamically.
*/
#define SPSR_EL1_AARCH64_RES0_BITS \
(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
#define SPSR_EL1_AARCH32_RES0_BITS \
(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
static int valid_compat_regs(struct user_pt_regs *regs)
{

diff --git a/arch/arm64/kernel/ssbd.c b/arch/arm64/kernel/ssbd.c
@@ -3,13 +3,31 @@
* Copyright (C) 2018 ARM Ltd, All Rights Reserved.
*/
#include <linux/compat.h>
#include <linux/errno.h>
#include <linux/prctl.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/thread_info.h>
#include <asm/cpufeature.h>
static void ssbd_ssbs_enable(struct task_struct *task)
{
u64 val = is_compat_thread(task_thread_info(task)) ?
PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
task_pt_regs(task)->pstate |= val;
}
static void ssbd_ssbs_disable(struct task_struct *task)
{
u64 val = is_compat_thread(task_thread_info(task)) ?
PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
task_pt_regs(task)->pstate &= ~val;
}
/*
* prctl interface for SSBD
* FIXME: Drop the below ifdefery once merged in 4.18.
@@ -47,12 +65,14 @@ static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
return -EPERM;
task_clear_spec_ssb_disable(task);
clear_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_enable(task);
break;
case PR_SPEC_DISABLE:
if (state == ARM64_SSBD_FORCE_DISABLE)
return -EPERM;
task_set_spec_ssb_disable(task);
set_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_disable(task);
break;
case PR_SPEC_FORCE_DISABLE:
if (state == ARM64_SSBD_FORCE_DISABLE)
@@ -60,6 +80,7 @@ static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
task_set_spec_ssb_disable(task);
task_set_spec_ssb_force_disable(task);
set_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_disable(task);
break;
default:
return -ERANGE;
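For reference, the user-space side of the interface these hunks extend is the speculation-control prctl; with this patch the arm64 handler also flips the SSBS pstate bit. A hedged sketch (the fallback defines mirror the standard uapi values for older headers):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
#define PR_SET_SPECULATION_CTRL	53
#define PR_SPEC_STORE_BYPASS	0
#define PR_SPEC_DISABLE		(1UL << 2)
#endif

int main(void)
{
	/* ask the kernel to disable speculative store bypass for this task */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");
	return 0;
}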

diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -293,3 +293,14 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
vcpu->arch.sysregs_loaded_on_cpu = false;
}
void __hyp_text __kvm_enable_ssbs(void)
{
u64 tmp;
asm volatile(
"mrs %0, sctlr_el2\n"
"orr %0, %0, %1\n"
"msr sctlr_el2, %0"
: "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS));
}
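__kvm_enable_ssbs() is raw assembly because it is __hyp_text: on !VHE, only hyp code runs at EL2. For contrast, a hedged sketch of the same SCTLR_EL2.DSSBS update using the ordinary <asm/sysreg.h> accessors, legal only where the kernel itself runs at EL2 (VHE):

static inline void enable_dssbs_el2_sketch(void)
{
	/* illustrative only; mirrors the sysreg_clear_set(sctlr_el1, ...)
	 * call in the cpufeature.c hunk of this commit */
	sysreg_clear_set(sctlr_el2, 0, SCTLR_ELx_DSSBS);
}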

diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
@@ -387,6 +387,22 @@
#define cpu_has_dsp3 __ase(MIPS_ASE_DSP3)
#endif
#ifndef cpu_has_loongson_mmi
#define cpu_has_loongson_mmi __ase(MIPS_ASE_LOONGSON_MMI)
#endif
#ifndef cpu_has_loongson_cam
#define cpu_has_loongson_cam __ase(MIPS_ASE_LOONGSON_CAM)
#endif
#ifndef cpu_has_loongson_ext
#define cpu_has_loongson_ext __ase(MIPS_ASE_LOONGSON_EXT)
#endif
#ifndef cpu_has_loongson_ext2
#define cpu_has_loongson_ext2 __ase(MIPS_ASE_LOONGSON_EXT2)
#endif
#ifndef cpu_has_mipsmt
#define cpu_has_mipsmt __isa_lt_and_ase(6, MIPS_ASE_MIPSMT)
#endif

diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
@@ -436,5 +436,9 @@ enum cpu_type_enum {
#define MIPS_ASE_MSA 0x00000100 /* MIPS SIMD Architecture */
#define MIPS_ASE_DSP3 0x00000200 /* Signal Processing ASE Rev 3*/
#define MIPS_ASE_MIPS16E2 0x00000400 /* MIPS16e2 */
#define MIPS_ASE_LOONGSON_MMI 0x00000800 /* Loongson MultiMedia extensions Instructions */
#define MIPS_ASE_LOONGSON_CAM 0x00001000 /* Loongson CAM */
#define MIPS_ASE_LOONGSON_EXT 0x00002000 /* Loongson EXTensions */
#define MIPS_ASE_LOONGSON_EXT2 0x00004000 /* Loongson EXTensions R2 */
#endif /* _ASM_CPU_H */

diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
@@ -1489,6 +1489,8 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
__cpu_name[cpu] = "ICT Loongson-3";
set_elf_platform(cpu, "loongson3a");
set_isa(c, MIPS_CPU_ISA_M64R1);
c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
MIPS_ASE_LOONGSON_EXT);
break;
case PRID_REV_LOONGSON3B_R1:
case PRID_REV_LOONGSON3B_R2:
@@ -1496,6 +1498,8 @@
__cpu_name[cpu] = "ICT Loongson-3";
set_elf_platform(cpu, "loongson3b");
set_isa(c, MIPS_CPU_ISA_M64R1);
c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
MIPS_ASE_LOONGSON_EXT);
break;
}
@@ -1861,6 +1865,8 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
decode_configs(c);
c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
c->writecombine = _CACHE_UNCACHED_ACCELERATED;
c->ases |= (MIPS_ASE_LOONGSON_MMI | MIPS_ASE_LOONGSON_CAM |
MIPS_ASE_LOONGSON_EXT | MIPS_ASE_LOONGSON_EXT2);
break;
default:
panic("Unknown Loongson Processor ID!");

diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
@@ -124,6 +124,10 @@ static int show_cpuinfo(struct seq_file *m, void *v)
if (cpu_has_eva) seq_printf(m, "%s", " eva");
if (cpu_has_htw) seq_printf(m, "%s", " htw");
if (cpu_has_xpa) seq_printf(m, "%s", " xpa");
if (cpu_has_loongson_mmi) seq_printf(m, "%s", " loongson-mmi");
if (cpu_has_loongson_cam) seq_printf(m, "%s", " loongson-cam");
if (cpu_has_loongson_ext) seq_printf(m, "%s", " loongson-ext");
if (cpu_has_loongson_ext2) seq_printf(m, "%s", " loongson-ext2");
seq_printf(m, "\n");
if (cpu_has_mmips) {

diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
@@ -212,7 +212,7 @@ static inline void cpu_feature_keys_init(void) { }
#define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
#define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
#define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
#define CPU_FTR_P9_TLBIE_BUG LONG_ASM_CONST(0x0000400000000000)
#define CPU_FTR_P9_TLBIE_STQ_BUG LONG_ASM_CONST(0x0000400000000000)
#define CPU_FTR_P9_TIDR LONG_ASM_CONST(0x0000800000000000)
#ifndef __ASSEMBLY__
@@ -460,7 +460,7 @@
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
CPU_FTR_P9_TLBIE_STQ_BUG | CPU_FTR_P9_TIDR)
#define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
#define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
#define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \

diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -694,9 +694,35 @@ static bool __init cpufeatures_process_feature(struct dt_cpu_feature *f)
return true;
}
/*
* Handle POWER9 broadcast tlbie invalidation issue using
* cpu feature flag.
*/
static __init void update_tlbie_feature_flag(unsigned long pvr)
{
if (PVR_VER(pvr) == PVR_POWER9) {
/*
* Set the tlbie feature flag for anything below
* Nimbus DD 2.3 and Cumulus DD 1.3
*/
if ((pvr & 0xe000) == 0) {
/* Nimbus */
if ((pvr & 0xfff) < 0x203)
cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
} else if ((pvr & 0xc000) == 0) {
/* Cumulus */
if ((pvr & 0xfff) < 0x103)
cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
} else {
WARN_ONCE(1, "Unknown PVR");
cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_STQ_BUG;
}
}
}
static __init void cpufeatures_cpu_quirks(void)
{
int version = mfspr(SPRN_PVR);
unsigned long version = mfspr(SPRN_PVR);
/*
* Not all quirks can be derived from the cpufeatures device tree.
@@ -715,10 +741,10 @@
if ((version & 0xffff0000) == 0x004e0000) {
cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
cur_cpu_spec->cpu_features |= CPU_FTR_P9_TIDR;
}
update_tlbie_feature_flag(version);
/*
* PKEY was not in the initial base or feature node
* specification, but it should become optional in the next

diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
@@ -45,6 +45,7 @@ static DEFINE_PER_CPU(struct machine_check_event[MAX_MC_EVT],
mce_ue_event_queue);
static void machine_check_process_queued_event(struct irq_work *work);
static void machine_check_ue_irq_work(struct irq_work *work);
void machine_check_ue_event(struct machine_check_event *evt);
static void machine_process_ue_event(struct work_struct *work);
@@ -52,6 +53,10 @@
.func = machine_check_process_queued_event,
};
static struct irq_work mce_ue_event_irq_work = {
.func = machine_check_ue_irq_work,
};
DECLARE_WORK(mce_ue_event_work, machine_process_ue_event);
static void mce_set_error_info(struct machine_check_event *mce,
@@ -208,6 +213,10 @@ void release_mce_event(void)
get_mce_event(NULL, true);
}
static void machine_check_ue_irq_work(struct irq_work *work)
{
schedule_work(&mce_ue_event_work);
}
/*
* Queue up the MCE event which then can be handled later.
@@ -225,7 +234,7 @@ void machine_check_ue_event(struct machine_check_event *evt)
memcpy(this_cpu_ptr(&mce_ue_event_queue[index]), evt, sizeof(*evt));
/* Queue work to process this event later. */
schedule_work(&mce_ue_event_work);
irq_work_queue(&mce_ue_event_irq_work);
}
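The pattern here: machine-check context is NMI-like, so schedule_work() cannot be called from it directly; the event is bounced through an irq_work whose handler runs in IRQ context and does the schedule_work(). A minimal hedged sketch of the same two-stage deferral, with illustrative names:

#include <linux/irq_work.h>
#include <linux/workqueue.h>

static void my_event_work_fn(struct work_struct *work)
{
	/* heavy processing, allowed to sleep */
}
static DECLARE_WORK(my_event_work, my_event_work_fn);

static void my_event_irq_work_fn(struct irq_work *work)
{
	schedule_work(&my_event_work);	/* IRQ context: safe */
}
static struct irq_work my_event_irq_work = {
	.func = my_event_irq_work_fn,
};

void my_report_event(void)	/* callable from NMI/MCE context */
{
	irq_work_queue(&my_event_irq_work);
}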

diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
@@ -39,6 +39,7 @@
static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
{
pte_t *ptep;
unsigned int shift;
unsigned long flags;
struct mm_struct *mm;
@@ -48,13 +49,18 @@ static unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
mm = &init_mm;
local_irq_save(flags);
if (mm == current->mm)
ptep = find_current_mm_pte(mm->pgd, addr, NULL, NULL);
else
ptep = find_init_mm_pte(addr, NULL);
ptep = __find_linux_pte(mm->pgd, addr, NULL, &shift);
local_irq_restore(flags);
if (!ptep || pte_special(*ptep))
return ULONG_MAX;
if (shift > PAGE_SHIFT) {
unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
return pte_pfn(__pte(pte_val(*ptep) | (addr & rpnmask)));
}
return pte_pfn(*ptep);
}
@@ -339,7 +345,7 @@ static const struct mce_derror_table mce_p9_derror_table[] = {
MCE_INITIATOR_CPU, MCE_SEV_ERROR_SYNC, },
{ 0, false, 0, 0, 0, 0 } };
static int mce_find_instr_ea_and_pfn(struct pt_regs *regs, uint64_t *addr,
static int mce_find_instr_ea_and_phys(struct pt_regs *regs, uint64_t *addr,
uint64_t *phys_addr)
{
/*
@@ -530,7 +536,8 @@ static int mce_handle_derror(struct pt_regs *regs,
* kernel/exception-64s.h
*/
if (get_paca()->in_mce < MAX_MCE_DEPTH)
mce_find_instr_ea_and_pfn(regs, addr, phys_addr);
mce_find_instr_ea_and_phys(regs, addr,
phys_addr);
}
found = 1;
}

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
@@ -1407,7 +1407,14 @@ static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
*val = get_reg_val(id, vcpu->arch.pspb);
break;
case KVM_REG_PPC_DPDES:
*val = get_reg_val(id, vcpu->arch.vcore->dpdes);
/*
* On POWER9, where we are emulating msgsndp etc.,
* we return 1 bit for each vcpu, which can come from
* either vcore->dpdes or doorbell_request.
* On POWER8, doorbell_request is 0.
*/
*val = get_reg_val(id, vcpu->arch.vcore->dpdes |
vcpu->arch.doorbell_request);
break;
case KVM_REG_PPC_VTB:
*val = get_reg_val(id, vcpu->arch.vcore->vtb);
@@ -2550,7 +2557,7 @@ static void collect_piggybacks(struct core_info *cip, int target_threads)
if (!spin_trylock(&pvc->lock))
continue;
prepare_threads(pvc);
if (!pvc->n_runnable) {
if (!pvc->n_runnable || !pvc->kvm->arch.mmu_ready) {
list_del_init(&pvc->preempt_list);
if (pvc->runner == NULL) {
pvc->vcore_state = VCORE_INACTIVE;
@@ -2571,15 +2578,20 @@
spin_unlock(&lp->lock);
}
static bool recheck_signals(struct core_info *cip)
static bool recheck_signals_and_mmu(struct core_info *cip)
{
int sub, i;
struct kvm_vcpu *vcpu;
struct kvmppc_vcore *vc;
for (sub = 0; sub < cip->n_subcores; ++sub)
for_each_runnable_thread(i, vcpu, cip->vc[sub])
for (sub = 0; sub < cip->n_subcores; ++sub) {
vc = cip->vc[sub];
if (!vc->kvm->arch.mmu_ready)
return true;
for_each_runnable_thread(i, vcpu, vc)
if (signal_pending(vcpu->arch.run_task))
return true;
}
return false;
}
@@ -2800,7 +2812,7 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
local_irq_disable();
hard_irq_disable();
if (lazy_irq_pending() || need_resched() ||
recheck_signals(&core_info) || !vc->kvm->arch.mmu_ready) {
recheck_signals_and_mmu(&core_info)) {
local_irq_enable();
vc->vcore_state = VCORE_INACTIVE;
/* Unlock all except the primary vcore */

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -452,7 +452,7 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
"r" (rbvalues[i]), "r" (kvm->arch.lpid));
}
if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
/*
* Need the extra ptesync to make sure we don't
* re-order the tlbie

diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -2903,29 +2903,39 @@
kvm_cede_exit:
ld r9, HSTATE_KVM_VCPU(r13)
#ifdef CONFIG_KVM_XICS
/* Abort if we still have a pending escalation */
lbz r5, VCPU_XIVE_ESC_ON(r9)
cmpwi r5, 0
beq 1f
li r0, 0
stb r0, VCPU_CEDED(r9)
1: /* Enable XIVE escalation */
li r5, XIVE_ESB_SET_PQ_00
mfmsr r0
andi. r0, r0, MSR_DR /* in real mode? */
beq 1f
/* are we using XIVE with single escalation? */
ld r10, VCPU_XIVE_ESC_VADDR(r9)
cmpdi r10, 0
beq 3f
ldx r0, r10, r5
li r6, XIVE_ESB_SET_PQ_00
/*
* If we still have a pending escalation, abort the cede,
* and we must set PQ to 10 rather than 00 so that we don't
* potentially end up with two entries for the escalation
* interrupt in the XIVE interrupt queue. In that case
* we also don't want to set xive_esc_on to 1 here in
* case we race with xive_esc_irq().
*/
lbz r5, VCPU_XIVE_ESC_ON(r9)
cmpwi r5, 0
beq 4f
li r0, 0
stb r0, VCPU_CEDED(r9)
li r6, XIVE_ESB_SET_PQ_10
b 5f
4: li r0, 1
stb r0, VCPU_XIVE_ESC_ON(r9)
/* make sure store to xive_esc_on is seen before xive_esc_irq runs */
sync
5: /* Enable XIVE escalation */
mfmsr r0
andi. r0, r0, MSR_DR /* in real mode? */
beq 1f
ldx r0, r10, r6
b 2f
1: ld r10, VCPU_XIVE_ESC_RADDR(r9)
cmpdi r10, 0
beq 3f
ldcix r0, r10, r5
ldcix r0, r10, r6
2: sync
li r0, 1
stb r0, VCPU_XIVE_ESC_ON(r9)
#endif /* CONFIG_KVM_XICS */
3: b guest_exit_cont

diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
@@ -1037,20 +1037,22 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
/* Mask the VP IPI */
xive_vm_esb_load(&xc->vp_ipi_data, XIVE_ESB_SET_PQ_01);
/* Disable the VP */
xive_native_disable_vp(xc->vp_id);
/* Free the queues & associated interrupts */
/* Free escalations */
for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
struct xive_q *q = &xc->queues[i];
/* Free the escalation irq */
if (xc->esc_virq[i]) {
free_irq(xc->esc_virq[i], vcpu);
irq_dispose_mapping(xc->esc_virq[i]);
kfree(xc->esc_virq_names[i]);
}
/* Free the queue */
}
/* Disable the VP */
xive_native_disable_vp(xc->vp_id);
/* Free the queues */
for (i = 0; i < KVMPPC_XIVE_Q_COUNT; i++) {
struct xive_q *q = &xc->queues[i];
xive_native_disable_queue(xc->vp_id, q, i);
if (q->qpage) {
free_pages((unsigned long)q->qpage,

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
@@ -203,7 +203,7 @@ static inline unsigned long ___tlbie(unsigned long vpn, int psize,
static inline void fixup_tlbie(unsigned long vpn, int psize, int apsize, int ssize)
{
if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
/* Need the extra ptesync to ensure we don't reorder tlbie*/
asm volatile("ptesync": : :"memory");
___tlbie(vpn, psize, apsize, ssize);

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
@@ -37,6 +37,7 @@
#include <linux/context_tracking.h>
#include <linux/libfdt.h>
#include <linux/pkeys.h>
#include <linux/cpu.h>
#include <asm/debugfs.h>
#include <asm/processor.h>
@@ -1891,10 +1892,16 @@ static int hpt_order_get(void *data, u64 *val)
static int hpt_order_set(void *data, u64 val)
{
int ret;
if (!mmu_hash_ops.resize_hpt)
return -ENODEV;
return mmu_hash_ops.resize_hpt(val);
cpus_read_lock();
ret = mmu_hash_ops.resize_hpt(val);
cpus_read_unlock();
return ret;
}
DEFINE_SIMPLE_ATTRIBUTE(fops_hpt_order, hpt_order_get, hpt_order_set, "%llu\n");

diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
@@ -220,7 +220,7 @@ static inline void fixup_tlbie(void)
unsigned long pid = 0;
unsigned long va = ((1UL << 52) - 1);
if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
asm volatile("ptesync": : :"memory");
__tlbie_va(va, pid, mmu_get_ap(MMU_PAGE_64K), RIC_FLUSH_TLB);
}
@@ -230,7 +230,7 @@
{
unsigned long va = ((1UL << 52) - 1);
if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG)) {
if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
asm volatile("ptesync": : :"memory");
__tlbie_lpid_va(va, lpid, mmu_get_ap(MMU_PAGE_64K), RIC_FLUSH_TLB);
}

diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
@@ -680,7 +680,10 @@ static ssize_t symbol_map_read(struct file *fp, struct kobject *kobj,
bin_attr->size);
}
static BIN_ATTR_RO(symbol_map, 0);
static struct bin_attribute symbol_map_attr = {
.attr = {.name = "symbol_map", .mode = 0400},
.read = symbol_map_read
};
static void opal_export_symmap(void)
{
@@ -697,10 +700,10 @@ static void opal_export_symmap(void)
return;
/* Setup attributes */
bin_attr_symbol_map.private = __va(be64_to_cpu(syms[0]));
bin_attr_symbol_map.size = be64_to_cpu(syms[1]);
symbol_map_attr.private = __va(be64_to_cpu(syms[0]));
symbol_map_attr.size = be64_to_cpu(syms[1]);
rc = sysfs_create_bin_file(opal_kobj, &bin_attr_symbol_map);
rc = sysfs_create_bin_file(opal_kobj, &symbol_map_attr);
if (rc)
pr_warn("Error %d creating OPAL symbols file\n", rc);
}

diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -49,6 +49,9 @@ static __be64 *pnv_alloc_tce_level(int nid, unsigned int shift)
return addr;
}
static void pnv_pci_ioda2_table_do_free_pages(__be64 *addr,
unsigned long size, unsigned int levels);
static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
{
__be64 *tmp = user ? tbl->it_userspace : (__be64 *) tbl->it_base;
@@ -58,9 +61,9 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
while (level) {
int n = (idx & mask) >> (level * shift);
unsigned long tce;
unsigned long oldtce, tce = be64_to_cpu(READ_ONCE(tmp[n]));
if (tmp[n] == 0) {
if (!tce) {
__be64 *tmp2;
if (!alloc)
@@ -71,10 +74,15 @@ static __be64 *pnv_tce(struct iommu_table *tbl, bool user, long idx, bool alloc)
if (!tmp2)
return NULL;
tmp[n] = cpu_to_be64(__pa(tmp2) |
TCE_PCI_READ | TCE_PCI_WRITE);
tce = __pa(tmp2) | TCE_PCI_READ | TCE_PCI_WRITE;
oldtce = be64_to_cpu(cmpxchg(&tmp[n], 0,
cpu_to_be64(tce)));
if (oldtce) {
pnv_pci_ioda2_table_do_free_pages(tmp2,
ilog2(tbl->it_level_size) + 3, 1);
tce = oldtce;
}
}
tce = be64_to_cpu(tmp[n]);
tmp = __va(tce & ~(TCE_PCI_READ | TCE_PCI_WRITE));
idx &= ~mask;
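The race fix above is the usual lock-free "allocate, publish with cmpxchg(), free the loser's copy" idiom. A generic hedged sketch of it; the names are illustrative, not from the patch:

static void *install_once(void **slot, void *(*alloc)(void),
			  void (*release)(void *))
{
	void *new = alloc();
	void *old;

	if (!new)
		return NULL;
	old = cmpxchg(slot, NULL, new);	/* returns the prior value */
	if (old) {
		release(new);		/* another CPU published first */
		return old;
	}
	return new;
}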

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
@@ -647,7 +647,10 @@ static int pseries_lpar_resize_hpt_commit(void *data)
return 0;
}
/* Must be called in user context */
/*
* Must be called in process context. The caller must hold the
* cpus_lock.
*/
static int pseries_lpar_resize_hpt(unsigned long shift)
{
struct hpt_resize_state state = {
@@ -699,7 +702,8 @@ static int pseries_lpar_resize_hpt(unsigned long shift)
t1 = ktime_get();
rc = stop_machine(pseries_lpar_resize_hpt_commit, &state, NULL);
rc = stop_machine_cpuslocked(pseries_lpar_resize_hpt_commit,
&state, NULL);
t2 = ktime_get();
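The caller-side contract after this change, sketched with illustrative names: resize_hpt() is now documented to run with cpu_hotplug_lock held (see the hpt_order_set() hunk above), so the inner call must be the _cpuslocked variant; plain stop_machine() would try to take the lock again and deadlock:

#include <linux/cpu.h>
#include <linux/stop_machine.h>

static int resize_under_cpus_lock(cpu_stop_fn_t fn, void *data)
{
	int ret;

	cpus_read_lock();
	ret = stop_machine_cpuslocked(fn, data, NULL);
	cpus_read_unlock();
	return ret;
}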

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
@@ -171,9 +171,13 @@ ENTRY(handle_exception)
move a1, s4 /* scause */
tail do_IRQ
1:
/* Exceptions run with interrupts enabled */
/* Exceptions run with interrupts enabled or disabled
depending on the state of sstatus.SR_SPIE */
andi t0, s1, SR_SPIE
beqz t0, 1f
csrs sstatus, SR_SIE
1:
/* Handle syscalls */
li t0, EXC_SYSCALL
beq s4, t0, handle_syscall

diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
@@ -183,20 +183,30 @@ unsigned long get_wchan(struct task_struct *p)
if (!p || p == current || p->state == TASK_RUNNING || !task_stack_page(p))
return 0;
if (!try_get_task_stack(p))
return 0;
low = task_stack_page(p);
high = (struct stack_frame *) task_pt_regs(p);
sf = (struct stack_frame *) p->thread.ksp;
if (sf <= low || sf > high)
return 0;
if (sf <= low || sf > high) {
return_address = 0;
goto out;
}
for (count = 0; count < 16; count++) {
sf = (struct stack_frame *) sf->back_chain;
if (sf <= low || sf > high)
return 0;
if (sf <= low || sf > high) {
return_address = 0;
goto out;
}
return_address = sf->gprs[8];
if (!in_sched_functions(return_address))
return return_address;
goto out;
}
return 0;
out:
put_task_stack(p);
return return_address;
}
unsigned long arch_align_stack(unsigned long sp)
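The discipline this fix introduces, in a hedged sketch: every successful try_get_task_stack() must be paired with put_task_stack(), including on the early-exit paths that previously returned 0 directly:

#include <linux/sched/task_stack.h>

static unsigned long peek_saved_ksp(struct task_struct *p)
{
	unsigned long ksp;

	if (!try_get_task_stack(p))	/* stack already freed: bail */
		return 0;
	ksp = p->thread.ksp;		/* safe to read while pinned */
	put_task_stack(p);
	return ksp;
}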

diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
@@ -311,6 +311,7 @@ int arch_update_cpu_topology(void)
on_each_cpu(__arch_update_dedicated_flag, NULL, 0);
for_each_online_cpu(cpu) {
dev = get_cpu_device(cpu);
if (dev)
kobject_uevent(&dev->kobj, KOBJ_CHANGE);
}
return rc;

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
@@ -3890,7 +3890,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu,
const u64 supported_flags = KVM_S390_MEMOP_F_INJECT_EXCEPTION
| KVM_S390_MEMOP_F_CHECK_ONLY;
if (mop->flags & ~supported_flags)
if (mop->flags & ~supported_flags || mop->ar >= NUM_ACRS || !mop->size)
return -EINVAL;
if (mop->size > MEM_OP_MAX_SIZE)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
@@ -8801,7 +8801,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu)
/* _system ok, nested_vmx_check_permission has verified cpl=0 */
if (kvm_write_guest_virt_system(vcpu, gva, &field_value,
(is_long_mode(vcpu) ? 8 : 4),
NULL))
&e))
kvm_inject_page_fault(vcpu, &e);
}
@@ -12574,7 +12574,7 @@ static int check_vmentry_prereqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
/* VM-entry exception error code */
if (has_error_code &&
vmcs12->vm_entry_exception_error_code & GENMASK(31, 15))
vmcs12->vm_entry_exception_error_code & GENMASK(31, 16))
return VMXERR_ENTRY_INVALID_CONTROL_FIELD;
/* VM-entry interruption-info field: reserved bits */
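A quick illustration of the two masks involved (hedged, illustrative wrapper only): the tightened check now rejects only bits 31:16 of the injected error code, which are the reserved bits, instead of also rejecting bit 15:

#include <linux/bitops.h>
#include <linux/bug.h>

static inline void vmentry_errcode_mask_check(void)
{
	BUILD_BUG_ON(GENMASK(31, 16) != 0xffff0000U);	/* new mask */
	BUILD_BUG_ON(GENMASK(31, 15) != 0xffff8000U);	/* old, too strict */
}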

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
@@ -791,34 +791,42 @@ int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
}
EXPORT_SYMBOL_GPL(kvm_set_xcr);
static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
if (cr4 & CR4_RESERVED_BITS)
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && (cr4 & X86_CR4_OSXSAVE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP) && (cr4 & X86_CR4_SMEP))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP) && (cr4 & X86_CR4_SMAP))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_FSGSBASE) && (cr4 & X86_CR4_FSGSBASE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_PKU) && (cr4 & X86_CR4_PKE))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
return -EINVAL;
if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
return -EINVAL;
return 0;
}
int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
unsigned long old_cr4 = kvm_read_cr4(vcpu);
unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE;
if (cr4 & CR4_RESERVED_BITS)
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && (cr4 & X86_CR4_OSXSAVE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMEP) && (cr4 & X86_CR4_SMEP))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_SMAP) && (cr4 & X86_CR4_SMAP))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_FSGSBASE) && (cr4 & X86_CR4_FSGSBASE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_PKU) && (cr4 & X86_CR4_PKE))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_LA57) && (cr4 & X86_CR4_LA57))
return 1;
if (!guest_cpuid_has(vcpu, X86_FEATURE_UMIP) && (cr4 & X86_CR4_UMIP))
if (kvm_valid_cr4(vcpu, cr4))
return 1;
if (is_long_mode(vcpu)) {
@ -8237,10 +8245,6 @@ EXPORT_SYMBOL_GPL(kvm_task_switch);
static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
{
if (!guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
(sregs->cr4 & X86_CR4_OSXSAVE))
return -EINVAL;
if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) {
/*
* When EFER.LME and CR0.PG are set, the processor is in
@ -8259,7 +8263,7 @@ static int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
return -EINVAL;
}
return 0;
return kvm_valid_cr4(vcpu, sregs->cr4);
}
static int __set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)


@@ -23,6 +23,7 @@ KCOV_INSTRUMENT := n
PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss
+PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN)

# Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That
# in turn leaves some undefined symbols like __fentry__ in purgatory and not


@@ -95,7 +95,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
    return max(start, end_page);
}

-static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
{
    u8 *addr;

@@ -103,19 +103,21 @@ static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
    addr = skcipher_get_spot(addr, bsize);
    scatterwalk_copychunks(addr, &walk->out, bsize,
                   (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+   return 0;
}

int skcipher_walk_done(struct skcipher_walk *walk, int err)
{
-   unsigned int n; /* bytes processed */
-   bool more;
+   unsigned int n = walk->nbytes;
+   unsigned int nbytes = 0;
-   if (unlikely(err < 0))
+   if (!n)
        goto finish;
-   n = walk->nbytes - err;
-   walk->total -= n;
-   more = (walk->total != 0);
+   if (likely(err >= 0)) {
+       n -= err;
+       nbytes = walk->total - n;
+   }
    if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
                    SKCIPHER_WALK_SLOW |

@@ -131,7 +133,7 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
        memcpy(walk->dst.virt.addr, walk->page, n);
        skcipher_unmap_dst(walk);
    } else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
-       if (err) {
+       if (err > 0) {
            /*
             * Didn't process all bytes. Either the algorithm is
             * broken, or this was the last step and it turned out

@@ -139,27 +141,29 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err)
             * the algorithm requires it.
             */
            err = -EINVAL;
-           goto finish;
-       }
-       skcipher_done_slow(walk, n);
-       goto already_advanced;
+           nbytes = 0;
+       } else
+           n = skcipher_done_slow(walk, n);
    }
+   if (err > 0)
+       err = 0;
+   walk->total = nbytes;
+   walk->nbytes = 0;
    scatterwalk_advance(&walk->in, n);
    scatterwalk_advance(&walk->out, n);
-already_advanced:
-   scatterwalk_done(&walk->in, 0, more);
-   scatterwalk_done(&walk->out, 1, more);
+   scatterwalk_done(&walk->in, 0, nbytes);
+   scatterwalk_done(&walk->out, 1, nbytes);
-   if (more) {
+   if (nbytes) {
        crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
                 CRYPTO_TFM_REQ_MAY_SLEEP : 0);
        return skcipher_walk_next(walk);
    }
-   err = 0;
-finish:
-   walk->nbytes = 0;
+finish:
    /* Short-circuit for the common/fast path. */
    if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
        goto out;


@@ -106,6 +106,7 @@ struct nbd_device {
    struct nbd_config *config;
    struct mutex config_lock;
    struct gendisk *disk;
+   struct workqueue_struct *recv_workq;

    struct list_head list;
    struct task_struct *task_recv;

@@ -132,9 +133,10 @@ static struct dentry *nbd_dbg_dir;
#define NBD_MAGIC 0x68797548
+#define NBD_DEF_BLKSIZE 1024

static unsigned int nbds_max = 16;
static int max_part = 16;
-static struct workqueue_struct *recv_workqueue;
static int part_shift;

static int nbd_dev_dbg_init(struct nbd_device *nbd);

@@ -1025,7 +1027,7 @@ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
    /* We take the tx_mutex in an error path in the recv_work, so we
     * need to queue_work outside of the tx_mutex.
     */
-   queue_work(recv_workqueue, &args->work);
+   queue_work(nbd->recv_workq, &args->work);
    atomic_inc(&config->live_connections);
    wake_up(&config->conn_wait);

@@ -1126,6 +1128,10 @@ static void nbd_config_put(struct nbd_device *nbd)
        kfree(nbd->config);
        nbd->config = NULL;
+       if (nbd->recv_workq)
+           destroy_workqueue(nbd->recv_workq);
+       nbd->recv_workq = NULL;
+
        nbd->tag_set.timeout = 0;
        nbd->disk->queue->limits.discard_granularity = 0;
        nbd->disk->queue->limits.discard_alignment = 0;

@@ -1154,6 +1160,14 @@ static int nbd_start_device(struct nbd_device *nbd)
        return -EINVAL;
    }
+   nbd->recv_workq = alloc_workqueue("knbd%d-recv",
+                     WQ_MEM_RECLAIM | WQ_HIGHPRI |
+                     WQ_UNBOUND, 0, nbd->index);
+   if (!nbd->recv_workq) {
+       dev_err(disk_to_dev(nbd->disk), "Could not allocate knbd recv work queue.\n");
+       return -ENOMEM;
+   }
+
    blk_mq_update_nr_hw_queues(&nbd->tag_set, config->num_connections);
    nbd->task_recv = current;

@@ -1184,7 +1198,7 @@ static int nbd_start_device(struct nbd_device *nbd)
        INIT_WORK(&args->work, recv_work);
        args->nbd = nbd;
        args->index = i;
-       queue_work(recv_workqueue, &args->work);
+       queue_work(nbd->recv_workq, &args->work);
    }
    nbd_size_update(nbd);
    return error;

@@ -1204,8 +1218,10 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
    mutex_unlock(&nbd->config_lock);
    ret = wait_event_interruptible(config->recv_wq,
                       atomic_read(&config->recv_threads) == 0);
-   if (ret)
+   if (ret) {
        sock_shutdown(nbd);
+       flush_workqueue(nbd->recv_workq);
+   }
    mutex_lock(&nbd->config_lock);
    nbd_bdev_reset(bdev);
    /* user requested, ignore socket errors */

@@ -1227,6 +1243,14 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
    nbd_config_put(nbd);
}

+static bool nbd_is_valid_blksize(unsigned long blksize)
+{
+   if (!blksize || !is_power_of_2(blksize) || blksize < 512 ||
+       blksize > PAGE_SIZE)
+       return false;
+   return true;
+}
+
/* Must be called with config_lock held */
static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
               unsigned int cmd, unsigned long arg)

@@ -1242,8 +1266,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
    case NBD_SET_SOCK:
        return nbd_add_socket(nbd, arg, false);
    case NBD_SET_BLKSIZE:
-       if (!arg || !is_power_of_2(arg) || arg < 512 ||
-           arg > PAGE_SIZE)
+       if (!arg)
+           arg = NBD_DEF_BLKSIZE;
+       if (!nbd_is_valid_blksize(arg))
            return -EINVAL;
        nbd_size_set(nbd, arg,
                 div_s64(config->bytesize, arg));

@@ -1323,7 +1348,7 @@ static struct nbd_config *nbd_alloc_config(void)
    atomic_set(&config->recv_threads, 0);
    init_waitqueue_head(&config->recv_wq);
    init_waitqueue_head(&config->conn_wait);
-   config->blksize = 1024;
+   config->blksize = NBD_DEF_BLKSIZE;
    atomic_set(&config->live_connections, 0);
    try_module_get(THIS_MODULE);
    return config;

@@ -1759,6 +1784,12 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info)
    if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]) {
        u64 bsize =
            nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
+       if (!bsize)
+           bsize = NBD_DEF_BLKSIZE;
+       if (!nbd_is_valid_blksize(bsize)) {
+           ret = -EINVAL;
+           goto out;
+       }
        nbd_size_set(nbd, bsize, div64_u64(config->bytesize, bsize));
    }
    if (info->attrs[NBD_ATTR_TIMEOUT]) {

@@ -1835,6 +1866,12 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
    nbd_disconnect(nbd);
    nbd_clear_sock(nbd);
    mutex_unlock(&nbd->config_lock);
+   /*
+    * Make sure recv thread has finished, so it does not drop the last
+    * config ref and try to destroy the workqueue from inside the work
+    * queue.
+    */
+   flush_workqueue(nbd->recv_workq);
    if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
                   &nbd->config->runtime_flags))
        nbd_config_put(nbd);

@@ -2215,20 +2252,12 @@ static int __init nbd_init(void)
    if (nbds_max > 1UL << (MINORBITS - part_shift))
        return -EINVAL;
-   recv_workqueue = alloc_workqueue("knbd-recv",
-                    WQ_MEM_RECLAIM | WQ_HIGHPRI |
-                    WQ_UNBOUND, 0);
-   if (!recv_workqueue)
-       return -ENOMEM;
-   if (register_blkdev(NBD_MAJOR, "nbd")) {
-       destroy_workqueue(recv_workqueue);
+   if (register_blkdev(NBD_MAJOR, "nbd"))
        return -EIO;
-   }
    if (genl_register_family(&nbd_genl_family)) {
        unregister_blkdev(NBD_MAJOR, "nbd");
-       destroy_workqueue(recv_workqueue);
        return -EINVAL;
    }
    nbd_dbg_init();

@@ -2270,7 +2299,6 @@ static void __exit nbd_cleanup(void)
    idr_destroy(&nbd_index_idr);
    genl_unregister_family(&nbd_genl_family);
-   destroy_workqueue(recv_workqueue);
    unregister_blkdev(NBD_MAJOR, "nbd");
}
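The nbd hunks above move from one global receive workqueue to a per-device workqueue, and flush it before the last config reference can drop so the queue is never destroyed from inside its own work item. A minimal sketch of that lifecycle pattern — the "mydev" driver below is hypothetical, not the nbd code:

/* Sketch only: per-device workqueue lifecycle (hypothetical mydev). */
#include <linux/workqueue.h>

struct mydev {
    int index;
    struct workqueue_struct *recv_workq;
    struct work_struct recv_work;
};

static void mydev_recv(struct work_struct *work)
{
    /* ... receive loop would live here ... */
}

static int mydev_start(struct mydev *d)
{
    d->recv_workq = alloc_workqueue("mydev%d-recv",
                    WQ_MEM_RECLAIM | WQ_UNBOUND, 0,
                    d->index);
    if (!d->recv_workq)
        return -ENOMEM;
    INIT_WORK(&d->recv_work, mydev_recv);
    queue_work(d->recv_workq, &d->recv_work);
    return 0;
}

static void mydev_stop(struct mydev *d)
{
    /*
     * Flush before teardown so a still-running work item can never
     * end up destroying the workqueue it is executing on.
     */
    flush_workqueue(d->recv_workq);
    destroy_workqueue(d->recv_workq);
    d->recv_workq = NULL;
}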


@@ -509,6 +509,7 @@ void cnstr_shdsc_aead_givencap(u32 * const desc, struct alginfo *cdata,
               const bool is_qi, int era)
{
    u32 geniv, moveiv;
+   u32 *wait_cmd;

    /* Note: Context registers are saved. */
    init_sh_desc_key_aead(desc, cdata, adata, is_rfc3686, nonce, era);

@@ -604,6 +605,14 @@ void cnstr_shdsc_aead_givencap(u32 * const desc, struct alginfo *cdata,
    /* Will read cryptlen */
    append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ);
+
+   /*
+    * Wait for IV transfer (ofifo -> class2) to finish before starting
+    * ciphertext transfer (ofifo -> external memory).
+    */
+   wait_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL | JUMP_COND_NIFP);
+   set_jump_tgt_here(desc, wait_cmd);
+
    append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH | KEY_VLF |
                 FIFOLD_TYPE_MSG1OUT2 | FIFOLD_TYPE_LASTBOTH);
    append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF);


@@ -12,7 +12,7 @@
#define DESC_AEAD_BASE (4 * CAAM_CMD_SZ)
#define DESC_AEAD_ENC_LEN (DESC_AEAD_BASE + 11 * CAAM_CMD_SZ)
#define DESC_AEAD_DEC_LEN (DESC_AEAD_BASE + 15 * CAAM_CMD_SZ)
-#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 7 * CAAM_CMD_SZ)
+#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 8 * CAAM_CMD_SZ)
#define DESC_QI_AEAD_ENC_LEN (DESC_AEAD_ENC_LEN + 3 * CAAM_CMD_SZ)
#define DESC_QI_AEAD_DEC_LEN (DESC_AEAD_DEC_LEN + 3 * CAAM_CMD_SZ)
#define DESC_QI_AEAD_GIVENC_LEN (DESC_AEAD_GIVENC_LEN + 3 * CAAM_CMD_SZ)


@@ -593,6 +593,7 @@ static const struct file_operations zip_stats_fops = {
    .owner = THIS_MODULE,
    .open = zip_stats_open,
    .read = seq_read,
+   .release = single_release,
};

static int zip_clear_open(struct inode *inode, struct file *file)

@@ -604,6 +605,7 @@ static const struct file_operations zip_clear_fops = {
    .owner = THIS_MODULE,
    .open = zip_clear_open,
    .read = seq_read,
+   .release = single_release,
};

static int zip_regs_open(struct inode *inode, struct file *file)

@@ -615,6 +617,7 @@ static const struct file_operations zip_regs_fops = {
    .owner = THIS_MODULE,
    .open = zip_regs_open,
    .read = seq_read,
+   .release = single_release,
};

/* Root directory for thunderx_zip debugfs entry */


@@ -227,7 +227,7 @@ static void cc_aead_complete(struct device *dev, void *cc_req, int err)
            /* In case of payload authentication failure, MUST NOT
             * revealed the decrypted message --> zero its memory.
             */
-           cc_zero_sgl(areq->dst, areq_ctx->cryptlen);
+           cc_zero_sgl(areq->dst, areq->cryptlen);
            err = -EBADMSG;
        }
    } else { /*ENCRYPT*/


@@ -21,7 +21,13 @@ static bool cc_get_tee_fips_status(struct cc_drvdata *drvdata)
    u32 reg;

    reg = cc_ioread(drvdata, CC_REG(GPR_HOST));
-   return (reg == (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK));
+   /* Did the TEE report status? */
+   if (reg & CC_FIPS_SYNC_TEE_STATUS)
+       /* Yes. Is it OK? */
+       return (reg & CC_FIPS_SYNC_MODULE_OK);
+   /* No. It's either not in use or will be reported later */
+   return true;
}

/*
/*


@@ -95,7 +95,7 @@ struct service_hndl {

static inline int get_current_node(void)
{
-   return topology_physical_package_id(smp_processor_id());
+   return topology_physical_package_id(raw_smp_processor_id());
}

int adf_service_register(struct service_hndl *service);


@@ -486,11 +486,11 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
{
    struct tegra_devfreq *tegra = dev_get_drvdata(dev);
    struct dev_pm_opp *opp;
-   unsigned long rate = *freq * KHZ;
+   unsigned long rate;

-   opp = devfreq_recommended_opp(dev, &rate, flags);
+   opp = devfreq_recommended_opp(dev, freq, flags);
    if (IS_ERR(opp)) {
-       dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
+       dev_err(dev, "Failed to find opp for %lu Hz\n", *freq);
        return PTR_ERR(opp);
    }
    rate = dev_pm_opp_get_freq(opp);

@@ -499,8 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
    clk_set_min_rate(tegra->emc_clock, rate);
    clk_set_rate(tegra->emc_clock, 0);
-   *freq = rate;
    return 0;
}

@@ -510,7 +508,7 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
    struct tegra_devfreq *tegra = dev_get_drvdata(dev);
    struct tegra_devfreq_device *actmon_dev;
-   stat->current_frequency = tegra->cur_freq;
+   stat->current_frequency = tegra->cur_freq * KHZ;

    /* To be used by the tegra governor */
    stat->private_data = tegra;

@@ -565,7 +563,7 @@ static int tegra_governor_get_target(struct devfreq *devfreq,
        target_freq = max(target_freq, dev->target_freq);
    }
-   *freq = target_freq;
+   *freq = target_freq * KHZ;
    return 0;
}
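Every conversion in the tegra hunks follows one rule: the devfreq core speaks Hz while this driver's internal bookkeeping stays in kHz, so values are multiplied by KHZ only where they cross the devfreq boundary and the OPP lookup now consumes the Hz value directly. Sketched in isolation, with hypothetical helper names:

/* Illustrative only: kHz inside the driver, Hz at the devfreq boundary. */
#define KHZ 1000ul

static unsigned long report_to_devfreq(unsigned long cur_freq_khz)
{
    return cur_freq_khz * KHZ;  /* e.g. 204000 kHz -> 204000000 Hz */
}

static unsigned long accept_from_devfreq(unsigned long freq_hz)
{
    return freq_hz / KHZ;       /* e.g. 204000000 Hz -> 204000 kHz */
}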


@@ -139,7 +139,8 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
    /* ring tests don't use a job */
    if (job) {
        vm = job->vm;
-       fence_ctx = job->base.s_fence->scheduled.context;
+       fence_ctx = job->base.s_fence ?
+           job->base.s_fence->scheduled.context : 0;
    } else {
        vm = NULL;
        fence_ctx = 0;


@@ -562,6 +562,9 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
        if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
            sh_num = 0xffffffff;
+       if (info->read_mmr_reg.count > 128)
+           return -EINVAL;
+
        regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL);
        if (!regs)
            return -ENOMEM;


@@ -1276,9 +1276,6 @@ static int prepare_mm(struct intel_vgpu_workload *workload)
#define same_context(a, b) (((a)->context_id == (b)->context_id) && \
        ((a)->lrca == (b)->lrca))

-#define get_last_workload(q) \
-   (list_empty(q) ? NULL : container_of(q->prev, \
-   struct intel_vgpu_workload, list))

/**
 * intel_vgpu_create_workload - create a vGPU workload
 * @vgpu: a vGPU

@@ -1297,7 +1294,7 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
{
    struct intel_vgpu_submission *s = &vgpu->submission;
    struct list_head *q = workload_q_head(vgpu, ring_id);
-   struct intel_vgpu_workload *last_workload = get_last_workload(q);
+   struct intel_vgpu_workload *last_workload = NULL;
    struct intel_vgpu_workload *workload = NULL;
    struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
    u64 ring_context_gpa;

@@ -1320,8 +1317,11 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
    head &= RB_HEAD_OFF_MASK;
    tail &= RB_TAIL_OFF_MASK;

-   if (last_workload && same_context(&last_workload->ctx_desc, desc)) {
-       gvt_dbg_el("ring id %d cur workload == last\n", ring_id);
+   list_for_each_entry_reverse(last_workload, q, list) {
+       if (same_context(&last_workload->ctx_desc, desc)) {
+           gvt_dbg_el("ring id %d cur workload == last\n",
+                   ring_id);
            gvt_dbg_el("ctx head %x real head %lx\n", head,
                    last_workload->rb_tail);
            /*

@@ -1329,6 +1329,8 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
             * as it might not be updated at this time
             */
            head = last_workload->rb_tail;
+           break;
+       }
    }

    gvt_dbg_el("ring id %d begin a new workload\n", ring_id);


@@ -429,15 +429,15 @@ static int dsi_clk_init(struct msm_dsi_host *msm_host)
    }

    msm_host->byte_clk_src = clk_get_parent(msm_host->byte_clk);
-   if (!msm_host->byte_clk_src) {
-       ret = -ENODEV;
+   if (IS_ERR(msm_host->byte_clk_src)) {
+       ret = PTR_ERR(msm_host->byte_clk_src);
        pr_err("%s: can't find byte_clk clock. ret=%d\n", __func__, ret);
        goto exit;
    }

    msm_host->pixel_clk_src = clk_get_parent(msm_host->pixel_clk);
-   if (!msm_host->pixel_clk_src) {
-       ret = -ENODEV;
+   if (IS_ERR(msm_host->pixel_clk_src)) {
+       ret = PTR_ERR(msm_host->pixel_clk_src);
        pr_err("%s: can't find pixel_clk clock. ret=%d\n", __func__, ret);
        goto exit;
    }


@@ -1517,7 @@ nv50_sor_create(struct drm_connector *connector, struct dcb_output *dcbe)
        nv_encoder->aux = aux;
    }

-   if ((data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
+   if (nv_connector->type != DCB_CONNECTOR_eDP &&
+       (data = nvbios_dp_table(bios, &ver, &hdr, &cnt, &len)) &&
        ver >= 0x40 && (nvbios_rd08(bios, data + 0x08) & 0x04)) {
        ret = nv50_mstm_new(nv_encoder, &nv_connector->aux, 16,
                    nv_connector->base.base.id,


@@ -1110,7 +1110,7 @@ static const struct dss_features omap34xx_dss_feats = {
static const struct dss_features omap3630_dss_feats = {
    .model = DSS_MODEL_OMAP3,
-   .fck_div_max = 32,
+   .fck_div_max = 31,
    .fck_freq_max = 173000000,
    .dss_fck_multiplier = 1,
    .parent_clk_name = "dpll4_ck",


@@ -340,8 +340,39 @@ static int radeon_kick_out_firmware_fb(struct pci_dev *pdev)
static int radeon_pci_probe(struct pci_dev *pdev,
                const struct pci_device_id *ent)
{
+   unsigned long flags = 0;
    int ret;

+   if (!ent)
+       return -ENODEV; /* Avoid NULL-ptr deref in drm_get_pci_dev */
+
+   flags = ent->driver_data;
+
+   if (!radeon_si_support) {
+       switch (flags & RADEON_FAMILY_MASK) {
+       case CHIP_TAHITI:
+       case CHIP_PITCAIRN:
+       case CHIP_VERDE:
+       case CHIP_OLAND:
+       case CHIP_HAINAN:
+           dev_info(&pdev->dev,
+                "SI support disabled by module param\n");
+           return -ENODEV;
+       }
+   }
+   if (!radeon_cik_support) {
+       switch (flags & RADEON_FAMILY_MASK) {
+       case CHIP_KAVERI:
+       case CHIP_BONAIRE:
+       case CHIP_HAWAII:
+       case CHIP_KABINI:
+       case CHIP_MULLINS:
+           dev_info(&pdev->dev,
+                "CIK support disabled by module param\n");
+           return -ENODEV;
+       }
+   }
+
    if (vga_switcheroo_client_probe_defer(pdev))
        return -EPROBE_DEFER;


@@ -95,31 +95,6 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
    struct radeon_device *rdev;
    int r, acpi_status;

-   if (!radeon_si_support) {
-       switch (flags & RADEON_FAMILY_MASK) {
-       case CHIP_TAHITI:
-       case CHIP_PITCAIRN:
-       case CHIP_VERDE:
-       case CHIP_OLAND:
-       case CHIP_HAINAN:
-           dev_info(dev->dev,
-                "SI support disabled by module param\n");
-           return -ENODEV;
-       }
-   }
-   if (!radeon_cik_support) {
-       switch (flags & RADEON_FAMILY_MASK) {
-       case CHIP_KAVERI:
-       case CHIP_BONAIRE:
-       case CHIP_HAWAII:
-       case CHIP_KABINI:
-       case CHIP_MULLINS:
-           dev_info(dev->dev,
-                "CIK support disabled by module param\n");
-           return -ENODEV;
-       }
-   }
-
    rdev = kzalloc(sizeof(struct radeon_device), GFP_KERNEL);
    if (rdev == NULL) {
        return -ENOMEM;


@@ -174,6 +174,12 @@ static void etm4_enable_hw(void *info)
    if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 0))
        dev_err(drvdata->dev,
            "timeout while waiting for Idle Trace Status\n");
+   /*
+    * As recommended by section 4.3.7 ("Synchronization when using the
+    * memory-mapped interface") of ARM IHI 0064D
+    */
+   dsb(sy);
+   isb();

    CS_LOCK(drvdata->base);

@@ -324,8 +330,12 @@ static void etm4_disable_hw(void *info)
    /* EN, bit[0] Trace unit enable bit */
    control &= ~0x1;

-   /* make sure everything completes before disabling */
-   mb();
+   /*
+    * Make sure everything completes before disabling, as recommended
+    * by section 7.3.77 ("TRCVICTLR, ViewInst Main Control Register,
+    * SSTATUS") of ARM IHI 0064D
+    */
+   dsb(sy);
    isb();
    writel_relaxed(control, drvdata->base + TRCPRGCTLR);


@@ -480,7 +480,12 @@ static int esdhc_of_enable_dma(struct sdhci_host *host)
        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));

    value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+
+   if (of_dma_is_coherent(dev->of_node))
        value |= ESDHC_DMA_SNOOP;
+   else
+       value &= ~ESDHC_DMA_SNOOP;
+
    sdhci_writel(host, value, ESDHC_DMA_SYSCTL);
    return 0;
}


@@ -2720,6 +2720,7 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask, u32 *intmask_p)
static void sdhci_adma_show_error(struct sdhci_host *host)
{
    void *desc = host->adma_table;
+   dma_addr_t dma = host->adma_addr;

    sdhci_dumpregs(host);

@@ -2727,18 +2728,21 @@ static void sdhci_adma_show_error(struct sdhci_host *host)
        struct sdhci_adma2_64_desc *dma_desc = desc;

        if (host->flags & SDHCI_USE_64_BIT_DMA)
-           DBG("%p: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
-               desc, le32_to_cpu(dma_desc->addr_hi),
+           SDHCI_DUMP("%08llx: DMA 0x%08x%08x, LEN 0x%04x, Attr=0x%02x\n",
+               (unsigned long long)dma,
+               le32_to_cpu(dma_desc->addr_hi),
                le32_to_cpu(dma_desc->addr_lo),
                le16_to_cpu(dma_desc->len),
                le16_to_cpu(dma_desc->cmd));
        else
-           DBG("%p: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
-               desc, le32_to_cpu(dma_desc->addr_lo),
+           SDHCI_DUMP("%08llx: DMA 0x%08x, LEN 0x%04x, Attr=0x%02x\n",
+               (unsigned long long)dma,
+               le32_to_cpu(dma_desc->addr_lo),
                le16_to_cpu(dma_desc->len),
                le16_to_cpu(dma_desc->cmd));

        desc += host->desc_sz;
+       dma += host->desc_sz;

        if (dma_desc->cmd & cpu_to_le16(ADMA2_END))
            break;

@@ -2814,7 +2818,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
            != MMC_BUS_TEST_R)
        host->data->error = -EILSEQ;
    else if (intmask & SDHCI_INT_ADMA_ERROR) {
-       pr_err("%s: ADMA error\n", mmc_hostname(host->mmc));
+       pr_err("%s: ADMA error: 0x%08x\n", mmc_hostname(host->mmc),
+              intmask);
        sdhci_adma_show_error(host);
        host->data->error = -EIO;
        if (host->ops->adma_workaround)


@@ -626,7 +626,7 @@ static int mcp251x_setup(struct net_device *net, struct spi_device *spi)
static int mcp251x_hw_reset(struct spi_device *spi)
{
    struct mcp251x_priv *priv = spi_get_drvdata(spi);
-   u8 reg;
+   unsigned long timeout;
    int ret;

    /* Wait for oscillator startup timer after power up */

@@ -640,10 +640,19 @@ static int mcp251x_hw_reset(struct spi_device *spi)
    /* Wait for oscillator startup timer after reset */
    mdelay(MCP251X_OST_DELAY_MS);

-   reg = mcp251x_read_reg(spi, CANSTAT);
-   if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF)
-       return -ENODEV;
+   /* Wait for reset to finish */
+   timeout = jiffies + HZ;
+   while ((mcp251x_read_reg(spi, CANSTAT) & CANCTRL_REQOP_MASK) !=
+          CANCTRL_REQOP_CONF) {
+       usleep_range(MCP251X_OST_DELAY_MS * 1000,
+                MCP251X_OST_DELAY_MS * 1000 * 2);
+       if (time_after(jiffies, timeout)) {
+           dev_err(&spi->dev,
+               "MCP251x didn't enter in conf mode after reset\n");
+           return -EBUSY;
+       }
+   }
    return 0;
}
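The reset fix above swaps a single post-delay CANSTAT check for a bounded poll loop. Its general shape, with a hypothetical device_ready() standing in for the device-specific register read (time_after() is wrap-safe on jiffies):

/* Sketch only: poll until ready, giving up after ~1 s. */
#include <linux/jiffies.h>
#include <linux/delay.h>

static bool device_ready(void);     /* stand-in for the CANSTAT read */

static int wait_for_conf_mode(void)
{
    unsigned long timeout = jiffies + HZ;

    while (!device_ready()) {
        usleep_range(3000, 6000);   /* sleep, don't busy-wait */
        if (time_after(jiffies, timeout))
            return -EBUSY;
    }
    return 0;
}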


@@ -272,6 +272,7 @@ nfp_flower_spawn_vnic_reprs(struct nfp_app *app,
        port = nfp_port_alloc(app, port_type, repr);
        if (IS_ERR(port)) {
            err = PTR_ERR(port);
+           kfree(repr_priv);
            nfp_repr_free(repr);
            goto err_reprs_clean;
        }


@@ -1140,10 +1140,11 @@ static void atusb_disconnect(struct usb_interface *interface)
    ieee802154_unregister_hw(atusb->hw);
+   usb_put_dev(atusb->usb_dev);
    ieee802154_free_hw(atusb->hw);
    usb_set_intfdata(interface, NULL);
-   usb_put_dev(atusb->usb_dev);

    pr_debug("%s done\n", __func__);
}


@@ -1373,7 +1373,7 @@ static int perf_setup_peer_mw(struct perf_peer *peer)
    int ret;

    /* Get outbound MW parameters and map it */
-   ret = ntb_peer_mw_get_addr(perf->ntb, peer->gidx, &phys_addr,
+   ret = ntb_peer_mw_get_addr(perf->ntb, perf->gidx, &phys_addr,
                   &peer->outbuf_size);
    if (ret)
        return ret;


@@ -189,7 +189,7 @@ static int nvdimm_clear_badblocks_region(struct device *dev, void *data)
    sector_t sector;

    /* make sure device is a region */
-   if (!is_nd_pmem(dev))
+   if (!is_memory(dev))
        return 0;

    nd_region = to_nd_region(dev);

@@ -42,7 +42,7 @@ static int nd_region_probe(struct device *dev)
    if (rc)
        return rc;

-   if (is_nd_pmem(&nd_region->dev)) {
+   if (is_memory(&nd_region->dev)) {
        struct resource ndr_res;

        if (devm_init_badblocks(dev, &nd_region->bb))

@@ -131,7 +131,7 @@ static void nd_region_notify(struct device *dev, enum nvdimm_event event)
        struct nd_region *nd_region = to_nd_region(dev);
        struct resource res;

-       if (is_nd_pmem(&nd_region->dev)) {
+       if (is_memory(&nd_region->dev)) {
            res.start = nd_region->ndr_start;
            res.end = nd_region->ndr_start +
                nd_region->ndr_size - 1;


@@ -633,11 +633,11 @@ static umode_t region_visible(struct kobject *kobj, struct attribute *a, int n)
    if (!is_memory(dev) && a == &dev_attr_dax_seed.attr)
        return 0;

-   if (!is_nd_pmem(dev) && a == &dev_attr_badblocks.attr)
+   if (!is_memory(dev) && a == &dev_attr_badblocks.attr)
        return 0;

    if (a == &dev_attr_resource.attr) {
-       if (is_nd_pmem(dev))
+       if (is_memory(dev))
            return 0400;
        else
            return 0;


@@ -31,6 +31,9 @@
#define PCI_REG_VMLOCK 0x70
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
+#define MB2_SHADOW_OFFSET 0x2000
+#define MB2_SHADOW_SIZE 16

enum vmd_features {
    /*
     * Device may contain registers which hint the physical location of the

@@ -600,7 +603,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
        u32 vmlock;
        int ret;

-       membar2_offset = 0x2018;
+       membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE;
        ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock);
        if (ret || vmlock == ~0)
            return -ENODEV;

@@ -612,9 +615,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
            if (!membar2)
                return -ENOMEM;
            offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
-                   readq(membar2 + 0x2008);
+                   readq(membar2 + MB2_SHADOW_OFFSET);
            offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
-                   readq(membar2 + 0x2010);
+                   readq(membar2 + MB2_SHADOW_OFFSET + 8);
            pci_iounmap(vmd->dev, membar2);
        }
    }


@@ -1366,7 +1366,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)
        pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
        bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
        res = pdev->resource + bar_idx;
-       size = order_base_2((resource_size(res) >> 20) | 1) - 1;
+       size = ilog2(resource_size(res)) - 20;
        ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
        ctrl |= size << PCI_REBAR_CTRL_BAR_SHIFT;
        pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
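The resizable-BAR size encoding being restored here is size = log2(bytes) - 20: 0 for a 1 MB BAR, 1 for 2 MB, and so on. The old expression, order_base_2((resource_size(res) >> 20) | 1) - 1, collapses to -1 for a 1 MB BAR, which is exactly the case this fix corrects. A quick userspace check of both formulas, with stand-in implementations of the kernel's ilog2()/order_base_2():

#include <stdio.h>

/* floor(log2(n)) and ceil(log2(n)) stand-ins for ilog2()/order_base_2(). */
static int ilog2_u64(unsigned long long n)
{
    int i = -1;
    while (n) { n >>= 1; i++; }
    return i;
}
static int order_base_2_u64(unsigned long long n)
{
    return (n <= 1) ? 0 : ilog2_u64(n - 1) + 1;
}

int main(void)
{
    unsigned long long sizes[] = { 1ULL << 20, 1ULL << 21, 1ULL << 28 };
    for (int i = 0; i < 3; i++) {
        unsigned long long sz = sizes[i];
        int old_enc = order_base_2_u64((sz >> 20) | 1) - 1; /* -1 for 1 MB */
        int new_enc = ilog2_u64(sz) - 20;                   /*  0 for 1 MB */
        printf("%llu MB: old=%d new=%d\n", sz >> 20, old_enc, new_enc);
    }
    return 0;
}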


@@ -323,17 +323,22 @@ static int sbs_get_battery_presence_and_health(
{
    int ret;

-   if (psp == POWER_SUPPLY_PROP_PRESENT) {
    /* Dummy command; if it succeeds, battery is present. */
    ret = sbs_read_word_data(client, sbs_data[REG_STATUS].addr);
-   if (ret < 0)
-       val->intval = 0; /* battery disconnected */
-   else
+   if (ret < 0) { /* battery not present*/
+       if (psp == POWER_SUPPLY_PROP_PRESENT) {
+           val->intval = 0;
+           return 0;
+       }
+       return ret;
+   }
+   if (psp == POWER_SUPPLY_PROP_PRESENT)
        val->intval = 1; /* battery present */
-   } else { /* POWER_SUPPLY_PROP_HEALTH */
+   else /* POWER_SUPPLY_PROP_HEALTH */
        /* SBS spec doesn't have a general health command. */
        val->intval = POWER_SUPPLY_HEALTH_UNKNOWN;
-   }

    return 0;
}

@@ -629,12 +634,14 @@ static int sbs_get_property(struct power_supply *psy,
    switch (psp) {
    case POWER_SUPPLY_PROP_PRESENT:
    case POWER_SUPPLY_PROP_HEALTH:
-       if (client->flags & SBS_FLAGS_TI_BQ20Z75)
+       if (chip->flags & SBS_FLAGS_TI_BQ20Z75)
            ret = sbs_get_ti_battery_presence_and_health(client,
                                 psp, val);
        else
            ret = sbs_get_battery_presence_and_health(client, psp,
                                  val);
+       /* this can only be true if no gpio is used */
+       if (psp == POWER_SUPPLY_PROP_PRESENT)
+           return 0;
        break;

@@ -58,6 +58,12 @@ static int stm32_pwm_lp_apply(struct pwm_chip *chip, struct pwm_device *pwm,
    /* Calculate the period and prescaler value */
    div = (unsigned long long)clk_get_rate(priv->clk) * state->period;
    do_div(div, NSEC_PER_SEC);
+   if (!div) {
+       /* Clock is too slow to achieve requested period. */
+       dev_dbg(priv->chip.dev, "Can't reach %u ns\n", state->period);
+       return -EINVAL;
+   }
+
    prd = div;
    while (div > STM32_LPTIM_MAX_ARR) {
        presc++;


@@ -372,7 +372,7 @@ int ccwgroup_create_dev(struct device *parent, struct ccwgroup_driver *gdrv,
        goto error;
    }
    /* Check for trailing stuff. */
-   if (i == num_devices && strlen(buf) > 0) {
+   if (i == num_devices && buf && strlen(buf) > 0) {
        rc = -EINVAL;
        goto error;
    }


@@ -1213,6 +1213,8 @@ device_initcall(cio_settle_init);

int sch_is_pseudo_sch(struct subchannel *sch)
{
+   if (!sch->dev.parent)
+       return 0;
    return sch == to_css(sch->dev.parent)->pseudo_subchannel;
}


@@ -100,8 +100,15 @@ static int erofs_readdir(struct file *f, struct dir_context *ctx)
        unsigned nameoff, maxsize;

        dentry_page = read_mapping_page(mapping, i, NULL);
-       if (IS_ERR(dentry_page))
-           continue;
+       if (dentry_page == ERR_PTR(-ENOMEM)) {
+           err = -ENOMEM;
+           break;
+       } else if (IS_ERR(dentry_page)) {
+           errln("fail to readdir of logical block %u of nid %llu",
+                 i, EROFS_V(dir)->nid);
+           err = PTR_ERR(dentry_page);
+           break;
+       }

        lock_page(dentry_page);
        de = (struct erofs_dirent *)kmap(dentry_page);


@@ -311,7 +311,11 @@ z_erofs_vle_work_lookup(struct super_block *sb,
    /* if multiref is disabled, `primary' is always true */
    primary = true;

-   DBG_BUGON(work->pageofs != pageofs);
+   if (work->pageofs != pageofs) {
+       DBG_BUGON(1);
+       erofs_workgroup_put(egrp);
+       return ERR_PTR(-EIO);
+   }

    /*
     * lock must be taken first to avoid grp->next == NIL between

@@ -853,6 +857,7 @@ static int z_erofs_vle_unzip(struct super_block *sb,
    for (i = 0; i < nr_pages; ++i)
        pages[i] = NULL;

+   err = 0;
    z_erofs_pagevec_ctor_init(&ctor,
        Z_EROFS_VLE_INLINE_PAGEVECS, work->pagevec, 0);

@@ -874,8 +879,17 @@ static int z_erofs_vle_unzip(struct super_block *sb,
        pagenr = z_erofs_onlinepage_index(page);

        DBG_BUGON(pagenr >= nr_pages);
-       DBG_BUGON(pages[pagenr]);
+       /*
+        * currently EROFS doesn't support multiref(dedup),
+        * so here erroring out one multiref page.
+        */
+       if (pages[pagenr]) {
+           DBG_BUGON(1);
+           SetPageError(pages[pagenr]);
+           z_erofs_onlinepage_endio(pages[pagenr]);
+           err = -EIO;
+       }
        pages[pagenr] = page;
    }
    sparsemem_pages = i;

@@ -885,7 +899,6 @@ static int z_erofs_vle_unzip(struct super_block *sb,
    overlapped = false;
    compressed_pages = grp->compressed_pages;

-   err = 0;
    for (i = 0; i < clusterpages; ++i) {
        unsigned pagenr;

@@ -911,7 +924,12 @@ static int z_erofs_vle_unzip(struct super_block *sb,
        pagenr = z_erofs_onlinepage_index(page);

        DBG_BUGON(pagenr >= nr_pages);
-       DBG_BUGON(pages[pagenr]);
+       if (pages[pagenr]) {
+           DBG_BUGON(1);
+           SetPageError(pages[pagenr]);
+           z_erofs_onlinepage_endio(pages[pagenr]);
+           err = -EIO;
+       }
        ++sparsemem_pages;
        pages[pagenr] = page;

@@ -1335,19 +1353,18 @@ static int z_erofs_vle_normalaccess_readpage(struct file *file,
    err = z_erofs_do_read_page(&f, page, &pagepool);
    (void)z_erofs_vle_work_iter_end(&f.builder);

-   if (err) {
-       errln("%s, failed to read, err [%d]", __func__, err);
-       goto out;
-   }

    /* if some compressed cluster ready, need submit them anyway */
    z_erofs_submit_and_unzip(&f, &pagepool, true);

-out:
+   if (err)
+       errln("%s, failed to read, err [%d]", __func__, err);
+
    if (f.m_iter.mpage != NULL)
        put_page(f.m_iter.mpage);

    /* clean up the remaining free pages */
    put_pages_list(&pagepool);
-   return 0;
+   return err;
}

static inline int __z_erofs_vle_normalaccess_readpages(
}
static inline int __z_erofs_vle_normalaccess_readpages(


@@ -296,7 +296,7 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz,
        mod_delayed_work(system_freezable_wq, &tz->poll_queue,
                 msecs_to_jiffies(delay));
    else
-       cancel_delayed_work(&tz->poll_queue);
+       cancel_delayed_work_sync(&tz->poll_queue);
}

static void monitor_thermal_zone(struct thermal_zone_device *tz)


@@ -87,13 +87,17 @@ static struct thermal_hwmon_device *
thermal_hwmon_lookup_by_type(const struct thermal_zone_device *tz)
{
    struct thermal_hwmon_device *hwmon;
+   char type[THERMAL_NAME_LENGTH];

    mutex_lock(&thermal_hwmon_list_lock);
-   list_for_each_entry(hwmon, &thermal_hwmon_list, node)
-       if (!strcmp(hwmon->type, tz->type)) {
+   list_for_each_entry(hwmon, &thermal_hwmon_list, node) {
+       strcpy(type, tz->type);
+       strreplace(type, '-', '_');
+       if (!strcmp(hwmon->type, type)) {
            mutex_unlock(&thermal_hwmon_list_lock);
            return hwmon;
        }
+   }
    mutex_unlock(&thermal_hwmon_list_lock);

    return NULL;

@@ -38,6 +38,7 @@ static const struct aspeed_wdt_config ast2500_config = {
static const struct of_device_id aspeed_wdt_of_table[] = {
    { .compatible = "aspeed,ast2400-wdt", .data = &ast2400_config },
    { .compatible = "aspeed,ast2500-wdt", .data = &ast2500_config },
+   { .compatible = "aspeed,ast2600-wdt", .data = &ast2500_config },
    { },
};
MODULE_DEVICE_TABLE(of, aspeed_wdt_of_table);

@@ -264,7 +265,8 @@ static int aspeed_wdt_probe(struct platform_device *pdev)
        set_bit(WDOG_HW_RUNNING, &wdt->wdd.status);
    }

-   if (of_device_is_compatible(np, "aspeed,ast2500-wdt")) {
+   if ((of_device_is_compatible(np, "aspeed,ast2500-wdt")) ||
+       (of_device_is_compatible(np, "aspeed,ast2600-wdt"))) {
        u32 reg = readl(wdt->base + WDT_RESET_WIDTH);

        reg &= config->ext_pulse_width_mask;

@@ -55,7 +55,7 @@
#define IMX2_WDT_WMCR 0x08 /* Misc Register */

-#define IMX2_WDT_MAX_TIME 128
+#define IMX2_WDT_MAX_TIME 128U
#define IMX2_WDT_DEFAULT_TIME 60 /* in seconds */

#define WDOG_SEC_TO_COUNT(s) ((s * 2 - 1) << 8)

@@ -180,7 +180,7 @@ static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
{
    unsigned int actual;

-   actual = min(new_timeout, wdog->max_hw_heartbeat_ms * 1000);
+   actual = min(new_timeout, IMX2_WDT_MAX_TIME);
    __imx2_wdt_set_timeout(wdog, actual);
    wdog->timeout = new_timeout;
    return 0;


@@ -29,6 +29,8 @@
#include "../pci/pci.h"
#ifdef CONFIG_PCI_MMCONFIG
#include <asm/pci_x86.h>
+
+static int xen_mcfg_late(void);
#endif

static bool __read_mostly pci_seg_supported = true;

@@ -40,7 +42,18 @@ static int xen_add_device(struct device *dev)
#ifdef CONFIG_PCI_IOV
    struct pci_dev *physfn = pci_dev->physfn;
#endif
+#ifdef CONFIG_PCI_MMCONFIG
+   static bool pci_mcfg_reserved = false;
+   /*
+    * Reserve MCFG areas in Xen on first invocation due to this being
+    * potentially called from inside of acpi_init immediately after
+    * MCFG table has been finally parsed.
+    */
+   if (!pci_mcfg_reserved) {
+       xen_mcfg_late();
+       pci_mcfg_reserved = true;
+   }
+#endif
    if (pci_seg_supported) {
        struct {
            struct physdev_pci_device_add add;

@@ -213,7 +226,7 @@ static int __init register_xen_pci_notifier(void)
arch_initcall(register_xen_pci_notifier);

#ifdef CONFIG_PCI_MMCONFIG
-static int __init xen_mcfg_late(void)
+static int xen_mcfg_late(void)
{
    struct pci_mmcfg_region *cfg;
    int rc;

@@ -252,8 +265,4 @@ static int __init xen_mcfg_late(void)
    }
    return 0;
}
-
-/*
- * Needs to be done after acpi_init which are subsys_initcall.
- */
-subsys_initcall_sync(xen_mcfg_late);
#endif


@@ -55,6 +55,7 @@
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/miscdevice.h>
+#include <linux/workqueue.h>

#include <xen/xenbus.h>
#include <xen/xen.h>

@@ -116,6 +117,8 @@ struct xenbus_file_priv {
    wait_queue_head_t read_waitq;

    struct kref kref;
+
+   struct work_struct wq;
};

/* Read out any raw xenbus messages queued up. */

@@ -300,14 +303,14 @@ static void watch_fired(struct xenbus_watch *watch,
    mutex_unlock(&adap->dev_data->reply_mutex);
}

-static void xenbus_file_free(struct kref *kref)
+static void xenbus_worker(struct work_struct *wq)
{
    struct xenbus_file_priv *u;
    struct xenbus_transaction_holder *trans, *tmp;
    struct watch_adapter *watch, *tmp_watch;
    struct read_buffer *rb, *tmp_rb;

-   u = container_of(kref, struct xenbus_file_priv, kref);
+   u = container_of(wq, struct xenbus_file_priv, wq);

    /*
     * No need for locking here because there are no other users,

@@ -333,6 +336,18 @@ static void xenbus_file_free(struct kref *kref)
    kfree(u);
}

+static void xenbus_file_free(struct kref *kref)
+{
+   struct xenbus_file_priv *u;
+
+   /*
+    * We might be called in xenbus_thread().
+    * Use workqueue to avoid deadlock.
+    */
+   u = container_of(kref, struct xenbus_file_priv, kref);
+   schedule_work(&u->wq);
+}
+
static struct xenbus_transaction_holder *xenbus_get_transaction(
    struct xenbus_file_priv *u, uint32_t tx_id)
{

@@ -652,6 +667,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
    INIT_LIST_HEAD(&u->watches);
    INIT_LIST_HEAD(&u->read_buffers);
    init_waitqueue_head(&u->read_waitq);
+   INIT_WORK(&u->wq, xenbus_worker);

    mutex_init(&u->reply_mutex);
    mutex_init(&u->msgbuffer_mutex);


@@ -528,6 +528,7 @@ v9fs_mmap_file_mmap(struct file *filp, struct vm_area_struct *vma)
    v9inode = V9FS_I(inode);
    mutex_lock(&v9inode->v_mutex);
    if (!v9inode->writeback_fid &&
+       (vma->vm_flags & VM_SHARED) &&
        (vma->vm_flags & VM_WRITE)) {
        /*
         * clone a fid and add it to writeback_fid

@@ -629,6 +630,8 @@ static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
            (vma->vm_end - vma->vm_start - 1),
    };

+   if (!(vma->vm_flags & VM_SHARED))
+       return;
+
    p9_debug(P9_DEBUG_VFS, "9p VMA close, %p, flushing", vma);

@@ -807,7 +807,12 @@ static int fill_inode(struct inode *inode, struct page *locked_page,

    /* update inode */
    inode->i_rdev = le32_to_cpu(info->rdev);
-   inode->i_blkbits = fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1;
+   /* directories have fl_stripe_unit set to zero */
+   if (le32_to_cpu(info->layout.fl_stripe_unit))
+       inode->i_blkbits =
+           fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1;
+   else
+       inode->i_blkbits = CEPH_BLOCK_SHIFT;

    __ceph_update_quota(ci, iinfo->max_bytes, iinfo->max_files);

@@ -3640,7 +3640,9 @@ static void delayed_work(struct work_struct *work)
                pr_info("mds%d hung\n", s->s_mds);
        }
    }
-   if (s->s_state < CEPH_MDS_SESSION_OPEN) {
+   if (s->s_state == CEPH_MDS_SESSION_NEW ||
+       s->s_state == CEPH_MDS_SESSION_RESTARTING ||
+       s->s_state == CEPH_MDS_SESSION_REJECTED) {
        /* this mds is failed or recovering, just wait */
        ceph_put_mds_session(s);
        continue;


@@ -518,6 +518,7 @@ static int cuse_channel_open(struct inode *inode, struct file *file)
    rc = cuse_send_init(cc);
    if (rc) {
        fuse_dev_free(fud);
+       fuse_conn_put(&cc->fc);
        return rc;
    }
    file->private_data = fud;


@@ -1171,7 +1171,7 @@ static void encode_attrs(struct xdr_stream *xdr, const struct iattr *iap,
        } else
            *p++ = cpu_to_be32(NFS4_SET_TO_SERVER_TIME);
    }
-   if (bmval[2] & FATTR4_WORD2_SECURITY_LABEL) {
+   if (label && (bmval[2] & FATTR4_WORD2_SECURITY_LABEL)) {
        *p++ = cpu_to_be32(label->lfs);
        *p++ = cpu_to_be32(label->pi);
        *p++ = cpu_to_be32(label->len);


@@ -1426,10 @@ void pnfs_roc_release(struct nfs4_layoutreturn_args *args,
    const nfs4_stateid *res_stateid = NULL;
    struct nfs4_xdr_opaque_data *ld_private = args->ld_private;

-   if (ret == 0) {
-       arg_stateid = &args->stateid;
+   switch (ret) {
+   case -NFS4ERR_NOMATCHING_LAYOUT:
+       break;
+   case 0:
        if (res->lrs_present)
            res_stateid = &res->stateid;
+       /* Fallthrough */
+   default:
+       arg_stateid = &args->stateid;
    }
    pnfs_layoutreturn_free_lsegs(lo, arg_stateid, &args->range,
            res_stateid);


@@ -304,19 +304,10 @@ COMPAT_SYSCALL_DEFINE2(fstatfs, unsigned int, fd, struct compat_statfs __user *,
static int put_compat_statfs64(struct compat_statfs64 __user *ubuf, struct kstatfs *kbuf)
{
    struct compat_statfs64 buf;
-   if (sizeof(ubuf->f_bsize) == 4) {
-       if ((kbuf->f_type | kbuf->f_bsize | kbuf->f_namelen |
-            kbuf->f_frsize | kbuf->f_flags) & 0xffffffff00000000ULL)
+
+   if ((kbuf->f_bsize | kbuf->f_frsize) & 0xffffffff00000000ULL)
        return -EOVERFLOW;
-       /* f_files and f_ffree may be -1; it's okay
-        * to stuff that into 32 bits */
-       if (kbuf->f_files != 0xffffffffffffffffULL
-           && (kbuf->f_files & 0xffffffff00000000ULL))
-           return -EOVERFLOW;
-       if (kbuf->f_ffree != 0xffffffffffffffffULL
-           && (kbuf->f_ffree & 0xffffffff00000000ULL))
-           return -EOVERFLOW;
-   }
+
    memset(&buf, 0, sizeof(struct compat_statfs64));
    buf.f_type = kbuf->f_type;
    buf.f_bsize = kbuf->f_bsize;

@@ -3185,4 +3185,57 @@ static inline bool ieee80211_action_contains_tpc(struct sk_buff *skb)
    return true;
}

+struct element {
+   u8 id;
+   u8 datalen;
+   u8 data[];
+} __packed;
+
+/* element iteration helpers */
+#define for_each_element(_elem, _data, _datalen)                \
+   for (_elem = (const struct element *)(_data);                \
+        (const u8 *)(_data) + (_datalen) - (const u8 *)_elem >= \
+           (int)sizeof(*_elem) &&                               \
+        (const u8 *)(_data) + (_datalen) - (const u8 *)_elem >= \
+           (int)sizeof(*_elem) + _elem->datalen;                \
+        _elem = (const struct element *)(_elem->data + _elem->datalen))
+
+#define for_each_element_id(element, _id, data, datalen)        \
+   for_each_element(element, data, datalen)                     \
+       if (element->id == (_id))
+
+#define for_each_element_extid(element, extid, data, datalen)   \
+   for_each_element(element, data, datalen)                     \
+       if (element->id == WLAN_EID_EXTENSION &&                 \
+           element->datalen > 0 &&                              \
+           element->data[0] == (extid))
+
+#define for_each_subelement(sub, element)                       \
+   for_each_element(sub, (element)->data, (element)->datalen)
+
+#define for_each_subelement_id(sub, id, element)                \
+   for_each_element_id(sub, id, (element)->data, (element)->datalen)
+
+#define for_each_subelement_extid(sub, extid, element)          \
+   for_each_element_extid(sub, extid, (element)->data, (element)->datalen)
+
+/**
+ * for_each_element_completed - determine if element parsing consumed all data
+ * @element: element pointer after for_each_element() or friends
+ * @data: same data pointer as passed to for_each_element() or friends
+ * @datalen: same data length as passed to for_each_element() or friends
+ *
+ * This function returns %true if all the data was parsed or considered
+ * while walking the elements. Only use this if your for_each_element()
+ * loop cannot be broken out of, otherwise it always returns %false.
+ *
+ * If some data was malformed, this returns %false since the last parsed
+ * element will not fill the whole remaining data.
+ */
+static inline bool for_each_element_completed(const struct element *element,
+                         const void *data, size_t datalen)
+{
+   return (const u8 *)element == (const u8 *)data + datalen;
+}
+
#endif /* LINUX_IEEE80211_H */
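The helpers added above bound every dereference by both the remaining buffer length and the element's own datalen, which is what makes them safe on untrusted information elements; for_each_element_completed() then distinguishes a clean walk from truncated data. A small usage sketch — the ies buffer and function name are hypothetical, only the helpers and the standard WLAN_EID_SSID id come from this header:

/* Sketch only: walk a raw IE buffer with the new helpers. */
static void count_ssid_elements(const u8 *ies, size_t ies_len)
{
    const struct element *elem;
    unsigned int n = 0;

    for_each_element_id(elem, WLAN_EID_SSID, ies, ies_len)
        n++;    /* elem->data / elem->datalen are in bounds here */

    /* Walk everything to check that the buffer was well formed. */
    for_each_element(elem, ies, ies_len)
        ;
    if (!for_each_element_completed(elem, ies, ies_len))
        pr_warn("trailing garbage in IE buffer (%u SSID elements)\n", n);
}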


@@ -330,6 +330,8 @@ enum {

static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
+   if (current->mm != mm)
+       return;
    if (likely(!(atomic_read(&mm->membarrier_state) &
             MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
        return;


@@ -353,6 +353,8 @@ struct device;
#define SND_SOC_DAPM_WILL_PMD 0x80 /* called at start of sequence */
#define SND_SOC_DAPM_PRE_POST_PMD \
    (SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMD)
+#define SND_SOC_DAPM_PRE_POST_PMU \
+   (SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU)

/* convenience event type detection */
#define SND_SOC_DAPM_EVENT_ON(e) \

@@ -3,6 +3,7 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/binfmts.h>
+#include <linux/elfcore.h>

Elf_Half __weak elf_core_extra_phdrs(void)
{

@@ -271,7 +271,7 @@ pv_wait_early(struct pv_node *prev, int loop)
    if ((loop & PV_PREV_CHECK_MASK) != 0)
        return false;

-   return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
+   return READ_ONCE(prev->state) != vcpu_running;
}

/*
/*


@@ -1090,7 +1090,8 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
    if (cpumask_equal(&p->cpus_allowed, new_mask))
        goto out;

-   if (!cpumask_intersects(new_mask, cpu_valid_mask)) {
+   dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
+   if (dest_cpu >= nr_cpu_ids) {
        ret = -EINVAL;
        goto out;
    }

@@ -1111,7 +1112,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
    if (cpumask_test_cpu(task_cpu(p), new_mask))
        goto out;

-   dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
    if (task_running(rq, p) || p->state == TASK_WAKING) {
        struct migration_arg arg = { p, dest_cpu };
        /* Need help from migration thread: drop lock and wait. */

@@ -235,7 +235,7 @@ static int membarrier_register_private_expedited(int flags)
     * groups, which use the same mm. (CLONE_VM but not
     * CLONE_THREAD).
     */
-   if (atomic_read(&mm->membarrier_state) & state)
+   if ((atomic_read(&mm->membarrier_state) & state) == state)
        return 0;
    atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED, &mm->membarrier_state);
    if (flags & MEMBARRIER_FLAG_SYNC_CORE)
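The small-looking change above is an any-versus-all distinction: with a plain & test, registering the SYNC_CORE variant after the plain expedited one finds some bit already set and returns early, so the SYNC_CORE state bit is never recorded. The two tests side by side, with illustrative constants rather than the kernel's real flag values:

/* Illustrative only: "any bit" vs "all bits" mask tests. */
#define ST_EXPEDITED  0x1
#define ST_SYNC_CORE  0x2

static int registered_any(unsigned int cur, unsigned int want)
{
    return cur & want;              /* wrong: fires on partial overlap */
}

static int registered_all(unsigned int cur, unsigned int want)
{
    return (cur & want) == want;    /* right: requires every bit */
}

/* With cur = ST_EXPEDITED and want = ST_EXPEDITED | ST_SYNC_CORE,
 * registered_any() wrongly reports "already registered";
 * registered_all() correctly does not. */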


@@ -44,34 +44,39 @@ static int bc_shutdown(struct clock_event_device *evt)
 */
static int bc_set_next(ktime_t expires, struct clock_event_device *bc)
{
-   int bc_moved;
    /*
-    * We try to cancel the timer first. If the callback is on
-    * flight on some other cpu then we let it handle it. If we
-    * were able to cancel the timer nothing can rearm it as we
-    * own broadcast_lock.
+    * This is called either from enter/exit idle code or from the
+    * broadcast handler. In all cases tick_broadcast_lock is held.
     *
-    * However we can also be called from the event handler of
-    * ce_broadcast_hrtimer itself when it expires. We cannot
-    * restart the timer because we are in the callback, but we
-    * can set the expiry time and let the callback return
-    * HRTIMER_RESTART.
+    * hrtimer_cancel() cannot be called here neither from the
+    * broadcast handler nor from the enter/exit idle code. The idle
+    * code can run into the problem described in bc_shutdown() and the
+    * broadcast handler cannot wait for itself to complete for obvious
+    * reasons.
     *
-    * Since we are in the idle loop at this point and because
-    * hrtimer_{start/cancel} functions call into tracing,
-    * calls to these functions must be bound within RCU_NONIDLE.
+    * Each caller tries to arm the hrtimer on its own CPU, but if the
+    * hrtimer callbback function is currently running, then
+    * hrtimer_start() cannot move it and the timer stays on the CPU on
+    * which it is assigned at the moment.
+    *
+    * As this can be called from idle code, the hrtimer_start()
+    * invocation has to be wrapped with RCU_NONIDLE() as
+    * hrtimer_start() can call into tracing.
     */
    RCU_NONIDLE( {
-       bc_moved = hrtimer_try_to_cancel(&bctimer) >= 0;
-       if (bc_moved)
-           hrtimer_start(&bctimer, expires,
-                     HRTIMER_MODE_ABS_PINNED);});
-   if (bc_moved) {
-       /* Bind the "device" to the cpu */
-       bc->bound_on = smp_processor_id();
-   } else if (bc->bound_on == smp_processor_id()) {
-       hrtimer_set_expires(&bctimer, expires);
-   }
+       hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED);
+       /*
+        * The core tick broadcast mode expects bc->bound_on to be set
+        * correctly to prevent a CPU which has the broadcast hrtimer
+        * armed from going deep idle.
+        *
+        * As tick_broadcast_lock is held, nothing can change the cpu
+        * base which was just established in hrtimer_start() above. So
+        * the below access is safe even without holding the hrtimer
+        * base lock.
+        */
+       bc->bound_on = bctimer.base->cpu_base->cpu;
+   } );
    return 0;
}

@@ -97,10 +102,6 @@ static enum hrtimer_restart bc_handler(struct hrtimer *t)
{
    ce_broadcast_hrtimer.event_handler(&ce_broadcast_hrtimer);

-   if (clockevent_state_oneshot(&ce_broadcast_hrtimer))
-       if (ce_broadcast_hrtimer.next_event != KTIME_MAX)
-           return HRTIMER_RESTART;
-
    return HRTIMER_NORESTART;
}

Some files were not shown because too many files have changed in this diff.