This is the 4.19.14 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAlw2Jd8ACgkQONu9yGCS
aT5DIw//RlX7Djwh9VnEEgggVpPxzIDfO8BcIR5EvSpHoci2skeD6/M5a+xiKKLk
HOuH/cqBobkifnCzHwHLQYP9rIbkRceW0wDU2tdaecTf6G82TPoa5rQzG0rMMTM4
HFrMlMXvQoWSlaALBi5xkGGa7AGOVcmiJBaIkbqNST4Ah8KMBRxEqDvnbh/ALXCe
qLRc7lDf/WRoN9GBzoCJwuaF9EcDW/C3EyHowVroDkN3UobzfdFSmrjkteFbkIkp
9rMzoyIXmKAe762ggkQTk8hEaVHqs7YxWlq53cym6NBtiBgfjqIKtT6tEtGs5U3i
sA+YK6PzCfwp4I0ffXVqUoFi3WfJ4Ist+co8e8Uu0+taRDzahBkxtxxmNb6URU64
1sosY0YyG7k72OYp9J4mYhCAbxUKC8S80TWjwPlyaVaUDWDHAbOQk5HDJ9wIERmN
PltF9wQ7ZQrha4v4nafPYJn/FmQuDCfDA78vOJ09PEbNZoNBhqXbHJGx/GEShdDE
/ZzoVigpN2tqIvXFM99rVPRDaTsWlCSiorOvn8vTyqv64EaGO2qZUDmvaReEbUxy
i1jJ5YcQoPk4GbNI8hfShGOhT+eAtw/KW5pHwqHbEle6jyeK+7KIdBmzw5ZXQIM6
4tzDOgn7yIpkMc+qyj3n3WE1LqRLt/cbOoxMu85jHDf5LgrtF50=
=Gqyx
-----END PGP SIGNATURE-----

Merge 4.19.14 into android-4.19

Changes in 4.19.14
	ax25: fix a use-after-free in ax25_fillin_cb()
	gro_cell: add napi_disable in gro_cells_destroy
	ibmveth: fix DMA unmap error in ibmveth_xmit_start error path
	ieee802154: lowpan_header_create check must check daddr
	ip6mr: Fix potential Spectre v1 vulnerability
	ipv4: Fix potential Spectre v1 vulnerability
	ipv6: explicitly initialize udp6_addr in udp_sock_create6()
	ipv6: tunnels: fix two use-after-free
	ip: validate header length on virtual device xmit
	isdn: fix kernel-infoleak in capi_unlocked_ioctl
	net: clear skb->tstamp in forwarding paths
	net/hamradio/6pack: use mod_timer() to rearm timers
	net: ipv4: do not handle duplicate fragments as overlapping
	net: macb: restart tx after tx used bit read
	net: mvpp2: 10G modes aren't supported on all ports
	net: phy: Fix the issue that netif always links up after resuming
	netrom: fix locking in nr_find_socket()
	net/smc: fix TCP fallback socket release
	net: stmmac: Fix an error code in probe()
	net/tls: allocate tls context using GFP_ATOMIC
	net/wan: fix a double free in x25_asy_open_tty()
	packet: validate address length
	packet: validate address length if non-zero
	ptr_ring: wrap back ->producer in __ptr_ring_swap_queue()
	qmi_wwan: Added support for Fibocom NL668 series
	qmi_wwan: Added support for Telit LN940 series
	qmi_wwan: Add support for Fibocom NL678 series
	sctp: initialize sin6_flowinfo for ipv6 addrs in sctp_inet6addr_event
	sock: Make sock->sk_stamp thread-safe
	tcp: fix a race in inet_diag_dump_icsk()
	tipc: check tsk->group in tipc_wait_for_cond()
	tipc: compare remote and local protocols in tipc_udp_enable()
	tipc: fix a double free in tipc_enable_bearer()
	tipc: fix a double kfree_skb()
	tipc: use lock_sock() in tipc_sk_reinit()
	vhost: make sure used idx is seen before log in vhost_add_used_n()
	VSOCK: Send reset control packet when socket is partially bound
	xen/netfront: tolerate frags with no data
	net/mlx5: Typo fix in del_sw_hw_rule
	tipc: check group dests after tipc_wait_for_cond()
	net/mlx5e: Remove the false indication of software timestamping support
	ipv6: frags: Fix bogus skb->sk in reassembled packets
	net/ipv6: Fix a test against 'ipv6_find_idev()' return value
	nfp: flower: ensure TCP flags can be placed in IPv6 frame
	ipv6: route: Fix return value of ip6_neigh_lookup() on neigh_create() error
	mscc: Configured MAC entries should be locked.
	net/mlx5e: Cancel DIM work on close SQ
	net/mlx5e: RX, Verify MPWQE stride size is in range
	net: mvpp2: fix the phylink mode validation
	qed: Fix command number mismatch between driver and the mfw
	mlxsw: core: Increase timeout during firmware flash process
	net/mlx5e: Remove unused UDP GSO remaining counter
	net/mlx5e: RX, Fix wrong early return in receive queue poll
	net: mvneta: fix operation for 64K PAGE_SIZE
	net: Use __kernel_clockid_t in uapi net_stamp.h
	r8169: fix WoL device wakeup enable
	IB/hfi1: Incorrect sizing of sge for PIO will OOPs
	ALSA: rme9652: Fix potential Spectre v1 vulnerability
	ALSA: emu10k1: Fix potential Spectre v1 vulnerabilities
	ALSA: pcm: Fix potential Spectre v1 vulnerability
	ALSA: emux: Fix potential Spectre v1 vulnerabilities
	powerpc/fsl: Fix spectre_v2 mitigations reporting
	mtd: atmel-quadspi: disallow building on ebsa110
	mtd: rawnand: marvell: prevent timeouts on a loaded machine
	mtd: rawnand: omap2: Pass the parent of pdev to dma_request_chan()
	ALSA: hda: add mute LED support for HP EliteBook 840 G4
	ALSA: hda/realtek: Enable audio jacks of ASUS UX391UA with ALC294
	ALSA: fireface: fix for state to fetch PCM frames
	ALSA: firewire-lib: fix wrong handling payload_length as payload_quadlet
	ALSA: firewire-lib: fix wrong assignment for 'out_packet_without_header' tracepoint
	ALSA: firewire-lib: use the same print format for 'without_header' tracepoints
	ALSA: hda/realtek: Enable the headset mic auto detection for ASUS laptops
	ALSA: hda/tegra: clear pending irq handlers
	usb: dwc2: host: use hrtimer for NAK retries
	USB: serial: pl2303: add ids for Hewlett-Packard HP POS pole displays
	USB: serial: option: add Fibocom NL678 series
	usb: r8a66597: Fix a possible concurrency use-after-free bug in r8a66597_endpoint_disable()
	usb: dwc2: disable power_down on Amlogic devices
	Revert "usb: dwc3: pci: Use devm functions to get the phy GPIOs"
	usb: roles: Add a description for the class to Kconfig
	media: dvb-usb-v2: Fix incorrect use of transfer_flags URB_FREE_BUFFER
	staging: wilc1000: fix missing read_write setting when reading data
	ASoC: intel: cht_bsw_max98090_ti: Add pmc_plt_clk_0 quirk for Chromebook Clapper
	ASoC: intel: cht_bsw_max98090_ti: Add pmc_plt_clk_0 quirk for Chromebook Gnawty
	s390/pci: fix sleeping in atomic during hotplug
	Input: atmel_mxt_ts - don't try to free unallocated kernel memory
	Input: elan_i2c - add ACPI ID for touchpad in ASUS Aspire F5-573G
	x86/speculation/l1tf: Drop the swap storage limit restriction when l1tf=off
	x86/mm: Drop usage of __flush_tlb_all() in kernel_physical_mapping_init()
	KVM: x86: Use jmp to invoke kvm_spurious_fault() from .fixup
	arm64: KVM: Make VHE Stage-2 TLB invalidation operations non-interruptible
	KVM: nVMX: Free the VMREAD/VMWRITE bitmaps if alloc_kvm_area() fails
	platform-msi: Free descriptors in platform_msi_domain_free()
	drm/v3d: Skip debugfs dumping GCA on platforms without GCA.
	DRM: UDL: get rid of useless vblank initialization
	clocksource/drivers/arc_timer: Utilize generic sched_clock
	perf machine: Record if a arch has a single user/kernel address space
	perf thread: Add fallback functions for cases where cpumode is insufficient
	perf tools: Use fallback for sample_addr_correlates_sym() cases
	perf script: Use fallbacks for branch stacks
	perf pmu: Suppress potential format-truncation warning
	perf env: Also consider env->arch == NULL as local operation
	ocxl: Fix endiannes bug in ocxl_link_update_pe()
	ocxl: Fix endiannes bug in read_afu_name()
	ext4: add ext4_sb_bread() to disambiguate ENOMEM cases
	ext4: fix possible use after free in ext4_quota_enable
	ext4: missing unlock/put_page() in ext4_try_to_write_inline_data()
	ext4: fix EXT4_IOC_GROUP_ADD ioctl
	ext4: include terminating u32 in size of xattr entries when expanding inodes
	ext4: avoid declaring fs inconsistent due to invalid file handles
	ext4: force inode writes when nfsd calls commit_metadata()
	ext4: check for shutdown and r/o file system in ext4_write_inode()
	spi: bcm2835: Fix race on DMA termination
	spi: bcm2835: Fix book-keeping of DMA termination
	spi: bcm2835: Avoid finishing transfer prematurely in IRQ mode
	clk: rockchip: fix typo in rk3188 spdif_frac parent
	clk: sunxi-ng: Use u64 for calculation of NM rate
	crypto: cavium/nitrox - fix a DMA pool free failure
	crypto: chcr - small packet Tx stalls the queue
	crypto: testmgr - add AES-CFB tests
	crypto: cfb - fix decryption
	cgroup: fix CSS_TASK_ITER_PROCS
	cdc-acm: fix abnormal DATA RX issue for Mediatek Preloader.
	btrfs: dev-replace: go back to suspended state if target device is missing
	btrfs: dev-replace: go back to suspend state if another EXCL_OP is running
	btrfs: skip file_extent generation check for free_space_inode in run_delalloc_nocow
	Btrfs: fix fsync of files with multiple hard links in new directories
	btrfs: run delayed items before dropping the snapshot
	Btrfs: send, fix race with transaction commits that create snapshots
	brcmfmac: fix roamoff=1 modparam
	brcmfmac: Fix out of bounds memory access during fw load
	powerpc/tm: Unset MSR[TS] if not recheckpointing
	dax: Don't access a freed inode
	dax: Use non-exclusive wait in wait_entry_unlocked()
	f2fs: read page index before freeing
	f2fs: fix validation of the block count in sanity_check_raw_super
	f2fs: sanity check of xattr entry size
	serial: uartps: Fix interrupt mask issue to handle the RX interrupts properly
	media: cec: keep track of outstanding transmits
	media: cec-pin: fix broken tx_ignore_nack_until_eom error injection
	media: rc: cec devices do not have a lirc chardev
	media: imx274: fix stack corruption in imx274_read_reg
	media: vivid: free bitmap_cap when updating std/timings/etc.
	media: vb2: check memory model for VIDIOC_CREATE_BUFS
	media: v4l2-tpg: array index could become negative
	tools lib traceevent: Fix processing of dereferenced args in bprintk events
	MIPS: math-emu: Write-protect delay slot emulation pages
	MIPS: c-r4k: Add r4k_blast_scache_node for Loongson-3
	MIPS: Ensure pmd_present() returns false after pmd_mknotpresent()
	MIPS: Align kernel load address to 64KB
	MIPS: Expand MIPS32 ASIDs to 64 bits
	MIPS: OCTEON: mark RGMII interface disabled on OCTEON III
	MIPS: Fix a R10000_LLSC_WAR logic in atomic.h
	CIFS: Fix error mapping for SMB2_LOCK command which caused OFD lock problem
	smb3: fix large reads on encrypted connections
	arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1
	arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs/SGIs
	ARM: dts: exynos: Specify I2S assigned clocks in proper node
	rtc: m41t80: Correct alarm month range with RTC reads
	KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
	KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum
	KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state
	KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()
	iommu/arm-smmu-v3: Fix big-endian CMD_SYNC writes
	arm64: compat: Avoid sending SIGILL for unallocated syscall numbers
	tpm: tpm_try_transmit() refactor error flow.
	tpm: tpm_i2c_nuvoton: use correct command duration for TPM 2.x
	spi: bcm2835: Unbreak the build of esoteric configs
	MIPS: Only include mmzone.h when CONFIG_NEED_MULTIPLE_NODES=y
	Linux 4.19.14

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 8735c21738
204 changed files with 1505 additions and 638 deletions
@@ -2073,6 +2073,9 @@
 			off
 				Disables hypervisor mitigations and doesn't
 				emit any warnings.
+				It also drops the swap size and available
+				RAM limit restriction on both hypervisor and
+				bare metal.

 			Default is 'flush'.
@@ -405,6 +405,9 @@ time with the option "l1tf=". The valid arguments for this option are:
   off		Disables hypervisor mitigations and doesn't emit any
 		warnings.
+		It also drops the swap size and available RAM limit restrictions
+		on both hypervisor and bare metal.
+
   ============ =============================================================

 The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
@@ -576,7 +579,8 @@ Default mitigations
 The kernel default mitigations for vulnerable processors are:

 - PTE inversion to protect against malicious user space. This is done
-  unconditionally and cannot be controlled.
+  unconditionally and cannot be controlled. The swap storage is limited
+  to ~16TB.

 - L1D conditional flushing on VMENTER when EPT is enabled for
   a guest.
Makefile (2 changes)

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 13
+SUBLEVEL = 14
 EXTRAVERSION =
 NAME = "People's Front"
@@ -26,6 +26,7 @@ config ARC
 	select GENERIC_IRQ_SHOW
 	select GENERIC_PCI_IOMAP
 	select GENERIC_PENDING_IRQ if SMP
+	select GENERIC_SCHED_CLOCK
 	select GENERIC_SMP_IDLE_THREAD
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_TRACEHOOK
@ -26,8 +26,7 @@
|
|||
"Speakers", "SPKL",
|
||||
"Speakers", "SPKR";
|
||||
|
||||
assigned-clocks = <&i2s0 CLK_I2S_RCLK_SRC>,
|
||||
<&clock CLK_MOUT_EPLL>,
|
||||
assigned-clocks = <&clock CLK_MOUT_EPLL>,
|
||||
<&clock CLK_MOUT_MAU_EPLL>,
|
||||
<&clock CLK_MOUT_USER_MAU_EPLL>,
|
||||
<&clock_audss EXYNOS_MOUT_AUDSS>,
|
||||
|
@ -36,8 +35,7 @@
|
|||
<&clock_audss EXYNOS_DOUT_AUD_BUS>,
|
||||
<&clock_audss EXYNOS_DOUT_I2S>;
|
||||
|
||||
assigned-clock-parents = <&clock_audss EXYNOS_SCLK_I2S>,
|
||||
<&clock CLK_FOUT_EPLL>,
|
||||
assigned-clock-parents = <&clock CLK_FOUT_EPLL>,
|
||||
<&clock CLK_MOUT_EPLL>,
|
||||
<&clock CLK_MOUT_MAU_EPLL>,
|
||||
<&clock CLK_MAU_EPLL>,
|
||||
|
@ -48,7 +46,6 @@
|
|||
<0>,
|
||||
<0>,
|
||||
<0>,
|
||||
<0>,
|
||||
<196608001>,
|
||||
<(196608002 / 2)>,
|
||||
<196608000>;
|
||||
|
@ -84,4 +81,6 @@
|
|||
|
||||
&i2s0 {
|
||||
status = "okay";
|
||||
assigned-clocks = <&i2s0 CLK_I2S_RCLK_SRC>;
|
||||
assigned-clock-parents = <&clock_audss EXYNOS_SCLK_I2S>;
|
||||
};
|
||||
|
|
|
@ -33,8 +33,7 @@
|
|||
compatible = "samsung,odroid-xu3-audio";
|
||||
model = "Odroid-XU4";
|
||||
|
||||
assigned-clocks = <&i2s0 CLK_I2S_RCLK_SRC>,
|
||||
<&clock CLK_MOUT_EPLL>,
|
||||
assigned-clocks = <&clock CLK_MOUT_EPLL>,
|
||||
<&clock CLK_MOUT_MAU_EPLL>,
|
||||
<&clock CLK_MOUT_USER_MAU_EPLL>,
|
||||
<&clock_audss EXYNOS_MOUT_AUDSS>,
|
||||
|
@ -43,8 +42,7 @@
|
|||
<&clock_audss EXYNOS_DOUT_AUD_BUS>,
|
||||
<&clock_audss EXYNOS_DOUT_I2S>;
|
||||
|
||||
assigned-clock-parents = <&clock_audss EXYNOS_SCLK_I2S>,
|
||||
<&clock CLK_FOUT_EPLL>,
|
||||
assigned-clock-parents = <&clock CLK_FOUT_EPLL>,
|
||||
<&clock CLK_MOUT_EPLL>,
|
||||
<&clock CLK_MOUT_MAU_EPLL>,
|
||||
<&clock CLK_MAU_EPLL>,
|
||||
|
@ -55,7 +53,6 @@
|
|||
<0>,
|
||||
<0>,
|
||||
<0>,
|
||||
<0>,
|
||||
<196608001>,
|
||||
<(196608002 / 2)>,
|
||||
<196608000>;
|
||||
|
@ -79,6 +76,8 @@
|
|||
|
||||
&i2s0 {
|
||||
status = "okay";
|
||||
assigned-clocks = <&i2s0 CLK_I2S_RCLK_SRC>;
|
||||
assigned-clock-parents = <&clock_audss EXYNOS_SCLK_I2S>;
|
||||
};
|
||||
|
||||
&pwm {
|
||||
|
|
|
@ -104,7 +104,7 @@
|
|||
TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK | TCR_EL2_T0SZ_MASK)
|
||||
|
||||
/* VTCR_EL2 Registers bits */
|
||||
#define VTCR_EL2_RES1 (1 << 31)
|
||||
#define VTCR_EL2_RES1 (1U << 31)
|
||||
#define VTCR_EL2_HD (1 << 22)
|
||||
#define VTCR_EL2_HA (1 << 21)
|
||||
#define VTCR_EL2_PS_MASK TCR_EL2_PS_MASK
|
||||
|
|
|
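The `1U` matters because `1 << 31` overflows a signed `int`, and a negative value widened to a 64-bit register type sign-extends into the upper 32 bits. A standalone sketch (not kernel code; assumes the usual two's-complement ABI):

```c
#include <stdint.h>

/* Well-defined: an unsigned shift zero-extends when widened to 64 bits. */
static uint64_t res1_unsigned(void)
{
	return 1U << 31;	/* 0x0000000080000000 */
}

/* What sign-extension does to a negative 32-bit value on widening --
 * the upper 32 bits of VTCR_EL2 would all be set. */
static uint64_t res1_sign_extended(void)
{
	int32_t v = (int32_t)0x80000000u;
	return (uint64_t)v;	/* 0xffffffff80000000 */
}
```

This is exactly the failure mode the hunk's subject line describes: the upper 32 bits of VTCR_EL2 accidentally set to 1.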
@ -40,8 +40,9 @@
|
|||
* The following SVCs are ARM private.
|
||||
*/
|
||||
#define __ARM_NR_COMPAT_BASE 0x0f0000
|
||||
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
|
||||
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
|
||||
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE + 2)
|
||||
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5)
|
||||
#define __ARM_NR_COMPAT_END (__ARM_NR_COMPAT_BASE + 0x800)
|
||||
|
||||
#define __NR_compat_syscalls 399
|
||||
#endif
|
||||
|
|
|
@ -102,12 +102,12 @@ long compat_arm_syscall(struct pt_regs *regs)
|
|||
|
||||
default:
|
||||
/*
|
||||
* Calls 9f00xx..9f07ff are defined to return -ENOSYS
|
||||
* Calls 0xf0xxx..0xf07ff are defined to return -ENOSYS
|
||||
* if not implemented, rather than raising SIGILL. This
|
||||
* way the calling program can gracefully determine whether
|
||||
* a feature is supported.
|
||||
*/
|
||||
if ((no & 0xffff) <= 0x7ff)
|
||||
if (no < __ARM_NR_COMPAT_END)
|
||||
return -ENOSYS;
|
||||
break;
|
||||
}
|
||||
|
|
|
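The old test masked off everything but the low 16 bits, so syscall numbers far outside the ARM-private range could still read as "reserved" and get `-ENOSYS` instead of `SIGILL`. An illustrative sketch of the two checks (constants mirror the patch; helper names are ours, and the lower bound is assumed to have been checked before reaching this branch, as in the kernel's dispatcher):

```c
/* ARM-private compat syscall range, as in the hunk above. */
#define ARM_NR_COMPAT_BASE	0x0f0000
#define ARM_NR_COMPAT_END	(ARM_NR_COMPAT_BASE + 0x800)

/* Old check: only looks at the low 16 bits of the number. */
static int old_is_reserved(long no)
{
	return (no & 0xffff) <= 0x7ff;
}

/* New check: bounds the number against the end of the private range. */
static int new_is_reserved(long no)
{
	return no < ARM_NR_COMPAT_END;
}
```

For example, 0x110100 satisfies the old check (low bits 0x0100) but fails the new one, so it now raises SIGILL as an unallocated number should.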
@ -15,14 +15,19 @@
|
|||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/irqflags.h>
|
||||
|
||||
#include <asm/kvm_hyp.h>
|
||||
#include <asm/kvm_mmu.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm)
|
||||
static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm,
|
||||
unsigned long *flags)
|
||||
{
|
||||
u64 val;
|
||||
|
||||
local_irq_save(*flags);
|
||||
|
||||
/*
|
||||
* With VHE enabled, we have HCR_EL2.{E2H,TGE} = {1,1}, and
|
||||
* most TLB operations target EL2/EL0. In order to affect the
|
||||
|
@ -37,7 +42,8 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm)
|
|||
isb();
|
||||
}
|
||||
|
||||
static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm)
|
||||
static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm,
|
||||
unsigned long *flags)
|
||||
{
|
||||
write_sysreg(kvm->arch.vttbr, vttbr_el2);
|
||||
isb();
|
||||
|
@ -48,7 +54,8 @@ static hyp_alternate_select(__tlb_switch_to_guest,
|
|||
__tlb_switch_to_guest_vhe,
|
||||
ARM64_HAS_VIRT_HOST_EXTN);
|
||||
|
||||
static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
|
||||
static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm,
|
||||
unsigned long flags)
|
||||
{
|
||||
/*
|
||||
* We're done with the TLB operation, let's restore the host's
|
||||
|
@ -56,9 +63,12 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
|
|||
*/
|
||||
write_sysreg(0, vttbr_el2);
|
||||
write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
|
||||
isb();
|
||||
local_irq_restore(flags);
|
||||
}
|
||||
|
||||
static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
|
||||
static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
|
||||
unsigned long flags)
|
||||
{
|
||||
write_sysreg(0, vttbr_el2);
|
||||
}
|
||||
|
@ -70,11 +80,13 @@ static hyp_alternate_select(__tlb_switch_to_host,
|
|||
|
||||
void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
dsb(ishst);
|
||||
|
||||
/* Switch to requested VMID */
|
||||
kvm = kern_hyp_va(kvm);
|
||||
__tlb_switch_to_guest()(kvm);
|
||||
__tlb_switch_to_guest()(kvm, &flags);
|
||||
|
||||
/*
|
||||
* We could do so much better if we had the VA as well.
|
||||
|
@ -117,36 +129,39 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
|
|||
if (!has_vhe() && icache_is_vpipt())
|
||||
__flush_icache_all();
|
||||
|
||||
__tlb_switch_to_host()(kvm);
|
||||
__tlb_switch_to_host()(kvm, flags);
|
||||
}
|
||||
|
||||
void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
dsb(ishst);
|
||||
|
||||
/* Switch to requested VMID */
|
||||
kvm = kern_hyp_va(kvm);
|
||||
__tlb_switch_to_guest()(kvm);
|
||||
__tlb_switch_to_guest()(kvm, &flags);
|
||||
|
||||
__tlbi(vmalls12e1is);
|
||||
dsb(ish);
|
||||
isb();
|
||||
|
||||
__tlb_switch_to_host()(kvm);
|
||||
__tlb_switch_to_host()(kvm, flags);
|
||||
}
|
||||
|
||||
void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
|
||||
unsigned long flags;
|
||||
|
||||
/* Switch to requested VMID */
|
||||
__tlb_switch_to_guest()(kvm);
|
||||
__tlb_switch_to_guest()(kvm, &flags);
|
||||
|
||||
__tlbi(vmalle1);
|
||||
dsb(nsh);
|
||||
isb();
|
||||
|
||||
__tlb_switch_to_host()(kvm);
|
||||
__tlb_switch_to_host()(kvm, flags);
|
||||
}
|
||||
|
||||
void __hyp_text __kvm_flush_vm_context(void)
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
#include <stdint.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include "../../../../include/linux/sizes.h"
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
|
@ -45,11 +46,11 @@ int main(int argc, char *argv[])
|
|||
vmlinuz_load_addr = vmlinux_load_addr + vmlinux_size;
|
||||
|
||||
/*
|
||||
* Align with 16 bytes: "greater than that used for any standard data
|
||||
* types by a MIPS compiler." -- See MIPS Run Linux (Second Edition).
|
||||
* Align with 64KB: KEXEC needs load sections to be aligned to PAGE_SIZE,
|
||||
* which may be as large as 64KB depending on the kernel configuration.
|
||||
*/
|
||||
|
||||
vmlinuz_load_addr += (16 - vmlinux_size % 16);
|
||||
vmlinuz_load_addr += (SZ_64K - vmlinux_size % SZ_64K);
|
||||
|
||||
printf("0x%llx\n", vmlinuz_load_addr);
|
||||
|
||||
|
|
|
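A host-side mirror of the arithmetic above (helper name is ours, not the kernel's). Note the expression always advances the address, adding a full 64KB when the size is already aligned, the same behaviour the old 16-byte variant had, just with a larger block:

```c
#include <stdint.h>

#define SZ_64K 0x10000

/* Place vmlinuz just past vmlinux, rounded up to a 64KB boundary,
 * mirroring the hunk above. */
static uint64_t vmlinuz_addr(uint64_t vmlinux_load_addr,
			     uint64_t vmlinux_size)
{
	uint64_t addr = vmlinux_load_addr + vmlinux_size;

	addr += SZ_64K - vmlinux_size % SZ_64K;
	return addr;
}
```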
@ -286,7 +286,8 @@ static cvmx_helper_interface_mode_t __cvmx_get_mode_cn7xxx(int interface)
|
|||
case 3:
|
||||
return CVMX_HELPER_INTERFACE_MODE_LOOP;
|
||||
case 4:
|
||||
return CVMX_HELPER_INTERFACE_MODE_RGMII;
|
||||
/* TODO: Implement support for AGL (RGMII). */
|
||||
return CVMX_HELPER_INTERFACE_MODE_DISABLED;
|
||||
default:
|
||||
return CVMX_HELPER_INTERFACE_MODE_DISABLED;
|
||||
}
|
||||
|
|
|
@ -306,7 +306,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
|
|||
{ \
|
||||
long result; \
|
||||
\
|
||||
if (kernel_uses_llsc && R10000_LLSC_WAR) { \
|
||||
if (kernel_uses_llsc) { \
|
||||
long temp; \
|
||||
\
|
||||
__asm__ __volatile__( \
|
||||
|
|
|
@@ -50,7 +50,7 @@ struct guest_info {
 #define MIPS_CACHE_PINDEX	0x00000020	/* Physically indexed cache */

 struct cpuinfo_mips {
-	unsigned long		asid_cache;
+	u64			asid_cache;
 #ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
 	unsigned long		asid_mask;
 #endif
@ -21,6 +21,7 @@
|
|||
#define NODE3_ADDRSPACE_OFFSET 0x300000000000UL
|
||||
|
||||
#define pa_to_nid(addr) (((addr) & 0xf00000000000) >> NODE_ADDRSPACE_SHIFT)
|
||||
#define nid_to_addrbase(nid) ((nid) << NODE_ADDRSPACE_SHIFT)
|
||||
|
||||
#define LEVELS_PER_SLICE 128
|
||||
|
||||
|
|
|
@ -7,7 +7,7 @@
|
|||
#include <linux/wait.h>
|
||||
|
||||
typedef struct {
|
||||
unsigned long asid[NR_CPUS];
|
||||
u64 asid[NR_CPUS];
|
||||
void *vdso;
|
||||
atomic_t fp_mode_switching;
|
||||
|
||||
|
|
|
@@ -76,14 +76,14 @@ extern unsigned long pgd_current[];
 *  All unused by hardware upper bits will be considered
 *  as a software asid extension.
 */
-static unsigned long asid_version_mask(unsigned int cpu)
+static inline u64 asid_version_mask(unsigned int cpu)
 {
 	unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);

-	return ~(asid_mask | (asid_mask - 1));
+	return ~(u64)(asid_mask | (asid_mask - 1));
 }

-static unsigned long asid_first_version(unsigned int cpu)
+static inline u64 asid_first_version(unsigned int cpu)
 {
 	return ~asid_version_mask(cpu) + 1;
 }
@@ -102,14 +102,12 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 static inline void
 get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 {
-	unsigned long asid = asid_cache(cpu);
+	u64 asid = asid_cache(cpu);

 	if (!((asid += cpu_asid_inc()) & cpu_asid_mask(&cpu_data[cpu]))) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
 		local_flush_tlb_all();	/* start new asid cycle */
-		if (!asid)		/* fix version if needed */
-			asid = asid_first_version(cpu);
 	}

 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
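The `(u64)` cast in `asid_version_mask()` is the crux: on a 32-bit kernel `unsigned long` is 32 bits wide, so without the cast the complement is computed in 32 bits and the upper half of the new 64-bit version space is silently zero. A standalone sketch (the 0xff mask is just an example value):

```c
#include <stdint.h>

/* New behaviour: complement computed at 64-bit width, so every bit
 * above the hardware ASID field counts as version. */
static uint64_t version_mask_new(uint32_t asid_mask)
{
	return ~(uint64_t)(asid_mask | (asid_mask - 1));
}

/* Old behaviour on a 32-bit kernel: complement truncated to 32 bits
 * before widening, losing the top 32 version bits. */
static uint64_t version_mask_old(uint32_t asid_mask)
{
	return (uint32_t)~(asid_mask | (asid_mask - 1));
}
```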
@ -7,7 +7,18 @@
|
|||
#define _ASM_MMZONE_H_
|
||||
|
||||
#include <asm/page.h>
|
||||
#include <mmzone.h>
|
||||
|
||||
#ifdef CONFIG_NEED_MULTIPLE_NODES
|
||||
# include <mmzone.h>
|
||||
#endif
|
||||
|
||||
#ifndef pa_to_nid
|
||||
#define pa_to_nid(addr) 0
|
||||
#endif
|
||||
|
||||
#ifndef nid_to_addrbase
|
||||
#define nid_to_addrbase(nid) 0
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_DISCONTIGMEM
|
||||
|
||||
|
|
|
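The pattern above in isolation: a platform header may define `pa_to_nid()`/`nid_to_addrbase()`, and the generic header supplies zero fallbacks under `#ifndef` so non-NUMA builds keep compiling. A hypothetical standalone sketch of the idiom:

```c
/* If no platform header defined these before this point, install the
 * single-node fallbacks: everything is node 0 at address base 0. */
#ifndef pa_to_nid
#define pa_to_nid(addr)		0
#endif

#ifndef nid_to_addrbase
#define nid_to_addrbase(nid)	0
#endif
```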
@@ -265,6 +265,11 @@ static inline int pmd_bad(pmd_t pmd)

 static inline int pmd_present(pmd_t pmd)
 {
+#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
+	if (unlikely(pmd_val(pmd) & _PAGE_HUGE))
+		return pmd_val(pmd) & _PAGE_PRESENT;
+#endif
+
 	return pmd_val(pmd) != (unsigned long) invalid_pte_table;
 }
|
|||
#include <asm/cpu-features.h>
|
||||
#include <asm/cpu-type.h>
|
||||
#include <asm/mipsmtregs.h>
|
||||
#include <asm/mmzone.h>
|
||||
#include <linux/uaccess.h> /* for uaccess_kernel() */
|
||||
|
||||
extern void (*r4k_blast_dcache)(void);
|
||||
|
@ -747,4 +748,25 @@ __BUILD_BLAST_CACHE_RANGE(s, scache, Hit_Writeback_Inv_SD, , )
|
|||
__BUILD_BLAST_CACHE_RANGE(inv_d, dcache, Hit_Invalidate_D, , )
|
||||
__BUILD_BLAST_CACHE_RANGE(inv_s, scache, Hit_Invalidate_SD, , )
|
||||
|
||||
/* Currently, this is very specific to Loongson-3 */
|
||||
#define __BUILD_BLAST_CACHE_NODE(pfx, desc, indexop, hitop, lsize) \
|
||||
static inline void blast_##pfx##cache##lsize##_node(long node) \
|
||||
{ \
|
||||
unsigned long start = CAC_BASE | nid_to_addrbase(node); \
|
||||
unsigned long end = start + current_cpu_data.desc.waysize; \
|
||||
unsigned long ws_inc = 1UL << current_cpu_data.desc.waybit; \
|
||||
unsigned long ws_end = current_cpu_data.desc.ways << \
|
||||
current_cpu_data.desc.waybit; \
|
||||
unsigned long ws, addr; \
|
||||
\
|
||||
for (ws = 0; ws < ws_end; ws += ws_inc) \
|
||||
for (addr = start; addr < end; addr += lsize * 32) \
|
||||
cache##lsize##_unroll32(addr|ws, indexop); \
|
||||
}
|
||||
|
||||
__BUILD_BLAST_CACHE_NODE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 16)
|
||||
__BUILD_BLAST_CACHE_NODE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 32)
|
||||
__BUILD_BLAST_CACHE_NODE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 64)
|
||||
__BUILD_BLAST_CACHE_NODE(s, scache, Index_Writeback_Inv_SD, Hit_Writeback_Inv_SD, 128)
|
||||
|
||||
#endif /* _ASM_R4KCACHE_H */
|
||||
|
|
|
@@ -126,8 +126,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)

 	/* Map delay slot emulation page */
 	base = mmap_region(NULL, STACK_TOP, PAGE_SIZE,
-			   VM_READ|VM_WRITE|VM_EXEC|
-			   VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
+			   VM_READ | VM_EXEC |
+			   VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
 			   0, NULL);
 	if (IS_ERR_VALUE(base)) {
 		ret = base;
@@ -214,8 +214,9 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir,
 {
 	int isa16 = get_isa16_mode(regs->cp0_epc);
 	mips_instruction break_math;
-	struct emuframe __user *fr;
-	int err, fr_idx;
+	unsigned long fr_uaddr;
+	struct emuframe fr;
+	int fr_idx, ret;

 	/* NOP is easy */
 	if (ir == 0)
@@ -250,27 +251,31 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir,
 	fr_idx = alloc_emuframe();
 	if (fr_idx == BD_EMUFRAME_NONE)
 		return SIGBUS;
-	fr = &dsemul_page()[fr_idx];

 	/* Retrieve the appropriately encoded break instruction */
 	break_math = BREAK_MATH(isa16);

 	/* Write the instructions to the frame */
 	if (isa16) {
-		err = __put_user(ir >> 16,
-				 (u16 __user *)(&fr->emul));
-		err |= __put_user(ir & 0xffff,
-				  (u16 __user *)((long)(&fr->emul) + 2));
-		err |= __put_user(break_math >> 16,
-				  (u16 __user *)(&fr->badinst));
-		err |= __put_user(break_math & 0xffff,
-				  (u16 __user *)((long)(&fr->badinst) + 2));
+		union mips_instruction _emul = {
+			.halfword = { ir >> 16, ir }
+		};
+		union mips_instruction _badinst = {
+			.halfword = { break_math >> 16, break_math }
+		};
+
+		fr.emul = _emul.word;
+		fr.badinst = _badinst.word;
 	} else {
-		err = __put_user(ir, &fr->emul);
-		err |= __put_user(break_math, &fr->badinst);
+		fr.emul = ir;
+		fr.badinst = break_math;
 	}

-	if (unlikely(err)) {
+	/* Write the frame to user memory */
+	fr_uaddr = (unsigned long)&dsemul_page()[fr_idx];
+	ret = access_process_vm(current, fr_uaddr, &fr, sizeof(fr),
+				FOLL_FORCE | FOLL_WRITE);
+	if (unlikely(ret != sizeof(fr))) {
 		MIPS_FPU_EMU_INC_STATS(errors);
 		free_emuframe(fr_idx, current->mm);
 		return SIGBUS;
@@ -282,10 +287,7 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir,
 	atomic_set(&current->thread.bd_emu_frame, fr_idx);

 	/* Change user register context to execute the frame */
-	regs->cp0_epc = (unsigned long)&fr->emul | isa16;
-
-	/* Ensure the icache observes our newly written frame */
-	flush_cache_sigtramp((unsigned long)&fr->emul);
+	regs->cp0_epc = fr_uaddr | isa16;

 	return 0;
 }
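The change above assembles the whole emulation frame in a kernel-local `struct emuframe` and pushes it to the (now write-protected) user page with a single `access_process_vm()` call, instead of four `__put_user()` stores. The kernel composes each 32-bit instruction word from two 16-bit halves via `union mips_instruction`, whose `halfword` layout follows the CPU endianness; this host-side sketch composes the word explicitly instead, so it is endian-independent:

```c
#include <stdint.h>

/* Build a 32-bit MIPS16e instruction word from its two 16-bit halves,
 * high half first -- the same composition the union performs on a
 * big-endian MIPS core. */
static uint32_t word_from_halves(uint16_t hi, uint16_t lo)
{
	return ((uint32_t)hi << 16) | lo;
}
```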
@@ -245,7 +245,7 @@ static void r3k_flush_cache_page(struct vm_area_struct *vma,
 	pmd_t *pmdp;
 	pte_t *ptep;

-	pr_debug("cpage[%08lx,%08lx]\n",
+	pr_debug("cpage[%08llx,%08lx]\n",
 		 cpu_context(smp_processor_id(), mm), addr);

 	/* No ASID => no such page in the cache. */
@@ -459,11 +459,28 @@ static void r4k_blast_scache_setup(void)
 		r4k_blast_scache = blast_scache128;
 }

+static void (*r4k_blast_scache_node)(long node);
+
+static void r4k_blast_scache_node_setup(void)
+{
+	unsigned long sc_lsize = cpu_scache_line_size();
+
+	if (current_cpu_type() != CPU_LOONGSON3)
+		r4k_blast_scache_node = (void *)cache_noop;
+	else if (sc_lsize == 16)
+		r4k_blast_scache_node = blast_scache16_node;
+	else if (sc_lsize == 32)
+		r4k_blast_scache_node = blast_scache32_node;
+	else if (sc_lsize == 64)
+		r4k_blast_scache_node = blast_scache64_node;
+	else if (sc_lsize == 128)
+		r4k_blast_scache_node = blast_scache128_node;
+}
+
 static inline void local_r4k___flush_cache_all(void * args)
 {
 	switch (current_cpu_type()) {
 	case CPU_LOONGSON2:
-	case CPU_LOONGSON3:
 	case CPU_R4000SC:
 	case CPU_R4000MC:
 	case CPU_R4400SC:
@@ -480,6 +497,11 @@ static inline void local_r4k___flush_cache_all(void * args)
 		r4k_blast_scache();
 		break;

+	case CPU_LOONGSON3:
+		/* Use get_ebase_cpunum() for both NUMA=y/n */
+		r4k_blast_scache_node(get_ebase_cpunum() >> 2);
+		break;
+
 	case CPU_BMIPS5000:
 		r4k_blast_scache();
 		__sync();
@@ -840,10 +862,14 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)

 	preempt_disable();
 	if (cpu_has_inclusive_pcaches) {
-		if (size >= scache_size)
-			r4k_blast_scache();
-		else
+		if (size >= scache_size) {
+			if (current_cpu_type() != CPU_LOONGSON3)
+				r4k_blast_scache();
+			else
+				r4k_blast_scache_node(pa_to_nid(addr));
+		} else {
 			blast_scache_range(addr, addr + size);
+		}
 		preempt_enable();
 		__sync();
 		return;
@@ -877,9 +903,12 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)

 	preempt_disable();
 	if (cpu_has_inclusive_pcaches) {
-		if (size >= scache_size)
-			r4k_blast_scache();
-		else {
+		if (size >= scache_size) {
+			if (current_cpu_type() != CPU_LOONGSON3)
+				r4k_blast_scache();
+			else
+				r4k_blast_scache_node(pa_to_nid(addr));
+		} else {
 			/*
 			 * There is no clearly documented alignment requirement
 			 * for the cache instruction on MIPS processors and
@@ -1918,6 +1947,7 @@ void r4k_cache_init(void)
 	r4k_blast_scache_page_setup();
 	r4k_blast_scache_page_indexed_setup();
 	r4k_blast_scache_setup();
+	r4k_blast_scache_node_setup();
 #ifdef CONFIG_EVA
 	r4k_blast_dcache_user_page_setup();
 	r4k_blast_icache_user_page_setup();
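The setup function above follows a common kernel idiom: probe a parameter once at init, install the matching specialised function through a pointer, and install a no-op when the feature does not apply (the kernel uses `cache_noop` for exactly this). A minimal standalone sketch of the pattern, with illustrative names that are ours, not the kernel's:

```c
/* Records which specialised variant last ran, for demonstration. */
static long blasted_lsize;

static void blast16(long node)    { (void)node; blasted_lsize = 16; }
static void blast32(long node)    { (void)node; blasted_lsize = 32; }
static void blast_noop(long node) { (void)node; blasted_lsize = 0; }

/* The dispatch pointer; callers never re-test the probed parameters. */
static void (*blast_node)(long) = blast_noop;

/* One-time init: pick the variant for the probed line size, or the
 * no-op when the CPU doesn't need per-node blasting at all. */
static void blast_node_setup(int is_loongson3, unsigned long sc_lsize)
{
	if (!is_loongson3)
		blast_node = blast_noop;
	else if (sc_lsize == 16)
		blast_node = blast16;
	else if (sc_lsize == 32)
		blast_node = blast32;
}
```

The win is on the hot path: every later call is a plain indirect call, with the configuration decision paid once.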
@@ -22,7 +22,7 @@ enum count_cache_flush_type {
 	COUNT_CACHE_FLUSH_SW	= 0x2,
 	COUNT_CACHE_FLUSH_HW	= 0x4,
 };
-static enum count_cache_flush_type count_cache_flush_type;
+static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;

 bool barrier_nospec_enabled;
 static bool no_nospec;
@@ -1140,11 +1140,11 @@ SYSCALL_DEFINE0(rt_sigreturn)
 {
 	struct rt_sigframe __user *rt_sf;
 	struct pt_regs *regs = current_pt_regs();
+	int tm_restore = 0;
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	struct ucontext __user *uc_transact;
 	unsigned long msr_hi;
 	unsigned long tmp;
-	int tm_restore = 0;
 #endif
 	/* Always make any pending restarted system calls return -EINTR */
 	current->restart_block.fn = do_no_restart_syscall;
@@ -1192,9 +1192,17 @@ SYSCALL_DEFINE0(rt_sigreturn)
 			goto bad;
 		}
 	}
-	if (!tm_restore)
-		/* Fall through, for non-TM restore */
+	if (!tm_restore) {
+		/*
+		 * Unset regs->msr because ucontext MSR TS is not
+		 * set, and recheckpoint was not called. This avoid
+		 * hitting a TM Bad thing at RFID
+		 */
+		regs->msr &= ~MSR_TS_MASK;
+	}
+	/* Fall through, for non-TM restore */
 #endif
+	if (!tm_restore)
 		if (do_setcontext(&rt_sf->uc, regs, 1))
 			goto bad;
@ -740,11 +740,23 @@ SYSCALL_DEFINE0(rt_sigreturn)
|
|||
&uc_transact->uc_mcontext))
|
||||
goto badframe;
|
||||
}
|
||||
else
|
||||
/* Fall through, for non-TM restore */
|
||||
#endif
|
||||
/* Fall through, for non-TM restore */
|
||||
if (!MSR_TM_ACTIVE(msr)) {
|
||||
/*
|
||||
* Unset MSR[TS] on the thread regs since MSR from user
|
||||
* context does not have MSR active, and recheckpoint was
|
||||
* not called since restore_tm_sigcontexts() was not called
|
||||
* also.
|
||||
*
|
||||
* If not unsetting it, the code can RFID to userspace with
|
||||
* MSR[TS] set, but without CPU in the proper state,
|
||||
* causing a TM bad thing.
|
||||
*/
|
||||
current->thread.regs->msr &= ~MSR_TS_MASK;
|
||||
if (restore_sigcontext(current, NULL, 1, &uc->uc_mcontext))
|
||||
goto badframe;
|
||||
}
|
||||
|
||||
if (restore_altstack(&uc->uc_stack))
|
||||
goto badframe;
|
||||
|
|
|
@@ -436,7 +436,7 @@ int clp_get_state(u32 fid, enum zpci_state *state)
 	struct clp_state_data sd = {fid, ZPCI_FN_STATE_RESERVED};
 	int rc;
 
-	rrb = clp_alloc_block(GFP_KERNEL);
+	rrb = clp_alloc_block(GFP_ATOMIC);
 	if (!rrb)
 		return -ENOMEM;
 
@@ -1441,7 +1441,7 @@ asmlinkage void kvm_spurious_fault(void);
 	"cmpb $0, kvm_rebooting \n\t"	      \
 	"jne 668b \n\t"			      \
 	__ASM_SIZE(push) " $666b \n\t"	      \
-	"call kvm_spurious_fault \n\t"	      \
+	"jmp kvm_spurious_fault \n\t"	      \
 	".popsection \n\t" \
 	_ASM_EXTABLE(666b, 667b)
 
@@ -1000,7 +1000,8 @@ static void __init l1tf_select_mitigation(void)
 #endif
 
 	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
-	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
+	if (l1tf_mitigation != L1TF_MITIGATION_OFF &&
+	    e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
 		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
 		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
 				half_pa);
@@ -8011,7 +8011,10 @@ static __init int hardware_setup(void)
 
 	kvm_mce_cap_supported |= MCG_LMCE_P;
 
-	return alloc_kvm_area();
+	r = alloc_kvm_area();
+	if (r)
+		goto out;
+	return 0;
 
 out:
 	for (i = 0; i < VMX_BITMAP_NR; i++)
@@ -932,7 +932,7 @@ unsigned long max_swapfile_size(void)
 
 	pages = generic_max_swapfile_size();
 
-	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+	if (boot_cpu_has_bug(X86_BUG_L1TF) && l1tf_mitigation != L1TF_MITIGATION_OFF) {
 		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
 		unsigned long long l1tf_limit = l1tf_pfn_limit();
 		/*
@@ -585,7 +585,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 							   paddr_end,
 							   page_size_mask,
 							   prot);
-			__flush_tlb_all();
 			continue;
 		}
 		/*
@@ -628,7 +627,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		pud_populate(&init_mm, pud, pmd);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-	__flush_tlb_all();
 
 	update_page_count(PG_LEVEL_1G, pages);
 
@@ -669,7 +667,6 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 			paddr_last = phys_pud_init(pud, paddr,
 					paddr_end,
 					page_size_mask);
-			__flush_tlb_all();
 			continue;
 		}
 
@@ -681,7 +678,6 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 		p4d_populate(&init_mm, p4d, pud);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-	__flush_tlb_all();
 
 	return paddr_last;
 }
@@ -734,8 +730,6 @@ kernel_physical_mapping_init(unsigned long paddr_start,
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
 
-	__flush_tlb_all();
-
 	return paddr_last;
 }
 
@@ -144,7 +144,7 @@ static int crypto_cfb_decrypt_segment(struct skcipher_walk *walk,
 
 	do {
 		crypto_cfb_encrypt_one(tfm, iv, dst);
-		crypto_xor(dst, iv, bsize);
+		crypto_xor(dst, src, bsize);
 		iv = src;
 
 		src += bsize;
@@ -1736,6 +1736,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("xts(aes)");
 		ret += tcrypt_test("ctr(aes)");
 		ret += tcrypt_test("rfc3686(ctr(aes))");
+		ret += tcrypt_test("cfb(aes)");
 		break;
 
 	case 11:
@@ -2062,6 +2063,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 				speed_template_16_24_32);
 		test_cipher_speed("ctr(aes)", DECRYPT, sec, NULL, 0,
 				speed_template_16_24_32);
+		test_cipher_speed("cfb(aes)", ENCRYPT, sec, NULL, 0,
+				speed_template_16_24_32);
+		test_cipher_speed("cfb(aes)", DECRYPT, sec, NULL, 0,
+				speed_template_16_24_32);
 		break;
 
 	case 201:
@@ -2696,6 +2696,13 @@ static const struct alg_test_desc alg_test_descs[] = {
 			.dec = __VECS(aes_ccm_dec_tv_template)
 			}
 		}
+	}, {
+		.alg = "cfb(aes)",
+		.test = alg_test_skcipher,
+		.fips_allowed = 1,
+		.suite = {
+			.cipher = __VECS(aes_cfb_tv_template)
+		},
 	}, {
 		.alg = "chacha20",
 		.test = alg_test_skcipher,
@@ -12575,6 +12575,82 @@ static const struct cipher_testvec aes_cbc_tv_template[] = {
 	},
 };
 
+static const struct cipher_testvec aes_cfb_tv_template[] = {
+	{ /* From NIST SP800-38A */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+			  "\xc8\xa6\x45\x37\xa0\xb3\xa9\x3f"
+			  "\xcd\xe3\xcd\xad\x9f\x1c\xe5\x8b"
+			  "\x26\x75\x1f\x67\xa3\xcb\xb1\x40"
+			  "\xb1\x80\x8c\xf1\x87\xa4\xf4\xdf"
+			  "\xc0\x4b\x05\x35\x7c\x5d\x1c\x0e"
+			  "\xea\xc4\xc6\x6f\x9f\xf7\xf2\xe6",
+		.len	= 64,
+	}, {
+		.key	= "\x8e\x73\xb0\xf7\xda\x0e\x64\x52"
+			  "\xc8\x10\xf3\x2b\x80\x90\x79\xe5"
+			  "\x62\xf8\xea\xd2\x52\x2c\x6b\x7b",
+		.klen	= 24,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\xcd\xc8\x0d\x6f\xdd\xf1\x8c\xab"
+			  "\x34\xc2\x59\x09\xc9\x9a\x41\x74"
+			  "\x67\xce\x7f\x7f\x81\x17\x36\x21"
+			  "\x96\x1a\x2b\x70\x17\x1d\x3d\x7a"
+			  "\x2e\x1e\x8a\x1d\xd5\x9b\x88\xb1"
+			  "\xc8\xe6\x0f\xed\x1e\xfa\xc4\xc9"
+			  "\xc0\x5f\x9f\x9c\xa9\x83\x4f\xa0"
+			  "\x42\xae\x8f\xba\x58\x4b\x09\xff",
+		.len	= 64,
+	}, {
+		.key	= "\x60\x3d\xeb\x10\x15\xca\x71\xbe"
+			  "\x2b\x73\xae\xf0\x85\x7d\x77\x81"
+			  "\x1f\x35\x2c\x07\x3b\x61\x08\xd7"
+			  "\x2d\x98\x10\xa3\x09\x14\xdf\xf4",
+		.klen	= 32,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae\x2d\x8a\x57\x1e\x03\xac\x9c"
+			  "\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
+			  "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11"
+			  "\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
+			  "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17"
+			  "\xad\x2b\x41\x7b\xe6\x6c\x37\x10",
+		.ctext	= "\xdc\x7e\x84\xbf\xda\x79\x16\x4b"
+			  "\x7e\xcd\x84\x86\x98\x5d\x38\x60"
+			  "\x39\xff\xed\x14\x3b\x28\xb1\xc8"
+			  "\x32\x11\x3c\x63\x31\xe5\x40\x7b"
+			  "\xdf\x10\x13\x24\x15\xe5\x4b\x92"
+			  "\xa1\x3e\xd0\xa8\x26\x7a\xe2\xf9"
+			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
+			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
+		.len	= 64,
+	},
+};
+
 static const struct aead_testvec hmac_md5_ecb_cipher_null_enc_tv_template[] = {
 	{ /* Input data from RFC 2410 Case 1 */
 #ifdef __LITTLE_ENDIAN
@@ -366,14 +366,16 @@ void platform_msi_domain_free(struct irq_domain *domain, unsigned int virq,
 			      unsigned int nvec)
 {
 	struct platform_msi_priv_data *data = domain->host_data;
-	struct msi_desc *desc;
-	for_each_msi_entry(desc, data->dev) {
+	struct msi_desc *desc, *tmp;
+	for_each_msi_entry_safe(desc, tmp, data->dev) {
 		if (WARN_ON(!desc->irq || desc->nvec_used != 1))
 			return;
 		if (!(desc->irq >= virq && desc->irq < (virq + nvec)))
 			continue;
 
 		irq_domain_free_irqs_common(domain, desc->irq, 1);
+		list_del(&desc->list);
+		free_msi_entry(desc);
 	}
 }
 
@@ -477,13 +477,15 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
 
 	if (need_locality) {
 		rc = tpm_request_locality(chip, flags);
-		if (rc < 0)
-			goto out_no_locality;
+		if (rc < 0) {
+			need_locality = false;
+			goto out_locality;
+		}
 	}
 
 	rc = tpm_cmd_ready(chip, flags);
 	if (rc)
-		goto out;
+		goto out_locality;
 
 	rc = tpm2_prepare_space(chip, space, ordinal, buf);
 	if (rc)
@@ -547,14 +549,13 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
 		dev_err(&chip->dev, "tpm2_commit_space: error %d\n", rc);
 
 out:
-	rc = tpm_go_idle(chip, flags);
-	if (rc)
-		goto out;
+	/* may fail but do not override previous error value in rc */
+	tpm_go_idle(chip, flags);
 
+out_locality:
 	if (need_locality)
 		tpm_relinquish_locality(chip, flags);
 
 out_no_locality:
 	if (chip->ops->clk_enable != NULL)
 		chip->ops->clk_enable(chip, false);
 
@@ -369,6 +369,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
 	struct device *dev = chip->dev.parent;
 	struct i2c_client *client = to_i2c_client(dev);
 	u32 ordinal;
+	unsigned long duration;
 	size_t count = 0;
 	int burst_count, bytes2write, retries, rc = -EIO;
 
@@ -455,10 +456,12 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
 		return rc;
 	}
 	ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
-	rc = i2c_nuvoton_wait_for_data_avail(chip,
-					     tpm_calc_ordinal_duration(chip,
-								       ordinal),
-					     &priv->read_queue);
+	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+		duration = tpm2_calc_ordinal_duration(chip, ordinal);
+	else
+		duration = tpm_calc_ordinal_duration(chip, ordinal);
+
+	rc = i2c_nuvoton_wait_for_data_avail(chip, duration, &priv->read_queue);
 	if (rc) {
 		dev_err(dev, "%s() timeout command duration\n", __func__);
 		i2c_nuvoton_ready(chip);
@@ -382,7 +382,7 @@ static struct rockchip_clk_branch common_clk_branches[] __initdata = {
 	COMPOSITE_NOMUX(0, "spdif_pre", "i2s_src", 0,
 			RK2928_CLKSEL_CON(5), 0, 7, DFLAGS,
 			RK2928_CLKGATE_CON(0), 13, GFLAGS),
-	COMPOSITE_FRACMUX(0, "spdif_frac", "spdif_pll", CLK_SET_RATE_PARENT,
+	COMPOSITE_FRACMUX(0, "spdif_frac", "spdif_pre", CLK_SET_RATE_PARENT,
 			RK2928_CLKSEL_CON(9), 0,
 			RK2928_CLKGATE_CON(0), 14, GFLAGS,
 			&common_spdif_fracmux),
@@ -19,6 +19,17 @@ struct _ccu_nm {
 	unsigned long	m, min_m, max_m;
 };
 
+static unsigned long ccu_nm_calc_rate(unsigned long parent,
+				      unsigned long n, unsigned long m)
+{
+	u64 rate = parent;
+
+	rate *= n;
+	do_div(rate, m);
+
+	return rate;
+}
+
 static void ccu_nm_find_best(unsigned long parent, unsigned long rate,
 			     struct _ccu_nm *nm)
 {
@@ -28,7 +39,8 @@ static void ccu_nm_find_best(unsigned long parent, unsigned long rate,
 
 	for (_n = nm->min_n; _n <= nm->max_n; _n++) {
 		for (_m = nm->min_m; _m <= nm->max_m; _m++) {
-			unsigned long tmp_rate = parent * _n / _m;
+			unsigned long tmp_rate = ccu_nm_calc_rate(parent,
+								  _n, _m);
 
 			if (tmp_rate > rate)
 				continue;
@@ -100,7 +112,7 @@ static unsigned long ccu_nm_recalc_rate(struct clk_hw *hw,
 	if (ccu_sdm_helper_is_enabled(&nm->common, &nm->sdm))
 		rate = ccu_sdm_helper_read_rate(&nm->common, &nm->sdm, m, n);
 	else
-		rate = parent_rate * n / m;
+		rate = ccu_nm_calc_rate(parent_rate, n, m);
 
 	if (nm->common.features & CCU_FEATURE_FIXED_POSTDIV)
 		rate /= nm->fixed_post_div;
@@ -142,7 +154,7 @@ static long ccu_nm_round_rate(struct clk_hw *hw, unsigned long rate,
 	_nm.max_m = nm->m.max ?: 1 << nm->m.width;
 
 	ccu_nm_find_best(*parent_rate, rate, &_nm);
-	rate = *parent_rate * _nm.n / _nm.m;
+	rate = ccu_nm_calc_rate(*parent_rate, _nm.n, _nm.m);
 
 	if (nm->common.features & CCU_FEATURE_FIXED_POSTDIV)
 		rate /= nm->fixed_post_div;
@@ -290,6 +290,7 @@ config CLKSRC_MPS2
 
 config ARC_TIMERS
 	bool "Support for 32-bit TIMERn counters in ARC Cores" if COMPILE_TEST
+	depends on GENERIC_SCHED_CLOCK
 	select TIMER_OF
 	help
 	  These are legacy 32-bit TIMER0 and TIMER1 counters found on all ARC cores
@@ -23,6 +23,7 @@
 #include <linux/cpu.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
+#include <linux/sched_clock.h>
 
 #include <soc/arc/timers.h>
 #include <soc/arc/mcip.h>
@@ -88,6 +89,11 @@ static u64 arc_read_gfrc(struct clocksource *cs)
 	return (((u64)h) << 32) | l;
 }
 
+static notrace u64 arc_gfrc_clock_read(void)
+{
+	return arc_read_gfrc(NULL);
+}
+
 static struct clocksource arc_counter_gfrc = {
 	.name   = "ARConnect GFRC",
 	.rating = 400,
@@ -111,6 +117,8 @@ static int __init arc_cs_setup_gfrc(struct device_node *node)
 	if (ret)
 		return ret;
 
+	sched_clock_register(arc_gfrc_clock_read, 64, arc_timer_freq);
+
 	return clocksource_register_hz(&arc_counter_gfrc, arc_timer_freq);
 }
 TIMER_OF_DECLARE(arc_gfrc, "snps,archs-timer-gfrc", arc_cs_setup_gfrc);
@@ -139,6 +147,11 @@ static u64 arc_read_rtc(struct clocksource *cs)
 	return (((u64)h) << 32) | l;
 }
 
+static notrace u64 arc_rtc_clock_read(void)
+{
+	return arc_read_rtc(NULL);
+}
+
 static struct clocksource arc_counter_rtc = {
 	.name   = "ARCv2 RTC",
 	.rating = 350,
@@ -170,6 +183,8 @@ static int __init arc_cs_setup_rtc(struct device_node *node)
 
 	write_aux_reg(AUX_RTC_CTRL, 1);
 
+	sched_clock_register(arc_rtc_clock_read, 64, arc_timer_freq);
+
 	return clocksource_register_hz(&arc_counter_rtc, arc_timer_freq);
 }
 TIMER_OF_DECLARE(arc_rtc, "snps,archs-timer-rtc", arc_cs_setup_rtc);
@@ -185,6 +200,11 @@ static u64 arc_read_timer1(struct clocksource *cs)
 	return (u64) read_aux_reg(ARC_REG_TIMER1_CNT);
 }
 
+static notrace u64 arc_timer1_clock_read(void)
+{
+	return arc_read_timer1(NULL);
+}
+
 static struct clocksource arc_counter_timer1 = {
 	.name   = "ARC Timer1",
 	.rating = 300,
@@ -209,6 +229,8 @@ static int __init arc_cs_setup_timer1(struct device_node *node)
 	write_aux_reg(ARC_REG_TIMER1_CNT, 0);
 	write_aux_reg(ARC_REG_TIMER1_CTRL, TIMER_CTRL_NH);
 
+	sched_clock_register(arc_timer1_clock_read, 32, arc_timer_freq);
+
 	return clocksource_register_hz(&arc_counter_timer1, arc_timer_freq);
 }
 
@@ -73,7 +73,7 @@ static int flexi_aes_keylen(int keylen)
 static int nitrox_skcipher_init(struct crypto_skcipher *tfm)
 {
 	struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm);
-	void *fctx;
+	struct crypto_ctx_hdr *chdr;
 
 	/* get the first device */
 	nctx->ndev = nitrox_get_first_device();
@@ -81,12 +81,14 @@ static int nitrox_skcipher_init(struct crypto_skcipher *tfm)
 		return -ENODEV;
 
 	/* allocate nitrox crypto context */
-	fctx = crypto_alloc_context(nctx->ndev);
-	if (!fctx) {
+	chdr = crypto_alloc_context(nctx->ndev);
+	if (!chdr) {
 		nitrox_put_device(nctx->ndev);
 		return -ENOMEM;
 	}
-	nctx->u.ctx_handle = (uintptr_t)fctx;
+	nctx->chdr = chdr;
+	nctx->u.ctx_handle = (uintptr_t)((u8 *)chdr->vaddr +
+					 sizeof(struct ctx_hdr));
 	crypto_skcipher_set_reqsize(tfm, crypto_skcipher_reqsize(tfm) +
 				    sizeof(struct nitrox_kcrypt_request));
 	return 0;
@@ -102,7 +104,7 @@ static void nitrox_skcipher_exit(struct crypto_skcipher *tfm)
 
 		memset(&fctx->crypto, 0, sizeof(struct crypto_keys));
 		memset(&fctx->auth, 0, sizeof(struct auth_keys));
-		crypto_free_context((void *)fctx);
+		crypto_free_context((void *)nctx->chdr);
 	}
 	nitrox_put_device(nctx->ndev);
 
@@ -146,20 +146,31 @@ static void destroy_crypto_dma_pool(struct nitrox_device *ndev)
 void *crypto_alloc_context(struct nitrox_device *ndev)
 {
 	struct ctx_hdr *ctx;
+	struct crypto_ctx_hdr *chdr;
 	void *vaddr;
 	dma_addr_t dma;
 
-	vaddr = dma_pool_alloc(ndev->ctx_pool, (GFP_KERNEL | __GFP_ZERO), &dma);
-	if (!vaddr)
+	chdr = kmalloc(sizeof(*chdr), GFP_KERNEL);
+	if (!chdr)
 		return NULL;
 
+	vaddr = dma_pool_alloc(ndev->ctx_pool, (GFP_KERNEL | __GFP_ZERO), &dma);
+	if (!vaddr) {
+		kfree(chdr);
+		return NULL;
+	}
+
 	/* fill meta data */
 	ctx = vaddr;
 	ctx->pool = ndev->ctx_pool;
 	ctx->dma = dma;
 	ctx->ctx_dma = dma + sizeof(struct ctx_hdr);
 
-	return ((u8 *)vaddr + sizeof(struct ctx_hdr));
+	chdr->pool = ndev->ctx_pool;
+	chdr->dma = dma;
+	chdr->vaddr = vaddr;
+
+	return chdr;
 }
 
 /**
@@ -168,13 +179,14 @@ void *crypto_alloc_context(struct nitrox_device *ndev)
  */
 void crypto_free_context(void *ctx)
 {
-	struct ctx_hdr *ctxp;
+	struct crypto_ctx_hdr *ctxp;
 
 	if (!ctx)
 		return;
 
-	ctxp = (struct ctx_hdr *)((u8 *)ctx - sizeof(struct ctx_hdr));
-	dma_pool_free(ctxp->pool, ctxp, ctxp->dma);
+	ctxp = ctx;
+	dma_pool_free(ctxp->pool, ctxp->vaddr, ctxp->dma);
+	kfree(ctxp);
 }
 
 /**
@@ -181,12 +181,19 @@ struct flexi_crypto_context {
 	struct auth_keys auth;
 };
 
+struct crypto_ctx_hdr {
+	struct dma_pool *pool;
+	dma_addr_t dma;
+	void *vaddr;
+};
+
 struct nitrox_crypto_ctx {
 	struct nitrox_device *ndev;
 	union {
 		u64 ctx_handle;
		struct flexi_crypto_context *fctx;
 	} u;
+	struct crypto_ctx_hdr *chdr;
 };
 
 struct nitrox_kcrypt_request {
@@ -303,7 +303,10 @@ static bool chcr_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 
 static inline int is_eth_imm(const struct sk_buff *skb, unsigned int kctx_len)
 {
-	int hdrlen = sizeof(struct chcr_ipsec_req) + kctx_len;
+	int hdrlen;
+
+	hdrlen = sizeof(struct fw_ulptx_wr) +
+		 sizeof(struct chcr_ipsec_req) + kctx_len;
 
 	hdrlen += sizeof(struct cpl_tx_pkt);
 	if (skb->len <= MAX_IMM_TX_PKT_LEN - hdrlen)
@@ -350,15 +350,10 @@ int udl_driver_load(struct drm_device *dev, unsigned long flags)
 	if (ret)
 		goto err;
 
-	ret = drm_vblank_init(dev, 1);
-	if (ret)
-		goto err_fb;
-
 	drm_kms_helper_poll_init(dev);
 
 	return 0;
-err_fb:
-	udl_fbdev_cleanup(dev);
+
 err:
 	if (udl->urbs.count)
 		udl_free_urb_list(dev);
@@ -71,11 +71,14 @@ static int v3d_v3d_debugfs_regs(struct seq_file *m, void *unused)
 			   V3D_READ(v3d_hub_reg_defs[i].reg));
 	}
 
-	for (i = 0; i < ARRAY_SIZE(v3d_gca_reg_defs); i++) {
-		seq_printf(m, "%s (0x%04x): 0x%08x\n",
-			   v3d_gca_reg_defs[i].name, v3d_gca_reg_defs[i].reg,
-			   V3D_GCA_READ(v3d_gca_reg_defs[i].reg));
+	if (v3d->ver < 41) {
+		for (i = 0; i < ARRAY_SIZE(v3d_gca_reg_defs); i++) {
+			seq_printf(m, "%s (0x%04x): 0x%08x\n",
+				   v3d_gca_reg_defs[i].name,
+				   v3d_gca_reg_defs[i].reg,
+				   V3D_GCA_READ(v3d_gca_reg_defs[i].reg));
+		}
 	}
 
 	for (core = 0; core < v3d->cores; core++) {
 		for (i = 0; i < ARRAY_SIZE(v3d_core_reg_defs); i++) {
@@ -1141,6 +1141,8 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
 
 			if (slen > len)
 				slen = len;
+			if (slen > ss->sge.sge_length)
+				slen = ss->sge.sge_length;
 			rvt_update_sge(ss, slen, false);
 			seg_pio_copy_mid(pbuf, addr, slen);
 			len -= slen;
@@ -1336,6 +1336,7 @@ MODULE_DEVICE_TABLE(i2c, elan_id);
 static const struct acpi_device_id elan_acpi_id[] = {
 	{ "ELAN0000", 0 },
 	{ "ELAN0100", 0 },
+	{ "ELAN0501", 0 },
 	{ "ELAN0600", 0 },
 	{ "ELAN0602", 0 },
 	{ "ELAN0605", 0 },
@@ -1586,10 +1586,10 @@ static int mxt_update_cfg(struct mxt_data *data, const struct firmware *fw)
 	/* T7 config may have changed */
 	mxt_init_t7_power_cfg(data);
 
-release_raw:
-	kfree(cfg.raw);
 release_mem:
 	kfree(cfg.mem);
+release_raw:
+	kfree(cfg.raw);
 	return ret;
 }
 
@@ -837,7 +837,13 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
 		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
-		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA, ent->sync.msidata);
+		/*
+		 * Commands are written little-endian, but we want the SMMU to
+		 * receive MSIData, and thus write it back to memory, in CPU
+		 * byte order, so big-endian needs an extra byteswap here.
+		 */
+		cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
+				     cpu_to_le32(ent->sync.msidata));
 		cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
 		break;
 	default:
@@ -852,7 +852,7 @@ u16 capi20_get_manufacturer(u32 contr, u8 *buf)
 	u16 ret;
 
 	if (contr == 0) {
-		strlcpy(buf, capi_manufakturer, CAPI_MANUFACTURER_LEN);
+		strncpy(buf, capi_manufakturer, CAPI_MANUFACTURER_LEN);
 		return CAPI_NOERROR;
 	}
 
@@ -860,7 +860,7 @@ u16 capi20_get_manufacturer(u32 contr, u8 *buf)
 
 	ctr = get_capi_ctr_by_nr(contr);
 	if (ctr && ctr->state == CAPI_CTR_RUNNING) {
-		strlcpy(buf, ctr->manu, CAPI_MANUFACTURER_LEN);
+		strncpy(buf, ctr->manu, CAPI_MANUFACTURER_LEN);
 		ret = CAPI_NOERROR;
 	} else
 		ret = CAPI_REGNOTINSTALLED;
@@ -442,7 +442,7 @@ int cec_thread_func(void *_adap)
 				(adap->needs_hpd &&
 				 (!adap->is_configured && !adap->is_configuring)) ||
 				kthread_should_stop() ||
-				(!adap->transmitting &&
+				(!adap->transmit_in_progress &&
 				 !list_empty(&adap->transmit_queue)),
 				msecs_to_jiffies(CEC_XFER_TIMEOUT_MS));
 			timeout = err == 0;
@@ -450,7 +450,7 @@ int cec_thread_func(void *_adap)
 			/* Otherwise we just wait for something to happen. */
 			wait_event_interruptible(adap->kthread_waitq,
 				kthread_should_stop() ||
-				(!adap->transmitting &&
+				(!adap->transmit_in_progress &&
 				 !list_empty(&adap->transmit_queue)));
 		}
 
@@ -475,6 +475,7 @@ int cec_thread_func(void *_adap)
 				pr_warn("cec-%s: message %*ph timed out\n", adap->name,
 					adap->transmitting->msg.len,
 					adap->transmitting->msg.msg);
+				adap->transmit_in_progress = false;
 				adap->tx_timeouts++;
 				/* Just give up on this. */
 				cec_data_cancel(adap->transmitting,
@@ -486,7 +487,7 @@ int cec_thread_func(void *_adap)
 		 * If we are still transmitting, or there is nothing new to
 		 * transmit, then just continue waiting.
 		 */
-		if (adap->transmitting || list_empty(&adap->transmit_queue))
+		if (adap->transmit_in_progress || list_empty(&adap->transmit_queue))
 			goto unlock;
 
 		/* Get a new message to transmit */
@@ -532,6 +533,8 @@ int cec_thread_func(void *_adap)
 		if (adap->ops->adap_transmit(adap, data->attempts,
 					     signal_free_time, &data->msg))
 			cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+		else
+			adap->transmit_in_progress = true;
 
 unlock:
 		mutex_unlock(&adap->lock);
@@ -562,14 +565,17 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
 	data = adap->transmitting;
 	if (!data) {
 		/*
-		 * This can happen if a transmit was issued and the cable is
+		 * This might happen if a transmit was issued and the cable is
 		 * unplugged while the transmit is ongoing. Ignore this
 		 * transmit in that case.
 		 */
-		dprintk(1, "%s was called without an ongoing transmit!\n",
-			__func__);
-		goto unlock;
+		if (!adap->transmit_in_progress)
+			dprintk(1, "%s was called without an ongoing transmit!\n",
+				__func__);
+		adap->transmit_in_progress = false;
+		goto wake_thread;
 	}
+	adap->transmit_in_progress = false;
 
 	msg = &data->msg;
 
@@ -635,7 +641,6 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 status,
 	 * for transmitting or to retry the current message.
 	 */
 	wake_up_interruptible(&adap->kthread_waitq);
-unlock:
 	mutex_unlock(&adap->lock);
 }
 EXPORT_SYMBOL_GPL(cec_transmit_done_ts);
@@ -1483,8 +1488,11 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
 		if (adap->monitor_all_cnt)
 			WARN_ON(call_op(adap, adap_monitor_all_enable, false));
 		mutex_lock(&adap->devnode.lock);
-		if (adap->needs_hpd || list_empty(&adap->devnode.fhs))
+		if (adap->needs_hpd || list_empty(&adap->devnode.fhs)) {
 			WARN_ON(adap->ops->adap_enable(adap, false));
+			adap->transmit_in_progress = false;
+			wake_up_interruptible(&adap->kthread_waitq);
+		}
 		mutex_unlock(&adap->devnode.lock);
 		if (phys_addr == CEC_PHYS_ADDR_INVALID)
 			return;
@@ -1492,6 +1500,7 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
 
 	mutex_lock(&adap->devnode.lock);
 	adap->last_initiator = 0xff;
+	adap->transmit_in_progress = false;
 
 	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
 	    adap->ops->adap_enable(adap, true)) {
@@ -601,8 +601,9 @@ static void cec_pin_tx_states(struct cec_pin *pin, ktime_t ts)
 			break;
 		/* Was the message ACKed? */
 		ack = cec_msg_is_broadcast(&pin->tx_msg) ? v : !v;
-		if (!ack && !pin->tx_ignore_nack_until_eom &&
-		    pin->tx_bit / 10 < pin->tx_msg.len && !pin->tx_post_eom) {
+		if (!ack && (!pin->tx_ignore_nack_until_eom ||
+			     pin->tx_bit / 10 == pin->tx_msg.len - 1) &&
+		    !pin->tx_post_eom) {
 			/*
 			 * Note: the CEC spec is ambiguous regarding
 			 * what action to take when a NACK appears
@@ -1738,7 +1738,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
 	unsigned s;	\
 	\
 	for (s = 0; s < len; s++) {	\
-		u8 chr = font8x16[text[s] * 16 + line];	\
+		u8 chr = font8x16[(u8)text[s] * 16 + line];	\
 		\
 		if (hdiv == 2 && tpg->hflip) { \
 			pos[3] = (chr & (0x01 << 6) ? fg : bg);	\
@@ -800,6 +800,9 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
 		memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
 		q->memory = memory;
 		q->waiting_for_buffers = !q->is_output;
+	} else if (q->memory != memory) {
+		dprintk(1, "memory model mismatch\n");
+		return -EINVAL;
 	}
 
 	num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers);
@@ -636,16 +636,19 @@ static int imx274_write_table(struct stimx274 *priv, const struct reg_8 table[])
 
 static inline int imx274_read_reg(struct stimx274 *priv, u16 addr, u8 *val)
 {
+	unsigned int uint_val;
 	int err;
 
-	err = regmap_read(priv->regmap, addr, (unsigned int *)val);
+	err = regmap_read(priv->regmap, addr, &uint_val);
 	if (err)
 		dev_err(&priv->client->dev,
 			"%s : i2c read failed, addr = %x\n", __func__, addr);
 	else
 		dev_dbg(&priv->client->dev,
 			"%s : addr 0x%x, val=0x%x\n", __func__,
-			addr, *val);
+			addr, uint_val);
+
+	*val = uint_val;
 	return err;
 }
 
@@ -438,6 +438,8 @@ void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls)
 		tpg_s_rgb_range(&dev->tpg, v4l2_ctrl_g_ctrl(dev->rgb_range_cap));
 		break;
 	}
+	vfree(dev->bitmap_cap);
+	dev->bitmap_cap = NULL;
 	vivid_update_quality(dev);
 	tpg_reset_source(&dev->tpg, dev->src_rect.width, dev->src_rect.height, dev->field_cap);
 	dev->crop_cap = dev->src_rect;
@@ -707,6 +707,7 @@ void rc_repeat(struct rc_dev *dev)
 		(dev->last_toggle ? LIRC_SCANCODE_FLAG_TOGGLE : 0)
 	};
 
-	ir_lirc_scancode_event(dev, &sc);
+	if (dev->allowed_protocols != RC_PROTO_BIT_CEC)
+		ir_lirc_scancode_event(dev, &sc);
 
 	spin_lock_irqsave(&dev->keylock, flags);
@@ -747,6 +748,7 @@ static void ir_do_keydown(struct rc_dev *dev, enum rc_proto protocol,
 		.keycode = keycode
 	};
 
-	ir_lirc_scancode_event(dev, &sc);
+	if (dev->allowed_protocols != RC_PROTO_BIT_CEC)
+		ir_lirc_scancode_event(dev, &sc);
 
 	if (new_event && dev->keypressed)
@@ -155,7 +155,6 @@ static int usb_urb_alloc_bulk_urbs(struct usb_data_stream *stream)
 				  stream->props.u.bulk.buffersize,
 				  usb_urb_complete, stream);
 
-		stream->urb_list[i]->transfer_flags = URB_FREE_BUFFER;
 		stream->urbs_initialized++;
 	}
 	return 0;
@@ -186,7 +185,7 @@ static int usb_urb_alloc_isoc_urbs(struct usb_data_stream *stream)
 		urb->complete = usb_urb_complete;
 		urb->pipe = usb_rcvisocpipe(stream->udev,
 				stream->props.endpoint);
-		urb->transfer_flags = URB_ISO_ASAP | URB_FREE_BUFFER;
+		urb->transfer_flags = URB_ISO_ASAP;
 		urb->interval = stream->props.u.isoc.interval;
 		urb->number_of_packets = stream->props.u.isoc.framesperurb;
 		urb->transfer_buffer_length = stream->props.u.isoc.framesize *
@@ -210,7 +209,7 @@ static int usb_free_stream_buffers(struct usb_data_stream *stream)
 	if (stream->state & USB_STATE_URB_BUF) {
 		while (stream->buf_num) {
 			stream->buf_num--;
-			stream->buf_list[stream->buf_num] = NULL;
+			kfree(stream->buf_list[stream->buf_num]);
 		}
 	}
 
@@ -318,7 +318,7 @@ static int read_afu_name(struct pci_dev *dev, struct ocxl_fn_config *fn,
 		if (rc)
 			return rc;
 		ptr = (u32 *) &afu->name[i];
-		*ptr = val;
+		*ptr = le32_to_cpu((__force __le32) val);
 	}
 	afu->name[OCXL_AFU_NAME_SZ - 1] = '\0'; /* play safe */
 	return 0;

@@ -566,7 +566,7 @@ int ocxl_link_update_pe(void *link_handle, int pasid, __u16 tid)

 	mutex_lock(&spa->spa_lock);

-	pe->tid = tid;
+	pe->tid = cpu_to_be32(tid);

 	/*
 	 * The barrier makes sure the PE is updated
@@ -444,9 +444,14 @@ static void marvell_nfc_enable_int(struct marvell_nfc *nfc, u32 int_mask)
 	writel_relaxed(reg & ~int_mask, nfc->regs + NDCR);
 }

-static void marvell_nfc_clear_int(struct marvell_nfc *nfc, u32 int_mask)
+static u32 marvell_nfc_clear_int(struct marvell_nfc *nfc, u32 int_mask)
 {
+	u32 reg;
+
+	reg = readl_relaxed(nfc->regs + NDSR);
 	writel_relaxed(int_mask, nfc->regs + NDSR);
+
+	return reg & int_mask;
 }

 static void marvell_nfc_force_byte_access(struct nand_chip *chip,

@@ -613,6 +618,7 @@ static int marvell_nfc_wait_cmdd(struct nand_chip *chip)
 static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
 {
 	struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
+	u32 pending;
 	int ret;

 	/* Timeout is expressed in ms */

@@ -625,8 +631,13 @@ static int marvell_nfc_wait_op(struct nand_chip *chip, unsigned int timeout_ms)
 	ret = wait_for_completion_timeout(&nfc->complete,
 					  msecs_to_jiffies(timeout_ms));
 	marvell_nfc_disable_int(nfc, NDCR_RDYM);
-	marvell_nfc_clear_int(nfc, NDSR_RDY(0) | NDSR_RDY(1));
-	if (!ret) {
+	pending = marvell_nfc_clear_int(nfc, NDSR_RDY(0) | NDSR_RDY(1));
+
+	/*
+	 * In case the interrupt was not served in the required time frame,
+	 * check if the ISR was not served or if something went actually wrong.
+	 */
+	if (!ret && !pending) {
 		dev_err(nfc->dev, "Timeout waiting for RB signal\n");
 		return -ETIMEDOUT;
 	}
@@ -1938,7 +1938,7 @@ static int omap_nand_attach_chip(struct nand_chip *chip)
 	case NAND_OMAP_PREFETCH_DMA:
 		dma_cap_zero(mask);
 		dma_cap_set(DMA_SLAVE, mask);
-		info->dma = dma_request_chan(dev, "rxtx");
+		info->dma = dma_request_chan(dev->parent, "rxtx");

 		if (IS_ERR(info->dma)) {
 			dev_err(dev, "DMA engine request failed\n");
@@ -41,7 +41,7 @@ config SPI_ASPEED_SMC

 config SPI_ATMEL_QUADSPI
 	tristate "Atmel Quad SPI Controller"
-	depends on ARCH_AT91 || (ARM && COMPILE_TEST)
+	depends on ARCH_AT91 || (ARM && COMPILE_TEST && !ARCH_EBSA110)
 	depends on OF && HAS_IOMEM
 	help
 	  This enables support for the Quad SPI controller in master mode.
@@ -61,7 +61,8 @@
 #define MACB_TX_ERR_FLAGS	(MACB_BIT(ISR_TUND)	\
 					| MACB_BIT(ISR_RLE)	\
 					| MACB_BIT(TXERR))
-#define MACB_TX_INT_FLAGS	(MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP))
+#define MACB_TX_INT_FLAGS	(MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP)	\
+					| MACB_BIT(TXUBR))

 /* Max length of transmit frame must be a multiple of 8 bytes */
 #define MACB_TX_LEN_ALIGN	8

@@ -1313,6 +1314,21 @@ static void macb_hresp_error_task(unsigned long data)
 	netif_tx_start_all_queues(dev);
 }

+static void macb_tx_restart(struct macb_queue *queue)
+{
+	unsigned int head = queue->tx_head;
+	unsigned int tail = queue->tx_tail;
+	struct macb *bp = queue->bp;
+
+	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+		queue_writel(queue, ISR, MACB_BIT(TXUBR));
+
+	if (head == tail)
+		return;
+
+	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+}
+
 static irqreturn_t macb_interrupt(int irq, void *dev_id)
 {
 	struct macb_queue *queue = dev_id;

@@ -1370,6 +1386,9 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 		if (status & MACB_BIT(TCOMP))
 			macb_tx_interrupt(queue);

+		if (status & MACB_BIT(TXUBR))
+			macb_tx_restart(queue);
+
 		/* Link change detection isn't possible with RMII, so we'll
 		 * add that if/when we get our hands on a full-blown MII PHY.
 		 */
@@ -1172,11 +1172,15 @@ static netdev_tx_t ibmveth_start_xmit(struct sk_buff *skb,

map_failed_frags:
 	last = i+1;
-	for (i = 0; i < last; i++)
+	for (i = 1; i < last; i++)
 		dma_unmap_page(&adapter->vdev->dev, descs[i].fields.address,
 			       descs[i].fields.flags_len & IBMVETH_BUF_LEN_MASK,
 			       DMA_TO_DEVICE);

+	dma_unmap_single(&adapter->vdev->dev,
+			 descs[0].fields.address,
+			 descs[0].fields.flags_len & IBMVETH_BUF_LEN_MASK,
+			 DMA_TO_DEVICE);
+
map_failed:
 	if (!firmware_has_feature(FW_FEATURE_CMO))
 		netdev_err(netdev, "tx: unable to map xmit buffer\n");
@@ -406,7 +406,6 @@ struct mvneta_port {
 	struct mvneta_pcpu_stats __percpu	*stats;

 	int pkt_size;
-	unsigned int frag_size;
 	void __iomem *base;
 	struct mvneta_rx_queue *rxqs;
 	struct mvneta_tx_queue *txqs;

@@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
 	if (!pp->bm_priv) {
 		/* Set Offset */
 		mvneta_rxq_offset_set(pp, rxq, 0);
-		mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
+		mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
+					PAGE_SIZE :
+					MVNETA_RX_BUF_SIZE(pp->pkt_size));
 		mvneta_rxq_bm_disable(pp, rxq);
 		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {

@@ -3749,7 +3750,6 @@ static int mvneta_open(struct net_device *dev)
 	int ret;

 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-	pp->frag_size = PAGE_SIZE;

 	ret = mvneta_setup_rxqs(pp);
 	if (ret)
@@ -4292,12 +4292,15 @@ static void mvpp2_phylink_validate(struct net_device *dev,
 	case PHY_INTERFACE_MODE_10GKR:
 	case PHY_INTERFACE_MODE_XAUI:
 	case PHY_INTERFACE_MODE_NA:
-		phylink_set(mask, 10000baseCR_Full);
-		phylink_set(mask, 10000baseSR_Full);
-		phylink_set(mask, 10000baseLR_Full);
-		phylink_set(mask, 10000baseLRM_Full);
-		phylink_set(mask, 10000baseER_Full);
-		phylink_set(mask, 10000baseKR_Full);
+		if (port->gop_id == 0) {
+			phylink_set(mask, 10000baseT_Full);
+			phylink_set(mask, 10000baseCR_Full);
+			phylink_set(mask, 10000baseSR_Full);
+			phylink_set(mask, 10000baseLR_Full);
+			phylink_set(mask, 10000baseLRM_Full);
+			phylink_set(mask, 10000baseER_Full);
+			phylink_set(mask, 10000baseKR_Full);
+		}
 		/* Fall-through */
 	case PHY_INTERFACE_MODE_RGMII:
 	case PHY_INTERFACE_MODE_RGMII_ID:

@@ -4308,7 +4311,6 @@ static void mvpp2_phylink_validate(struct net_device *dev,
 		phylink_set(mask, 10baseT_Full);
 		phylink_set(mask, 100baseT_Half);
 		phylink_set(mask, 100baseT_Full);
-		phylink_set(mask, 10000baseT_Full);
 		/* Fall-through */
 	case PHY_INTERFACE_MODE_1000BASEX:
 	case PHY_INTERFACE_MODE_2500BASEX:
@@ -1101,11 +1101,6 @@ int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
 			       struct ethtool_ts_info *info)
 {
 	struct mlx5_core_dev *mdev = priv->mdev;
-	int ret;
-
-	ret = ethtool_op_get_ts_info(priv->netdev, info);
-	if (ret)
-		return ret;

 	info->phc_index = mlx5_clock_get_ptp_index(mdev);

@@ -1113,7 +1108,7 @@ int mlx5e_ethtool_get_ts_info(struct mlx5e_priv *priv,
 	    info->phc_index == -1)
 		return 0;

-	info->so_timestamping |= SOF_TIMESTAMPING_TX_HARDWARE |
+	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
				 SOF_TIMESTAMPING_RX_HARDWARE |
				 SOF_TIMESTAMPING_RAW_HARDWARE;
@@ -128,6 +128,8 @@ static bool mlx5e_rx_is_linear_skb(struct mlx5_core_dev *mdev,
 	return !params->lro_en && frag_sz <= PAGE_SIZE;
 }

+#define MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ ((BIT(__mlx5_bit_sz(wq, log_wqe_stride_size)) - 1) + \
+					  MLX5_MPWQE_LOG_STRIDE_SZ_BASE)
 static bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
					 struct mlx5e_params *params)
 {

@@ -138,6 +140,9 @@ static bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
 	if (!mlx5e_rx_is_linear_skb(mdev, params))
 		return false;

+	if (order_base_2(frag_sz) > MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ)
+		return false;
+
 	if (MLX5_CAP_GEN(mdev, ext_stride_num_range))
 		return true;

@@ -1383,6 +1388,7 @@ static void mlx5e_close_txqsq(struct mlx5e_txqsq *sq)
 	struct mlx5_core_dev *mdev = c->mdev;
 	struct mlx5_rate_limit rl = {0};

+	cancel_work_sync(&sq->dim.work);
 	mlx5e_destroy_sq(mdev, sq->sqn);
 	if (sq->rate_limit) {
 		rl.rate = sq->rate_limit;
@@ -1150,7 +1150,7 @@ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
 {
 	struct mlx5e_rq *rq = container_of(cq, struct mlx5e_rq, cq);
-	struct mlx5e_xdpsq *xdpsq;
+	struct mlx5e_xdpsq *xdpsq = &rq->xdpsq;
 	struct mlx5_cqe64 *cqe;
 	int work_done = 0;

@@ -1161,10 +1161,11 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
 		work_done += mlx5e_decompress_cqes_cont(rq, cq, 0, budget);

 	cqe = mlx5_cqwq_get_cqe(&cq->wq);
-	if (!cqe)
+	if (!cqe) {
+		if (unlikely(work_done))
+			goto out;
 		return 0;
-
-	xdpsq = &rq->xdpsq;
+	}

 	do {
 		if (mlx5_get_cqe_format(cqe) == MLX5_COMPRESSED) {

@@ -1179,6 +1180,7 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
 		rq->handle_rx_cqe(rq, cqe);
 	} while ((++work_done < budget) && (cqe = mlx5_cqwq_get_cqe(&cq->wq)));

+out:
 	if (xdpsq->doorbell) {
 		mlx5e_xmit_xdp_doorbell(xdpsq);
 		xdpsq->doorbell = false;
@@ -73,7 +73,6 @@ static const struct counter_desc sw_stats_desc[] = {
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_recover) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_cqes) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_wake) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_udp_seg_rem) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_cqe_err) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_xmit) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_full) },

@@ -194,7 +193,6 @@ void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
 		s->tx_nop		+= sq_stats->nop;
 		s->tx_queue_stopped	+= sq_stats->stopped;
 		s->tx_queue_wake	+= sq_stats->wake;
-		s->tx_udp_seg_rem	+= sq_stats->udp_seg_rem;
 		s->tx_queue_dropped	+= sq_stats->dropped;
 		s->tx_cqe_err		+= sq_stats->cqe_err;
 		s->tx_recover		+= sq_stats->recover;

@@ -86,7 +86,6 @@ struct mlx5e_sw_stats {
 	u64 tx_recover;
 	u64 tx_cqes;
 	u64 tx_queue_wake;
-	u64 tx_udp_seg_rem;
 	u64 tx_cqe_err;
 	u64 tx_xdp_xmit;
 	u64 tx_xdp_full;

@@ -217,7 +216,6 @@ struct mlx5e_sq_stats {
 	u64 csum_partial_inner;
 	u64 added_vlan_packets;
 	u64 nop;
-	u64 udp_seg_rem;
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tls_ooo;
 	u64 tls_resync_bytes;
@@ -432,7 +432,7 @@ static void del_sw_hw_rule(struct fs_node *node)

 	if ((fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) &&
 	    --fte->dests_size) {
-		modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST),
+		modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST);
 		update_fte = true;
 	}
out:
@@ -81,6 +81,7 @@ struct mlxsw_core {
 	struct mlxsw_core_port *ports;
 	unsigned int max_ports;
 	bool reload_fail;
+	bool fw_flash_in_progress;
 	unsigned long driver_priv[0];
 	/* driver_priv has to be always the last item */
 };

@@ -428,12 +429,16 @@ struct mlxsw_reg_trans {
 	struct rcu_head rcu;
 };

+#define MLXSW_EMAD_TIMEOUT_DURING_FW_FLASH_MS	3000
 #define MLXSW_EMAD_TIMEOUT_MS			200

 static void mlxsw_emad_trans_timeout_schedule(struct mlxsw_reg_trans *trans)
 {
 	unsigned long timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_MS);

+	if (trans->core->fw_flash_in_progress)
+		timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_DURING_FW_FLASH_MS);
+
 	queue_delayed_work(trans->core->emad_wq, &trans->timeout_dw, timeout);
 }

@@ -1854,6 +1859,18 @@ int mlxsw_core_kvd_sizes_get(struct mlxsw_core *mlxsw_core,
 }
 EXPORT_SYMBOL(mlxsw_core_kvd_sizes_get);

+void mlxsw_core_fw_flash_start(struct mlxsw_core *mlxsw_core)
+{
+	mlxsw_core->fw_flash_in_progress = true;
+}
+EXPORT_SYMBOL(mlxsw_core_fw_flash_start);
+
+void mlxsw_core_fw_flash_end(struct mlxsw_core *mlxsw_core)
+{
+	mlxsw_core->fw_flash_in_progress = false;
+}
+EXPORT_SYMBOL(mlxsw_core_fw_flash_end);
+
 static int __init mlxsw_core_module_init(void)
 {
 	int err;

@@ -292,6 +292,9 @@ int mlxsw_core_kvd_sizes_get(struct mlxsw_core *mlxsw_core,
 			     u64 *p_single_size, u64 *p_double_size,
 			     u64 *p_linear_size);

+void mlxsw_core_fw_flash_start(struct mlxsw_core *mlxsw_core);
+void mlxsw_core_fw_flash_end(struct mlxsw_core *mlxsw_core);
+
 bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core,
			  enum mlxsw_res_id res_id);

@@ -308,8 +308,13 @@ static int mlxsw_sp_firmware_flash(struct mlxsw_sp *mlxsw_sp,
 		},
 		.mlxsw_sp = mlxsw_sp
 	};
+	int err;
+
+	mlxsw_core_fw_flash_start(mlxsw_sp->core);
+	err = mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev, firmware);
+	mlxsw_core_fw_flash_end(mlxsw_sp->core);

-	return mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev, firmware);
+	return err;
 }

 static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp)
@@ -733,7 +733,7 @@ static int ocelot_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
 	}

 	return ocelot_mact_learn(ocelot, port->chip_port, addr, vid,
-				 ENTRYTYPE_NORMAL);
+				 ENTRYTYPE_LOCKED);
 }

 static int ocelot_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
@@ -375,13 +375,29 @@ nfp_flower_calculate_key_layers(struct nfp_app *app,
 	    !(tcp_flags & (TCPHDR_FIN | TCPHDR_SYN | TCPHDR_RST)))
 		return -EOPNOTSUPP;

-	/* We need to store TCP flags in the IPv4 key space, thus
-	 * we need to ensure we include a IPv4 key layer if we have
-	 * not done so already.
+	/* We need to store TCP flags in the either the IPv4 or IPv6 key
+	 * space, thus we need to ensure we include a IPv4/IPv6 key
+	 * layer if we have not done so already.
	 */
-	if (!(key_layer & NFP_FLOWER_LAYER_IPV4)) {
-		key_layer |= NFP_FLOWER_LAYER_IPV4;
-		key_size += sizeof(struct nfp_flower_ipv4);
+	if (!key_basic)
+		return -EOPNOTSUPP;
+
+	if (!(key_layer & NFP_FLOWER_LAYER_IPV4) &&
+	    !(key_layer & NFP_FLOWER_LAYER_IPV6)) {
+		switch (key_basic->n_proto) {
+		case cpu_to_be16(ETH_P_IP):
+			key_layer |= NFP_FLOWER_LAYER_IPV4;
+			key_size += sizeof(struct nfp_flower_ipv4);
+			break;
+
+		case cpu_to_be16(ETH_P_IPV6):
+			key_layer |= NFP_FLOWER_LAYER_IPV6;
+			key_size += sizeof(struct nfp_flower_ipv6);
+			break;
+
+		default:
+			return -EOPNOTSUPP;
+		}
 	}
@@ -12669,8 +12669,9 @@ enum MFW_DRV_MSG_TYPE {
 	MFW_DRV_MSG_BW_UPDATE10,
 	MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE,
 	MFW_DRV_MSG_BW_UPDATE11,
-	MFW_DRV_MSG_OEM_CFG_UPDATE,
+	MFW_DRV_MSG_RESERVED,
 	MFW_DRV_MSG_GET_TLV_REQ,
+	MFW_DRV_MSG_OEM_CFG_UPDATE,
 	MFW_DRV_MSG_MAX
 };
@@ -1528,6 +1528,8 @@ static void __rtl8169_set_wol(struct rtl8169_private *tp, u32 wolopts)
 	}

 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+
+	device_set_wakeup_enable(tp_to_dev(tp), wolopts);
 }

 static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)

@@ -1549,8 +1551,6 @@ static int rtl8169_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)

 	rtl_unlock_work(tp);

-	device_set_wakeup_enable(d, tp->saved_wolopts);
-
 	pm_runtime_put_noidle(d);

 	return 0;
@@ -4247,6 +4247,7 @@ int stmmac_dvr_probe(struct device *device,
 	priv->wq = create_singlethread_workqueue("stmmac_wq");
 	if (!priv->wq) {
 		dev_err(priv->device, "failed to create workqueue\n");
+		ret = -ENOMEM;
 		goto error_wq;
 	}
@@ -524,10 +524,7 @@ static void resync_tnc(struct timer_list *t)


 	/* Start resync timer again -- the TNC might be still absent */
-
-	del_timer(&sp->resync_t);
-	sp->resync_t.expires = jiffies + SIXP_RESYNC_TIMEOUT;
-	add_timer(&sp->resync_t);
+	mod_timer(&sp->resync_t, jiffies + SIXP_RESYNC_TIMEOUT);
 }

 static inline int tnc_init(struct sixpack *sp)

@@ -538,9 +535,7 @@ static inline int tnc_init(struct sixpack *sp)

 	sp->tty->ops->write(sp->tty, &inbyte, 1);

-	del_timer(&sp->resync_t);
-	sp->resync_t.expires = jiffies + SIXP_RESYNC_TIMEOUT;
-	add_timer(&sp->resync_t);
+	mod_timer(&sp->resync_t, jiffies + SIXP_RESYNC_TIMEOUT);

 	return 0;
 }

@@ -918,11 +913,8 @@ static void decode_prio_command(struct sixpack *sp, unsigned char cmd)
 	/* if the state byte has been received, the TNC is present,
 	   so the resync timer can be reset. */

-	if (sp->tnc_state == TNC_IN_SYNC) {
-		del_timer(&sp->resync_t);
-		sp->resync_t.expires = jiffies + SIXP_INIT_RESYNC_TIMEOUT;
-		add_timer(&sp->resync_t);
-	}
+	if (sp->tnc_state == TNC_IN_SYNC)
+		mod_timer(&sp->resync_t, jiffies + SIXP_INIT_RESYNC_TIMEOUT);

 	sp->status1 = cmd & SIXP_PRIO_DATA_MASK;
 }
@@ -164,10 +164,7 @@ static int mdio_bus_phy_restore(struct device *dev)
 	if (ret < 0)
 		return ret;

-	/* The PHY needs to renegotiate. */
-	phydev->link = 0;
-	phydev->state = PHY_UP;
-
-	phy_start_machine(phydev);
+	if (phydev->attached_dev && phydev->adjust_link)
+		phy_start_machine(phydev);

 	return 0;
@@ -1117,6 +1117,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1435, 0xd181, 4)},	/* Wistron NeWeb D18Q1 */
 	{QMI_FIXED_INTF(0x1435, 0xd181, 5)},	/* Wistron NeWeb D18Q1 */
 	{QMI_FIXED_INTF(0x1435, 0xd191, 4)},	/* Wistron NeWeb D19Q1 */
+	{QMI_QUIRK_SET_DTR(0x1508, 0x1001, 4)},	/* Fibocom NL668 series */
 	{QMI_FIXED_INTF(0x16d8, 0x6003, 0)},	/* CMOTech 6003 */
 	{QMI_FIXED_INTF(0x16d8, 0x6007, 0)},	/* CMOTech CHE-628S */
 	{QMI_FIXED_INTF(0x16d8, 0x6008, 0)},	/* CMOTech CMU-301 */

@@ -1229,6 +1230,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)},	/* Telit LE920, LE920A4 */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
 	{QMI_FIXED_INTF(0x1c9e, 0x9801, 3)},	/* Telewell TW-3G HSPA+ */
 	{QMI_FIXED_INTF(0x1c9e, 0x9803, 4)},	/* Telewell TW-3G HSPA+ */
 	{QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)},	/* XS Stick W100-2 from 4G Systems */

@@ -1263,6 +1265,7 @@ static const struct usb_device_id products[] = {
 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)},	/* Quectel EC21 Mini PCIe */
 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)},	/* Quectel EG91 */
 	{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)},	/* Quectel BG96 */
+	{QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)},	/* Fibocom NL678 series */

 	/* 4. Gobi 1000 devices */
 	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
@@ -486,8 +486,10 @@ static int x25_asy_open(struct net_device *dev)

 	/* Cleanup */
 	kfree(sl->xbuff);
+	sl->xbuff = NULL;
noxbuff:
 	kfree(sl->rbuff);
+	sl->rbuff = NULL;
norbuff:
 	return -ENOMEM;
 }
@@ -5188,10 +5188,17 @@ static struct cfg80211_ops brcmf_cfg80211_ops = {
 	.del_pmk = brcmf_cfg80211_del_pmk,
 };

-struct cfg80211_ops *brcmf_cfg80211_get_ops(void)
+struct cfg80211_ops *brcmf_cfg80211_get_ops(struct brcmf_mp_device *settings)
 {
-	return kmemdup(&brcmf_cfg80211_ops, sizeof(brcmf_cfg80211_ops),
+	struct cfg80211_ops *ops;
+
+	ops = kmemdup(&brcmf_cfg80211_ops, sizeof(brcmf_cfg80211_ops),
		       GFP_KERNEL);
+
+	if (ops && settings->roamoff)
+		ops->update_connect_params = NULL;
+
+	return ops;
 }

 struct brcmf_cfg80211_vif *brcmf_alloc_vif(struct brcmf_cfg80211_info *cfg,

@@ -404,7 +404,7 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
 void brcmf_cfg80211_detach(struct brcmf_cfg80211_info *cfg);
 s32 brcmf_cfg80211_up(struct net_device *ndev);
 s32 brcmf_cfg80211_down(struct net_device *ndev);
-struct cfg80211_ops *brcmf_cfg80211_get_ops(void);
+struct cfg80211_ops *brcmf_cfg80211_get_ops(struct brcmf_mp_device *settings);
 enum nl80211_iftype brcmf_cfg80211_get_iftype(struct brcmf_if *ifp);

 struct brcmf_cfg80211_vif *brcmf_alloc_vif(struct brcmf_cfg80211_info *cfg,

@@ -1130,7 +1130,7 @@ int brcmf_attach(struct device *dev, struct brcmf_mp_device *settings)

 	brcmf_dbg(TRACE, "Enter\n");

-	ops = brcmf_cfg80211_get_ops();
+	ops = brcmf_cfg80211_get_ops(settings);
 	if (!ops)
 		return -ENOMEM;

@@ -641,8 +641,9 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev,
 	struct brcmf_fw_request *fwreq;
 	char chipname[12];
 	const char *mp_path;
+	size_t mp_path_len;
 	u32 i, j;
-	char end;
+	char end = '\0';
 	size_t reqsz;

 	for (i = 0; i < table_size; i++) {

@@ -667,7 +668,10 @@ brcmf_fw_alloc_request(u32 chip, u32 chiprev,
		  mapping_table[i].fw_base, chipname);

 	mp_path = brcmf_mp_global.firmware_path;
-	end = mp_path[strlen(mp_path) - 1];
+	mp_path_len = strnlen(mp_path, BRCMF_FW_ALTPATH_LEN);
+	if (mp_path_len)
+		end = mp_path[mp_path_len - 1];
+
 	fwreq->n_items = n_fwnames;

 	for (j = 0; j < n_fwnames; j++) {
@@ -905,7 +905,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 		if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
 			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;

-			BUG_ON(pull_to <= skb_headlen(skb));
+			BUG_ON(pull_to < skb_headlen(skb));
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 		}
 		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
@@ -393,7 +393,7 @@ static int m41t80_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
 	alrm->time.tm_min = bcd2bin(alarmvals[3] & 0x7f);
 	alrm->time.tm_hour = bcd2bin(alarmvals[2] & 0x3f);
 	alrm->time.tm_mday = bcd2bin(alarmvals[1] & 0x3f);
-	alrm->time.tm_mon = bcd2bin(alarmvals[0] & 0x3f);
+	alrm->time.tm_mon = bcd2bin(alarmvals[0] & 0x3f) - 1;

 	alrm->enabled = !!(alarmvals[0] & M41T80_ALMON_AFE);
 	alrm->pending = (flags & M41T80_FLAGS_AF) && alrm->enabled;
@@ -88,7 +88,7 @@ struct bcm2835_spi {
 	u8 *rx_buf;
 	int tx_len;
 	int rx_len;
-	bool dma_pending;
+	unsigned int dma_pending;
 };

 static inline u32 bcm2835_rd(struct bcm2835_spi *bs, unsigned reg)

@@ -155,8 +155,7 @@ static irqreturn_t bcm2835_spi_interrupt(int irq, void *dev_id)
 	/* Write as many bytes as possible to FIFO */
 	bcm2835_wr_fifo(bs);

-	/* based on flags decide if we can finish the transfer */
-	if (bcm2835_rd(bs, BCM2835_SPI_CS) & BCM2835_SPI_CS_DONE) {
+	if (!bs->rx_len) {
 		/* Transfer complete - reset SPI HW */
 		bcm2835_spi_reset_hw(master);
 		/* wake up the framework */

@@ -233,10 +232,9 @@ static void bcm2835_spi_dma_done(void *data)
	 * is called the tx-dma must have finished - can't get to this
	 * situation otherwise...
	 */
-	dmaengine_terminate_all(master->dma_tx);
-
-	/* mark as no longer pending */
-	bs->dma_pending = 0;
+	if (cmpxchg(&bs->dma_pending, true, false)) {
+		dmaengine_terminate_all(master->dma_tx);
+	}

 	/* and mark as completed */;
 	complete(&master->xfer_completion);

@@ -342,6 +340,7 @@ static int bcm2835_spi_transfer_one_dma(struct spi_master *master,
 	if (ret) {
 		/* need to reset on errors */
 		dmaengine_terminate_all(master->dma_tx);
+		bs->dma_pending = false;
 		bcm2835_spi_reset_hw(master);
 		return ret;
 	}

@@ -617,10 +616,9 @@ static void bcm2835_spi_handle_err(struct spi_master *master,
 	struct bcm2835_spi *bs = spi_master_get_devdata(master);

 	/* if an error occurred and we have an active dma, then terminate */
-	if (bs->dma_pending) {
+	if (cmpxchg(&bs->dma_pending, true, false)) {
 		dmaengine_terminate_all(master->dma_tx);
 		dmaengine_terminate_all(master->dma_rx);
-		bs->dma_pending = 0;
 	}
 	/* and reset */
 	bcm2835_spi_reset_hw(master);
@@ -831,6 +831,7 @@ static int sdio_read_int(struct wilc *wilc, u32 *int_status)
 	if (!g_sdio.irq_gpio) {
+		int i;

 		cmd.read_write = 0;
 		cmd.function = 1;
 		cmd.address = 0x04;
 		cmd.data = 0;
@@ -125,7 +125,7 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
 #define CDNS_UART_IXR_RXTRIG	0x00000001 /* RX FIFO trigger interrupt */
 #define CDNS_UART_IXR_RXFULL	0x00000004 /* RX FIFO full interrupt. */
 #define CDNS_UART_IXR_RXEMPTY	0x00000002 /* RX FIFO empty interrupt. */
-#define CDNS_UART_IXR_MASK	0x00001FFF /* Valid bit mask */
+#define CDNS_UART_IXR_RXMASK	0x000021e7 /* Valid RX bit mask */

 /*
  * Do not enable parity error interrupt for the following

@@ -362,7 +362,7 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
 		cdns_uart_handle_tx(dev_id);
 		isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
 	}
-	if (isrstatus & CDNS_UART_IXR_MASK)
+	if (isrstatus & CDNS_UART_IXR_RXMASK)
 		cdns_uart_handle_rx(dev_id, isrstatus);

 	spin_unlock(&port->lock);
@@ -205,8 +205,4 @@ config USB_ULPI_BUS
	  To compile this driver as a module, choose M here: the module will
	  be called ulpi.

-config USB_ROLE_SWITCH
-	tristate
-	select USB_COMMON
-
 endif # USB_SUPPORT