This is the 4.19.124 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl7Ey84ACgkQONu9yGCS
aT4enA/+JsigMJOLeEtEZ4Gf97S0HnxOIqvuz7759s07vTzwPV1BfQm2eafcS8Cl
8//BO73tDe+m5shH0mFCeFsy0p1qC4+ewIyLPnjulxls1BCZ86xK44/WD6N0DgX9
Fi0HACcObuNZD7814yIyrWaI9QHZO+OwJlmjCXBiZGC4gZwAnGcgY2+ffYf/hRv2
wgEyJF2Td0rORCOM3qp8Ipdt1S8inm2yZodGC5htSPajfBLPe8narmkOXxcN+tuB
BvOwdTJoplmhNwpimWacytL+jQJYKHS/izPX0JYkFDfQ/bgOYXz3CWwa2DMOVsGd
CQOHp4rK/Rl/caAANe3nD87jstRbaRKp7HZELCJ+KZrHpGfefAZs6g5j+LNC7KQt
6YloSnTQsnRC6nqu+b2ieI5KoZAfwWoyHrQf7obJi6PJF4Ge4XUbaLEDH9TuxZTN
tZX5ZOGZ8/i32VgYqBA4mDAbV+n5TyEYl722XxXzgim73VUDl67F7JqtDxMMb4Ic
KW98luDDXgoq+kM2FqWgXtjxoP4TpjRREjwCpNDEa03ydKW+dwM21D7IoQNtXUgT
uE6aFPVuhRt5MAhOdSHtkSsbOjiJZjKuPKvYyUFvAQT5JMaYZg9pabnH89E6URQ9
x7M2JOvR/GMOmPRykQoewqV0027K37TYxBfRAzLbNFv8Iol/a7I=
=pHmd
-----END PGP SIGNATURE-----

Merge 4.19.124 into android-4.19-stable

Changes in 4.19.124
	net: dsa: Do not make user port errors fatal
	shmem: fix possible deadlocks on shmlock_user_lock
	net/sonic: Fix a resource leak in an error handling path in 'jazz_sonic_probe()'
	net: moxa: Fix a potential double 'free_irq()'
	drop_monitor: work around gcc-10 stringop-overflow warning
	virtio-blk: handle block_device_operations callbacks after hot unplug
	scsi: sg: add sg_remove_request in sg_write
	mmc: sdhci-acpi: Add SDHCI_QUIRK2_BROKEN_64_BIT_DMA for AMDI0040
	net: fix a potential recursive NETDEV_FEAT_CHANGE
	netlabel: cope with NULL catmap
	net: phy: fix aneg restart in phy_ethtool_set_eee
	pppoe: only process PADT targeted at local interfaces
	Revert "ipv6: add mtu lock check in __ip6_rt_update_pmtu"
	tcp: fix error recovery in tcp_zerocopy_receive()
	virtio_net: fix lockdep warning on 32 bit
	hinic: fix a bug of ndo_stop
	net: dsa: loop: Add module soft dependency
	net: ipv4: really enforce backoff for redirects
	netprio_cgroup: Fix unlimited memory leak of v2 cgroups
	net: tcp: fix rx timestamp behavior for tcp_recvmsg
	tcp: fix SO_RCVLOWAT hangs with fat skbs
	riscv: fix vdso build with lld
	dmaengine: pch_dma.c: Avoid data race between probe and irq handler
	dmaengine: mmp_tdma: Reset channel error on release
	cpufreq: intel_pstate: Only mention the BIOS disabling turbo mode once
	ALSA: hda/hdmi: fix race in monitor detection during probe
	drm/qxl: lost qxl_bo_kunmap_atomic_page in qxl_image_init_helper()
	ipc/util.c: sysvipc_find_ipc() incorrectly updates position index
	ALSA: hda/realtek - Fix S3 pop noise on Dell Wyse
	gfs2: Another gfs2_walk_metadata fix
	pinctrl: baytrail: Enable pin configuration setting for GPIO chip
	pinctrl: cherryview: Add missing spinlock usage in chv_gpio_irq_handler
	i40iw: Fix error handling in i40iw_manage_arp_cache()
	mmc: core: Check request type before completing the request
	mmc: block: Fix request completion in the CQE timeout path
	NFS: Fix fscache super_cookie index_key from changing after umount
	nfs: fscache: use timespec64 in inode auxdata
	NFSv4: Fix fscache cookie aux_data to ensure change_attr is included
	netfilter: conntrack: avoid gcc-10 zero-length-bounds warning
	arm64: fix the flush_icache_range arguments in machine_kexec
	netfilter: nft_set_rbtree: Introduce and use nft_rbtree_interval_start()
	IB/mlx4: Test return value of calls to ib_get_cached_pkey
	hwmon: (da9052) Synchronize access with mfd
	pnp: Use list_for_each_entry() instead of open coding
	gcc-10 warnings: fix low-hanging fruit
	kbuild: compute false-positive -Wmaybe-uninitialized cases in Kconfig
	Stop the ad-hoc games with -Wno-maybe-initialized
	gcc-10: disable 'zero-length-bounds' warning for now
	gcc-10: disable 'array-bounds' warning for now
	gcc-10: disable 'stringop-overflow' warning for now
	gcc-10: disable 'restrict' warning for now
	gcc-10: avoid shadowing standard library 'free()' in crypto
	ALSA: hda/realtek - Limit int mic boost for Thinkpad T530
	ALSA: rawmidi: Fix racy buffer resize under concurrent accesses
	ALSA: usb-audio: Add control message quirk delay for Kingston HyperX headset
	usb: core: hub: limit HUB_QUIRK_DISABLE_AUTOSUSPEND to USB5534B
	usb: host: xhci-plat: keep runtime active when removing host
	USB: gadget: fix illegal array access in binding with UDC
	usb: xhci: Fix NULL pointer dereference when enqueuing trbs from urb sg list
	ARM: dts: dra7: Fix bus_dma_limit for PCIe
	ARM: dts: imx27-phytec-phycard-s-rdk: Fix the I2C1 pinctrl entries
	cifs: fix leaked reference on requeued write
	x86: Fix early boot crash on gcc-10, third try
	x86/unwind/orc: Fix error handling in __unwind_start()
	exec: Move would_dump into flush_old_exec
	clk: rockchip: fix incorrect configuration of rk3228 aclk_gpu* clocks
	dwc3: Remove check for HWO flag in dwc3_gadget_ep_reclaim_trb_sg()
	usb: gadget: net2272: Fix a memory leak in an error handling path in 'net2272_plat_probe()'
	usb: gadget: audio: Fix a missing error return value in audio_bind()
	usb: gadget: legacy: fix error return code in gncm_bind()
	usb: gadget: legacy: fix error return code in cdc_bind()
	Revert "ALSA: hda/realtek: Fix pop noise on ALC225"
	clk: Unlink clock if failed to prepare or enable
	arm64: dts: rockchip: Replace RK805 PMIC node name with "pmic" on rk3328 boards
	arm64: dts: rockchip: Rename dwc3 device nodes on rk3399 to make dtc happy
	ARM: dts: r8a73a4: Add missing CMT1 interrupts
	arm64: dts: renesas: r8a77980: Fix IPMMU VIP[01] nodes
	ARM: dts: r8a7740: Add missing extal2 to CPG node
	KVM: x86: Fix off-by-one error in kvm_vcpu_ioctl_x86_setup_mce
	Makefile: disallow data races on gcc-10 as well
	Linux 4.19.124

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I3d253f677cc08337e64d316005a0ec0c33717940
commit 91d4544b24
84 changed files with 451 additions and 208 deletions

 Makefile | 23
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 123
+SUBLEVEL = 124
 EXTRAVERSION =
 NAME = "People's Front"
 
@@ -671,20 +671,14 @@ KBUILD_CFLAGS += $(call cc-disable-warning, int-in-bool-context)
 KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
 
 ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
-KBUILD_CFLAGS += -Os $(call cc-disable-warning,maybe-uninitialized,)
-else
-ifdef CONFIG_PROFILE_ALL_BRANCHES
-KBUILD_CFLAGS += -O2 $(call cc-disable-warning,maybe-uninitialized,)
+KBUILD_CFLAGS += -Os
 else
 KBUILD_CFLAGS += -O2
 endif
-endif
-
-KBUILD_CFLAGS += $(call cc-ifversion, -lt, 0409, \
-			$(call cc-disable-warning,maybe-uninitialized,))
 
 # Tell gcc to never replace conditional load with a non-conditional one
 KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
+KBUILD_CFLAGS += $(call cc-option,-fno-allow-store-data-races)
 
 # check for 'asm goto'
 ifeq ($(call shell-cached,$(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y)
 
@@ -887,6 +881,17 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-sign)
 # disable stringop warnings in gcc 8+
 KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
 
+# We'll want to enable this eventually, but it's not going away for 5.7 at least
+KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds)
+KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
+KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow)
+
+# Another good warning that we'll want to enable eventually
+KBUILD_CFLAGS += $(call cc-disable-warning, restrict)
+
+# Enabled with W=2, disabled by default as noisy
+KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized)
+
 # disable invalid "can't wrap" optimizations for signed / pointers
 KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
 
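The Makefile hunks above lean on kbuild's `cc-disable-warning` helper, which only emits a `-Wno-<flag>` option after probing that the compiler accepts the warning. A rough shell sketch of that probe, under the assumption that testing the positive `-W<flag>` form with an empty translation unit is enough (the compiler and warning name below are illustrative):

```shell
# Probe whether the compiler knows the warning before disabling it,
# mirroring what the kernel's cc-disable-warning macro does.
cc=${CC:-gcc}
flag=zero-length-bounds
if $cc -Werror "-W$flag" -x c -c /dev/null -o /dev/null 2>/dev/null; then
  # Compiler accepted the flag, so the -Wno- form is safe to emit.
  echo "-Wno-$flag"
fi
```

On gcc 10+ this prints `-Wno-zero-length-bounds`; on older compilers it prints nothing, so the build never passes a flag the compiler would reject.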
@@ -312,6 +312,7 @@
 			#address-cells = <1>;
 			ranges = <0x51000000 0x51000000 0x3000
 				  0x0 0x20000000 0x10000000>;
+			dma-ranges;
 			/**
 			 * To enable PCI endpoint mode, disable the pcie1_rc
 			 * node and enable pcie1_ep mode.
@@ -325,7 +326,6 @@
 				device_type = "pci";
 				ranges = <0x81000000 0 0 0x03000 0 0x00010000
 					  0x82000000 0 0x20013000 0x13000 0 0xffed000>;
-				dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
 				bus-range = <0x00 0xff>;
 				#interrupt-cells = <1>;
 				num-lanes = <1>;
@@ -368,6 +368,7 @@
 			#address-cells = <1>;
 			ranges = <0x51800000 0x51800000 0x3000
 				  0x0 0x30000000 0x10000000>;
+			dma-ranges;
 			status = "disabled";
 			pcie2_rc: pcie@51800000 {
 				reg = <0x51800000 0x2000>, <0x51802000 0x14c>, <0x1000 0x2000>;
@@ -378,7 +379,6 @@
 				device_type = "pci";
 				ranges = <0x81000000 0 0 0x03000 0 0x00010000
 					  0x82000000 0 0x30013000 0x13000 0 0xffed000>;
-				dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
 				bus-range = <0x00 0xff>;
 				#interrupt-cells = <1>;
 				num-lanes = <1>;
@@ -81,8 +81,8 @@
 	imx27-phycard-s-rdk {
 		pinctrl_i2c1: i2c1grp {
 			fsl,pins = <
-				MX27_PAD_I2C2_SDA__I2C2_SDA 0x0
-				MX27_PAD_I2C2_SCL__I2C2_SCL 0x0
+				MX27_PAD_I2C_DATA__I2C_DATA 0x0
+				MX27_PAD_I2C_CLK__I2C_CLK 0x0
 			>;
 		};
 
@@ -131,7 +131,14 @@
 		cmt1: timer@e6130000 {
 			compatible = "renesas,r8a73a4-cmt1", "renesas,rcar-gen2-cmt1";
 			reg = <0 0xe6130000 0 0x1004>;
-			interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&mstp3_clks R8A73A4_CLK_CMT1>;
 			clock-names = "fck";
 			power-domains = <&pd_c5>;
@@ -479,7 +479,7 @@
 		cpg_clocks: cpg_clocks@e6150000 {
 			compatible = "renesas,r8a7740-cpg-clocks";
 			reg = <0xe6150000 0x10000>;
-			clocks = <&extal1_clk>, <&extalr_clk>;
+			clocks = <&extal1_clk>, <&extal2_clk>, <&extalr_clk>;
 			#clock-cells = <1>;
 			clock-output-names = "system", "pllc0", "pllc1",
 					     "pllc2", "r",
@@ -454,6 +454,7 @@
 		ipmmu_vip0: mmu@e7b00000 {
 			compatible = "renesas,ipmmu-r8a77980";
 			reg = <0 0xe7b00000 0 0x1000>;
+			renesas,ipmmu-main = <&ipmmu_mm 4>;
 			power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
 			#iommu-cells = <1>;
 		};
@@ -461,6 +462,7 @@
 		ipmmu_vip1: mmu@e7960000 {
 			compatible = "renesas,ipmmu-r8a77980";
 			reg = <0 0xe7960000 0 0x1000>;
+			renesas,ipmmu-main = <&ipmmu_mm 11>;
 			power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
 			#iommu-cells = <1>;
 		};
@@ -92,7 +92,7 @@
 &i2c1 {
 	status = "okay";
 
-	rk805: rk805@18 {
+	rk805: pmic@18 {
 		compatible = "rockchip,rk805";
 		reg = <0x18>;
 		interrupt-parent = <&gpio2>;
@@ -112,7 +112,7 @@
 &i2c1 {
 	status = "okay";
 
-	rk805: rk805@18 {
+	rk805: pmic@18 {
 		compatible = "rockchip,rk805";
 		reg = <0x18>;
 		interrupt-parent = <&gpio2>;
@@ -376,7 +376,7 @@
 		reset-names = "usb3-otg";
 		status = "disabled";
 
-		usbdrd_dwc3_0: dwc3 {
+		usbdrd_dwc3_0: usb@fe800000 {
 			compatible = "snps,dwc3";
 			reg = <0x0 0xfe800000 0x0 0x100000>;
 			interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH 0>;
@@ -409,7 +409,7 @@
 		reset-names = "usb3-otg";
 		status = "disabled";
 
-		usbdrd_dwc3_1: dwc3 {
+		usbdrd_dwc3_1: usb@fe900000 {
 			compatible = "snps,dwc3";
 			reg = <0x0 0xfe900000 0x0 0x100000>;
 			interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
@@ -192,6 +192,7 @@ void machine_kexec(struct kimage *kimage)
 	 * the offline CPUs. Therefore, we must use the __* variant here.
 	 */
 	__flush_icache_range((uintptr_t)reboot_code_buffer,
+			     (uintptr_t)reboot_code_buffer +
 			     arm64_relocate_new_kernel_size);
 
 	/* Flush the kimage list and its buffers. */
@@ -30,15 +30,15 @@ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
 	$(call if_changed,vdsold)
 
 # We also create a special relocatable object that should mirror the symbol
-# table and layout of the linked DSO. With ld -R we can then refer to
-# these symbols in the kernel code rather than hand-coded addresses.
+# table and layout of the linked DSO. With ld --just-symbols we can then
+# refer to these symbols in the kernel code rather than hand-coded addresses.
 
 SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
 	$(call cc-ldoption, -Wl$(comma)--hash-style=both)
 $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
 	$(call if_changed,vdsold)
 
-LDFLAGS_vdso-syms.o := -r -R
+LDFLAGS_vdso-syms.o := -r --just-symbols
 $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
 	$(call if_changed,ld)
 
@@ -55,8 +55,13 @@
 /*
  * Initialize the stackprotector canary value.
  *
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
  * and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
  */
 static __always_inline void boot_init_stack_canary(void)
 {
@@ -269,6 +269,14 @@ static void notrace start_secondary(void *unused)
 
 	wmb();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+	/*
+	 * Prevent tail call to cpu_startup_entry() because the stack protector
+	 * guard has been changed a couple of function calls up, in
+	 * boot_init_stack_canary() and must not be checked before tail calling
+	 * another function.
+	 */
+	prevent_tail_call_optimization();
 }
 
 /**
@@ -589,23 +589,23 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
 void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		    struct pt_regs *regs, unsigned long *first_frame)
 {
-	if (!orc_init)
-		goto done;
-
 	memset(state, 0, sizeof(*state));
 	state->task = task;
 
+	if (!orc_init)
+		goto err;
+
 	/*
 	 * Refuse to unwind the stack of a task while it's executing on another
 	 * CPU. This check is racy, but that's ok: the unwinder has other
 	 * checks to prevent it from going off the rails.
 	 */
 	if (task_on_another_cpu(task))
-		goto done;
+		goto err;
 
 	if (regs) {
 		if (user_mode(regs))
-			goto done;
+			goto the_end;
 
 		state->ip = regs->ip;
 		state->sp = kernel_stack_pointer(regs);
@@ -638,6 +638,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		 * generate some kind of backtrace if this happens.
 		 */
 		void *next_page = (void *)PAGE_ALIGN((unsigned long)state->sp);
+		state->error = true;
 		if (get_stack_info(next_page, state->task, &state->stack_info,
 				   &state->stack_mask))
 			return;
@@ -663,8 +664,9 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 
 	return;
 
-done:
+err:
+	state->error = true;
+the_end:
 	state->stack_info.type = STACK_TYPE_UNKNOWN;
-	return;
 }
 EXPORT_SYMBOL_GPL(__unwind_start);
@@ -3423,7 +3423,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
 	unsigned bank_num = mcg_cap & 0xff, bank;
 
 	r = -EINVAL;
-	if (!bank_num || bank_num >= KVM_MAX_MCE_BANKS)
+	if (!bank_num || bank_num > KVM_MAX_MCE_BANKS)
 		goto out;
 	if (mcg_cap & ~(kvm_mce_cap_supported | 0xff | 0xff0000))
 		goto out;
@@ -89,6 +89,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
 {
 	cpu_bringup();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+	prevent_tail_call_optimization();
 }
 
 void xen_smp_intr_free_pv(unsigned int cpu)
@@ -453,7 +453,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
 	crypto_free_skcipher(ctx->child);
 }
 
-static void free(struct skcipher_instance *inst)
+static void free_inst(struct skcipher_instance *inst)
 {
 	crypto_drop_skcipher(skcipher_instance_ctx(inst));
 	kfree(inst);
@@ -565,7 +565,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.encrypt = encrypt;
 	inst->alg.decrypt = decrypt;
 
-	inst->free = free;
+	inst->free = free_inst;
 
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
@@ -393,7 +393,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
 	crypto_free_cipher(ctx->tweak);
 }
 
-static void free(struct skcipher_instance *inst)
+static void free_inst(struct skcipher_instance *inst)
 {
 	crypto_drop_skcipher(skcipher_instance_ctx(inst));
 	kfree(inst);
@@ -504,7 +504,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.encrypt = encrypt;
 	inst->alg.decrypt = decrypt;
 
-	inst->free = free;
+	inst->free = free_inst;
 
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
@@ -31,6 +31,15 @@ struct virtio_blk_vq {
 } ____cacheline_aligned_in_smp;
 
 struct virtio_blk {
+	/*
+	 * This mutex must be held by anything that may run after
+	 * virtblk_remove() sets vblk->vdev to NULL.
+	 *
+	 * blk-mq, virtqueue processing, and sysfs attribute code paths are
+	 * shut down before vblk->vdev is set to NULL and therefore do not need
+	 * to hold this mutex.
+	 */
+	struct mutex vdev_mutex;
 	struct virtio_device *vdev;
 
 	/* The disk structure for the kernel. */
@@ -42,6 +51,13 @@ struct virtio_blk {
 	/* Process context for config space updates */
 	struct work_struct config_work;
 
+	/*
+	 * Tracks references from block_device_operations open/release and
+	 * virtio_driver probe/remove so this object can be freed once no
+	 * longer in use.
+	 */
+	refcount_t refs;
+
 	/* What host tells us, plus 2 for header & tailer. */
 	unsigned int sg_elems;
 
@@ -320,10 +336,55 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
 	return err;
 }
 
+static void virtblk_get(struct virtio_blk *vblk)
+{
+	refcount_inc(&vblk->refs);
+}
+
+static void virtblk_put(struct virtio_blk *vblk)
+{
+	if (refcount_dec_and_test(&vblk->refs)) {
+		ida_simple_remove(&vd_index_ida, vblk->index);
+		mutex_destroy(&vblk->vdev_mutex);
+		kfree(vblk);
+	}
+}
+
+static int virtblk_open(struct block_device *bd, fmode_t mode)
+{
+	struct virtio_blk *vblk = bd->bd_disk->private_data;
+	int ret = 0;
+
+	mutex_lock(&vblk->vdev_mutex);
+
+	if (vblk->vdev)
+		virtblk_get(vblk);
+	else
+		ret = -ENXIO;
+
+	mutex_unlock(&vblk->vdev_mutex);
+	return ret;
+}
+
+static void virtblk_release(struct gendisk *disk, fmode_t mode)
+{
+	struct virtio_blk *vblk = disk->private_data;
+
+	virtblk_put(vblk);
+}
+
 /* We provide getgeo only to please some old bootloader/partitioning tools */
 static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
 {
 	struct virtio_blk *vblk = bd->bd_disk->private_data;
+	int ret = 0;
+
+	mutex_lock(&vblk->vdev_mutex);
+
+	if (!vblk->vdev) {
+		ret = -ENXIO;
+		goto out;
+	}
 
 	/* see if the host passed in geometry config */
 	if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
@@ -339,12 +400,16 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
 		geo->sectors = 1 << 5;
 		geo->cylinders = get_capacity(bd->bd_disk) >> 11;
 	}
-	return 0;
+out:
+	mutex_unlock(&vblk->vdev_mutex);
+	return ret;
 }
 
 static const struct block_device_operations virtblk_fops = {
 	.ioctl = virtblk_ioctl,
 	.owner = THIS_MODULE,
+	.open = virtblk_open,
+	.release = virtblk_release,
 	.getgeo = virtblk_getgeo,
 };
 
@@ -672,6 +737,10 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_index;
 	}
 
+	/* This reference is dropped in virtblk_remove(). */
+	refcount_set(&vblk->refs, 1);
+	mutex_init(&vblk->vdev_mutex);
+
 	vblk->vdev = vdev;
 	vblk->sg_elems = sg_elems;
 
@@ -824,8 +893,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 static void virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
-	int index = vblk->index;
-	int refc;
 
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
@@ -835,18 +902,21 @@ static void virtblk_remove(struct virtio_device *vdev)
 
 	blk_mq_free_tag_set(&vblk->tag_set);
 
+	mutex_lock(&vblk->vdev_mutex);
+
 	/* Stop all the virtqueues. */
 	vdev->config->reset(vdev);
 
-	refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
+	/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
+	vblk->vdev = NULL;
+
 	put_disk(vblk->disk);
 	vdev->config->del_vqs(vdev);
 	kfree(vblk->vqs);
-	kfree(vblk);
 
-	/* Only free device id if we don't have any users */
-	if (refc == 1)
-		ida_simple_remove(&vd_index_ida, index);
+	mutex_unlock(&vblk->vdev_mutex);
+
+	virtblk_put(vblk);
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -3711,6 +3711,9 @@ static int __clk_core_init(struct clk_core *core)
 out:
 	clk_pm_runtime_put(core);
 unlock:
+	if (ret)
+		hlist_del_init(&core->child_node);
+
 	clk_prepare_unlock();
 
 	if (!ret)
@@ -163,8 +163,6 @@ PNAME(mux_i2s_out_p) = { "i2s1_pre", "xin12m" };
 PNAME(mux_i2s2_p) = { "i2s2_src", "i2s2_frac", "xin12m" };
 PNAME(mux_sclk_spdif_p) = { "sclk_spdif_src", "spdif_frac", "xin12m" };
 
-PNAME(mux_aclk_gpu_pre_p) = { "cpll_gpu", "gpll_gpu", "hdmiphy_gpu", "usb480m_gpu" };
-
 PNAME(mux_uart0_p) = { "uart0_src", "uart0_frac", "xin24m" };
 PNAME(mux_uart1_p) = { "uart1_src", "uart1_frac", "xin24m" };
 PNAME(mux_uart2_p) = { "uart2_src", "uart2_frac", "xin24m" };
@@ -475,16 +473,9 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
 			RK2928_CLKSEL_CON(24), 6, 10, DFLAGS,
 			RK2928_CLKGATE_CON(2), 8, GFLAGS),
 
-	GATE(0, "cpll_gpu", "cpll", 0,
-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
-	GATE(0, "gpll_gpu", "gpll", 0,
-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
-	GATE(0, "hdmiphy_gpu", "hdmiphy", 0,
-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
-	GATE(0, "usb480m_gpu", "usb480m", 0,
-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
-	COMPOSITE_NOGATE(0, "aclk_gpu_pre", mux_aclk_gpu_pre_p, 0,
-			RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS),
+	COMPOSITE(0, "aclk_gpu_pre", mux_pll_src_4plls_p, 0,
+			RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS,
+			RK2928_CLKGATE_CON(3), 13, GFLAGS),
 
 	COMPOSITE(SCLK_SPI0, "sclk_spi0", mux_pll_src_2plls_p, 0,
 			RK2928_CLKSEL_CON(25), 8, 1, MFLAGS, 0, 7, DFLAGS,
@@ -589,8 +580,8 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
 	GATE(0, "pclk_peri_noc", "pclk_peri", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(12), 2, GFLAGS),
 
 	/* PD_GPU */
-	GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 14, GFLAGS),
-	GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 15, GFLAGS),
+	GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 14, GFLAGS),
+	GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 15, GFLAGS),
 
 	/* PD_BUS */
 	GATE(0, "sclk_initmem_mbist", "aclk_cpu", 0, RK2928_CLKGATE_CON(8), 1, GFLAGS),
@@ -957,7 +957,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
 
 	update_turbo_state();
 	if (global.turbo_disabled) {
-		pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
+		pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");
 		mutex_unlock(&intel_pstate_limits_lock);
 		mutex_unlock(&intel_pstate_driver_lock);
 		return -EPERM;
@@ -362,6 +362,8 @@ static void mmp_tdma_free_descriptor(struct mmp_tdma_chan *tdmac)
 		gen_pool_free(gpool, (unsigned long)tdmac->desc_arr,
 				size);
 	tdmac->desc_arr = NULL;
+	if (tdmac->status == DMA_ERROR)
+		tdmac->status = DMA_COMPLETE;
 
 	return;
 }
@@ -873,6 +873,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
 	}
 
 	pci_set_master(pdev);
+	pd->dma.dev = &pdev->dev;
 
 	err = request_irq(pdev->irq, pd_irq, IRQF_SHARED, DRV_NAME, pd);
 	if (err) {
@@ -888,7 +889,6 @@ static int pch_dma_probe(struct pci_dev *pdev,
 		goto err_free_irq;
 	}
 
-	pd->dma.dev = &pdev->dev;
 
 	INIT_LIST_HEAD(&pd->dma.channels);
 
@@ -210,7 +210,8 @@ qxl_image_init_helper(struct qxl_device *qdev,
 		break;
 	default:
 		DRM_ERROR("unsupported image bit depth\n");
-		return -EINVAL; /* TODO: cleanup */
+		qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr);
+		return -EINVAL;
 	}
 	image->u.bitmap.flags = QXL_BITMAP_TOP_DOWN;
 	image->u.bitmap.x = width;
@@ -250,9 +250,9 @@ static ssize_t da9052_read_tsi(struct device *dev,
 	int channel = to_sensor_dev_attr(devattr)->index;
 	int ret;
 
-	mutex_lock(&hwmon->hwmon_lock);
+	mutex_lock(&hwmon->da9052->auxadc_lock);
 	ret = __da9052_read_tsi(dev, channel);
-	mutex_unlock(&hwmon->hwmon_lock);
+	mutex_unlock(&hwmon->da9052->auxadc_lock);
 
 	if (ret < 0)
 		return ret;
@@ -534,7 +534,7 @@ void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
 	int arp_index;
 
 	arp_index = i40iw_arp_table(iwdev, ip_addr, ipv4, mac_addr, action);
-	if (arp_index == -1)
+	if (arp_index < 0)
 		return;
 	cqp_request = i40iw_get_cqp_request(&iwdev->cqp, false);
 	if (!cqp_request)
@@ -2807,6 +2807,7 @@ static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
 	int send_size;
 	int header_size;
 	int spc;
+	int err;
 	int i;
 
 	if (wr->wr.opcode != IB_WR_SEND)
@@ -2841,7 +2842,9 @@ static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
 
 	sqp->ud_header.lrh.virtual_lane = 0;
 	sqp->ud_header.bth.solicited_event = !!(wr->wr.send_flags & IB_SEND_SOLICITED);
-	ib_get_cached_pkey(ib_dev, sqp->qp.port, 0, &pkey);
+	err = ib_get_cached_pkey(ib_dev, sqp->qp.port, 0, &pkey);
+	if (err)
+		return err;
 	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
 	if (sqp->qp.mlx4_ib_qp_type == MLX4_IB_QPT_TUN_SMI_OWNER)
 		sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->remote_qpn);
@@ -3128,9 +3131,14 @@ static int build_mlx_header(struct mlx4_ib_sqp *sqp, const struct ib_ud_wr *wr,
 	}
 	sqp->ud_header.bth.solicited_event = !!(wr->wr.send_flags & IB_SEND_SOLICITED);
 	if (!sqp->qp.ibqp.qp_num)
-		ib_get_cached_pkey(ib_dev, sqp->qp.port, sqp->pkey_index, &pkey);
+		err = ib_get_cached_pkey(ib_dev, sqp->qp.port, sqp->pkey_index,
+					 &pkey);
 	else
-		ib_get_cached_pkey(ib_dev, sqp->qp.port, wr->pkey_index, &pkey);
+		err = ib_get_cached_pkey(ib_dev, sqp->qp.port, wr->pkey_index,
+					 &pkey);
+	if (err)
+		return err;
+
 	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
 	sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->remote_qpn);
 	sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1));
@@ -1427,6 +1427,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 	struct mmc_request *mrq = &mqrq->brq.mrq;
 	struct request_queue *q = req->q;
 	struct mmc_host *host = mq->card->host;
+	enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
 	unsigned long flags;
 	bool put_card;
 	int err;
@@ -1456,7 +1457,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 
-	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
+	mq->in_flight[issue_type] -= 1;
 
 	put_card = (mmc_tot_in_flight(mq) == 0);
 
@@ -112,8 +112,7 @@ static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
 			__mmc_cqe_recovery_notifier(mq);
 			return BLK_EH_RESET_TIMER;
 		}
-		/* No timeout (XXX: huh? comment doesn't make much sense) */
-		blk_mq_complete_request(req);
+		/* The request has gone already */
 		return BLK_EH_DONE;
 	default:
 		/* Timeout is handled by mmc core */
@@ -552,10 +552,12 @@ static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,
 }
 
 static const struct sdhci_acpi_slot sdhci_acpi_slot_amd_emmc = {
 	.chip	= &sdhci_acpi_chip_amd,
 	.caps	= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE,
-	.quirks	= SDHCI_QUIRK_32BIT_DMA_ADDR | SDHCI_QUIRK_32BIT_DMA_SIZE |
-		  SDHCI_QUIRK_32BIT_ADMA_SIZE,
+	.quirks	= SDHCI_QUIRK_32BIT_DMA_ADDR |
+		  SDHCI_QUIRK_32BIT_DMA_SIZE |
+		  SDHCI_QUIRK_32BIT_ADMA_SIZE,
+	.quirks2 = SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
 	.probe_slot	= sdhci_acpi_emmc_amd_probe_slot,
 };
 
@@ -360,6 +360,7 @@ static void __exit dsa_loop_exit(void)
 }
 module_exit(dsa_loop_exit);
 
+MODULE_SOFTDEP("pre: dsa_loop_bdinfo");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Florian Fainelli");
 MODULE_DESCRIPTION("DSA loopback driver");
@@ -54,6 +54,8 @@
 
 #define MGMT_MSG_TIMEOUT		5000
 
+#define SET_FUNC_PORT_MGMT_TIMEOUT	25000
+
 #define mgmt_to_pfhwdev(pf_mgmt)	\
 		container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
 
@@ -247,12 +249,13 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
 			    u8 *buf_in, u16 in_size,
 			    u8 *buf_out, u16 *out_size,
 			    enum mgmt_direction_type direction,
-			    u16 resp_msg_id)
+			    u16 resp_msg_id, u32 timeout)
 {
 	struct hinic_hwif *hwif = pf_to_mgmt->hwif;
 	struct pci_dev *pdev = hwif->pdev;
 	struct hinic_recv_msg *recv_msg;
 	struct completion *recv_done;
+	unsigned long timeo;
 	u16 msg_id;
 	int err;
 
@@ -276,8 +279,9 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
 		goto unlock_sync_msg;
 	}
 
-	if (!wait_for_completion_timeout(recv_done,
-					 msecs_to_jiffies(MGMT_MSG_TIMEOUT))) {
+	timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
+
+	if (!wait_for_completion_timeout(recv_done, timeo)) {
 		dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id);
 		err = -ETIMEDOUT;
 		goto unlock_sync_msg;
@@ -351,6 +355,7 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
 {
 	struct hinic_hwif *hwif = pf_to_mgmt->hwif;
 	struct pci_dev *pdev = hwif->pdev;
+	u32 timeout = 0;
 
 	if (sync != HINIC_MGMT_MSG_SYNC) {
 		dev_err(&pdev->dev, "Invalid MGMT msg type\n");
@@ -362,9 +367,12 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
 		return -EINVAL;
 	}
 
+	if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+		timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
+
 	return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
 				buf_out, out_size, MGMT_DIRECT_SEND,
-				MSG_NOT_RESP);
+				MSG_NOT_RESP, timeout);
 }
 
 /**
@@ -475,7 +475,6 @@ static int hinic_close(struct net_device *netdev)
 {
 	struct hinic_dev *nic_dev = netdev_priv(netdev);
 	unsigned int flags;
-	int err;
 
 	down(&nic_dev->mgmt_lock);
 
@@ -489,20 +488,9 @@ static int hinic_close(struct net_device *netdev)
 
 	up(&nic_dev->mgmt_lock);
 
-	err = hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
-	if (err) {
-		netif_err(nic_dev, drv, netdev,
-			  "Failed to set func port state\n");
-		nic_dev->flags |= (flags & HINIC_INTF_UP);
-		return err;
-	}
+	hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
 
-	err = hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
-	if (err) {
-		netif_err(nic_dev, drv, netdev, "Failed to set port state\n");
-		nic_dev->flags |= (flags & HINIC_INTF_UP);
-		return err;
-	}
+	hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
 
 	free_rxqs(nic_dev);
 	free_txqs(nic_dev);
@@ -561,7 +561,7 @@ static int moxart_remove(struct platform_device *pdev)
 	struct net_device *ndev = platform_get_drvdata(pdev);
 
 	unregister_netdev(ndev);
-	free_irq(ndev->irq, ndev);
+	devm_free_irq(&pdev->dev, ndev->irq, ndev);
 	moxart_mac_free_memory(ndev);
 	free_netdev(ndev);
 
@@ -235,11 +235,13 @@ static int jazz_sonic_probe(struct platform_device *pdev)
 
 	err = register_netdev(dev);
 	if (err)
-		goto out1;
+		goto undo_probe1;
 
 	return 0;
 
-out1:
+undo_probe1:
+	dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
+			  lp->descriptors, lp->descriptors_laddr);
 	release_mem_region(dev->base_addr, SONIC_MEM_SIZE);
 out:
 	free_netdev(dev);
@@ -1302,9 +1302,11 @@ int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data)
 		/* Restart autonegotiation so the new modes get sent to the
 		 * link partner.
 		 */
-		ret = phy_restart_aneg(phydev);
-		if (ret < 0)
-			return ret;
+		if (phydev->autoneg == AUTONEG_ENABLE) {
+			ret = phy_restart_aneg(phydev);
+			if (ret < 0)
+				return ret;
+		}
 	}
 
 	return 0;
@@ -497,6 +497,9 @@ static int pppoe_disc_rcv(struct sk_buff *skb, struct net_device *dev,
 	if (!skb)
 		goto out;
 
+	if (skb->pkt_type != PACKET_HOST)
+		goto abort;
+
 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
 		goto abort;
 
@@ -1242,9 +1242,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
 			break;
 	} while (rq->vq->num_free);
 	if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
-		u64_stats_update_begin(&rq->stats.syncp);
+		unsigned long flags;
+
+		flags = u64_stats_update_begin_irqsave(&rq->stats.syncp);
 		rq->stats.kicks++;
-		u64_stats_update_end(&rq->stats.syncp);
+		u64_stats_update_end_irqrestore(&rq->stats.syncp, flags);
 	}
 
 	return !oom;
@@ -1495,6 +1495,7 @@ static const struct gpio_chip byt_gpio_chip = {
 	.direction_output	= byt_gpio_direction_output,
 	.get			= byt_gpio_get,
 	.set			= byt_gpio_set,
+	.set_config		= gpiochip_generic_config,
 	.dbg_show		= byt_gpio_dbg_show,
 };
 
@@ -1485,11 +1485,15 @@ static void chv_gpio_irq_handler(struct irq_desc *desc)
 	struct chv_pinctrl *pctrl = gpiochip_get_data(gc);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	unsigned long pending;
+	unsigned long flags;
 	u32 intr_line;
 
 	chained_irq_enter(chip, desc);
 
+	raw_spin_lock_irqsave(&chv_lock, flags);
 	pending = readl(pctrl->regs + CHV_INTSTAT);
+	raw_spin_unlock_irqrestore(&chv_lock, flags);
 
 	for_each_set_bit(intr_line, &pending, pctrl->community->nirqs) {
 		unsigned irq, offset;
 
@@ -694,8 +694,10 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
 	hp->flags = input_size;	/* structure abuse ... */
 	hp->pack_id = old_hdr.pack_id;
 	hp->usr_ptr = NULL;
-	if (__copy_from_user(cmnd, buf, cmd_size))
+	if (__copy_from_user(cmnd, buf, cmd_size)) {
+		sg_remove_request(sfp, srp);
 		return -EFAULT;
+	}
 	/*
 	 * SG_DXFER_TO_FROM_DEV is functionally equivalent to SG_DXFER_FROM_DEV,
 	 * but is is possible that the app intended SG_DXFER_TO_DEV, because there
@@ -38,6 +38,7 @@
 
 #define USB_VENDOR_GENESYS_LOGIC		0x05e3
 #define USB_VENDOR_SMSC				0x0424
+#define USB_PRODUCT_USB5534B			0x5534
 #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
 #define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
 
@@ -5439,8 +5440,11 @@ static void hub_event(struct work_struct *work)
 }
 
 static const struct usb_device_id hub_id_table[] = {
-    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_INT_CLASS,
+    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+                   | USB_DEVICE_ID_MATCH_PRODUCT
+                   | USB_DEVICE_ID_MATCH_INT_CLASS,
      .idVendor = USB_VENDOR_SMSC,
+      .idProduct = USB_PRODUCT_USB5534B,
       .bInterfaceClass = USB_CLASS_HUB,
       .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
     { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
@@ -2279,9 +2279,6 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep,
 	for_each_sg(sg, s, pending, i) {
 		trb = &dep->trb_pool[dep->trb_dequeue];
 
-		if (trb->ctrl & DWC3_TRB_CTRL_HWO)
-			break;
-
 		req->sg = sg_next(s);
 		req->num_pending_sgs--;
 
@@ -291,6 +291,9 @@ static ssize_t gadget_dev_desc_UDC_store(struct config_item *item,
 	char *name;
 	int ret;
 
+	if (strlen(page) < len)
+		return -EOVERFLOW;
+
 	name = kstrdup(page, GFP_KERNEL);
 	if (!name)
 		return -ENOMEM;
@@ -300,8 +300,10 @@ static int audio_bind(struct usb_composite_dev *cdev)
 		struct usb_descriptor_header *usb_desc;
 
 		usb_desc = usb_otg_descriptor_alloc(cdev->gadget);
-		if (!usb_desc)
+		if (!usb_desc) {
+			status = -ENOMEM;
 			goto fail;
+		}
 		usb_otg_descriptor_init(cdev->gadget, usb_desc);
 		otg_desc[0] = usb_desc;
 		otg_desc[1] = NULL;
@@ -179,8 +179,10 @@ static int cdc_bind(struct usb_composite_dev *cdev)
 		struct usb_descriptor_header *usb_desc;
 
 		usb_desc = usb_otg_descriptor_alloc(gadget);
-		if (!usb_desc)
+		if (!usb_desc) {
+			status = -ENOMEM;
 			goto fail1;
+		}
 		usb_otg_descriptor_init(gadget, usb_desc);
 		otg_desc[0] = usb_desc;
 		otg_desc[1] = NULL;
@@ -156,8 +156,10 @@ static int gncm_bind(struct usb_composite_dev *cdev)
 		struct usb_descriptor_header *usb_desc;
 
 		usb_desc = usb_otg_descriptor_alloc(gadget);
-		if (!usb_desc)
+		if (!usb_desc) {
+			status = -ENOMEM;
 			goto fail;
+		}
 		usb_otg_descriptor_init(gadget, usb_desc);
 		otg_desc[0] = usb_desc;
 		otg_desc[1] = NULL;
@@ -2653,6 +2653,8 @@ net2272_plat_probe(struct platform_device *pdev)
 err_req:
 	release_mem_region(base, len);
 err:
+	kfree(dev);
+
 	return ret;
 }
 
@@ -361,6 +361,7 @@ static int xhci_plat_remove(struct platform_device *dev)
 	struct clk *reg_clk = xhci->reg_clk;
 	struct usb_hcd *shared_hcd = xhci->shared_hcd;
 
+	pm_runtime_get_sync(&dev->dev);
 	xhci->xhc_state |= XHCI_STATE_REMOVING;
 
 	usb_remove_hcd(shared_hcd);
@@ -374,8 +375,9 @@ static int xhci_plat_remove(struct platform_device *dev)
 	clk_disable_unprepare(reg_clk);
 	usb_put_hcd(hcd);
 
-	pm_runtime_set_suspended(&dev->dev);
 	pm_runtime_disable(&dev->dev);
+	pm_runtime_put_noidle(&dev->dev);
+	pm_runtime_set_suspended(&dev->dev);
 
 	return 0;
 }
@@ -3331,8 +3331,8 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
 			/* New sg entry */
 			--num_sgs;
 			sent_len -= block_len;
-			if (num_sgs != 0) {
-				sg = sg_next(sg);
+			sg = sg_next(sg);
+			if (num_sgs != 0 && sg) {
 				block_len = sg_dma_len(sg);
 				addr = (u64) sg_dma_address(sg);
 				addr += sent_len;
@@ -2051,8 +2051,8 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
 			}
 		}
 
+		kref_put(&wdata2->refcount, cifs_writedata_release);
 		if (rc) {
-			kref_put(&wdata2->refcount, cifs_writedata_release);
 			if (is_retryable_error(rc))
 				continue;
 			i += nr_pages;
@@ -1269,6 +1269,8 @@ int flush_old_exec(struct linux_binprm * bprm)
 	 */
 	set_mm_exe_file(bprm->mm, bprm->file);
 
+	would_dump(bprm, bprm->file);
+
 	/*
 	 * Release all of the old mmap stuff
 	 */
@@ -1814,8 +1816,6 @@ static int __do_execve_file(int fd, struct filename *filename,
 	if (retval < 0)
 		goto out;
 
-	would_dump(bprm, bprm->file);
-
 	retval = exec_binprm(bprm);
 	if (retval < 0)
 		goto out;
@@ -530,10 +530,12 @@ static int gfs2_walk_metadata(struct inode *inode, struct metapath *mp,
 
 		/* Advance in metadata tree. */
 		(mp->mp_list[hgt])++;
-		if (mp->mp_list[hgt] >= sdp->sd_inptrs) {
-			if (!hgt)
+		if (hgt) {
+			if (mp->mp_list[hgt] >= sdp->sd_inptrs)
+				goto lower_metapath;
+		} else {
+			if (mp->mp_list[hgt] >= sdp->sd_diptrs)
 				break;
-			goto lower_metapath;
 		}
 
fill_up_metapath:
@@ -879,10 +881,9 @@ static int gfs2_iomap_get(struct inode *inode, loff_t pos, loff_t length,
 			ret = -ENOENT;
 			goto unlock;
 		} else {
-			/* report a hole */
 			iomap->offset = pos;
 			iomap->length = length;
-			goto do_alloc;
+			goto hole_found;
 		}
 	}
 	iomap->length = size;
@@ -936,8 +937,6 @@ static int gfs2_iomap_get(struct inode *inode, loff_t pos, loff_t length,
 		return ret;
 
 do_alloc:
-	iomap->addr = IOMAP_NULL_ADDR;
-	iomap->type = IOMAP_HOLE;
 	if (flags & IOMAP_REPORT) {
 		if (pos >= size)
 			ret = -ENOENT;
@@ -959,6 +958,9 @@ static int gfs2_iomap_get(struct inode *inode, loff_t pos, loff_t length,
 		if (pos < size && height == ip->i_height)
 			ret = gfs2_hole_size(inode, lblock, len, mp, iomap);
 	}
+hole_found:
+	iomap->addr = IOMAP_NULL_ADDR;
+	iomap->type = IOMAP_HOLE;
 	goto out;
 }
 
@ -88,8 +88,10 @@ enum fscache_checkaux nfs_fscache_inode_check_aux(void *cookie_netfs_data,
|
||||||
return FSCACHE_CHECKAUX_OBSOLETE;
|
return FSCACHE_CHECKAUX_OBSOLETE;
|
||||||
|
|
||||||
memset(&auxdata, 0, sizeof(auxdata));
|
memset(&auxdata, 0, sizeof(auxdata));
|
||||||
auxdata.mtime = timespec64_to_timespec(nfsi->vfs_inode.i_mtime);
|
auxdata.mtime_sec = nfsi->vfs_inode.i_mtime.tv_sec;
|
||||||
auxdata.ctime = timespec64_to_timespec(nfsi->vfs_inode.i_ctime);
|
auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
|
||||||
|
auxdata.ctime_sec = nfsi->vfs_inode.i_ctime.tv_sec;
|
||||||
|
auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
|
||||||
|
|
||||||
if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
|
if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
|
||||||
auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
|
auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
|
||||||
|
|
|
@ -192,7 +192,8 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int
|
||||||
 	/* create a cache index for looking up filehandles */
 	nfss->fscache = fscache_acquire_cookie(nfss->nfs_client->fscache,
 					       &nfs_fscache_super_index_def,
-					       key, sizeof(*key) + ulen,
+					       &key->key,
+					       sizeof(key->key) + ulen,
 					       NULL, 0,
 					       nfss, 0, true);
 	dfprintk(FSCACHE, "NFS: get superblock cookie (0x%p/0x%p)\n",
@@ -230,6 +231,19 @@ void nfs_fscache_release_super_cookie(struct super_block *sb)
 	}
 }
 
+static void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata,
+				       struct nfs_inode *nfsi)
+{
+	memset(auxdata, 0, sizeof(*auxdata));
+	auxdata->mtime_sec  = nfsi->vfs_inode.i_mtime.tv_sec;
+	auxdata->mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
+	auxdata->ctime_sec  = nfsi->vfs_inode.i_ctime.tv_sec;
+	auxdata->ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
+
+	if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
+		auxdata->change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
+}
+
 /*
  * Initialise the per-inode cache cookie pointer for an NFS inode.
  */
@@ -243,12 +257,7 @@ void nfs_fscache_init_inode(struct inode *inode)
 	if (!(nfss->fscache && S_ISREG(inode->i_mode)))
 		return;
 
-	memset(&auxdata, 0, sizeof(auxdata));
-	auxdata.mtime = timespec64_to_timespec(nfsi->vfs_inode.i_mtime);
-	auxdata.ctime = timespec64_to_timespec(nfsi->vfs_inode.i_ctime);
-
-	if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
-		auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
+	nfs_fscache_update_auxdata(&auxdata, nfsi);
 
 	nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache,
 					       &nfs_fscache_inode_object_def,
@@ -268,9 +277,7 @@ void nfs_fscache_clear_inode(struct inode *inode)
 
 	dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie);
 
-	memset(&auxdata, 0, sizeof(auxdata));
-	auxdata.mtime = timespec64_to_timespec(nfsi->vfs_inode.i_mtime);
-	auxdata.ctime = timespec64_to_timespec(nfsi->vfs_inode.i_ctime);
+	nfs_fscache_update_auxdata(&auxdata, nfsi);
 	fscache_relinquish_cookie(cookie, &auxdata, false);
 	nfsi->fscache = NULL;
 }
@@ -310,9 +317,7 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp)
 	if (!fscache_cookie_valid(cookie))
 		return;
 
-	memset(&auxdata, 0, sizeof(auxdata));
-	auxdata.mtime = timespec64_to_timespec(nfsi->vfs_inode.i_mtime);
-	auxdata.ctime = timespec64_to_timespec(nfsi->vfs_inode.i_ctime);
+	nfs_fscache_update_auxdata(&auxdata, nfsi);
 
 	if (inode_is_open_for_write(inode)) {
 		dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi);
@@ -66,9 +66,11 @@ struct nfs_fscache_key {
  * cache object.
  */
 struct nfs_fscache_inode_auxdata {
-	struct timespec	mtime;
-	struct timespec	ctime;
-	u64		change_attr;
+	s64	mtime_sec;
+	s64	mtime_nsec;
+	s64	ctime_sec;
+	s64	ctime_nsec;
+	u64	change_attr;
 };
 
 /*
@@ -351,4 +351,10 @@ static inline void *offset_to_ptr(const int *off)
 	compiletime_assert(__native_word(t),				\
 		"Need native word sized stores/loads for atomicity.")
 
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
 #endif /* __LINUX_COMPILER_H */
@@ -959,7 +959,7 @@ struct file_handle {
 	__u32 handle_bytes;
 	int handle_type;
 	/* file identifier */
-	unsigned char f_handle[0];
+	unsigned char f_handle[];
 };
 
 static inline struct file *get_file(struct file *f)
@@ -220,10 +220,8 @@ struct pnp_card {
 #define global_to_pnp_card(n) list_entry(n, struct pnp_card, global_list)
 #define protocol_to_pnp_card(n) list_entry(n, struct pnp_card, protocol_list)
 #define to_pnp_card(n) container_of(n, struct pnp_card, dev)
 #define pnp_for_each_card(card) \
-	for((card) = global_to_pnp_card(pnp_cards.next); \
-	(card) != global_to_pnp_card(&pnp_cards); \
-	(card) = global_to_pnp_card((card)->global_list.next))
+	list_for_each_entry(card, &pnp_cards, global_list)
 
 struct pnp_card_link {
 	struct pnp_card *card;
@@ -276,14 +274,9 @@ struct pnp_dev {
 #define card_to_pnp_dev(n) list_entry(n, struct pnp_dev, card_list)
 #define protocol_to_pnp_dev(n) list_entry(n, struct pnp_dev, protocol_list)
 #define to_pnp_dev(n) container_of(n, struct pnp_dev, dev)
-#define pnp_for_each_dev(dev) \
-	for((dev) = global_to_pnp_dev(pnp_global.next); \
-	(dev) != global_to_pnp_dev(&pnp_global); \
-	(dev) = global_to_pnp_dev((dev)->global_list.next))
-#define card_for_each_dev(card,dev) \
-	for((dev) = card_to_pnp_dev((card)->devices.next); \
-	(dev) != card_to_pnp_dev(&(card)->devices); \
-	(dev) = card_to_pnp_dev((dev)->card_list.next))
+#define pnp_for_each_dev(dev) list_for_each_entry(dev, &pnp_global, global_list)
+#define card_for_each_dev(card, dev) \
+	list_for_each_entry(dev, &(card)->devices, card_list)
 #define pnp_dev_name(dev) (dev)->name
 
 static inline void *pnp_get_drvdata(struct pnp_dev *pdev)
@@ -437,14 +430,10 @@ struct pnp_protocol {
 };
 
 #define to_pnp_protocol(n) list_entry(n, struct pnp_protocol, protocol_list)
-#define protocol_for_each_card(protocol,card) \
-	for((card) = protocol_to_pnp_card((protocol)->cards.next); \
-	(card) != protocol_to_pnp_card(&(protocol)->cards); \
-	(card) = protocol_to_pnp_card((card)->protocol_list.next))
-#define protocol_for_each_dev(protocol,dev) \
-	for((dev) = protocol_to_pnp_dev((protocol)->devices.next); \
-	(dev) != protocol_to_pnp_dev(&(protocol)->devices); \
-	(dev) = protocol_to_pnp_dev((dev)->protocol_list.next))
+#define protocol_for_each_card(protocol, card) \
+	list_for_each_entry(card, &(protocol)->cards, protocol_list)
+#define protocol_for_each_dev(protocol, dev) \
+	list_for_each_entry(dev, &(protocol)->devices, protocol_list)
 
 extern struct bus_type pnp_bus_type;
@@ -66,7 +66,7 @@ struct tty_buffer {
 	int read;
 	int flags;
 	/* Data points here */
-	unsigned long data[0];
+	unsigned long data[];
 };
 
 /* Values for .flags field of tty_buffer */
@@ -85,7 +85,7 @@ struct nf_conn {
 	struct hlist_node	nat_bysource;
 #endif
 	/* all members below initialized via memset */
-	u8 __nfct_init_offset[0];
+	struct { } __nfct_init_offset;
 
 	/* If we were expected by an expectation, this will be it */
 	struct nf_conn *master;
@@ -1373,6 +1373,19 @@ static inline int tcp_full_space(const struct sock *sk)
 	return tcp_win_from_space(sk, sk->sk_rcvbuf);
 }
 
+/* We provision sk_rcvbuf around 200% of sk_rcvlowat.
+ * If 87.5 % (7/8) of the space has been consumed, we want to override
+ * SO_RCVLOWAT constraint, since we are receiving skbs with too small
+ * len/truesize ratio.
+ */
+static inline bool tcp_rmem_pressure(const struct sock *sk)
+{
+	int rcvbuf = READ_ONCE(sk->sk_rcvbuf);
+	int threshold = rcvbuf - (rcvbuf >> 3);
+
+	return atomic_read(&sk->sk_rmem_alloc) > threshold;
+}
+
 extern void tcp_openreq_init_rwin(struct request_sock *req,
 				  const struct sock *sk_listener,
 				  const struct dst_entry *dst);
@@ -76,6 +76,7 @@ struct snd_rawmidi_runtime {
 	size_t avail_min;	/* min avail for wakeup */
 	size_t avail;		/* max used buffer for wakeup */
 	size_t xruns;		/* over/underruns counter */
+	int buffer_ref;		/* buffer reference count */
 	/* misc */
 	spinlock_t lock;
 	wait_queue_head_t sleep;
@@ -767,6 +767,8 @@ asmlinkage __visible void __init start_kernel(void)
 
 	/* Do the rest non-__init'ed, we're now alive */
 	rest_init();
+
+	prevent_tail_call_optimization();
 }
 
 /* Call all constructor functions linked into the kernel. */
 ipc/util.c | 12
@@ -735,21 +735,21 @@ static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
 			total++;
 	}
 
-	*new_pos = pos + 1;
+	ipc = NULL;
 	if (total >= ids->in_use)
-		return NULL;
+		goto out;
 
 	for (; pos < IPCMNI; pos++) {
 		ipc = idr_find(&ids->ipcs_idr, pos);
 		if (ipc != NULL) {
 			rcu_read_lock();
 			ipc_lock_object(ipc);
-			return ipc;
+			break;
 		}
 	}
-
-	/* Out of range - return NULL to terminate iteration */
-	return NULL;
+out:
+	*new_pos = pos + 1;
+	return ipc;
 }
 
 static void *sysvipc_proc_next(struct seq_file *s, void *it, loff_t *pos)
@@ -2149,7 +2149,11 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	int retval = -ENOMEM;
 
-	spin_lock_irq(&info->lock);
+	/*
+	 * What serializes the accesses to info->flags?
+	 * ipc_lock_object() when called from shmctl_do_lock(),
+	 * no serialization needed when called from shm_destroy().
+	 */
 	if (lock && !(info->flags & VM_LOCKED)) {
 		if (!user_shm_lock(inode->i_size, user))
 			goto out_nomem;
@@ -2164,7 +2168,6 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 	retval = 0;
 
 out_nomem:
-	spin_unlock_irq(&info->lock);
 	return retval;
 }
@@ -8276,11 +8276,13 @@ static void netdev_sync_lower_features(struct net_device *upper,
 			netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
 				   &feature, lower->name);
 			lower->wanted_features &= ~feature;
-			netdev_update_features(lower);
+			__netdev_update_features(lower);
 
 			if (unlikely(lower->features & feature))
 				netdev_WARN(upper, "failed to disable %pNF on %s!\n",
 					    &feature, lower->name);
+			else
+				netdev_features_change(lower);
 		}
 	}
 }
@@ -154,6 +154,7 @@ static void sched_send_work(struct timer_list *t)
 static void trace_drop_common(struct sk_buff *skb, void *location)
 {
 	struct net_dm_alert_msg *msg;
+	struct net_dm_drop_point *point;
 	struct nlmsghdr *nlh;
 	struct nlattr *nla;
 	int i;
@@ -172,11 +173,13 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
 	nlh = (struct nlmsghdr *)dskb->data;
 	nla = genlmsg_data(nlmsg_data(nlh));
 	msg = nla_data(nla);
+	point = msg->points;
 	for (i = 0; i < msg->entries; i++) {
-		if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
-			msg->points[i].count++;
+		if (!memcmp(&location, &point->pc, sizeof(void *))) {
+			point->count++;
 			goto out;
 		}
+		point++;
 	}
 	if (msg->entries == dm_hit_limit)
 		goto out;
@@ -185,8 +188,8 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
 	 */
 	__nla_reserve_nohdr(dskb, sizeof(struct net_dm_drop_point));
 	nla->nla_len += NLA_ALIGN(sizeof(struct net_dm_drop_point));
-	memcpy(msg->points[msg->entries].pc, &location, sizeof(void *));
-	msg->points[msg->entries].count = 1;
+	memcpy(point->pc, &location, sizeof(void *));
+	point->count = 1;
 	msg->entries++;
 
 	if (!timer_pending(&data->send_timer)) {
@@ -240,6 +240,8 @@ static void net_prio_attach(struct cgroup_taskset *tset)
 	struct task_struct *p;
 	struct cgroup_subsys_state *css;
 
+	cgroup_sk_alloc_disable();
+
 	cgroup_taskset_for_each(p, css, tset) {
 		void *v = (void *)(unsigned long)css->cgroup->id;
@@ -412,7 +412,7 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
 
 		err = dsa_switch_setup(ds);
 		if (err)
-			return err;
+			continue;
 
 		for (port = 0; port < ds->num_ports; port++) {
 			dp = &ds->ports[port];
@@ -1272,7 +1272,8 @@ static int cipso_v4_parsetag_rbm(const struct cipso_v4_doi *doi_def,
 			return ret_val;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	return 0;
@@ -1453,7 +1454,8 @@ static int cipso_v4_parsetag_rng(const struct cipso_v4_doi *doi_def,
 			return ret_val;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	return 0;
@@ -906,7 +906,7 @@ void ip_rt_send_redirect(struct sk_buff *skb)
 	/* Check for load limit; set rate_last to the latest sent
 	 * redirect.
 	 */
-	if (peer->rate_tokens == 0 ||
+	if (peer->n_redirects == 0 ||
 	    time_after(jiffies,
 		       (peer->rate_last +
 			(ip_rt_redirect_load << peer->n_redirects)))) {
@@ -488,9 +488,17 @@ static void tcp_tx_timestamp(struct sock *sk, u16 tsflags)
 static inline bool tcp_stream_is_readable(const struct tcp_sock *tp,
 					  int target, struct sock *sk)
 {
-	return (READ_ONCE(tp->rcv_nxt) - tp->copied_seq >= target) ||
-		(sk->sk_prot->stream_memory_read ?
-		sk->sk_prot->stream_memory_read(sk) : false);
+	int avail = READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq);
+
+	if (avail > 0) {
+		if (avail >= target)
+			return true;
+		if (tcp_rmem_pressure(sk))
+			return true;
+	}
+	if (sk->sk_prot->stream_memory_read)
+		return sk->sk_prot->stream_memory_read(sk);
+	return false;
 }
 
 /*
@@ -1774,10 +1782,11 @@ static int tcp_zerocopy_receive(struct sock *sk,
 
 	down_read(&current->mm->mmap_sem);
 
-	ret = -EINVAL;
 	vma = find_vma(current->mm, address);
-	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops)
-		goto out;
+	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops) {
+		up_read(&current->mm->mmap_sem);
+		return -EINVAL;
+	}
 	zc->length = min_t(unsigned long, zc->length, vma->vm_end - address);
 
 	tp = tcp_sk(sk);
@@ -2134,14 +2143,16 @@ int tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int nonblock,
 			tp->urg_data = 0;
 			tcp_fast_path_check(sk);
 		}
-		if (used + offset < skb->len)
-			continue;
+
 		if (TCP_SKB_CB(skb)->has_rxtstamp) {
 			tcp_update_recv_tstamps(skb, &tss);
 			has_tss = true;
 			has_cmsg = true;
 		}
+
+		if (used + offset < skb->len)
+			continue;
+
 		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
 			goto found_fin_ok;
 		if (!(flags & MSG_PEEK))
@@ -4683,7 +4683,8 @@ void tcp_data_ready(struct sock *sk)
 	const struct tcp_sock *tp = tcp_sk(sk);
 	int avail = tp->rcv_nxt - tp->copied_seq;
 
-	if (avail < sk->sk_rcvlowat && !sock_flag(sk, SOCK_DONE))
+	if (avail < sk->sk_rcvlowat && !tcp_rmem_pressure(sk) &&
+	    !sock_flag(sk, SOCK_DONE))
 		return;
 
 	sk->sk_data_ready(sk);
@@ -1061,7 +1061,8 @@ static int calipso_opt_getattr(const unsigned char *calipso,
 			goto getattr_return;
 		}
 
-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+		if (secattr->attr.mls.cat)
+			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
 	}
 
 	secattr->type = NETLBL_NLTYPE_CALIPSO;
@@ -2360,8 +2360,10 @@ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
 	const struct in6_addr *daddr, *saddr;
 	struct rt6_info *rt6 = (struct rt6_info *)dst;
 
-	if (dst_metric_locked(dst, RTAX_MTU))
-		return;
+	/* Note: do *NOT* check dst_metric_locked(dst, RTAX_MTU)
+	 * IPv6 pmtu discovery isn't optional, so 'mtu lock' cannot disable it.
+	 * [see also comment in rt6_mtu_change_route()]
+	 */
 
 	if (iph) {
 		daddr = &iph->daddr;
@@ -1352,9 +1352,9 @@ __nf_conntrack_alloc(struct net *net,
 	*(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash;
 	ct->status = 0;
 	write_pnet(&ct->ct_net, net);
-	memset(&ct->__nfct_init_offset[0], 0,
+	memset(&ct->__nfct_init_offset, 0,
 	       offsetof(struct nf_conn, proto) -
-	       offsetof(struct nf_conn, __nfct_init_offset[0]));
+	       offsetof(struct nf_conn, __nfct_init_offset));
 
 	nf_ct_zone_add(ct, zone);
@@ -36,6 +36,11 @@ static bool nft_rbtree_interval_end(const struct nft_rbtree_elem *rbe)
 	       (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END);
 }
 
+static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe)
+{
+	return !nft_rbtree_interval_end(rbe);
+}
+
 static bool nft_rbtree_equal(const struct nft_set *set, const void *this,
 			     const struct nft_rbtree_elem *interval)
 {
@@ -67,7 +72,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
 			if (interval &&
 			    nft_rbtree_equal(set, this, interval) &&
 			    nft_rbtree_interval_end(rbe) &&
-			    !nft_rbtree_interval_end(interval))
+			    nft_rbtree_interval_start(interval))
 				continue;
 			interval = rbe;
 		} else if (d > 0)
@@ -92,7 +97,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
 
 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
 	    nft_set_elem_active(&interval->ext, genmask) &&
-	    !nft_rbtree_interval_end(interval)) {
+	    nft_rbtree_interval_start(interval)) {
 		*ext = &interval->ext;
 		return true;
 	}
@@ -221,9 +226,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
 			p = &parent->rb_right;
 		else {
 			if (nft_rbtree_interval_end(rbe) &&
-			    !nft_rbtree_interval_end(new)) {
+			    nft_rbtree_interval_start(new)) {
 				p = &parent->rb_left;
-			} else if (!nft_rbtree_interval_end(rbe) &&
+			} else if (nft_rbtree_interval_start(rbe) &&
 				   nft_rbtree_interval_end(new)) {
 				p = &parent->rb_right;
 			} else if (nft_set_elem_active(&rbe->ext, genmask)) {
@@ -314,10 +319,10 @@ static void *nft_rbtree_deactivate(const struct net *net,
 			parent = parent->rb_right;
 		else {
 			if (nft_rbtree_interval_end(rbe) &&
-			    !nft_rbtree_interval_end(this)) {
+			    nft_rbtree_interval_start(this)) {
 				parent = parent->rb_left;
 				continue;
-			} else if (!nft_rbtree_interval_end(rbe) &&
+			} else if (nft_rbtree_interval_start(rbe) &&
 				   nft_rbtree_interval_end(this)) {
 				parent = parent->rb_right;
 				continue;
@@ -748,6 +748,12 @@ int netlbl_catmap_getlong(struct netlbl_lsm_catmap *catmap,
 	if ((off & (BITS_PER_LONG - 1)) != 0)
 		return -EINVAL;
 
+	/* a null catmap is equivalent to an empty one */
+	if (!catmap) {
+		*offset = (u32)-1;
+		return 0;
+	}
+
 	if (off < catmap->startbit) {
 		off = catmap->startbit;
 		*offset = off;
@@ -112,6 +112,17 @@ static void snd_rawmidi_input_event_work(struct work_struct *work)
 		runtime->event(runtime->substream);
 }
 
+/* buffer refcount management: call with runtime->lock held */
+static inline void snd_rawmidi_buffer_ref(struct snd_rawmidi_runtime *runtime)
+{
+	runtime->buffer_ref++;
+}
+
+static inline void snd_rawmidi_buffer_unref(struct snd_rawmidi_runtime *runtime)
+{
+	runtime->buffer_ref--;
+}
+
 static int snd_rawmidi_runtime_create(struct snd_rawmidi_substream *substream)
 {
 	struct snd_rawmidi_runtime *runtime;
@@ -661,6 +672,11 @@ static int resize_runtime_buffer(struct snd_rawmidi_runtime *runtime,
 	if (!newbuf)
 		return -ENOMEM;
 	spin_lock_irq(&runtime->lock);
+	if (runtime->buffer_ref) {
+		spin_unlock_irq(&runtime->lock);
+		kvfree(newbuf);
+		return -EBUSY;
+	}
 	oldbuf = runtime->buffer;
 	runtime->buffer = newbuf;
 	runtime->buffer_size = params->buffer_size;
@@ -960,8 +976,10 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
 	long result = 0, count1;
 	struct snd_rawmidi_runtime *runtime = substream->runtime;
 	unsigned long appl_ptr;
+	int err = 0;
 
 	spin_lock_irqsave(&runtime->lock, flags);
+	snd_rawmidi_buffer_ref(runtime);
 	while (count > 0 && runtime->avail) {
 		count1 = runtime->buffer_size - runtime->appl_ptr;
 		if (count1 > count)
@@ -980,16 +998,19 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
 		if (userbuf) {
 			spin_unlock_irqrestore(&runtime->lock, flags);
 			if (copy_to_user(userbuf + result,
-					 runtime->buffer + appl_ptr, count1)) {
-				return result > 0 ? result : -EFAULT;
-			}
+					 runtime->buffer + appl_ptr, count1))
+				err = -EFAULT;
 			spin_lock_irqsave(&runtime->lock, flags);
+			if (err)
+				goto out;
 		}
 		result += count1;
 		count -= count1;
 	}
+ out:
+	snd_rawmidi_buffer_unref(runtime);
 	spin_unlock_irqrestore(&runtime->lock, flags);
-	return result;
+	return result > 0 ? result : err;
 }
 
 long snd_rawmidi_kernel_read(struct snd_rawmidi_substream *substream,
@@ -1261,6 +1282,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
 			return -EAGAIN;
 		}
 	}
+	snd_rawmidi_buffer_ref(runtime);
 	while (count > 0 && runtime->avail > 0) {
 		count1 = runtime->buffer_size - runtime->appl_ptr;
 		if (count1 > count)
@@ -1292,6 +1314,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
 	}
       __end:
 	count1 = runtime->avail < runtime->buffer_size;
+	snd_rawmidi_buffer_unref(runtime);
 	spin_unlock_irqrestore(&runtime->lock, flags);
 	if (count1)
 		snd_rawmidi_output_trigger(substream, 1);
@@ -2211,7 +2211,9 @@ static int generic_hdmi_build_controls(struct hda_codec *codec)
 
 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
+		struct hdmi_eld *pin_eld = &per_pin->sink_eld;
 
+		pin_eld->eld_valid = false;
 		hdmi_present_sense(per_pin, 0);
 	}
@@ -5524,6 +5524,15 @@ static void alc233_alc662_fixup_lenovo_dual_codecs(struct hda_codec *codec,
 	}
 }
 
+static void alc225_fixup_s3_pop_noise(struct hda_codec *codec,
+				      const struct hda_fixup *fix, int action)
+{
+	if (action != HDA_FIXUP_ACT_PRE_PROBE)
+		return;
+
+	codec->power_save_node = 1;
+}
+
 /* Forcibly assign NID 0x03 to HP/LO while NID 0x02 to SPK for EQ */
 static void alc274_fixup_bind_dacs(struct hda_codec *codec,
 				   const struct hda_fixup *fix, int action)
@@ -5607,6 +5616,7 @@ enum {
 	ALC269_FIXUP_HP_LINE1_MIC1_LED,
 	ALC269_FIXUP_INV_DMIC,
 	ALC269_FIXUP_LENOVO_DOCK,
+	ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST,
 	ALC269_FIXUP_NO_SHUTUP,
 	ALC286_FIXUP_SONY_MIC_NO_PRESENCE,
 	ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT,
@@ -5690,6 +5700,7 @@ enum {
 	ALC233_FIXUP_ACER_HEADSET_MIC,
 	ALC294_FIXUP_LENOVO_MIC_LOCATION,
 	ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
+	ALC225_FIXUP_S3_POP_NOISE,
 	ALC700_FIXUP_INTEL_REFERENCE,
 	ALC274_FIXUP_DELL_BIND_DACS,
 	ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
@@ -5918,6 +5929,12 @@ static const struct hda_fixup alc269_fixups[] = {
 		.chained = true,
 		.chain_id = ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT
 	},
+	[ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc269_fixup_limit_int_mic_boost,
+		.chained = true,
+		.chain_id = ALC269_FIXUP_LENOVO_DOCK,
+	},
 	[ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
@@ -6546,6 +6563,12 @@ static const struct hda_fixup alc269_fixups[] = {
 			{ }
 		},
 		.chained = true,
+		.chain_id = ALC225_FIXUP_S3_POP_NOISE
+	},
+	[ALC225_FIXUP_S3_POP_NOISE] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc225_fixup_s3_pop_noise,
+		.chained = true,
 		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
 	},
 	[ALC700_FIXUP_INTEL_REFERENCE] = {
@@ -6997,7 +7020,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
 	SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
-	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK),
+	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST),
 	SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
 	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
 	SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
@@ -7134,6 +7157,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
 	{.id = ALC269_FIXUP_HEADSET_MODE, .name = "headset-mode"},
 	{.id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC, .name = "headset-mode-no-hp-mic"},
 	{.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"},
+	{.id = ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST, .name = "lenovo-dock-limit-boost"},
 	{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
 	{.id = ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED, .name = "hp-dock-gpio-mic1-led"},
 	{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
@@ -7803,8 +7827,6 @@ static int patch_alc269(struct hda_codec *codec)
 		spec->gen.mixer_nid = 0;
 		break;
 	case 0x10ec0225:
-		codec->power_save_node = 1;
-		/* fall through */
 	case 0x10ec0295:
 	case 0x10ec0299:
 		spec->codec_variant = ALC269_TYPE_ALC225;
@@ -1334,13 +1334,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
 	    && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
 		msleep(20);
 
-	/* Zoom R16/24, Logitech H650e, Jabra 550a needs a tiny delay here,
-	 * otherwise requests like get/set frequency return as failed despite
-	 * actually succeeding.
+	/* Zoom R16/24, Logitech H650e, Jabra 550a, Kingston HyperX needs a tiny
+	 * delay here, otherwise requests like get/set frequency return as
+	 * failed despite actually succeeding.
 	 */
 	if ((chip->usb_id == USB_ID(0x1686, 0x00dd) ||
 	     chip->usb_id == USB_ID(0x046d, 0x0a46) ||
-	     chip->usb_id == USB_ID(0x0b0e, 0x0349)) &&
+	     chip->usb_id == USB_ID(0x0b0e, 0x0349) ||
+	     chip->usb_id == USB_ID(0x0951, 0x16ad)) &&
 	    (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
 		usleep_range(1000, 2000);
 }