This is the 4.19.154 stable release
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl+b3x4ACgkQONu9yGCS
aT4V5A//Zjotx9tNhbPFY/P06seBYbrrgqDQT87CkPn4L0PN50Yv4yWjvP0lKw1k
hE71dndlI0A+6EIJLFFthh0bmLK+TINjJy5bW+uLJM6i9Fa2IhHJaMjgb3W6iK/j
Iqi8GFyLAacckSJSV+DYz54di4dXc/cp/WpeKwGVBJCvFh3H9uLZUU+nAQ5X1tpY
PBP0hYFmkuRbGDsXjgiDxwTqeaqBXL9EG5QPj/HVF3Uxa9HjavOHRZHidI3HqA0h
svzNrvvstgi/r4anMGpaWg0rXdnnLr7q79Ox1b7doSMn0OQFliLdJ9/RTMhsb4rw
9Iki8ZkUPCj86xCW4jBkja4AVEhP0Ep/5+dQUpMOYe115dfuREl8DkiZeh0HC+bh
hoZk6GIbzxCTzUkVgDCL46BbBGSkTcOuaE8uriIPJlUCc9r/KrkB63tWRpL8wVuC
u49MmAZBjlzV9/j9nYJzBha1v9px+vw56kH9LmQHLTm+nG4BrAmiPzb2mjrMo8iv
PfVuUXSgTZNKDYKkTL6sz7nzrGESrKD5M1h3TN7f+vgYcaXqWT+pKPvhkvcRB3tR
iwzs/A+s1jL+wjstUgUVia6z5DtNEiNQ2pou2U1EK0UuGeUMbqu8d9924NcHf60u
Opg9dUWRLQTorl7dM2CsuDKFF5N+Vg08BfbAC2JNj0uFZBNL//0=
=DCvO
-----END PGP SIGNATURE-----

Merge 4.19.154 into android-4.19-stable

Changes in 4.19.154
    powerpc/tau: Check processor type before enabling TAU interrupt
    powerpc/tau: Disable TAU between measurements
    powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    RDMA/cma: Remove dead code for kernel rdmacm multicast
    RDMA/cma: Consolidate the destruction of a cma_multicast in one place
    perf intel-pt: Fix "context_switch event has no tid" error
    RDMA/hns: Set the unsupported wr opcode
    RDMA/hns: Fix missing sq_sig_type when querying QP
    kdb: Fix pager search for multi-line strings
    overflow: Include header file with SIZE_MAX declaration
    powerpc/perf: Exclude pmc5/6 from the irrelevant PMU group constraints
    powerpc/perf/hv-gpci: Fix starting index value
    cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_reboot_notifier
    IB/rdmavt: Fix sizeof mismatch
    f2fs: wait for sysfs kobject removal before freeing f2fs_sb_info
    lib/crc32.c: fix trivial typo in preprocessor condition
    ramfs: fix nommu mmap with gaps in the page cache
    rapidio: fix error handling path
    rapidio: fix the missed put_device() for rio_mport_add_riodev
    mailbox: avoid timer start from callback
    i2c: rcar: Auto select RESET_CONTROLLER
    PCI: iproc: Set affinity mask on MSI interrupts
    rpmsg: smd: Fix a kobj leak in in qcom_smd_parse_edge()
    pwm: img: Fix null pointer access in probe
    clk: rockchip: Initialize hw to error to avoid undefined behavior
    clk: at91: clk-main: update key before writing AT91_CKGR_MOR
    clk: bcm2835: add missing release if devm_clk_hw_register fails
    watchdog: Fix memleak in watchdog_cdev_register
    watchdog: Use put_device on error
    watchdog: sp5100: Fix definition of EFCH_PM_DECODEEN3
    svcrdma: fix bounce buffers for unaligned offsets and multiple pages
    ext4: limit entries returned when counting fsmap records
    vfio/pci: Clear token on bypass registration failure
    vfio iommu type1: Fix memory leak in vfio_iommu_type1_pin_pages
    SUNRPC: fix copying of multiple pages in gss_read_proxy_verf()
    Input: imx6ul_tsc - clean up some errors in imx6ul_tsc_resume()
    Input: stmfts - fix a & vs && typo
    Input: ep93xx_keypad - fix handling of platform_get_irq() error
    Input: omap4-keypad - fix handling of platform_get_irq() error
    Input: twl4030_keypad - fix handling of platform_get_irq() error
    Input: sun4i-ps2 - fix handling of platform_get_irq() error
    KVM: x86: emulating RDPID failure shall return #UD rather than #GP
    netfilter: conntrack: connection timeout after re-register
    netfilter: nf_fwd_netdev: clear timestamp in forwarding path
    ARM: dts: imx6sl: fix rng node
    ARM: dts: sun8i: r40: bananapi-m2-ultra: Fix dcdc1 regulator
    memory: omap-gpmc: Fix a couple off by ones
    memory: omap-gpmc: Fix build error without CONFIG_OF
    memory: fsl-corenet-cf: Fix handling of platform_get_irq() error
    arm64: dts: qcom: pm8916: Remove invalid reg size from wcd_codec
    arm64: dts: qcom: msm8916: Fix MDP/DSI interrupts
    ARM: dts: owl-s500: Fix incorrect PPI interrupt specifiers
    arm64: dts: zynqmp: Remove additional compatible string for i2c IPs
    powerpc/powernv/dump: Fix race while processing OPAL dump
    nvmet: fix uninitialized work for zero kato
    NTB: hw: amd: fix an issue about leak system resources
    sched/features: Fix !CONFIG_JUMP_LABEL case
    perf: correct SNOOPX field offset
    i2c: core: Restore acpi_walk_dep_device_list() getting called after registering the ACPI i2c devs
    block: ratelimit handle_bad_sector() message
    crypto: ccp - fix error handling
    media: firewire: fix memory leak
    media: ati_remote: sanity check for both endpoints
    media: st-delta: Fix reference count leak in delta_run_work
    media: sti: Fix reference count leaks
    media: exynos4-is: Fix several reference count leaks due to pm_runtime_get_sync
    media: exynos4-is: Fix a reference count leak due to pm_runtime_get_sync
    media: exynos4-is: Fix a reference count leak
    media: vsp1: Fix runtime PM imbalance on error
    media: platform: s3c-camif: Fix runtime PM imbalance on error
    media: platform: sti: hva: Fix runtime PM imbalance on error
    media: bdisp: Fix runtime PM imbalance on error
    media: media/pci: prevent memory leak in bttv_probe
    media: uvcvideo: Ensure all probed info is returned to v4l2
    mmc: sdio: Check for CISTPL_VERS_1 buffer size
    media: saa7134: avoid a shift overflow
    fs: dlm: fix configfs memory leak
    media: venus: core: Fix runtime PM imbalance in venus_probe
    ntfs: add check for mft record size in superblock
    ip_gre: set dev->hard_header_len and dev->needed_headroom properly
    mac80211: handle lack of sband->bitrates in rates
    PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
    scsi: mvumi: Fix error return in mvumi_io_attach()
    scsi: target: core: Add CONTROL field for trace events
    mic: vop: copy data to kernel space then write to io memory
    misc: vop: add round_up(x,4) for vring_size to avoid kernel panic
    usb: gadget: function: printer: fix use-after-free in __lock_acquire
    udf: Limit sparing table size
    udf: Avoid accessing uninitialized data on failed inode read
    USB: cdc-acm: handle broken union descriptors
    usb: dwc3: simple: add support for Hikey 970
    can: flexcan: flexcan_chip_stop(): add error handling and propagate error value
    ath9k: hif_usb: fix race condition between usb_get_urb() and usb_kill_anchored_urbs()
    misc: rtsx: Fix memory leak in rtsx_pci_probe
    reiserfs: only call unlock_new_inode() if I_NEW
    xfs: make sure the rt allocator doesn't run off the end
    usb: ohci: Default to per-port over-current protection
    Bluetooth: Only mark socket zapped after unlocking
    scsi: ibmvfc: Fix error return in ibmvfc_probe()
    brcmsmac: fix memory leak in wlc_phy_attach_lcnphy
    rtl8xxxu: prevent potential memory leak
    Fix use after free in get_capset_info callback.
    scsi: qedi: Protect active command list to avoid list corruption
    scsi: qedi: Fix list_del corruption while removing active I/O
    tty: ipwireless: fix error handling
    ipvs: Fix uninit-value in do_ip_vs_set_ctl()
    reiserfs: Fix memory leak in reiserfs_parse_options()
    mwifiex: don't call del_timer_sync() on uninitialized timer
    brcm80211: fix possible memleak in brcmf_proto_msgbuf_attach
    usb: core: Solve race condition in anchor cleanup functions
    scsi: ufs: ufs-qcom: Fix race conditions caused by ufs_qcom_testbus_config()
    ath10k: check idx validity in __ath10k_htt_rx_ring_fill_n()
    net: korina: cast KSEG0 address to pointer in kfree
    tty: serial: fsl_lpuart: fix lpuart32_poll_get_char
    usb: cdc-acm: add quirk to blacklist ETAS ES58X devices
    USB: cdc-wdm: Make wdm_flush() interruptible and add wdm_fsync().
    eeprom: at25: set minimum read/write access stride to 1
    usb: gadget: f_ncm: allow using NCM in SuperSpeed Plus gadgets.
    Linux 4.19.154

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I242a1afee6c5297423afd0f11e81f9a9f14ded77
commit ac43e7e5e4
122 changed files with 809 additions and 451 deletions

Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 153
+SUBLEVEL = 154
 EXTRAVERSION =
 NAME = "People's Front"
 
@@ -922,8 +922,10 @@
 };
 
 rngb: rngb@21b4000 {
+compatible = "fsl,imx6sl-rngb", "fsl,imx25-rngb";
 reg = <0x021b4000 0x4000>;
 interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+clocks = <&clks IMX6SL_CLK_DUMMY>;
 };
 
 weim: weim@21b8000 {
@@ -85,21 +85,21 @@
 global_timer: timer@b0020200 {
 compatible = "arm,cortex-a9-global-timer";
 reg = <0xb0020200 0x100>;
-interrupts = <GIC_PPI 0 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+interrupts = <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
 status = "disabled";
 };
 
 twd_timer: timer@b0020600 {
 compatible = "arm,cortex-a9-twd-timer";
 reg = <0xb0020600 0x20>;
-interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
 status = "disabled";
 };
 
 twd_wdt: wdt@b0020620 {
 compatible = "arm,cortex-a9-twd-wdt";
 reg = <0xb0020620 0xe0>;
-interrupts = <GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
+interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
 status = "disabled";
 };
 
@@ -206,16 +206,16 @@
 };
 
 &reg_dc1sw {
-regulator-min-microvolt = <3000000>;
-regulator-max-microvolt = <3000000>;
+regulator-min-microvolt = <3300000>;
+regulator-max-microvolt = <3300000>;
 regulator-name = "vcc-gmac-phy";
 };
 
 &reg_dcdc1 {
 regulator-always-on;
-regulator-min-microvolt = <3000000>;
-regulator-max-microvolt = <3000000>;
-regulator-name = "vcc-3v0";
+regulator-min-microvolt = <3300000>;
+regulator-max-microvolt = <3300000>;
+regulator-name = "vcc-3v3";
 };
 
 &reg_dcdc2 {
@@ -877,7 +877,7 @@
 reg-names = "mdp_phys";
 
 interrupt-parent = <&mdss>;
-interrupts = <0 0>;
+interrupts = <0>;
 
 clocks = <&gcc GCC_MDSS_AHB_CLK>,
 <&gcc GCC_MDSS_AXI_CLK>,
@@ -909,7 +909,7 @@
 reg-names = "dsi_ctrl";
 
 interrupt-parent = <&mdss>;
-interrupts = <4 0>;
+interrupts = <4>;
 
 assigned-clocks = <&gcc BYTE0_CLK_SRC>,
 <&gcc PCLK0_CLK_SRC>;
@@ -99,7 +99,7 @@
 
 wcd_codec: codec@f000 {
 compatible = "qcom,pm8916-wcd-analog-codec";
-reg = <0xf000 0x200>;
+reg = <0xf000>;
 reg-names = "pmic-codec-core";
 clocks = <&gcc GCC_CODEC_DIGCODEC_CLK>;
 clock-names = "mclk";
@@ -411,7 +411,7 @@
 };
 
 i2c0: i2c@ff020000 {
-compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
+compatible = "cdns,i2c-r1p14";
 status = "disabled";
 interrupt-parent = <&gic>;
 interrupts = <0 17 4>;
@@ -421,7 +421,7 @@
 };
 
 i2c1: i2c@ff030000 {
-compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10";
+compatible = "cdns,i2c-r1p14";
 status = "disabled";
 interrupt-parent = <&gic>;
 interrupts = <0 18 4>;
@@ -76,19 +76,6 @@ static inline int mm_is_thread_local(struct mm_struct *mm)
 return false;
 return cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
 }
-static inline void mm_reset_thread_local(struct mm_struct *mm)
-{
-WARN_ON(atomic_read(&mm->context.copros) > 0);
-/*
- * It's possible for mm_access to take a reference on mm_users to
- * access the remote mm from another thread, but it's not allowed
- * to set mm_cpumask, so mm_users may be > 1 here.
- */
-WARN_ON(current->mm != mm);
-atomic_set(&mm->context.active_cpus, 1);
-cpumask_clear(mm_cpumask(mm));
-cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
-}
 #else /* CONFIG_PPC_BOOK3S_64 */
 static inline int mm_is_thread_local(struct mm_struct *mm)
 {
@@ -40,7 +40,7 @@ static struct tau_temp
 unsigned char grew;
 } tau[NR_CPUS];
 
-#undef DEBUG
+static bool tau_int_enable;
 
 /* TODO: put these in a /proc interface, with some sanity checks, and maybe
 * dynamic adjustment to minimize # of interrupts */
@@ -54,62 +54,44 @@ static struct tau_temp
 
 static void set_thresholds(unsigned long cpu)
 {
-#ifdef CONFIG_TAU_INT
-/*
- * setup THRM1,
- * threshold, valid bit, enable interrupts, interrupt when below threshold
- */
-mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TIE | THRM1_TID);
+u32 maybe_tie = tau_int_enable ? THRM1_TIE : 0;
 
-/* setup THRM2,
- * threshold, valid bit, enable interrupts, interrupt when above threshold
- */
-mtspr (SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | THRM1_TIE);
-#else
-/* same thing but don't enable interrupts */
-mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | THRM1_TID);
-mtspr(SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V);
-#endif
+/* setup THRM1, threshold, valid bit, interrupt when below threshold */
+mtspr(SPRN_THRM1, THRM1_THRES(tau[cpu].low) | THRM1_V | maybe_tie | THRM1_TID);
+/* setup THRM2, threshold, valid bit, interrupt when above threshold */
+mtspr(SPRN_THRM2, THRM1_THRES(tau[cpu].high) | THRM1_V | maybe_tie);
 }
 
 static void TAUupdate(int cpu)
 {
-unsigned thrm;
-
-#ifdef DEBUG
-printk("TAUupdate ");
-#endif
+u32 thrm;
+u32 bits = THRM1_TIV | THRM1_TIN | THRM1_V;
 
 /* if both thresholds are crossed, the step_sizes cancel out
 * and the window winds up getting expanded twice. */
-if((thrm = mfspr(SPRN_THRM1)) & THRM1_TIV){ /* is valid? */
-if(thrm & THRM1_TIN){ /* crossed low threshold */
-if (tau[cpu].low >= step_size){
+thrm = mfspr(SPRN_THRM1);
+if ((thrm & bits) == bits) {
+mtspr(SPRN_THRM1, 0);
+
+if (tau[cpu].low >= step_size) {
 tau[cpu].low -= step_size;
 tau[cpu].high -= (step_size - window_expand);
 }
 tau[cpu].grew = 1;
-#ifdef DEBUG
-printk("low threshold crossed ");
-#endif
+pr_debug("%s: low threshold crossed\n", __func__);
 }
-}
-if((thrm = mfspr(SPRN_THRM2)) & THRM1_TIV){ /* is valid? */
-if(thrm & THRM1_TIN){ /* crossed high threshold */
-if (tau[cpu].high <= 127-step_size){
+thrm = mfspr(SPRN_THRM2);
+if ((thrm & bits) == bits) {
+mtspr(SPRN_THRM2, 0);
+
+if (tau[cpu].high <= 127 - step_size) {
 tau[cpu].low += (step_size - window_expand);
 tau[cpu].high += step_size;
 }
 tau[cpu].grew = 1;
-#ifdef DEBUG
-printk("high threshold crossed ");
-#endif
+pr_debug("%s: high threshold crossed\n", __func__);
 }
-}
 
-#ifdef DEBUG
-printk("grew = %d\n", tau[cpu].grew);
-#endif
 }
 
 #ifdef CONFIG_TAU_INT
@@ -134,17 +116,16 @@ void TAUException(struct pt_regs * regs)
 static void tau_timeout(void * info)
 {
 int cpu;
-unsigned long flags;
 int size;
 int shrink;
 
-/* disabling interrupts *should* be okay */
-local_irq_save(flags);
 cpu = smp_processor_id();
 
-#ifndef CONFIG_TAU_INT
+if (!tau_int_enable)
 TAUupdate(cpu);
-#endif
+
+/* Stop thermal sensor comparisons and interrupts */
+mtspr(SPRN_THRM3, 0);
 
 size = tau[cpu].high - tau[cpu].low;
 if (size > min_window && ! tau[cpu].grew) {
@@ -167,18 +148,12 @@ static void tau_timeout(void * info)
 
 set_thresholds(cpu);
 
-/*
- * Do the enable every time, since otherwise a bunch of (relatively)
- * complex sleep code needs to be added. One mtspr every time
- * tau_timeout is called is probably not a big deal.
- *
+/* Restart thermal sensor comparisons and interrupts.
+ *
 * The "PowerPC 740 and PowerPC 750 Microprocessor Datasheet"
 * recommends that "the maximum value be set in THRM3 under all
 * conditions."
 */
 mtspr(SPRN_THRM3, THRM3_SITV(0x1fff) | THRM3_E);
 
-local_irq_restore(flags);
 }
 
 static struct workqueue_struct *tau_workq;
@@ -225,6 +200,9 @@ static int __init TAU_init(void)
 return 1;
 }
 
+tau_int_enable = IS_ENABLED(CONFIG_TAU_INT) &&
+!strcmp(cur_cpu_spec->platform, "ppc750");
+
 tau_workq = alloc_workqueue("tau", WQ_UNBOUND, 1);
 if (!tau_workq)
 return -ENOMEM;
@@ -234,7 +212,7 @@ static int __init TAU_init(void)
 queue_work(tau_workq, &tau_work);
 
 pr_info("Thermal assist unit using %s, shrink_timer: %d ms\n",
-IS_ENABLED(CONFIG_TAU_INT) ? "interrupts" : "workqueue", shrink_timer);
+tau_int_enable ? "interrupts" : "workqueue", shrink_timer);
 tau_initialized = 1;
 
 return 0;
@@ -598,19 +598,29 @@ static void do_exit_flush_lazy_tlb(void *arg)
 struct mm_struct *mm = arg;
 unsigned long pid = mm->context.id;
 
+/*
+ * A kthread could have done a mmget_not_zero() after the flushing CPU
+ * checked mm_is_singlethreaded, and be in the process of
+ * kthread_use_mm when interrupted here. In that case, current->mm will
+ * be set to mm, because kthread_use_mm() setting ->mm and switching to
+ * the mm is done with interrupts off.
+ */
 if (current->mm == mm)
-return; /* Local CPU */
+goto out_flush;
 
 if (current->active_mm == mm) {
-/*
- * Must be a kernel thread because sender is single-threaded.
- */
-BUG_ON(current->mm);
+WARN_ON_ONCE(current->mm != NULL);
+/* Is a kernel thread and is using mm as the lazy tlb */
 mmgrab(&init_mm);
-switch_mm(mm, &init_mm, current);
 current->active_mm = &init_mm;
+switch_mm_irqs_off(mm, &init_mm, current);
 mmdrop(mm);
 }
 
+atomic_dec(&mm->context.active_cpus);
+cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
+
+out_flush:
 _tlbiel_pid(pid, RIC_FLUSH_ALL);
 }
 
@@ -625,7 +635,6 @@ static void exit_flush_lazy_tlbs(struct mm_struct *mm)
 */
 smp_call_function_many(mm_cpumask(mm), do_exit_flush_lazy_tlb,
 (void *)mm, 1);
-mm_reset_thread_local(mm);
 }
 
 void radix__flush_tlb_mm(struct mm_struct *mm)
@@ -95,7 +95,7 @@ REQUEST(__field(0, 8, partition_id)
 
 #define REQUEST_NAME system_performance_capabilities
 #define REQUEST_NUM 0x40
-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
+#define REQUEST_IDX_KIND "starting_index=0xffffffff"
 #include I(REQUEST_BEGIN)
 REQUEST(__field(0, 1, perf_collect_privileged)
 __field(0x1, 1, capability_mask)
@@ -223,7 +223,7 @@ REQUEST(__field(0, 2, partition_id)
 
 #define REQUEST_NAME system_hypervisor_times
 #define REQUEST_NUM 0xF0
-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
+#define REQUEST_IDX_KIND "starting_index=0xffffffff"
 #include I(REQUEST_BEGIN)
 REQUEST(__count(0, 8, time_spent_to_dispatch_virtual_processors)
 __count(0x8, 8, time_spent_processing_virtual_processor_timers)
@@ -234,7 +234,7 @@ REQUEST(__count(0, 8, time_spent_to_dispatch_virtual_processors)
 
 #define REQUEST_NAME system_tlbie_count_and_time
 #define REQUEST_NUM 0xF4
-#define REQUEST_IDX_KIND "starting_index=0xffffffffffffffff"
+#define REQUEST_IDX_KIND "starting_index=0xffffffff"
 #include I(REQUEST_BEGIN)
 REQUEST(__count(0, 8, tlbie_instructions_issued)
 /*
@@ -273,6 +273,15 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 
 mask |= CNST_PMC_MASK(pmc);
 value |= CNST_PMC_VAL(pmc);
 
+/*
+ * PMC5 and PMC6 are used to count cycles and instructions and
+ * they do not support most of the constraint bits. Add a check
+ * to exclude PMC5/6 from most of the constraints except for
+ * EBB/BHRB.
+ */
+if (pmc >= 5)
+goto ebb_bhrb;
 }
 
 if (pmc <= 4) {
@@ -331,6 +340,7 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
 }
 }
 
+ebb_bhrb:
 if (!pmc && ebb)
 /* EBB events must specify the PMC */
 return -1;
@@ -238,12 +238,11 @@ config TAU
 temperature within 2-4 degrees Celsius. This option shows the current
 on-die temperature in /proc/cpuinfo if the cpu supports it.
 
-Unfortunately, on some chip revisions, this sensor is very inaccurate
-and in many cases, does not work at all, so don't assume the cpu
-temp is actually what /proc/cpuinfo says it is.
+Unfortunately, this sensor is very inaccurate when uncalibrated, so
+don't assume the cpu temp is actually what /proc/cpuinfo says it is.
 
 config TAU_INT
-bool "Interrupt driven TAU driver (DANGEROUS)"
+bool "Interrupt driven TAU driver (EXPERIMENTAL)"
 depends on TAU
 ---help---
 The TAU supports an interrupt driven mode which causes an interrupt
@@ -251,12 +250,7 @@ config TAU_INT
 to get notified the temp has exceeded a range. With this option off,
 a timer is used to re-check the temperature periodically.
 
-However, on some cpus it appears that the TAU interrupt hardware
-is buggy and can cause a situation which would lead unexplained hard
-lockups.
-
-Unless you are extending the TAU driver, or enjoy kernel/hardware
-debugging, leave this option off.
+If in doubt, say N here.
 
 config TAU_AVERAGE
 bool "Average high and low temp"
@@ -322,15 +322,14 @@ static ssize_t dump_attr_read(struct file *filep, struct kobject *kobj,
 return count;
 }
 
-static struct dump_obj *create_dump_obj(uint32_t id, size_t size,
-uint32_t type)
+static void create_dump_obj(uint32_t id, size_t size, uint32_t type)
 {
 struct dump_obj *dump;
 int rc;
 
 dump = kzalloc(sizeof(*dump), GFP_KERNEL);
 if (!dump)
-return NULL;
+return;
 
 dump->kobj.kset = dump_kset;
 
@@ -350,21 +349,39 @@ static struct dump_obj *create_dump_obj(uint32_t id, size_t size,
 rc = kobject_add(&dump->kobj, NULL, "0x%x-0x%x", type, id);
 if (rc) {
 kobject_put(&dump->kobj);
-return NULL;
+return;
 }
 
+/*
+ * As soon as the sysfs file for this dump is created/activated there is
+ * a chance the opal_errd daemon (or any userspace) might read and
+ * acknowledge the dump before kobject_uevent() is called. If that
+ * happens then there is a potential race between
+ * dump_ack_store->kobject_put() and kobject_uevent() which leads to a
+ * use-after-free of a kernfs object resulting in a kernel crash.
+ *
+ * To avoid that, we need to take a reference on behalf of the bin file,
+ * so that our reference remains valid while we call kobject_uevent().
+ * We then drop our reference before exiting the function, leaving the
+ * bin file to drop the last reference (if it hasn't already).
+ */
+
+/* Take a reference for the bin file */
+kobject_get(&dump->kobj);
 rc = sysfs_create_bin_file(&dump->kobj, &dump->dump_attr);
-if (rc) {
-kobject_put(&dump->kobj);
-return NULL;
-}
+if (rc == 0) {
+kobject_uevent(&dump->kobj, KOBJ_ADD);
 
 pr_info("%s: New platform dump. ID = 0x%x Size %u\n",
 __func__, dump->id, dump->size);
+} else {
+/* Drop reference count taken for bin file */
+kobject_put(&dump->kobj);
+}
 
-kobject_uevent(&dump->kobj, KOBJ_ADD);
-return dump;
+/* Drop our reference */
+kobject_put(&dump->kobj);
+return;
 }
 
 static irqreturn_t process_dump(int irq, void *data)
@@ -3561,7 +3561,7 @@ static int em_rdpid(struct x86_emulate_ctxt *ctxt)
 u64 tsc_aux = 0;
 
 if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
-return emulate_gp(ctxt, 0);
+return emulate_ud(ctxt);
 ctxt->dst.val = tsc_aux;
 return X86EMUL_CONTINUE;
 }
@@ -2129,11 +2129,10 @@ static void handle_bad_sector(struct bio *bio, sector_t maxsector)
 {
 char b[BDEVNAME_SIZE];
 
-printk(KERN_INFO "attempt to access beyond end of device\n");
-printk(KERN_INFO "%s: rw=%d, want=%Lu, limit=%Lu\n",
+pr_info_ratelimited("attempt to access beyond end of device\n"
+"%s: rw=%d, want=%llu, limit=%llu\n",
 bio_devname(bio, b), bio->bi_opf,
-(unsigned long long)bio_end_sector(bio),
-(long long)maxsector);
+bio_end_sector(bio), maxsector);
 }
 
 #ifdef CONFIG_FAIL_MAKE_REQUEST
@@ -517,12 +517,17 @@ static int clk_sam9x5_main_set_parent(struct clk_hw *hw, u8 index)
 return -EINVAL;
 
 regmap_read(regmap, AT91_CKGR_MOR, &tmp);
-tmp &= ~MOR_KEY_MASK;
 
 if (index && !(tmp & AT91_PMC_MOSCSEL))
-regmap_write(regmap, AT91_CKGR_MOR, tmp | AT91_PMC_MOSCSEL);
+tmp = AT91_PMC_MOSCSEL;
 else if (!index && (tmp & AT91_PMC_MOSCSEL))
-regmap_write(regmap, AT91_CKGR_MOR, tmp & ~AT91_PMC_MOSCSEL);
+tmp = 0;
+else
+return 0;
+
+regmap_update_bits(regmap, AT91_CKGR_MOR,
+AT91_PMC_MOSCSEL | MOR_KEY_MASK,
+tmp | AT91_PMC_KEY);
 
 while (!clk_sam9x5_main_ready(regmap))
 cpu_relax();
@@ -1319,8 +1319,10 @@ static struct clk_hw *bcm2835_register_pll(struct bcm2835_cprman *cprman,
 pll->hw.init = &init;
 
 ret = devm_clk_hw_register(cprman->dev, &pll->hw);
-if (ret)
+if (ret) {
+kfree(pll);
 return NULL;
+}
 return &pll->hw;
 }
 
@@ -166,7 +166,7 @@ struct clk *rockchip_clk_register_halfdiv(const char *name,
 unsigned long flags,
 spinlock_t *lock)
 {
-struct clk *clk;
+struct clk *clk = ERR_PTR(-ENOMEM);
 struct clk_mux *mux = NULL;
 struct clk_gate *gate = NULL;
 struct clk_divider *div = NULL;
@@ -885,12 +885,15 @@ static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
 unsigned long action, void *unused)
 {
 int cpu;
-struct cpufreq_policy cpu_policy;
+struct cpufreq_policy *cpu_policy;
 
 rebooting = true;
 for_each_online_cpu(cpu) {
-cpufreq_get_policy(&cpu_policy, cpu);
-powernv_cpufreq_target_index(&cpu_policy, get_nominal_index());
+cpu_policy = cpufreq_cpu_get(cpu);
+if (!cpu_policy)
+continue;
+powernv_cpufreq_target_index(cpu_policy, get_nominal_index());
+cpufreq_cpu_put(cpu_policy);
 }
 
 return NOTIFY_DONE;
@@ -1752,7 +1752,7 @@ ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
 break;
 default:
 ret = -EINVAL;
-goto e_ctx;
+goto e_data;
 }
 } else {
 /* Stash the context */
@@ -94,8 +94,10 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 vgdev->capsets[i].id > 0, 5 * HZ);
 if (ret == 0) {
 DRM_ERROR("timed out waiting for cap set %d\n", i);
+spin_lock(&vgdev->display_info_lock);
 kfree(vgdev->capsets);
 vgdev->capsets = NULL;
+spin_unlock(&vgdev->display_info_lock);
 return;
 }
 DRM_INFO("cap set %d: id %d, max-version %d, max-size %d\n",
@@ -571,9 +571,13 @@ static void virtio_gpu_cmd_get_capset_info_cb(struct virtio_gpu_device *vgdev,
 int i = le32_to_cpu(cmd->capset_index);
 
 spin_lock(&vgdev->display_info_lock);
+if (vgdev->capsets) {
 vgdev->capsets[i].id = le32_to_cpu(resp->capset_id);
 vgdev->capsets[i].max_version = le32_to_cpu(resp->capset_max_version);
 vgdev->capsets[i].max_size = le32_to_cpu(resp->capset_max_size);
+} else {
+DRM_ERROR("invalid capset memory.");
+}
 spin_unlock(&vgdev->display_info_lock);
 wake_up(&vgdev->resp_wq);
 }
@@ -1117,6 +1117,7 @@ config I2C_RCAR
 tristate "Renesas R-Car I2C Controller"
 depends on ARCH_RENESAS || COMPILE_TEST
 select I2C_SLAVE
+select RESET_CONTROLLER if ARCH_RCAR_GEN3
 help
 If you say yes to this option, support will be included for the
 R-Car I2C controller.
@@ -219,6 +219,7 @@ static acpi_status i2c_acpi_add_device(acpi_handle handle, u32 level,
 void i2c_acpi_register_devices(struct i2c_adapter *adap)
 {
 acpi_status status;
+acpi_handle handle;
 
 if (!has_acpi_companion(&adap->dev))
 return;
@@ -229,6 +230,15 @@ void i2c_acpi_register_devices(struct i2c_adapter *adap)
 adap, NULL);
 if (ACPI_FAILURE(status))
 dev_warn(&adap->dev, "failed to enumerate I2C slaves\n");
 
+if (!adap->dev.parent)
+return;
+
+handle = ACPI_HANDLE(adap->dev.parent);
+if (!handle)
+return;
+
+acpi_walk_dep_device_list(handle);
 }
 
 const struct acpi_device_id *
@@ -693,7 +703,6 @@ int i2c_acpi_install_space_handler(struct i2c_adapter *adapter)
 return -ENOMEM;
 }
 
-acpi_walk_dep_device_list(handle);
 return 0;
 }
 
@@ -1678,19 +1678,30 @@ static void cma_release_port(struct rdma_id_private *id_priv)
 mutex_unlock(&lock);
 }
 
-static void cma_leave_roce_mc_group(struct rdma_id_private *id_priv,
+static void destroy_mc(struct rdma_id_private *id_priv,
 struct cma_multicast *mc)
 {
-struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr;
+if (rdma_cap_ib_mcast(id_priv->id.device, id_priv->id.port_num)) {
+ib_sa_free_multicast(mc->multicast.ib);
+kfree(mc);
+return;
+}
+
+if (rdma_protocol_roce(id_priv->id.device,
+id_priv->id.port_num)) {
+struct rdma_dev_addr *dev_addr =
+&id_priv->id.route.addr.dev_addr;
 struct net_device *ndev = NULL;
 
 if (dev_addr->bound_dev_if)
-ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if);
+ndev = dev_get_by_index(dev_addr->net,
+dev_addr->bound_dev_if);
 if (ndev) {
 cma_igmp_send(ndev, &mc->multicast.ib->rec.mgid, false);
 dev_put(ndev);
 }
 kref_put(&mc->mcref, release_mc);
+}
 }
 
 static void cma_leave_mc_groups(struct rdma_id_private *id_priv)
@@ -1698,16 +1709,10 @@ static void cma_leave_mc_groups(struct rdma_id_private *id_priv)
 struct cma_multicast *mc;
 
 while (!list_empty(&id_priv->mc_list)) {
-mc = container_of(id_priv->mc_list.next,
-struct cma_multicast, list);
+mc = list_first_entry(&id_priv->mc_list, struct cma_multicast,
+list);
 list_del(&mc->list);
-if (rdma_cap_ib_mcast(id_priv->cma_dev->device,
-id_priv->id.port_num)) {
-ib_sa_free_multicast(mc->multicast.ib);
-kfree(mc);
-} else {
-cma_leave_roce_mc_group(id_priv, mc);
-}
+destroy_mc(id_priv, mc);
 }
 }
 
@@ -4020,16 +4025,6 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast)
 else
 pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to join multicast. status %d\n",
 status);
-mutex_lock(&id_priv->qp_mutex);
-if (!status && id_priv->id.qp) {
-status = ib_attach_mcast(id_priv->id.qp, &multicast->rec.mgid,
-be16_to_cpu(multicast->rec.mlid));
-if (status)
-pr_debug_ratelimited("RDMA CM: MULTICAST_ERROR: failed to attach QP. status %d\n",
-status);
-}
-mutex_unlock(&id_priv->qp_mutex);
 
 event.status = status;
 event.param.ud.private_data = mc->context;
 if (!status) {
@@ -4283,6 +4278,10 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
 struct cma_multicast *mc;
 int ret;
 
+/* Not supported for kernel QPs */
+if (WARN_ON(id->qp))
+return -EINVAL;
+
 if (!id->device)
 return -EINVAL;
 
@@ -4333,26 +4332,15 @@ void rdma_leave_multicast(struct rdma_cm_id *id, struct sockaddr *addr)
 id_priv = container_of(id, struct rdma_id_private, id);
 spin_lock_irq(&id_priv->lock);
 list_for_each_entry(mc, &id_priv->mc_list, list) {
-if (!memcmp(&mc->addr, addr, rdma_addr_size(addr))) {
+if (memcmp(&mc->addr, addr, rdma_addr_size(addr)) != 0)
+continue;
 list_del(&mc->list);
 spin_unlock_irq(&id_priv->lock);
 
-if (id->qp)
-ib_detach_mcast(id->qp,
-&mc->multicast.ib->rec.mgid,
-be16_to_cpu(mc->multicast.ib->rec.mlid));
-
-BUG_ON(id_priv->cma_dev->device != id->device);
-
-if (rdma_cap_ib_mcast(id->device, id->port_num)) {
-ib_sa_free_multicast(mc->multicast.ib);
-kfree(mc);
-} else if (rdma_protocol_roce(id->device, id->port_num)) {
-cma_leave_roce_mc_group(id_priv, mc);
-}
+WARN_ON(id_priv->cma_dev->device != id->device);
+destroy_mc(id_priv, mc);
 return;
 }
-}
 spin_unlock_irq(&id_priv->lock);
 }
 EXPORT_SYMBOL(rdma_leave_multicast);
@@ -274,7 +274,6 @@ static int hns_roce_v1_post_send(struct ib_qp *ibqp,
 ps_opcode = HNS_ROCE_WQE_OPCODE_SEND;
 break;
 case IB_WR_LOCAL_INV:
-break;
 case IB_WR_ATOMIC_CMP_AND_SWP:
 case IB_WR_ATOMIC_FETCH_AND_ADD:
 case IB_WR_LSO:
@@ -3821,6 +3821,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
 }
 
 qp_init_attr->cap = qp_attr->cap;
+qp_init_attr->sq_sig_type = hr_qp->sq_signal_bits;
 
 out:
 mutex_unlock(&hr_qp->mutex);
@@ -95,9 +95,7 @@ struct rvt_dev_info *rvt_alloc_device(size_t size, int nports)
 if (!rdi)
 return rdi;
 
-rdi->ports = kcalloc(nports,
-sizeof(struct rvt_ibport **),
-GFP_KERNEL);
+rdi->ports = kcalloc(nports, sizeof(*rdi->ports), GFP_KERNEL);
 if (!rdi->ports)
 ib_dealloc_device(&rdi->ibdev);
 
@@ -257,8 +257,8 @@ static int ep93xx_keypad_probe(struct platform_device *pdev)
 }
 
 keypad->irq = platform_get_irq(pdev, 0);
-if (!keypad->irq) {
-err = -ENXIO;
+if (keypad->irq < 0) {
+err = keypad->irq;
 goto failed_free;
 }
 
@@ -253,10 +253,8 @@ static int omap4_keypad_probe(struct platform_device *pdev)
 }
 
 irq = platform_get_irq(pdev, 0);
-if (!irq) {
-dev_err(&pdev->dev, "no keyboard irq assigned\n");
-return -EINVAL;
-}
+if (irq < 0)
+return irq;
 
 keypad_data = kzalloc(sizeof(struct omap4_keypad), GFP_KERNEL);
 if (!keypad_data) {
@@ -63,7 +63,7 @@ struct twl4030_keypad {
 bool autorepeat;
 unsigned int n_rows;
 unsigned int n_cols;
-unsigned int irq;
+int irq;
 
 struct device *dbg_dev;
 struct input_dev *input;
@@ -389,10 +389,8 @@ static int twl4030_kp_probe(struct platform_device *pdev)
 }
 
 kp->irq = platform_get_irq(pdev, 0);
-if (!kp->irq) {
-dev_err(&pdev->dev, "no keyboard irq assigned\n");
-return -EINVAL;
-}
+if (kp->irq < 0)
+return kp->irq;
 
 error = matrix_keypad_build_keymap(keymap_data, NULL,
 TWL4030_MAX_ROWS,
@@ -210,7 +210,6 @@ static int sun4i_ps2_probe(struct platform_device *pdev)
 struct sun4i_ps2data *drvdata;
 struct serio *serio;
 struct device *dev = &pdev->dev;
-unsigned int irq;
 int error;
 
 drvdata = kzalloc(sizeof(struct sun4i_ps2data), GFP_KERNEL);
@@ -263,14 +262,12 @@ static int sun4i_ps2_probe(struct platform_device *pdev)
 writel(0, drvdata->reg_base + PS2_REG_GCTL);
 
 /* Get IRQ for the device */
-irq = platform_get_irq(pdev, 0);
-if (!irq) {
-dev_err(dev, "no IRQ found\n");
-error = -ENXIO;
+drvdata->irq = platform_get_irq(pdev, 0);
+if (drvdata->irq < 0) {
+error = drvdata->irq;
 goto err_disable_clk;
 }
 
-drvdata->irq = irq;
 drvdata->serio = serio;
 drvdata->dev = dev;
 
@@ -538,7 +538,9 @@ static int __maybe_unused imx6ul_tsc_resume(struct device *dev)
 
 mutex_lock(&input_dev->mutex);
 
-if (input_dev->users) {
+if (!input_dev->users)
+goto out;
+
 retval = clk_prepare_enable(tsc->adc_clk);
 if (retval)
 goto out;
@@ -550,8 +552,11 @@ static int __maybe_unused imx6ul_tsc_resume(struct device *dev)
 }
 
 retval = imx6ul_tsc_init(tsc);
+if (retval) {
+clk_disable_unprepare(tsc->tsc_clk);
+clk_disable_unprepare(tsc->adc_clk);
+goto out;
 }
 
 out:
 mutex_unlock(&input_dev->mutex);
 return retval;
@@ -479,7 +479,7 @@ static ssize_t stmfts_sysfs_hover_enable_write(struct device *dev,
 
 mutex_lock(&sdata->mutex);
 
-if (value & sdata->hover_enabled)
+if (value && sdata->hover_enabled)
 goto out;
 
 if (sdata->running)
@@ -103,9 +103,12 @@ static void msg_submit(struct mbox_chan *chan)
 err = __msg_submit(chan);
 } while (err == -EAGAIN);
 
-if (!err && (chan->txdone_method & TXDONE_BY_POLL))
 /* kick start the timer immediately to avoid delays */
+if (!err && (chan->txdone_method & TXDONE_BY_POLL)) {
+/* but only if not already active */
+if (!hrtimer_active(&chan->mbox->poll_hrt))
 hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
+}
 }
 
 static void tx_tick(struct mbox_chan *chan, int r)
@@ -143,11 +146,10 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
 struct mbox_chan *chan = &mbox->chans[i];
 
 if (chan->active_req && chan->cl) {
+resched = true;
 txdone = chan->mbox->ops->last_tx_done(chan);
 if (txdone)
 tx_tick(chan, 0);
-else
-resched = true;
 }
 }
 
@@ -271,8 +271,10 @@ static int node_probe(struct fw_unit *unit, const struct ieee1394_device_id *id)
 
 name_len = fw_csr_string(unit->directory, CSR_MODEL,
 name, sizeof(name));
-if (name_len < 0)
-return name_len;
+if (name_len < 0) {
+err = name_len;
+goto fail_free;
+}
 for (i = ARRAY_SIZE(model_names); --i; )
 if (strlen(model_names[i]) <= name_len &&
 strncmp(name, model_names[i], name_len) == 0)
@@ -4055,11 +4055,13 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
 btv->id = dev->device;
 if (pci_enable_device(dev)) {
 pr_warn("%d: Can't enable device\n", btv->c.nr);
-return -EIO;
+result = -EIO;
+goto free_mem;
 }
 if (pci_set_dma_mask(dev, DMA_BIT_MASK(32))) {
 pr_warn("%d: No suitable DMA available\n", btv->c.nr);
-return -EIO;
+result = -EIO;
+goto free_mem;
 }
 if (!request_mem_region(pci_resource_start(dev,0),
 pci_resource_len(dev,0),
@@ -4067,7 +4069,8 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
 pr_warn("%d: can't request iomem (0x%llx)\n",
 btv->c.nr,
 (unsigned long long)pci_resource_start(dev, 0));
-return -EBUSY;
+result = -EBUSY;
+goto free_mem;
 }
 pci_set_master(dev);
 pci_set_command(dev);
@@ -4253,6 +4256,10 @@ static int bttv_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
 release_mem_region(pci_resource_start(btv->c.pci,0),
 pci_resource_len(btv->c.pci,0));
 pci_disable_device(btv->c.pci);
 
+free_mem:
+bttvs[btv->c.nr] = NULL;
+kfree(btv);
 return result;
 }
 
@@ -693,7 +693,8 @@ int saa_dsp_writel(struct saa7134_dev *dev, int reg, u32 value)
 {
 int err;
 
-audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n", reg << 2, value);
+audio_dbg(2, "dsp write reg 0x%x = 0x%06x\n",
+(reg << 2) & 0xffffffff, value);
 err = saa_dsp_wait_bit(dev,SAA7135_DSP_RWSTATE_WRR);
 if (err < 0)
 return err;
@@ -311,8 +311,10 @@ static int fimc_isp_subdev_s_power(struct v4l2_subdev *sd, int on)
 
 if (on) {
 ret = pm_runtime_get_sync(&is->pdev->dev);
-if (ret < 0)
+if (ret < 0) {
+pm_runtime_put(&is->pdev->dev);
 return ret;
+}
 set_bit(IS_ST_PWR_ON, &is->state);
 
 ret = fimc_is_start_firmware(is);
@@ -480,7 +480,7 @@ static int fimc_lite_open(struct file *file)
 set_bit(ST_FLITE_IN_USE, &fimc->state);
 ret = pm_runtime_get_sync(&fimc->pdev->dev);
 if (ret < 0)
-goto unlock;
+goto err_pm;
 
 ret = v4l2_fh_open(file);
 if (ret < 0)
@@ -481,8 +481,10 @@ static int fimc_md_register_sensor_entities(struct fimc_md *fmd)
 return -ENXIO;
 
 ret = pm_runtime_get_sync(fmd->pmf);
-if (ret < 0)
+if (ret < 0) {
+pm_runtime_put(fmd->pmf);
 return ret;
+}
 
 fmd->num_sensors = 0;
 
@@ -513,9 +513,11 @@ static int s5pcsis_s_stream(struct v4l2_subdev *sd, int enable)
 if (enable) {
 s5pcsis_clear_counters(state);
 ret = pm_runtime_get_sync(&state->pdev->dev);
-if (ret && ret != 1)
+if (ret && ret != 1) {
+pm_runtime_put_noidle(&state->pdev->dev);
 return ret;
 }
+}
 
 mutex_lock(&state->lock);
 if (enable) {
@@ -321,8 +321,10 @@ static int venus_probe(struct platform_device *pdev)
 goto err_dev_unregister;
 
 ret = pm_runtime_put_sync(dev);
-if (ret)
+if (ret) {
+pm_runtime_get_noresume(dev);
 goto err_dev_unregister;
+}
 
 return 0;
 
@@ -333,6 +335,7 @@ static int venus_probe(struct platform_device *pdev)
 err_venus_shutdown:
 venus_shutdown(dev);
 err_runtime_disable:
+pm_runtime_put_noidle(dev);
 pm_runtime_set_suspended(dev);
 pm_runtime_disable(dev);
 hfi_destroy(core);
@@ -476,7 +476,7 @@ static int s3c_camif_probe(struct platform_device *pdev)
 
 ret = camif_media_dev_init(camif);
 if (ret < 0)
-goto err_alloc;
+goto err_pm;
 
 ret = camif_register_sensor(camif);
 if (ret < 0)
@@ -510,10 +510,9 @@ static int s3c_camif_probe(struct platform_device *pdev)
 media_device_unregister(&camif->media_dev);
 media_device_cleanup(&camif->media_dev);
 camif_unregister_media_entities(camif);
-err_alloc:
+err_pm:
 pm_runtime_put(dev);
 pm_runtime_disable(dev);
-err_pm:
 camif_clk_put(camif);
 err_clk:
 s3c_camif_unregister_subdev(camif);
@ -1371,7 +1371,7 @@ static int bdisp_probe(struct platform_device *pdev)
|
||||||
ret = pm_runtime_get_sync(dev);
|
ret = pm_runtime_get_sync(dev);
|
||||||
if (ret < 0) {
|
if (ret < 0) {
|
||||||
dev_err(dev, "failed to set PM\n");
|
dev_err(dev, "failed to set PM\n");
|
||||||
goto err_dbg;
|
goto err_pm;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* Filters */
|
/* Filters */
|
||||||
|
@ -1399,7 +1399,6 @@ static int bdisp_probe(struct platform_device *pdev)
|
||||||
bdisp_hw_free_filters(bdisp->dev);
|
bdisp_hw_free_filters(bdisp->dev);
|
||||||
err_pm:
|
err_pm:
|
||||||
pm_runtime_put(dev);
|
pm_runtime_put(dev);
|
||||||
err_dbg:
|
|
||||||
bdisp_debugfs_remove(bdisp);
|
bdisp_debugfs_remove(bdisp);
|
||||||
err_v4l2:
|
err_v4l2:
|
||||||
v4l2_device_unregister(&bdisp->v4l2_dev);
|
v4l2_device_unregister(&bdisp->v4l2_dev);
|
||||||
|
|
|
@@ -954,9 +954,11 @@ static void delta_run_work(struct work_struct *work)
     /* enable the hardware */
     if (!dec->pm) {
         ret = delta_get_sync(ctx);
-        if (ret)
+        if (ret) {
+            delta_put_autosuspend(ctx);
             goto err;
+        }
     }
 
     /* decode this access unit */
     ret = call_dec_op(dec, decode, ctx, au);

@@ -272,6 +272,7 @@ static unsigned long int hva_hw_get_ip_version(struct hva_dev *hva)
 
     if (pm_runtime_get_sync(dev) < 0) {
         dev_err(dev, "%s failed to get pm_runtime\n", HVA_PREFIX);
+        pm_runtime_put_noidle(dev);
         mutex_unlock(&hva->protect_mutex);
         return -EFAULT;
     }

@@ -392,7 +393,7 @@ int hva_hw_probe(struct platform_device *pdev, struct hva_dev *hva)
     ret = pm_runtime_get_sync(dev);
     if (ret < 0) {
         dev_err(dev, "%s failed to set PM\n", HVA_PREFIX);
-        goto err_clk;
+        goto err_pm;
     }
 
     /* check IP hardware version */

@@ -557,6 +558,7 @@ void hva_hw_dump_regs(struct hva_dev *hva, struct seq_file *s)
 
     if (pm_runtime_get_sync(dev) < 0) {
         seq_puts(s, "Cannot wake up IP\n");
+        pm_runtime_put_noidle(dev);
         mutex_unlock(&hva->protect_mutex);
         return;
     }
@@ -562,7 +562,12 @@ int vsp1_device_get(struct vsp1_device *vsp1)
     int ret;
 
     ret = pm_runtime_get_sync(vsp1->dev);
-    return ret < 0 ? ret : 0;
+    if (ret < 0) {
+        pm_runtime_put_noidle(vsp1->dev);
+        return ret;
+    }
+
+    return 0;
 }
 
 /*

@@ -845,12 +850,12 @@ static int vsp1_probe(struct platform_device *pdev)
     /* Configure device parameters based on the version register. */
     pm_runtime_enable(&pdev->dev);
 
-    ret = pm_runtime_get_sync(&pdev->dev);
+    ret = vsp1_device_get(vsp1);
     if (ret < 0)
         goto done;
 
     vsp1->version = vsp1_read(vsp1, VI6_IP_VERSION);
-    pm_runtime_put_sync(&pdev->dev);
+    vsp1_device_put(vsp1);
 
     for (i = 0; i < ARRAY_SIZE(vsp1_device_infos); ++i) {
         if ((vsp1->version & VI6_IP_VERSION_MODEL_MASK) ==
@@ -845,6 +845,10 @@ static int ati_remote_probe(struct usb_interface *interface,
         err("%s: endpoint_in message size==0? \n", __func__);
         return -ENODEV;
     }
+    if (!usb_endpoint_is_int_out(endpoint_out)) {
+        err("%s: Unexpected endpoint_out\n", __func__);
+        return -ENODEV;
+    }
 
     ati_remote = kzalloc(sizeof (struct ati_remote), GFP_KERNEL);
     rc_dev = rc_allocate_device(RC_DRIVER_SCANCODE);
@@ -252,11 +252,41 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream,
     if (ret < 0)
         goto done;
 
+    /* After the probe, update fmt with the values returned from
+     * negotiation with the device.
+     */
+    for (i = 0; i < stream->nformats; ++i) {
+        if (probe->bFormatIndex == stream->format[i].index) {
+            format = &stream->format[i];
+            break;
+        }
+    }
+
+    if (i == stream->nformats) {
+        uvc_trace(UVC_TRACE_FORMAT, "Unknown bFormatIndex %u\n",
+                  probe->bFormatIndex);
+        return -EINVAL;
+    }
+
+    for (i = 0; i < format->nframes; ++i) {
+        if (probe->bFrameIndex == format->frame[i].bFrameIndex) {
+            frame = &format->frame[i];
+            break;
+        }
+    }
+
+    if (i == format->nframes) {
+        uvc_trace(UVC_TRACE_FORMAT, "Unknown bFrameIndex %u\n",
+                  probe->bFrameIndex);
+        return -EINVAL;
+    }
+
     fmt->fmt.pix.width = frame->wWidth;
     fmt->fmt.pix.height = frame->wHeight;
     fmt->fmt.pix.field = V4L2_FIELD_NONE;
     fmt->fmt.pix.bytesperline = uvc_v4l2_get_bytesperline(format, frame);
     fmt->fmt.pix.sizeimage = probe->dwMaxVideoFrameSize;
+    fmt->fmt.pix.pixelformat = format->fcc;
     fmt->fmt.pix.colorspace = format->colorspace;
     fmt->fmt.pix.priv = 0;
 
@@ -215,10 +215,8 @@ static int ccf_probe(struct platform_device *pdev)
     dev_set_drvdata(&pdev->dev, ccf);
 
     irq = platform_get_irq(pdev, 0);
-    if (!irq) {
-        dev_err(&pdev->dev, "%s: no irq\n", __func__);
-        return -ENXIO;
-    }
+    if (irq < 0)
+        return irq;
 
     ret = devm_request_irq(&pdev->dev, irq, ccf_irq, 0, pdev->name, ccf);
     if (ret) {

@@ -951,7 +951,7 @@ static int gpmc_cs_remap(int cs, u32 base)
     int ret;
     u32 old_base, size;
 
-    if (cs > gpmc_cs_num) {
+    if (cs >= gpmc_cs_num) {
         pr_err("%s: requested chip-select is disabled\n", __func__);
         return -ENODEV;
     }

@@ -986,7 +986,7 @@ int gpmc_cs_request(int cs, unsigned long size, unsigned long *base)
     struct resource *res = &gpmc->mem;
     int r = -1;
 
-    if (cs > gpmc_cs_num) {
+    if (cs >= gpmc_cs_num) {
         pr_err("%s: requested chip-select is disabled\n", __func__);
         return -ENODEV;
     }

@@ -2278,6 +2278,10 @@ static void gpmc_probe_dt_children(struct platform_device *pdev)
         }
     }
 }
 #else
+void gpmc_read_settings_dt(struct device_node *np, struct gpmc_settings *p)
+{
+    memset(p, 0, sizeof(*p));
+}
 static int gpmc_probe_dt(struct platform_device *pdev)
 {
     return 0;
@@ -1524,12 +1524,14 @@ static int rtsx_pci_probe(struct pci_dev *pcidev,
     ret = mfd_add_devices(&pcidev->dev, pcr->id, rtsx_pcr_cells,
             ARRAY_SIZE(rtsx_pcr_cells), NULL, 0, NULL);
     if (ret < 0)
-        goto disable_irq;
+        goto free_slots;
 
     schedule_delayed_work(&pcr->idle_work, msecs_to_jiffies(200));
 
     return 0;
 
+free_slots:
+    kfree(pcr->slots);
 disable_irq:
     free_irq(pcr->irq, (void *)pcr);
 disable_msi:

@@ -362,7 +362,7 @@ static int at25_probe(struct spi_device *spi)
     at25->nvmem_config.reg_read = at25_ee_read;
     at25->nvmem_config.reg_write = at25_ee_write;
     at25->nvmem_config.priv = at25;
-    at25->nvmem_config.stride = 4;
+    at25->nvmem_config.stride = 1;
     at25->nvmem_config.word_size = 1;
     at25->nvmem_config.size = chip.byte_len;
 
@@ -301,7 +301,7 @@ static struct virtqueue *vop_find_vq(struct virtio_device *dev,
     /* First assign the vring's allocated in host memory */
     vqconfig = _vop_vq_config(vdev->desc) + index;
     memcpy_fromio(&config, vqconfig, sizeof(config));
-    _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
+    _vr_size = round_up(vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN), 4);
     vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
     va = vpdev->hw_ops->ioremap(vpdev, le64_to_cpu(config.address),
             vr_size);

@@ -308,7 +308,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev,
 
         num = le16_to_cpu(vqconfig[i].num);
         mutex_init(&vvr->vr_mutex);
-        vr_size = PAGE_ALIGN(vring_size(num, MIC_VIRTIO_RING_ALIGN) +
+        vr_size = PAGE_ALIGN(round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4) +
             sizeof(struct _mic_vring_info));
         vr->va = (void *)
             __get_free_pages(GFP_KERNEL | __GFP_ZERO,

@@ -320,7 +320,7 @@ static int vop_virtio_add_device(struct vop_vdev *vdev,
             goto err;
         }
         vr->len = vr_size;
-        vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
+        vr->info = vr->va + round_up(vring_size(num, MIC_VIRTIO_RING_ALIGN), 4);
         vr->info->magic = cpu_to_le32(MIC_MAGIC + vdev->virtio_id + i);
         vr_addr = dma_map_single(&vpdev->dev, vr->va, vr_size,
                      DMA_BIDIRECTIONAL);

@@ -611,6 +611,7 @@ static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf,
     size_t partlen;
     bool dma = VOP_USE_DMA;
     int err = 0;
+    size_t offset = 0;
 
     if (daddr & (dma_alignment - 1)) {
         vdev->tx_dst_unaligned += len;

@@ -659,13 +660,20 @@ static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf,
      * We are copying to IO below and should ideally use something
      * like copy_from_user_toio(..) if it existed.
      */
-    if (copy_from_user((void __force *)dbuf, ubuf, len)) {
+    while (len) {
+        partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE);
+
+        if (copy_from_user(vvr->buf, ubuf + offset, partlen)) {
             err = -EFAULT;
             dev_err(vop_dev(vdev), "%s %d err %d\n",
                 __func__, __LINE__, err);
             goto err;
         }
-    vdev->out_bytes += len;
+        memcpy_toio(dbuf + offset, vvr->buf, partlen);
+        offset += partlen;
+        vdev->out_bytes += partlen;
+        len -= partlen;
+    }
     err = 0;
 err:
     vpdev->hw_ops->iounmap(vpdev, dbuf);
@@ -30,6 +30,9 @@ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
     unsigned i, nr_strings;
     char **buffer, *string;
 
+    if (size < 2)
+        return 0;
+
     /* Find all null-terminated (including zero length) strings in
        the TPLLV1_INFO field. Trailing garbage is ignored. */
     buf += 2;
@@ -1091,18 +1091,23 @@ static int flexcan_chip_start(struct net_device *dev)
     return err;
 }
 
-/* flexcan_chip_stop
+/* __flexcan_chip_stop
  *
- * this functions is entered with clocks enabled
+ * this function is entered with clocks enabled
  */
-static void flexcan_chip_stop(struct net_device *dev)
+static int __flexcan_chip_stop(struct net_device *dev, bool disable_on_error)
 {
     struct flexcan_priv *priv = netdev_priv(dev);
     struct flexcan_regs __iomem *regs = priv->regs;
+    int err;
 
     /* freeze + disable module */
-    flexcan_chip_freeze(priv);
-    flexcan_chip_disable(priv);
+    err = flexcan_chip_freeze(priv);
+    if (err && !disable_on_error)
+        return err;
+    err = flexcan_chip_disable(priv);
+    if (err && !disable_on_error)
+        goto out_chip_unfreeze;
 
     /* Disable all interrupts */
     priv->write(0, &regs->imask2);

@@ -1112,6 +1117,23 @@ static void flexcan_chip_stop(struct net_device *dev)
 
     flexcan_transceiver_disable(priv);
     priv->can.state = CAN_STATE_STOPPED;
+
+    return 0;
+
+out_chip_unfreeze:
+    flexcan_chip_unfreeze(priv);
+
+    return err;
+}
+
+static inline int flexcan_chip_stop_disable_on_error(struct net_device *dev)
+{
+    return __flexcan_chip_stop(dev, true);
+}
+
+static inline int flexcan_chip_stop(struct net_device *dev)
+{
+    return __flexcan_chip_stop(dev, false);
 }
 
 static int flexcan_open(struct net_device *dev)

@@ -1165,7 +1187,7 @@ static int flexcan_close(struct net_device *dev)
 
     netif_stop_queue(dev);
     can_rx_offload_disable(&priv->offload);
-    flexcan_chip_stop(dev);
+    flexcan_chip_stop_disable_on_error(dev);
 
     free_irq(dev->irq, dev);
     clk_disable_unprepare(priv->clk_per);
@@ -1113,7 +1113,7 @@ static int korina_probe(struct platform_device *pdev)
     return rc;
 
 probe_err_register:
-    kfree(KSEG0ADDR(lp->td_ring));
+    kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring));
 probe_err_td_ring:
     iounmap(lp->tx_dma_regs);
 probe_err_dma_tx:

@@ -1133,7 +1133,7 @@ static int korina_remove(struct platform_device *pdev)
     iounmap(lp->eth_regs);
     iounmap(lp->rx_dma_regs);
     iounmap(lp->tx_dma_regs);
-    kfree(KSEG0ADDR(lp->td_ring));
+    kfree((struct dma_desc *)KSEG0ADDR(lp->td_ring));
 
     unregister_netdev(bif->dev);
     free_netdev(bif->dev);
@@ -153,6 +153,14 @@ static int __ath10k_htt_rx_ring_fill_n(struct ath10k_htt *htt, int num)
     BUILD_BUG_ON(HTT_RX_RING_FILL_LEVEL >= HTT_RX_RING_SIZE / 2);
 
     idx = __le32_to_cpu(*htt->rx_ring.alloc_idx.vaddr);
+
+    if (idx < 0 || idx >= htt->rx_ring.size) {
+        ath10k_err(htt->ar, "rx ring index is not valid, firmware malfunctioning?\n");
+        idx &= htt->rx_ring.size_mask;
+        ret = -ENOMEM;
+        goto fail;
+    }
+
     while (num > 0) {
         skb = dev_alloc_skb(HTT_RX_BUF_SIZE + HTT_RX_DESC_ALIGN);
         if (!skb) {
@@ -449,10 +449,19 @@ static void hif_usb_stop(void *hif_handle)
     spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
 
     /* The pending URBs have to be canceled. */
+    spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     list_for_each_entry_safe(tx_buf, tx_buf_tmp,
                  &hif_dev->tx.tx_pending, list) {
+        usb_get_urb(tx_buf->urb);
+        spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
         usb_kill_urb(tx_buf->urb);
+        list_del(&tx_buf->list);
+        usb_free_urb(tx_buf->urb);
+        kfree(tx_buf->buf);
+        kfree(tx_buf);
+        spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     }
+    spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
 
     usb_kill_anchored_urbs(&hif_dev->mgmt_submitted);
 }

@@ -762,27 +771,37 @@ static void ath9k_hif_usb_dealloc_tx_urbs(struct hif_device_usb *hif_dev)
     struct tx_buf *tx_buf = NULL, *tx_buf_tmp = NULL;
     unsigned long flags;
 
+    spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     list_for_each_entry_safe(tx_buf, tx_buf_tmp,
                  &hif_dev->tx.tx_buf, list) {
+        usb_get_urb(tx_buf->urb);
+        spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
         usb_kill_urb(tx_buf->urb);
         list_del(&tx_buf->list);
         usb_free_urb(tx_buf->urb);
         kfree(tx_buf->buf);
         kfree(tx_buf);
+        spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     }
+    spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
 
     spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     hif_dev->tx.flags |= HIF_USB_TX_FLUSH;
     spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
 
+    spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     list_for_each_entry_safe(tx_buf, tx_buf_tmp,
                  &hif_dev->tx.tx_pending, list) {
+        usb_get_urb(tx_buf->urb);
+        spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
         usb_kill_urb(tx_buf->urb);
         list_del(&tx_buf->list);
         usb_free_urb(tx_buf->urb);
         kfree(tx_buf->buf);
         kfree(tx_buf);
+        spin_lock_irqsave(&hif_dev->tx.tx_lock, flags);
     }
+    spin_unlock_irqrestore(&hif_dev->tx.tx_lock, flags);
 
     usb_kill_anchored_urbs(&hif_dev->mgmt_submitted);
 }
@@ -1563,6 +1563,8 @@ int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr)
                   BRCMF_TX_IOCTL_MAX_MSG_SIZE,
                   msgbuf->ioctbuf,
                   msgbuf->ioctbuf_handle);
+        if (msgbuf->txflow_wq)
+            destroy_workqueue(msgbuf->txflow_wq);
         kfree(msgbuf);
     }
     return -ENOMEM;

@@ -5085,8 +5085,10 @@ bool wlc_phy_attach_lcnphy(struct brcms_phy *pi)
     pi->pi_fptr.radioloftget = wlc_lcnphy_get_radio_loft;
     pi->pi_fptr.detach = wlc_phy_detach_lcnphy;
 
-    if (!wlc_phy_txpwr_srom_read_lcnphy(pi))
+    if (!wlc_phy_txpwr_srom_read_lcnphy(pi)) {
+        kfree(pi->u.pi_lcnphy);
         return false;
+    }
 
     if (LCNREV_IS(pi->pubpi.phy_rev, 1)) {
         if (pi_lcn->lcnphy_tempsense_option == 3) {

@@ -1355,6 +1355,7 @@ static void mwifiex_usb_cleanup_tx_aggr(struct mwifiex_adapter *adapter)
                 skb_dequeue(&port->tx_aggr.aggr_list)))
                 mwifiex_write_data_complete(adapter, skb_tmp,
                                 0, -1);
+        if (port->tx_aggr.timer_cnxt.hold_timer.function)
             del_timer_sync(&port->tx_aggr.timer_cnxt.hold_timer);
         port->tx_aggr.timer_cnxt.is_hold_timer_set = false;
         port->tx_aggr.timer_cnxt.hold_tmo_msecs = 0;
@@ -5453,7 +5453,6 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw)
     ret = usb_submit_urb(urb, GFP_KERNEL);
     if (ret) {
         usb_unanchor_urb(urb);
-        usb_free_urb(urb);
         goto error;
     }
 

@@ -5462,6 +5461,7 @@ static int rtl8xxxu_submit_int_urb(struct ieee80211_hw *hw)
     rtl8xxxu_write32(priv, REG_USB_HIMR, val32);
 
 error:
+    usb_free_urb(urb);
     return ret;
 }
 

@@ -5787,6 +5787,7 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw)
     struct rtl8xxxu_priv *priv = hw->priv;
     struct rtl8xxxu_rx_urb *rx_urb;
     struct rtl8xxxu_tx_urb *tx_urb;
+    struct sk_buff *skb;
     unsigned long flags;
     int ret, i;
 

@@ -5837,6 +5838,13 @@ static int rtl8xxxu_start(struct ieee80211_hw *hw)
         rx_urb->hw = hw;
 
         ret = rtl8xxxu_submit_rx_urb(priv, rx_urb);
+        if (ret) {
+            if (ret != -ENOMEM) {
+                skb = (struct sk_buff *)rx_urb->urb.context;
+                dev_kfree_skb(skb);
+            }
+            rtl8xxxu_queue_rx_urb(priv, rx_urb);
+        }
     }
 exit:
     /*
@@ -1036,6 +1036,7 @@ static int amd_ntb_init_pci(struct amd_ntb_dev *ndev,
 
 err_dma_mask:
     pci_clear_master(pdev);
+    pci_release_regions(pdev);
 err_pci_regions:
     pci_disable_device(pdev);
 err_pci_enable:

@@ -787,6 +787,7 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
      * in case a host died before it enabled the controller. Hence, simply
      * reset the keep alive timer when the controller is enabled.
      */
+    if (ctrl->kato)
         mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
 }
 
@@ -209,15 +209,20 @@ static int iproc_msi_irq_set_affinity(struct irq_data *data,
     struct iproc_msi *msi = irq_data_get_irq_chip_data(data);
     int target_cpu = cpumask_first(mask);
     int curr_cpu;
+    int ret;
 
     curr_cpu = hwirq_to_cpu(msi, data->hwirq);
     if (curr_cpu == target_cpu)
-        return IRQ_SET_MASK_OK_DONE;
+        ret = IRQ_SET_MASK_OK_DONE;
+    else {
         /* steer MSI to the target CPU */
         data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu;
+        ret = IRQ_SET_MASK_OK;
+    }
 
-    return IRQ_SET_MASK_OK;
+    irq_data_update_effective_affinity(data, cpumask_of(target_cpu));
 
+    return ret;
 }
 
 static void iproc_msi_irq_compose_msi_msg(struct irq_data *data,
@@ -280,6 +280,8 @@ static int img_pwm_probe(struct platform_device *pdev)
         return PTR_ERR(pwm->pwm_clk);
     }
 
+    platform_set_drvdata(pdev, pwm);
+
     pm_runtime_set_autosuspend_delay(&pdev->dev, IMG_PWM_PM_TIMEOUT);
     pm_runtime_use_autosuspend(&pdev->dev);
     pm_runtime_enable(&pdev->dev);

@@ -316,7 +318,6 @@ static int img_pwm_probe(struct platform_device *pdev)
         goto err_suspend;
     }
 
-    platform_set_drvdata(pdev, pwm);
     return 0;
 
 err_suspend:
@@ -875,15 +875,16 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
                 rmcd_error("get_user_pages_unlocked err=%ld",
                        pinned);
                 nr_pages = 0;
-            } else
+            } else {
                 rmcd_error("pinned %ld out of %ld pages",
                        pinned, nr_pages);
-            ret = -EFAULT;
                 /*
                  * Set nr_pages up to mean "how many pages to unpin, in
                  * the error handler:
                  */
                 nr_pages = pinned;
+            }
+            ret = -EFAULT;
             goto err_pg;
         }
 

@@ -1684,6 +1685,7 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
     struct rio_dev *rdev;
     struct rio_switch *rswitch = NULL;
     struct rio_mport *mport;
+    struct device *dev;
     size_t size;
     u32 rval;
     u32 swpinfo = 0;

@@ -1698,8 +1700,10 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
     rmcd_debug(RDEV, "name:%s ct:0x%x did:0x%x hc:0x%x", dev_info.name,
            dev_info.comptag, dev_info.destid, dev_info.hopcount);
 
-    if (bus_find_device_by_name(&rio_bus_type, NULL, dev_info.name)) {
+    dev = bus_find_device_by_name(&rio_bus_type, NULL, dev_info.name);
+    if (dev) {
         rmcd_debug(RDEV, "device %s already exists", dev_info.name);
+        put_device(dev);
         return -EEXIST;
     }
 
@@ -1338,7 +1338,7 @@ static int qcom_smd_parse_edge(struct device *dev,
     ret = of_property_read_u32(node, key, &edge->edge_id);
     if (ret) {
         dev_err(dev, "edge missing %s property\n", key);
-        return -EINVAL;
+        goto put_node;
     }
 
     edge->remote_pid = QCOM_SMEM_HOST_ANY;

@@ -1349,32 +1349,37 @@ static int qcom_smd_parse_edge(struct device *dev,
     edge->mbox_client.knows_txdone = true;
     edge->mbox_chan = mbox_request_channel(&edge->mbox_client, 0);
     if (IS_ERR(edge->mbox_chan)) {
-        if (PTR_ERR(edge->mbox_chan) != -ENODEV)
-            return PTR_ERR(edge->mbox_chan);
+        if (PTR_ERR(edge->mbox_chan) != -ENODEV) {
+            ret = PTR_ERR(edge->mbox_chan);
+            goto put_node;
+        }
 
         edge->mbox_chan = NULL;
 
         syscon_np = of_parse_phandle(node, "qcom,ipc", 0);
         if (!syscon_np) {
             dev_err(dev, "no qcom,ipc node\n");
-            return -ENODEV;
+            ret = -ENODEV;
+            goto put_node;
         }
 
         edge->ipc_regmap = syscon_node_to_regmap(syscon_np);
-        if (IS_ERR(edge->ipc_regmap))
-            return PTR_ERR(edge->ipc_regmap);
+        if (IS_ERR(edge->ipc_regmap)) {
+            ret = PTR_ERR(edge->ipc_regmap);
+            goto put_node;
+        }
 
         key = "qcom,ipc";
         ret = of_property_read_u32_index(node, key, 1, &edge->ipc_offset);
         if (ret < 0) {
             dev_err(dev, "no offset in %s\n", key);
-            return -EINVAL;
+            goto put_node;
         }
 
         ret = of_property_read_u32_index(node, key, 2, &edge->ipc_bit);
         if (ret < 0) {
             dev_err(dev, "no bit in %s\n", key);
-            return -EINVAL;
+            goto put_node;
         }
     }
 

@@ -1385,7 +1390,8 @@ static int qcom_smd_parse_edge(struct device *dev,
     irq = irq_of_parse_and_map(node, 0);
     if (irq < 0) {
         dev_err(dev, "required smd interrupt missing\n");
-        return -EINVAL;
+        ret = irq;
+        goto put_node;
     }
 
     ret = devm_request_irq(dev, irq,

@@ -1393,12 +1399,18 @@ static int qcom_smd_parse_edge(struct device *dev,
                    node->name, edge);
     if (ret) {
         dev_err(dev, "failed to request smd irq\n");
-        return ret;
+        goto put_node;
     }
 
     edge->irq = irq;
 
     return 0;
+
+put_node:
+    of_node_put(node);
+    edge->of_node = NULL;
+
+    return ret;
 }
 
 /*
@@ -4795,6 +4795,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
     if (IS_ERR(vhost->work_thread)) {
         dev_err(dev, "Couldn't create kernel thread: %ld\n",
             PTR_ERR(vhost->work_thread));
+        rc = PTR_ERR(vhost->work_thread);
         goto free_host_mem;
     }
 

@@ -2439,6 +2439,7 @@ static int mvumi_io_attach(struct mvumi_hba *mhba)
     if (IS_ERR(mhba->dm_thread)) {
         dev_err(&mhba->pdev->dev,
             "failed to create device scan thread\n");
+        ret = PTR_ERR(mhba->dm_thread);
         mutex_unlock(&mhba->sas_discovery_mutex);
         goto fail_create_thread;
     }
@@ -62,6 +62,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
           "Freeing tid=0x%x for cid=0x%x\n",
           cmd->task_id, qedi_conn->iscsi_conn_id);
 
+    spin_lock(&qedi_conn->list_lock);
     if (likely(cmd->io_cmd_in_list)) {
         cmd->io_cmd_in_list = false;
         list_del_init(&cmd->io_cmd);

@@ -72,6 +73,7 @@ static void qedi_process_logout_resp(struct qedi_ctx *qedi,
               cmd->task_id, qedi_conn->iscsi_conn_id,
               &cmd->io_cmd);
     }
+    spin_unlock(&qedi_conn->list_lock);
 
     cmd->state = RESPONSE_RECEIVED;
     qedi_clear_task_idx(qedi, cmd->task_id);

@@ -125,6 +127,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
           "Freeing tid=0x%x for cid=0x%x\n",
           cmd->task_id, qedi_conn->iscsi_conn_id);
 
+    spin_lock(&qedi_conn->list_lock);
     if (likely(cmd->io_cmd_in_list)) {
         cmd->io_cmd_in_list = false;
         list_del_init(&cmd->io_cmd);

@@ -135,6 +138,7 @@ static void qedi_process_text_resp(struct qedi_ctx *qedi,
               cmd->task_id, qedi_conn->iscsi_conn_id,
               &cmd->io_cmd);
     }
+    spin_unlock(&qedi_conn->list_lock);
 
     cmd->state = RESPONSE_RECEIVED;
     qedi_clear_task_idx(qedi, cmd->task_id);

@@ -227,11 +231,13 @@ static void qedi_process_tmf_resp(struct qedi_ctx *qedi,
 
     tmf_hdr = (struct iscsi_tm *)qedi_cmd->task->hdr;
 
+    spin_lock(&qedi_conn->list_lock);
     if (likely(qedi_cmd->io_cmd_in_list)) {
         qedi_cmd->io_cmd_in_list = false;
         list_del_init(&qedi_cmd->io_cmd);
         qedi_conn->active_cmd_count--;
     }
+    spin_unlock(&qedi_conn->list_lock);
 
     if (((tmf_hdr->flags & ISCSI_FLAG_TM_FUNC_MASK) ==
          ISCSI_TM_FUNC_LOGICAL_UNIT_RESET) ||

@@ -293,11 +299,13 @@ static void qedi_process_login_resp(struct qedi_ctx *qedi,
         ISCSI_LOGIN_RESPONSE_HDR_DATA_SEG_LEN_MASK;
     qedi_conn->gen_pdu.resp_wr_ptr = qedi_conn->gen_pdu.resp_buf + pld_len;
 
+    spin_lock(&qedi_conn->list_lock);
     if (likely(cmd->io_cmd_in_list)) {
         cmd->io_cmd_in_list = false;
         list_del_init(&cmd->io_cmd);
         qedi_conn->active_cmd_count--;
     }
+    spin_unlock(&qedi_conn->list_lock);
 
     memset(task_ctx, '\0', sizeof(*task_ctx));
 

@@ -829,8 +837,11 @@ static void qedi_process_cmd_cleanup_resp(struct qedi_ctx *qedi,
         qedi_clear_task_idx(qedi_conn->qedi, rtid);
 
         spin_lock(&qedi_conn->list_lock);
+        if (likely(dbg_cmd->io_cmd_in_list)) {
+            dbg_cmd->io_cmd_in_list = false;
             list_del_init(&dbg_cmd->io_cmd);
             qedi_conn->active_cmd_count--;
+        }
         spin_unlock(&qedi_conn->list_lock);
         qedi_cmd->state = CLEANUP_RECV;
         wake_up_interruptible(&qedi_conn->wait_queue);

@@ -1249,6 +1260,7 @@ int qedi_cleanup_all_io(struct qedi_ctx *qedi, struct qedi_conn *qedi_conn,
         qedi_conn->cmd_cleanup_req++;
         qedi_iscsi_cleanup_task(ctask, true);
 
+        cmd->io_cmd_in_list = false;
         list_del_init(&cmd->io_cmd);
         qedi_conn->active_cmd_count--;
         QEDI_WARN(&qedi->dbg_ctx,

@@ -1462,8 +1474,11 @@ static void qedi_tmf_work(struct work_struct *work)
     spin_unlock_bh(&qedi_conn->tmf_work_lock);
 
     spin_lock(&qedi_conn->list_lock);
+    if (likely(cmd->io_cmd_in_list)) {
+        cmd->io_cmd_in_list = false;
         list_del_init(&cmd->io_cmd);
         qedi_conn->active_cmd_count--;
+    }
     spin_unlock(&qedi_conn->list_lock);
 
     clear_bit(QEDI_CONN_FW_CLEANUP, &qedi_conn->flags);

@@ -976,11 +976,13 @@ static void qedi_cleanup_active_cmd_list(struct qedi_conn *qedi_conn)
 {
     struct qedi_cmd *cmd, *cmd_tmp;
 
+    spin_lock(&qedi_conn->list_lock);
     list_for_each_entry_safe(cmd, cmd_tmp, &qedi_conn->active_cmd_list,
                  io_cmd) {
         list_del_init(&cmd->io_cmd);
         qedi_conn->active_cmd_count--;
     }
+    spin_unlock(&qedi_conn->list_lock);
 }
 
 static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
@@ -1588,9 +1588,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
      */
     }
     mask <<= offset;
-
-    pm_runtime_get_sync(host->hba->dev);
-    ufshcd_hold(host->hba, false);
     ufshcd_rmwl(host->hba, TEST_BUS_SEL,
             (u32)host->testbus.select_major << 19,
             REG_UFS_CFG1);

@@ -1603,8 +1600,6 @@ int ufs_qcom_testbus_config(struct ufs_qcom_host *host)
      * committed before returning.
      */
     mb();
-    ufshcd_release(host->hba);
-    pm_runtime_put_sync(host->hba->dev);
 
     return 0;
 }
@@ -117,7 +117,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel,
                            skb->len,
                            notify_packet_sent,
                            network);
-        if (ret == -1) {
+        if (ret < 0) {
             skb_pull(skb, 2);
             return 0;
         }

@@ -134,7 +134,7 @@ static int ipwireless_ppp_start_xmit(struct ppp_channel *ppp_channel,
                            notify_packet_sent,
                            network);
         kfree(buf);
-        if (ret == -1)
+        if (ret < 0)
             return 0;
     }
     kfree_skb(skb);

@@ -218,7 +218,7 @@ static int ipw_write(struct tty_struct *linux_tty,
     ret = ipwireless_send_packet(tty->hardware, IPW_CHANNEL_RAS,
                    buf, count,
                    ipw_write_packet_sent_callback, tty);
-    if (ret == -1) {
+    if (ret < 0) {
         mutex_unlock(&tty->ipw_tty_mutex);
         return 0;
     }

@@ -563,7 +563,7 @@ static void lpuart32_poll_put_char(struct uart_port *port, unsigned char c)
 
 static int lpuart32_poll_get_char(struct uart_port *port)
 {
-    if (!(lpuart32_read(port, UARTSTAT) & UARTSTAT_RDRF))
+    if (!(lpuart32_read(port, UARTWATER) >> UARTWATER_RXCNT_OFF))
         return NO_POLL_CHAR;
 
     return lpuart32_read(port, UARTDATA);
@@ -1275,9 +1275,21 @@ static int acm_probe(struct usb_interface *intf,
             }
         }
     } else {
+        int class = -1;
+
         data_intf_num = union_header->bSlaveInterface0;
         control_interface = usb_ifnum_to_if(usb_dev, union_header->bMasterInterface0);
         data_interface = usb_ifnum_to_if(usb_dev, data_intf_num);
+
+        if (control_interface)
+            class = control_interface->cur_altsetting->desc.bInterfaceClass;
+
+        if (class != USB_CLASS_COMM && class != USB_CLASS_CDC_DATA) {
+            dev_dbg(&intf->dev, "Broken union descriptor, assuming single interface\n");
+            combined_interfaces = 1;
+            control_interface = data_interface = intf;
+            goto look_for_collapsed_interface;
+        }
     }
 
     if (!control_interface || !data_interface) {

@@ -1932,6 +1944,17 @@ static const struct usb_device_id acm_ids[] = {
     .driver_info = IGNORE_DEVICE,
     },
 
+    /* Exclude ETAS ES58x */
+    { USB_DEVICE(0x108c, 0x0159), /* ES581.4 */
+    .driver_info = IGNORE_DEVICE,
+    },
+    { USB_DEVICE(0x108c, 0x0168), /* ES582.1 */
+    .driver_info = IGNORE_DEVICE,
+    },
+    { USB_DEVICE(0x108c, 0x0169), /* ES584.1 */
+    .driver_info = IGNORE_DEVICE,
+    },
+
     { USB_DEVICE(0x1bc7, 0x0021), /* Telit 3G ACM only composition */
     .driver_info = SEND_ZERO_PACKET,
     },
@@ -58,6 +58,9 @@ MODULE_DEVICE_TABLE (usb, wdm_ids);
 
 #define WDM_MAX			16
 
+/* we cannot wait forever at flush() */
+#define WDM_FLUSH_TIMEOUT	(30 * HZ)
+
 /* CDC-WMC r1.1 requires wMaxCommand to be "at least 256 decimal (0x100)" */
 #define WDM_DEFAULT_BUFSIZE	256
 

@@ -151,7 +154,7 @@ static void wdm_out_callback(struct urb *urb)
     kfree(desc->outbuf);
     desc->outbuf = NULL;
     clear_bit(WDM_IN_USE, &desc->flags);
-    wake_up(&desc->wait);
+    wake_up_all(&desc->wait);
 }
 
 static void wdm_in_callback(struct urb *urb)

@@ -393,6 +396,9 @@ static ssize_t wdm_write
     if (test_bit(WDM_RESETTING, &desc->flags))
         r = -EIO;
 
+    if (test_bit(WDM_DISCONNECTING, &desc->flags))
+        r = -ENODEV;
+
     if (r < 0) {
         rv = r;
         goto out_free_mem_pm;

@@ -424,6 +430,7 @@ static ssize_t wdm_write
     if (rv < 0) {
         desc->outbuf = NULL;
         clear_bit(WDM_IN_USE, &desc->flags);
+        wake_up_all(&desc->wait); /* for wdm_wait_for_response() */
         dev_err(&desc->intf->dev, "Tx URB error: %d\n", rv);
         rv = usb_translate_errors(rv);
         goto out_free_mem_pm;

@@ -583,28 +590,58 @@ static ssize_t wdm_read
     return rv;
 }
 
-static int wdm_flush(struct file *file, fl_owner_t id)
+static int wdm_wait_for_response(struct file *file, long timeout)
 {
     struct wdm_device *desc = file->private_data;
+    long rv; /* Use long here because (int) MAX_SCHEDULE_TIMEOUT < 0. */
 
-    wait_event(desc->wait,
     /*
-     * needs both flags. We cannot do with one
-     * because resetting it would cause a race
-     * with write() yet we need to signal
-     * a disconnect
+     * Needs both flags. We cannot do with one because resetting it would
+     * cause a race with write() yet we need to signal a disconnect.
      */
+    rv = wait_event_interruptible_timeout(desc->wait,
             !test_bit(WDM_IN_USE, &desc->flags) ||
-            test_bit(WDM_DISCONNECTING, &desc->flags));
+            test_bit(WDM_DISCONNECTING, &desc->flags),
+            timeout);
 
-    /* cannot dereference desc->intf if WDM_DISCONNECTING */
+    /*
+     * To report the correct error. This is best effort.
+     * We are inevitably racing with the hardware.
+     */
     if (test_bit(WDM_DISCONNECTING, &desc->flags))
         return -ENODEV;
-    if (desc->werr < 0)
-        dev_err(&desc->intf->dev, "Error in flush path: %d\n",
-            desc->werr);
+    if (!rv)
+        return -EIO;
+    if (rv < 0)
+        return -EINTR;
 
-    return usb_translate_errors(desc->werr);
+    spin_lock_irq(&desc->iuspin);
+    rv = desc->werr;
+    desc->werr = 0;
+    spin_unlock_irq(&desc->iuspin);
+
+    return usb_translate_errors(rv);
+
+}
+
+/*
+ * You need to send a signal when you react to malicious or defective hardware.
+ * Also, don't abort when fsync() returned -EINVAL, for older kernels which do
+ * not implement wdm_flush() will return -EINVAL.
+ */
+static int wdm_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+{
+    return wdm_wait_for_response(file, MAX_SCHEDULE_TIMEOUT);
+}
+
+/*
+ * Same with wdm_fsync(), except it uses finite timeout in order to react to
+ * malicious or defective hardware which ceased communication after close() was
+ * implicitly called due to process termination.
+ */
+static int wdm_flush(struct file *file, fl_owner_t id)
+{
+    return wdm_wait_for_response(file, WDM_FLUSH_TIMEOUT);
 }
 
 static __poll_t wdm_poll(struct file *file, struct poll_table_struct *wait)

@@ -729,6 +766,7 @@ static const struct file_operations wdm_fops = {
     .owner = THIS_MODULE,
     .read = wdm_read,
     .write = wdm_write,
+    .fsync = wdm_fsync,
     .open = wdm_open,
     .flush = wdm_flush,
     .release = wdm_release,
@ -773,11 +773,12 @@ void usb_block_urb(struct urb *urb)
|
||||||
EXPORT_SYMBOL_GPL(usb_block_urb);
|
EXPORT_SYMBOL_GPL(usb_block_urb);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* usb_kill_anchored_urbs - cancel transfer requests en masse
|
* usb_kill_anchored_urbs - kill all URBs associated with an anchor
|
||||||
* @anchor: anchor the requests are bound to
|
* @anchor: anchor the requests are bound to
|
||||||
*
|
*
|
||||||
* this allows all outstanding URBs to be killed starting
|
* This kills all outstanding URBs starting from the back of the queue,
|
||||||
* from the back of the queue
|
* with guarantee that no completer callbacks will take place from the
|
||||||
|
* anchor after this function returns.
|
||||||
*
|
*
|
||||||
* This routine should not be called by a driver after its disconnect
|
* This routine should not be called by a driver after its disconnect
|
||||||
* method has returned.
|
* method has returned.
|
||||||
|
@ -785,12 +786,14 @@ EXPORT_SYMBOL_GPL(usb_block_urb);
|
||||||
void usb_kill_anchored_urbs(struct usb_anchor *anchor)
|
void usb_kill_anchored_urbs(struct usb_anchor *anchor)
|
||||||
{
|
{
|
||||||
struct urb *victim;
|
struct urb *victim;
|
||||||
|
int surely_empty;
|
||||||
|
|
||||||
|
do {
|
||||||
spin_lock_irq(&anchor->lock);
|
spin_lock_irq(&anchor->lock);
|
||||||
while (!list_empty(&anchor->urb_list)) {
|
while (!list_empty(&anchor->urb_list)) {
|
||||||
victim = list_entry(anchor->urb_list.prev, struct urb,
|
victim = list_entry(anchor->urb_list.prev,
|
||||||
anchor_list);
|
struct urb, anchor_list);
|
||||||
/* we must make sure the URB isn't freed before we kill it*/
|
/* make sure the URB isn't freed before we kill it */
|
||||||
usb_get_urb(victim);
|
usb_get_urb(victim);
|
||||||
spin_unlock_irq(&anchor->lock);
|
spin_unlock_irq(&anchor->lock);
|
||||||
/* this will unanchor the URB */
|
/* this will unanchor the URB */
|
||||||
|
@ -798,7 +801,11 @@ void usb_kill_anchored_urbs(struct usb_anchor *anchor)
|
||||||
usb_put_urb(victim);
|
usb_put_urb(victim);
|
||||||
spin_lock_irq(&anchor->lock);
|
spin_lock_irq(&anchor->lock);
|
||||||
}
|
}
|
||||||
|
surely_empty = usb_anchor_check_wakeup(anchor);
|
||||||
|
|
||||||
spin_unlock_irq(&anchor->lock);
|
spin_unlock_irq(&anchor->lock);
|
||||||
|
cpu_relax();
|
||||||
|
} while (!surely_empty);
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
|
EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
|
||||||
|
|
||||||
|
@ -817,13 +824,15 @@ EXPORT_SYMBOL_GPL(usb_kill_anchored_urbs);
|
||||||
void usb_poison_anchored_urbs(struct usb_anchor *anchor)
|
void usb_poison_anchored_urbs(struct usb_anchor *anchor)
|
||||||
{
|
{
|
||||||
struct urb *victim;
|
struct urb *victim;
|
||||||
|
int surely_empty;
|
||||||
|
|
||||||
|
do {
|
||||||
spin_lock_irq(&anchor->lock);
|
spin_lock_irq(&anchor->lock);
|
||||||
anchor->poisoned = 1;
|
anchor->poisoned = 1;
|
||||||
while (!list_empty(&anchor->urb_list)) {
|
while (!list_empty(&anchor->urb_list)) {
|
||||||
victim = list_entry(anchor->urb_list.prev, struct urb,
|
victim = list_entry(anchor->urb_list.prev,
|
||||||
anchor_list);
|
struct urb, anchor_list);
|
||||||
/* we must make sure the URB isn't freed before we kill it*/
|
/* make sure the URB isn't freed before we kill it */
|
||||||
usb_get_urb(victim);
|
usb_get_urb(victim);
|
||||||
spin_unlock_irq(&anchor->lock);
|
spin_unlock_irq(&anchor->lock);
|
||||||
/* this will unanchor the URB */
|
/* this will unanchor the URB */
|
||||||
|
@ -831,7 +840,11 @@ void usb_poison_anchored_urbs(struct usb_anchor *anchor)
|
||||||
usb_put_urb(victim);
|
usb_put_urb(victim);
|
||||||
spin_lock_irq(&anchor->lock);
|
spin_lock_irq(&anchor->lock);
|
||||||
}
|
}
|
||||||
|
surely_empty = usb_anchor_check_wakeup(anchor);
|
||||||
|
|
||||||
spin_unlock_irq(&anchor->lock);
|
spin_unlock_irq(&anchor->lock);
|
||||||
|
cpu_relax();
|
||||||
|
} while (!surely_empty);
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL_GPL(usb_poison_anchored_urbs);
|
EXPORT_SYMBOL_GPL(usb_poison_anchored_urbs);
|
||||||
|
|
||||||
|
@@ -971,14 +984,20 @@ void usb_scuttle_anchored_urbs(struct usb_anchor *anchor)
 {
 	struct urb *victim;
 	unsigned long flags;
+	int surely_empty;
 
-	spin_lock_irqsave(&anchor->lock, flags);
-	while (!list_empty(&anchor->urb_list)) {
-		victim = list_entry(anchor->urb_list.prev, struct urb,
-				anchor_list);
-		__usb_unanchor_urb(victim, anchor);
-	}
-	spin_unlock_irqrestore(&anchor->lock, flags);
+	do {
+		spin_lock_irqsave(&anchor->lock, flags);
+		while (!list_empty(&anchor->urb_list)) {
+			victim = list_entry(anchor->urb_list.prev,
+					struct urb, anchor_list);
+			__usb_unanchor_urb(victim, anchor);
+		}
+		surely_empty = usb_anchor_check_wakeup(anchor);
+
+		spin_unlock_irqrestore(&anchor->lock, flags);
+		cpu_relax();
+	} while (!surely_empty);
 }
 
 EXPORT_SYMBOL_GPL(usb_scuttle_anchored_urbs);
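The three usb_kill_anchored_urbs()/usb_poison_anchored_urbs()/usb_scuttle_anchored_urbs() hunks above all apply the same fix: rather than walking the anchor list once, the function now loops until usb_anchor_check_wakeup() reports the anchor empty while the lock is still held, so URBs re-anchored while the lock was dropped are not missed. Below is a minimal user-space sketch of that shape, illustrative only and not kernel code; drain_all, struct item and head are invented names, and the example is single-threaded for brevity.

/* Sketch: re-check emptiness under the lock before declaring the list drained. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct item {
	struct item *next;
	int value;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *head;	/* protected by lock */

static void drain_all(void)
{
	struct item *victim;
	bool surely_empty;

	do {
		pthread_mutex_lock(&lock);
		while (head) {
			victim = head;
			head = victim->next;
			/* drop the lock for the slow part, like killing the URB */
			pthread_mutex_unlock(&lock);
			printf("handled %d\n", victim->value);
			free(victim);
			pthread_mutex_lock(&lock);
		}
		/* the key point: test emptiness while still holding the lock */
		surely_empty = (head == NULL);
		pthread_mutex_unlock(&lock);
	} while (!surely_empty);
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct item *it = malloc(sizeof(*it));
		it->next = head;
		it->value = i;
		head = it;
	}
	drain_all();
	return 0;
}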
@@ -243,6 +243,7 @@ static const struct of_device_id of_dwc3_simple_match[] = {
 	{ .compatible = "amlogic,meson-axg-dwc3" },
 	{ .compatible = "amlogic,meson-gxl-dwc3" },
 	{ .compatible = "allwinner,sun50i-h6-dwc3" },
+	{ .compatible = "hisilicon,hi3670-dwc3" },
 	{ /* Sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, of_dwc3_simple_match);
@@ -1523,7 +1523,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
 		fs_ncm_notify_desc.bEndpointAddress;
 
 	status = usb_assign_descriptors(f, ncm_fs_function, ncm_hs_function,
-			ncm_ss_function, NULL);
+			ncm_ss_function, ncm_ss_function);
 	if (status)
 		goto fail;
 
@@ -31,6 +31,7 @@
 #include <linux/types.h>
 #include <linux/ctype.h>
 #include <linux/cdev.h>
+#include <linux/kref.h>
 
 #include <asm/byteorder.h>
 #include <linux/io.h>
@@ -64,7 +65,7 @@ struct printer_dev {
 	struct usb_gadget	*gadget;
 	s8			interface;
 	struct usb_ep		*in_ep, *out_ep;
-
+	struct kref		kref;
 	struct list_head	rx_reqs;	/* List of free RX structs */
 	struct list_head	rx_reqs_active; /* List of Active RX xfers */
 	struct list_head	rx_buffers;	/* List of completed xfers */
@@ -218,6 +219,13 @@ static inline struct usb_endpoint_descriptor *ep_desc(struct usb_gadget *gadget,
 
 /*-------------------------------------------------------------------------*/
 
+static void printer_dev_free(struct kref *kref)
+{
+	struct printer_dev *dev = container_of(kref, struct printer_dev, kref);
+
+	kfree(dev);
+}
+
 static struct usb_request *
 printer_req_alloc(struct usb_ep *ep, unsigned len, gfp_t gfp_flags)
 {
@@ -348,6 +356,7 @@ printer_open(struct inode *inode, struct file *fd)
 
 	spin_unlock_irqrestore(&dev->lock, flags);
 
+	kref_get(&dev->kref);
 	DBG(dev, "printer_open returned %x\n", ret);
 	return ret;
 }
@@ -365,6 +374,7 @@ printer_close(struct inode *inode, struct file *fd)
 	dev->printer_status &= ~PRINTER_SELECTED;
 	spin_unlock_irqrestore(&dev->lock, flags);
 
+	kref_put(&dev->kref, printer_dev_free);
 	DBG(dev, "printer_close\n");
 
 	return 0;
@@ -1350,7 +1360,8 @@ static void gprinter_free(struct usb_function *f)
 	struct f_printer_opts *opts;
 
 	opts = container_of(f->fi, struct f_printer_opts, func_inst);
-	kfree(dev);
+
+	kref_put(&dev->kref, printer_dev_free);
 	mutex_lock(&opts->lock);
 	--opts->refcnt;
 	mutex_unlock(&opts->lock);
@@ -1419,6 +1430,7 @@ static struct usb_function *gprinter_alloc(struct usb_function_instance *fi)
 		return ERR_PTR(-ENOMEM);
 	}
 
+	kref_init(&dev->kref);
 	++opts->refcnt;
 	dev->minor = opts->minor;
 	dev->pnp_string = opts->pnp_string;
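The f_printer hunks above move the gadget structure's lifetime onto a kref: gprinter_alloc() takes the initial reference, printer_open()/printer_close() take and drop one, and gprinter_free() drops its reference instead of calling kfree() directly, so the memory is only released once the last user is gone. A minimal user-space sketch of that reference-counting shape follows; it is illustrative only, and struct printer_like, obj_init, obj_get and obj_put are invented stand-ins for the kref helpers.

/* Sketch: free the object only when the last reference is dropped. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct printer_like {
	atomic_int refcount;
	int minor;
};

static void obj_init(struct printer_like *p)
{
	atomic_store(&p->refcount, 1);		/* like kref_init() */
}

static void obj_get(struct printer_like *p)
{
	atomic_fetch_add(&p->refcount, 1);	/* like kref_get() */
}

static void obj_put(struct printer_like *p)	/* like kref_put() */
{
	/* fetch_sub returns the old value: 1 means we dropped the last ref */
	if (atomic_fetch_sub(&p->refcount, 1) == 1) {
		printf("last reference gone, freeing\n");
		free(p);
	}
}

int main(void)
{
	struct printer_like *p = malloc(sizeof(*p));

	obj_init(p);	/* allocation path holds the first reference */
	obj_get(p);	/* open() takes another */
	obj_put(p);	/* close() drops its reference */
	obj_put(p);	/* teardown drops the last one and frees */
	return 0;
}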
@@ -665,20 +665,24 @@ static int ohci_run (struct ohci_hcd *ohci)
 
 	/* handle root hub init quirks ... */
 	val = roothub_a (ohci);
-	val &= ~(RH_A_PSM | RH_A_OCPM);
+	/* Configure for per-port over-current protection by default */
+	val &= ~RH_A_NOCP;
+	val |= RH_A_OCPM;
 	if (ohci->flags & OHCI_QUIRK_SUPERIO) {
-		/* NSC 87560 and maybe others */
+		/* NSC 87560 and maybe others.
+		 * Ganged power switching, no over-current protection.
+		 */
 		val |= RH_A_NOCP;
-		val &= ~(RH_A_POTPGT | RH_A_NPS);
-		ohci_writel (ohci, val, &ohci->regs->roothub.a);
+		val &= ~(RH_A_POTPGT | RH_A_NPS | RH_A_PSM | RH_A_OCPM);
 	} else if ((ohci->flags & OHCI_QUIRK_AMD756) ||
 			(ohci->flags & OHCI_QUIRK_HUB_POWER)) {
 		/* hub power always on; required for AMD-756 and some
-		 * Mac platforms.  ganged overcurrent reporting, if any.
+		 * Mac platforms.
 		 */
 		val |= RH_A_NPS;
-		ohci_writel (ohci, val, &ohci->regs->roothub.a);
 	}
+	ohci_writel(ohci, val, &ohci->regs->roothub.a);
 
 	ohci_writel (ohci, RH_HS_LPSC, &ohci->regs->roothub.status);
 	ohci_writel (ohci, (val & RH_A_NPS) ? 0 : RH_B_PPCM,
 			&ohci->regs->roothub.b);
@@ -355,11 +355,13 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_device *vdev,
 	vdev->ctx[vector].producer.token = trigger;
 	vdev->ctx[vector].producer.irq = irq;
 	ret = irq_bypass_register_producer(&vdev->ctx[vector].producer);
-	if (unlikely(ret))
+	if (unlikely(ret)) {
 		dev_info(&pdev->dev,
 		"irq bypass producer (token %p) registration fails: %d\n",
 		vdev->ctx[vector].producer.token, ret);
 
+		vdev->ctx[vector].producer.token = NULL;
+	}
 	vdev->ctx[vector].trigger = trigger;
 
 	return 0;
@@ -638,7 +638,8 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 		ret = vfio_add_to_pfn_list(dma, iova, phys_pfn[i]);
 		if (ret) {
-			vfio_unpin_page_external(dma, iova, do_accounting);
+			if (put_pfn(phys_pfn[i], dma->prot) && do_accounting)
+				vfio_lock_acct(dma, -1, true);
 			goto pin_unwind;
 		}
 	}
@@ -70,7 +70,7 @@
 #define EFCH_PM_DECODEEN_WDT_TMREN	BIT(7)
 
 
-#define EFCH_PM_DECODEEN3		0x00
+#define EFCH_PM_DECODEEN3		0x03
 #define EFCH_PM_DECODEEN_SECOND_RES	GENMASK(1, 0)
 #define EFCH_PM_WATCHDOG_DISABLE	((u8)GENMASK(3, 2))
 
@@ -944,8 +944,10 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
 	wd_data->wdd = wdd;
 	wdd->wd_data = wd_data;
 
-	if (IS_ERR_OR_NULL(watchdog_kworker))
+	if (IS_ERR_OR_NULL(watchdog_kworker)) {
+		kfree(wd_data);
 		return -ENODEV;
+	}
 
 	device_initialize(&wd_data->dev);
 	wd_data->dev.devt = MKDEV(MAJOR(watchdog_devt), wdd->id);
@@ -971,7 +973,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
 			pr_err("%s: a legacy watchdog module is probably present.\n",
 				wdd->info->identity);
 			old_wd_data = NULL;
-			kfree(wd_data);
+			put_device(&wd_data->dev);
 			return err;
 		}
 	}
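The two watchdog_cdev_register() hunks above fix opposite halves of the same rule: before device_initialize() has run, the error path must free wd_data directly (the first hunk adds the missing kfree), and after it, cleanup must go through put_device() so the device core's release handler does the freeing (the second hunk). A small user-space sketch of that split follows; it is illustrative only, and obj_initialize, obj_put and register_thing are invented stand-ins for the driver-core calls.

/* Sketch: plain free before hand-off to a refcounted framework, put afterwards. */
#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refcount;
	void (*release)(struct obj *);
};

static void obj_release(struct obj *o)
{
	printf("release callback frees the object\n");
	free(o);
}

static void obj_initialize(struct obj *o)	/* stand-in for device_initialize() */
{
	o->refcount = 1;
	o->release = obj_release;
}

static void obj_put(struct obj *o)		/* stand-in for put_device() */
{
	if (--o->refcount == 0)
		o->release(o);
}

static int register_thing(int fail_early, int fail_late)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (!o)
		return -1;
	if (fail_early) {	/* not initialized yet: plain free is correct */
		free(o);
		return -1;
	}
	obj_initialize(o);
	if (fail_late) {	/* initialized: must go through the put path */
		obj_put(o);
		return -1;
	}
	obj_put(o);		/* normal teardown for this demo */
	return 0;
}

int main(void)
{
	register_thing(1, 0);
	register_thing(0, 1);
	return 0;
}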
@@ -218,6 +218,7 @@ struct dlm_space {
 	struct list_head members;
 	struct mutex members_lock;
 	int members_count;
+	struct dlm_nodes *nds;
 };
 
 struct dlm_comms {
@@ -426,6 +427,7 @@ static struct config_group *make_space(struct config_group *g, const char *name)
 	INIT_LIST_HEAD(&sp->members);
 	mutex_init(&sp->members_lock);
 	sp->members_count = 0;
+	sp->nds = nds;
 	return &sp->group;
 
 fail:
@@ -447,6 +449,7 @@ static void drop_space(struct config_group *g, struct config_item *i)
 static void release_space(struct config_item *i)
 {
 	struct dlm_space *sp = config_item_to_space(i);
+	kfree(sp->nds);
 	kfree(sp);
 }
 
@@ -108,6 +108,9 @@ static int ext4_getfsmap_helper(struct super_block *sb,
 
 	/* Are we just counting mappings? */
 	if (info->gfi_head->fmh_count == 0) {
+		if (info->gfi_head->fmh_entries == UINT_MAX)
+			return EXT4_QUERY_RANGE_ABORT;
+
 		if (rec_fsblk > info->gfi_next_fsblk)
 			info->gfi_head->fmh_entries++;
 
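The ext4_getfsmap_helper() hunk above keeps the fmh_entries counter from wrapping when a caller only asks for a count (fmh_count == 0): once the counter reaches UINT_MAX the walk is aborted instead of incrementing past it. A minimal sketch of that saturation guard, illustrative only (count_record and ABORT are invented names):

/* Sketch: stop counting before an unsigned counter would wrap. */
#include <limits.h>
#include <stdio.h>

#define ABORT 1

static unsigned int entries;

static int count_record(void)
{
	if (entries == UINT_MAX)	/* would wrap on ++ */
		return ABORT;
	entries++;
	return 0;
}

int main(void)
{
	entries = UINT_MAX - 1;
	printf("%d %d\n", count_record(), count_record());	/* prints "0 1" */
	return 0;
}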
@@ -957,4 +957,5 @@ void f2fs_unregister_sysfs(struct f2fs_sb_info *sbi)
 	}
 	kobject_del(&sbi->s_kobj);
 	kobject_put(&sbi->s_kobj);
+	wait_for_completion(&sbi->s_kobj_unregister);
 }
@@ -1835,6 +1835,12 @@ int ntfs_read_inode_mount(struct inode *vi)
 		brelse(bh);
 	}
 
+	if (le32_to_cpu(m->bytes_allocated) != vol->mft_record_size) {
+		ntfs_error(sb, "Incorrect mft record size %u in superblock, should be %u.",
+				le32_to_cpu(m->bytes_allocated), vol->mft_record_size);
+		goto err_out;
+	}
+
 	/* Apply the mst fixups. */
 	if (post_read_mst_fixup((NTFS_RECORD*)m, vol->mft_record_size)) {
 		/* FIXME: Try to use the $MFTMirr now. */
@@ -228,7 +228,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
 	if (!pages)
 		goto out_free;
 
-	nr = find_get_pages(inode->i_mapping, &pgoff, lpages, pages);
+	nr = find_get_pages_contig(inode->i_mapping, pgoff, lpages, pages);
 	if (nr != lpages)
 		goto out_free_pages; /* leave if some pages were missing */
 
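The ramfs_nommu_get_unmapped_area() hunk above switches from find_get_pages() to find_get_pages_contig(): the nommu mapping needs a contiguous run of backing pages, and a lookup that silently skips over holes could satisfy the "nr != lpages" check even though pages in the middle are missing. A small sketch of the difference, illustrative only (count_contig and the sample indices are invented):

/* Sketch: a gap must stop the run rather than be skipped over. */
#include <stddef.h>
#include <stdio.h>

static size_t count_contig(const unsigned long *present, size_t n,
			   unsigned long start, unsigned long want)
{
	size_t found = 0;

	for (size_t i = 0; i < n && found < want; i++) {
		if (present[i] == start + found)
			found++;		/* next expected index */
		else if (present[i] > start + found)
			break;			/* gap: stop, don't skip it */
	}
	return found;
}

int main(void)
{
	/* pages 4, 5, 6 are present, 7 is missing, 8 is present */
	unsigned long cache[] = { 4, 5, 6, 8 };

	/* asking for 5 pages from index 4 must report only 3 */
	printf("%zu\n", count_contig(cache, 4, 4, 5));
	return 0;
}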
@@ -2161,7 +2161,8 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th,
 out_inserted_sd:
 	clear_nlink(inode);
 	th->t_trans_id = 0;	/* so the caller can't use this handle later */
-	unlock_new_inode(inode); /* OK to do even if we hadn't locked it */
+	if (inode->i_state & I_NEW)
+		unlock_new_inode(inode);
 	iput(inode);
 	return err;
 }
@@ -1264,6 +1264,10 @@ static int reiserfs_parse_options(struct super_block *s,
 						"turned on.");
 					return 0;
 				}
+				if (qf_names[qtype] !=
+				    REISERFS_SB(s)->s_qf_names[qtype])
+					kfree(qf_names[qtype]);
+				qf_names[qtype] = NULL;
 				if (*arg) {	/* Some filename specified? */
 					if (REISERFS_SB(s)->s_qf_names[qtype]
 					    && strcmp(REISERFS_SB(s)->s_qf_names[qtype],
@@ -1293,10 +1297,6 @@ static int reiserfs_parse_options(struct super_block *s,
 					else
 						*mount_options |= 1 << REISERFS_GRPQUOTA;
 				} else {
-					if (qf_names[qtype] !=
-					    REISERFS_SB(s)->s_qf_names[qtype])
-						kfree(qf_names[qtype]);
-					qf_names[qtype] = NULL;
 					if (qtype == USRQUOTA)
 						*mount_options &= ~(1 << REISERFS_USRQUOTA);
 					else
@@ -132,21 +132,24 @@ void udf_evict_inode(struct inode *inode)
 	struct udf_inode_info *iinfo = UDF_I(inode);
 	int want_delete = 0;
 
-	if (!inode->i_nlink && !is_bad_inode(inode)) {
-		want_delete = 1;
-		udf_setsize(inode, 0);
-		udf_update_inode(inode, IS_SYNC(inode));
+	if (!is_bad_inode(inode)) {
+		if (!inode->i_nlink) {
+			want_delete = 1;
+			udf_setsize(inode, 0);
+			udf_update_inode(inode, IS_SYNC(inode));
+		}
+		if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB &&
+		    inode->i_size != iinfo->i_lenExtents) {
+			udf_warn(inode->i_sb,
+				 "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n",
+				 inode->i_ino, inode->i_mode,
+				 (unsigned long long)inode->i_size,
+				 (unsigned long long)iinfo->i_lenExtents);
+		}
 	}
-	truncate_inode_pages_final(&inode->i_data);
-	invalidate_inode_buffers(inode);
-	clear_inode(inode);
-	if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB &&
-	    inode->i_size != iinfo->i_lenExtents) {
-		udf_warn(inode->i_sb, "Inode %lu (mode %o) has inode size %llu different from extent length %llu. Filesystem need not be standards compliant.\n",
-			 inode->i_ino, inode->i_mode,
-			 (unsigned long long)inode->i_size,
-			 (unsigned long long)iinfo->i_lenExtents);
-	}
+	truncate_inode_pages_final(&inode->i_data);
+	invalidate_inode_buffers(inode);
+	clear_inode(inode);
 	kfree(iinfo->i_ext.i_data);
 	iinfo->i_ext.i_data = NULL;
 	udf_clear_extent_cache(inode);
@@ -1349,6 +1349,12 @@ static int udf_load_sparable_map(struct super_block *sb,
 			(int)spm->numSparingTables);
 		return -EIO;
 	}
+	if (le32_to_cpu(spm->sizeSparingTable) > sb->s_blocksize) {
+		udf_err(sb, "error loading logical volume descriptor: "
+			"Too big sparing table size (%u)\n",
+			le32_to_cpu(spm->sizeSparingTable));
+		return -EIO;
+	}
 
 	for (i = 0; i < spm->numSparingTables; i++) {
 		loc = le32_to_cpu(spm->locSparingTable[i]);
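The udf_load_sparable_map() hunk above validates the on-disk sizeSparingTable field against the filesystem block size before the table is read, so a corrupted or hostile image cannot request more data than one block can hold. A minimal sketch of that kind of bounds check, illustrative only (load_table, BLOCK_SIZE and the sizes are invented):

/* Sketch: validate an untrusted on-disk length before copying. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512u

static int load_table(const uint8_t *disk_block, uint32_t claimed_size,
		      uint8_t *dst, size_t dst_size)
{
	/* reject sizes larger than one block or than the destination buffer */
	if (claimed_size > BLOCK_SIZE || claimed_size > dst_size) {
		fprintf(stderr, "too big table size (%u)\n", claimed_size);
		return -1;
	}
	memcpy(dst, disk_block, claimed_size);
	return 0;
}

int main(void)
{
	uint8_t block[BLOCK_SIZE] = { 0 };
	uint8_t table[BLOCK_SIZE];

	printf("%d\n", load_table(block, 64, table, sizeof(table)));	 /* accepted */
	printf("%d\n", load_table(block, 100000, table, sizeof(table))); /* rejected */
	return 0;
}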