Merge "Merge android-4.19-q.81 (9045ee1
) into msm-4.19"
commit a81d2e678c
188 changed files with 1500 additions and 1721 deletions

@@ -1,138 +0,0 @@
Copyright (C) 1999, 2000 Bruce Tenison
Portions Copyright (C) 1999, 2000 David Nelson
Thanks to David Nelson for guidance and the usage of the scanner.txt
and scanner.c files to model our driver and this informative file.

Mar. 2, 2000

CHANGES

- Initial Revision


OVERVIEW

This README will address issues regarding how to configure the kernel
to access a RIO 500 mp3 player.
Before I explain how to use this to access the Rio500 please be warned:

W A R N I N G:
--------------

Please note that this software is still under development. The authors
are in no way responsible for any damage that may occur, no matter how
inconsequential.

It seems that the Rio has a problem when sending .mp3 with low batteries.
I suggest when the batteries are low and you want to transfer stuff that you
replace it with a fresh one. In my case, what happened is I lost two 16kb
blocks (they are no longer usable to store information to it). But I don't
know if that's normal or not; it could simply be a problem with the flash
memory.

In an extreme case, I left my Rio playing overnight and the batteries wore
down to nothing and appear to have corrupted the flash memory. My RIO
needed to be replaced as a result. Diamond tech support is aware of the
problem. Do NOT allow your batteries to wear down to nothing before
changing them. It appears RIO 500 firmware does not handle low battery
power well at all.

On systems with OHCI controllers, the kernel OHCI code appears to have
power on problems with some chipsets. If you are having problems
connecting to your RIO 500, try turning it on first and then plugging it
into the USB cable.

Contact information:
--------------------

The main page for the project is hosted at sourceforge.net in the following
URL: <http://rio500.sourceforge.net>. You can also go to the project's
sourceforge home page at: <http://sourceforge.net/projects/rio500/>.
There is also a mailing list: rio500-users@lists.sourceforge.net

Authors:
-------

Most of the code was written by Cesar Miquel <miquel@df.uba.ar>. Keith
Clayton <kclayton@jps.net> is incharge of the PPC port and making sure
things work there. Bruce Tenison <btenison@dibbs.net> is adding support
for .fon files and also does testing. The program will mostly sure be
re-written and Pete Ikusz along with the rest will re-design it. I would
also like to thank Tri Nguyen <tmn_3022000@hotmail.com> who provided use
with some important information regarding the communication with the Rio.

ADDITIONAL INFORMATION and Userspace tools

http://rio500.sourceforge.net/


REQUIREMENTS

A host with a USB port. Ideally, either a UHCI (Intel) or OHCI
(Compaq and others) hardware port should work.

A Linux development kernel (2.3.x) with USB support enabled or a
backported version to linux-2.2.x. See http://www.linux-usb.org for
more information on accomplishing this.

A Linux kernel with RIO 500 support enabled.

'lspci' which is only needed to determine the type of USB hardware
available in your machine.

CONFIGURATION

Using `lspci -v`, determine the type of USB hardware available.

If you see something like:

USB Controller: ......
Flags: .....
I/O ports at ....

Then you have a UHCI based controller.

If you see something like:

USB Controller: .....
Flags: ....
Memory at .....

Then you have a OHCI based controller.

Using `make menuconfig` or your preferred method for configuring the
kernel, select 'Support for USB', 'OHCI/UHCI' depending on your
hardware (determined from the steps above), 'USB Diamond Rio500 support', and
'Preliminary USB device filesystem'. Compile and install the modules
(you may need to execute `depmod -a` to update the module
dependencies).

Add a device for the USB rio500:
`mknod /dev/usb/rio500 c 180 64`

Set appropriate permissions for /dev/usb/rio500 (don't forget about
group and world permissions). Both read and write permissions are
required for proper operation.

Load the appropriate modules (if compiled as modules):

OHCI:
modprobe usbcore
modprobe usb-ohci
modprobe rio500

UHCI:
modprobe usbcore
modprobe usb-uhci (or uhci)
modprobe rio500

That's it. The Rio500 Utils at: http://rio500.sourceforge.net should
be able to access the rio500.

BUGS

If you encounter any problems feel free to drop me an email.

Bruce Tenison
btenison@dibbs.net

@@ -15125,13 +15125,6 @@ W: http://www.linux-usb.org/usbnet
S: Maintained
F: drivers/net/usb/dm9601.c

USB DIAMOND RIO500 DRIVER
M: Cesar Miquel <miquel@df.uba.ar>
L: rio500-users@lists.sourceforge.net
W: http://rio500.sourceforge.net
S: Maintained
F: drivers/usb/misc/rio500*

USB EHCI DRIVER
M: Alan Stern <stern@rowland.harvard.edu>
L: linux-usb@vger.kernel.org

Makefile (2)

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 79
SUBLEVEL = 81
EXTRAVERSION =
NAME = "People's Front"

@@ -1142,6 +1142,8 @@
ti,hwmods = "dss_dispc";
clocks = <&disp_clk>;
clock-names = "fck";

max-memory-bandwidth = <230000000>;
};

rfbi: rfbi@4832a800 {

@@ -91,7 +91,6 @@ CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_RIO500=m
CONFIG_EXT2_FS=m
CONFIG_EXT3_FS=m
CONFIG_MSDOS_FS=y

@@ -197,7 +197,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_RIO500=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_CYTHERM=m

@@ -588,7 +588,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_RIO500=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_CYTHERM=m

@@ -334,7 +334,6 @@ CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
CONFIG_USB_RIO500=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_CYPRESS_CY7C63=m

@@ -191,7 +191,6 @@ CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_RIO500=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
CONFIG_USB_CYTHERM=m

@@ -946,7 +946,8 @@ static struct omap_hwmod_class_sysconfig am33xx_timer_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0010,
.syss_offs = 0x0014,
.sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET),
.sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET |
SYSC_HAS_RESET_STATUS,
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP),
.sysc_fields = &omap_hwmod_sysc_type2,

@@ -77,83 +77,6 @@ int omap_pm_clkdms_setup(struct clockdomain *clkdm, void *unused)
return 0;
}

/*
* This API is to be called during init to set the various voltage
* domains to the voltage as per the opp table. Typically we boot up
* at the nominal voltage. So this function finds out the rate of
* the clock associated with the voltage domain, finds out the correct
* opp entry and sets the voltage domain to the voltage specified
* in the opp entry
*/
static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
const char *oh_name)
{
struct voltagedomain *voltdm;
struct clk *clk;
struct dev_pm_opp *opp;
unsigned long freq, bootup_volt;
struct device *dev;

if (!vdd_name || !clk_name || !oh_name) {
pr_err("%s: invalid parameters\n", __func__);
goto exit;
}

if (!strncmp(oh_name, "mpu", 3))
/*
* All current OMAPs share voltage rail and clock
* source, so CPU0 is used to represent the MPU-SS.
*/
dev = get_cpu_device(0);
else
dev = omap_device_get_by_hwmod_name(oh_name);

if (IS_ERR(dev)) {
pr_err("%s: Unable to get dev pointer for hwmod %s\n",
__func__, oh_name);
goto exit;
}

voltdm = voltdm_lookup(vdd_name);
if (!voltdm) {
pr_err("%s: unable to get vdd pointer for vdd_%s\n",
__func__, vdd_name);
goto exit;
}

clk = clk_get(NULL, clk_name);
if (IS_ERR(clk)) {
pr_err("%s: unable to get clk %s\n", __func__, clk_name);
goto exit;
}

freq = clk_get_rate(clk);
clk_put(clk);

opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) {
pr_err("%s: unable to find boot up OPP for vdd_%s\n",
__func__, vdd_name);
goto exit;
}

bootup_volt = dev_pm_opp_get_voltage(opp);
dev_pm_opp_put(opp);

if (!bootup_volt) {
pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
__func__, vdd_name);
goto exit;
}

voltdm_scale(voltdm, bootup_volt);
return 0;

exit:
pr_err("%s: unable to set vdd_%s\n", __func__, vdd_name);
return -EINVAL;
}

#ifdef CONFIG_SUSPEND
static int omap_pm_enter(suspend_state_t suspend_state)
{

@@ -211,25 +134,6 @@ void omap_common_suspend_init(void *pm_suspend)
}
#endif /* CONFIG_SUSPEND */

static void __init omap3_init_voltages(void)
{
if (!soc_is_omap34xx())
return;

omap2_set_init_voltage("mpu_iva", "dpll1_ck", "mpu");
omap2_set_init_voltage("core", "l3_ick", "l3_main");
}

static void __init omap4_init_voltages(void)
{
if (!soc_is_omap44xx())
return;

omap2_set_init_voltage("mpu", "dpll_mpu_ck", "mpu");
omap2_set_init_voltage("core", "l3_div_ck", "l3_main_1");
omap2_set_init_voltage("iva", "dpll_iva_m5x2_ck", "iva");
}

int __maybe_unused omap_pm_nop_init(void)
{
return 0;

@@ -249,10 +153,6 @@ int __init omap2_common_pm_late_init(void)
omap4_twl_init();
omap_voltage_late_init();

/* Initialize the voltages */
omap3_init_voltages();
omap4_init_voltages();

/* Smartreflex device init */
omap_devinit_smartreflex();

@@ -31,7 +31,9 @@ void __init xen_efi_runtime_setup(void)
efi.get_variable = xen_efi_get_variable;
efi.get_next_variable = xen_efi_get_next_variable;
efi.set_variable = xen_efi_set_variable;
efi.set_variable_nonblocking = xen_efi_set_variable;
efi.query_variable_info = xen_efi_query_variable_info;
efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
efi.update_capsule = xen_efi_update_capsule;
efi.query_capsule_caps = xen_efi_query_capsule_caps;
efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;

@ -23,6 +23,7 @@
|
|||
#include <asm/cpu.h>
|
||||
#include <asm/cputype.h>
|
||||
#include <asm/cpufeature.h>
|
||||
#include <asm/smp_plat.h>
|
||||
|
||||
static bool __maybe_unused
|
||||
is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
|
||||
|
@ -618,6 +619,30 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
|
|||
return (need_wa > 0);
|
||||
}
|
||||
|
||||
static const __maybe_unused struct midr_range tx2_family_cpus[] = {
|
||||
MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
|
||||
MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
|
||||
{},
|
||||
};
|
||||
|
||||
static bool __maybe_unused
|
||||
needs_tx2_tvm_workaround(const struct arm64_cpu_capabilities *entry,
|
||||
int scope)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (!is_affected_midr_range_list(entry, scope) ||
|
||||
!is_hyp_mode_available())
|
||||
return false;
|
||||
|
||||
for_each_possible_cpu(i) {
|
||||
if (MPIDR_AFFINITY_LEVEL(cpu_logical_map(i), 0) != 0)
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_HARDEN_EL2_VECTORS
|
||||
|
||||
static const struct midr_range arm64_harden_el2_vectors[] = {
|
||||
|
@ -842,6 +867,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
|
|||
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
|
||||
.matches = has_cortex_a76_erratum_1463225,
|
||||
},
|
||||
#endif
|
||||
#ifdef CONFIG_CAVIUM_TX2_ERRATUM_219
|
||||
{
|
||||
.desc = "Cavium ThunderX2 erratum 219 (KVM guest sysreg trapping)",
|
||||
.capability = ARM64_WORKAROUND_CAVIUM_TX2_219_TVM,
|
||||
ERRATA_MIDR_RANGE_LIST(tx2_family_cpus),
|
||||
.matches = needs_tx2_tvm_workaround,
|
||||
},
|
||||
#endif
|
||||
{
|
||||
}
|
||||
|
|
|
@ -364,22 +364,27 @@ void arch_release_task_struct(struct task_struct *tsk)
|
|||
fpsimd_release_task(tsk);
|
||||
}
|
||||
|
||||
/*
|
||||
* src and dst may temporarily have aliased sve_state after task_struct
|
||||
* is copied. We cannot fix this properly here, because src may have
|
||||
* live SVE state and dst's thread_info may not exist yet, so tweaking
|
||||
* either src's or dst's TIF_SVE is not safe.
|
||||
*
|
||||
* The unaliasing is done in copy_thread() instead. This works because
|
||||
* dst is not schedulable or traceable until both of these functions
|
||||
* have been called.
|
||||
*/
|
||||
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
|
||||
{
|
||||
if (current->mm)
|
||||
fpsimd_preserve_current_state();
|
||||
*dst = *src;
|
||||
|
||||
/* We rely on the above assignment to initialize dst's thread_flags: */
|
||||
BUILD_BUG_ON(!IS_ENABLED(CONFIG_THREAD_INFO_IN_TASK));
|
||||
|
||||
/*
|
||||
* Detach src's sve_state (if any) from dst so that it does not
|
||||
* get erroneously used or freed prematurely. dst's sve_state
|
||||
* will be allocated on demand later on if dst uses SVE.
|
||||
* For consistency, also clear TIF_SVE here: this could be done
|
||||
* later in copy_process(), but to avoid tripping up future
|
||||
* maintainers it is best not to leave TIF_SVE and sve_state in
|
||||
* an inconsistent state, even temporarily.
|
||||
*/
|
||||
dst->thread.sve_state = NULL;
|
||||
clear_tsk_thread_flag(dst, TIF_SVE);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -392,13 +397,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
|
|||
|
||||
memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
|
||||
|
||||
/*
|
||||
* Unalias p->thread.sve_state (if any) from the parent task
|
||||
* and disable discard SVE state for p:
|
||||
*/
|
||||
clear_tsk_thread_flag(p, TIF_SVE);
|
||||
p->thread.sve_state = NULL;
|
||||
|
||||
/*
|
||||
* In case p was allocated the same task_struct pointer as some
|
||||
* other recently-exited task, make sure p is disassociated from
|
||||
|
|
|
@ -359,17 +359,28 @@ void remove_cpu_topology(unsigned int cpu)
|
|||
}
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static bool __init acpi_cpu_is_threaded(int cpu)
|
||||
{
|
||||
int is_threaded = acpi_pptt_cpu_is_thread(cpu);
|
||||
|
||||
/*
|
||||
* if the PPTT doesn't have thread information, assume a homogeneous
|
||||
* machine and return the current CPU's thread state.
|
||||
*/
|
||||
if (is_threaded < 0)
|
||||
is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;
|
||||
|
||||
return !!is_threaded;
|
||||
}
|
||||
|
||||
/*
|
||||
* Propagate the topology information of the processor_topology_node tree to the
|
||||
* cpu_topology array.
|
||||
*/
|
||||
static int __init parse_acpi_topology(void)
|
||||
{
|
||||
bool is_threaded;
|
||||
int cpu, topology_id;
|
||||
|
||||
is_threaded = read_cpuid_mpidr() & MPIDR_MT_BITMASK;
|
||||
|
||||
for_each_possible_cpu(cpu) {
|
||||
int i, cache_id;
|
||||
|
||||
|
@ -377,7 +388,7 @@ static int __init parse_acpi_topology(void)
|
|||
if (topology_id < 0)
|
||||
return topology_id;
|
||||
|
||||
if (is_threaded) {
|
||||
if (acpi_cpu_is_threaded(cpu)) {
|
||||
cpu_topology[cpu].thread_id = topology_id;
|
||||
topology_id = find_acpi_cpu_topology(cpu, 1);
|
||||
cpu_topology[cpu].core_id = topology_id;
|
||||
|
|
|
@ -99,7 +99,7 @@
|
|||
|
||||
miscintc: interrupt-controller@18060010 {
|
||||
compatible = "qca,ar7240-misc-intc";
|
||||
reg = <0x18060010 0x4>;
|
||||
reg = <0x18060010 0x8>;
|
||||
|
||||
interrupt-parent = <&cpuintc>;
|
||||
interrupts = <6>;
|
||||
|
|
|
@ -623,7 +623,6 @@ CONFIG_USB_SERIAL_OMNINET=m
|
|||
CONFIG_USB_EMI62=m
|
||||
CONFIG_USB_EMI26=m
|
||||
CONFIG_USB_ADUTUX=m
|
||||
CONFIG_USB_RIO500=m
|
||||
CONFIG_USB_LEGOTOWER=m
|
||||
CONFIG_USB_LCD=m
|
||||
CONFIG_USB_CYPRESS_CY7C63=m
|
||||
|
|
|
@ -335,7 +335,6 @@ CONFIG_USB_SERIAL_SAFE_PADDED=y
|
|||
CONFIG_USB_SERIAL_CYBERJACK=m
|
||||
CONFIG_USB_SERIAL_XIRCOM=m
|
||||
CONFIG_USB_SERIAL_OMNINET=m
|
||||
CONFIG_USB_RIO500=m
|
||||
CONFIG_USB_LEGOTOWER=m
|
||||
CONFIG_USB_LCD=m
|
||||
CONFIG_USB_CYTHERM=m
|
||||
|
|
|
@ -6,5 +6,16 @@
|
|||
#define HWCAP_MIPS_R6 (1 << 0)
|
||||
#define HWCAP_MIPS_MSA (1 << 1)
|
||||
#define HWCAP_MIPS_CRC32 (1 << 2)
|
||||
#define HWCAP_MIPS_MIPS16 (1 << 3)
|
||||
#define HWCAP_MIPS_MDMX (1 << 4)
|
||||
#define HWCAP_MIPS_MIPS3D (1 << 5)
|
||||
#define HWCAP_MIPS_SMARTMIPS (1 << 6)
|
||||
#define HWCAP_MIPS_DSP (1 << 7)
|
||||
#define HWCAP_MIPS_DSP2 (1 << 8)
|
||||
#define HWCAP_MIPS_DSP3 (1 << 9)
|
||||
#define HWCAP_MIPS_MIPS16E2 (1 << 10)
|
||||
#define HWCAP_LOONGSON_MMI (1 << 11)
|
||||
#define HWCAP_LOONGSON_EXT (1 << 12)
|
||||
#define HWCAP_LOONGSON_EXT2 (1 << 13)
|
||||
|
||||
#endif /* _UAPI_ASM_HWCAP_H */
|
||||
|
|
|
@ -2105,6 +2105,39 @@ void cpu_probe(void)
|
|||
elf_hwcap |= HWCAP_MIPS_MSA;
|
||||
}
|
||||
|
||||
if (cpu_has_mips16)
|
||||
elf_hwcap |= HWCAP_MIPS_MIPS16;
|
||||
|
||||
if (cpu_has_mdmx)
|
||||
elf_hwcap |= HWCAP_MIPS_MDMX;
|
||||
|
||||
if (cpu_has_mips3d)
|
||||
elf_hwcap |= HWCAP_MIPS_MIPS3D;
|
||||
|
||||
if (cpu_has_smartmips)
|
||||
elf_hwcap |= HWCAP_MIPS_SMARTMIPS;
|
||||
|
||||
if (cpu_has_dsp)
|
||||
elf_hwcap |= HWCAP_MIPS_DSP;
|
||||
|
||||
if (cpu_has_dsp2)
|
||||
elf_hwcap |= HWCAP_MIPS_DSP2;
|
||||
|
||||
if (cpu_has_dsp3)
|
||||
elf_hwcap |= HWCAP_MIPS_DSP3;
|
||||
|
||||
if (cpu_has_mips16e2)
|
||||
elf_hwcap |= HWCAP_MIPS_MIPS16E2;
|
||||
|
||||
if (cpu_has_loongson_mmi)
|
||||
elf_hwcap |= HWCAP_LOONGSON_MMI;
|
||||
|
||||
if (cpu_has_loongson_ext)
|
||||
elf_hwcap |= HWCAP_LOONGSON_EXT;
|
||||
|
||||
if (cpu_has_loongson_ext2)
|
||||
elf_hwcap |= HWCAP_LOONGSON_EXT2;
|
||||
|
||||
if (cpu_has_vz)
|
||||
cpu_probe_vz(c);
|
||||
|
||||
|
|
|
@ -43,6 +43,10 @@ else
|
|||
$(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64)
|
||||
endif
|
||||
|
||||
# Some -march= flags enable MMI instructions, and GCC complains about that
|
||||
# support being enabled alongside -msoft-float. Thus explicitly disable MMI.
|
||||
cflags-y += $(call cc-option,-mno-loongson-mmi)
|
||||
|
||||
#
|
||||
# Loongson Machines' Support
|
||||
#
|
||||
|
|
|
@ -110,7 +110,7 @@ static int __init serial_init(void)
|
|||
}
|
||||
module_init(serial_init);
|
||||
|
||||
static void __init serial_exit(void)
|
||||
static void __exit serial_exit(void)
|
||||
{
|
||||
platform_device_unregister(&uart8250_device);
|
||||
}
|
||||
|
|
|
@ -654,6 +654,13 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
|
|||
int restore_scratch)
|
||||
{
|
||||
if (restore_scratch) {
|
||||
/*
|
||||
* Ensure the MFC0 below observes the value written to the
|
||||
* KScratch register by the prior MTC0.
|
||||
*/
|
||||
if (scratch_reg >= 0)
|
||||
uasm_i_ehb(p);
|
||||
|
||||
/* Reset default page size */
|
||||
if (PM_DEFAULT_MASK >> 16) {
|
||||
uasm_i_lui(p, tmp, PM_DEFAULT_MASK >> 16);
|
||||
|
@ -668,12 +675,10 @@ static void build_restore_pagemask(u32 **p, struct uasm_reloc **r,
|
|||
uasm_i_mtc0(p, 0, C0_PAGEMASK);
|
||||
uasm_il_b(p, r, lid);
|
||||
}
|
||||
if (scratch_reg >= 0) {
|
||||
uasm_i_ehb(p);
|
||||
if (scratch_reg >= 0)
|
||||
UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
|
||||
} else {
|
||||
else
|
||||
UASM_i_LW(p, 1, scratchpad_offset(0), 0);
|
||||
}
|
||||
} else {
|
||||
/* Reset default page size */
|
||||
if (PM_DEFAULT_MASK >> 16) {
|
||||
|
@ -922,6 +927,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
|
|||
}
|
||||
if (mode != not_refill && check_for_high_segbits) {
|
||||
uasm_l_large_segbits_fault(l, *p);
|
||||
|
||||
if (mode == refill_scratch && scratch_reg >= 0)
|
||||
uasm_i_ehb(p);
|
||||
|
||||
/*
|
||||
* We get here if we are an xsseg address, or if we are
|
||||
* an xuseg address above (PGDIR_SHIFT+PGDIR_BITS) boundary.
|
||||
|
@ -938,12 +947,10 @@ build_get_pgd_vmalloc64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
|
|||
uasm_i_jr(p, ptr);
|
||||
|
||||
if (mode == refill_scratch) {
|
||||
if (scratch_reg >= 0) {
|
||||
uasm_i_ehb(p);
|
||||
if (scratch_reg >= 0)
|
||||
UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
|
||||
} else {
|
||||
else
|
||||
UASM_i_LW(p, 1, scratchpad_offset(0), 0);
|
||||
}
|
||||
} else {
|
||||
uasm_i_nop(p);
|
||||
}
|
||||
|
|
|
@ -9,6 +9,7 @@ ccflags-vdso := \
|
|||
$(filter -mmicromips,$(KBUILD_CFLAGS)) \
|
||||
$(filter -march=%,$(KBUILD_CFLAGS)) \
|
||||
$(filter -m%-float,$(KBUILD_CFLAGS)) \
|
||||
$(filter -mno-loongson-%,$(KBUILD_CFLAGS)) \
|
||||
-D__VDSO__
|
||||
|
||||
ifeq ($(cc-name),clang)
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
* arch/parisc/mm/ioremap.c
|
||||
*
|
||||
* (C) Copyright 1995 1996 Linus Torvalds
|
||||
* (C) Copyright 2001-2006 Helge Deller <deller@gmx.de>
|
||||
* (C) Copyright 2001-2019 Helge Deller <deller@gmx.de>
|
||||
* (C) Copyright 2005 Kyle McMartin <kyle@parisc-linux.org>
|
||||
*/
|
||||
|
||||
|
@ -84,7 +84,7 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l
|
|||
addr = (void __iomem *) area->addr;
|
||||
if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size,
|
||||
phys_addr, pgprot)) {
|
||||
vfree(addr);
|
||||
vunmap(addr);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
@ -92,9 +92,11 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l
|
|||
}
|
||||
EXPORT_SYMBOL(__ioremap);
|
||||
|
||||
void iounmap(const volatile void __iomem *addr)
|
||||
void iounmap(const volatile void __iomem *io_addr)
|
||||
{
|
||||
if (addr > high_memory)
|
||||
return vfree((void *) (PAGE_MASK & (unsigned long __force) addr));
|
||||
unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
|
||||
|
||||
if (is_vmalloc_addr((void *)addr))
|
||||
vunmap((void *)addr);
|
||||
}
|
||||
EXPORT_SYMBOL(iounmap);
|
||||
|
|
|
@ -21,7 +21,7 @@
|
|||
#define MWAIT_ECX_INTERRUPT_BREAK 0x1
|
||||
#define MWAITX_ECX_TIMER_ENABLE BIT(1)
|
||||
#define MWAITX_MAX_LOOPS ((u32)-1)
|
||||
#define MWAITX_DISABLE_CSTATES 0xf
|
||||
#define MWAITX_DISABLE_CSTATES 0xf0
|
||||
|
||||
static inline void __monitor(const void *eax, unsigned long ecx,
|
||||
unsigned long edx)
|
||||
|
|
|
@ -158,7 +158,8 @@ static int x2apic_dead_cpu(unsigned int dead_cpu)
|
|||
{
|
||||
struct cluster_mask *cmsk = per_cpu(cluster_masks, dead_cpu);
|
||||
|
||||
cpumask_clear_cpu(dead_cpu, &cmsk->mask);
|
||||
if (cmsk)
|
||||
cpumask_clear_cpu(dead_cpu, &cmsk->mask);
|
||||
free_cpumask_var(per_cpu(ipi_mask, dead_cpu));
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -222,13 +222,31 @@ unsigned long __head __startup_64(unsigned long physaddr,
|
|||
* we might write invalid pmds, when the kernel is relocated
|
||||
* cleanup_highmap() fixes this up along with the mappings
|
||||
* beyond _end.
|
||||
*
|
||||
* Only the region occupied by the kernel image has so far
|
||||
* been checked against the table of usable memory regions
|
||||
* provided by the firmware, so invalidate pages outside that
|
||||
* region. A page table entry that maps to a reserved area of
|
||||
* memory would allow processor speculation into that area,
|
||||
* and on some hardware (particularly the UV platform) even
|
||||
* speculative access to some reserved areas is caught as an
|
||||
* error, causing the BIOS to halt the system.
|
||||
*/
|
||||
|
||||
pmd = fixup_pointer(level2_kernel_pgt, physaddr);
|
||||
for (i = 0; i < PTRS_PER_PMD; i++) {
|
||||
|
||||
/* invalidate pages before the kernel image */
|
||||
for (i = 0; i < pmd_index((unsigned long)_text); i++)
|
||||
pmd[i] &= ~_PAGE_PRESENT;
|
||||
|
||||
/* fixup pages that are part of the kernel image */
|
||||
for (; i <= pmd_index((unsigned long)_end); i++)
|
||||
if (pmd[i] & _PAGE_PRESENT)
|
||||
pmd[i] += load_delta;
|
||||
}
|
||||
|
||||
/* invalidate pages after the kernel image */
|
||||
for (; i < PTRS_PER_PMD; i++)
|
||||
pmd[i] &= ~_PAGE_PRESENT;
|
||||
|
||||
/*
|
||||
* Fixup phys_base - remove the memory encryption mask to obtain
|
||||
|
|
|
@ -113,8 +113,8 @@ static void delay_mwaitx(unsigned long __loops)
|
|||
__monitorx(raw_cpu_ptr(&cpu_tss_rw), 0, 0);
|
||||
|
||||
/*
|
||||
* AMD, like Intel, supports the EAX hint and EAX=0xf
|
||||
* means, do not enter any deep C-state and we use it
|
||||
* AMD, like Intel's MWAIT version, supports the EAX hint and
|
||||
* EAX=0xf0 means, do not enter any deep C-state and we use it
|
||||
* here in delay() to minimize wakeup latency.
|
||||
*/
|
||||
__mwaitx(MWAITX_DISABLE_CSTATES, delay, MWAITX_ECX_TIMER_ENABLE);
|
||||
|
|
|
@ -77,7 +77,9 @@ static efi_system_table_t __init *xen_efi_probe(void)
|
|||
efi.get_variable = xen_efi_get_variable;
|
||||
efi.get_next_variable = xen_efi_get_next_variable;
|
||||
efi.set_variable = xen_efi_set_variable;
|
||||
efi.set_variable_nonblocking = xen_efi_set_variable;
|
||||
efi.query_variable_info = xen_efi_query_variable_info;
|
||||
efi.query_variable_info_nonblocking = xen_efi_query_variable_info;
|
||||
efi.update_capsule = xen_efi_update_capsule;
|
||||
efi.query_capsule_caps = xen_efi_query_capsule_caps;
|
||||
efi.get_next_high_mono_count = xen_efi_get_next_high_mono_count;
|
||||
|
|
|
@ -119,13 +119,6 @@ EXPORT_SYMBOL(__invalidate_icache_range);
|
|||
// FIXME EXPORT_SYMBOL(screen_info);
|
||||
#endif
|
||||
|
||||
EXPORT_SYMBOL(outsb);
|
||||
EXPORT_SYMBOL(outsw);
|
||||
EXPORT_SYMBOL(outsl);
|
||||
EXPORT_SYMBOL(insb);
|
||||
EXPORT_SYMBOL(insw);
|
||||
EXPORT_SYMBOL(insl);
|
||||
|
||||
extern long common_exception_return;
|
||||
EXPORT_SYMBOL(common_exception_return);
|
||||
|
||||
|
|
|
@ -148,24 +148,27 @@ bool rq_depth_calc_max_depth(struct rq_depth *rqd)
|
|||
return ret;
|
||||
}
|
||||
|
||||
void rq_depth_scale_up(struct rq_depth *rqd)
|
||||
/* Returns true on success and false if scaling up wasn't possible */
|
||||
bool rq_depth_scale_up(struct rq_depth *rqd)
|
||||
{
|
||||
/*
|
||||
* Hit max in previous round, stop here
|
||||
*/
|
||||
if (rqd->scaled_max)
|
||||
return;
|
||||
return false;
|
||||
|
||||
rqd->scale_step--;
|
||||
|
||||
rqd->scaled_max = rq_depth_calc_max_depth(rqd);
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* Scale rwb down. If 'hard_throttle' is set, do it quicker, since we
|
||||
* had a latency violation.
|
||||
* had a latency violation. Returns true on success and returns false if
|
||||
* scaling down wasn't possible.
|
||||
*/
|
||||
void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
|
||||
bool rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
|
||||
{
|
||||
/*
|
||||
* Stop scaling down when we've hit the limit. This also prevents
|
||||
|
@ -173,7 +176,7 @@ void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
|
|||
* keep up.
|
||||
*/
|
||||
if (rqd->max_depth == 1)
|
||||
return;
|
||||
return false;
|
||||
|
||||
if (rqd->scale_step < 0 && hard_throttle)
|
||||
rqd->scale_step = 0;
|
||||
|
@ -182,6 +185,7 @@ void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle)
|
|||
|
||||
rqd->scaled_max = false;
|
||||
rq_depth_calc_max_depth(rqd);
|
||||
return true;
|
||||
}
|
||||
|
||||
void rq_qos_exit(struct request_queue *q)
|
||||
|
|
|
@ -80,22 +80,19 @@ static inline void rq_qos_add(struct request_queue *q, struct rq_qos *rqos)
|
|||
|
||||
static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos)
|
||||
{
|
||||
struct rq_qos *cur, *prev = NULL;
|
||||
for (cur = q->rq_qos; cur; cur = cur->next) {
|
||||
if (cur == rqos) {
|
||||
if (prev)
|
||||
prev->next = rqos->next;
|
||||
else
|
||||
q->rq_qos = cur;
|
||||
struct rq_qos **cur;
|
||||
|
||||
for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
|
||||
if (*cur == rqos) {
|
||||
*cur = rqos->next;
|
||||
break;
|
||||
}
|
||||
prev = cur;
|
||||
}
|
||||
}
|
||||
|
||||
bool rq_wait_inc_below(struct rq_wait *rq_wait, unsigned int limit);
|
||||
void rq_depth_scale_up(struct rq_depth *rqd);
|
||||
void rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle);
|
||||
bool rq_depth_scale_up(struct rq_depth *rqd);
|
||||
bool rq_depth_scale_down(struct rq_depth *rqd, bool hard_throttle);
|
||||
bool rq_depth_calc_max_depth(struct rq_depth *rqd);
|
||||
|
||||
void rq_qos_cleanup(struct request_queue *, struct bio *);
|
||||
|
|
|
@ -307,7 +307,8 @@ static void calc_wb_limits(struct rq_wb *rwb)
|
|||
|
||||
static void scale_up(struct rq_wb *rwb)
|
||||
{
|
||||
rq_depth_scale_up(&rwb->rq_depth);
|
||||
if (!rq_depth_scale_up(&rwb->rq_depth))
|
||||
return;
|
||||
calc_wb_limits(rwb);
|
||||
rwb->unknown_cnt = 0;
|
||||
rwb_wake_all(rwb);
|
||||
|
@ -316,7 +317,8 @@ static void scale_up(struct rq_wb *rwb)
|
|||
|
||||
static void scale_down(struct rq_wb *rwb, bool hard_throttle)
|
||||
{
|
||||
rq_depth_scale_down(&rwb->rq_depth, hard_throttle);
|
||||
if (!rq_depth_scale_down(&rwb->rq_depth, hard_throttle))
|
||||
return;
|
||||
calc_wb_limits(rwb);
|
||||
rwb->unknown_cnt = 0;
|
||||
rwb_trace_step(rwb, "scale down");
|
||||
|
|
11
build.config.aarch64
Normal file
11
build.config.aarch64
Normal file
|
@ -0,0 +1,11 @@
|
|||
ARCH=arm64
|
||||
|
||||
CLANG_TRIPLE=aarch64-linux-gnu-
|
||||
CROSS_COMPILE=aarch64-linux-androidkernel-
|
||||
LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin
|
||||
|
||||
FILES="
|
||||
arch/arm64/boot/Image.gz
|
||||
vmlinux
|
||||
System.map
|
||||
"
|
9
build.config.common
Normal file
9
build.config.common
Normal file
|
@ -0,0 +1,9 @@
|
|||
BRANCH=android-4.19-q
|
||||
KERNEL_DIR=common
|
||||
|
||||
CC=clang
|
||||
CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r365631c/bin
|
||||
|
||||
EXTRA_CMDS=''
|
||||
STOP_SHIP_TRACEPRINTK=1
|
||||
LD=ld.lld
|
|
@ -1,18 +1,5 @@
|
|||
ARCH=arm64
|
||||
BRANCH=android-4.19
|
||||
CC=clang
|
||||
CLANG_TRIPLE=aarch64-linux-gnu-
|
||||
CROSS_COMPILE=aarch64-linux-androidkernel-
|
||||
. ${ROOT_DIR}/common/build.config.common
|
||||
. ${ROOT_DIR}/common/build.config.aarch64
|
||||
|
||||
DEFCONFIG=cuttlefish_defconfig
|
||||
EXTRA_CMDS=''
|
||||
KERNEL_DIR=common
|
||||
POST_DEFCONFIG_CMDS="check_defconfig"
|
||||
CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r353983c/bin
|
||||
LD=ld.lld
|
||||
LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin
|
||||
FILES="
|
||||
arch/arm64/boot/Image.gz
|
||||
vmlinux
|
||||
System.map
|
||||
"
|
||||
STOP_SHIP_TRACEPRINTK=1
|
||||
|
|
|
@ -1,18 +1,5 @@
|
|||
ARCH=x86_64
|
||||
BRANCH=android-4.19
|
||||
CC=clang
|
||||
CLANG_TRIPLE=x86_64-linux-gnu-
|
||||
CROSS_COMPILE=x86_64-linux-androidkernel-
|
||||
. ${ROOT_DIR}/common/build.config.common
|
||||
. ${ROOT_DIR}/common/build.config.x86_64
|
||||
|
||||
DEFCONFIG=x86_64_cuttlefish_defconfig
|
||||
EXTRA_CMDS=''
|
||||
KERNEL_DIR=common
|
||||
POST_DEFCONFIG_CMDS="check_defconfig"
|
||||
CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r353983c/bin
|
||||
LD=ld.lld
|
||||
LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/bin
|
||||
FILES="
|
||||
arch/x86/boot/bzImage
|
||||
vmlinux
|
||||
System.map
|
||||
"
|
||||
STOP_SHIP_TRACEPRINTK=1
|
||||
|
|
11
build.config.x86_64
Normal file
11
build.config.x86_64
Normal file
|
@ -0,0 +1,11 @@
|
|||
ARCH=x86_64
|
||||
|
||||
CLANG_TRIPLE=x86_64-linux-gnu-
|
||||
CROSS_COMPILE=x86_64-linux-androidkernel-
|
||||
LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/bin
|
||||
|
||||
FILES="
|
||||
arch/x86/boot/bzImage
|
||||
vmlinux
|
||||
System.map
|
||||
"
|
|
@ -909,8 +909,8 @@ void acpi_cppc_processor_exit(struct acpi_processor *pr)
|
|||
pcc_data[pcc_ss_id]->refcount--;
|
||||
if (!pcc_data[pcc_ss_id]->refcount) {
|
||||
pcc_mbox_free_channel(pcc_data[pcc_ss_id]->pcc_channel);
|
||||
pcc_data[pcc_ss_id]->pcc_channel_acquired = 0;
|
||||
kfree(pcc_data[pcc_ss_id]);
|
||||
pcc_data[pcc_ss_id] = NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -509,6 +509,44 @@ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
|
|||
return retval;
|
||||
}
|
||||
|
||||
/**
|
||||
* check_acpi_cpu_flag() - Determine if CPU node has a flag set
|
||||
* @cpu: Kernel logical CPU number
|
||||
* @rev: The minimum PPTT revision defining the flag
|
||||
* @flag: The flag itself
|
||||
*
|
||||
* Check the node representing a CPU for a given flag.
|
||||
*
|
||||
* Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found or
|
||||
* the table revision isn't new enough.
|
||||
* 1, any passed flag set
|
||||
* 0, flag unset
|
||||
*/
|
||||
static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
|
||||
{
|
||||
struct acpi_table_header *table;
|
||||
acpi_status status;
|
||||
u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
|
||||
struct acpi_pptt_processor *cpu_node = NULL;
|
||||
int ret = -ENOENT;
|
||||
|
||||
status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
|
||||
if (ACPI_FAILURE(status)) {
|
||||
pr_warn_once("No PPTT table found, cpu topology may be inaccurate\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (table->revision >= rev)
|
||||
cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
|
||||
|
||||
if (cpu_node)
|
||||
ret = (cpu_node->flags & flag) != 0;
|
||||
|
||||
acpi_put_table(table);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* acpi_find_last_cache_level() - Determines the number of cache levels for a PE
|
||||
* @cpu: Kernel logical cpu number
|
||||
|
@ -573,6 +611,20 @@ int cache_setup_acpi(unsigned int cpu)
|
|||
return status;
|
||||
}
|
||||
|
||||
/**
|
||||
* acpi_pptt_cpu_is_thread() - Determine if CPU is a thread
|
||||
* @cpu: Kernel logical CPU number
|
||||
*
|
||||
* Return: 1, a thread
|
||||
* 0, not a thread
|
||||
* -ENOENT ,if the PPTT doesn't exist, the CPU cannot be found or
|
||||
* the table revision isn't new enough.
|
||||
*/
|
||||
int acpi_pptt_cpu_is_thread(unsigned int cpu)
|
||||
{
|
||||
return check_acpi_cpu_flag(cpu, 2, ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD);
|
||||
}
|
||||
|
||||
/**
|
||||
* find_acpi_cpu_topology() - Determine a unique topology value for a given cpu
|
||||
* @cpu: Kernel logical cpu number
|
||||
|
|
|
@ -1633,7 +1633,9 @@ static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hp
|
|||
*/
|
||||
if (!id || id->vendor != PCI_VENDOR_ID_INTEL)
|
||||
return;
|
||||
if (((enum board_ids) id->driver_data) < board_ahci_pcs7)
|
||||
|
||||
/* Skip applying the quirk on Denverton and beyond */
|
||||
if (((enum board_ids) id->driver_data) >= board_ahci_pcs7)
|
||||
return;
|
||||
|
||||
/*
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
* Copyright (c) 2006 Novell, Inc.
|
||||
*/
|
||||
|
||||
#include <linux/cpufreq.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/fwnode.h>
|
||||
|
@ -2948,6 +2949,8 @@ void device_shutdown(void)
|
|||
wait_for_device_probe();
|
||||
device_block_probing();
|
||||
|
||||
cpufreq_suspend();
|
||||
|
||||
spin_lock(&devices_kset->list_lock);
|
||||
/*
|
||||
* Walk the devices list backward, shutting down each in turn.
|
||||
|
|
|
@ -635,6 +635,9 @@ store_soft_offline_page(struct device *dev,
|
|||
pfn >>= PAGE_SHIFT;
|
||||
if (!pfn_valid(pfn))
|
||||
return -ENXIO;
|
||||
/* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
|
||||
if (!pfn_to_online_page(pfn))
|
||||
return -EIO;
|
||||
ret = soft_offline_page(pfn_to_page(pfn), 0);
|
||||
return ret == 0 ? count : ret;
|
||||
}
|
||||
|
|
|
@ -2620,14 +2620,6 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
|
||||
|
||||
/*
|
||||
* Stop cpufreq at shutdown to make sure it isn't holding any locks
|
||||
* or mutexes when secondary CPUs are halted.
|
||||
*/
|
||||
static struct syscore_ops cpufreq_syscore_ops = {
|
||||
.shutdown = cpufreq_suspend,
|
||||
};
|
||||
|
||||
struct kobject *cpufreq_global_kobject;
|
||||
EXPORT_SYMBOL(cpufreq_global_kobject);
|
||||
|
||||
|
@ -2639,8 +2631,6 @@ static int __init cpufreq_core_init(void)
|
|||
cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj);
|
||||
BUG_ON(!cpufreq_global_kobject);
|
||||
|
||||
register_syscore_ops(&cpufreq_syscore_ops);
|
||||
|
||||
return 0;
|
||||
}
|
||||
module_param(off, int, 0444);
|
||||
|
|
|
@ -532,7 +532,11 @@ void ghes_edac_unregister(struct ghes *ghes)
|
|||
if (!ghes_pvt)
|
||||
return;
|
||||
|
||||
if (atomic_dec_return(&ghes_init))
|
||||
return;
|
||||
|
||||
mci = ghes_pvt->mci;
|
||||
ghes_pvt = NULL;
|
||||
edac_mc_del_mc(mci->pdev);
|
||||
edac_mc_free(mci);
|
||||
}
|
||||
|
|
|
@ -281,6 +281,9 @@ static __init int efivar_ssdt_load(void)
|
|||
void *data;
|
||||
int ret;
|
||||
|
||||
if (!efivar_ssdt[0])
|
||||
return 0;
|
||||
|
||||
ret = efivar_init(efivar_ssdt_iter, &entries, true, &entries);
|
||||
|
||||
list_for_each_entry_safe(entry, aux, &entries, list) {
|
||||
|
|
|
@ -62,7 +62,7 @@ static int vpd_decode_entry(const u32 max_len, const u8 *input_buf,
|
|||
if (max_len - consumed < *entry_len)
|
||||
return VPD_FAIL;
|
||||
|
||||
consumed += decoded_len;
|
||||
consumed += *entry_len;
|
||||
*_consumed = consumed;
|
||||
return VPD_OK;
|
||||
}
|
||||
|
|
|
@ -529,11 +529,12 @@ static void sprd_eic_handle_one_type(struct gpio_chip *chip)
|
|||
}
|
||||
|
||||
for_each_set_bit(n, ®, SPRD_EIC_PER_BANK_NR) {
|
||||
girq = irq_find_mapping(chip->irq.domain,
|
||||
bank * SPRD_EIC_PER_BANK_NR + n);
|
||||
u32 offset = bank * SPRD_EIC_PER_BANK_NR + n;
|
||||
|
||||
girq = irq_find_mapping(chip->irq.domain, offset);
|
||||
|
||||
generic_handle_irq(girq);
|
||||
sprd_eic_toggle_trigger(chip, girq, n);
|
||||
sprd_eic_toggle_trigger(chip, girq, offset);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -2656,8 +2656,10 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
|
|||
if (!ret)
|
||||
goto set_output_value;
|
||||
/* Emulate open drain by not actively driving the line high */
|
||||
if (value)
|
||||
return gpiod_direction_input(desc);
|
||||
if (value) {
|
||||
ret = gpiod_direction_input(desc);
|
||||
goto set_output_flag;
|
||||
}
|
||||
}
|
||||
else if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) {
|
||||
ret = gpio_set_drive_single_ended(gc, gpio_chip_hwgpio(desc),
|
||||
|
@ -2665,8 +2667,10 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
|
|||
if (!ret)
|
||||
goto set_output_value;
|
||||
/* Emulate open source by not actively driving the line low */
|
||||
if (!value)
|
||||
return gpiod_direction_input(desc);
|
||||
if (!value) {
|
||||
ret = gpiod_direction_input(desc);
|
||||
goto set_output_flag;
|
||||
}
|
||||
} else {
|
||||
gpio_set_drive_single_ended(gc, gpio_chip_hwgpio(desc),
|
||||
PIN_CONFIG_DRIVE_PUSH_PULL);
|
||||
|
@ -2674,6 +2678,17 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
|
|||
|
||||
set_output_value:
|
||||
return gpiod_direction_output_raw_commit(desc, value);
|
||||
|
||||
set_output_flag:
|
||||
/*
|
||||
* When emulating open-source or open-drain functionalities by not
|
||||
* actively driving the line (setting mode to input) we still need to
|
||||
* set the IS_OUT flag or otherwise we won't be able to set the line
|
||||
* value anymore.
|
||||
*/
|
||||
if (ret == 0)
|
||||
set_bit(FLAG_IS_OUT, &desc->flags);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(gpiod_direction_output);
|
||||
|
||||
|
@ -2987,8 +3002,6 @@ static void gpio_set_open_drain_value_commit(struct gpio_desc *desc, bool value)
|
|||
|
||||
if (value) {
|
||||
err = chip->direction_input(chip, offset);
|
||||
if (!err)
|
||||
clear_bit(FLAG_IS_OUT, &desc->flags);
|
||||
} else {
|
||||
err = chip->direction_output(chip, offset, 0);
|
||||
if (!err)
|
||||
|
@ -3018,8 +3031,6 @@ static void gpio_set_open_source_value_commit(struct gpio_desc *desc, bool value
|
|||
set_bit(FLAG_IS_OUT, &desc->flags);
|
||||
} else {
|
||||
err = chip->direction_input(chip, offset);
|
||||
if (!err)
|
||||
clear_bit(FLAG_IS_OUT, &desc->flags);
|
||||
}
|
||||
trace_gpio_direction(desc_to_gpio(desc), !value, err);
|
||||
if (err < 0)
|
||||
|
|
|
@ -841,6 +841,41 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
|
|||
if (ret == -EPROBE_DEFER)
|
||||
return ret;
|
||||
|
||||
#ifdef CONFIG_DRM_AMDGPU_SI
|
||||
if (!amdgpu_si_support) {
|
||||
switch (flags & AMD_ASIC_MASK) {
|
||||
case CHIP_TAHITI:
|
||||
case CHIP_PITCAIRN:
|
||||
case CHIP_VERDE:
|
||||
case CHIP_OLAND:
|
||||
case CHIP_HAINAN:
|
||||
dev_info(&pdev->dev,
|
||||
"SI support provided by radeon.\n");
|
||||
dev_info(&pdev->dev,
|
||||
"Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
|
||||
);
|
||||
return -ENODEV;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
#ifdef CONFIG_DRM_AMDGPU_CIK
|
||||
if (!amdgpu_cik_support) {
|
||||
switch (flags & AMD_ASIC_MASK) {
|
||||
case CHIP_KAVERI:
|
||||
case CHIP_BONAIRE:
|
||||
case CHIP_HAWAII:
|
||||
case CHIP_KABINI:
|
||||
case CHIP_MULLINS:
|
||||
dev_info(&pdev->dev,
|
||||
"CIK support provided by radeon.\n");
|
||||
dev_info(&pdev->dev,
|
||||
"Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
|
||||
);
|
||||
return -ENODEV;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
||||
/* Get rid of things like offb */
|
||||
ret = amdgpu_kick_out_firmware_fb(pdev);
|
||||
if (ret)
|
||||
|
|
|
@ -87,41 +87,6 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
|
|||
struct amdgpu_device *adev;
|
||||
int r, acpi_status;
|
||||
|
||||
#ifdef CONFIG_DRM_AMDGPU_SI
|
||||
if (!amdgpu_si_support) {
|
||||
switch (flags & AMD_ASIC_MASK) {
|
||||
case CHIP_TAHITI:
|
||||
case CHIP_PITCAIRN:
|
||||
case CHIP_VERDE:
|
||||
case CHIP_OLAND:
|
||||
case CHIP_HAINAN:
|
||||
dev_info(dev->dev,
|
||||
"SI support provided by radeon.\n");
|
||||
dev_info(dev->dev,
|
||||
"Use radeon.si_support=0 amdgpu.si_support=1 to override.\n"
|
||||
);
|
||||
return -ENODEV;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
#ifdef CONFIG_DRM_AMDGPU_CIK
|
||||
if (!amdgpu_cik_support) {
|
||||
switch (flags & AMD_ASIC_MASK) {
|
||||
case CHIP_KAVERI:
|
||||
case CHIP_BONAIRE:
|
||||
case CHIP_HAWAII:
|
||||
case CHIP_KABINI:
|
||||
case CHIP_MULLINS:
|
||||
dev_info(dev->dev,
|
||||
"CIK support provided by radeon.\n");
|
||||
dev_info(dev->dev,
|
||||
"Use radeon.cik_support=0 amdgpu.cik_support=1 to override.\n"
|
||||
);
|
||||
return -ENODEV;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
||||
adev = kzalloc(sizeof(struct amdgpu_device), GFP_KERNEL);
|
||||
if (adev == NULL) {
|
||||
return -ENOMEM;
|
||||
|
|
|
@ -174,6 +174,9 @@ static const struct edid_quirk {
|
|||
/* Medion MD 30217 PG */
|
||||
{ "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 },
|
||||
|
||||
/* Lenovo G50 */
|
||||
{ "SDC", 18514, EDID_QUIRK_FORCE_6BPC },
|
||||
|
||||
/* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
|
||||
{ "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
|
||||
|
||||
|
|
|
@ -395,19 +395,11 @@ radeon_pci_remove(struct pci_dev *pdev)
|
|||
static void
|
||||
radeon_pci_shutdown(struct pci_dev *pdev)
|
||||
{
|
||||
struct drm_device *ddev = pci_get_drvdata(pdev);
|
||||
|
||||
/* if we are running in a VM, make sure the device
|
||||
* torn down properly on reboot/shutdown
|
||||
*/
|
||||
if (radeon_device_is_virtual())
|
||||
radeon_pci_remove(pdev);
|
||||
|
||||
/* Some adapters need to be suspended before a
|
||||
* shutdown occurs in order to prevent an error
|
||||
* during kexec.
|
||||
*/
|
||||
radeon_suspend_kms(ddev, true, true, false);
|
||||
}
|
||||
|
||||
static int radeon_pmops_suspend(struct device *dev)
|
||||
|
|
|
@ -273,15 +273,13 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
|
|||
else
|
||||
ret = vmf_insert_pfn(&cvma, address, pfn);
|
||||
|
||||
/*
|
||||
* Somebody beat us to this PTE or prefaulting to
|
||||
* an already populated PTE, or prefaulting error.
|
||||
*/
|
||||
|
||||
if (unlikely((ret == VM_FAULT_NOPAGE && i > 0)))
|
||||
break;
|
||||
else if (unlikely(ret & VM_FAULT_ERROR))
|
||||
goto out_io_unlock;
|
||||
/* Never error on prefaulted PTEs */
|
||||
if (unlikely((ret & VM_FAULT_ERROR))) {
|
||||
if (i == 0)
|
||||
goto out_io_unlock;
|
||||
else
|
||||
break;
|
||||
}
|
||||
|
||||
address += PAGE_SIZE;
|
||||
if (unlikely(++page_offset >= page_last))
|
||||
|
|
|
@ -814,10 +814,10 @@ static int ad799x_probe(struct i2c_client *client,
|
|||
|
||||
ret = ad799x_write_config(st, st->chip_config->default_config);
|
||||
if (ret < 0)
|
||||
goto error_disable_reg;
|
||||
goto error_disable_vref;
|
||||
ret = ad799x_read_config(st);
|
||||
if (ret < 0)
|
||||
goto error_disable_reg;
|
||||
goto error_disable_vref;
|
||||
st->config = ret;
|
||||
|
||||
ret = iio_triggered_buffer_setup(indio_dev, NULL,
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
*
|
||||
*/
|
||||
|
||||
#include <linux/dmi.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/device.h>
|
||||
|
@ -34,6 +35,11 @@
|
|||
#define AXP288_ADC_EN_MASK 0xF0
|
||||
#define AXP288_ADC_TS_ENABLE 0x01
|
||||
|
||||
#define AXP288_ADC_TS_BIAS_MASK GENMASK(5, 4)
|
||||
#define AXP288_ADC_TS_BIAS_20UA (0 << 4)
|
||||
#define AXP288_ADC_TS_BIAS_40UA (1 << 4)
|
||||
#define AXP288_ADC_TS_BIAS_60UA (2 << 4)
|
||||
#define AXP288_ADC_TS_BIAS_80UA (3 << 4)
|
||||
#define AXP288_ADC_TS_CURRENT_ON_OFF_MASK GENMASK(1, 0)
|
||||
#define AXP288_ADC_TS_CURRENT_OFF (0 << 0)
|
||||
#define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING (1 << 0)
|
||||
|
@ -186,10 +192,36 @@ static int axp288_adc_read_raw(struct iio_dev *indio_dev,
|
|||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* We rely on the machine's firmware to correctly setup the TS pin bias current
|
||||
* at boot. This lists systems with broken fw where we need to set it ourselves.
|
||||
*/
|
||||
static const struct dmi_system_id axp288_adc_ts_bias_override[] = {
|
||||
{
|
||||
/* Lenovo Ideapad 100S (11 inch) */
|
||||
.matches = {
|
||||
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
|
||||
DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad 100S-11IBY"),
|
||||
},
|
||||
.driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA,
|
||||
},
|
||||
{}
|
||||
};
|
||||
|
||||
static int axp288_adc_initialize(struct axp288_adc_info *info)
|
||||
{
|
||||
const struct dmi_system_id *bias_override;
|
||||
int ret, adc_enable_val;
|
||||
|
||||
bias_override = dmi_first_match(axp288_adc_ts_bias_override);
|
||||
if (bias_override) {
|
||||
ret = regmap_update_bits(info->regmap, AXP288_ADC_TS_PIN_CTRL,
|
||||
AXP288_ADC_TS_BIAS_MASK,
|
||||
(uintptr_t)bias_override->driver_data);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Determine if the TS pin is enabled and set the TS current-source
|
||||
* accordingly.
|
||||
|
|
|
@ -109,14 +109,14 @@ struct hx711_data {
|
|||
|
||||
static int hx711_cycle(struct hx711_data *hx711_data)
|
||||
{
|
||||
int val;
|
||||
unsigned long flags;
|
||||
|
||||
/*
|
||||
* if preempted for more then 60us while PD_SCK is high:
|
||||
* hx711 is going in reset
|
||||
* ==> measuring is false
|
||||
*/
|
||||
preempt_disable();
|
||||
local_irq_save(flags);
|
||||
gpiod_set_value(hx711_data->gpiod_pd_sck, 1);
|
||||
|
||||
/*
|
||||
|
@ -126,7 +126,6 @@ static int hx711_cycle(struct hx711_data *hx711_data)
|
|||
*/
|
||||
ndelay(hx711_data->data_ready_delay_ns);
|
||||
|
||||
val = gpiod_get_value(hx711_data->gpiod_dout);
|
||||
/*
|
||||
* here we are not waiting for 0.2 us as suggested by the datasheet,
|
||||
* because the oscilloscope showed in a test scenario
|
||||
|
@ -134,7 +133,7 @@ static int hx711_cycle(struct hx711_data *hx711_data)
|
|||
* and 0.56 us for PD_SCK low on TI Sitara with 800 MHz
|
||||
*/
|
||||
gpiod_set_value(hx711_data->gpiod_pd_sck, 0);
|
||||
preempt_enable();
|
||||
local_irq_restore(flags);
|
||||
|
||||
/*
|
||||
* make it a square wave for addressing cases with capacitance on
|
||||
|
@ -142,7 +141,8 @@ static int hx711_cycle(struct hx711_data *hx711_data)
|
|||
*/
|
||||
ndelay(hx711_data->data_ready_delay_ns);
|
||||
|
||||
return val;
|
||||
/* sample as late as possible */
|
||||
return gpiod_get_value(hx711_data->gpiod_dout);
|
||||
}
|
||||
|
||||
static int hx711_read(struct hx711_data *hx711_data)
|
||||
|
|
|
@ -21,45 +21,22 @@
|
|||
|
||||
#include "stm32-adc-core.h"
|
||||
|
||||
/* STM32F4 - common registers for all ADC instances: 1, 2 & 3 */
|
||||
#define STM32F4_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
|
||||
#define STM32F4_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x04)
|
||||
|
||||
/* STM32F4_ADC_CSR - bit fields */
|
||||
#define STM32F4_EOC3 BIT(17)
|
||||
#define STM32F4_EOC2 BIT(9)
|
||||
#define STM32F4_EOC1 BIT(1)
|
||||
|
||||
/* STM32F4_ADC_CCR - bit fields */
|
||||
#define STM32F4_ADC_ADCPRE_SHIFT 16
|
||||
#define STM32F4_ADC_ADCPRE_MASK GENMASK(17, 16)
|
||||
|
||||
/* STM32H7 - common registers for all ADC instances */
|
||||
#define STM32H7_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
|
||||
#define STM32H7_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x08)
|
||||
|
||||
/* STM32H7_ADC_CSR - bit fields */
|
||||
#define STM32H7_EOC_SLV BIT(18)
|
||||
#define STM32H7_EOC_MST BIT(2)
|
||||
|
||||
/* STM32H7_ADC_CCR - bit fields */
|
||||
#define STM32H7_PRESC_SHIFT 18
|
||||
#define STM32H7_PRESC_MASK GENMASK(21, 18)
|
||||
#define STM32H7_CKMODE_SHIFT 16
|
||||
#define STM32H7_CKMODE_MASK GENMASK(17, 16)
|
||||
|
||||
/**
|
||||
* stm32_adc_common_regs - stm32 common registers, compatible dependent data
|
||||
* @csr: common status register offset
|
||||
* @eoc1: adc1 end of conversion flag in @csr
|
||||
* @eoc2: adc2 end of conversion flag in @csr
|
||||
* @eoc3: adc3 end of conversion flag in @csr
|
||||
* @ier: interrupt enable register offset for each adc
|
||||
* @eocie_msk: end of conversion interrupt enable mask in @ier
|
||||
*/
|
||||
struct stm32_adc_common_regs {
|
||||
u32 csr;
|
||||
u32 eoc1_msk;
|
||||
u32 eoc2_msk;
|
||||
u32 eoc3_msk;
|
||||
u32 ier;
|
||||
u32 eocie_msk;
|
||||
};
|
||||
|
||||
struct stm32_adc_priv;
|
||||
|
@ -268,6 +245,8 @@ static const struct stm32_adc_common_regs stm32f4_adc_common_regs = {
|
|||
.eoc1_msk = STM32F4_EOC1,
|
||||
.eoc2_msk = STM32F4_EOC2,
|
||||
.eoc3_msk = STM32F4_EOC3,
|
||||
.ier = STM32F4_ADC_CR1,
|
||||
.eocie_msk = STM32F4_EOCIE,
|
||||
};
|
||||
|
||||
/* STM32H7 common registers definitions */
|
||||
|
@ -275,8 +254,24 @@ static const struct stm32_adc_common_regs stm32h7_adc_common_regs = {
|
|||
.csr = STM32H7_ADC_CSR,
|
||||
.eoc1_msk = STM32H7_EOC_MST,
|
||||
.eoc2_msk = STM32H7_EOC_SLV,
|
||||
.ier = STM32H7_ADC_IER,
|
||||
.eocie_msk = STM32H7_EOCIE,
|
||||
};
|
||||
|
||||
static const unsigned int stm32_adc_offset[STM32_ADC_MAX_ADCS] = {
|
||||
0, STM32_ADC_OFFSET, STM32_ADC_OFFSET * 2,
|
||||
};
|
||||
|
||||
static unsigned int stm32_adc_eoc_enabled(struct stm32_adc_priv *priv,
|
||||
unsigned int adc)
|
||||
{
|
||||
u32 ier, offset = stm32_adc_offset[adc];
|
||||
|
||||
ier = readl_relaxed(priv->common.base + offset + priv->cfg->regs->ier);
|
||||
|
||||
return ier & priv->cfg->regs->eocie_msk;
|
||||
}
|
||||
|
||||
/* ADC common interrupt for all instances */
|
||||
static void stm32_adc_irq_handler(struct irq_desc *desc)
|
||||
{
|
||||
|
@ -287,13 +282,28 @@ static void stm32_adc_irq_handler(struct irq_desc *desc)
|
|||
chained_irq_enter(chip, desc);
|
||||
status = readl_relaxed(priv->common.base + priv->cfg->regs->csr);
|
||||
|
||||
if (status & priv->cfg->regs->eoc1_msk)
|
||||
/*
|
||||
* End of conversion may be handled by using IRQ or DMA. There may be a
|
||||
* race here when two conversions complete at the same time on several
|
||||
* ADCs. EOC may be read 'set' for several ADCs, with:
|
||||
* - an ADC configured to use DMA (EOC triggers the DMA request, and
|
||||
* is then automatically cleared by DR read in hardware)
|
||||
* - an ADC configured to use IRQs (EOCIE bit is set. The handler must
|
||||
* be called in this case)
|
||||
* So both EOC status bit in CSR and EOCIE control bit must be checked
|
||||
* before invoking the interrupt handler (e.g. call ISR only for
|
||||
* IRQ-enabled ADCs).
|
||||
*/
|
||||
if (status & priv->cfg->regs->eoc1_msk &&
|
||||
stm32_adc_eoc_enabled(priv, 0))
|
||||
generic_handle_irq(irq_find_mapping(priv->domain, 0));
|
||||
|
||||
if (status & priv->cfg->regs->eoc2_msk)
|
||||
if (status & priv->cfg->regs->eoc2_msk &&
|
||||
stm32_adc_eoc_enabled(priv, 1))
|
||||
generic_handle_irq(irq_find_mapping(priv->domain, 1));
|
||||
|
||||
if (status & priv->cfg->regs->eoc3_msk)
|
||||
if (status & priv->cfg->regs->eoc3_msk &&
|
||||
stm32_adc_eoc_enabled(priv, 2))
|
||||
generic_handle_irq(irq_find_mapping(priv->domain, 2));
|
||||
|
||||
chained_irq_exit(chip, desc);
|
||||
|
|
|
@@ -25,8 +25,145 @@
 * --------------------------------------------------------
 */
#define STM32_ADC_MAX_ADCS 3
#define STM32_ADC_OFFSET 0x100
#define STM32_ADCX_COMN_OFFSET 0x300

/* STM32F4 - Registers for each ADC instance */
#define STM32F4_ADC_SR 0x00
#define STM32F4_ADC_CR1 0x04
#define STM32F4_ADC_CR2 0x08
#define STM32F4_ADC_SMPR1 0x0C
#define STM32F4_ADC_SMPR2 0x10
#define STM32F4_ADC_HTR 0x24
#define STM32F4_ADC_LTR 0x28
#define STM32F4_ADC_SQR1 0x2C
#define STM32F4_ADC_SQR2 0x30
#define STM32F4_ADC_SQR3 0x34
#define STM32F4_ADC_JSQR 0x38
#define STM32F4_ADC_JDR1 0x3C
#define STM32F4_ADC_JDR2 0x40
#define STM32F4_ADC_JDR3 0x44
#define STM32F4_ADC_JDR4 0x48
#define STM32F4_ADC_DR 0x4C

/* STM32F4 - common registers for all ADC instances: 1, 2 & 3 */
#define STM32F4_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
#define STM32F4_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x04)

/* STM32F4_ADC_SR - bit fields */
#define STM32F4_STRT BIT(4)
#define STM32F4_EOC BIT(1)

/* STM32F4_ADC_CR1 - bit fields */
#define STM32F4_RES_SHIFT 24
#define STM32F4_RES_MASK GENMASK(25, 24)
#define STM32F4_SCAN BIT(8)
#define STM32F4_EOCIE BIT(5)

/* STM32F4_ADC_CR2 - bit fields */
#define STM32F4_SWSTART BIT(30)
#define STM32F4_EXTEN_SHIFT 28
#define STM32F4_EXTEN_MASK GENMASK(29, 28)
#define STM32F4_EXTSEL_SHIFT 24
#define STM32F4_EXTSEL_MASK GENMASK(27, 24)
#define STM32F4_EOCS BIT(10)
#define STM32F4_DDS BIT(9)
#define STM32F4_DMA BIT(8)
#define STM32F4_ADON BIT(0)

/* STM32F4_ADC_CSR - bit fields */
#define STM32F4_EOC3 BIT(17)
#define STM32F4_EOC2 BIT(9)
#define STM32F4_EOC1 BIT(1)

/* STM32F4_ADC_CCR - bit fields */
#define STM32F4_ADC_ADCPRE_SHIFT 16
#define STM32F4_ADC_ADCPRE_MASK GENMASK(17, 16)

/* STM32H7 - Registers for each ADC instance */
#define STM32H7_ADC_ISR 0x00
#define STM32H7_ADC_IER 0x04
#define STM32H7_ADC_CR 0x08
#define STM32H7_ADC_CFGR 0x0C
#define STM32H7_ADC_SMPR1 0x14
#define STM32H7_ADC_SMPR2 0x18
#define STM32H7_ADC_PCSEL 0x1C
#define STM32H7_ADC_SQR1 0x30
#define STM32H7_ADC_SQR2 0x34
#define STM32H7_ADC_SQR3 0x38
#define STM32H7_ADC_SQR4 0x3C
#define STM32H7_ADC_DR 0x40
#define STM32H7_ADC_DIFSEL 0xC0
#define STM32H7_ADC_CALFACT 0xC4
#define STM32H7_ADC_CALFACT2 0xC8

/* STM32H7 - common registers for all ADC instances */
#define STM32H7_ADC_CSR (STM32_ADCX_COMN_OFFSET + 0x00)
#define STM32H7_ADC_CCR (STM32_ADCX_COMN_OFFSET + 0x08)

/* STM32H7_ADC_ISR - bit fields */
#define STM32MP1_VREGREADY BIT(12)
#define STM32H7_EOC BIT(2)
#define STM32H7_ADRDY BIT(0)

/* STM32H7_ADC_IER - bit fields */
#define STM32H7_EOCIE STM32H7_EOC

/* STM32H7_ADC_CR - bit fields */
#define STM32H7_ADCAL BIT(31)
#define STM32H7_ADCALDIF BIT(30)
#define STM32H7_DEEPPWD BIT(29)
#define STM32H7_ADVREGEN BIT(28)
#define STM32H7_LINCALRDYW6 BIT(27)
#define STM32H7_LINCALRDYW5 BIT(26)
#define STM32H7_LINCALRDYW4 BIT(25)
#define STM32H7_LINCALRDYW3 BIT(24)
#define STM32H7_LINCALRDYW2 BIT(23)
#define STM32H7_LINCALRDYW1 BIT(22)
#define STM32H7_ADCALLIN BIT(16)
#define STM32H7_BOOST BIT(8)
#define STM32H7_ADSTP BIT(4)
#define STM32H7_ADSTART BIT(2)
#define STM32H7_ADDIS BIT(1)
#define STM32H7_ADEN BIT(0)

/* STM32H7_ADC_CFGR bit fields */
#define STM32H7_EXTEN_SHIFT 10
#define STM32H7_EXTEN_MASK GENMASK(11, 10)
#define STM32H7_EXTSEL_SHIFT 5
#define STM32H7_EXTSEL_MASK GENMASK(9, 5)
#define STM32H7_RES_SHIFT 2
#define STM32H7_RES_MASK GENMASK(4, 2)
#define STM32H7_DMNGT_SHIFT 0
#define STM32H7_DMNGT_MASK GENMASK(1, 0)

enum stm32h7_adc_dmngt {
STM32H7_DMNGT_DR_ONLY,		/* Regular data in DR only */
STM32H7_DMNGT_DMA_ONESHOT,	/* DMA one shot mode */
STM32H7_DMNGT_DFSDM,		/* DFSDM mode */
STM32H7_DMNGT_DMA_CIRC,		/* DMA circular mode */
};

/* STM32H7_ADC_CALFACT - bit fields */
#define STM32H7_CALFACT_D_SHIFT 16
#define STM32H7_CALFACT_D_MASK GENMASK(26, 16)
#define STM32H7_CALFACT_S_SHIFT 0
#define STM32H7_CALFACT_S_MASK GENMASK(10, 0)

/* STM32H7_ADC_CALFACT2 - bit fields */
#define STM32H7_LINCALFACT_SHIFT 0
#define STM32H7_LINCALFACT_MASK GENMASK(29, 0)

/* STM32H7_ADC_CSR - bit fields */
#define STM32H7_EOC_SLV BIT(18)
#define STM32H7_EOC_MST BIT(2)

/* STM32H7_ADC_CCR - bit fields */
#define STM32H7_PRESC_SHIFT 18
#define STM32H7_PRESC_MASK GENMASK(21, 18)
#define STM32H7_CKMODE_SHIFT 16
#define STM32H7_CKMODE_MASK GENMASK(17, 16)

/**
 * struct stm32_adc_common - stm32 ADC driver common data (for all instances)
 * @base: control registers base cpu addr
@@ -27,115 +27,6 @@
#include "stm32-adc-core.h"

/* STM32F4 - Registers for each ADC instance */
#define STM32F4_ADC_SR 0x00
#define STM32F4_ADC_CR1 0x04
#define STM32F4_ADC_CR2 0x08
#define STM32F4_ADC_SMPR1 0x0C
#define STM32F4_ADC_SMPR2 0x10
#define STM32F4_ADC_HTR 0x24
#define STM32F4_ADC_LTR 0x28
#define STM32F4_ADC_SQR1 0x2C
#define STM32F4_ADC_SQR2 0x30
#define STM32F4_ADC_SQR3 0x34
#define STM32F4_ADC_JSQR 0x38
#define STM32F4_ADC_JDR1 0x3C
#define STM32F4_ADC_JDR2 0x40
#define STM32F4_ADC_JDR3 0x44
#define STM32F4_ADC_JDR4 0x48
#define STM32F4_ADC_DR 0x4C

/* STM32F4_ADC_SR - bit fields */
#define STM32F4_STRT BIT(4)
#define STM32F4_EOC BIT(1)

/* STM32F4_ADC_CR1 - bit fields */
#define STM32F4_RES_SHIFT 24
#define STM32F4_RES_MASK GENMASK(25, 24)
#define STM32F4_SCAN BIT(8)
#define STM32F4_EOCIE BIT(5)

/* STM32F4_ADC_CR2 - bit fields */
#define STM32F4_SWSTART BIT(30)
#define STM32F4_EXTEN_SHIFT 28
#define STM32F4_EXTEN_MASK GENMASK(29, 28)
#define STM32F4_EXTSEL_SHIFT 24
#define STM32F4_EXTSEL_MASK GENMASK(27, 24)
#define STM32F4_EOCS BIT(10)
#define STM32F4_DDS BIT(9)
#define STM32F4_DMA BIT(8)
#define STM32F4_ADON BIT(0)

/* STM32H7 - Registers for each ADC instance */
#define STM32H7_ADC_ISR 0x00
#define STM32H7_ADC_IER 0x04
#define STM32H7_ADC_CR 0x08
#define STM32H7_ADC_CFGR 0x0C
#define STM32H7_ADC_SMPR1 0x14
#define STM32H7_ADC_SMPR2 0x18
#define STM32H7_ADC_PCSEL 0x1C
#define STM32H7_ADC_SQR1 0x30
#define STM32H7_ADC_SQR2 0x34
#define STM32H7_ADC_SQR3 0x38
#define STM32H7_ADC_SQR4 0x3C
#define STM32H7_ADC_DR 0x40
#define STM32H7_ADC_DIFSEL 0xC0
#define STM32H7_ADC_CALFACT 0xC4
#define STM32H7_ADC_CALFACT2 0xC8

/* STM32H7_ADC_ISR - bit fields */
#define STM32MP1_VREGREADY BIT(12)
#define STM32H7_EOC BIT(2)
#define STM32H7_ADRDY BIT(0)

/* STM32H7_ADC_IER - bit fields */
#define STM32H7_EOCIE STM32H7_EOC

/* STM32H7_ADC_CR - bit fields */
#define STM32H7_ADCAL BIT(31)
#define STM32H7_ADCALDIF BIT(30)
#define STM32H7_DEEPPWD BIT(29)
#define STM32H7_ADVREGEN BIT(28)
#define STM32H7_LINCALRDYW6 BIT(27)
#define STM32H7_LINCALRDYW5 BIT(26)
#define STM32H7_LINCALRDYW4 BIT(25)
#define STM32H7_LINCALRDYW3 BIT(24)
#define STM32H7_LINCALRDYW2 BIT(23)
#define STM32H7_LINCALRDYW1 BIT(22)
#define STM32H7_ADCALLIN BIT(16)
#define STM32H7_BOOST BIT(8)
#define STM32H7_ADSTP BIT(4)
#define STM32H7_ADSTART BIT(2)
#define STM32H7_ADDIS BIT(1)
#define STM32H7_ADEN BIT(0)

/* STM32H7_ADC_CFGR bit fields */
#define STM32H7_EXTEN_SHIFT 10
#define STM32H7_EXTEN_MASK GENMASK(11, 10)
#define STM32H7_EXTSEL_SHIFT 5
#define STM32H7_EXTSEL_MASK GENMASK(9, 5)
#define STM32H7_RES_SHIFT 2
#define STM32H7_RES_MASK GENMASK(4, 2)
#define STM32H7_DMNGT_SHIFT 0
#define STM32H7_DMNGT_MASK GENMASK(1, 0)

enum stm32h7_adc_dmngt {
STM32H7_DMNGT_DR_ONLY,		/* Regular data in DR only */
STM32H7_DMNGT_DMA_ONESHOT,	/* DMA one shot mode */
STM32H7_DMNGT_DFSDM,		/* DFSDM mode */
STM32H7_DMNGT_DMA_CIRC,		/* DMA circular mode */
};

/* STM32H7_ADC_CALFACT - bit fields */
#define STM32H7_CALFACT_D_SHIFT 16
#define STM32H7_CALFACT_D_MASK GENMASK(26, 16)
#define STM32H7_CALFACT_S_SHIFT 0
#define STM32H7_CALFACT_S_MASK GENMASK(10, 0)

/* STM32H7_ADC_CALFACT2 - bit fields */
#define STM32H7_LINCALFACT_SHIFT 0
#define STM32H7_LINCALFACT_MASK GENMASK(29, 0)

/* Number of linear calibration shadow registers / LINCALRDYW control bits */
#define STM32H7_LINCALFACT_NUM 6
@@ -694,6 +694,7 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
struct iio_dev *iio = _iio;
struct opt3001 *opt = iio_priv(iio);
int ret;
bool wake_result_ready_queue = false;

if (!opt->ok_to_ignore_lock)
mutex_lock(&opt->lock);
@@ -728,13 +729,16 @@ static irqreturn_t opt3001_irq(int irq, void *_iio)
}
opt->result = ret;
opt->result_ready = true;
wake_up(&opt->result_ready_queue);
wake_result_ready_queue = true;
}

out:
if (!opt->ok_to_ignore_lock)
mutex_unlock(&opt->lock);

if (wake_result_ready_queue)
wake_up(&opt->result_ready_queue);

return IRQ_HANDLED;
}
@@ -274,13 +274,17 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
struct sk_buff *skb, struct c4iw_wr_wait *wr_waitp)
{
int err;
struct fw_ri_tpte tpt;
struct fw_ri_tpte *tpt;
u32 stag_idx;
static atomic_t key;

if (c4iw_fatal_error(rdev))
return -EIO;

tpt = kmalloc(sizeof(*tpt), GFP_KERNEL);
if (!tpt)
return -ENOMEM;

stag_state = stag_state > 0;
stag_idx = (*stag) >> 8;

@@ -290,6 +294,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
mutex_lock(&rdev->stats.lock);
rdev->stats.stag.fail++;
mutex_unlock(&rdev->stats.lock);
kfree(tpt);
return -ENOMEM;
}
mutex_lock(&rdev->stats.lock);
@@ -304,28 +309,28 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,

/* write TPT entry */
if (reset_tpt_entry)
memset(&tpt, 0, sizeof(tpt));
memset(tpt, 0, sizeof(*tpt));
else {
tpt.valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
tpt->valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
FW_RI_TPTE_STAGKEY_V((*stag & FW_RI_TPTE_STAGKEY_M)) |
FW_RI_TPTE_STAGSTATE_V(stag_state) |
FW_RI_TPTE_STAGTYPE_V(type) | FW_RI_TPTE_PDID_V(pdid));
tpt.locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
tpt->locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
(bind_enabled ? FW_RI_TPTE_MWBINDEN_F : 0) |
FW_RI_TPTE_ADDRTYPE_V((zbva ? FW_RI_ZERO_BASED_TO :
FW_RI_VA_BASED_TO))|
FW_RI_TPTE_PS_V(page_size));
tpt.nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
tpt->nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
FW_RI_TPTE_PBLADDR_V(PBL_OFF(rdev, pbl_addr)>>3));
tpt.len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
tpt.va_hi = cpu_to_be32((u32)(to >> 32));
tpt.va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
tpt.dca_mwbcnt_pstag = cpu_to_be32(0);
tpt.len_hi = cpu_to_be32((u32)(len >> 32));
tpt->len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
tpt->va_hi = cpu_to_be32((u32)(to >> 32));
tpt->va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
tpt->dca_mwbcnt_pstag = cpu_to_be32(0);
tpt->len_hi = cpu_to_be32((u32)(len >> 32));
}
err = write_adapter_mem(rdev, stag_idx +
(rdev->lldi.vr->stag.start >> 5),
sizeof(tpt), &tpt, skb, wr_waitp);
sizeof(*tpt), tpt, skb, wr_waitp);

if (reset_tpt_entry) {
c4iw_put_resource(&rdev->resource.tpt_table, stag_idx);
@@ -333,6 +338,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry,
rdev->stats.stag.cur -= 32;
mutex_unlock(&rdev->stats.lock);
}
kfree(tpt);
return err;
}
@@ -248,10 +248,7 @@ static int da9063_onkey_probe(struct platform_device *pdev)
onkey->input->phys = onkey->phys;
onkey->input->dev.parent = &pdev->dev;

if (onkey->key_power)
input_set_capability(onkey->input, EV_KEY, KEY_POWER);

input_set_capability(onkey->input, EV_KEY, KEY_SLEEP);
input_set_capability(onkey->input, EV_KEY, KEY_POWER);

INIT_DELAYED_WORK(&onkey->work, da9063_poll_on);

@@ -149,7 +149,7 @@ static int rmi_process_interrupt_requests(struct rmi_device *rmi_dev)
}

mutex_lock(&data->irq_mutex);
bitmap_and(data->irq_status, data->irq_status, data->current_irq_mask,
bitmap_and(data->irq_status, data->irq_status, data->fn_irq_bits,
data->irq_count);
/*
 * At this point, irq_status has all bits that are set in the
@@ -388,6 +388,8 @@ static int rmi_driver_set_irq_bits(struct rmi_device *rmi_dev,
bitmap_copy(data->current_irq_mask, data->new_irq_mask,
data->num_of_irq_regs);

bitmap_or(data->fn_irq_bits, data->fn_irq_bits, mask, data->irq_count);

error_unlock:
mutex_unlock(&data->irq_mutex);
return error;
@@ -401,6 +403,8 @@ static int rmi_driver_clear_irq_bits(struct rmi_device *rmi_dev,
struct device *dev = &rmi_dev->dev;

mutex_lock(&data->irq_mutex);
bitmap_andnot(data->fn_irq_bits,
data->fn_irq_bits, mask, data->irq_count);
bitmap_andnot(data->new_irq_mask,
data->current_irq_mask, mask, data->irq_count);
@@ -541,7 +541,7 @@ static void wake_migration_worker(struct cache *cache)

static struct dm_bio_prison_cell_v2 *alloc_prison_cell(struct cache *cache)
{
return dm_bio_prison_alloc_cell_v2(cache->prison, GFP_NOWAIT);
return dm_bio_prison_alloc_cell_v2(cache->prison, GFP_NOIO);
}

static void free_prison_cell(struct cache *cache, struct dm_bio_prison_cell_v2 *cell)
@@ -553,9 +553,7 @@ static struct dm_cache_migration *alloc_migration(struct cache *cache)
{
struct dm_cache_migration *mg;

mg = mempool_alloc(&cache->migration_pool, GFP_NOWAIT);
if (!mg)
return NULL;
mg = mempool_alloc(&cache->migration_pool, GFP_NOIO);

memset(mg, 0, sizeof(*mg));

@@ -663,10 +661,6 @@ static bool bio_detain_shared(struct cache *cache, dm_oblock_t oblock, struct bi
struct dm_bio_prison_cell_v2 *cell_prealloc, *cell;

cell_prealloc = alloc_prison_cell(cache); /* FIXME: allow wait if calling from worker */
if (!cell_prealloc) {
defer_bio(cache, bio);
return false;
}

build_key(oblock, end, &key);
r = dm_cell_get_v2(cache->prison, &key, lock_level(bio), bio, cell_prealloc, &cell);
@@ -1492,11 +1486,6 @@ static int mg_lock_writes(struct dm_cache_migration *mg)
struct dm_bio_prison_cell_v2 *prealloc;

prealloc = alloc_prison_cell(cache);
if (!prealloc) {
DMERR_LIMIT("%s: alloc_prison_cell failed", cache_device_name(cache));
mg_complete(mg, false);
return -ENOMEM;
}

/*
 * Prevent writes to the block, but allow reads to continue.
@@ -1534,11 +1523,6 @@ static int mg_start(struct cache *cache, struct policy_work *op, struct bio *bio
}

mg = alloc_migration(cache);
if (!mg) {
policy_complete_background_work(cache->policy, op, false);
background_work_end(cache);
return -ENOMEM;
}

mg->op = op;
mg->overwrite_bio = bio;
@@ -1627,10 +1611,6 @@ static int invalidate_lock(struct dm_cache_migration *mg)
struct dm_bio_prison_cell_v2 *prealloc;

prealloc = alloc_prison_cell(cache);
if (!prealloc) {
invalidate_complete(mg, false);
return -ENOMEM;
}

build_key(mg->invalidate_oblock, oblock_succ(mg->invalidate_oblock), &key);
r = dm_cell_lock_v2(cache->prison, &key,
@@ -1668,10 +1648,6 @@ static int invalidate_start(struct cache *cache, dm_cblock_t cblock,
return -EPERM;

mg = alloc_migration(cache);
if (!mg) {
background_work_end(cache);
return -ENOMEM;
}

mg->overwrite_bio = bio;
mg->invalidate_cblock = cblock;
@@ -158,7 +158,7 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
} else {
pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
mdname(mddev));
pr_err("md/raid0: please set raid.default_layout to 1 or 2\n");
pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
err = -ENOTSUPP;
goto abort;
}
@@ -641,8 +641,7 @@ static int v4l_stk_release(struct file *fp)
dev->owner = NULL;
}

if (is_present(dev))
usb_autopm_put_interface(dev->interface);
usb_autopm_put_interface(dev->interface);
mutex_unlock(&dev->lock);
return v4l2_fh_release(fp);
}
@@ -949,7 +949,7 @@ static int jmb38x_ms_probe(struct pci_dev *pdev,
if (!cnt) {
rc = -ENODEV;
pci_dev_busy = 1;
goto err_out;
goto err_out_int;
}

jm = kzalloc(sizeof(struct jmb38x_ms)
@@ -214,13 +214,21 @@ static void mei_mkhi_fix(struct mei_cl_device *cldev)
{
int ret;

/* No need to enable the client if nothing is needed from it */
if (!cldev->bus->fw_f_fw_ver_supported &&
    !cldev->bus->hbm_f_os_supported)
return;

ret = mei_cldev_enable(cldev);
if (ret)
return;

ret = mei_fwver(cldev);
if (ret < 0)
dev_err(&cldev->dev, "FW version command failed %d\n", ret);
if (cldev->bus->fw_f_fw_ver_supported) {
ret = mei_fwver(cldev);
if (ret < 0)
dev_err(&cldev->dev, "FW version command failed %d\n",
ret);
}

if (cldev->bus->hbm_f_os_supported) {
ret = mei_osver(cldev);
@@ -139,6 +139,9 @@
#define MEI_DEV_ID_CNP_H 0xA360 /* Cannon Point H */
#define MEI_DEV_ID_CNP_H_4 0xA364 /* Cannon Point H 4 (iTouch) */

#define MEI_DEV_ID_CMP_LP 0x02e0 /* Comet Point LP */
#define MEI_DEV_ID_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */

#define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */

#define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake Point LP */
@@ -1368,6 +1368,8 @@ static bool mei_me_fw_type_sps(struct pci_dev *pdev)
#define MEI_CFG_FW_SPS \
.quirk_probe = mei_me_fw_type_sps

#define MEI_CFG_FW_VER_SUPP \
.fw_ver_supported = 1

#define MEI_CFG_ICH_HFS \
.fw_status.count = 0
@@ -1405,31 +1407,41 @@ static const struct mei_cfg mei_me_ich10_cfg = {
MEI_CFG_ICH10_HFS,
};

/* PCH devices */
static const struct mei_cfg mei_me_pch_cfg = {
/* PCH6 devices */
static const struct mei_cfg mei_me_pch6_cfg = {
MEI_CFG_PCH_HFS,
};

/* PCH7 devices */
static const struct mei_cfg mei_me_pch7_cfg = {
MEI_CFG_PCH_HFS,
MEI_CFG_FW_VER_SUPP,
};

/* PCH Cougar Point and Patsburg with quirk for Node Manager exclusion */
static const struct mei_cfg mei_me_pch_cpt_pbg_cfg = {
MEI_CFG_PCH_HFS,
MEI_CFG_FW_VER_SUPP,
MEI_CFG_FW_NM,
};

/* PCH8 Lynx Point and newer devices */
static const struct mei_cfg mei_me_pch8_cfg = {
MEI_CFG_PCH8_HFS,
MEI_CFG_FW_VER_SUPP,
};

/* PCH8 Lynx Point with quirk for SPS Firmware exclusion */
static const struct mei_cfg mei_me_pch8_sps_cfg = {
MEI_CFG_PCH8_HFS,
MEI_CFG_FW_VER_SUPP,
MEI_CFG_FW_SPS,
};

/* Cannon Lake and newer devices */
static const struct mei_cfg mei_me_pch12_cfg = {
MEI_CFG_PCH8_HFS,
MEI_CFG_FW_VER_SUPP,
MEI_CFG_DMA_128,
};

@@ -1441,7 +1453,8 @@ static const struct mei_cfg *const mei_cfg_list[] = {
[MEI_ME_UNDEF_CFG] = NULL,
[MEI_ME_ICH_CFG] = &mei_me_ich_cfg,
[MEI_ME_ICH10_CFG] = &mei_me_ich10_cfg,
[MEI_ME_PCH_CFG] = &mei_me_pch_cfg,
[MEI_ME_PCH6_CFG] = &mei_me_pch6_cfg,
[MEI_ME_PCH7_CFG] = &mei_me_pch7_cfg,
[MEI_ME_PCH_CPT_PBG_CFG] = &mei_me_pch_cpt_pbg_cfg,
[MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
[MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
@@ -1480,6 +1493,8 @@ struct mei_device *mei_me_dev_init(struct pci_dev *pdev,

mei_device_init(dev, &pdev->dev, &mei_me_hw_ops);
hw->cfg = cfg;
dev->fw_f_fw_ver_supported = cfg->fw_ver_supported;

return dev;
}
@@ -32,11 +32,13 @@
 * @fw_status: FW status
 * @quirk_probe: device exclusion quirk
 * @dma_size: device DMA buffers size
 * @fw_ver_supported: is fw version retrievable from FW
 */
struct mei_cfg {
const struct mei_fw_status fw_status;
bool (*quirk_probe)(struct pci_dev *pdev);
size_t dma_size[DMA_DSCR_NUM];
u32 fw_ver_supported:1;
};

@@ -74,7 +76,8 @@ struct mei_me_hw {
 * @MEI_ME_UNDEF_CFG: Lower sentinel.
 * @MEI_ME_ICH_CFG: I/O Controller Hub legacy devices.
 * @MEI_ME_ICH10_CFG: I/O Controller Hub platforms Gen10
 * @MEI_ME_PCH_CFG: Platform Controller Hub platforms (Up to Gen8).
 * @MEI_ME_PCH6_CFG: Platform Controller Hub platforms (Gen6).
 * @MEI_ME_PCH7_CFG: Platform Controller Hub platforms (Gen7).
 * @MEI_ME_PCH_CPT_PBG_CFG:Platform Controller Hub workstations
 *                         with quirk for Node Manager exclusion.
 * @MEI_ME_PCH8_CFG: Platform Controller Hub Gen8 and newer
@@ -89,7 +92,8 @@ enum mei_cfg_idx {
MEI_ME_UNDEF_CFG,
MEI_ME_ICH_CFG,
MEI_ME_ICH10_CFG,
MEI_ME_PCH_CFG,
MEI_ME_PCH6_CFG,
MEI_ME_PCH7_CFG,
MEI_ME_PCH_CPT_PBG_CFG,
MEI_ME_PCH8_CFG,
MEI_ME_PCH8_SPS_CFG,
@@ -422,6 +422,8 @@ struct mei_fw_version {
 *
 * @fw_ver : FW versions
 *
 * @fw_f_fw_ver_supported : fw feature: fw version supported
 *
 * @me_clients_rwsem: rw lock over me_clients list
 * @me_clients : list of FW clients
 * @me_clients_map : FW clients bit map
@@ -500,6 +502,8 @@ struct mei_device {

struct mei_fw_version fw_ver[MEI_MAX_FW_VER_BLOCKS];

unsigned int fw_f_fw_ver_supported:1;

struct rw_semaphore me_clients_rwsem;
struct list_head me_clients;
DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
@@ -70,13 +70,13 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_3, MEI_ME_ICH10_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_4, MEI_ME_ICH10_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_1, MEI_ME_PCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_2, MEI_ME_PCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_1, MEI_ME_PCH6_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_2, MEI_ME_PCH6_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CPT_1, MEI_ME_PCH_CPT_PBG_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PBG_1, MEI_ME_PCH_CPT_PBG_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH7_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH7_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH7_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_LPT_H, MEI_ME_PCH8_SPS_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_LPT_W, MEI_ME_PCH8_SPS_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_LPT_LP, MEI_ME_PCH8_CFG)},
@@ -105,6 +105,9 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH12_CFG)},
@@ -721,6 +721,8 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
BUG();
}
mmc_log_string(mmc, "tag: %d\n", tag);
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
cqhci_writel(cq_host, 1 << tag, CQHCI_TDBR);
/* Commit the doorbell write immediately */
wmb();
@@ -543,7 +543,7 @@ qca8k_setup(struct dsa_switch *ds)
BIT(0) << QCA8K_GLOBAL_FW_CTRL1_UC_DP_S);

/* Setup connection between CPU port & user ports */
for (i = 0; i < DSA_MAX_PORTS; i++) {
for (i = 0; i < QCA8K_NUM_PORTS; i++) {
/* CPU port gets connected to all user ports of the switch */
if (dsa_is_cpu_port(ds, i)) {
qca8k_rmw(priv, QCA8K_PORT_LOOKUP_CTRL(QCA8K_CPU_PORT),
@@ -897,7 +897,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
if (id != QCA8K_ID_QCA8337)
return -ENODEV;

priv->ds = dsa_switch_alloc(&mdiodev->dev, DSA_MAX_PORTS);
priv->ds = dsa_switch_alloc(&mdiodev->dev, QCA8K_NUM_PORTS);
if (!priv->ds)
return -ENOMEM;
|
@ -507,7 +507,8 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
|
|||
irq = of_irq_get(intc, 0);
|
||||
if (irq <= 0) {
|
||||
dev_err(smi->dev, "failed to get parent IRQ\n");
|
||||
return irq ? irq : -EINVAL;
|
||||
ret = irq ? irq : -EINVAL;
|
||||
goto out_put_node;
|
||||
}
|
||||
|
||||
/* This clears the IRQ status register */
|
||||
|
@ -515,7 +516,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
|
|||
&val);
|
||||
if (ret) {
|
||||
dev_err(smi->dev, "can't read interrupt status\n");
|
||||
return ret;
|
||||
goto out_put_node;
|
||||
}
|
||||
|
||||
/* Fetch IRQ edge information from the descriptor */
|
||||
|
@ -537,7 +538,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
|
|||
val);
|
||||
if (ret) {
|
||||
dev_err(smi->dev, "could not configure IRQ polarity\n");
|
||||
return ret;
|
||||
goto out_put_node;
|
||||
}
|
||||
|
||||
ret = devm_request_threaded_irq(smi->dev, irq, NULL,
|
||||
|
@ -545,7 +546,7 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
|
|||
"RTL8366RB", smi);
|
||||
if (ret) {
|
||||
dev_err(smi->dev, "unable to request irq: %d\n", ret);
|
||||
return ret;
|
||||
goto out_put_node;
|
||||
}
|
||||
smi->irqdomain = irq_domain_add_linear(intc,
|
||||
RTL8366RB_NUM_INTERRUPT,
|
||||
|
@ -553,12 +554,15 @@ static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
|
|||
smi);
|
||||
if (!smi->irqdomain) {
|
||||
dev_err(smi->dev, "failed to create IRQ domain\n");
|
||||
return -EINVAL;
|
||||
ret = -EINVAL;
|
||||
goto out_put_node;
|
||||
}
|
||||
for (i = 0; i < smi->num_ports; i++)
|
||||
irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
|
||||
|
||||
return 0;
|
||||
out_put_node:
|
||||
of_node_put(intc);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int rtl8366rb_set_addr(struct realtek_smi *smi)
|
||||
|
|
|
@@ -369,6 +369,7 @@ struct bcmgenet_mib_counters {
#define EXT_PWR_DOWN_PHY_EN (1 << 20)

#define EXT_RGMII_OOB_CTRL 0x0C
#define RGMII_MODE_EN_V123 (1 << 0)
#define RGMII_LINK (1 << 4)
#define OOB_DISABLE (1 << 5)
#define RGMII_MODE_EN (1 << 6)
@@ -261,7 +261,11 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
 */
if (priv->ext_phy) {
reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL);
reg |= RGMII_MODE_EN | id_mode_dis;
reg |= id_mode_dis;
if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv))
reg |= RGMII_MODE_EN_V123;
else
reg |= RGMII_MODE_EN;
bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL);
}

@@ -276,11 +280,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device_node *dn = priv->pdev->dev.of_node;
struct phy_device *phydev;
u32 phy_flags;
u32 phy_flags = 0;
int ret;

/* Communicate the integrated PHY revision */
phy_flags = priv->gphy_rev;
if (priv->internal_phy)
phy_flags = priv->gphy_rev;

/* Initialize link state variables that bcmgenet_mii_setup() uses */
priv->old_link = -1;
@@ -156,11 +156,15 @@ static int mdio_sc_cfg_reg_write(struct hns_mdio_device *mdio_dev,
{
u32 time_cnt;
u32 reg_value;
int ret;

regmap_write(mdio_dev->subctrl_vbase, cfg_reg, set_val);

for (time_cnt = MDIO_TIMEOUT; time_cnt; time_cnt--) {
regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
ret = regmap_read(mdio_dev->subctrl_vbase, st_reg, &reg_value);
if (ret)
return ret;

reg_value &= st_msk;
if ((!!check_st) == (!!reg_value))
break;
@@ -96,6 +96,8 @@

#define OPT_SWAP_PORT 0x0001 /* Need to wordswp on the MPU port */

#define LIB82596_DMA_ATTR DMA_ATTR_NON_CONSISTENT

#define DMA_WBACK(ndev, addr, len) \
do { dma_cache_sync((ndev)->dev.parent, (void *)addr, len, DMA_TO_DEVICE); } while (0)
@@ -199,7 +201,7 @@ static int __exit lan_remove_chip(struct parisc_device *pdev)

unregister_netdev (dev);
dma_free_attrs(&pdev->dev, sizeof(struct i596_private), lp->dma,
lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
lp->dma_addr, LIB82596_DMA_ATTR);
free_netdev (dev);
return 0;
}
@@ -1065,7 +1065,7 @@ static int i82596_probe(struct net_device *dev)

dma = dma_alloc_attrs(dev->dev.parent, sizeof(struct i596_dma),
&lp->dma_addr, GFP_KERNEL,
DMA_ATTR_NON_CONSISTENT);
LIB82596_DMA_ATTR);
if (!dma) {
printk(KERN_ERR "%s: Couldn't get shared memory\n", __FILE__);
return -ENOMEM;
@@ -1087,7 +1087,7 @@ static int i82596_probe(struct net_device *dev)
i = register_netdev(dev);
if (i) {
dma_free_attrs(dev->dev.parent, sizeof(struct i596_dma),
dma, lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
dma, lp->dma_addr, LIB82596_DMA_ATTR);
return i;
}

@@ -23,6 +23,8 @@

static const char sni_82596_string[] = "snirm_82596";

#define LIB82596_DMA_ATTR 0

#define DMA_WBACK(priv, addr, len) do { } while (0)
#define DMA_INV(priv, addr, len) do { } while (0)
#define DMA_WBACK_INV(priv, addr, len) do { } while (0)
@@ -151,7 +153,7 @@ static int sni_82596_driver_remove(struct platform_device *pdev)

unregister_netdev(dev);
dma_free_attrs(dev->dev.parent, sizeof(struct i596_private), lp->dma,
lp->dma_addr, DMA_ATTR_NON_CONSISTENT);
lp->dma_addr, LIB82596_DMA_ATTR);
iounmap(lp->ca);
iounmap(lp->mpu_port);
free_netdev (dev);
@@ -2731,12 +2731,10 @@ static int enable_scrq_irq(struct ibmvnic_adapter *adapter,

if (adapter->resetting &&
adapter->reset_reason == VNIC_RESET_MOBILITY) {
u64 val = (0xff000000) | scrq->hw_irq;
struct irq_desc *desc = irq_to_desc(scrq->irq);
struct irq_chip *chip = irq_desc_get_chip(desc);

rc = plpar_hcall_norets(H_EOI, val);
if (rc)
dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
val, rc);
chip->irq_eoi(&desc->irq_data);
}

rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
@@ -4522,8 +4522,10 @@ int stmmac_suspend(struct device *dev)
stmmac_mac_set(priv, priv->ioaddr, false);
pinctrl_pm_select_sleep_state(priv->device);
/* Disable clock in case of PWM is off */
clk_disable(priv->plat->pclk);
clk_disable(priv->plat->stmmac_clk);
if (priv->plat->clk_ptp_ref)
clk_disable_unprepare(priv->plat->clk_ptp_ref);
clk_disable_unprepare(priv->plat->pclk);
clk_disable_unprepare(priv->plat->stmmac_clk);
}
mutex_unlock(&priv->lock);

@@ -4588,8 +4590,10 @@ int stmmac_resume(struct device *dev)
} else {
pinctrl_pm_select_default_state(priv->device);
/* enable the clk previously disabled */
clk_enable(priv->plat->stmmac_clk);
clk_enable(priv->plat->pclk);
clk_prepare_enable(priv->plat->stmmac_clk);
clk_prepare_enable(priv->plat->pclk);
if (priv->plat->clk_ptp_ref)
clk_prepare_enable(priv->plat->clk_ptp_ref);
/* reset the phy so that it's ready */
if (priv->mii)
stmmac_mdio_reset(priv->mii);
@@ -3151,12 +3151,12 @@ static int ca8210_probe(struct spi_device *spi_device)
goto error;
}

priv->spi->dev.platform_data = pdata;
ret = ca8210_get_platform_data(priv->spi, pdata);
if (ret) {
dev_crit(&spi_device->dev, "ca8210_get_platform_data failed\n");
goto error;
}
priv->spi->dev.platform_data = pdata;

ret = ca8210_dev_com_init(priv);
if (ret) {
@@ -4474,10 +4474,9 @@ static int rtl8152_reset_resume(struct usb_interface *intf)
struct r8152 *tp = usb_get_intfdata(intf);

clear_bit(SELECTIVE_SUSPEND, &tp->flags);
mutex_lock(&tp->control);
tp->rtl_ops.init(tp);
queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
mutex_unlock(&tp->control);
set_ethernet_addr(tp);
return rtl8152_resume(intf);
}
@@ -775,6 +775,9 @@ static void rtl_p2p_noa_ie(struct ieee80211_hw *hw, void *data,
return;
} else {
noa_num = (noa_len - 2) / 13;
if (noa_num > P2P_MAX_NOA_NUM)
noa_num = P2P_MAX_NOA_NUM;

}
noa_index = ie[3];
if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
@@ -869,6 +872,9 @@ static void rtl_p2p_action_ie(struct ieee80211_hw *hw, void *data,
return;
} else {
noa_num = (noa_len - 2) / 13;
if (noa_num > P2P_MAX_NOA_NUM)
noa_num = P2P_MAX_NOA_NUM;

}
noa_index = ie[3];
if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode ==
@@ -718,7 +718,6 @@ int xenvif_connect_data(struct xenvif_queue *queue,
xenvif_unmap_frontend_data_rings(queue);
netif_napi_del(&queue->napi);
err:
module_put(THIS_MODULE);
return err;
}
@@ -111,10 +111,13 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 */
if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
return;
revalidate_disk(ns->disk);
blk_set_queue_dying(ns->queue);
/* Forcibly unquiesce queues to avoid blocking dispatch */
blk_mq_unquiesce_queue(ns->queue);
/*
 * Revalidate after unblocking dispatchers that may be holding bd_butex
 */
revalidate_disk(ns->disk);
}

static void nvme_queue_scan(struct nvme_ctrl *ctrl)
@@ -97,6 +97,7 @@ struct vmd_dev {
struct resource resources[3];
struct irq_domain *irq_domain;
struct pci_bus *bus;
u8 busn_start;

#ifdef CONFIG_X86_DEV_DMA_OPS
struct dma_map_ops dma_ops;
@@ -468,7 +469,8 @@ static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
char __iomem *addr = vmd->cfgbar +
(bus->number << 20) + (devfn << 12) + reg;
((bus->number - vmd->busn_start) << 20) +
(devfn << 12) + reg;

if ((addr - vmd->cfgbar) + len >=
resource_size(&vmd->dev->resource[VMD_CFGBAR]))
@@ -591,7 +593,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
unsigned long flags;
LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000, busn_start = 0;
resource_size_t membar2_offset = 0x2000;

/*
 * Shadow registers may exist in certain VMD device ids which allow
@@ -633,14 +635,14 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig);
if (BUS_RESTRICT_CAP(vmcap) &&
(BUS_RESTRICT_CFG(vmconfig) == 0x1))
busn_start = 128;
vmd->busn_start = 128;
}

res = &vmd->dev->resource[VMD_CFGBAR];
vmd->resources[0] = (struct resource) {
.name  = "VMD CFGBAR",
.start = busn_start,
.end   = busn_start + (resource_size(res) >> 20) - 1,
.start = vmd->busn_start,
.end   = vmd->busn_start + (resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
};

@@ -708,8 +710,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);

vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops,
sd, &resources);
vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
&vmd_ops, sd, &resources);
if (!vmd->bus) {
pci_free_resource_list(&resources);
irq_domain_remove(vmd->irq_domain);
@@ -925,19 +925,6 @@ void pci_update_current_state(struct pci_dev *dev, pci_power_t state)
}
}

/**
 * pci_power_up - Put the given device into D0 forcibly
 * @dev: PCI device to power up
 */
void pci_power_up(struct pci_dev *dev)
{
if (platform_pci_power_manageable(dev))
platform_pci_set_power_state(dev, PCI_D0);

pci_raw_set_power_state(dev, PCI_D0);
pci_update_current_state(dev, PCI_D0);
}

/**
 * pci_platform_power_transition - Use platform to change device power state
 * @dev: PCI device to handle.
@@ -1116,6 +1103,17 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
}
EXPORT_SYMBOL(pci_set_power_state);

/**
 * pci_power_up - Put the given device into D0 forcibly
 * @dev: PCI device to power up
 */
void pci_power_up(struct pci_dev *dev)
{
__pci_start_power_transition(dev, PCI_D0);
pci_raw_set_power_state(dev, PCI_D0);
pci_update_current_state(dev, PCI_D0);
}

/**
 * pci_choose_state - Choose the power state of a PCI device
 * @dev: PCI device to be suspended
@@ -1524,7 +1524,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_Strago"),
DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
},
},
{
@@ -1532,7 +1531,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "HP"),
DMI_MATCH(DMI_PRODUCT_NAME, "Setzer"),
DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
},
},
{
@@ -1540,7 +1538,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
DMI_MATCH(DMI_PRODUCT_NAME, "Cyan"),
DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
},
},
{
@@ -1548,7 +1545,6 @@ static const struct dmi_system_id chv_no_valid_mask[] = {
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
DMI_MATCH(DMI_PRODUCT_NAME, "Celes"),
DMI_MATCH(DMI_PRODUCT_VERSION, "1.0"),
},
},
{}
@@ -183,10 +183,10 @@ static struct armada_37xx_pin_group armada_37xx_nb_groups[] = {
PIN_GRP_EXTRA("uart2", 9, 2, BIT(1) | BIT(13) | BIT(14) | BIT(19),
BIT(1) | BIT(13) | BIT(14), BIT(1) | BIT(19),
18, 2, "gpio", "uart"),
PIN_GRP_GPIO("led0_od", 11, 1, BIT(20), "led"),
PIN_GRP_GPIO("led1_od", 12, 1, BIT(21), "led"),
PIN_GRP_GPIO("led2_od", 13, 1, BIT(22), "led"),
PIN_GRP_GPIO("led3_od", 14, 1, BIT(23), "led"),
PIN_GRP_GPIO_2("led0_od", 11, 1, BIT(20), BIT(20), 0, "led"),
PIN_GRP_GPIO_2("led1_od", 12, 1, BIT(21), BIT(21), 0, "led"),
PIN_GRP_GPIO_2("led2_od", 13, 1, BIT(22), BIT(22), 0, "led"),
PIN_GRP_GPIO_2("led3_od", 14, 1, BIT(23), BIT(23), 0, "led"),

};

@@ -218,11 +218,11 @@ static const struct armada_37xx_pin_data armada_37xx_pin_sb = {
};

static inline void armada_37xx_update_reg(unsigned int *reg,
unsigned int offset)
unsigned int *offset)
{
/* We never have more than 2 registers */
if (offset >= GPIO_PER_REG) {
offset -= GPIO_PER_REG;
if (*offset >= GPIO_PER_REG) {
*offset -= GPIO_PER_REG;
*reg += sizeof(u32);
}
}
@@ -373,7 +373,7 @@ static inline void armada_37xx_irq_update_reg(unsigned int *reg,
{
int offset = irqd_to_hwirq(d);

armada_37xx_update_reg(reg, offset);
armada_37xx_update_reg(reg, &offset);
}

static int armada_37xx_gpio_direction_input(struct gpio_chip *chip,
@@ -383,7 +383,7 @@ static int armada_37xx_gpio_direction_input(struct gpio_chip *chip,
unsigned int reg = OUTPUT_EN;
unsigned int mask;

armada_37xx_update_reg(&reg, offset);
armada_37xx_update_reg(&reg, &offset);
mask = BIT(offset);

return regmap_update_bits(info->regmap, reg, mask, 0);
@@ -396,7 +396,7 @@ static int armada_37xx_gpio_get_direction(struct gpio_chip *chip,
unsigned int reg = OUTPUT_EN;
unsigned int val, mask;

armada_37xx_update_reg(&reg, offset);
armada_37xx_update_reg(&reg, &offset);
mask = BIT(offset);
regmap_read(info->regmap, reg, &val);

@@ -410,7 +410,7 @@ static int armada_37xx_gpio_direction_output(struct gpio_chip *chip,
unsigned int reg = OUTPUT_EN;
unsigned int mask, val, ret;

armada_37xx_update_reg(&reg, offset);
armada_37xx_update_reg(&reg, &offset);
mask = BIT(offset);

ret = regmap_update_bits(info->regmap, reg, mask, mask);
@@ -431,7 +431,7 @@ static int armada_37xx_gpio_get(struct gpio_chip *chip, unsigned int offset)
unsigned int reg = INPUT_VAL;
unsigned int val, mask;

armada_37xx_update_reg(&reg, offset);
armada_37xx_update_reg(&reg, &offset);
mask = BIT(offset);

regmap_read(info->regmap, reg, &val);
@@ -446,7 +446,7 @@ static void armada_37xx_gpio_set(struct gpio_chip *chip, unsigned int offset,
unsigned int reg = OUTPUT_VAL;
unsigned int mask, val;

armada_37xx_update_reg(&reg, offset);
armada_37xx_update_reg(&reg, &offset);
mask = BIT(offset);
val = value ? mask : 0;
@@ -21,6 +21,11 @@

struct kmem_cache *zfcp_fsf_qtcb_cache;

static bool ber_stop = true;
module_param(ber_stop, bool, 0600);
MODULE_PARM_DESC(ber_stop,
"Shuts down FCP devices for FCP channels that report a bit-error count in excess of its threshold (default on)");

static void zfcp_fsf_request_timeout_handler(struct timer_list *t)
{
struct zfcp_fsf_req *fsf_req = from_timer(fsf_req, t, timer);
@@ -230,10 +235,15 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req)
case FSF_STATUS_READ_SENSE_DATA_AVAIL:
break;
case FSF_STATUS_READ_BIT_ERROR_THRESHOLD:
dev_warn(&adapter->ccw_device->dev,
"The error threshold for checksum statistics "
"has been exceeded\n");
zfcp_dbf_hba_bit_err("fssrh_3", req);
if (ber_stop) {
dev_warn(&adapter->ccw_device->dev,
"All paths over this FCP device are disused because of excessive bit errors\n");
zfcp_erp_adapter_shutdown(adapter, 0, "fssrh_b");
} else {
dev_warn(&adapter->ccw_device->dev,
"The error threshold for checksum statistics has been exceeded\n");
}
break;
case FSF_STATUS_READ_LINK_DOWN:
zfcp_fsf_status_read_link_down(req);
@@ -578,7 +578,6 @@ ch_release(struct inode *inode, struct file *file)
scsi_changer *ch = file->private_data;

scsi_device_put(ch->device);
ch->device = NULL;
file->private_data = NULL;
kref_put(&ch->ref, ch_destroy);
return 0;
@@ -4189,11 +4189,11 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 */
if (pdev->subsystem_vendor == PCI_VENDOR_ID_COMPAQ &&
pdev->subsystem_device == 0xC000)
return -ENODEV;
goto out_disable_device;
/* Now check the magic signature byte */
pci_read_config_word(pdev, PCI_CONF_AMISIG, &magic);
if (magic != HBA_SIGNATURE_471 && magic != HBA_SIGNATURE)
return -ENODEV;
goto out_disable_device;
/* Ok it is probably a megaraid */
}
@@ -1023,6 +1023,7 @@ void qlt_free_session_done(struct work_struct *work)

if (logout_started) {
bool traced = false;
u16 cnt = 0;

while (!READ_ONCE(sess->logout_completed)) {
if (!traced) {
@@ -1032,6 +1033,9 @@ void qlt_free_session_done(struct work_struct *work)
traced = true;
}
msleep(100);
cnt++;
if (cnt > 200)
break;
}

ql_dbg(ql_dbg_disc, vha, 0xf087,
@@ -970,6 +970,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
ses->sdb = scmd->sdb;
ses->next_rq = scmd->request->next_rq;
ses->result = scmd->result;
ses->resid_len = scmd->req.resid_len;
ses->underflow = scmd->underflow;
ses->prot_op = scmd->prot_op;
ses->eh_eflags = scmd->eh_eflags;
@@ -981,6 +982,7 @@ void scsi_eh_prep_cmnd(struct scsi_cmnd *scmd, struct scsi_eh_save *ses,
memset(&scmd->sdb, 0, sizeof(scmd->sdb));
scmd->request->next_rq = NULL;
scmd->result = 0;
scmd->req.resid_len = 0;

if (sense_bytes) {
scmd->sdb.length = min_t(unsigned, SCSI_SENSE_BUFFERSIZE,
@@ -1034,6 +1036,7 @@ void scsi_eh_restore_cmnd(struct scsi_cmnd* scmd, struct scsi_eh_save *ses)
scmd->sdb = ses->sdb;
scmd->request->next_rq = ses->next_rq;
scmd->result = ses->result;
scmd->req.resid_len = ses->resid_len;
scmd->underflow = ses->underflow;
scmd->prot_op = ses->prot_op;
scmd->eh_eflags = ses->eh_eflags;
Some files were not shown because too many files have changed in this diff.