This is the 4.19.149 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl91ulMACgkQONu9yGCS
aT7ezhAArTOQxPGkhktgdGfCMYgjvIHdny8o4pNGumnxW6TG7FCiJHoZuj8OLkdx
2x5brOOvSGgcGTOwJXyUjL6opQzD5syTCuzbgEpGB2Tyd1x5q8vgqvI2XPxZeYHy
x+mUDgacT+4m7FNbFDhNMZoTS4KCiJ3IcTevjeQexDtIs6R38HhxNl0Ee67gkqxZ
p7c6L3kbUuR5T9EWGE1DPPLhOFGeOMk592qzkFsCGERsuswQOpXrxyw6zkik/0UG
6Losmo2i+OtQFeiDz0WYJZNO9ySI511j+7R2Ewch/nFuTp6yFzy9kJZnP0YWK/KE
U4BLmopgzCs9q+TQ/QNjxlCltl4eOrrjkFXF3Zz8o5ddbKwrugEsJUdUUDIpva71
qEUgSw7vguGKoCttBenCDwyYOcjIVJRBFSWTVDzkgw5pXrz3m7qePF1Kj+KzG0pN
8gTqosXPlYPzH1mh+2vRVntiCpZRMJYo18CX+ifqN20dHH3dsM4vA5NiWwjTJVY8
JddRXfujxBQ0jxs2jFKvPZNrgqeY3Mh51L0a5G+HbHCIb+4kgD+2jl+C/X38TKch
osTM1/qQriFVxtlH9TkTa8opYvrYBWO+G+XhNVc2tSpmd8T2EaKokMAVVvGiK3l9
ZPq06SytJyKDPsSLvk4BKxCUv5CY0VT18k6mCYd1fq4oxTR92A4=
=5bC5
-----END PGP SIGNATURE-----

Merge 4.19.149 into android-4.19-stable

Changes in 4.19.149
    selinux: allow labeling before policy is loaded
    media: mc-device.c: fix memleak in media_device_register_entity
    dma-fence: Serialise signal enabling (dma_fence_enable_sw_signaling)
    ath10k: fix array out-of-bounds access
    ath10k: fix memory leak for tpc_stats_final
    mm: fix double page fault on arm64 if PTE_AF is cleared
    scsi: aacraid: fix illegal IO beyond last LBA
    m68k: q40: Fix info-leak in rtc_ioctl
    gma/gma500: fix a memory disclosure bug due to uninitialized bytes
    ASoC: kirkwood: fix IRQ error handling
    media: smiapp: Fix error handling at NVM reading
    arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
    x86/ioapic: Unbreak check_timer()
    ALSA: usb-audio: Add delay quirk for H570e USB headsets
    ALSA: hda/realtek - Couldn't detect Mic if booting with headset plugged
    ALSA: hda/realtek: Enable front panel headset LED on Lenovo ThinkStation P520
    lib/string.c: implement stpcpy
    leds: mlxreg: Fix possible buffer overflow
    PM / devfreq: tegra30: Fix integer overflow on CPU's freq max out
    scsi: fnic: fix use after free
    scsi: lpfc: Fix kernel crash at lpfc_nvme_info_show during remote port bounce
    net: silence data-races on sk_backlog.tail
    clk/ti/adpll: allocate room for terminating null
    drm/amdgpu/powerplay: fix AVFS handling with custom powerplay table
    mtd: cfi_cmdset_0002: don't free cfi->cfiq in error path of cfi_amdstd_setup()
    mfd: mfd-core: Protect against NULL call-back function pointer
    drm/amdgpu/powerplay/smu7: fix AVFS handling with custom powerplay table
    tpm_crb: fix fTPM on AMD Zen+ CPUs
    tracing: Adding NULL checks for trace_array descriptor pointer
    bcache: fix a lost wake-up problem caused by mca_cannibalize_lock
    dmaengine: mediatek: hsdma_probe: fixed a memory leak when devm_request_irq fails
    RDMA/qedr: Fix potential use after free
    RDMA/i40iw: Fix potential use after free
    fix dget_parent() fastpath race
    xfs: fix attr leaf header freemap.size underflow
    RDMA/iw_cgxb4: Fix an error handling path in 'c4iw_connect()'
    ubi: Fix producing anchor PEBs
    mmc: core: Fix size overflow for mmc partitions
    gfs2: clean up iopen glock mess in gfs2_create_inode
    scsi: pm80xx: Cleanup command when a reset times out
    debugfs: Fix !DEBUG_FS debugfs_create_automount
    CIFS: Properly process SMB3 lease breaks
    ASoC: max98090: remove msleep in PLL unlocked workaround
    kernel/sys.c: avoid copying possible padding bytes in copy_to_user
    KVM: arm/arm64: vgic: Fix potential double free dist->spis in __kvm_vgic_destroy()
    xfs: fix log reservation overflows when allocating large rt extents
    neigh_stat_seq_next() should increase position index
    rt_cpu_seq_next should increase position index
    ipv6_route_seq_next should increase position index
    seqlock: Require WRITE_ONCE surrounding raw_seqcount_barrier
    media: ti-vpe: cal: Restrict DMA to avoid memory corruption
    sctp: move trace_sctp_probe_path into sctp_outq_sack
    ACPI: EC: Reference count query handlers under lock
    scsi: ufs: Make ufshcd_add_command_trace() easier to read
    scsi: ufs: Fix a race condition in the tracing code
    dmaengine: zynqmp_dma: fix burst length configuration
    s390/cpum_sf: Use kzalloc and minor changes
    powerpc/eeh: Only dump stack once if an MMIO loop is detected
    Bluetooth: btrtl: Use kvmalloc for FW allocations
    tracing: Set kernel_stack's caller size properly
    ARM: 8948/1: Prevent OOB access in stacktrace
    ar5523: Add USB ID of SMCWUSBT-G2 wireless adapter
    ceph: ensure we have a new cap before continuing in fill_inode
    selftests/ftrace: fix glob selftest
    tools/power/x86/intel_pstate_tracer: changes for python 3 compatibility
    Bluetooth: Fix refcount use-after-free issue
    mm/swapfile.c: swap_next should increase position index
    mm: pagewalk: fix termination condition in walk_pte_range()
    Bluetooth: prefetch channel before killing sock
    KVM: fix overflow of zero page refcount with ksm running
    ALSA: hda: Clear RIRB status before reading WP
    skbuff: fix a data race in skb_queue_len()
    audit: CONFIG_CHANGE don't log internal bookkeeping as an event
    selinux: sel_avc_get_stat_idx should increase position index
    scsi: lpfc: Fix RQ buffer leakage when no IOCBs available
    scsi: lpfc: Fix coverity errors in fmdi attribute handling
    drm/omap: fix possible object reference leak
    clk: stratix10: use do_div() for 64-bit calculation
    crypto: chelsio - This fixes the kernel panic which occurs during a libkcapi test
    mt76: clear skb pointers from rx aggregation reorder buffer during cleanup
    ALSA: usb-audio: Don't create a mixer element with bogus volume range
    perf test: Fix test trace+probe_vfs_getname.sh on s390
    RDMA/rxe: Fix configuration of atomic queue pair attributes
    KVM: x86: fix incorrect comparison in trace event
    dmaengine: stm32-mdma: use vchan_terminate_vdesc() in .terminate_all
    media: staging/imx: Missing assignment in imx_media_capture_device_register()
    x86/pkeys: Add check for pkey "overflow"
    bpf: Remove recursion prevention from rcu free callback
    dmaengine: stm32-dma: use vchan_terminate_vdesc() in .terminate_all
    dmaengine: tegra-apb: Prevent race conditions on channel's freeing
    drm/amd/display: dal_ddc_i2c_payloads_create can fail causing panic
    firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
    random: fix data races at timer_rand_state
    bus: hisi_lpc: Fixup IO ports addresses to avoid use-after-free in host removal
    media: go7007: Fix URB type for interrupt handling
    Bluetooth: guard against controllers sending zero'd events
    timekeeping: Prevent 32bit truncation in scale64_check_overflow()
    ext4: fix a data race at inode->i_disksize
    perf jevents: Fix leak of mapfile memory
    mm: avoid data corruption on CoW fault into PFN-mapped VMA
    drm/amdgpu: increase atombios cmd timeout
    drm/amd/display: Stop if retimer is not available
    ath10k: use kzalloc to read for ath10k_sdio_hif_diag_read
    scsi: aacraid: Disabling TM path and only processing IOP reset
    Bluetooth: L2CAP: handle l2cap config request during open state
    media: tda10071: fix unsigned sign extension overflow
    xfs: don't ever return a stale pointer from __xfs_dir3_free_read
    xfs: mark dir corrupt when lookup-by-hash fails
    ext4: mark block bitmap corrupted when found instead of BUGON
    tpm: ibmvtpm: Wait for buffer to be set before proceeding
    rtc: sa1100: fix possible race condition
    rtc: ds1374: fix possible race condition
    nfsd: Don't add locks to closed or closing open stateids
    RDMA/cm: Remove a race freeing timewait_info
    KVM: PPC: Book3S HV: Treat TM-related invalid form instructions on P9 like the valid ones
    drm/msm: fix leaks if initialization fails
    drm/msm/a5xx: Always set an OPP supported hardware value
    tracing: Use address-of operator on section symbols
    thermal: rcar_thermal: Handle probe error gracefully
    perf parse-events: Fix 3 use after frees found with clang ASAN
    serial: 8250_port: Don't service RX FIFO if throttled
    serial: 8250_omap: Fix sleeping function called from invalid context during probe
    serial: 8250: 8250_omap: Terminate DMA before pushing data on RX timeout
    perf cpumap: Fix snprintf overflow check
    cpufreq: powernv: Fix frame-size-overflow in powernv_cpufreq_work_fn
    tools: gpio-hammer: Avoid potential overflow in main
    nvme-multipath: do not reset on unknown status
    nvme: Fix controller creation races with teardown flow
    RDMA/rxe: Set sys_image_guid to be aligned with HW IB devices
    scsi: hpsa: correct race condition in offload enabled
    SUNRPC: Fix a potential buffer overflow in 'svc_print_xprts()'
    svcrdma: Fix leak of transport addresses
    PCI: Use ioremap(), not phys_to_virt() for platform ROM
    ubifs: Fix out-of-bounds memory access caused by abnormal value of node_len
    ALSA: usb-audio: Fix case when USB MIDI interface has more than one extra endpoint descriptor
    PCI: pciehp: Fix MSI interrupt race
    NFS: Fix races nfs_page_group_destroy() vs nfs_destroy_unlinked_subrequests()
    mm/kmemleak.c: use address-of operator on section symbols
    mm/filemap.c: clear page error before actual read
    mm/vmscan.c: fix data races using kswapd_classzone_idx
    nvmet-rdma: fix double free of rdma queue
    mm/mmap.c: initialize align_offset explicitly for vm_unmapped_area
    scsi: qedi: Fix termination timeouts in session logout
    serial: uartps: Wait for tx_empty in console setup
    KVM: Remove CREATE_IRQCHIP/SET_PIT2 race
    bdev: Reduce time holding bd_mutex in sync in blkdev_close()
    drivers: char: tlclk.c: Avoid data race between init and interrupt handler
    KVM: arm64: vgic-its: Fix memory leak on the error path of vgic_add_lpi()
    net: openvswitch: use u64 for meter bucket
    scsi: aacraid: Fix error handling paths in aac_probe_one()
    staging:r8188eu: avoid skb_clone for amsdu to msdu conversion
    sparc64: vcc: Fix error return code in vcc_probe()
    arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]
    dt-bindings: sound: wm8994: Correct required supplies based on actual implementaion
    atm: fix a memory leak of vcc->user_back
    perf mem2node: Avoid double free related to realloc
    power: supply: max17040: Correct voltage reading
    phy: samsung: s5pv210-usb2: Add delay after reset
    Bluetooth: Handle Inquiry Cancel error after Inquiry Complete
    USB: EHCI: ehci-mv: fix error handling in mv_ehci_probe()
    tipc: fix memory leak in service subscripting
    tty: serial: samsung: Correct clock selection logic
    ALSA: hda: Fix potential race in unsol event handler
    powerpc/traps: Make unrecoverable NMIs die instead of panic
    fuse: don't check refcount after stealing page
    USB: EHCI: ehci-mv: fix less than zero comparison of an unsigned int
    scsi: cxlflash: Fix error return code in cxlflash_probe()
    arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
    e1000: Do not perform reset in reset_task if we are already down
    drm/nouveau/debugfs: fix runtime pm imbalance on error
    drm/nouveau: fix runtime pm imbalance on error
    drm/nouveau/dispnv50: fix runtime pm imbalance on error
    printk: handle blank console arguments passed in.
    usb: dwc3: Increase timeout for CmdAct cleared by device controller
    btrfs: don't force read-only after error in drop snapshot
    vfio/pci: fix memory leaks of eventfd ctx
    perf evsel: Fix 2 memory leaks
    perf trace: Fix the selection for architectures to generate the errno name tables
    perf stat: Fix duration_time value for higher intervals
    perf util: Fix memory leak of prefix_if_not_in
    perf metricgroup: Free metric_events on error
    perf kcore_copy: Fix module map when there are no modules loaded
    ASoC: img-i2s-out: Fix runtime PM imbalance on error
    wlcore: fix runtime pm imbalance in wl1271_tx_work
    wlcore: fix runtime pm imbalance in wlcore_regdomain_config
    mtd: rawnand: omap_elm: Fix runtime PM imbalance on error
    PCI: tegra: Fix runtime PM imbalance on error
    ceph: fix potential race in ceph_check_caps
    mm/swap_state: fix a data race in swapin_nr_pages
    rapidio: avoid data race between file operation callbacks and mport_cdev_add().
    mtd: parser: cmdline: Support MTD names containing one or more colons
    x86/speculation/mds: Mark mds_user_clear_cpu_buffers() __always_inline
    vfio/pci: Clear error and request eventfd ctx after releasing
    cifs: Fix double add page to memcg when cifs_readpages
    nvme: fix possible deadlock when I/O is blocked
    scsi: libfc: Handling of extra kref
    scsi: libfc: Skip additional kref updating work event
    selftests/x86/syscall_nt: Clear weird flags after each test
    vfio/pci: fix racy on error and request eventfd ctx
    btrfs: qgroup: fix data leak caused by race between writeback and truncate
    ubi: fastmap: Free unused fastmap anchor peb during detach
    perf parse-events: Use strcmp() to compare the PMU name
    net: openvswitch: use div_u64() for 64-by-32 divisions
    nvme: explicitly update mpath disk capacity on revalidation
    ASoC: wm8994: Skip setting of the WM8994_MICBIAS register for WM1811
    ASoC: wm8994: Ensure the device is resumed in wm89xx_mic_detect functions
    ASoC: Intel: bytcr_rt5640: Add quirk for MPMAN Converter9 2-in-1
    RISC-V: Take text_mutex in ftrace_init_nop()
    s390/init: add missing __init annotations
    lockdep: fix order in trace_hardirqs_off_caller()
    drm/amdkfd: fix a memory leak issue
    i2c: core: Call i2c_acpi_install_space_handler() before i2c_acpi_register_devices()
    objtool: Fix noreturn detection for ignored functions
    ieee802154: fix one possible memleak in ca8210_dev_com_init
    ieee802154/adf7242: check status of adf7242_read_reg
    clocksource/drivers/h8300_timer8: Fix wrong return value in h8300_8timer_init()
    mwifiex: Increase AES key storage size to 256 bits
    batman-adv: bla: fix type misuse for backbone_gw hash indexing
    atm: eni: fix the missed pci_disable_device() for eni_init_one()
    batman-adv: mcast/TT: fix wrongly dropped or rerouted packets
    mac802154: tx: fix use-after-free
    bpf: Fix clobbering of r2 in bpf_gen_ld_abs
    drm/vc4/vc4_hdmi: fill ASoC card owner
    net: qed: RDMA personality shouldn't fail VF load
    drm/sun4i: sun8i-csc: Secondary CSC register correction
    batman-adv: Add missing include for in_interrupt()
    batman-adv: mcast: fix duplicate mcast packets in BLA backbone from mesh
    batman-adv: mcast: fix duplicate mcast packets from BLA backbone to mesh
    bpf: Fix a rcu warning for bpffs map pretty-print
    ALSA: asihpi: fix iounmap in error handler
    regmap: fix page selection for noinc reads
    MIPS: Add the missing 'CPU_1074K' into __get_cpu_type()
    KVM: x86: Reset MMU context if guest toggles CR4.SMAP or CR4.PKE
    KVM: SVM: Add a dedicated INVD intercept routine
    tracing: fix double free
    s390/dasd: Fix zero write for FBA devices
    kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
    mm, THP, swap: fix allocating cluster for swapfile by mistake
    s390/zcrypt: Fix ZCRYPT_PERDEV_REQCNT ioctl
    kprobes: Fix compiler warning for !CONFIG_KPROBES_ON_FTRACE
    ata: define AC_ERR_OK
    ata: make qc_prep return ata_completion_errors
    ata: sata_mv, avoid trigerrable BUG_ON
    KVM: arm64: Assume write fault on S1PTW permission fault on instruction fetch
    Linux 4.19.149

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Idfc1b35ec63b4b464aeb6e32709102bee0efc872
commit 9ce79d9bed
278 changed files with 2367 additions and 1215 deletions
@@ -14,9 +14,15 @@ Required properties:
   - #gpio-cells : Must be 2. The first cell is the pin number and the
     second cell is used to specify optional parameters (currently unused).
 
-  - AVDD2-supply, DBVDD1-supply, DBVDD2-supply, DBVDD3-supply, CPVDD-supply,
-    SPKVDD1-supply, SPKVDD2-supply : power supplies for the device, as covered
-    in Documentation/devicetree/bindings/regulator/regulator.txt
+  - power supplies for the device, as covered in
+    Documentation/devicetree/bindings/regulator/regulator.txt, depending
+    on compatible:
+    - for wlf,wm1811 and wlf,wm8958:
+      AVDD1-supply, AVDD2-supply, DBVDD1-supply, DBVDD2-supply, DBVDD3-supply,
+      DCVDD-supply, CPVDD-supply, SPKVDD1-supply, SPKVDD2-supply
+    - for wlf,wm8994:
+      AVDD1-supply, AVDD2-supply, DBVDD-supply, DCVDD-supply, CPVDD-supply,
+      SPKVDD1-supply, SPKVDD2-supply
 
 Optional properties:
 
@@ -73,11 +79,11 @@ wm8994: codec@1a {
 
         lineout1-se;
 
+        AVDD1-supply = <&regulator>;
         AVDD2-supply = <&regulator>;
         CPVDD-supply = <&regulator>;
-        DBVDD1-supply = <&regulator>;
-        DBVDD2-supply = <&regulator>;
-        DBVDD3-supply = <&regulator>;
+        DBVDD-supply = <&regulator>;
+        DCVDD-supply = <&regulator>;
         SPKVDD1-supply = <&regulator>;
         SPKVDD2-supply = <&regulator>;
 };
@@ -250,7 +250,7 @@ High-level taskfile hooks
 
 ::
 
-    void (*qc_prep) (struct ata_queued_cmd *qc);
+    enum ata_completion_errors (*qc_prep) (struct ata_queued_cmd *qc);
     int (*qc_issue) (struct ata_queued_cmd *qc);
 
 
Makefile | 2 +-

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 148
+SUBLEVEL = 149
 EXTRAVERSION =
 NAME = "People's Front"
 
@@ -216,7 +216,7 @@ static inline int kvm_vcpu_dabt_get_rd(struct kvm_vcpu *vcpu)
         return (kvm_vcpu_get_hsr(vcpu) & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
 }
 
-static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
+static inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu)
 {
         return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
 }
 
@@ -248,16 +248,21 @@ static inline bool kvm_vcpu_trap_il_is32bit(struct kvm_vcpu *vcpu)
         return kvm_vcpu_get_hsr(vcpu) & HSR_IL;
 }
 
-static inline u8 kvm_vcpu_trap_get_class(struct kvm_vcpu *vcpu)
+static inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
 {
         return kvm_vcpu_get_hsr(vcpu) >> HSR_EC_SHIFT;
 }
 
-static inline bool kvm_vcpu_trap_is_iabt(struct kvm_vcpu *vcpu)
+static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
 {
         return kvm_vcpu_trap_get_class(vcpu) == HSR_EC_IABT;
 }
 
+static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
+{
+        return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
+}
+
 static inline u8 kvm_vcpu_trap_get_fault(struct kvm_vcpu *vcpu)
 {
         return kvm_vcpu_get_hsr(vcpu) & HSR_FSC;
@@ -115,6 +115,8 @@ static int save_trace(struct stackframe *frame, void *d)
                 return 0;
 
         regs = (struct pt_regs *)frame->sp;
+        if ((unsigned long)&regs[1] > ALIGN(frame->sp, THREAD_SIZE))
+                return 0;
 
         trace->entries[trace->nr_entries++] = regs->ARM_pc;
 
@@ -67,14 +67,16 @@ static void dump_mem(const char *, const char *, unsigned long, unsigned long);
 
 void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame)
 {
+        unsigned long end = frame + 4 + sizeof(struct pt_regs);
+
 #ifdef CONFIG_KALLSYMS
         printk("[<%08lx>] (%ps) from [<%08lx>] (%pS)\n", where, (void *)where, from, (void *)from);
 #else
         printk("Function entered at [<%08lx>] from [<%08lx>]\n", where, from);
 #endif
 
-        if (in_entry_text(from))
-                dump_mem("", "Exception stack", frame + 4, frame + 4 + sizeof(struct pt_regs));
+        if (in_entry_text(from) && end <= ALIGN(frame, THREAD_SIZE))
+                dump_mem("", "Exception stack", frame + 4, end);
 }
 
 void dump_backtrace_stm(u32 *stack, u32 instruction)
@@ -303,7 +303,7 @@ static inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
         return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
-static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+static inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu)
 {
         return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
@@ -311,7 +311,7 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
 {
         return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
-                kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+                kvm_vcpu_abt_iss1tw(vcpu); /* AF/DBM update */
 }
 
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 
@@ -340,6 +340,11 @@ static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
         return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
 }
 
+static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
+{
+        return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
+}
+
 static inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
         return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
@@ -155,11 +155,10 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
         S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
         S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
-        /* Linux doesn't care about the EL3 */
         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
-        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
-        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
         ARM64_FTR_END,
 };
 
@@ -301,7 +300,7 @@ static const struct arm64_ftr_bits ftr_id_pfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_dfr0[] = {
-        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
+        /* [31:28] TraceFilt */
         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0xf), /* PerfMon */
         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
 
@@ -671,9 +670,6 @@ void update_cpu_features(int cpu,
         taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
                                       info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
 
-        /*
-         * EL3 is not our concern.
-         */
         taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
                                       info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
         taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
@@ -430,7 +430,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
                         kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
                         kvm_vcpu_dabt_isvalid(vcpu) &&
                         !kvm_vcpu_dabt_isextabt(vcpu) &&
-                        !kvm_vcpu_dabt_iss1tw(vcpu);
+                        !kvm_vcpu_abt_iss1tw(vcpu);
 
                 if (valid) {
                         int ret = __vgic_v2_perform_cpuif_access(vcpu);
@@ -273,6 +273,7 @@ static int q40_get_rtc_pll(struct rtc_pll_info *pll)
 {
         int tmp = Q40_RTC_CTRL;
 
+        pll->pll_ctrl = 0;
         pll->pll_value = tmp & Q40_RTC_PLL_MASK;
         if (tmp & Q40_RTC_PLL_SIGN)
                 pll->pll_value = -pll->pll_value;
@@ -47,6 +47,7 @@ static inline int __pure __get_cpu_type(const int cpu_type)
         case CPU_34K:
         case CPU_1004K:
         case CPU_74K:
+        case CPU_1074K:
         case CPU_M14KC:
         case CPU_M14KEC:
         case CPU_INTERAPTIV:
@@ -163,4 +163,7 @@
 
 #define KVM_INST_FETCH_FAILED	-1
 
+/* Extract PO and XOP opcode fields */
+#define PO_XOP_OPCODE_MASK 0xfc0007fe
+
 #endif /* __POWERPC_KVM_ASM_H__ */
@@ -502,7 +502,7 @@ int eeh_dev_check_failure(struct eeh_dev *edev)
         rc = 1;
         if (pe->state & EEH_PE_ISOLATED) {
                 pe->check_count++;
-                if (pe->check_count % EEH_MAX_FAILS == 0) {
+                if (pe->check_count == EEH_MAX_FAILS) {
                         dn = pci_device_to_OF_node(dev);
                         if (dn)
                                 location = of_get_property(dn, "ibm,loc-code",
@@ -430,11 +430,11 @@ void system_reset_exception(struct pt_regs *regs)
 #ifdef CONFIG_PPC_BOOK3S_64
         BUG_ON(get_paca()->in_nmi == 0);
         if (get_paca()->in_nmi > 1)
-                nmi_panic(regs, "Unrecoverable nested System Reset");
+                die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
         /* Must die if the interrupt is not recoverable */
         if (!(regs->msr & MSR_RI))
-                nmi_panic(regs, "Unrecoverable System Reset");
+                die("Unrecoverable System Reset", regs, SIGABRT);
 
         if (!nested)
                 nmi_exit();
 
@@ -775,7 +775,7 @@ void machine_check_exception(struct pt_regs *regs)
 
         /* Must die if the interrupt is not recoverable */
         if (!(regs->msr & MSR_RI))
-                nmi_panic(regs, "Unrecoverable Machine check");
+                die("Unrecoverable Machine check", regs, SIGBUS);
 
         return;
@ -6,6 +6,8 @@
|
||||||
* published by the Free Software Foundation.
|
* published by the Free Software Foundation.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
|
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||||
|
|
||||||
#include <linux/kvm_host.h>
|
#include <linux/kvm_host.h>
|
||||||
|
|
||||||
#include <asm/kvm_ppc.h>
|
#include <asm/kvm_ppc.h>
|
||||||
|
@ -47,7 +49,18 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
|
||||||
u64 newmsr, bescr;
|
u64 newmsr, bescr;
|
||||||
int ra, rs;
|
int ra, rs;
|
||||||
|
|
||||||
switch (instr & 0xfc0007ff) {
|
/*
+         * rfid, rfebb, and mtmsrd encode bit 31 = 0 since it's a reserved bit
+         * in these instructions, so masking bit 31 out doesn't change these
+         * instructions. For treclaim., tsr., and trechkpt. instructions if bit
+         * 31 = 0 then they are per ISA invalid forms, however P9 UM, in section
+         * 4.6.10 Book II Invalid Forms, informs specifically that ignoring bit
+         * 31 is an acceptable way to handle these invalid forms that have
+         * bit 31 = 0. Moreover, for emulation purposes both forms (w/ and wo/
+         * bit 31 set) can generate a softpatch interrupt. Hence both forms
+         * are handled below for these instructions so they behave the same way.
+         */
+        switch (instr & PO_XOP_OPCODE_MASK) {
         case PPC_INST_RFID:
                 /* XXX do we need to check for PR=0 here? */
                 newmsr = vcpu->arch.shregs.srr1;
@@ -108,7 +121,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
                 vcpu->arch.shregs.msr = newmsr;
                 return RESUME_GUEST;
 
-        case PPC_INST_TSR:
+        /* ignore bit 31, see comment above */
+        case (PPC_INST_TSR & PO_XOP_OPCODE_MASK):
                 /* check for PR=1 and arch 2.06 bit set in PCR */
                 if ((msr & MSR_PR) && (vcpu->arch.vcore->pcr & PCR_ARCH_206)) {
                         /* generate an illegal instruction interrupt */
@@ -143,7 +157,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
                 vcpu->arch.shregs.msr = msr;
                 return RESUME_GUEST;
 
-        case PPC_INST_TRECLAIM:
+        /* ignore bit 31, see comment above */
+        case (PPC_INST_TRECLAIM & PO_XOP_OPCODE_MASK):
                 /* check for TM disabled in the HFSCR or MSR */
                 if (!(vcpu->arch.hfscr & HFSCR_TM)) {
                         /* generate an illegal instruction interrupt */
@@ -179,7 +194,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
                 vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
                 return RESUME_GUEST;
 
-        case PPC_INST_TRECHKPT:
+        /* ignore bit 31, see comment above */
+        case (PPC_INST_TRECHKPT & PO_XOP_OPCODE_MASK):
                 /* XXX do we need to check for PR=0 here? */
                 /* check for TM disabled in the HFSCR or MSR */
                 if (!(vcpu->arch.hfscr & HFSCR_TM)) {
@@ -211,6 +227,8 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
         }
 
         /* What should we do here? We didn't recognize the instruction */
-        WARN_ON_ONCE(1);
+        kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
+        pr_warn_ratelimited("Unrecognized TM-related instruction %#x for emulation", instr);
 
         return RESUME_GUEST;
 }
@@ -26,7 +26,18 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
         u64 newmsr, msr, bescr;
         int rs;
 
-        switch (instr & 0xfc0007ff) {
+        /*
+         * rfid, rfebb, and mtmsrd encode bit 31 = 0 since it's a reserved bit
+         * in these instructions, so masking bit 31 out doesn't change these
+         * instructions. For the tsr. instruction if bit 31 = 0 then it is per
+         * ISA an invalid form, however P9 UM, in section 4.6.10 Book II Invalid
+         * Forms, informs specifically that ignoring bit 31 is an acceptable way
+         * to handle TM-related invalid forms that have bit 31 = 0. Moreover,
+         * for emulation purposes both forms (w/ and wo/ bit 31 set) can
+         * generate a softpatch interrupt. Hence both forms are handled below
+         * for tsr. to make them behave the same way.
+         */
+        switch (instr & PO_XOP_OPCODE_MASK) {
         case PPC_INST_RFID:
                 /* XXX do we need to check for PR=0 here? */
                 newmsr = vcpu->arch.shregs.srr1;
@@ -76,7 +87,8 @@ int kvmhv_p9_tm_emulation_early(struct kvm_vcpu *vcpu)
                 vcpu->arch.shregs.msr = newmsr;
                 return 1;
 
-        case PPC_INST_TSR:
+        /* ignore bit 31, see comment above */
+        case (PPC_INST_TSR & PO_XOP_OPCODE_MASK):
                 /* we know the MSR has the TS field = S (0b01) here */
                 msr = vcpu->arch.shregs.msr;
                 /* check for PR=1 and arch 2.06 bit set in PCR */
@@ -63,4 +63,11 @@ do {									\
  * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
  */
 #define MCOUNT_INSN_SIZE 8
+
+#ifndef __ASSEMBLY__
+struct dyn_ftrace;
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
+#define ftrace_init_nop ftrace_init_nop
+#endif
+
 #endif
@@ -88,6 +88,25 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
         return __ftrace_modify_call(rec->ip, addr, false);
 }
 
+
+/*
+ * This is called early on, and isn't wrapped by
+ * ftrace_arch_code_modify_{prepare,post_process}() and therefor doesn't hold
+ * text_mutex, which triggers a lockdep failure. SMP isn't running so we could
+ * just directly poke the text, but it's simpler to just take the lock
+ * ourselves.
+ */
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+{
+        int out;
+
+        ftrace_arch_code_modify_prepare();
+        out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
+        ftrace_arch_code_modify_post_process();
+
+        return out;
+}
+
 int ftrace_update_ftrace_func(ftrace_func_t func)
 {
         int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
@@ -1377,8 +1377,8 @@ static int aux_output_begin(struct perf_output_handle *handle,
         idx = aux->empty_mark + 1;
         for (i = 0; i < range_scan; i++, idx++) {
                 te = aux_sdb_trailer(aux, idx);
-                te->flags = te->flags & ~SDB_TE_BUFFER_FULL_MASK;
-                te->flags = te->flags & ~SDB_TE_ALERT_REQ_MASK;
+                te->flags &= ~(SDB_TE_BUFFER_FULL_MASK |
+                               SDB_TE_ALERT_REQ_MASK);
                 te->overflow = 0;
         }
         /* Save the position of empty SDBs */
@@ -1425,8 +1425,7 @@ static bool aux_set_alert(struct aux_buffer *aux, unsigned long alert_index,
         te = aux_sdb_trailer(aux, alert_index);
         do {
                 orig_flags = te->flags;
-                orig_overflow = te->overflow;
-                *overflow = orig_overflow;
+                *overflow = orig_overflow = te->overflow;
                 if (orig_flags & SDB_TE_BUFFER_FULL_MASK) {
                         /*
                          * SDB is already set by hardware.
@@ -1660,7 +1659,7 @@ static void *aux_buffer_setup(struct perf_event *event, void **pages,
         }
 
         /* Allocate aux_buffer struct for the event */
-        aux = kmalloc(sizeof(struct aux_buffer), GFP_KERNEL);
+        aux = kzalloc(sizeof(struct aux_buffer), GFP_KERNEL);
         if (!aux)
                 goto no_aux;
         sfb = &aux->sfb;
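The aux_set_alert() hunk above folds the overflow read into the assignment inside the retry loop, so the value reported to the caller is always the one read in the same iteration that the compare-and-swap acts on. A minimal user-space sketch of that read-modify-write retry pattern, using C11 atomics (names and the flag value are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

#define ALERT_BIT 0x1u

/* Set a flag bit with a CAS retry loop.  Every input the loop depends
 * on is re-read inside the loop body, and the value reported through
 * *seen is the same one the successful CAS was based on. */
static unsigned int set_alert_flag(_Atomic unsigned int *word,
                                   unsigned int *seen)
{
    unsigned int old, new;

    do {
        old = atomic_load(word);   /* re-read on every retry */
        *seen = old;               /* report the value we acted on */
        new = old | ALERT_BIT;
    } while (!atomic_compare_exchange_weak(word, &old, new));

    return new;
}
```

Splitting the read from the report (as the old code did) risks returning a stale value after a retry; the combined assignment makes that impossible by construction.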
@@ -537,7 +537,7 @@ static struct notifier_block kdump_mem_nb = {
 /*
  * Make sure that the area behind memory_end is protected
  */
-static void reserve_memory_end(void)
+static void __init reserve_memory_end(void)
 {
 #ifdef CONFIG_CRASH_DUMP
         if (ipl_info.type == IPL_TYPE_FCP_DUMP &&
@@ -555,7 +555,7 @@ static void reserve_memory_end(void)
 /*
  * Make sure that oldmem, where the dump is stored, is protected
  */
-static void reserve_oldmem(void)
+static void __init reserve_oldmem(void)
 {
 #ifdef CONFIG_CRASH_DUMP
         if (OLDMEM_BASE)
@@ -567,7 +567,7 @@ static void reserve_oldmem(void)
 /*
  * Make sure that oldmem, where the dump is stored, is protected
  */
-static void remove_oldmem(void)
+static void __init remove_oldmem(void)
 {
 #ifdef CONFIG_CRASH_DUMP
         if (OLDMEM_BASE)
@@ -330,7 +330,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
  * combination with microcode which triggers a CPU buffer flush when the
  * instruction is executed.
  */
-static inline void mds_clear_cpu_buffers(void)
+static __always_inline void mds_clear_cpu_buffers(void)
 {
         static const u16 ds = __KERNEL_DS;
 
@@ -351,7 +351,7 @@ static inline void mds_clear_cpu_buffers(void)
  *
  * Clear CPU buffers if the corresponding static key is enabled
  */
-static inline void mds_user_clear_cpu_buffers(void)
+static __always_inline void mds_user_clear_cpu_buffers(void)
 {
         if (static_branch_likely(&mds_user_clear))
                 mds_clear_cpu_buffers();
@@ -4,6 +4,11 @@
 
 #define ARCH_DEFAULT_PKEY	0
 
+/*
+ * If more than 16 keys are ever supported, a thorough audit
+ * will be necessary to ensure that the types that store key
+ * numbers and masks have sufficient capacity.
+ */
 #define arch_max_pkey() (boot_cpu_has(X86_FEATURE_OSPKE) ? 16 : 1)
 
 extern int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
@@ -2250,6 +2250,7 @@ static inline void __init check_timer(void)
         legacy_pic->init(0);
         legacy_pic->make_irq(0);
         apic_write(APIC_LVT0, APIC_DM_EXTINT);
+        legacy_pic->unmask(0);
 
         unlock_ExtINT_logic();
 
@@ -907,8 +907,6 @@ const void *get_xsave_field_ptr(int xsave_state)
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
 
-#define NR_VALID_PKRU_BITS (CONFIG_NR_PROTECTION_KEYS * 2)
-#define PKRU_VALID_MASK (NR_VALID_PKRU_BITS - 1)
 /*
  * This will go out and modify PKRU register to set the access
  * rights for @pkey to @init_val.
@@ -927,6 +925,13 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
         if (!boot_cpu_has(X86_FEATURE_OSPKE))
                 return -EINVAL;
 
+        /*
+         * This code should only be called with valid 'pkey'
+         * values originating from in-kernel users. Complain
+         * if a bad value is observed.
+         */
+        WARN_ON_ONCE(pkey >= arch_max_pkey());
+
         /* Set the bits we need in PKRU:  */
         if (init_val & PKEY_DISABLE_ACCESS)
                 new_pkru_bits |= PKRU_AD_BIT;
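The pkeys hunks above rely on the PKRU layout: the register holds two bits per protection key (access-disable and write-disable), which is why an out-of-range pkey would shift bits past the register and must be caught by the new WARN_ON_ONCE(). A small sketch of that bit arithmetic (my own helper, not kernel code; the bit values mirror the x86 layout):

```c
#include <assert.h>

#define PKRU_AD_BIT 0x1u   /* access disable */
#define PKRU_WD_BIT 0x2u   /* write disable */

/* Compute the PKRU contribution of one protection key: key N's
 * two-bit pair lives at bit position 2*N. */
static unsigned int pkru_bits_for_key(int pkey, int disable_access,
                                      int disable_write)
{
    unsigned int bits = 0;

    if (disable_access)
        bits |= PKRU_AD_BIT;
    if (disable_write)
        bits |= PKRU_WD_BIT;

    return bits << (2 * pkey);   /* shift the pair into the key's slot */
}
```

With 16 keys the pairs exactly fill a 32-bit register; a 17th key would need a wider type, which is what the comment added to pkeys.h warns about.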
@@ -339,7 +339,7 @@ TRACE_EVENT(
                 /* These depend on page entry type, so compute them now.  */
                 __field(bool, r)
                 __field(bool, x)
-                __field(u8, u)
+                __field(signed char, u)
         ),
 
         TP_fast_assign(
@@ -3942,6 +3942,12 @@ static int iret_interception(struct vcpu_svm *svm)
         return 1;
 }
 
+static int invd_interception(struct vcpu_svm *svm)
+{
+        /* Treat an INVD instruction as a NOP and just skip it. */
+        return kvm_skip_emulated_instruction(&svm->vcpu);
+}
+
 static int invlpg_interception(struct vcpu_svm *svm)
 {
         if (!static_cpu_has(X86_FEATURE_DECODEASSISTS))
@@ -4831,7 +4837,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
         [SVM_EXIT_RDPMC]                        = rdpmc_interception,
         [SVM_EXIT_CPUID]                        = cpuid_interception,
         [SVM_EXIT_IRET]                         = iret_interception,
-        [SVM_EXIT_INVD]                         = emulate_on_interception,
+        [SVM_EXIT_INVD]                         = invd_interception,
         [SVM_EXIT_PAUSE]                        = pause_interception,
         [SVM_EXIT_HLT]                          = halt_interception,
         [SVM_EXIT_INVLPG]                       = invlpg_interception,
@@ -858,6 +858,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
         unsigned long old_cr4 = kvm_read_cr4(vcpu);
         unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
                                    X86_CR4_SMEP;
+        unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;
 
         if (kvm_valid_cr4(vcpu, cr4))
                 return 1;
@@ -885,7 +886,7 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
         if (kvm_x86_ops->set_cr4(vcpu, cr4))
                 return 1;
 
-        if (((cr4 ^ old_cr4) & pdptr_bits) ||
+        if (((cr4 ^ old_cr4) & mmu_role_bits) ||
             (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
                 kvm_mmu_reset_context(vcpu);
 
@@ -4668,10 +4669,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
                 r = -EFAULT;
                 if (copy_from_user(&u.ps, argp, sizeof u.ps))
                         goto out;
+                mutex_lock(&kvm->lock);
                 r = -ENXIO;
                 if (!kvm->arch.vpit)
-                        goto out;
+                        goto set_pit_out;
                 r = kvm_vm_ioctl_set_pit(kvm, &u.ps);
+set_pit_out:
+                mutex_unlock(&kvm->lock);
                 break;
         }
         case KVM_GET_PIT2: {
@@ -4691,10 +4695,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
                 r = -EFAULT;
                 if (copy_from_user(&u.ps2, argp, sizeof(u.ps2)))
                         goto out;
+                mutex_lock(&kvm->lock);
                 r = -ENXIO;
                 if (!kvm->arch.vpit)
-                        goto out;
+                        goto set_pit2_out;
                 r = kvm_vm_ioctl_set_pit2(kvm, &u.ps2);
+set_pit2_out:
+                mutex_unlock(&kvm->lock);
                 break;
         }
         case KVM_REINJECT_CONTROL: {
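The KVM_SET_PIT/KVM_SET_PIT2 hunks above take kvm->lock around the vpit check and redirect the error exit to a label that drops the lock, so no failure path can leave the mutex held. A user-space sketch of that goto-unlock pattern, with pthreads standing in for the kernel mutex (names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Once the lock is taken, every exit funnels through out_unlock, so
 * the early -1 ("-ENXIO" in the kernel code) path cannot return with
 * the mutex still held. */
static int configure_device(void *dev)
{
    int r;

    pthread_mutex_lock(&lock);
    r = -1;
    if (!dev)
        goto out_unlock;   /* was "goto out", which skipped the unlock */
    r = 0;                 /* the real work would happen here */
out_unlock:
    pthread_mutex_unlock(&lock);
    return r;
}
```

Jumping to the function-wide `out` label from inside the locked region, as the old code did, is exactly the kind of early return that leaks a held lock.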
@@ -139,7 +139,7 @@ long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
          */
         if (size < 8) {
                 if (!IS_ALIGNED(dest, 4) || size != 4)
-                        clean_cache_range(dst, 1);
+                        clean_cache_range(dst, size);
         } else {
                 if (!IS_ALIGNED(dest, 8)) {
                         dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
@@ -1080,29 +1080,21 @@ void acpi_ec_dispatch_gpe(void)
 /* --------------------------------------------------------------------------
                                Event Management
    -------------------------------------------------------------------------- */
-static struct acpi_ec_query_handler *
-acpi_ec_get_query_handler(struct acpi_ec_query_handler *handler)
-{
-        if (handler)
-                kref_get(&handler->kref);
-        return handler;
-}
-
 static struct acpi_ec_query_handler *
 acpi_ec_get_query_handler_by_value(struct acpi_ec *ec, u8 value)
 {
         struct acpi_ec_query_handler *handler;
-        bool found = false;
 
         mutex_lock(&ec->mutex);
         list_for_each_entry(handler, &ec->list, node) {
                 if (value == handler->query_bit) {
-                        found = true;
-                        break;
+                        kref_get(&handler->kref);
+                        mutex_unlock(&ec->mutex);
+                        return handler;
                 }
         }
         mutex_unlock(&ec->mutex);
-        return found ? acpi_ec_get_query_handler(handler) : NULL;
+        return NULL;
 }
 
 static void acpi_ec_query_handler_release(struct kref *kref)
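The EC hunk above moves the kref_get() inside the locked list walk, so the reference is taken before the mutex is dropped and the handler cannot be freed between lookup and get. A simplified, single-threaded sketch of that lookup-and-get shape (plain arrays and a counter stand in for the kernel's list and kref; names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct handler {
    int query_bit;
    int refcount;
};

/* Find a handler by value and take a reference on it while "holding
 * the lock" (the whole function body models the locked region).  Not
 * found means no reference is taken and NULL is returned. */
static struct handler *get_handler_by_value(struct handler *list,
                                            int n, int value)
{
    for (int i = 0; i < n; i++) {
        if (list[i].query_bit == value) {
            list[i].refcount++;   /* kref_get() before dropping the lock */
            return &list[i];
        }
    }
    return NULL;
}
```

Deferring the get until after the lock is released, as the removed helper allowed, opens a window where a concurrent unregister could drop the last reference first.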
@@ -72,7 +72,7 @@ struct acard_sg {
         __le32                  size;    /* bit 31 (EOT) max==0x10000 (64k) */
 };
 
-static void acard_ahci_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors acard_ahci_qc_prep(struct ata_queued_cmd *qc);
 static bool acard_ahci_qc_fill_rtf(struct ata_queued_cmd *qc);
 static int acard_ahci_port_start(struct ata_port *ap);
 static int acard_ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
@@ -257,7 +257,7 @@ static unsigned int acard_ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
         return si;
 }
 
-static void acard_ahci_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors acard_ahci_qc_prep(struct ata_queued_cmd *qc)
 {
         struct ata_port *ap = qc->ap;
         struct ahci_port_priv *pp = ap->private_data;
@@ -295,6 +295,8 @@ static void acard_ahci_qc_prep(struct ata_queued_cmd *qc)
                 opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH;
 
         ahci_fill_cmd_slot(pp, qc->hw_tag, opts);
+
+        return AC_ERR_OK;
 }
 
 static bool acard_ahci_qc_fill_rtf(struct ata_queued_cmd *qc)
@@ -73,7 +73,7 @@ static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc);
 static int ahci_port_start(struct ata_port *ap);
 static void ahci_port_stop(struct ata_port *ap);
-static void ahci_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors ahci_qc_prep(struct ata_queued_cmd *qc);
 static int ahci_pmp_qc_defer(struct ata_queued_cmd *qc);
 static void ahci_freeze(struct ata_port *ap);
 static void ahci_thaw(struct ata_port *ap);
@@ -1640,7 +1640,7 @@ static int ahci_pmp_qc_defer(struct ata_queued_cmd *qc)
                 return sata_pmp_qc_defer_cmd_switch(qc);
 }
 
-static void ahci_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors ahci_qc_prep(struct ata_queued_cmd *qc)
 {
         struct ata_port *ap = qc->ap;
         struct ahci_port_priv *pp = ap->private_data;
@@ -1676,6 +1676,8 @@ static void ahci_qc_prep(struct ata_queued_cmd *qc)
                 opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH;
 
         ahci_fill_cmd_slot(pp, qc->hw_tag, opts);
+
+        return AC_ERR_OK;
 }
 
 static void ahci_fbs_dec_intr(struct ata_port *ap)
@@ -4996,7 +4996,10 @@ int ata_std_qc_defer(struct ata_queued_cmd *qc)
         return ATA_DEFER_LINK;
 }
 
-void ata_noop_qc_prep(struct ata_queued_cmd *qc) { }
+enum ata_completion_errors ata_noop_qc_prep(struct ata_queued_cmd *qc)
+{
+        return AC_ERR_OK;
+}
 
 /**
  *	ata_sg_init - Associate command with scatter-gather table.
@@ -5483,7 +5486,9 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
                 return;
         }
 
-        ap->ops->qc_prep(qc);
+        qc->err_mask |= ap->ops->qc_prep(qc);
+        if (unlikely(qc->err_mask))
+                goto err;
         trace_ata_qc_issue(qc);
         qc->err_mask |= ap->ops->qc_issue(qc);
         if (unlikely(qc->err_mask))
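The libata hunks above (and the driver hunks that follow) all apply one interface change: qc_prep() now returns an ata_completion_errors value instead of void, and ata_qc_issue() ORs that into the command's err_mask and bails out before issuing. A simplified sketch of that void-to-status conversion in the caller (my own toy types and error value, not libata's):

```c
#include <assert.h>

/* Toy stand-ins for ata_completion_errors / ata_queued_cmd. */
enum completion_errors { ERR_OK = 0, ERR_SYSTEM = 0x40 };

struct cmd {
    unsigned int err_mask;
    int issued;
};

static enum completion_errors prep_fail(struct cmd *c) { (void)c; return ERR_SYSTEM; }
static enum completion_errors prep_ok(struct cmd *c)   { (void)c; return ERR_OK; }

/* Mirrors ata_qc_issue(): accumulate the prep status into err_mask
 * and stop before issuing if anything went wrong. */
static void issue(struct cmd *c, enum completion_errors (*prep)(struct cmd *))
{
    c->err_mask |= prep(c);   /* qc->err_mask |= ap->ops->qc_prep(qc); */
    if (c->err_mask)
        return;               /* "goto err;" in the kernel code */
    c->issued = 1;            /* tracing and ->qc_issue() would follow */
}
```

Because the status is OR-ed into an accumulating mask rather than assigned, a prep failure composes with any error the later issue step might add, which is why every qc_prep implementation in the following hunks now ends with an explicit AC_ERR_OK.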
@@ -2695,12 +2695,14 @@ static void ata_bmdma_fill_sg_dumb(struct ata_queued_cmd *qc)
  *	LOCKING:
  *	spin_lock_irqsave(host lock)
  */
-void ata_bmdma_qc_prep(struct ata_queued_cmd *qc)
+enum ata_completion_errors ata_bmdma_qc_prep(struct ata_queued_cmd *qc)
 {
         if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-                return;
+                return AC_ERR_OK;
 
         ata_bmdma_fill_sg(qc);
+
+        return AC_ERR_OK;
 }
 EXPORT_SYMBOL_GPL(ata_bmdma_qc_prep);
 
@@ -2713,12 +2715,14 @@ EXPORT_SYMBOL_GPL(ata_bmdma_qc_prep);
  *	LOCKING:
  *	spin_lock_irqsave(host lock)
  */
-void ata_bmdma_dumb_qc_prep(struct ata_queued_cmd *qc)
+enum ata_completion_errors ata_bmdma_dumb_qc_prep(struct ata_queued_cmd *qc)
 {
         if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-                return;
+                return AC_ERR_OK;
 
         ata_bmdma_fill_sg_dumb(qc);
+
+        return AC_ERR_OK;
 }
 EXPORT_SYMBOL_GPL(ata_bmdma_dumb_qc_prep);
@@ -507,7 +507,7 @@ static int pata_macio_cable_detect(struct ata_port *ap)
         return ATA_CBL_PATA40;
 }
 
-static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
 {
         unsigned int write = (qc->tf.flags & ATA_TFLAG_WRITE);
         struct ata_port *ap = qc->ap;
@@ -520,7 +520,7 @@ static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
                    __func__, qc, qc->flags, write, qc->dev->devno);
 
         if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-                return;
+                return AC_ERR_OK;
 
         table = (struct dbdma_cmd *) priv->dma_table_cpu;
 
@@ -565,6 +565,8 @@ static void pata_macio_qc_prep(struct ata_queued_cmd *qc)
         table->command = cpu_to_le16(DBDMA_STOP);
 
         dev_dbgdma(priv->dev, "%s: %d DMA list entries\n", __func__, pi);
+
+        return AC_ERR_OK;
 }
@@ -58,25 +58,27 @@ static void pxa_ata_dma_irq(void *d)
 /*
  * Prepare taskfile for submission.
  */
-static void pxa_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pxa_qc_prep(struct ata_queued_cmd *qc)
 {
         struct pata_pxa_data *pd = qc->ap->private_data;
         struct dma_async_tx_descriptor *tx;
         enum dma_transfer_direction dir;
 
         if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-                return;
+                return AC_ERR_OK;
 
         dir = (qc->dma_dir == DMA_TO_DEVICE ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM);
         tx = dmaengine_prep_slave_sg(pd->dma_chan, qc->sg, qc->n_elem, dir,
                                      DMA_PREP_INTERRUPT);
         if (!tx) {
                 ata_dev_err(qc->dev, "prep_slave_sg() failed\n");
-                return;
+                return AC_ERR_OK;
         }
         tx->callback = pxa_ata_dma_irq;
         tx->callback_param = pd;
         pd->dma_cookie = dmaengine_submit(tx);
+
+        return AC_ERR_OK;
 }
 
 /*
@@ -132,7 +132,7 @@ static int adma_ata_init_one(struct pci_dev *pdev,
                              const struct pci_device_id *ent);
 static int adma_port_start(struct ata_port *ap);
 static void adma_port_stop(struct ata_port *ap);
-static void adma_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors adma_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int adma_qc_issue(struct ata_queued_cmd *qc);
 static int adma_check_atapi_dma(struct ata_queued_cmd *qc);
 static void adma_freeze(struct ata_port *ap);
@@ -311,7 +311,7 @@ static int adma_fill_sg(struct ata_queued_cmd *qc)
         return i;
 }
 
-static void adma_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors adma_qc_prep(struct ata_queued_cmd *qc)
 {
         struct adma_port_priv *pp = qc->ap->private_data;
         u8 *buf = pp->pkt;
@@ -322,7 +322,7 @@ static void adma_qc_prep(struct ata_queued_cmd *qc)
 
         adma_enter_reg_mode(qc->ap);
         if (qc->tf.protocol != ATA_PROT_DMA)
-                return;
+                return AC_ERR_OK;
 
         buf[i++] = 0;   /* Response flags */
         buf[i++] = 0;   /* reserved */
@@ -387,6 +387,7 @@ static void adma_qc_prep(struct ata_queued_cmd *qc)
                 printk("%s\n", obuf);
         }
 #endif
+        return AC_ERR_OK;
 }
 
 static inline void adma_packet_start(struct ata_queued_cmd *qc)
@@ -507,7 +507,7 @@ static unsigned int sata_fsl_fill_sg(struct ata_queued_cmd *qc, void *cmd_desc,
         return num_prde;
 }
 
-static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sata_fsl_qc_prep(struct ata_queued_cmd *qc)
 {
         struct ata_port *ap = qc->ap;
         struct sata_fsl_port_priv *pp = ap->private_data;
@@ -553,6 +553,8 @@ static void sata_fsl_qc_prep(struct ata_queued_cmd *qc)
 
         VPRINTK("SATA FSL : xx_qc_prep, di = 0x%x, ttl = %d, num_prde = %d\n",
                 desc_info, ttl_dwords, num_prde);
+
+        return AC_ERR_OK;
 }
 
 static unsigned int sata_fsl_qc_issue(struct ata_queued_cmd *qc)
@@ -472,7 +472,7 @@ static void inic_fill_sg(struct inic_prd *prd, struct ata_queued_cmd *qc)
         prd[-1].flags |= PRD_END;
 }
 
-static void inic_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors inic_qc_prep(struct ata_queued_cmd *qc)
 {
         struct inic_port_priv *pp = qc->ap->private_data;
         struct inic_pkt *pkt = pp->pkt;
@@ -532,6 +532,8 @@ static void inic_qc_prep(struct ata_queued_cmd *qc)
         inic_fill_sg(prd, qc);
 
         pp->cpb_tbl[0] = pp->pkt_dma;
+
+        return AC_ERR_OK;
 }
 
 static unsigned int inic_qc_issue(struct ata_queued_cmd *qc)
@@ -605,8 +605,8 @@ static int mv5_scr_write(struct ata_link *link, unsigned int sc_reg_in, u32 val)
 static int mv_port_start(struct ata_port *ap);
 static void mv_port_stop(struct ata_port *ap);
 static int mv_qc_defer(struct ata_queued_cmd *qc);
-static void mv_qc_prep(struct ata_queued_cmd *qc);
-static void mv_qc_prep_iie(struct ata_queued_cmd *qc);
+static enum ata_completion_errors mv_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors mv_qc_prep_iie(struct ata_queued_cmd *qc);
 static unsigned int mv_qc_issue(struct ata_queued_cmd *qc);
 static int mv_hardreset(struct ata_link *link, unsigned int *class,
 			unsigned long deadline);
@@ -2044,7 +2044,7 @@ static void mv_rw_multi_errata_sata24(struct ata_queued_cmd *qc)
  * LOCKING:
  *      Inherited from caller.
  */
-static void mv_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors mv_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
@@ -2056,15 +2056,15 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
 	switch (tf->protocol) {
 	case ATA_PROT_DMA:
 		if (tf->command == ATA_CMD_DSM)
-			return;
+			return AC_ERR_OK;
 		/* fall-thru */
 	case ATA_PROT_NCQ:
 		break;	/* continue below */
 	case ATA_PROT_PIO:
 		mv_rw_multi_errata_sata24(qc);
-		return;
+		return AC_ERR_OK;
 	default:
-		return;
+		return AC_ERR_OK;
 	}
 
 	/* Fill in command request block
@@ -2111,12 +2111,10 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
 		 * non-NCQ mode are: [RW] STREAM DMA and W DMA FUA EXT, none
 		 * of which are defined/used by Linux.  If we get here, this
 		 * driver needs work.
-		 *
-		 * FIXME: modify libata to give qc_prep a return value and
-		 * return error here.
 		 */
-		BUG_ON(tf->command);
-		break;
+		ata_port_err(ap, "%s: unsupported command: %.2x\n", __func__,
+				tf->command);
+		return AC_ERR_INVALID;
 	}
 	mv_crqb_pack_cmd(cw++, tf->nsect, ATA_REG_NSECT, 0);
 	mv_crqb_pack_cmd(cw++, tf->hob_lbal, ATA_REG_LBAL, 0);
@@ -2129,8 +2127,10 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
 	mv_crqb_pack_cmd(cw++, tf->command, ATA_REG_CMD, 1);	/* last */
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 	mv_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 /**
@@ -2145,7 +2145,7 @@ static void mv_qc_prep(struct ata_queued_cmd *qc)
  * LOCKING:
  *      Inherited from caller.
  */
-static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
+static enum ata_completion_errors mv_qc_prep_iie(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
@@ -2156,9 +2156,9 @@ static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
 
 	if ((tf->protocol != ATA_PROT_DMA) &&
 	    (tf->protocol != ATA_PROT_NCQ))
-		return;
+		return AC_ERR_OK;
 	if (tf->command == ATA_CMD_DSM)
-		return;  /* use bmdma for this */
+		return AC_ERR_OK;  /* use bmdma for this */
 
 	/* Fill in Gen IIE command request block */
 	if (!(tf->flags & ATA_TFLAG_WRITE))
@@ -2199,8 +2199,10 @@ static void mv_qc_prep_iie(struct ata_queued_cmd *qc)
 			);
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 	mv_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 /**

@@ -313,7 +313,7 @@ static void nv_ck804_freeze(struct ata_port *ap);
 static void nv_ck804_thaw(struct ata_port *ap);
 static int nv_adma_slave_config(struct scsi_device *sdev);
 static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
-static void nv_adma_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc);
 static irqreturn_t nv_adma_interrupt(int irq, void *dev_instance);
 static void nv_adma_irq_clear(struct ata_port *ap);
@@ -335,7 +335,7 @@ static void nv_mcp55_freeze(struct ata_port *ap);
 static void nv_swncq_error_handler(struct ata_port *ap);
 static int nv_swncq_slave_config(struct scsi_device *sdev);
 static int nv_swncq_port_start(struct ata_port *ap);
-static void nv_swncq_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc);
 static void nv_swncq_fill_sg(struct ata_queued_cmd *qc);
 static unsigned int nv_swncq_qc_issue(struct ata_queued_cmd *qc);
 static void nv_swncq_irq_clear(struct ata_port *ap, u16 fis);
@@ -1365,7 +1365,7 @@ static int nv_adma_use_reg_mode(struct ata_queued_cmd *qc)
 	return 1;
 }
 
-static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct nv_adma_port_priv *pp = qc->ap->private_data;
 	struct nv_adma_cpb *cpb = &pp->cpb[qc->hw_tag];
@@ -1377,7 +1377,7 @@ static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
 			(qc->flags & ATA_QCFLAG_DMAMAP));
 		nv_adma_register_mode(qc->ap);
 		ata_bmdma_qc_prep(qc);
-		return;
+		return AC_ERR_OK;
 	}
 
 	cpb->resp_flags = NV_CPB_RESP_DONE;
@@ -1409,6 +1409,8 @@ static void nv_adma_qc_prep(struct ata_queued_cmd *qc)
 	cpb->ctl_flags = ctl_flags;
 	wmb();
 	cpb->resp_flags = 0;
+
+	return AC_ERR_OK;
 }
 
 static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc)
@@ -1972,17 +1974,19 @@ static int nv_swncq_port_start(struct ata_port *ap)
 	return 0;
 }
 
-static void nv_swncq_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (qc->tf.protocol != ATA_PROT_NCQ) {
 		ata_bmdma_qc_prep(qc);
-		return;
+		return AC_ERR_OK;
 	}
 
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	nv_swncq_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static void nv_swncq_fill_sg(struct ata_queued_cmd *qc)

@@ -155,7 +155,7 @@ static int pdc_sata_scr_write(struct ata_link *link, unsigned int sc_reg, u32 va
 static int pdc_ata_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
 static int pdc_common_port_start(struct ata_port *ap);
 static int pdc_sata_port_start(struct ata_port *ap);
-static void pdc_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors pdc_qc_prep(struct ata_queued_cmd *qc);
 static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static int pdc_check_atapi_dma(struct ata_queued_cmd *qc);
@@ -649,7 +649,7 @@ static void pdc_fill_sg(struct ata_queued_cmd *qc)
 	prd[idx - 1].flags_len |= cpu_to_le32(ATA_PRD_EOT);
 }
 
-static void pdc_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pdc_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct pdc_port_priv *pp = qc->ap->private_data;
 	unsigned int i;
@@ -681,6 +681,8 @@ static void pdc_qc_prep(struct ata_queued_cmd *qc)
 	default:
 		break;
 	}
+
+	return AC_ERR_OK;
 }
 
 static int pdc_is_sataii_tx4(unsigned long flags)

@@ -116,7 +116,7 @@ static int qs_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static int qs_ata_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
 static int qs_port_start(struct ata_port *ap);
 static void qs_host_stop(struct ata_host *host);
-static void qs_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors qs_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int qs_qc_issue(struct ata_queued_cmd *qc);
 static int qs_check_atapi_dma(struct ata_queued_cmd *qc);
 static void qs_freeze(struct ata_port *ap);
@@ -276,7 +276,7 @@ static unsigned int qs_fill_sg(struct ata_queued_cmd *qc)
 	return si;
 }
 
-static void qs_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors qs_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct qs_port_priv *pp = qc->ap->private_data;
 	u8 dflags = QS_DF_PORD, *buf = pp->pkt;
@@ -288,7 +288,7 @@ static void qs_qc_prep(struct ata_queued_cmd *qc)
 
 	qs_enter_reg_mode(qc->ap);
 	if (qc->tf.protocol != ATA_PROT_DMA)
-		return;
+		return AC_ERR_OK;
 
 	nelem = qs_fill_sg(qc);
 
@@ -311,6 +311,8 @@ static void qs_qc_prep(struct ata_queued_cmd *qc)
 
 	/* frame information structure (FIS) */
 	ata_tf_to_fis(&qc->tf, 0, 1, &buf[32]);
+
+	return AC_ERR_OK;
 }
 
 static inline void qs_packet_start(struct ata_queued_cmd *qc)

@@ -554,12 +554,14 @@ static void sata_rcar_bmdma_fill_sg(struct ata_queued_cmd *qc)
 	prd[si - 1].addr |= cpu_to_le32(SATA_RCAR_DTEND);
 }
 
-static void sata_rcar_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sata_rcar_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	sata_rcar_bmdma_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static void sata_rcar_bmdma_setup(struct ata_queued_cmd *qc)

@@ -119,7 +119,7 @@ static void sil_dev_config(struct ata_device *dev);
 static int sil_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val);
 static int sil_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);
 static int sil_set_mode(struct ata_link *link, struct ata_device **r_failed);
-static void sil_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors sil_qc_prep(struct ata_queued_cmd *qc);
 static void sil_bmdma_setup(struct ata_queued_cmd *qc);
 static void sil_bmdma_start(struct ata_queued_cmd *qc);
 static void sil_bmdma_stop(struct ata_queued_cmd *qc);
@@ -333,12 +333,14 @@ static void sil_fill_sg(struct ata_queued_cmd *qc)
 		last_prd->flags_len |= cpu_to_le32(ATA_PRD_EOT);
 }
 
-static void sil_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sil_qc_prep(struct ata_queued_cmd *qc)
 {
 	if (!(qc->flags & ATA_QCFLAG_DMAMAP))
-		return;
+		return AC_ERR_OK;
 
 	sil_fill_sg(qc);
+
+	return AC_ERR_OK;
 }
 
 static unsigned char sil_get_device_cache_line(struct pci_dev *pdev)

@@ -336,7 +336,7 @@ static void sil24_dev_config(struct ata_device *dev);
 static int sil24_scr_read(struct ata_link *link, unsigned sc_reg, u32 *val);
 static int sil24_scr_write(struct ata_link *link, unsigned sc_reg, u32 val);
 static int sil24_qc_defer(struct ata_queued_cmd *qc);
-static void sil24_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors sil24_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc);
 static bool sil24_qc_fill_rtf(struct ata_queued_cmd *qc);
 static void sil24_pmp_attach(struct ata_port *ap);
@@ -840,7 +840,7 @@ static int sil24_qc_defer(struct ata_queued_cmd *qc)
 	return ata_std_qc_defer(qc);
 }
 
-static void sil24_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors sil24_qc_prep(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
 	struct sil24_port_priv *pp = ap->private_data;
@@ -884,6 +884,8 @@ static void sil24_qc_prep(struct ata_queued_cmd *qc)
 
 	if (qc->flags & ATA_QCFLAG_DMAMAP)
 		sil24_fill_sg(qc, sge);
+
+	return AC_ERR_OK;
 }
 
 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc)

@@ -218,7 +218,7 @@ static void pdc_error_handler(struct ata_port *ap);
 static void pdc_freeze(struct ata_port *ap);
 static void pdc_thaw(struct ata_port *ap);
 static int pdc_port_start(struct ata_port *ap);
-static void pdc20621_qc_prep(struct ata_queued_cmd *qc);
+static enum ata_completion_errors pdc20621_qc_prep(struct ata_queued_cmd *qc);
 static void pdc_tf_load_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static void pdc_exec_command_mmio(struct ata_port *ap, const struct ata_taskfile *tf);
 static unsigned int pdc20621_dimm_init(struct ata_host *host);
@@ -546,7 +546,7 @@ static void pdc20621_nodata_prep(struct ata_queued_cmd *qc)
 	VPRINTK("ata pkt buf ofs %u, mmio copied\n", i);
 }
 
-static void pdc20621_qc_prep(struct ata_queued_cmd *qc)
+static enum ata_completion_errors pdc20621_qc_prep(struct ata_queued_cmd *qc)
 {
 	switch (qc->tf.protocol) {
 	case ATA_PROT_DMA:
@@ -558,6 +558,8 @@ static void pdc20621_qc_prep(struct ata_queued_cmd *qc)
 	default:
 		break;
 	}
+
+	return AC_ERR_OK;
 }
 
 static void __pdc20621_push_hdma(struct ata_queued_cmd *qc,

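Every sata_* hunk above applies the same mechanical conversion: the `->qc_prep` hook now returns `enum ata_completion_errors` instead of `void`, each bare `return` becomes `return AC_ERR_OK;`, and the formerly fatal `BUG_ON()` path in mv_qc_prep() is reported to the caller as `AC_ERR_INVALID`. A minimal user-space sketch of the shape of that change (the enum values and function names are illustrative stand-ins, not the kernel's headers):

```c
#include <assert.h>

/* Stand-ins for the kernel's ata_completion_errors values (illustrative). */
enum ata_completion_errors { AC_ERR_OK = 0, AC_ERR_INVALID = (1 << 6) };

enum proto { PROT_DMA, PROT_NCQ, PROT_UNSUPPORTED };

/* Before the patch a void qc_prep had no way to report an unsupported
 * protocol, so sata_mv resorted to BUG_ON(). After it, prep returns a
 * status that the issue path can act on. */
enum ata_completion_errors qc_prep(enum proto p)
{
	switch (p) {
	case PROT_DMA:
	case PROT_NCQ:
		/* ... build the command descriptor here ... */
		return AC_ERR_OK;
	default:
		/* previously a BUG_ON(); now surfaced as an error code */
		return AC_ERR_INVALID;
	}
}

int qc_issue(enum proto p)
{
	if (qc_prep(p) != AC_ERR_OK)
		return -1;	/* fail the command instead of crashing */
	return 0;		/* descriptor ready; hand it to hardware */
}
```

The point of the conversion is visible in `qc_issue()`: an unsupported command now fails the single command gracefully rather than taking the whole machine down.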
@@ -2243,7 +2243,7 @@ static int eni_init_one(struct pci_dev *pci_dev,
 
 	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
 	if (rc < 0)
-		goto out;
+		goto err_disable;
 
 	rc = -ENOMEM;
 	eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);

@@ -2367,7 +2367,7 @@ int regmap_raw_write_async(struct regmap *map, unsigned int reg,
 EXPORT_SYMBOL_GPL(regmap_raw_write_async);
 
 static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
-			    unsigned int val_len)
+			    unsigned int val_len, bool noinc)
 {
 	struct regmap_range_node *range;
 	int ret;
@@ -2380,7 +2380,7 @@ static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
 	range = _regmap_range_lookup(map, reg);
 	if (range) {
 		ret = _regmap_select_page(map, &reg, range,
-					  val_len / map->format.val_bytes);
+					  noinc ? 1 : val_len / map->format.val_bytes);
 		if (ret != 0)
 			return ret;
 	}
@@ -2418,7 +2418,7 @@ static int _regmap_bus_read(void *context, unsigned int reg,
 	if (!map->format.parse_val)
 		return -EINVAL;
 
-	ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes);
+	ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes, false);
 	if (ret == 0)
 		*val = map->format.parse_val(work_val);
 
@@ -2536,7 +2536,7 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
 
 		/* Read bytes that fit into whole chunks */
 		for (i = 0; i < chunk_count; i++) {
-			ret = _regmap_raw_read(map, reg, val, chunk_bytes);
+			ret = _regmap_raw_read(map, reg, val, chunk_bytes, false);
 			if (ret != 0)
 				goto out;
 
@@ -2547,7 +2547,7 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
 
 		/* Read remaining bytes */
 		if (val_len) {
-			ret = _regmap_raw_read(map, reg, val, val_len);
+			ret = _regmap_raw_read(map, reg, val, val_len, false);
 			if (ret != 0)
 				goto out;
 		}
@@ -2622,7 +2622,7 @@ int regmap_noinc_read(struct regmap *map, unsigned int reg,
 			read_len = map->max_raw_read;
 		else
 			read_len = val_len;
-		ret = _regmap_raw_read(map, reg, val, read_len);
+		ret = _regmap_raw_read(map, reg, val, read_len, true);
 		if (ret)
 			goto out_unlock;
 		val = ((u8 *)val) + read_len;

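The regmap hunks thread a new `noinc` flag into `_regmap_raw_read()` so that page selection requests a window of a single register for no-increment reads, rather than `val_len / val_bytes` consecutive registers as for a normal raw read. A sketch of just that calculation (the helper name is made up; in the kernel the expression sits inline in the `_regmap_select_page()` call above):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* How many consecutive registers must the selected page window cover?
 * Mirrors the `noinc ? 1 : val_len / map->format.val_bytes` expression
 * in the patch; the function name is illustrative, not regmap's API. */
size_t page_regs_needed(size_t val_len, size_t val_bytes, bool noinc)
{
	if (noinc)
		return 1;		/* every read targets one non-incrementing register */
	return val_len / val_bytes;	/* a linear run of registers */
}
```

Before the fix, a large noinc read could compute a window spanning registers past the page boundary and fail page selection even though only one register is ever accessed.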
@@ -343,11 +343,11 @@ static int rtlbt_parse_firmware(struct hci_dev *hdev,
 	 * the end.
 	 */
 	len = patch_length;
-	buf = kmemdup(btrtl_dev->fw_data + patch_offset, patch_length,
-		      GFP_KERNEL);
+	buf = kvmalloc(patch_length, GFP_KERNEL);
 	if (!buf)
 		return -ENOMEM;
 
+	memcpy(buf, btrtl_dev->fw_data + patch_offset, patch_length - 4);
 	memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4);
 
 	*_buf = buf;
@@ -415,8 +415,10 @@ static int rtl_load_file(struct hci_dev *hdev, const char *name, u8 **buff)
 	if (ret < 0)
 		return ret;
 	ret = fw->size;
-	*buff = kmemdup(fw->data, ret, GFP_KERNEL);
-	if (!*buff)
+	*buff = kvmalloc(fw->size, GFP_KERNEL);
+	if (*buff)
+		memcpy(*buff, fw->data, ret);
+	else
 		ret = -ENOMEM;
 
 	release_firmware(fw);
@@ -454,14 +456,14 @@ static int btrtl_setup_rtl8723b(struct hci_dev *hdev,
 		goto out;
 
 	if (btrtl_dev->cfg_len > 0) {
-		tbuff = kzalloc(ret + btrtl_dev->cfg_len, GFP_KERNEL);
+		tbuff = kvzalloc(ret + btrtl_dev->cfg_len, GFP_KERNEL);
 		if (!tbuff) {
 			ret = -ENOMEM;
 			goto out;
 		}
 
 		memcpy(tbuff, fw_data, ret);
-		kfree(fw_data);
+		kvfree(fw_data);
 
 		memcpy(tbuff + ret, btrtl_dev->cfg_data, btrtl_dev->cfg_len);
 		ret += btrtl_dev->cfg_len;
@@ -474,7 +476,7 @@
 	ret = rtl_download_firmware(hdev, fw_data, ret);
 
 out:
-	kfree(fw_data);
+	kvfree(fw_data);
 	return ret;
 }
 
@@ -501,8 +503,8 @@ static struct sk_buff *btrtl_read_local_version(struct hci_dev *hdev)
 
 void btrtl_free(struct btrtl_device_info *btrtl_dev)
 {
-	kfree(btrtl_dev->fw_data);
-	kfree(btrtl_dev->cfg_data);
+	kvfree(btrtl_dev->fw_data);
+	kvfree(btrtl_dev->cfg_data);
 	kfree(btrtl_dev);
 }
 EXPORT_SYMBOL_GPL(btrtl_free);

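The btrtl hunks replace `kmemdup()`/`kzalloc()` with `kvmalloc()`/`kvzalloc()` so large firmware blobs can fall back to vmalloc, which means the copy has to be done by hand and the matching frees become `kvfree()`. A user-space stand-in for the resulting allocate-then-copy pattern in `rtlbt_parse_firmware()` (with `malloc()` in place of `kvmalloc()`; the function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Duplicate `len` bytes of patch data, but overwrite the trailing 4
 * bytes with the firmware version, as the hunk above does with its
 * two memcpy() calls after kvmalloc(). Caller frees the result. */
unsigned char *dup_patch_with_version(const unsigned char *data, size_t len,
				      const unsigned char version[4])
{
	unsigned char *buf = malloc(len);	/* kernel: kvmalloc(len, GFP_KERNEL) */

	if (!buf)
		return NULL;
	memcpy(buf, data, len - 4);		/* patch body */
	memcpy(buf + len - 4, version, 4);	/* trailing fw_version */
	return buf;
}
```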
@@ -358,6 +358,26 @@ static int hisi_lpc_acpi_xlat_io_res(struct acpi_device *adev,
 	return 0;
 }
 
+/*
+ * Released firmware describes the IO port max address as 0x3fff, which is
+ * the max host bus address. Fixup to a proper range. This will probably
+ * never be fixed in firmware.
+ */
+static void hisi_lpc_acpi_fixup_child_resource(struct device *hostdev,
+					       struct resource *r)
+{
+	if (r->end != 0x3fff)
+		return;
+
+	if (r->start == 0xe4)
+		r->end = 0xe4 + 0x04 - 1;
+	else if (r->start == 0x2f8)
+		r->end = 0x2f8 + 0x08 - 1;
+	else
+		dev_warn(hostdev, "unrecognised resource %pR to fixup, ignoring\n",
+			 r);
+}
+
 /*
  * hisi_lpc_acpi_set_io_res - set the resources for a child
  * @child: the device node to be updated the I/O resource
@@ -419,8 +439,11 @@ static int hisi_lpc_acpi_set_io_res(struct device *child,
 		return -ENOMEM;
 	}
 	count = 0;
-	list_for_each_entry(rentry, &resource_list, node)
-		resources[count++] = *rentry->res;
+	list_for_each_entry(rentry, &resource_list, node) {
+		resources[count] = *rentry->res;
+		hisi_lpc_acpi_fixup_child_resource(hostdev, &resources[count]);
+		count++;
+	}
 
 	acpi_dev_free_resource_list(&resource_list);
 

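The new `hisi_lpc_acpi_fixup_child_resource()` replaces firmware's bogus `end = 0x3fff` (the host-bus maximum) with a window length derived from the start address: 4 bytes at 0xe4, 8 bytes at 0x2f8. A standalone sketch of that clamping rule (a plain struct, not the kernel's `struct resource`; the return value is added here for testability):

```c
#include <assert.h>
#include <stdint.h>

struct io_range { uint32_t start, end; };

/* Clamp a child resource whose firmware-described end is the host bus
 * maximum (0x3fff). Returns 1 if the range was fixed up, 0 otherwise. */
int fixup_child_range(struct io_range *r)
{
	if (r->end != 0x3fff)
		return 0;			/* already a sane range */

	if (r->start == 0xe4)
		r->end = 0xe4 + 0x04 - 1;	/* 4-byte register window */
	else if (r->start == 0x2f8)
		r->end = 0x2f8 + 0x08 - 1;	/* 8-byte UART window */
	else
		return 0;			/* unrecognised: the kernel only warns */
	return 1;
}
```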
@@ -1142,14 +1142,14 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
 	 * We take into account the first, second and third-order deltas
 	 * in order to make our estimate.
 	 */
-	delta = sample.jiffies - state->last_time;
-	state->last_time = sample.jiffies;
+	delta = sample.jiffies - READ_ONCE(state->last_time);
+	WRITE_ONCE(state->last_time, sample.jiffies);
 
-	delta2 = delta - state->last_delta;
-	state->last_delta = delta;
+	delta2 = delta - READ_ONCE(state->last_delta);
+	WRITE_ONCE(state->last_delta, delta);
 
-	delta3 = delta2 - state->last_delta2;
-	state->last_delta2 = delta2;
+	delta3 = delta2 - READ_ONCE(state->last_delta2);
+	WRITE_ONCE(state->last_delta2, delta2);
 
 	if (delta < 0)
 		delta = -delta;

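The random.c hunk only wraps the delta-chain loads and stores in `READ_ONCE()`/`WRITE_ONCE()` to avoid torn accesses when timer states race; the estimate itself is unchanged. That first/second/third-order delta arithmetic can be sketched in plain C (no jiffies, no entropy pool, ordinary loads and stores where the kernel now uses the annotations):

```c
#include <assert.h>
#include <stdlib.h>

struct timer_rand_state { long last_time, last_delta, last_delta2; };

/* add_timer_randomness() credits entropy based on the minimum of the
 * absolute first-, second- and third-order deltas of the event time;
 * this keeps only that arithmetic (locking and accounting omitted). */
long delta_estimate(struct timer_rand_state *s, long now)
{
	long delta  = now - s->last_time;
	long delta2 = delta - s->last_delta;
	long delta3 = delta2 - s->last_delta2;

	s->last_time   = now;
	s->last_delta  = delta;
	s->last_delta2 = delta2;

	delta  = labs(delta);
	delta2 = labs(delta2);
	delta3 = labs(delta3);
	if (delta > delta2)
		delta = delta2;
	if (delta > delta3)
		delta = delta3;
	return delta;		/* small for regular timers, large for surprises */
}
```

A perfectly periodic event quickly drives the estimate to zero: the second call below sees a constant first-order delta, so the third-order delta vanishes.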
@@ -776,18 +776,22 @@ static int __init tlclk_init(void)
 {
 	int ret;
 
-	ret = register_chrdev(tlclk_major, "telco_clock", &tlclk_fops);
-	if (ret < 0) {
-		printk(KERN_ERR "tlclk: can't get major %d.\n", tlclk_major);
-		return ret;
-	}
-	tlclk_major = ret;
+	telclk_interrupt = (inb(TLCLK_REG7) & 0x0f);
+
 	alarm_events = kzalloc( sizeof(struct tlclk_alarms), GFP_KERNEL);
 	if (!alarm_events) {
 		ret = -ENOMEM;
 		goto out1;
 	}
 
+	ret = register_chrdev(tlclk_major, "telco_clock", &tlclk_fops);
+	if (ret < 0) {
+		printk(KERN_ERR "tlclk: can't get major %d.\n", tlclk_major);
+		kfree(alarm_events);
+		return ret;
+	}
+	tlclk_major = ret;
+
 	/* Read telecom clock IRQ number (Set by BIOS) */
 	if (!request_region(TLCLK_BASE, 8, "telco_clock")) {
 		printk(KERN_ERR "tlclk: request_region 0x%X failed.\n",
@@ -795,7 +799,6 @@ static int __init tlclk_init(void)
 		ret = -EBUSY;
 		goto out2;
 	}
-	telclk_interrupt = (inb(TLCLK_REG7) & 0x0f);
 
 	if (0x0F == telclk_interrupt ) {	/* not MCPBL0010 ? */
 		printk(KERN_ERR "telclk_interrupt = 0x%x non-mcpbl0010 hw.\n",
@@ -836,8 +839,8 @@ static int __init tlclk_init(void)
 	release_region(TLCLK_BASE, 8);
out2:
 	kfree(alarm_events);
-out1:
 	unregister_chrdev(tlclk_major, "telco_clock");
+out1:
 	return ret;
 }
 

@@ -26,6 +26,7 @@
 #include "tpm.h"
 
 #define ACPI_SIG_TPM2 "TPM2"
+#define TPM_CRB_MAX_RESOURCES 3
 
 static const guid_t crb_acpi_start_guid =
 	GUID_INIT(0x6BBF6CAB, 0x5463, 0x4714,
@@ -95,7 +96,6 @@ enum crb_status {
 struct crb_priv {
 	u32 sm;
 	const char *hid;
-	void __iomem *iobase;
 	struct crb_regs_head __iomem *regs_h;
 	struct crb_regs_tail __iomem *regs_t;
 	u8 __iomem *cmd;
@@ -438,21 +438,27 @@ static const struct tpm_class_ops tpm_crb = {
 
 static int crb_check_resource(struct acpi_resource *ares, void *data)
 {
-	struct resource *io_res = data;
+	struct resource *iores_array = data;
 	struct resource_win win;
 	struct resource *res = &(win.res);
+	int i;
 
 	if (acpi_dev_resource_memory(ares, res) ||
 	    acpi_dev_resource_address_space(ares, &win)) {
-		*io_res = *res;
-		io_res->name = NULL;
+		for (i = 0; i < TPM_CRB_MAX_RESOURCES + 1; ++i) {
+			if (resource_type(iores_array + i) != IORESOURCE_MEM) {
+				iores_array[i] = *res;
+				iores_array[i].name = NULL;
+				break;
+			}
+		}
 	}
 
 	return 1;
 }
 
-static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv,
-				 struct resource *io_res, u64 start, u32 size)
+static void __iomem *crb_map_res(struct device *dev, struct resource *iores,
+				 void __iomem **iobase_ptr, u64 start, u32 size)
 {
 	struct resource new_res = {
 		.start	= start,
@@ -464,10 +470,16 @@ static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv,
 	if (start != new_res.start)
 		return (void __iomem *) ERR_PTR(-EINVAL);
 
-	if (!resource_contains(io_res, &new_res))
+	if (!iores)
 		return devm_ioremap_resource(dev, &new_res);
 
-	return priv->iobase + (new_res.start - io_res->start);
+	if (!*iobase_ptr) {
+		*iobase_ptr = devm_ioremap_resource(dev, iores);
+		if (IS_ERR(*iobase_ptr))
+			return *iobase_ptr;
+	}
+
+	return *iobase_ptr + (new_res.start - iores->start);
 }
 
 /*
@@ -494,9 +506,13 @@ static u64 crb_fixup_cmd_size(struct device *dev, struct resource *io_res,
 static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 		      struct acpi_table_tpm2 *buf)
 {
-	struct list_head resources;
-	struct resource io_res;
+	struct list_head acpi_resource_list;
+	struct resource iores_array[TPM_CRB_MAX_RESOURCES + 1] = { {0} };
+	void __iomem *iobase_array[TPM_CRB_MAX_RESOURCES] = {NULL};
 	struct device *dev = &device->dev;
+	struct resource *iores;
+	void __iomem **iobase_ptr;
+	int i;
 	u32 pa_high, pa_low;
 	u64 cmd_pa;
 	u32 cmd_size;
@@ -505,21 +521,41 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 	u32 rsp_size;
 	int ret;
 
-	INIT_LIST_HEAD(&resources);
-	ret = acpi_dev_get_resources(device, &resources, crb_check_resource,
-				     &io_res);
+	INIT_LIST_HEAD(&acpi_resource_list);
+	ret = acpi_dev_get_resources(device, &acpi_resource_list,
+				     crb_check_resource, iores_array);
 	if (ret < 0)
 		return ret;
-	acpi_dev_free_resource_list(&resources);
+	acpi_dev_free_resource_list(&acpi_resource_list);
 
-	if (resource_type(&io_res) != IORESOURCE_MEM) {
+	if (resource_type(iores_array) != IORESOURCE_MEM) {
 		dev_err(dev, FW_BUG "TPM2 ACPI table does not define a memory resource\n");
 		return -EINVAL;
+	} else if (resource_type(iores_array + TPM_CRB_MAX_RESOURCES) ==
+		   IORESOURCE_MEM) {
		dev_warn(dev, "TPM2 ACPI table defines too many memory resources\n");
		memset(iores_array + TPM_CRB_MAX_RESOURCES,
		       0, sizeof(*iores_array));
		iores_array[TPM_CRB_MAX_RESOURCES].flags = 0;
 	}
 
-	priv->iobase = devm_ioremap_resource(dev, &io_res);
-	if (IS_ERR(priv->iobase))
-		return PTR_ERR(priv->iobase);
+	iores = NULL;
+	iobase_ptr = NULL;
+	for (i = 0; resource_type(iores_array + i) == IORESOURCE_MEM; ++i) {
+		if (buf->control_address >= iores_array[i].start &&
+		    buf->control_address + sizeof(struct crb_regs_tail) - 1 <=
+		    iores_array[i].end) {
+			iores = iores_array + i;
+			iobase_ptr = iobase_array + i;
+			break;
+		}
+	}
+
+	priv->regs_t = crb_map_res(dev, iores, iobase_ptr, buf->control_address,
+				   sizeof(struct crb_regs_tail));
+
+	if (IS_ERR(priv->regs_t))
+		return PTR_ERR(priv->regs_t);
 
 	/* The ACPI IO region starts at the head area and continues to include
 	 * the control area, as one nice sane region except for some older
@@ -527,9 +563,10 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 	 */
 	if ((priv->sm == ACPI_TPM2_COMMAND_BUFFER) ||
 	    (priv->sm == ACPI_TPM2_MEMORY_MAPPED)) {
-		if (buf->control_address == io_res.start +
+		if (iores &&
+		    buf->control_address == iores->start +
 		    sizeof(*priv->regs_h))
-			priv->regs_h = priv->iobase;
+			priv->regs_h = *iobase_ptr;
 		else
 			dev_warn(dev, FW_BUG "Bad ACPI memory layout");
 	}
@@ -538,13 +575,6 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 	if (ret)
 		return ret;
 
-	priv->regs_t = crb_map_res(dev, priv, &io_res, buf->control_address,
-				   sizeof(struct crb_regs_tail));
-	if (IS_ERR(priv->regs_t)) {
-		ret = PTR_ERR(priv->regs_t);
-		goto out_relinquish_locality;
-	}
-
 	/*
 	 * PTT HW bug w/a: wake up the device to access
 	 * possibly not retained registers.
@@ -556,13 +586,26 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 	pa_high = ioread32(&priv->regs_t->ctrl_cmd_pa_high);
 	pa_low = ioread32(&priv->regs_t->ctrl_cmd_pa_low);
 	cmd_pa = ((u64)pa_high << 32) | pa_low;
-	cmd_size = crb_fixup_cmd_size(dev, &io_res, cmd_pa,
-				      ioread32(&priv->regs_t->ctrl_cmd_size));
+	cmd_size = ioread32(&priv->regs_t->ctrl_cmd_size);
+
+	iores = NULL;
+	iobase_ptr = NULL;
+	for (i = 0; iores_array[i].end; ++i) {
+		if (cmd_pa >= iores_array[i].start &&
+		    cmd_pa <= iores_array[i].end) {
+			iores = iores_array + i;
+			iobase_ptr = iobase_array + i;
+			break;
+		}
+	}
+
+	if (iores)
+		cmd_size = crb_fixup_cmd_size(dev, iores, cmd_pa, cmd_size);
 
 	dev_dbg(dev, "cmd_hi = %X cmd_low = %X cmd_size %X\n",
 		pa_high, pa_low, cmd_size);
 
-	priv->cmd = crb_map_res(dev, priv, &io_res, cmd_pa, cmd_size);
+	priv->cmd = crb_map_res(dev, iores, iobase_ptr, cmd_pa, cmd_size);
 	if (IS_ERR(priv->cmd)) {
 		ret = PTR_ERR(priv->cmd);
 		goto out;
@@ -570,11 +613,25 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
 
 	memcpy_fromio(&__rsp_pa, &priv->regs_t->ctrl_rsp_pa, 8);
 	rsp_pa = le64_to_cpu(__rsp_pa);
-	rsp_size = crb_fixup_cmd_size(dev, &io_res, rsp_pa,
-				      ioread32(&priv->regs_t->ctrl_rsp_size));
+	rsp_size = ioread32(&priv->regs_t->ctrl_rsp_size);
+
+	iores = NULL;
+	iobase_ptr = NULL;
+	for (i = 0; resource_type(iores_array + i) == IORESOURCE_MEM; ++i) {
+		if (rsp_pa >= iores_array[i].start &&
+		    rsp_pa <= iores_array[i].end) {
+			iores = iores_array + i;
+			iobase_ptr = iobase_array + i;
+			break;
+		}
+	}
+
+	if (iores)
+		rsp_size = crb_fixup_cmd_size(dev, iores, rsp_pa, rsp_size);
 
 	if (cmd_pa != rsp_pa) {
-		priv->rsp = crb_map_res(dev, priv, &io_res, rsp_pa, rsp_size);
+		priv->rsp = crb_map_res(dev, iores, iobase_ptr,
					rsp_pa, rsp_size);
 		ret = PTR_ERR_OR_ZERO(priv->rsp);
 		goto out;
 	}
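The tpm_crb change above replaces a single `struct resource` with a small array and repeatedly asks "which gathered memory resource wholly contains this address range?". That lookup is a plain contains-check over an array; a minimal userspace sketch (the `struct res` type and inclusive `end` field are stand-ins mirroring, not reproducing, the kernel's `struct resource`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in: only the fields the lookup needs.
 * As in the kernel, 'end' is the last valid address (inclusive). */
struct res {
	uint64_t start, end;
};

/* Scan the array and return the resource that wholly contains
 * [addr, addr + len - 1], or NULL so the caller can fall back to
 * mapping the region directly (as crb_map_res() does when iores
 * is NULL). */
static const struct res *find_res(const struct res *arr, size_t n,
				  uint64_t addr, uint64_t len)
{
	for (size_t i = 0; i < n; i++) {
		if (addr >= arr[i].start && addr + len - 1 <= arr[i].end)
			return &arr[i];
	}
	return NULL;
}
```

A range that merely starts inside a resource but runs past its end must not match, which is why the loop checks both endpoints.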
@@ -588,6 +588,7 @@ static irqreturn_t ibmvtpm_interrupt(int irq, void *vtpm_instance)
 	 */
 	while ((crq = ibmvtpm_crq_get_next(ibmvtpm)) != NULL) {
 		ibmvtpm_crq_process(crq, ibmvtpm);
+		wake_up_interruptible(&ibmvtpm->crq_queue.wq);
 		crq->valid = 0;
 		smp_wmb();
 	}
@@ -635,6 +636,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
 	}
 
 	crq_q->num_entry = CRQ_RES_BUF_SIZE / sizeof(*crq_q->crq_addr);
+	init_waitqueue_head(&crq_q->wq);
 	ibmvtpm->crq_dma_handle = dma_map_single(dev, crq_q->crq_addr,
 						 CRQ_RES_BUF_SIZE,
 						 DMA_BIDIRECTIONAL);
@@ -687,6 +689,13 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
 	if (rc)
 		goto init_irq_cleanup;
 
+	if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
+				ibmvtpm->rtce_buf != NULL,
+				HZ)) {
+		dev_err(dev, "CRQ response timed out\n");
+		goto init_irq_cleanup;
+	}
+
 	return tpm_chip_register(chip);
 init_irq_cleanup:
 	do {

@@ -31,6 +31,7 @@ struct ibmvtpm_crq_queue {
 	struct ibmvtpm_crq *crq_addr;
 	u32 index;
 	u32 num_entry;
+	wait_queue_head_t wq;
 };
 
 struct ibmvtpm_dev {
@@ -38,7 +38,9 @@ static unsigned long clk_pll_recalc_rate(struct clk_hw *hwclk,
 	/* read VCO1 reg for numerator and denominator */
 	reg = readl(socfpgaclk->hw.reg);
 	refdiv = (reg & SOCFPGA_PLL_REFDIV_MASK) >> SOCFPGA_PLL_REFDIV_SHIFT;
-	vco_freq = (unsigned long long)parent_rate / refdiv;
+
+	vco_freq = parent_rate;
+	do_div(vco_freq, refdiv);
 
 	/* Read mdiv and fdiv from the fdbck register */
 	reg = readl(socfpgaclk->hw.reg + 0x4);
@@ -193,15 +193,8 @@ static const char *ti_adpll_clk_get_name(struct ti_adpll_data *d,
 		if (err)
 			return NULL;
 	} else {
-		const char *base_name = "adpll";
-		char *buf;
-
-		buf = devm_kzalloc(d->dev, 8 + 1 + strlen(base_name) + 1 +
-				    strlen(postfix), GFP_KERNEL);
-		if (!buf)
-			return NULL;
-		sprintf(buf, "%08lx.%s.%s", d->pa, base_name, postfix);
-		name = buf;
+		name = devm_kasprintf(d->dev, GFP_KERNEL, "%08lx.adpll.%s",
+				      d->pa, postfix);
 	}
 
 	return name;
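The adpll change above drops a hand-computed `devm_kzalloc()` size plus `sprintf()` in favour of `devm_kasprintf()`, which measures the formatted string itself. A userspace sketch of the same pattern, using `snprintf(NULL, 0, ...)` the way `kasprintf()` does internally (the `kasprintf_like` helper is illustrative, not a kernel API):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Let snprintf() measure the formatted length instead of hand-computing
 * it; the "%08lx.adpll.%s" format matches the one in the patch. */
static char *kasprintf_like(unsigned long pa, const char *postfix)
{
	int n = snprintf(NULL, 0, "%08lx.adpll.%s", pa, postfix);
	char *buf;

	if (n < 0)
		return NULL;
	buf = malloc((size_t)n + 1);	/* +1 for the terminating NUL */
	if (buf)
		snprintf(buf, (size_t)n + 1, "%08lx.adpll.%s", pa, postfix);
	return buf;
}
```

Hand-computed sizes like the removed `8 + 1 + strlen(base_name) + 1 + strlen(postfix)` are easy to get subtly wrong (that expression equals the string length with no room for the NUL terminator), which is exactly the class of bug the measured-length approach removes.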
@@ -169,7 +169,7 @@ static int __init h8300_8timer_init(struct device_node *node)
 		return PTR_ERR(clk);
 	}
 
-	ret = ENXIO;
+	ret = -ENXIO;
 	base = of_iomap(node, 0);
 	if (!base) {
 		pr_err("failed to map registers for clockevent\n");
@@ -903,6 +903,7 @@ static struct notifier_block powernv_cpufreq_reboot_nb = {
 void powernv_cpufreq_work_fn(struct work_struct *work)
 {
 	struct chip *chip = container_of(work, struct chip, throttle);
+	struct cpufreq_policy *policy;
 	unsigned int cpu;
 	cpumask_t mask;
 
@@ -917,12 +918,14 @@ void powernv_cpufreq_work_fn(struct work_struct *work)
 	chip->restore = false;
 	for_each_cpu(cpu, &mask) {
 		int index;
-		struct cpufreq_policy policy;
 
-		cpufreq_get_policy(&policy, cpu);
-		index = cpufreq_table_find_index_c(&policy, policy.cur);
-		powernv_cpufreq_target_index(&policy, index);
-		cpumask_andnot(&mask, &mask, policy.cpus);
+		policy = cpufreq_cpu_get(cpu);
+		if (!policy)
+			continue;
+		index = cpufreq_table_find_index_c(policy, policy->cur);
+		powernv_cpufreq_target_index(policy, index);
+		cpumask_andnot(&mask, &mask, policy->cpus);
+		cpufreq_cpu_put(policy);
 	}
 out:
 	put_online_cpus();
@@ -2418,8 +2418,9 @@ int chcr_aead_dma_map(struct device *dev,
 	else
 		reqctx->b0_dma = 0;
 	if (req->src == req->dst) {
-		error = dma_map_sg(dev, req->src, sg_nents(req->src),
-				   DMA_BIDIRECTIONAL);
+		error = dma_map_sg(dev, req->src,
+				   sg_nents_for_len(req->src, dst_size),
+				   DMA_BIDIRECTIONAL);
 		if (!error)
 			goto err;
 	} else {
@@ -1449,7 +1449,7 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 					    csk->wr_max_credits))
 			sk->sk_write_space(sk);
 
-		if (copied >= target && !sk->sk_backlog.tail)
+		if (copied >= target && !READ_ONCE(sk->sk_backlog.tail))
 			break;
 
 		if (copied) {
@@ -1482,7 +1482,7 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 				break;
 			}
 		}
-		if (sk->sk_backlog.tail) {
+		if (READ_ONCE(sk->sk_backlog.tail)) {
 			release_sock(sk);
 			lock_sock(sk);
 			chtls_cleanup_rbuf(sk, copied);
@@ -1627,7 +1627,7 @@ static int peekmsg(struct sock *sk, struct msghdr *msg,
 			break;
 		}
 
-		if (sk->sk_backlog.tail) {
+		if (READ_ONCE(sk->sk_backlog.tail)) {
 			/* Do not sleep, just process backlog. */
 			release_sock(sk);
 			lock_sock(sk);
@@ -1759,7 +1759,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 					    csk->wr_max_credits))
 			sk->sk_write_space(sk);
 
-		if (copied >= target && !sk->sk_backlog.tail)
+		if (copied >= target && !READ_ONCE(sk->sk_backlog.tail))
 			break;
 
 		if (copied) {
@@ -1790,7 +1790,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 			}
 		}
 
-		if (sk->sk_backlog.tail) {
+		if (READ_ONCE(sk->sk_backlog.tail)) {
 			release_sock(sk);
 			lock_sock(sk);
 			chtls_cleanup_rbuf(sk, copied);
@@ -80,6 +80,8 @@
 
 #define KHZ					1000
 
+#define KHZ_MAX					(ULONG_MAX / KHZ)
+
 /* Assume that the bus is saturated if the utilization is 25% */
 #define BUS_SATURATION_RATIO			25
 
@@ -180,7 +182,7 @@ struct tegra_actmon_emc_ratio {
 };
 
 static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
-	{ 1400000, ULONG_MAX },
+	{ 1400000, KHZ_MAX },
 	{ 1200000, 750000 },
 	{ 1100000, 600000 },
 	{ 1000000, 500000 },
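The tegra devfreq fix above caps the "unlimited" EMC ratio at `KHZ_MAX = ULONG_MAX / KHZ` instead of `ULONG_MAX`, so that a later kHz-to-Hz multiplication cannot wrap. A standalone sketch of the arithmetic (the `to_hz` helper is illustrative, not driver code):

```c
#include <assert.h>
#include <limits.h>

#define KHZ	1000UL
#define KHZ_MAX	(ULONG_MAX / KHZ)

/* A frequency cap stored in kHz is eventually converted to Hz.
 * With ULONG_MAX as the cap this multiplication wraps modulo 2^64;
 * KHZ_MAX is the largest kHz value for which it cannot. */
static unsigned long to_hz(unsigned long khz)
{
	return khz * KHZ;
}
```

Dividing back out is a cheap way to detect the wrap: a value that survives `* KHZ / KHZ` unchanged did not overflow.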
@@ -244,6 +244,30 @@ void dma_fence_free(struct dma_fence *fence)
 }
 EXPORT_SYMBOL(dma_fence_free);
 
+static bool __dma_fence_enable_signaling(struct dma_fence *fence)
+{
+	bool was_set;
+
+	lockdep_assert_held(fence->lock);
+
+	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
+				   &fence->flags);
+
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return false;
+
+	if (!was_set && fence->ops->enable_signaling) {
+		trace_dma_fence_enable_signal(fence);
+
+		if (!fence->ops->enable_signaling(fence)) {
+			dma_fence_signal_locked(fence);
+			return false;
+		}
+	}
+
+	return true;
+}
+
 /**
  * dma_fence_enable_sw_signaling - enable signaling on fence
  * @fence: the fence to enable
@@ -256,19 +280,12 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
 {
 	unsigned long flags;
 
-	if (!test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-			      &fence->flags) &&
-	    !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) &&
-	    fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
-
-		spin_lock_irqsave(fence->lock, flags);
-
-		if (!fence->ops->enable_signaling(fence))
-			dma_fence_signal_locked(fence);
+	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+		return;
 
-		spin_unlock_irqrestore(fence->lock, flags);
-	}
+	spin_lock_irqsave(fence->lock, flags);
+	__dma_fence_enable_signaling(fence);
+	spin_unlock_irqrestore(fence->lock, flags);
 }
 EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 
@@ -302,7 +319,6 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 {
 	unsigned long flags;
 	int ret = 0;
-	bool was_set;
 
 	if (WARN_ON(!fence || !func))
 		return -EINVAL;
@@ -314,25 +330,14 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
 
 	spin_lock_irqsave(fence->lock, flags);
 
-	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-				   &fence->flags);
-
-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
-		ret = -ENOENT;
-	else if (!was_set && fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
-
-		if (!fence->ops->enable_signaling(fence)) {
-			dma_fence_signal_locked(fence);
-			ret = -ENOENT;
-		}
-	}
-
-	if (!ret) {
+	if (__dma_fence_enable_signaling(fence)) {
 		cb->func = func;
 		list_add_tail(&cb->node, &fence->cb_list);
-	} else
+	} else {
 		INIT_LIST_HEAD(&cb->node);
+		ret = -ENOENT;
+	}
 
 	spin_unlock_irqrestore(fence->lock, flags);
 
 	return ret;
@@ -432,7 +437,6 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	struct default_wait_cb cb;
 	unsigned long flags;
 	signed long ret = timeout ? timeout : 1;
-	bool was_set;
 
 	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
 		return ret;
@@ -444,21 +448,9 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 		goto out;
 	}
 
-	was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
-				   &fence->flags);
-
-	if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
+	if (!__dma_fence_enable_signaling(fence))
 		goto out;
 
-	if (!was_set && fence->ops->enable_signaling) {
-		trace_dma_fence_enable_signal(fence);
-
-		if (!fence->ops->enable_signaling(fence)) {
-			dma_fence_signal_locked(fence);
-			goto out;
-		}
-	}
-
 	if (!timeout) {
 		ret = 0;
 		goto out;
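The dma-fence patch above funnels three open-coded call sites through one `__dma_fence_enable_signaling()` helper, whose core is a test-and-set: the atomic returns the previous flag value, so exactly one racing context observes "was not set" and performs the one-time `enable_signaling` work. A userspace sketch of that serialisation using C11 `atomic_flag` (illustrative stand-in, not the kernel's `test_and_set_bit`):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag enable_bit = ATOMIC_FLAG_INIT;
static int enable_calls;

/* Returns true only for the caller that actually performed the
 * one-time enable; every later (or concurrently losing) caller sees
 * was_set == true and skips the work, mirroring the !was_set branch
 * in __dma_fence_enable_signaling(). */
static bool enable_signaling_once(void)
{
	bool was_set = atomic_flag_test_and_set(&enable_bit);

	if (!was_set)
		enable_calls++;	/* stands in for ops->enable_signaling() */
	return !was_set;
}
```

The point of the refactor is that this decision and the follow-up work now always happen under `fence->lock` (note the added `lockdep_assert_held`), instead of being open-coded with slightly different locking at each call site.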
@@ -997,7 +997,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
 	if (err) {
 		dev_err(&pdev->dev,
 			"request_irq failed with err %d\n", err);
-		goto err_unregister;
+		goto err_free;
 	}
 
 	platform_set_drvdata(pdev, hsdma);
@@ -1006,6 +1006,8 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
 
 	return 0;
 
+err_free:
+	of_dma_controller_free(pdev->dev.of_node);
 err_unregister:
 	dma_async_device_unregister(dd);
 
@@ -494,8 +494,10 @@ static int stm32_dma_terminate_all(struct dma_chan *c)
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
 
-	if (chan->busy) {
-		stm32_dma_stop(chan);
+	if (chan->desc) {
+		vchan_terminate_vdesc(&chan->desc->vdesc);
+		if (chan->busy)
+			stm32_dma_stop(chan);
 		chan->desc = NULL;
 	}
 
@@ -551,6 +553,8 @@ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 		if (!vdesc)
 			return;
 
+		list_del(&vdesc->node);
+
 		chan->desc = to_stm32_dma_desc(vdesc);
 		chan->next_sg = 0;
 	}
@@ -628,7 +632,6 @@ static void stm32_dma_handle_chan_done(struct stm32_dma_chan *chan)
 	} else {
 		chan->busy = false;
 		if (chan->next_sg == chan->desc->num_sgs) {
-			list_del(&chan->desc->vdesc.node);
 			vchan_cookie_complete(&chan->desc->vdesc);
 			chan->desc = NULL;
 		}
@@ -1137,6 +1137,8 @@ static void stm32_mdma_start_transfer(struct stm32_mdma_chan *chan)
 		return;
 	}
 
+	list_del(&vdesc->node);
+
 	chan->desc = to_stm32_mdma_desc(vdesc);
 	hwdesc = chan->desc->node[0].hwdesc;
 	chan->curr_hwdesc = 0;
@@ -1252,8 +1254,10 @@ static int stm32_mdma_terminate_all(struct dma_chan *c)
 	LIST_HEAD(head);
 
 	spin_lock_irqsave(&chan->vchan.lock, flags);
-	if (chan->busy) {
-		stm32_mdma_stop(chan);
+	if (chan->desc) {
+		vchan_terminate_vdesc(&chan->desc->vdesc);
+		if (chan->busy)
+			stm32_mdma_stop(chan);
 		chan->desc = NULL;
 	}
 	vchan_get_all_descriptors(&chan->vchan, &head);
@@ -1341,7 +1345,6 @@ static enum dma_status stm32_mdma_tx_status(struct dma_chan *c,
 
 static void stm32_mdma_xfer_end(struct stm32_mdma_chan *chan)
 {
-	list_del(&chan->desc->vdesc.node);
 	vchan_cookie_complete(&chan->desc->vdesc);
 	chan->desc = NULL;
 	chan->busy = false;
@@ -1225,8 +1225,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
 
 	dev_dbg(tdc2dev(tdc), "Freeing channel %d\n", tdc->id);
 
-	if (tdc->busy)
-		tegra_dma_terminate_all(dc);
+	tegra_dma_terminate_all(dc);
 
 	spin_lock_irqsave(&tdc->lock, flags);
 	list_splice_init(&tdc->pending_sg_req, &sg_req_list);
@@ -127,10 +127,12 @@
 /* Max transfer size per descriptor */
 #define ZYNQMP_DMA_MAX_TRANS_LEN	0x40000000
 
+/* Max burst lengths */
+#define ZYNQMP_DMA_MAX_DST_BURST_LEN	32768U
+#define ZYNQMP_DMA_MAX_SRC_BURST_LEN	32768U
+
 /* Reset values for data attributes */
 #define ZYNQMP_DMA_AXCACHE_VAL		0xF
-#define ZYNQMP_DMA_ARLEN_RST_VAL	0xF
-#define ZYNQMP_DMA_AWLEN_RST_VAL	0xF
 
 #define ZYNQMP_DMA_SRC_ISSUE_RST_VAL	0x1F
 
@@ -536,17 +538,19 @@ static void zynqmp_dma_handle_ovfl_int(struct zynqmp_dma_chan *chan, u32 status)
 
 static void zynqmp_dma_config(struct zynqmp_dma_chan *chan)
 {
-	u32 val;
+	u32 val, burst_val;
 
 	val = readl(chan->regs + ZYNQMP_DMA_CTRL0);
 	val |= ZYNQMP_DMA_POINT_TYPE_SG;
 	writel(val, chan->regs + ZYNQMP_DMA_CTRL0);
 
 	val = readl(chan->regs + ZYNQMP_DMA_DATA_ATTR);
+	burst_val = __ilog2_u32(chan->src_burst_len);
 	val = (val & ~ZYNQMP_DMA_ARLEN) |
-		(chan->src_burst_len << ZYNQMP_DMA_ARLEN_OFST);
+		((burst_val << ZYNQMP_DMA_ARLEN_OFST) & ZYNQMP_DMA_ARLEN);
+	burst_val = __ilog2_u32(chan->dst_burst_len);
 	val = (val & ~ZYNQMP_DMA_AWLEN) |
-		(chan->dst_burst_len << ZYNQMP_DMA_AWLEN_OFST);
+		((burst_val << ZYNQMP_DMA_AWLEN_OFST) & ZYNQMP_DMA_AWLEN);
 	writel(val, chan->regs + ZYNQMP_DMA_DATA_ATTR);
 }
 
@@ -562,8 +566,10 @@ static int zynqmp_dma_device_config(struct dma_chan *dchan,
 {
 	struct zynqmp_dma_chan *chan = to_chan(dchan);
 
-	chan->src_burst_len = config->src_maxburst;
-	chan->dst_burst_len = config->dst_maxburst;
+	chan->src_burst_len = clamp(config->src_maxburst, 1U,
+		ZYNQMP_DMA_MAX_SRC_BURST_LEN);
+	chan->dst_burst_len = clamp(config->dst_maxburst, 1U,
+		ZYNQMP_DMA_MAX_DST_BURST_LEN);
 
 	return 0;
 }
@@ -884,8 +890,8 @@ static int zynqmp_dma_chan_probe(struct zynqmp_dma_device *zdev,
 		return PTR_ERR(chan->regs);
 
 	chan->bus_width = ZYNQMP_DMA_BUS_WIDTH_64;
-	chan->dst_burst_len = ZYNQMP_DMA_AWLEN_RST_VAL;
-	chan->src_burst_len = ZYNQMP_DMA_ARLEN_RST_VAL;
+	chan->dst_burst_len = ZYNQMP_DMA_MAX_DST_BURST_LEN;
+	chan->src_burst_len = ZYNQMP_DMA_MAX_SRC_BURST_LEN;
 	err = of_property_read_u32(node, "xlnx,bus-width", &chan->bus_width);
 	if (err < 0) {
 		dev_err(&pdev->dev, "missing xlnx,bus-width property\n");
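The zynqmp change above now stores burst lengths in beats, clamps them to [1, 32768], and encodes them into the ARLEN/AWLEN register fields as log2 of the burst length. The arithmetic can be sketched standalone (the helpers below mirror `clamp()` and `__ilog2_u32()` for illustration; they are not the kernel implementations):

```c
#include <assert.h>

#define MAX_BURST 32768U	/* ZYNQMP_DMA_MAX_*_BURST_LEN in the patch */

/* clamp() specialised to unsigned int: keep v within [1, MAX_BURST],
 * since a zero or oversized burst request from device_config would
 * program an invalid field value. */
static unsigned int clamp_burst(unsigned int v)
{
	if (v < 1U)
		return 1U;
	if (v > MAX_BURST)
		return MAX_BURST;
	return v;
}

/* __ilog2_u32() equivalent: index of the most significant set bit.
 * The hardware burst fields take log2 of the burst length, which is
 * why zynqmp_dma_config() now encodes the value this way. */
static unsigned int ilog2_u32(unsigned int v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}
```

With the clamp in place, the largest encodable value is `ilog2(32768) == 15`, which fits the register field; previously the raw beat count was shifted in directly and could spill into neighbouring bits.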
@@ -410,14 +410,19 @@ int sdei_event_enable(u32 event_num)
 		return -ENOENT;
 	}
 
-	spin_lock(&sdei_list_lock);
-	event->reenable = true;
-	spin_unlock(&sdei_list_lock);
-
+	cpus_read_lock();
 	if (event->type == SDEI_EVENT_TYPE_SHARED)
 		err = sdei_api_event_enable(event->event_num);
 	else
 		err = sdei_do_cross_call(_local_event_enable, event);
 
+	if (!err) {
+		spin_lock(&sdei_list_lock);
+		event->reenable = true;
+		spin_unlock(&sdei_list_lock);
+	}
+	cpus_read_unlock();
 	mutex_unlock(&sdei_events_lock);
 
 	return err;
@@ -619,21 +624,18 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
 			break;
 		}
 
-		spin_lock(&sdei_list_lock);
-		event->reregister = true;
-		spin_unlock(&sdei_list_lock);
-
+		cpus_read_lock();
 		err = _sdei_event_register(event);
 		if (err) {
-			spin_lock(&sdei_list_lock);
-			event->reregister = false;
-			event->reenable = false;
-			spin_unlock(&sdei_list_lock);
-
 			sdei_event_destroy(event);
 			pr_warn("Failed to register event %u: %d\n", event_num,
 				err);
+		} else {
+			spin_lock(&sdei_list_lock);
+			event->reregister = true;
+			spin_unlock(&sdei_list_lock);
 		}
+		cpus_read_unlock();
 	} while (0);
 	mutex_unlock(&sdei_events_lock);
 
@@ -191,30 +191,35 @@ static bool amdgpu_read_bios_from_rom(struct amdgpu_device *adev)
 
 static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
 {
-	uint8_t __iomem *bios;
-	size_t size;
+	phys_addr_t rom = adev->pdev->rom;
+	size_t romlen = adev->pdev->romlen;
+	void __iomem *bios;
 
 	adev->bios = NULL;
 
-	bios = pci_platform_rom(adev->pdev, &size);
-	if (!bios) {
+	if (!rom || romlen == 0)
 		return false;
-	}
 
-	adev->bios = kzalloc(size, GFP_KERNEL);
-	if (adev->bios == NULL)
+	adev->bios = kzalloc(romlen, GFP_KERNEL);
+	if (!adev->bios)
 		return false;
 
-	memcpy_fromio(adev->bios, bios, size);
+	bios = ioremap(rom, romlen);
+	if (!bios)
+		goto free_bios;
 
-	if (!check_atom_bios(adev->bios, size)) {
-		kfree(adev->bios);
-		return false;
-	}
+	memcpy_fromio(adev->bios, bios, romlen);
+	iounmap(bios);
 
-	adev->bios_size = size;
+	if (!check_atom_bios(adev->bios, romlen))
+		goto free_bios;
+
+	adev->bios_size = romlen;
 
 	return true;
+
+free_bios:
+	kfree(adev->bios);
+	return false;
 }
 
 #ifdef CONFIG_ACPI
@@ -742,8 +742,8 @@ static void atom_op_jump(atom_exec_context *ctx, int *ptr, int arg)
 		cjiffies = jiffies;
 		if (time_after(cjiffies, ctx->last_jump_jiffies)) {
 			cjiffies -= ctx->last_jump_jiffies;
-			if ((jiffies_to_msecs(cjiffies) > 5000)) {
-				DRM_ERROR("atombios stuck in loop for more than 5secs aborting\n");
+			if ((jiffies_to_msecs(cjiffies) > 10000)) {
+				DRM_ERROR("atombios stuck in loop for more than 10secs aborting\n");
 				ctx->abort = true;
 			}
 		} else {
@@ -1101,6 +1101,8 @@ static int stop_cpsch(struct device_queue_manager *dqm)
 	unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
 	dqm_unlock(dqm);
 
+	pm_release_ib(&dqm->packets);
+
 	kfd_gtt_sa_free(dqm->dev, dqm->fence_mem);
 	pm_uninit(&dqm->packets);
 
@@ -1576,8 +1576,7 @@ static void write_i2c_retimer_setting(
 				buffer, sizeof(buffer));
 
 		if (!i2c_success)
-			/* Write failure */
-			ASSERT(i2c_success);
+			goto i2c_write_fail;
 
 		/* Based on DP159 specs, APPLY_RX_TX_CHANGE bit in 0x0A
 		 * needs to be set to 1 on every 0xA-0xC write.
@@ -1595,8 +1594,7 @@ static void write_i2c_retimer_setting(
 					pipe_ctx->stream->sink->link->ddc,
 					slave_address, &offset, 1, &value, 1);
 				if (!i2c_success)
-					/* Write failure */
-					ASSERT(i2c_success);
+					goto i2c_write_fail;
 			}
 
 			buffer[0] = offset;
@@ -1605,8 +1603,7 @@ static void write_i2c_retimer_setting(
 			i2c_success = i2c_write(pipe_ctx, slave_address,
 					buffer, sizeof(buffer));
 			if (!i2c_success)
-				/* Write failure */
-				ASSERT(i2c_success);
+				goto i2c_write_fail;
 		}
 	}
 }
@@ -1623,8 +1620,7 @@ static void write_i2c_retimer_setting(
 				buffer, sizeof(buffer));
 
 		if (!i2c_success)
-			/* Write failure */
-			ASSERT(i2c_success);
+			goto i2c_write_fail;
 
 		/* Based on DP159 specs, APPLY_RX_TX_CHANGE bit in 0x0A
 		 * needs to be set to 1 on every 0xA-0xC write.
@@ -1642,8 +1638,7 @@ static void write_i2c_retimer_setting(
 					pipe_ctx->stream->sink->link->ddc,
 					slave_address, &offset, 1, &value, 1);
 				if (!i2c_success)
-					/* Write failure */
-					ASSERT(i2c_success);
+					goto i2c_write_fail;
 			}
 
 			buffer[0] = offset;
@@ -1652,8 +1647,7 @@ static void write_i2c_retimer_setting(
 			i2c_success = i2c_write(pipe_ctx, slave_address,
 					buffer, sizeof(buffer));
 			if (!i2c_success)
-				/* Write failure */
-				ASSERT(i2c_success);
+				goto i2c_write_fail;
 		}
 	}
 }
@@ -1668,8 +1662,7 @@ static void write_i2c_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x00 to 0x23 */
 	buffer[0] = 0x00;
@@ -1677,8 +1670,7 @@ static void write_i2c_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0xff to 0x00 */
 	buffer[0] = 0xff;
@@ -1686,10 +1678,14 @@ static void write_i2c_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	}
+
+	return;
+
+i2c_write_fail:
+	DC_LOG_DEBUG("Set retimer failed");
 }
 
 static void write_i2c_default_retimer_setting(
@@ -1710,8 +1706,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x0A to 0x17 */
 	buffer[0] = 0x0A;
@@ -1719,8 +1714,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x0B to 0xDA or 0xD8 */
 	buffer[0] = 0x0B;
@@ -1728,8 +1722,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x0A to 0x17 */
 	buffer[0] = 0x0A;
@@ -1737,8 +1730,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x0C to 0x1D or 0x91 */
 	buffer[0] = 0x0C;
@@ -1746,8 +1738,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 	/* Write offset 0x0A to 0x17 */
 	buffer[0] = 0x0A;
@@ -1755,8 +1746,7 @@ static void write_i2c_default_retimer_setting(
 	i2c_success = i2c_write(pipe_ctx, slave_address,
 		buffer, sizeof(buffer));
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		goto i2c_write_fail;
 
 
 	if (is_vga_mode) {
@@ -1768,8 +1758,7 @@ static void write_i2c_default_retimer_setting(
 		i2c_success = i2c_write(pipe_ctx, slave_address,
 			buffer, sizeof(buffer));
 		if (!i2c_success)
-			/* Write failure */
-			ASSERT(i2c_success);
+			goto i2c_write_fail;
 
 		/* Write offset 0x00 to 0x23 */
 		buffer[0] = 0x00;
@@ -1777,8 +1766,7 @@ static void write_i2c_default_retimer_setting(
 		i2c_success = i2c_write(pipe_ctx, slave_address,
 			buffer, sizeof(buffer));
 		if (!i2c_success)
-			/* Write failure */
-			ASSERT(i2c_success);
+			goto i2c_write_fail;
 
 		/* Write offset 0xff to 0x00 */
 		buffer[0] = 0xff;
@@ -1786,9 +1774,13 @@ static void write_i2c_default_retimer_setting(
 		i2c_success = i2c_write(pipe_ctx, slave_address,
 			buffer, sizeof(buffer));
 		if (!i2c_success)
-			/* Write failure */
-			ASSERT(i2c_success);
+			goto i2c_write_fail;
 	}
+
+	return;
+
+i2c_write_fail:
+	DC_LOG_DEBUG("Set default retimer failed");
 }
 
 static void write_i2c_redriver_setting(
@@ -1811,8 +1803,7 @@ static void write_i2c_redriver_setting(
 			buffer, sizeof(buffer));
 
 	if (!i2c_success)
-		/* Write failure */
-		ASSERT(i2c_success);
+		DC_LOG_DEBUG("Set redriver failed");
 }
 
 static void enable_link_hdmi(struct pipe_ctx *pipe_ctx)
@@ -127,22 +127,16 @@ struct aux_payloads {
 	struct vector payloads;
 };
 
-static struct i2c_payloads *dal_ddc_i2c_payloads_create(struct dc_context *ctx, uint32_t count)
+static bool dal_ddc_i2c_payloads_create(
+		struct dc_context *ctx,
+		struct i2c_payloads *payloads,
+		uint32_t count)
 {
-	struct i2c_payloads *payloads;
-
-	payloads = kzalloc(sizeof(struct i2c_payloads), GFP_KERNEL);
-
-	if (!payloads)
-		return NULL;
-
 	if (dal_vector_construct(
 		&payloads->payloads, ctx, count, sizeof(struct i2c_payload)))
-		return payloads;
+		return true;
 
-	kfree(payloads);
-	return NULL;
+	return false;
 }
 
 static struct i2c_payload *dal_ddc_i2c_payloads_get(struct i2c_payloads *p)
@@ -155,14 +149,12 @@ static uint32_t dal_ddc_i2c_payloads_get_count(struct i2c_payloads *p)
 	return p->payloads.count;
 }
 
-static void dal_ddc_i2c_payloads_destroy(struct i2c_payloads **p)
+static void dal_ddc_i2c_payloads_destroy(struct i2c_payloads *p)
 {
-	if (!p || !*p)
+	if (!p)
 		return;
-	dal_vector_destruct(&(*p)->payloads);
-	kfree(*p);
-	*p = NULL;
+
+	dal_vector_destruct(&p->payloads);
 }
 
 static struct aux_payloads *dal_ddc_aux_payloads_create(struct dc_context *ctx, uint32_t count)
@@ -580,9 +572,13 @@ bool dal_ddc_service_query_ddc_data(
 
 	uint32_t payloads_num = write_payloads + read_payloads;
 
-
 	if (write_size > EDID_SEGMENT_SIZE || read_size > EDID_SEGMENT_SIZE)
 		return false;
 
+	if (!payloads_num)
+		return false;
+
 	/*TODO: len of payload data for i2c and aux is uint8!!!!,
 	 * but we want to read 256 over i2c!!!!*/
 	if (dal_ddc_service_is_in_aux_transaction_mode(ddc)) {
@@ -613,23 +609,25 @@ bool dal_ddc_service_query_ddc_data(
 		dal_ddc_aux_payloads_destroy(&payloads);
 
 	} else {
-		struct i2c_payloads *payloads =
-			dal_ddc_i2c_payloads_create(ddc->ctx, payloads_num);
-
-		struct i2c_command command = {
-			.payloads = dal_ddc_i2c_payloads_get(payloads),
-			.number_of_payloads = 0,
-			.engine = DDC_I2C_COMMAND_ENGINE,
-			.speed = ddc->ctx->dc->caps.i2c_speed_in_khz };
+		struct i2c_command command = {0};
+		struct i2c_payloads payloads;
+
+		if (!dal_ddc_i2c_payloads_create(ddc->ctx, &payloads, payloads_num))
+			return false;
+
+		command.payloads = dal_ddc_i2c_payloads_get(&payloads);
+		command.number_of_payloads = 0;
+		command.engine = DDC_I2C_COMMAND_ENGINE;
+		command.speed = ddc->ctx->dc->caps.i2c_speed_in_khz;
 
 		dal_ddc_i2c_payloads_add(
-			payloads, address, write_size, write_buf, true);
+			&payloads, address, write_size, write_buf, true);
 
 		dal_ddc_i2c_payloads_add(
-			payloads, address, read_size, read_buf, false);
+			&payloads, address, read_size, read_buf, false);
 
 		command.number_of_payloads =
-			dal_ddc_i2c_payloads_get_count(payloads);
+			dal_ddc_i2c_payloads_get_count(&payloads);
 
 		ret = dm_helpers_submit_i2c(
 			ddc->ctx,
@@ -3970,6 +3970,13 @@ static int smu7_set_power_state_tasks(struct pp_hwmgr *hwmgr, const void *input)
 			"Failed to populate and upload SCLK MCLK DPM levels!",
 			result = tmp_result);
 
+	/*
+	 * If a custom pp table is loaded, set DPMTABLE_OD_UPDATE_VDDC flag.
+	 * That effectively disables AVFS feature.
+	 */
+	if (hwmgr->hardcode_pp_table != NULL)
+		data->need_update_smu7_dpm_table |= DPMTABLE_OD_UPDATE_VDDC;
+
 	tmp_result = smu7_update_avfs(hwmgr);
 	PP_ASSERT_WITH_CODE((0 == tmp_result),
 			"Failed to update avfs voltages!",
@@ -3591,6 +3591,13 @@ static int vega10_set_power_state_tasks(struct pp_hwmgr *hwmgr,
 	PP_ASSERT_WITH_CODE(!result,
 			"Failed to upload PPtable!", return result);
 
+	/*
+	 * If a custom pp table is loaded, set DPMTABLE_OD_UPDATE_VDDC flag.
+	 * That effectively disables AVFS feature.
+	 */
+	if(hwmgr->hardcode_pp_table != NULL)
+		data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_VDDC;
+
 	vega10_update_avfs(hwmgr);
 
 	data->need_update_dpm_table &= DPMTABLE_OD_UPDATE_VDDC;
@@ -415,6 +415,8 @@ static bool cdv_intel_find_dp_pll(const struct gma_limit_t *limit,
 	struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
 	struct gma_clock_t clock;
 
+	memset(&clock, 0, sizeof(clock));
+
 	switch (refclk) {
 	case 27000:
 		if (target < 200000) {
@@ -1474,18 +1474,31 @@ static const struct adreno_gpu_funcs funcs = {
 static void check_speed_bin(struct device *dev)
 {
 	struct nvmem_cell *cell;
-	u32 bin, val;
+	u32 val;
+
+	/*
+	 * If the OPP table specifies a opp-supported-hw property then we have
+	 * to set something with dev_pm_opp_set_supported_hw() or the table
+	 * doesn't get populated so pick an arbitrary value that should
+	 * ensure the default frequencies are selected but not conflict with any
+	 * actual bins
+	 */
+	val = 0x80;
 
 	cell = nvmem_cell_get(dev, "speed_bin");
 
-	/* If a nvmem cell isn't defined, nothing to do */
-	if (IS_ERR(cell))
-		return;
+	if (!IS_ERR(cell)) {
+		void *buf = nvmem_cell_read(cell, NULL);
+
+		if (!IS_ERR(buf)) {
+			u8 bin = *((u8 *) buf);
 
-	bin = *((u32 *) nvmem_cell_read(cell, NULL));
-	nvmem_cell_put(cell);
+			val = (1 << bin);
+			kfree(buf);
+		}
 
-	val = (1 << bin);
+		nvmem_cell_put(cell);
+	}
 
 	dev_pm_opp_set_supported_hw(dev, &val, 1);
 }
@@ -495,8 +495,10 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	if (!dev->dma_parms) {
 		dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms),
 					      GFP_KERNEL);
-		if (!dev->dma_parms)
-			return -ENOMEM;
+		if (!dev->dma_parms) {
+			ret = -ENOMEM;
+			goto err_msm_uninit;
+		}
 	}
 	dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
 
@@ -909,8 +909,10 @@ nv50_mstc_detect(struct drm_connector *connector, bool force)
 		return connector_status_disconnected;
 
 	ret = pm_runtime_get_sync(connector->dev->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(connector->dev->dev);
 		return connector_status_disconnected;
+	}
 
 	conn_status = drm_dp_mst_detect_port(connector, mstc->port->mgr,
 					     mstc->port);
@@ -161,8 +161,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
 	}
 
 	ret = pm_runtime_get_sync(drm->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(drm->dev);
 		return ret;
+	}
 
 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
 	pm_runtime_put_autosuspend(drm->dev);
 	if (ret < 0)
@@ -82,8 +82,10 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv)
 		return ret;
 
 	ret = pm_runtime_get_sync(dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(dev);
 		goto out;
+	}
 
 	ret = nouveau_vma_new(nvbo, &cli->vmm, &vma);
 	pm_runtime_mark_last_busy(dev);
@@ -101,9 +101,13 @@ platform_init(struct nvkm_bios *bios, const char *name)
 	else
 		return ERR_PTR(-ENODEV);
 
+	if (!pdev->rom || pdev->romlen == 0)
+		return ERR_PTR(-ENODEV);
+
 	if ((priv = kmalloc(sizeof(*priv), GFP_KERNEL))) {
+		priv->size = pdev->romlen;
 		if (ret = -ENODEV,
-		    (priv->rom = pci_platform_rom(pdev, &priv->size)))
+		    (priv->rom = ioremap(pdev->rom, pdev->romlen)))
 			return priv;
 		kfree(priv);
 	}
@@ -111,11 +115,20 @@ platform_init(struct nvkm_bios *bios, const char *name)
 	return ERR_PTR(ret);
 }
 
+static void
+platform_fini(void *data)
+{
+	struct priv *priv = data;
+
+	iounmap(priv->rom);
+	kfree(priv);
+}
+
 const struct nvbios_source
 nvbios_platform = {
 	.name = "PLATFORM",
 	.init = platform_init,
-	.fini = (void(*)(void *))kfree,
+	.fini = platform_fini,
 	.read = pcirom_read,
 	.rw = true,
 };
@@ -193,7 +193,7 @@ static int __init omapdss_boot_init(void)
 	dss = of_find_matching_node(NULL, omapdss_of_match);
 
 	if (dss == NULL || !of_device_is_available(dss))
-		return 0;
+		goto put_node;
 
 	omapdss_walk_device(dss, true);
 
@@ -218,6 +218,8 @@ static int __init omapdss_boot_init(void)
 		kfree(n);
 	}
 
+put_node:
+	of_node_put(dss);
 	return 0;
 }
 
@@ -104,25 +104,33 @@ static bool radeon_read_bios(struct radeon_device *rdev)
 
 static bool radeon_read_platform_bios(struct radeon_device *rdev)
 {
-	uint8_t __iomem *bios;
-	size_t size;
+	phys_addr_t rom = rdev->pdev->rom;
+	size_t romlen = rdev->pdev->romlen;
+	void __iomem *bios;
 
 	rdev->bios = NULL;
 
-	bios = pci_platform_rom(rdev->pdev, &size);
-	if (!bios) {
+	if (!rom || romlen == 0)
 		return false;
-	}
 
-	if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
+	rdev->bios = kzalloc(romlen, GFP_KERNEL);
+	if (!rdev->bios)
 		return false;
-	}
-	rdev->bios = kmemdup(bios, size, GFP_KERNEL);
-	if (rdev->bios == NULL) {
-		return false;
-	}
+
+	bios = ioremap(rom, romlen);
+	if (!bios)
+		goto free_bios;
+
+	memcpy_fromio(rdev->bios, bios, romlen);
+	iounmap(bios);
+
+	if (rdev->bios[0] != 0x55 || rdev->bios[1] != 0xaa)
+		goto free_bios;
 
 	return true;
+
+free_bios:
+	kfree(rdev->bios);
+	return false;
 }
 
 #ifdef CONFIG_ACPI
@@ -14,7 +14,7 @@ struct sun8i_mixer;
 
 /* VI channel CSC units offsets */
 #define CCSC00_OFFSET 0xAA050
-#define CCSC01_OFFSET 0xFA000
+#define CCSC01_OFFSET 0xFA050
 #define CCSC10_OFFSET 0xA0000
 #define CCSC11_OFFSET 0xF0000
 
@@ -1134,6 +1134,7 @@ static int vc4_hdmi_audio_init(struct vc4_hdmi *hdmi)
 	card->num_links = 1;
 	card->name = "vc4-hdmi";
 	card->dev = dev;
+	card->owner = THIS_MODULE;
 
 	/*
 	 * Be careful, snd_soc_register_card() calls dev_set_drvdata() and
@@ -1292,8 +1292,8 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
 
 	/* create pre-declared device nodes */
 	of_i2c_register_devices(adap);
-	i2c_acpi_register_devices(adap);
 	i2c_acpi_install_space_handler(adap);
+	i2c_acpi_register_devices(adap);
 
 	if (adap->nr < __i2c_first_dynamic_bus_num)
 		i2c_scan_static_board_info(adap);
@@ -1100,14 +1100,22 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
 		break;
 	}
 
-	spin_lock_irq(&cm.lock);
+	spin_lock_irq(&cm_id_priv->lock);
+	spin_lock(&cm.lock);
+	/* Required for cleanup paths related cm_req_handler() */
+	if (cm_id_priv->timewait_info) {
+		cm_cleanup_timewait(cm_id_priv->timewait_info);
+		kfree(cm_id_priv->timewait_info);
+		cm_id_priv->timewait_info = NULL;
+	}
 	if (!list_empty(&cm_id_priv->altr_list) &&
 	    (!cm_id_priv->altr_send_port_not_ready))
 		list_del(&cm_id_priv->altr_list);
 	if (!list_empty(&cm_id_priv->prim_list) &&
 	    (!cm_id_priv->prim_send_port_not_ready))
 		list_del(&cm_id_priv->prim_list);
-	spin_unlock_irq(&cm.lock);
+	spin_unlock(&cm.lock);
+	spin_unlock_irq(&cm_id_priv->lock);
 
 	cm_free_id(cm_id->local_id);
 	cm_deref_id(cm_id_priv);
@@ -1424,7 +1432,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 	/* Verify that we're not in timewait. */
 	cm_id_priv = container_of(cm_id, struct cm_id_private, id);
 	spin_lock_irqsave(&cm_id_priv->lock, flags);
-	if (cm_id->state != IB_CM_IDLE) {
+	if (cm_id->state != IB_CM_IDLE || WARN_ON(cm_id_priv->timewait_info)) {
 		spin_unlock_irqrestore(&cm_id_priv->lock, flags);
 		ret = -EINVAL;
 		goto out;
@@ -1442,12 +1450,12 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 			      param->ppath_sgid_attr, &cm_id_priv->av,
 			      cm_id_priv);
 	if (ret)
-		goto error1;
+		goto out;
 	if (param->alternate_path) {
 		ret = cm_init_av_by_path(param->alternate_path, NULL,
 					 &cm_id_priv->alt_av, cm_id_priv);
 		if (ret)
-			goto error1;
+			goto out;
 	}
 	cm_id->service_id = param->service_id;
 	cm_id->service_mask = ~cpu_to_be64(0);
@@ -1465,7 +1473,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 
 	ret = cm_alloc_msg(cm_id_priv, &cm_id_priv->msg);
 	if (ret)
-		goto error1;
+		goto out;
 
 	req_msg = (struct cm_req_msg *) cm_id_priv->msg->mad;
 	cm_format_req(req_msg, cm_id_priv, param);
@@ -1488,7 +1496,6 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 	return 0;
 
 error2:	cm_free_msg(cm_id_priv->msg);
-error1:	kfree(cm_id_priv->timewait_info);
 out:	return ret;
 }
 EXPORT_SYMBOL(ib_send_cm_req);
@@ -1973,7 +1980,7 @@ static int cm_req_handler(struct cm_work *work)
 		pr_debug("%s: local_id %d, no listen_cm_id_priv\n", __func__,
 			 be32_to_cpu(cm_id->local_id));
 		ret = -EINVAL;
-		goto free_timeinfo;
+		goto destroy;
 	}
 
 	cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler;
@@ -2057,8 +2064,6 @@ static int cm_req_handler(struct cm_work *work)
 rejected:
 	atomic_dec(&cm_id_priv->refcount);
 	cm_deref_id(listen_cm_id_priv);
-free_timeinfo:
-	kfree(cm_id_priv->timewait_info);
 destroy:
 	ib_destroy_cm_id(cm_id);
 	return ret;
|
@@ -3293,7 +3293,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		if (raddr->sin_addr.s_addr == htonl(INADDR_ANY)) {
 			err = pick_local_ipaddrs(dev, cm_id);
 			if (err)
-				goto fail2;
+				goto fail3;
 		}
 
 		/* find a route */
@@ -3315,7 +3315,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		if (ipv6_addr_type(&raddr6->sin6_addr) == IPV6_ADDR_ANY) {
 			err = pick_local_ip6addrs(dev, cm_id);
 			if (err)
-				goto fail2;
+				goto fail3;
 		}
 
 		/* find a route */
@@ -2071,9 +2071,9 @@ static int i40iw_addr_resolve_neigh_ipv6(struct i40iw_device *iwdev,
 	dst = i40iw_get_dst_ipv6(&src_addr, &dst_addr);
 	if (!dst || dst->error) {
 		if (dst) {
-			dst_release(dst);
 			i40iw_pr_err("ip6_route_output returned dst->error = %d\n",
 				     dst->error);
+			dst_release(dst);
 		}
 		return rc;
 	}
@@ -460,10 +460,10 @@ qedr_addr6_resolve(struct qedr_dev *dev,
 
 	if ((!dst) || dst->error) {
 		if (dst) {
-			dst_release(dst);
 			DP_ERR(dev,
 			       "ip6_route_output returned dst->error = %d\n",
 			       dst->error);
+			dst_release(dst);
 		}
 		return -EINVAL;
 	}
@@ -121,6 +121,8 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
 	rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN;
 	rxe->attr.max_pkeys = RXE_MAX_PKEYS;
 	rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY;
+	addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
+			rxe->ndev->dev_addr);
 
 	rxe->max_ucontext = RXE_MAX_UCONTEXT;
 }
@@ -583,15 +583,16 @@ int rxe_qp_from_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask,
 	int err;
 
 	if (mask & IB_QP_MAX_QP_RD_ATOMIC) {
-		int max_rd_atomic = __roundup_pow_of_two(attr->max_rd_atomic);
+		int max_rd_atomic = attr->max_rd_atomic ?
+			roundup_pow_of_two(attr->max_rd_atomic) : 0;
 
 		qp->attr.max_rd_atomic = max_rd_atomic;
 		atomic_set(&qp->req.rd_atomic, max_rd_atomic);
 	}
 
 	if (mask & IB_QP_MAX_DEST_RD_ATOMIC) {
-		int max_dest_rd_atomic =
-			__roundup_pow_of_two(attr->max_dest_rd_atomic);
+		int max_dest_rd_atomic = attr->max_dest_rd_atomic ?
+			roundup_pow_of_two(attr->max_dest_rd_atomic) : 0;
 
 		qp->attr.max_dest_rd_atomic = max_dest_rd_atomic;
 
@@ -209,8 +209,8 @@ static int mlxreg_led_config(struct mlxreg_led_priv_data *priv)
 			brightness = LED_OFF;
 			led_data->base_color = MLXREG_LED_GREEN_SOLID;
 		}
-		sprintf(led_data->led_cdev_name, "%s:%s", "mlxreg",
-			data->label);
+		snprintf(led_data->led_cdev_name, sizeof(led_data->led_cdev_name),
+			 "mlxreg:%s", data->label);
 		led_cdev->name = led_data->led_cdev_name;
 		led_cdev->brightness = brightness;
 		led_cdev->max_brightness = LED_ON;
@@ -585,6 +585,7 @@ struct cache_set {
 	 */
 	wait_queue_head_t	btree_cache_wait;
 	struct task_struct	*btree_cache_alloc_lock;
+	spinlock_t		btree_cannibalize_lock;
 
 	/*
 	 * When we free a btree node, we increment the gen of the bucket the
@@ -876,15 +876,17 @@ static struct btree *mca_find(struct cache_set *c, struct bkey *k)
 
 static int mca_cannibalize_lock(struct cache_set *c, struct btree_op *op)
 {
-	struct task_struct *old;
-
-	old = cmpxchg(&c->btree_cache_alloc_lock, NULL, current);
-	if (old && old != current) {
+	spin_lock(&c->btree_cannibalize_lock);
+	if (likely(c->btree_cache_alloc_lock == NULL)) {
+		c->btree_cache_alloc_lock = current;
+	} else if (c->btree_cache_alloc_lock != current) {
 		if (op)
 			prepare_to_wait(&c->btree_cache_wait, &op->wait,
 					TASK_UNINTERRUPTIBLE);
+		spin_unlock(&c->btree_cannibalize_lock);
 		return -EINTR;
 	}
+	spin_unlock(&c->btree_cannibalize_lock);
 
 	return 0;
 }
@@ -919,10 +921,12 @@ static struct btree *mca_cannibalize(struct cache_set *c, struct btree_op *op,
  */
 static void bch_cannibalize_unlock(struct cache_set *c)
 {
+	spin_lock(&c->btree_cannibalize_lock);
 	if (c->btree_cache_alloc_lock == current) {
 		c->btree_cache_alloc_lock = NULL;
 		wake_up(&c->btree_cache_wait);
 	}
+	spin_unlock(&c->btree_cannibalize_lock);
 }
 
 static struct btree *mca_alloc(struct cache_set *c, struct btree_op *op,
@@ -1737,6 +1737,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	sema_init(&c->sb_write_mutex, 1);
 	mutex_init(&c->bucket_lock);
 	init_waitqueue_head(&c->btree_cache_wait);
+	spin_lock_init(&c->btree_cannibalize_lock);
 	init_waitqueue_head(&c->bucket_wait);
 	init_waitqueue_head(&c->gc_wait);
 	sema_init(&c->uuid_write_mutex, 1);
@@ -483,10 +483,11 @@ static int tda10071_read_status(struct dvb_frontend *fe, enum fe_status *status)
 			goto error;
 
 		if (dev->delivery_system == SYS_DVBS) {
-			dev->dvbv3_ber = buf[0] << 24 | buf[1] << 16 |
-				buf[2] << 8 | buf[3] << 0;
-			dev->post_bit_error += buf[0] << 24 | buf[1] << 16 |
-				buf[2] << 8 | buf[3] << 0;
+			u32 bit_error = buf[0] << 24 | buf[1] << 16 |
+				buf[2] << 8 | buf[3] << 0;
+
+			dev->dvbv3_ber = bit_error;
+			dev->post_bit_error += bit_error;
 			c->post_bit_error.stat[0].scale = FE_SCALE_COUNTER;
 			c->post_bit_error.stat[0].uvalue = dev->post_bit_error;
 			dev->block_error += buf[4] << 8 | buf[5] << 0;
@@ -2337,11 +2337,12 @@ smiapp_sysfs_nvm_read(struct device *dev, struct device_attribute *attr,
 	if (rval < 0) {
 		if (rval != -EBUSY && rval != -EAGAIN)
 			pm_runtime_set_active(&client->dev);
-		pm_runtime_put(&client->dev);
+		pm_runtime_put_noidle(&client->dev);
 		return -ENODEV;
 	}
 
 	if (smiapp_read_nvm(sensor, sensor->nvm)) {
+		pm_runtime_put(&client->dev);
 		dev_err(&client->dev, "nvm read failed\n");
 		return -ENODEV;
 	}