This is the 4.19.143 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl9QtnYACgkQONu9yGCS
aT6d3hAA0SGXTk13kxCTzOTOh7hhZJSI6a+JL64Cj/o8IkaoaCFMjLevcuYMAWh1
LARaLjPy7MNm1fAy6LPaQcLwRax2Ocwyl27x3U3IrM4/Fos/r0wkn4Ek6IJVBD0H
FqF4VHRoLt0IUhOTdsdGqv4YHRhE/l8dFHXVencTVE8dAB5QUUpI8XwKruk8HlOD
L2h1gF6x8yV18lt3I6kIA3+n9ImMSNO65OxwXUTgu0cZoyk35byj1bbgu8mkZPkk
s7Y5oBS5CorhBYFP+D6Av5e9LOP4jzvwPqCeLLCIa5idM277afyt6dKnwBcdK4w/
Y10AIlGeji0xaAD4Xv2SnjiY6lFtA5DF8gg8zLsjdjgPyELrZOdNOxJPhckL8Fbj
u9oeWerJPBgI1bEtaWUihRSo31dedp8VAi87aRdwMkNIdBrXLo9tdv+waWTm8YPi
0kbG+p/Cp7Z8SqG6dEJsLxnes2Spd5RohRsYET/L3adl5B/IdYVxuHF0Lc2U/5AM
+7FvisuqjeDS0o8ZpAP8F0wpvqDIhD+5Iy2NkT3/HcgzyYYd9q4+L5szoARN4Dzn
pIm/Y9UyvXxgYNUSvVl5H1hn4JJR0WuxgiBYoUrZGc5w5Ey5f8M9hOM90lfu6MWO
YWbLVEIui+jW9pkV4SmO71zkR+OI6u1I2YSTYGyTvXnyD+YL44w=
=416Q
-----END PGP SIGNATURE-----

Merge 4.19.143 into android-4.19-stable

Changes in 4.19.143
    powerpc/64s: Don't init FSCR_DSCR in __init_FSCR()
    gre6: Fix reception with IP6_TNL_F_RCV_DSCP_COPY
    net: Fix potential wrong skb->protocol in skb_vlan_untag()
    net: qrtr: fix usage of idr in port assignment to socket
    net/smc: Prevent kernel-infoleak in __smc_diag_dump()
    tipc: fix uninit skb->data in tipc_nl_compat_dumpit()
    net: ena: Make missed_tx stat incremental
    ipvlan: fix device features
    ALSA: pci: delete repeated words in comments
    ASoC: img: Fix a reference count leak in img_i2s_in_set_fmt
    ASoC: img-parallel-out: Fix a reference count leak
    ASoC: tegra: Fix reference count leaks.
    mfd: intel-lpss: Add Intel Emmitsburg PCH PCI IDs
    arm64: dts: qcom: msm8916: Pull down PDM GPIOs during sleep
    powerpc/xive: Ignore kmemleak false positives
    media: pci: ttpci: av7110: fix possible buffer overflow caused by bad DMA value in debiirq()
    blktrace: ensure our debugfs dir exists
    scsi: target: tcmu: Fix crash on ARM during cmd completion
    iommu/iova: Don't BUG on invalid PFNs
    drm/amdkfd: Fix reference count leaks.
    drm/radeon: fix multiple reference count leak
    drm/amdgpu: fix ref count leak in amdgpu_driver_open_kms
    drm/amd/display: fix ref count leak in amdgpu_drm_ioctl
    drm/amdgpu: fix ref count leak in amdgpu_display_crtc_set_config
    drm/amdgpu/display: fix ref count leak when pm_runtime_get_sync fails
    scsi: lpfc: Fix shost refcount mismatch when deleting vport
    xfs: Don't allow logging of XFS_ISTALE inodes
    selftests/powerpc: Purge extra count_pmc() calls of ebb selftests
    f2fs: fix error path in do_recover_data()
    omapfb: fix multiple reference count leaks due to pm_runtime_get_sync
    PCI: Fix pci_create_slot() reference count leak
    ARM: dts: ls1021a: output PPS signal on FIPER2
    rtlwifi: rtl8192cu: Prevent leaking urb
    mips/vdso: Fix resource leaks in genvdso.c
    cec-api: prevent leaking memory through hole in structure
    HID: quirks: add NOGET quirk for Logitech GROUP
    f2fs: fix use-after-free issue
    drm/nouveau/drm/noveau: fix reference count leak in nouveau_fbcon_open
    drm/nouveau: fix reference count leak in nv50_disp_atomic_commit
    drm/nouveau: Fix reference count leak in nouveau_connector_detect
    locking/lockdep: Fix overflow in presentation of average lock-time
    btrfs: file: reserve qgroup space after the hole punch range is locked
    scsi: iscsi: Do not put host in iscsi_set_flashnode_param()
    ceph: fix potential mdsc use-after-free crash
    scsi: fcoe: Memory leak fix in fcoe_sysfs_fcf_del()
    EDAC/ie31200: Fallback if host bridge device is already initialized
    KVM: arm64: Fix symbol dependency in __hyp_call_panic_nvhe
    powerpc/spufs: add CONFIG_COREDUMP dependency
    USB: sisusbvga: Fix a potential UB casued by left shifting a negative value
    efi: provide empty efi_enter_virtual_mode implementation
    Revert "ath10k: fix DMA related firmware crashes on multiple devices"
    media: gpio-ir-tx: improve precision of transmitted signal due to scheduling
    drm/msm/adreno: fix updating ring fence
    nvme-fc: Fix wrong return value in __nvme_fc_init_request()
    null_blk: fix passing of REQ_FUA flag in null_handle_rq
    i2c: rcar: in slave mode, clear NACK earlier
    usb: gadget: f_tcm: Fix some resource leaks in some error paths
    jbd2: make sure jh have b_transaction set in refile/unfile_buffer
    ext4: don't BUG on inconsistent journal feature
    ext4: handle read only external journal device
    jbd2: abort journal if free a async write error metadata buffer
    ext4: handle option set by mount flags correctly
    ext4: handle error of ext4_setup_system_zone() on remount
    ext4: correctly restore system zone info when remount fails
    fs: prevent BUG_ON in submit_bh_wbc()
    spi: stm32: fix stm32_spi_prepare_mbr in case of odd clk_rate
    s390/cio: add cond_resched() in the slow_eval_known_fn() loop
    ASoC: wm8994: Avoid attempts to read unreadable registers
    scsi: fcoe: Fix I/O path allocation
    scsi: ufs: Fix possible infinite loop in ufshcd_hold
    scsi: ufs: Improve interrupt handling for shared interrupts
    scsi: ufs: Clean up completed request without interrupt notification
    scsi: qla2xxx: Check if FW supports MQ before enabling
    scsi: qla2xxx: Fix null pointer access during disconnect from subsystem
    Revert "scsi: qla2xxx: Fix crash on qla2x00_mailbox_command"
    macvlan: validate setting of multiple remote source MAC addresses
    net: gianfar: Add of_node_put() before goto statement
    powerpc/perf: Fix soft lockups due to missed interrupt accounting
    block: loop: set discard granularity and alignment for block device backed loop
    HID: i2c-hid: Always sleep 60ms after I2C_HID_PWR_ON commands
    blk-mq: order adding requests to hctx->dispatch and checking SCHED_RESTART
    btrfs: reset compression level for lzo on remount
    btrfs: fix space cache memory leak after transaction abort
    fbcon: prevent user font height or width change from causing potential out-of-bounds access
    USB: lvtest: return proper error code in probe
    vt: defer kfree() of vc_screenbuf in vc_do_resize()
    vt_ioctl: change VT_RESIZEX ioctl to check for error return from vc_resize()
    serial: samsung: Removes the IRQ not found warning
    serial: pl011: Fix oops on -EPROBE_DEFER
    serial: pl011: Don't leak amba_ports entry on driver register error
    serial: 8250_exar: Fix number of ports for Commtech PCIe cards
    serial: 8250: change lock order in serial8250_do_startup()
    writeback: Protect inode->i_io_list with inode->i_lock
    writeback: Avoid skipping inode writeback
    writeback: Fix sync livelock due to b_dirty_time processing
    XEN uses irqdesc::irq_data_common::handler_data to store a per interrupt XEN data pointer which contains XEN specific information.
    usb: host: xhci: fix ep context print mismatch in debugfs
    xhci: Do warm-reset when both CAS and XDEV_RESUME are set
    xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed
    PM: sleep: core: Fix the handling of pending runtime resume requests
    device property: Fix the secondary firmware node handling in set_primary_fwnode()
    genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
    irqchip/stm32-exti: Avoid losing interrupts due to clearing pending bits by mistake
    drm/amdgpu: Fix buffer overflow in INFO ioctl
    drm/amd/pm: correct Vega10 swctf limit setting
    drm/amd/pm: correct Vega12 swctf limit setting
    USB: yurex: Fix bad gfp argument
    usb: uas: Add quirk for PNY Pro Elite
    USB: quirks: Add no-lpm quirk for another Raydium touchscreen
    USB: quirks: Ignore duplicate endpoint on Sound Devices MixPre-D
    USB: Ignore UAS for JMicron JMS567 ATA/ATAPI Bridge
    usb: host: ohci-exynos: Fix error handling in exynos_ohci_probe()
    USB: gadget: u_f: add overflow checks to VLA macros
    USB: gadget: f_ncm: add bounds checks to ncm_unwrap_ntb()
    USB: gadget: u_f: Unbreak offset calculation in VLAs
    USB: cdc-acm: rework notification_buffer resizing
    usb: storage: Add unusual_uas entry for Sony PSZ drives
    btrfs: check the right error variable in btrfs_del_dir_entries_in_log
    usb: dwc3: gadget: Don't setup more than requested
    usb: dwc3: gadget: Fix handling ZLP
    usb: dwc3: gadget: Handle ZLP for sg requests
    tpm: Unify the mismatching TPM space buffer sizes
    HID: hiddev: Fix slab-out-of-bounds write in hiddev_ioctl_usage()
    ALSA: usb-audio: Update documentation comment for MS2109 quirk
    Linux 4.19.143

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I8b6e29eda77bd69df30132842cf28019c8e7c1a3
This commit is contained in: commit a13ec5ea86
142 changed files with 1047 additions and 425 deletions
Makefile (2 changed lines)

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 142
+SUBLEVEL = 143
 EXTRAVERSION =
 NAME = "People's Front"
[hunk from "ARM: dts: ls1021a: output PPS signal on FIPER2"]

@@ -609,7 +609,7 @@
 			fsl,tmr-prsc    = <2>;
 			fsl,tmr-add     = <0xaaaaaaab>;
 			fsl,tmr-fiper1  = <999999995>;
-			fsl,tmr-fiper2  = <99990>;
+			fsl,tmr-fiper2  = <999999995>;
 			fsl,max-adj     = <499999999>;
 		};
[hunk from "arm64: dts: qcom: msm8916: Pull down PDM GPIOs during sleep"]

@@ -529,7 +529,7 @@
 				pins = "gpio63", "gpio64", "gpio65", "gpio66",
 				       "gpio67", "gpio68";
 				drive-strength = <2>;
-				bias-disable;
+				bias-pull-down;
 			};
 		};
 	};
[hunk from "KVM: arm64: Fix symbol dependency in __hyp_call_panic_nvhe"]

@@ -626,7 +626,7 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
	 * making sure it is a kernel address and not a PC-relative
	 * reference.
	 */
-	asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
+	asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string));

	__hyp_do_panic(str_va,
		       spsr, elr,
[hunks from "mips/vdso: Fix resource leaks in genvdso.c"]

@@ -126,6 +126,7 @@ static void *map_vdso(const char *path, size_t *_size)
 	if (fstat(fd, &stat) != 0) {
 		fprintf(stderr, "%s: Failed to stat '%s': %s\n", program_name,
 			path, strerror(errno));
+		close(fd);
 		return NULL;
 	}

@@ -134,6 +135,7 @@ static void *map_vdso(const char *path, size_t *_size)
 	if (addr == MAP_FAILED) {
 		fprintf(stderr, "%s: Failed to map '%s': %s\n", program_name,
 			path, strerror(errno));
+		close(fd);
 		return NULL;
 	}

@@ -143,6 +145,7 @@ static void *map_vdso(const char *path, size_t *_size)
 	if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0) {
 		fprintf(stderr, "%s: '%s' is not an ELF file\n", program_name,
 			path);
+		close(fd);
 		return NULL;
 	}

@@ -154,6 +157,7 @@ static void *map_vdso(const char *path, size_t *_size)
 	default:
 		fprintf(stderr, "%s: '%s' has invalid ELF class\n",
			program_name, path);
+		close(fd);
 		return NULL;
 	}

@@ -165,6 +169,7 @@ static void *map_vdso(const char *path, size_t *_size)
 	default:
 		fprintf(stderr, "%s: '%s' has invalid ELF data order\n",
			program_name, path);
+		close(fd);
 		return NULL;
 	}

@@ -172,15 +177,18 @@ static void *map_vdso(const char *path, size_t *_size)
 		fprintf(stderr,
 			"%s: '%s' has invalid ELF machine (expected EM_MIPS)\n",
 			program_name, path);
+		close(fd);
 		return NULL;
 	} else if (swap_uint16(ehdr->e_type) != ET_DYN) {
 		fprintf(stderr,
 			"%s: '%s' has invalid ELF type (expected ET_DYN)\n",
 			program_name, path);
+		close(fd);
 		return NULL;
 	}

 	*_size = stat.st_size;
+	close(fd);
 	return addr;
 }

@@ -284,10 +292,12 @@ int main(int argc, char **argv)
 	/* Calculate and write symbol offsets to <output file> */
 	if (!get_symbols(dbg_vdso_path, dbg_vdso)) {
 		unlink(out_path);
+		fclose(out_file);
 		return EXIT_FAILURE;
 	}

 	fprintf(out_file, "};\n");
+	fclose(out_file);

 	return EXIT_SUCCESS;
 }
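The genvdso fix above adds a `close(fd)` to every early-return path. An alternative idiom that avoids having to remember each path is to funnel all exits through one cleanup point. A minimal userspace sketch (this helper is illustrative, not from the kernel tree):

```c
#include <fcntl.h>
#include <unistd.h>

/* Read the first byte of a file; every exit path after open()
 * reaches the single close() below, so the descriptor cannot leak
 * the way the early returns in genvdso.c did. */
int read_first_byte(const char *path, unsigned char *out)
{
	int ret = -1;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	if (read(fd, out, 1) != 1)
		goto out;	/* error path still reaches close() */
	ret = 0;
out:
	close(fd);
	return ret;
}
```

The per-path `close(fd)` used in the patch and the single-exit `goto` form are equivalent; the latter just makes the leak harder to reintroduce.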
[hunk from "powerpc/64s: Don't init FSCR_DSCR in __init_FSCR()"]

@@ -183,7 +183,7 @@ __init_LPCR_ISA300:

 __init_FSCR:
 	mfspr	r3,SPRN_FSCR
-	ori	r3,r3,FSCR_TAR|FSCR_DSCR|FSCR_EBB
+	ori	r3,r3,FSCR_TAR|FSCR_EBB
 	mtspr	SPRN_FSCR,r3
 	blr
[hunk from "powerpc/perf: Fix soft lockups due to missed interrupt accounting"]

@@ -2085,6 +2085,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,

 		if (perf_event_overflow(event, &data, regs))
 			power_pmu_stop(event, 0);
+	} else if (period) {
+		/* Account for interrupt in case of invalid SIAR */
+		if (perf_event_account_interrupt(event))
+			power_pmu_stop(event, 0);
 	}
 }
[hunk from "powerpc/spufs: add CONFIG_COREDUMP dependency"]

@@ -46,6 +46,7 @@ config SPU_FS
 	tristate "SPU file system"
 	default m
 	depends on PPC_CELL
+	depends on COREDUMP
 	select SPU_BASE
 	help
 	  The SPU file system is used to access Synergistic Processing
[hunks from "powerpc/xive: Ignore kmemleak false positives"]

@@ -22,6 +22,7 @@
 #include <linux/delay.h>
 #include <linux/cpumask.h>
 #include <linux/mm.h>
+#include <linux/kmemleak.h>

 #include <asm/prom.h>
 #include <asm/io.h>

@@ -627,6 +628,7 @@ static bool xive_native_provision_pages(void)
 			pr_err("Failed to allocate provisioning page\n");
 			return false;
 		}
+		kmemleak_ignore(p);
 		opal_xive_donate_page(chip, __pa(p));
 	}
 	return true;
[hunk from "blk-mq: order adding requests to hctx->dispatch and checking SCHED_RESTART"]

@@ -69,6 +69,15 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
 		return;
 	clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);

+	/*
+	 * Order clearing SCHED_RESTART and list_empty_careful(&hctx->dispatch)
+	 * in blk_mq_run_hw_queue(). Its pair is the barrier in
+	 * blk_mq_dispatch_rq_list(). So dispatch code won't see SCHED_RESTART,
+	 * meantime new request added to hctx->dispatch is missed to check in
+	 * blk_mq_run_hw_queue().
+	 */
+	smp_mb();
+
 	blk_mq_run_hw_queue(hctx, true);
 }
@@ -1221,6 +1221,15 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		list_splice_init(list, &hctx->dispatch);
 		spin_unlock(&hctx->lock);

+		/*
+		 * Order adding requests to hctx->dispatch and checking
+		 * SCHED_RESTART flag. The pair of this smp_mb() is the one
+		 * in blk_mq_sched_restart(). Avoid restart code path to
+		 * miss the new added requests to hctx->dispatch, meantime
+		 * SCHED_RESTART is observed here.
+		 */
+		smp_mb();
+
 		/*
 		 * If SCHED_RESTART was set by the caller of this function and
 		 * it is no longer set that means that it was cleared by another
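The two `smp_mb()` calls above form the classic store/load pairing: one side publishes a request and then checks the RESTART flag, the other clears the flag and then checks for requests; with a full barrier on each side, at least one side must observe the other's store, so a request cannot be stranded. A userspace C11 analogue of the pattern (a sketch with illustrative names, not the kernel code):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool sched_restart;
static atomic_int  dispatch_len;	/* 0 = queue empty, 1 = request pending */

/* Dispatch side: publish the request, fence, then check the flag. */
void dispatch_side(void)
{
	atomic_store_explicit(&dispatch_len, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* pairs with restart side */
	if (atomic_load_explicit(&sched_restart, memory_order_relaxed))
		atomic_store_explicit(&dispatch_len, 0, memory_order_relaxed); /* rerun */
}

/* Restart side: clear the flag, fence, then check for pending work. */
void restart_side(void)
{
	atomic_store_explicit(&sched_restart, false, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* pairs with dispatch side */
	if (atomic_load_explicit(&dispatch_len, memory_order_relaxed))
		atomic_store_explicit(&dispatch_len, 0, memory_order_relaxed); /* run queue */
}
```

Without the fences, both relaxed loads could miss both relaxed stores and the pending request would never be run, which is exactly the lost-wakeup the patch closes.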
[hunks from "device property: Fix the secondary firmware node handling in set_primary_fwnode()"]

@@ -3777,9 +3777,9 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
  */
 void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
 {
-	if (fwnode) {
-		struct fwnode_handle *fn = dev->fwnode;
+	struct fwnode_handle *fn = dev->fwnode;

+	if (fwnode) {
 		if (fwnode_is_primary(fn))
 			fn = fn->secondary;

@@ -3789,8 +3789,12 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
 		}
 		dev->fwnode = fwnode;
 	} else {
-		dev->fwnode = fwnode_is_primary(dev->fwnode) ?
-			dev->fwnode->secondary : NULL;
+		if (fwnode_is_primary(fn)) {
+			dev->fwnode = fn->secondary;
+			fn->secondary = NULL;
+		} else {
+			dev->fwnode = NULL;
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(set_primary_fwnode);
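The key behavioral change in the fwnode fix is the added `fn->secondary = NULL;`: when the primary node is dropped, the device falls back to the secondary node and the stale link from the old primary is severed. A toy model of that pointer handling (hypothetical structs, not the real fwnode API):

```c
#include <stddef.h>

/* Illustrative stand-ins for fwnode_handle and device. */
struct node { struct node *secondary; int is_primary; };
struct dev  { struct node *fwnode; };

/* Clearing the primary: keep the secondary as the device's node and
 * detach it from the old primary so no dangling link remains. */
void clear_primary(struct dev *dev)
{
	struct node *fn = dev->fwnode;

	if (fn && fn->is_primary) {
		dev->fwnode = fn->secondary;	/* fall back to secondary */
		fn->secondary = NULL;		/* sever the stale link */
	} else {
		dev->fwnode = NULL;
	}
}
```

Before the patch the old primary kept pointing at the secondary, so a later teardown of the primary could reach (and free through) a node the device still used.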
[hunk from "PM: sleep: core: Fix the handling of pending runtime resume requests"]

@@ -1763,13 +1763,17 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 	}

 	/*
-	 * If a device configured to wake up the system from sleep states
-	 * has been suspended at run time and there's a resume request pending
-	 * for it, this is equivalent to the device signaling wakeup, so the
-	 * system suspend operation should be aborted.
+	 * Wait for possible runtime PM transitions of the device in progress
+	 * to complete and if there's a runtime resume request pending for it,
+	 * resume it before proceeding with invoking the system-wide suspend
+	 * callbacks for it.
+	 *
+	 * If the system-wide suspend callbacks below change the configuration
+	 * of the device, they must disable runtime PM for it or otherwise
+	 * ensure that its runtime-resume callbacks will not be confused by that
+	 * change in case they are invoked going forward.
 	 */
-	if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
-		pm_wakeup_event(dev, 0);
+	pm_runtime_barrier(dev);

 	if (pm_wakeup_pending()) {
 		dev->power.direct_complete = false;
[hunks from "block: loop: set discard granularity and alignment for block device backed loop"]

@@ -877,6 +877,7 @@ static void loop_config_discard(struct loop_device *lo)
 	struct file *file = lo->lo_backing_file;
 	struct inode *inode = file->f_mapping->host;
 	struct request_queue *q = lo->lo_queue;
+	u32 granularity, max_discard_sectors;

 	/*
 	 * If the backing device is a block device, mirror its zeroing

@@ -889,11 +890,10 @@ static void loop_config_discard(struct loop_device *lo)
 		struct request_queue *backingq;

 		backingq = bdev_get_queue(inode->i_bdev);
-		blk_queue_max_discard_sectors(q,
-			backingq->limits.max_write_zeroes_sectors);

-		blk_queue_max_write_zeroes_sectors(q,
-			backingq->limits.max_write_zeroes_sectors);
+		max_discard_sectors = backingq->limits.max_write_zeroes_sectors;
+		granularity = backingq->limits.discard_granularity ?:
+			queue_physical_block_size(backingq);

 		/*
 		 * We use punch hole to reclaim the free space used by the

@@ -902,23 +902,26 @@ static void loop_config_discard(struct loop_device *lo)
 	 * useful information.
 	 */
 	} else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
-		q->limits.discard_granularity = 0;
-		q->limits.discard_alignment = 0;
-		blk_queue_max_discard_sectors(q, 0);
-		blk_queue_max_write_zeroes_sectors(q, 0);
+		max_discard_sectors = 0;
+		granularity = 0;

 	} else {
-		q->limits.discard_granularity = inode->i_sb->s_blocksize;
-		q->limits.discard_alignment = 0;
-
-		blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
-		blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
+		max_discard_sectors = UINT_MAX >> 9;
+		granularity = inode->i_sb->s_blocksize;
 	}

-	if (q->limits.max_write_zeroes_sectors)
+	if (max_discard_sectors) {
+		q->limits.discard_granularity = granularity;
+		blk_queue_max_discard_sectors(q, max_discard_sectors);
+		blk_queue_max_write_zeroes_sectors(q, max_discard_sectors);
 		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
-	else
+	} else {
+		q->limits.discard_granularity = 0;
+		blk_queue_max_discard_sectors(q, 0);
+		blk_queue_max_write_zeroes_sectors(q, 0);
 		blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+	}
+	q->limits.discard_alignment = 0;
 }

 static void loop_unprepare_queue(struct loop_device *lo)
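The granularity fallback above uses the GNU C `?:` ("elvis") operator: `a ?: b` evaluates to `a` when `a` is nonzero and to `b` otherwise, without evaluating `a` twice. A minimal illustration with made-up values (the function name and constants are ours, not the kernel's):

```c
/* Mirrors the fallback in loop_config_discard(): prefer the backing
 * queue's discard granularity, fall back to its physical block size
 * when the former is zero. Note `?:` is a GNU C extension. */
unsigned int pick_granularity(unsigned int discard_granularity,
			      unsigned int physical_block_size)
{
	return discard_granularity ?: physical_block_size;
}
```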
[hunk from "null_blk: fix passing of REQ_FUA flag in null_handle_rq"]

@@ -1086,7 +1086,7 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 		len = bvec.bv_len;
 		err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
 				     op_is_write(req_op(rq)), sector,
-				     req_op(rq) & REQ_FUA);
+				     rq->cmd_flags & REQ_FUA);
 		if (err) {
 			spin_unlock_irq(&nullb->lock);
 			return err;
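The one-line null_blk change matters because `req_op()` extracts only the low operation bits of `cmd_flags`, so masking its result with a high flag bit like `REQ_FUA` always yields zero; the flag must be tested against the full `cmd_flags` word. A self-contained sketch (the bit positions below mirror the kernel's layout but are illustrative):

```c
#include <stdbool.h>

#define REQ_OP_BITS	8
#define REQ_OP_MASK	((1u << REQ_OP_BITS) - 1)
#define REQ_FUA		(1u << 18)	/* illustrative flag bit above the op field */

/* Like the kernel's req_op(): the operation only, flag bits stripped. */
static unsigned int req_op(unsigned int cmd_flags)
{
	return cmd_flags & REQ_OP_MASK;
}

/* The buggy test: req_op() can never carry REQ_FUA, so this is always false. */
bool fua_wrong(unsigned int cmd_flags) { return req_op(cmd_flags) & REQ_FUA; }

/* The fixed test: check the flag against the full flags word. */
bool fua_right(unsigned int cmd_flags) { return cmd_flags & REQ_FUA; }
```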
[hunks from "tpm: Unify the mismatching TPM space buffer sizes"]

@@ -276,13 +276,8 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
 	chip->cdev.owner = THIS_MODULE;
 	chip->cdevs.owner = THIS_MODULE;

-	chip->work_space.context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!chip->work_space.context_buf) {
-		rc = -ENOMEM;
-		goto out;
-	}
-	chip->work_space.session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!chip->work_space.session_buf) {
+	rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
+	if (rc) {
 		rc = -ENOMEM;
 		goto out;
 	}
@@ -188,6 +188,7 @@ struct tpm_space {
 	u8 *context_buf;
 	u32 session_tbl[3];
 	u8 *session_buf;
+	u32 buf_size;
 };

 enum tpm_chip_flags {

@@ -278,6 +279,9 @@ struct tpm_output_header {

 #define TPM_TAG_RQU_COMMAND 193

+/* TPM2 specific constants. */
+#define TPM2_SPACE_BUFFER_SIZE		16384 /* 16 kB */
+
 struct stclear_flags_t {
 	__be16	tag;
 	u8	deactivated;

@@ -595,7 +599,7 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type);
 unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
 int tpm2_probe(struct tpm_chip *chip);
 int tpm2_find_cc(struct tpm_chip *chip, u32 cc);
-int tpm2_init_space(struct tpm_space *space);
+int tpm2_init_space(struct tpm_space *space, unsigned int buf_size);
 void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space);
 int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc,
 		       u8 *cmd);
@@ -43,18 +43,21 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
 	}
 }

-int tpm2_init_space(struct tpm_space *space)
+int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
 {
-	space->context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	space->context_buf = kzalloc(buf_size, GFP_KERNEL);
 	if (!space->context_buf)
 		return -ENOMEM;

-	space->session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	space->session_buf = kzalloc(buf_size, GFP_KERNEL);
 	if (space->session_buf == NULL) {
 		kfree(space->context_buf);
+		/* Prevent caller getting a dangling pointer. */
+		space->context_buf = NULL;
 		return -ENOMEM;
 	}

+	space->buf_size = buf_size;
 	return 0;
 }
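`tpm2_init_space()` above shows the paired-allocation pattern: when the second of two allocations fails, the first is freed and, crucially, nulled out so the caller cannot double-free or dereference a dangling pointer. A userspace sketch of the same shape (the struct is a stand-in, not the real `tpm_space`):

```c
#include <stdlib.h>

struct space {
	unsigned char *context_buf;
	unsigned char *session_buf;
	unsigned int buf_size;
};

/* Allocate both buffers or neither; on partial failure, free AND
 * null the first buffer, mirroring the fix in tpm2_init_space(). */
int space_init(struct space *s, unsigned int buf_size)
{
	s->context_buf = calloc(1, buf_size);
	if (!s->context_buf)
		return -1;

	s->session_buf = calloc(1, buf_size);
	if (!s->session_buf) {
		free(s->context_buf);
		s->context_buf = NULL;	/* prevent a dangling pointer */
		return -1;
	}

	s->buf_size = buf_size;
	return 0;
}
```

Leaving `context_buf` pointing at freed memory would let a caller's own error path free it again; nulling it makes any later `free()` or NULL check safe.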
@@ -276,8 +279,10 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc,
 	       sizeof(space->context_tbl));
 	memcpy(&chip->work_space.session_tbl, &space->session_tbl,
 	       sizeof(space->session_tbl));
-	memcpy(chip->work_space.context_buf, space->context_buf, PAGE_SIZE);
-	memcpy(chip->work_space.session_buf, space->session_buf, PAGE_SIZE);
+	memcpy(chip->work_space.context_buf, space->context_buf,
+	       space->buf_size);
+	memcpy(chip->work_space.session_buf, space->session_buf,
+	       space->buf_size);

 	rc = tpm2_load_space(chip);
 	if (rc) {

@@ -456,7 +461,7 @@ static int tpm2_save_space(struct tpm_chip *chip)
 			continue;

 		rc = tpm2_save_context(chip, space->context_tbl[i],
-				       space->context_buf, PAGE_SIZE,
+				       space->context_buf, space->buf_size,
 				       &offset);
 		if (rc == -ENOENT) {
 			space->context_tbl[i] = 0;

@@ -474,9 +479,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
 			continue;

 		rc = tpm2_save_context(chip, space->session_tbl[i],
-				       space->session_buf, PAGE_SIZE,
+				       space->session_buf, space->buf_size,
 				       &offset);
-
 		if (rc == -ENOENT) {
 			/* handle error saving session, just forget it */
 			space->session_tbl[i] = 0;

@@ -522,8 +526,10 @@ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space,
 	       sizeof(space->context_tbl));
 	memcpy(&space->session_tbl, &chip->work_space.session_tbl,
 	       sizeof(space->session_tbl));
-	memcpy(space->context_buf, chip->work_space.context_buf, PAGE_SIZE);
-	memcpy(space->session_buf, chip->work_space.session_buf, PAGE_SIZE);
+	memcpy(space->context_buf, chip->work_space.context_buf,
+	       space->buf_size);
+	memcpy(space->session_buf, chip->work_space.session_buf,
+	       space->buf_size);

 	return 0;
 }
@@ -22,7 +22,7 @@ static int tpmrm_open(struct inode *inode, struct file *file)
 	if (priv == NULL)
 		return -ENOMEM;

-	rc = tpm2_init_space(&priv->space);
+	rc = tpm2_init_space(&priv->space, TPM2_SPACE_BUFFER_SIZE);
 	if (rc) {
 		kfree(priv);
 		return -ENOMEM;
[hunks from "EDAC/ie31200: Fallback if host bridge device is already initialized"]

@@ -147,6 +147,8 @@
 	(n << (28 + (2 * skl) - PAGE_SHIFT))

 static int nr_channels;
+static struct pci_dev *mci_pdev;
+static int ie31200_registered = 1;

 struct ie31200_priv {
 	void __iomem *window;

@@ -518,12 +520,16 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
 static int ie31200_init_one(struct pci_dev *pdev,
 			    const struct pci_device_id *ent)
 {
-	edac_dbg(0, "MC:\n");
+	int rc;
+
+	edac_dbg(0, "MC:\n");
 	if (pci_enable_device(pdev) < 0)
 		return -EIO;
+	rc = ie31200_probe1(pdev, ent->driver_data);
+	if (rc == 0 && !mci_pdev)
+		mci_pdev = pci_dev_get(pdev);

-	return ie31200_probe1(pdev, ent->driver_data);
+	return rc;
 }

 static void ie31200_remove_one(struct pci_dev *pdev)

@@ -532,6 +538,8 @@ static void ie31200_remove_one(struct pci_dev *pdev)
 	struct ie31200_priv *priv;

 	edac_dbg(0, "\n");
+	pci_dev_put(mci_pdev);
+	mci_pdev = NULL;
 	mci = edac_mc_del_mc(&pdev->dev);
 	if (!mci)
 		return;

@@ -583,17 +591,53 @@ static struct pci_driver ie31200_driver = {

 static int __init ie31200_init(void)
 {
+	int pci_rc, i;
+
 	edac_dbg(3, "MC:\n");
 	/* Ensure that the OPSTATE is set correctly for POLL or NMI */
 	opstate_init();

-	return pci_register_driver(&ie31200_driver);
+	pci_rc = pci_register_driver(&ie31200_driver);
+	if (pci_rc < 0)
+		goto fail0;
+
+	if (!mci_pdev) {
+		ie31200_registered = 0;
+		for (i = 0; ie31200_pci_tbl[i].vendor != 0; i++) {
+			mci_pdev = pci_get_device(ie31200_pci_tbl[i].vendor,
+						  ie31200_pci_tbl[i].device,
+						  NULL);
+			if (mci_pdev)
+				break;
+		}
+		if (!mci_pdev) {
+			edac_dbg(0, "ie31200 pci_get_device fail\n");
+			pci_rc = -ENODEV;
+			goto fail1;
+		}
+		pci_rc = ie31200_init_one(mci_pdev, &ie31200_pci_tbl[i]);
+		if (pci_rc < 0) {
+			edac_dbg(0, "ie31200 init fail\n");
+			pci_rc = -ENODEV;
+			goto fail1;
+		}
+	}
+	return 0;
+
+fail1:
+	pci_unregister_driver(&ie31200_driver);
+fail0:
+	pci_dev_put(mci_pdev);
+
+	return pci_rc;
 }

 static void __exit ie31200_exit(void)
 {
 	edac_dbg(3, "MC:\n");
 	pci_unregister_driver(&ie31200_driver);
+	if (!ie31200_registered)
+		ie31200_remove_one(mci_pdev);
 }

 module_init(ie31200_init);
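The rewritten `ie31200_init()` uses the kernel's layered-goto unwind: each failure label undoes exactly the steps that already succeeded, in reverse order, and the final cleanup (`pci_dev_put`) is safe even when no reference is held. A stand-alone sketch of the shape with fake steps (all names here are illustrative):

```c
#include <stdbool.h>

static bool driver_registered, dev_referenced;

static int  register_driver(bool ok) { driver_registered = ok; return ok ? 0 : -1; }
static void unregister_driver(void)  { driver_registered = false; }
static int  get_device(bool ok)      { dev_referenced = ok; return ok ? 0 : -1; }
static void put_device(void)         { dev_referenced = false; }

/* Two-step init with layered unwind labels, like ie31200_init(). */
int module_init_sketch(bool step1_ok, bool step2_ok)
{
	int rc;

	rc = register_driver(step1_ok);
	if (rc)
		goto fail0;		/* nothing registered yet */

	rc = get_device(step2_ok);
	if (rc)
		goto fail1;		/* undo the registration */

	return 0;

fail1:
	unregister_driver();
fail0:
	put_device();			/* harmless when no reference is held */
	return rc;
}
```

Falling through `fail1` into `fail0` is the point of the ordering: later labels undo less, so a failure at step N jumps to the label that unwinds steps 1..N-1.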
[hunks from "drm/amdgpu/display: fix ref count leak when pm_runtime_get_sync fails"]

@@ -718,8 +718,10 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)

 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}

 	if (encoder) {

@@ -856,8 +858,10 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)

 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}

 	encoder = amdgpu_connector_best_single_encoder(connector);

@@ -979,8 +983,10 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)

 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}

 	if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {

@@ -1329,8 +1335,10 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)

 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}

 	if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
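Every hunk above adds a `pm_runtime_put_autosuspend()` on the failure path because `pm_runtime_get_sync()` increments the device's usage count even when the resume it triggers fails: the caller owns a reference either way and must drop it. A toy counter model of that contract (the function names echo the runtime-PM API but this is not the real implementation):

```c
static int usage_count;

/* Like pm_runtime_get_sync(): the count is bumped unconditionally,
 * even when the simulated resume fails (resume_err < 0). */
int get_sync(int resume_err)
{
	++usage_count;
	return resume_err;
}

void put_autosuspend(void)
{
	--usage_count;
}

/* Fixed caller: drop the reference on BOTH the error and success paths. */
int detect_fixed(int resume_err)
{
	int r = get_sync(resume_err);

	if (r < 0) {
		put_autosuspend();	/* reference was taken despite the error */
		return -1;
	}
	put_autosuspend();
	return 0;
}
```

The pre-fix code returned early on `r < 0` without the put, so each failed detect leaked one usage count and the device could never runtime-suspend again.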
[hunks from "drm/amdgpu: fix ref count leak in amdgpu_display_crtc_set_config"]

@@ -275,7 +275,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,

 	ret = pm_runtime_get_sync(dev->dev);
 	if (ret < 0)
-		return ret;
+		goto out;

 	ret = drm_crtc_helper_set_config(set, ctx);

@@ -290,7 +290,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
 	   take the current one */
 	if (active && !adev->have_disp_power_ref) {
 		adev->have_disp_power_ref = true;
-		return ret;
+		goto out;
 	}
 	/* if we have no active crtcs, then drop the power ref
 	   we got before */

@@ -299,6 +299,7 @@ int amdgpu_display_crtc_set_config(struct drm_mode_set *set,
 		adev->have_disp_power_ref = false;
 	}

+out:
 	/* drop the power reference we got coming in here */
 	pm_runtime_put_autosuspend(dev->dev);
 	return ret;
[hunk from "drm/amd/display: fix ref count leak in amdgpu_drm_ioctl"]

@@ -1085,11 +1085,12 @@ long amdgpu_drm_ioctl(struct file *filp,
 	dev = file_priv->minor->dev;
 	ret = pm_runtime_get_sync(dev->dev);
 	if (ret < 0)
-		return ret;
+		goto out;

 	ret = drm_ioctl(filp, cmd, arg);

 	pm_runtime_mark_last_busy(dev->dev);
+out:
 	pm_runtime_put_autosuspend(dev->dev);
 	return ret;
 }
[hunks from "drm/amdgpu: Fix buffer overflow in INFO ioctl" and "drm/amdgpu: fix ref count leak in amdgpu_driver_open_kms"]

@@ -524,8 +524,12 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
 		 * in the bitfields */
 		if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
 			se_num = 0xffffffff;
+		else if (se_num >= AMDGPU_GFX_MAX_SE)
+			return -EINVAL;
 		if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
 			sh_num = 0xffffffff;
+		else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE)
+			return -EINVAL;

 		if (info->read_mmr_reg.count > 128)
 			return -EINVAL;

@@ -835,7 +839,7 @@ int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)

 	r = pm_runtime_get_sync(dev->dev);
 	if (r < 0)
-		return r;
+		goto pm_put;

 	fpriv = kzalloc(sizeof(*fpriv), GFP_KERNEL);
 	if (unlikely(!fpriv)) {

@@ -883,6 +887,7 @@ int amdgpu_driver_open_kms(struct drm_device *dev, struct drm_file *file_priv)

 out_suspend:
 	pm_runtime_mark_last_busy(dev->dev);
+pm_put:
 	pm_runtime_put_autosuspend(dev->dev);

 	return r;
@@ -592,8 +592,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
 
 	ret = kobject_init_and_add(dev->kobj_node, &node_type,
 			sys_props.kobj_nodes, "%d", id);
-	if (ret < 0)
+	if (ret < 0) {
+		kobject_put(dev->kobj_node);
 		return ret;
+	}
 
 	dev->kobj_mem = kobject_create_and_add("mem_banks", dev->kobj_node);
 	if (!dev->kobj_mem)
@@ -640,8 +642,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
 			return -ENOMEM;
 		ret = kobject_init_and_add(mem->kobj, &mem_type,
 				dev->kobj_mem, "%d", i);
-		if (ret < 0)
+		if (ret < 0) {
+			kobject_put(mem->kobj);
 			return ret;
+		}
 
 		mem->attr.name = "properties";
 		mem->attr.mode = KFD_SYSFS_FILE_MODE;
@@ -659,8 +663,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
 			return -ENOMEM;
 		ret = kobject_init_and_add(cache->kobj, &cache_type,
 				dev->kobj_cache, "%d", i);
-		if (ret < 0)
+		if (ret < 0) {
+			kobject_put(cache->kobj);
 			return ret;
+		}
 
 		cache->attr.name = "properties";
 		cache->attr.mode = KFD_SYSFS_FILE_MODE;
@@ -678,8 +684,10 @@ static int kfd_build_sysfs_node_entry(struct kfd_topology_device *dev,
 			return -ENOMEM;
 		ret = kobject_init_and_add(iolink->kobj, &iolink_type,
 				dev->kobj_iolink, "%d", i);
-		if (ret < 0)
+		if (ret < 0) {
+			kobject_put(iolink->kobj);
 			return ret;
+		}
 
 		iolink->attr.name = "properties";
 		iolink->attr.mode = KFD_SYSFS_FILE_MODE;
@@ -759,8 +767,10 @@ static int kfd_topology_update_sysfs(void)
 		ret = kobject_init_and_add(sys_props.kobj_topology,
 				&sysprops_type,  &kfd_device->kobj,
 				"topology");
-		if (ret < 0)
+		if (ret < 0) {
+			kobject_put(sys_props.kobj_topology);
 			return ret;
+		}
 
 		sys_props.kobj_nodes = kobject_create_and_add("nodes",
 				sys_props.kobj_topology);

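The kfd_topology and pci_create_slot hunks in this release all apply the same fix pattern: `kobject_init_and_add()` takes a reference on the kobject even when it fails, so the error path must release it with `kobject_put()` rather than just returning. A minimal userspace sketch of that contract (the `mock_*` names are mine, not kernel API):

```c
#include <assert.h>

/* Mimics the kobject refcounting contract: init_and_add grabs a
 * reference before it can fail, so failure still leaves a ref held. */
struct mock_kobject { int refcount; };

static int mock_kobject_init_and_add(struct mock_kobject *kobj, int fail)
{
	kobj->refcount++;		/* reference held even on failure */
	return fail ? -1 : 0;
}

static void mock_kobject_put(struct mock_kobject *kobj)
{
	kobj->refcount--;		/* release; cleanup runs at zero */
}

/* The corrected error path from the patches: put the kobject on failure. */
static int create_node_entry(struct mock_kobject *kobj, int fail)
{
	int ret = mock_kobject_init_and_add(kobj, fail);
	if (ret < 0) {
		mock_kobject_put(kobj);	/* without this, the ref leaks */
		return ret;
	}
	return 0;
}
```

On the failure path the refcount returns to zero; without the added `kobject_put()` it would stay pinned at one forever, which is exactly the leak these hunks fix.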
@@ -362,6 +362,9 @@ int vega10_thermal_get_temperature(struct pp_hwmgr *hwmgr)
 static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
 		struct PP_TemperatureRange *range)
 {
+	struct phm_ppt_v2_information *pp_table_info =
+		(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_tdp_table *tdp_table = pp_table_info->tdp_table;
 	struct amdgpu_device *adev = hwmgr->adev;
 	int low = VEGA10_THERMAL_MINIMUM_ALERT_TEMP *
 			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
@@ -371,8 +374,8 @@ static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
 
 	if (low < range->min)
 		low = range->min;
-	if (high > range->max)
-		high = range->max;
+	if (high > tdp_table->usSoftwareShutdownTemp)
+		high = tdp_table->usSoftwareShutdownTemp;
 
 	if (low > high)
 		return -EINVAL;
@@ -170,6 +170,8 @@ int vega12_thermal_get_temperature(struct pp_hwmgr *hwmgr)
 static int vega12_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
 		struct PP_TemperatureRange *range)
 {
+	struct phm_ppt_v3_information *pptable_information =
+		(struct phm_ppt_v3_information *)hwmgr->pptable;
 	struct amdgpu_device *adev = hwmgr->adev;
 	int low = VEGA12_THERMAL_MINIMUM_ALERT_TEMP *
 			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
@@ -179,8 +181,8 @@ static int vega12_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
 
 	if (low < range->min)
 		low = range->min;
-	if (high > range->max)
-		high = range->max;
+	if (high > pptable_information->us_software_shutdown_temp)
+		high = pptable_information->us_software_shutdown_temp;
 
 	if (low > high)
 		return -EINVAL;
@@ -221,7 +221,7 @@ int adreno_hw_init(struct msm_gpu *gpu)
 		ring->next = ring->start;
 
 		/* reset completed fence seqno: */
-		ring->memptrs->fence = ring->seqno;
+		ring->memptrs->fence = ring->fctx->completed_fence;
 		ring->memptrs->rptr = 0;
 	}
 

@@ -1920,8 +1920,10 @@ nv50_disp_atomic_commit(struct drm_device *dev,
 	int ret, i;
 
 	ret = pm_runtime_get_sync(dev->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put_autosuspend(dev->dev);
 		return ret;
+	}
 
 	ret = drm_atomic_helper_setup_commit(state, nonblock);
 	if (ret)
@@ -551,8 +551,10 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
 		pm_runtime_get_noresume(dev->dev);
 	} else {
 		ret = pm_runtime_get_sync(dev->dev);
-		if (ret < 0 && ret != -EACCES)
+		if (ret < 0 && ret != -EACCES) {
+			pm_runtime_put_autosuspend(dev->dev);
 			return conn_status;
+		}
 	}
 
 	nv_encoder = nouveau_connector_ddc_detect(connector);
@@ -189,8 +189,10 @@ nouveau_fbcon_open(struct fb_info *info, int user)
 	struct nouveau_fbdev *fbcon = info->par;
 	struct nouveau_drm *drm = nouveau_drm(fbcon->helper.dev);
 	int ret = pm_runtime_get_sync(drm->dev->dev);
-	if (ret < 0 && ret != -EACCES)
+	if (ret < 0 && ret != -EACCES) {
+		pm_runtime_put(drm->dev->dev);
 		return ret;
+	}
 	return 0;
 }
 

@@ -882,8 +882,10 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
 
 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}
 
 	if (encoder) {
@@ -1028,8 +1030,10 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
 
 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}
 
 	encoder = radeon_best_single_encoder(connector);
@@ -1166,8 +1170,10 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
 
 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}
 
 	encoder = radeon_best_single_encoder(connector);
@@ -1250,8 +1256,10 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
 
 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}
 
 	if (radeon_connector->detected_hpd_without_ddc) {
@@ -1665,8 +1673,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
 
 	if (!drm_kms_helper_is_poll_worker()) {
 		r = pm_runtime_get_sync(connector->dev->dev);
-		if (r < 0)
+		if (r < 0) {
+			pm_runtime_put_autosuspend(connector->dev->dev);
 			return connector_status_disconnected;
+		}
 	}
 
 	if (!force && radeon_check_hpd_status_unchanged(connector)) {

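The amdgpu, nouveau, and radeon hunks above all fix the same leak: `pm_runtime_get_sync()` increments the device's usage counter even when resume fails, so every error return still owes a matching put. A minimal userspace sketch of the counter semantics (all `mock_*` names are mine, not the runtime-PM API):

```c
#include <assert.h>

static int pm_usage_count;
static int pm_resume_fails;	/* force the resume step to fail in tests */

/* Mimics pm_runtime_get_sync(): the count is bumped before resume
 * is attempted, so it stays bumped even on error. */
static int mock_pm_runtime_get_sync(void)
{
	pm_usage_count++;
	return pm_resume_fails ? -13 : 0;	/* -EACCES-like failure */
}

static void mock_pm_runtime_put_autosuspend(void)
{
	pm_usage_count--;
}

/* The corrected detect() shape from the patches: put on the error path
 * too, so the usage count is balanced either way. */
static int detect(void)
{
	int r = mock_pm_runtime_get_sync();
	if (r < 0) {
		mock_pm_runtime_put_autosuspend();	/* balance failed get */
		return r;
	}
	/* ... probe work ... */
	mock_pm_runtime_put_autosuspend();
	return 0;
}
```

Before these fixes, the early `return` on the error branch skipped the put, leaving the usage count permanently elevated and blocking runtime suspend of the GPU.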
@@ -756,6 +756,7 @@
 #define USB_DEVICE_ID_LOGITECH_G27_WHEEL	0xc29b
 #define USB_DEVICE_ID_LOGITECH_WII_WHEEL	0xc29c
 #define USB_DEVICE_ID_LOGITECH_ELITE_KBD	0xc30a
+#define USB_DEVICE_ID_LOGITECH_GROUP_AUDIO	0x0882
 #define USB_DEVICE_ID_S510_RECEIVER	0xc50c
 #define USB_DEVICE_ID_S510_RECEIVER_2	0xc517
 #define USB_DEVICE_ID_LOGITECH_CORDLESS_DESKTOP_LX500	0xc512
@@ -179,6 +179,7 @@ static const struct hid_device_id hid_quirks[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP_LTD2, USB_DEVICE_ID_SMARTJOY_DUAL_PLUS), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_QUAD_USB_JOYPAD), HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_XIN_MO, USB_DEVICE_ID_XIN_MO_DUAL_ARCADE), HID_QUIRK_MULTI_INPUT },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_GROUP_AUDIO), HID_QUIRK_NOGET },
 
 	{ 0 }
 };
@@ -444,6 +444,19 @@ static int i2c_hid_set_power(struct i2c_client *client, int power_state)
 		dev_err(&client->dev, "failed to change power setting.\n");
 
 set_pwr_exit:
+
+	/*
+	 * The HID over I2C specification states that if a DEVICE needs time
+	 * after the PWR_ON request, it should utilise CLOCK stretching.
+	 * However, it has been observered that the Windows driver provides a
+	 * 1ms sleep between the PWR_ON and RESET requests.
+	 * According to Goodix Windows even waits 60 ms after (other?)
+	 * PWR_ON requests. Testing has confirmed that several devices
+	 * will not work properly without a delay after a PWR_ON request.
+	 */
+	if (!ret && power_state == I2C_HID_PWR_ON)
+		msleep(60);
+
 	return ret;
 }
 
@@ -465,15 +478,6 @@ static int i2c_hid_hwreset(struct i2c_client *client)
 	if (ret)
 		goto out_unlock;
 
-	/*
-	 * The HID over I2C specification states that if a DEVICE needs time
-	 * after the PWR_ON request, it should utilise CLOCK stretching.
-	 * However, it has been observered that the Windows driver provides a
-	 * 1ms sleep between the PWR_ON and RESET requests and that some devices
-	 * rely on this.
-	 */
-	usleep_range(1000, 5000);
-
 	i2c_hid_dbg(ihid, "resetting...\n");
 
 	ret = i2c_hid_command(client, &hid_reset_cmd, NULL, 0);

@@ -532,12 +532,16 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
 
 	switch (cmd) {
 	case HIDIOCGUSAGE:
+		if (uref->usage_index >= field->report_count)
+			goto inval;
 		uref->value = field->value[uref->usage_index];
 		if (copy_to_user(user_arg, uref, sizeof(*uref)))
 			goto fault;
 		goto goodreturn;
 
 	case HIDIOCSUSAGE:
+		if (uref->usage_index >= field->report_count)
+			goto inval;
 		field->value[uref->usage_index] = uref->value;
 		goto goodreturn;
 
@@ -594,6 +594,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
 	/* master sent stop */
 	if (ssr_filtered & SSR) {
 		i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+		rcar_i2c_write(priv, ICSCR, SIE | SDBS); /* clear our NACK */
 		rcar_i2c_write(priv, ICSIER, SAR);
 		rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
 	}

@@ -901,7 +901,9 @@ iova_magazine_free_pfns(struct iova_magazine *mag, struct iova_domain *iovad)
 	for (i = 0 ; i < mag->size; ++i) {
 		struct iova *iova = private_find_iova(iovad, mag->pfns[i]);
 
-		BUG_ON(!iova);
+		if (WARN_ON(!iova))
+			continue;
+
 		private_free_iova(iovad, iova);
 	}
 
@@ -382,6 +382,16 @@ static void stm32_irq_ack(struct irq_data *d)
 	irq_gc_unlock(gc);
 }
 
+/* directly set the target bit without reading first. */
+static inline void stm32_exti_write_bit(struct irq_data *d, u32 reg)
+{
+	struct stm32_exti_chip_data *chip_data = irq_data_get_irq_chip_data(d);
+	void __iomem *base = chip_data->host_data->base;
+	u32 val = BIT(d->hwirq % IRQS_PER_BANK);
+
+	writel_relaxed(val, base + reg);
+}
+
 static inline u32 stm32_exti_set_bit(struct irq_data *d, u32 reg)
 {
 	struct stm32_exti_chip_data *chip_data = irq_data_get_irq_chip_data(d);
@@ -415,9 +425,9 @@ static void stm32_exti_h_eoi(struct irq_data *d)
 
 	raw_spin_lock(&chip_data->rlock);
 
-	stm32_exti_set_bit(d, stm32_bank->rpr_ofst);
+	stm32_exti_write_bit(d, stm32_bank->rpr_ofst);
 	if (stm32_bank->fpr_ofst != UNDEF_REG)
-		stm32_exti_set_bit(d, stm32_bank->fpr_ofst);
+		stm32_exti_write_bit(d, stm32_bank->fpr_ofst);
 
 	raw_spin_unlock(&chip_data->rlock);
 

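The stm32-exti hunk above replaces a read-modify-write helper with a direct write when acknowledging pending bits. On a write-one-to-clear register, the RMW form reads back every currently pending bit and writes them all, silently acknowledging interrupts it was never asked to touch. A small userspace sketch of the difference, with the register modeled as a plain variable (the names here are mine, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t pending;		/* w1c "pending" register */

static void reg_write(uint32_t val)
{
	pending &= ~val;		/* writing a 1 clears that bit */
}

static uint32_t reg_read(void)
{
	return pending;
}

/* Buggy pattern (set_bit-style RMW): reads the register, ORs in the
 * target bit, writes everything back - clearing ALL pending bits. */
static void ack_rmw(uint32_t bit)
{
	reg_write(reg_read() | bit);
}

/* Fixed pattern (write_bit-style): write only the target bit, so
 * other IRQs stay pending and are not lost. */
static void ack_write(uint32_t bit)
{
	reg_write(bit);
}
```

With bits 0 and 2 pending, acknowledging bit 0 via the RMW path wipes bit 2 as well, while the direct write leaves it pending for its own handler.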
@@ -147,7 +147,13 @@ static long cec_adap_g_log_addrs(struct cec_adapter *adap,
 	struct cec_log_addrs log_addrs;
 
 	mutex_lock(&adap->lock);
-	log_addrs = adap->log_addrs;
+	/*
+	 * We use memcpy here instead of assignment since there is a
+	 * hole at the end of struct cec_log_addrs that an assignment
+	 * might ignore. So when we do copy_to_user() we could leak
+	 * one byte of memory.
+	 */
+	memcpy(&log_addrs, &adap->log_addrs, sizeof(log_addrs));
 	if (!adap->is_configured)
 		memset(log_addrs.log_addr, CEC_LOG_ADDR_INVALID,
 		       sizeof(log_addrs.log_addr));
@@ -424,14 +424,15 @@ static void debiirq(unsigned long cookie)
 	case DATA_CI_GET:
 	{
 		u8 *data = av7110->debi_virt;
+		u8 data_0 = data[0];
 
-		if ((data[0] < 2) && data[2] == 0xff) {
+		if (data_0 < 2 && data[2] == 0xff) {
 			int flags = 0;
 			if (data[5] > 0)
 				flags |= CA_CI_MODULE_PRESENT;
 			if (data[5] > 5)
 				flags |= CA_CI_MODULE_READY;
-			av7110->ci_slot[data[0]].flags = flags;
+			av7110->ci_slot[data_0].flags = flags;
 		} else
 			ci_get_data(&av7110->ci_rbuffer,
 				    av7110->debi_virt,

@@ -87,13 +87,8 @@ static int gpio_ir_tx(struct rc_dev *dev, unsigned int *txbuf,
 			// space
 			edge = ktime_add_us(edge, txbuf[i]);
 			delta = ktime_us_delta(edge, ktime_get());
-			if (delta > 10) {
-				spin_unlock_irqrestore(&gpio_ir->lock, flags);
-				usleep_range(delta, delta + 10);
-				spin_lock_irqsave(&gpio_ir->lock, flags);
-			} else if (delta > 0) {
+			if (delta > 0)
 				udelay(delta);
-			}
 		} else {
 			// pulse
 			ktime_t last = ktime_add_us(edge, txbuf[i]);
@@ -176,6 +176,9 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
 	{ PCI_VDEVICE(INTEL, 0x1ac4), (kernel_ulong_t)&bxt_info },
 	{ PCI_VDEVICE(INTEL, 0x1ac6), (kernel_ulong_t)&bxt_info },
 	{ PCI_VDEVICE(INTEL, 0x1aee), (kernel_ulong_t)&bxt_uart_info },
+	/* EBG */
+	{ PCI_VDEVICE(INTEL, 0x1bad), (kernel_ulong_t)&bxt_uart_info },
+	{ PCI_VDEVICE(INTEL, 0x1bae), (kernel_ulong_t)&bxt_uart_info },
 	/* GLK */
 	{ PCI_VDEVICE(INTEL, 0x31ac), (kernel_ulong_t)&glk_i2c_info },
 	{ PCI_VDEVICE(INTEL, 0x31ae), (kernel_ulong_t)&glk_i2c_info },

@@ -2736,7 +2736,7 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter,
 	}
 
 	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->tx_stats.missed_tx = missed_tx;
+	tx_ring->tx_stats.missed_tx += missed_tx;
 	u64_stats_update_end(&tx_ring->syncp);
 
 	return rc;
@@ -3544,6 +3544,9 @@ static void ena_keep_alive_wd(void *adapter_data,
 	rx_drops = ((u64)desc->rx_drops_high << 32) | desc->rx_drops_low;
 
 	u64_stats_update_begin(&adapter->syncp);
+	/* These stats are accumulated by the device, so the counters indicate
+	 * all drops since last reset.
+	 */
 	adapter->dev_stats.rx_drops = rx_drops;
 	u64_stats_update_end(&adapter->syncp);
 }
@@ -844,8 +844,10 @@ static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev)
 				continue;
 
 			err = gfar_parse_group(child, priv, model);
-			if (err)
+			if (err) {
+				of_node_put(child);
 				goto err_grp_init;
+			}
 		}
 	} else { /* SQ_SG_MODE */
 		err = gfar_parse_group(np, priv, model);
@@ -192,7 +192,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 	}
 
 	/* alloc the udl from per cpu ddp pool */
-	ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_KERNEL, &ddp->udp);
+	ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_ATOMIC, &ddp->udp);
 	if (!ddp->udl) {
 		e_err(drv, "failed allocated ddp context\n");
 		goto out_noddp_unmap;

@@ -177,12 +177,21 @@ static void ipvlan_port_destroy(struct net_device *dev)
 	kfree(port);
 }
 
+#define IPVLAN_ALWAYS_ON_OFLOADS \
+	(NETIF_F_SG | NETIF_F_HW_CSUM | \
+	 NETIF_F_GSO_ROBUST | NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL)
+
+#define IPVLAN_ALWAYS_ON \
+	(IPVLAN_ALWAYS_ON_OFLOADS | NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED)
+
 #define IPVLAN_FEATURES \
-	(NETIF_F_SG | NETIF_F_CSUM_MASK | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
+	(NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \
 	 NETIF_F_GSO | NETIF_F_TSO | NETIF_F_GSO_ROBUST | \
 	 NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_GRO | NETIF_F_RXCSUM | \
 	 NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)
 
+	/* NETIF_F_GSO_ENCAP_ALL NETIF_F_GSO_SOFTWARE Newly added */
+
 #define IPVLAN_STATE_MASK \
 	((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT))
 
@@ -196,7 +205,9 @@ static int ipvlan_init(struct net_device *dev)
 	dev->state = (dev->state & ~IPVLAN_STATE_MASK) |
 		     (phy_dev->state & IPVLAN_STATE_MASK);
 	dev->features = phy_dev->features & IPVLAN_FEATURES;
-	dev->features |= NETIF_F_LLTX | NETIF_F_VLAN_CHALLENGED;
+	dev->features |= IPVLAN_ALWAYS_ON;
+	dev->vlan_features = phy_dev->vlan_features & IPVLAN_FEATURES;
+	dev->vlan_features |= IPVLAN_ALWAYS_ON_OFLOADS;
 	dev->gso_max_size = phy_dev->gso_max_size;
 	dev->gso_max_segs = phy_dev->gso_max_segs;
 	dev->hard_header_len = phy_dev->hard_header_len;
@@ -297,7 +308,14 @@ static netdev_features_t ipvlan_fix_features(struct net_device *dev,
 {
 	struct ipvl_dev *ipvlan = netdev_priv(dev);
 
-	return features & (ipvlan->sfeatures | ~IPVLAN_FEATURES);
+	features |= NETIF_F_ALL_FOR_ALL;
+	features &= (ipvlan->sfeatures | ~IPVLAN_FEATURES);
+	features = netdev_increment_features(ipvlan->phy_dev->features,
+					     features, features);
+	features |= IPVLAN_ALWAYS_ON;
+	features &= (IPVLAN_FEATURES | IPVLAN_ALWAYS_ON);
+
+	return features;
 }
 
 static void ipvlan_change_rx_flags(struct net_device *dev, int change)
@@ -802,10 +820,9 @@ static int ipvlan_device_event(struct notifier_block *unused,
 
 	case NETDEV_FEAT_CHANGE:
 		list_for_each_entry(ipvlan, &port->ipvlans, pnode) {
-			ipvlan->dev->features = dev->features & IPVLAN_FEATURES;
 			ipvlan->dev->gso_max_size = dev->gso_max_size;
 			ipvlan->dev->gso_max_segs = dev->gso_max_segs;
-			netdev_features_change(ipvlan->dev);
+			netdev_update_features(ipvlan->dev);
 		}
 		break;
 

@@ -1230,6 +1230,9 @@ static void macvlan_port_destroy(struct net_device *dev)
 static int macvlan_validate(struct nlattr *tb[], struct nlattr *data[],
 			    struct netlink_ext_ack *extack)
 {
+	struct nlattr *nla, *head;
+	int rem, len;
+
 	if (tb[IFLA_ADDRESS]) {
 		if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN)
 			return -EINVAL;
@@ -1277,6 +1280,20 @@ static int macvlan_validate(struct nlattr *tb[], struct nlattr *data[],
 			return -EADDRNOTAVAIL;
 	}
 
+	if (data[IFLA_MACVLAN_MACADDR_DATA]) {
+		head = nla_data(data[IFLA_MACVLAN_MACADDR_DATA]);
+		len = nla_len(data[IFLA_MACVLAN_MACADDR_DATA]);
+
+		nla_for_each_attr(nla, head, len, rem) {
+			if (nla_type(nla) != IFLA_MACVLAN_MACADDR ||
+			    nla_len(nla) != ETH_ALEN)
+				return -EINVAL;
+
+			if (!is_valid_ether_addr(nla_data(nla)))
+				return -EADDRNOTAVAIL;
+		}
+	}
+
 	if (data[IFLA_MACVLAN_MACADDR_COUNT])
 		return -EINVAL;
 
@@ -1333,10 +1350,6 @@ static int macvlan_changelink_sources(struct macvlan_dev *vlan, u32 mode,
 		len = nla_len(data[IFLA_MACVLAN_MACADDR_DATA]);
 
 		nla_for_each_attr(nla, head, len, rem) {
-			if (nla_type(nla) != IFLA_MACVLAN_MACADDR ||
-			    nla_len(nla) != ETH_ALEN)
-				continue;
-
 			addr = nla_data(nla);
 			ret = macvlan_hash_add_source(vlan, addr);
 			if (ret)

@@ -753,7 +753,7 @@ ath10k_rx_desc_get_l3_pad_bytes(struct ath10k_hw_params *hw,
 
 #define TARGET_10_4_TX_DBG_LOG_SIZE		1024
 #define TARGET_10_4_NUM_WDS_ENTRIES		32
-#define TARGET_10_4_DMA_BURST_SIZE		0
+#define TARGET_10_4_DMA_BURST_SIZE		1
 #define TARGET_10_4_MAC_AGGR_DELIM		0
 #define TARGET_10_4_RX_SKIP_DEFRAG_TIMEOUT_DUP_DETECTION_CHECK 1
 #define TARGET_10_4_VOW_CONFIG			0
@@ -739,8 +739,11 @@ static int _rtl_usb_receive(struct ieee80211_hw *hw)
 
 		usb_anchor_urb(urb, &rtlusb->rx_submitted);
 		err = usb_submit_urb(urb, GFP_KERNEL);
-		if (err)
+		if (err) {
+			usb_unanchor_urb(urb);
+			usb_free_urb(urb);
 			goto err_out;
+		}
 		usb_free_urb(urb);
 	}
 	return 0;
@@ -1716,7 +1716,7 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
 	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.cmddma)) {
 		dev_err(ctrl->dev,
 			"FCP Op failed - cmdiu dma mapping failed.\n");
-		ret = EFAULT;
+		ret = -EFAULT;
 		goto out_on_error;
 	}
 
@@ -1726,7 +1726,7 @@ __nvme_fc_init_request(struct nvme_fc_ctrl *ctrl,
 	if (fc_dma_mapping_error(ctrl->lport->dev, op->fcp_req.rspdma)) {
 		dev_err(ctrl->dev,
 			"FCP Op failed - rspiu dma mapping failed.\n");
-		ret = EFAULT;
+		ret = -EFAULT;
 	}
 
 	atomic_set(&op->state, FCPOP_STATE_IDLE);

@@ -303,13 +303,16 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
 	slot_name = make_slot_name(name);
 	if (!slot_name) {
 		err = -ENOMEM;
+		kfree(slot);
 		goto err;
 	}
 
 	err = kobject_init_and_add(&slot->kobj, &pci_slot_ktype, NULL,
 				   "%s", slot_name);
-	if (err)
+	if (err) {
+		kobject_put(&slot->kobj);
 		goto err;
+	}
 
 	INIT_LIST_HEAD(&slot->list);
 	list_add(&slot->list, &parent->slots);
@@ -328,7 +331,6 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
 	mutex_unlock(&pci_slot_mutex);
 	return slot;
 err:
-	kfree(slot);
 	slot = ERR_PTR(err);
 	goto out;
 }
@@ -615,6 +615,11 @@ static int slow_eval_known_fn(struct subchannel *sch, void *data)
 		rc = css_evaluate_known_subchannel(sch, 1);
 		if (rc == -EAGAIN)
 			css_schedule_eval(sch->schid);
+		/*
+		 * The loop might take long time for platforms with lots of
+		 * known devices. Allow scheduling here.
+		 */
+		cond_resched();
 	}
 	return 0;
 }

@@ -267,9 +267,9 @@ static void fcoe_sysfs_fcf_del(struct fcoe_fcf *new)
 		WARN_ON(!fcf_dev);
 		new->fcf_dev = NULL;
 		fcoe_fcf_device_delete(fcf_dev);
-		kfree(new);
 		mutex_unlock(&cdev->lock);
 	}
+	kfree(new);
 }
 
 /**

@@ -653,27 +653,16 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
 		    vport->port_state < LPFC_VPORT_READY)
 			return -EAGAIN;
 	}
 
 	/*
-	 * This is a bit of a mess.  We want to ensure the shost doesn't get
-	 * torn down until we're done with the embedded lpfc_vport structure.
-	 *
-	 * Beyond holding a reference for this function, we also need a
-	 * reference for outstanding I/O requests we schedule during delete
-	 * processing.  But once we scsi_remove_host() we can no longer obtain
-	 * a reference through scsi_host_get().
-	 *
-	 * So we take two references here.  We release one reference at the
-	 * bottom of the function -- after delinking the vport.  And we
-	 * release the other at the completion of the unreg_vpi that get's
-	 * initiated after we've disposed of all other resources associated
-	 * with the port.
+	 * Take early refcount for outstanding I/O requests we schedule during
+	 * delete processing for unreg_vpi.  Always keep this before
+	 * scsi_remove_host() as we can no longer obtain a reference through
+	 * scsi_host_get() after scsi_host_remove as shost is set to SHOST_DEL.
 	 */
 	if (!scsi_host_get(shost))
 		return VPORT_INVAL;
-	if (!scsi_host_get(shost)) {
-		scsi_host_put(shost);
-		return VPORT_INVAL;
-	}
 
 	lpfc_free_sysfs_attr(vport);
 
 	lpfc_debugfs_terminate(vport);
@@ -820,8 +809,9 @@ lpfc_vport_delete(struct fc_vport *fc_vport)
 		if (!(vport->vpi_state & LPFC_VPI_REGISTERED) ||
 		    lpfc_mbx_unreg_vpi(vport))
 			scsi_host_put(shost);
-	} else
+	} else {
 		scsi_host_put(shost);
+	}
 
 	lpfc_free_vpi(phba, vport->vpi);
 	vport->work_port_events = 0;

@@ -329,14 +329,6 @@ qla2x00_mailbox_command(scsi_qla_host_t *vha, mbx_cmd_t *mcp)
 			if (time_after(jiffies, wait_time))
 				break;
 
-			/*
-			 * Check if it's UNLOADING, cause we cannot poll in
-			 * this case, or else a NULL pointer dereference
-			 * is triggered.
-			 */
-			if (unlikely(test_bit(UNLOADING, &base_vha->dpc_flags)))
-				return QLA_FUNCTION_TIMEOUT;
-
 			/* Check for pending interrupts. */
 			qla2x00_poll(ha->rsp_q_map[0]);
 
@@ -477,6 +477,11 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
 	struct nvme_private *priv = fd->private;
 	struct qla_nvme_rport *qla_rport = rport->private;
 
+	if (!priv) {
+		/* nvme association has been torn down */
+		return rval;
+	}
+
 	fcport = qla_rport->fcport;
 
 	vha = fcport->vha;
@@ -1997,6 +1997,11 @@ qla2x00_iospace_config(struct qla_hw_data *ha)
 	/* Determine queue resources */
 	ha->max_req_queues = ha->max_rsp_queues = 1;
 	ha->msix_count = QLA_BASE_VECTORS;
+
+	/* Check if FW supports MQ or not */
+	if (!(ha->fw_attributes & BIT_6))
+		goto mqiobase_exit;
+
 	if (!ql2xmqsupport || !ql2xnvmeenable ||
 	    (!IS_QLA25XX(ha) && !IS_QLA81XX(ha)))
 		goto mqiobase_exit;
@@ -3172,7 +3172,7 @@ static int iscsi_set_flashnode_param(struct iscsi_transport *transport,
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.set_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_set_fnode;
 	}
 
 	idx = ev->u.set_flashnode.flashnode_idx;

@@ -1556,6 +1556,7 @@ static void ufshcd_ungate_work(struct work_struct *work)
 int ufshcd_hold(struct ufs_hba *hba, bool async)
 {
 	int rc = 0;
+	bool flush_result;
 	unsigned long flags;
 
 	if (!ufshcd_is_clkgating_allowed(hba))
@@ -1587,7 +1588,9 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 			break;
 		}
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
-		flush_work(&hba->clk_gating.ungate_work);
+		flush_result = flush_work(&hba->clk_gating.ungate_work);
+		if (hba->clk_gating.is_suspended && !flush_result)
+			goto out;
 		spin_lock_irqsave(hba->host->host_lock, flags);
 		goto start;
 	}
@@ -5656,7 +5659,7 @@ static void ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
  */
 static irqreturn_t ufshcd_intr(int irq, void *__hba)
 {
-	u32 intr_status, enabled_intr_status;
+	u32 intr_status, enabled_intr_status = 0;
 	irqreturn_t retval = IRQ_NONE;
 	struct ufs_hba *hba = __hba;
 	int retries = hba->nutrs;
@@ -5670,7 +5673,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
 	 * read, make sure we handle them by checking the interrupt status
 	 * again in a loop until we process all of the reqs before returning.
 	 */
-	do {
+	while (intr_status && retries--) {
 		enabled_intr_status =
 			intr_status & ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
 		if (intr_status)
@@ -5681,7 +5684,7 @@ static irqreturn_t ufshcd_intr(int irq, void *__hba)
 		}
 
 		intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
-	} while (intr_status && --retries);
+	}
 
 	spin_unlock(hba->host->host_lock);
 	return retval;
@@ -5981,7 +5984,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 		/* command completed already */
 		dev_err(hba->dev, "%s: cmd at tag %d successfully cleared from DB.\n",
 			__func__, tag);
-		goto out;
+		goto cleanup;
 	} else {
 		dev_err(hba->dev,
 			"%s: no response from device. tag = %d, err %d\n",
@@ -6015,6 +6018,7 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
 		goto out;
 	}
 
+cleanup:
 	scsi_dma_unmap(cmd);
 
 	spin_lock_irqsave(host->host_lock, flags);

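The `ufshcd_intr()` hunks above convert a `do { } while` loop into a plain `while` loop. The difference matters for a spurious interrupt: a `do`/`while` body always runs at least once, even when no status bit is pending (and, before the fix, with `enabled_intr_status` uninitialized). A tiny userspace sketch of the loop shape (names are mine, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

static int body_runs;	/* counts how often the handler body executed */

static void handle(uint32_t status)
{
	(void)status;
	body_runs++;
}

/* The fixed loop shape: the body is entered only while status bits are
 * pending and retries remain, so a spurious call does nothing. */
static void intr_while(uint32_t intr_status, int retries)
{
	uint32_t enabled = 0;		/* initialized, as in the fix */

	while (intr_status && retries--) {
		enabled = intr_status;	/* mask against the enable register */
		handle(enabled);
		intr_status = 0;	/* simulate re-reading the status reg */
	}
}
```

With `intr_status == 0` the body never runs; the old `do`/`while` form would have executed it once regardless.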
@@ -254,7 +254,8 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz)
 {
 	u32 div, mbrdiv;
 
-	div = DIV_ROUND_UP(spi->clk_rate, speed_hz);
+	/* Ensure spi->clk_rate is even */
+	div = DIV_ROUND_UP(spi->clk_rate & ~0x1, speed_hz);
 
 	/*
 	 * SPI framework set xfer->speed_hz to master->max_speed_hz if
@@ -1231,7 +1231,14 @@ static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
 
 		struct tcmu_cmd_entry *entry = (void *) mb + CMDR_OFF + udev->cmdr_last_cleaned;
 
-		tcmu_flush_dcache_range(entry, sizeof(*entry));
+		/*
+		 * Flush max. up to end of cmd ring since current entry might
+		 * be a padding that is shorter than sizeof(*entry)
+		 */
+		size_t ring_left = head_to_end(udev->cmdr_last_cleaned,
+					       udev->cmdr_size);
+		tcmu_flush_dcache_range(entry, ring_left < sizeof(*entry) ?
+					ring_left : sizeof(*entry));
 
 		if (tcmu_hdr_get_op(entry->hdr.len_op) == TCMU_OP_PAD) {
 			UPDATE_HEAD(udev->cmdr_last_cleaned,

@@ -638,6 +638,24 @@ static const struct exar8250_board pbn_exar_XR17V35x = {
 	.exit		= pci_xr17v35x_exit,
 };
 
+static const struct exar8250_board pbn_fastcom35x_2 = {
+	.num_ports	= 2,
+	.setup		= pci_xr17v35x_setup,
+	.exit		= pci_xr17v35x_exit,
+};
+
+static const struct exar8250_board pbn_fastcom35x_4 = {
+	.num_ports	= 4,
+	.setup		= pci_xr17v35x_setup,
+	.exit		= pci_xr17v35x_exit,
+};
+
+static const struct exar8250_board pbn_fastcom35x_8 = {
+	.num_ports	= 8,
+	.setup		= pci_xr17v35x_setup,
+	.exit		= pci_xr17v35x_exit,
+};
+
 static const struct exar8250_board pbn_exar_XR17V4358 = {
 	.num_ports	= 12,
 	.setup		= pci_xr17v35x_setup,
@@ -708,9 +726,9 @@ static const struct pci_device_id exar_pci_tbl[] = {
 	EXAR_DEVICE(EXAR, EXAR_XR17V358, pbn_exar_XR17V35x),
 	EXAR_DEVICE(EXAR, EXAR_XR17V4358, pbn_exar_XR17V4358),
 	EXAR_DEVICE(EXAR, EXAR_XR17V8358, pbn_exar_XR17V8358),
-	EXAR_DEVICE(COMMTECH, COMMTECH_4222PCIE, pbn_exar_XR17V35x),
-	EXAR_DEVICE(COMMTECH, COMMTECH_4224PCIE, pbn_exar_XR17V35x),
-	EXAR_DEVICE(COMMTECH, COMMTECH_4228PCIE, pbn_exar_XR17V35x),
+	EXAR_DEVICE(COMMTECH, COMMTECH_4222PCIE, pbn_fastcom35x_2),
+	EXAR_DEVICE(COMMTECH, COMMTECH_4224PCIE, pbn_fastcom35x_4),
+	EXAR_DEVICE(COMMTECH, COMMTECH_4228PCIE, pbn_fastcom35x_8),
 
 	EXAR_DEVICE(COMMTECH, COMMTECH_4222PCI335, pbn_fastcom335_2),
 	EXAR_DEVICE(COMMTECH, COMMTECH_4224PCI335, pbn_fastcom335_4),
@@ -2259,6 +2259,10 @@ int serial8250_do_startup(struct uart_port *port)
 	if (port->irq && !(up->port.flags & UPF_NO_THRE_TEST)) {
 		unsigned char iir1;
 
+		if (port->irqflags & IRQF_SHARED)
+			disable_irq_nosync(port->irq);
+
 		/*
 		 * Test for UARTs that do not reassert THRE when the
 		 * transmitter is idle and the interrupt has already
@@ -2268,8 +2272,6 @@ int serial8250_do_startup(struct uart_port *port)
 		 * allow register changes to become visible.
 		 */
 		spin_lock_irqsave(&port->lock, flags);
-		if (up->port.irqflags & IRQF_SHARED)
-			disable_irq_nosync(port->irq);
 
 		wait_for_xmitr(up, UART_LSR_THRE);
 		serial_port_out_sync(port, UART_IER, UART_IER_THRI);
@@ -2281,9 +2283,10 @@ int serial8250_do_startup(struct uart_port *port)
 		iir = serial_port_in(port, UART_IIR);
 		serial_port_out(port, UART_IER, 0);
 
+		spin_unlock_irqrestore(&port->lock, flags);
+
 		if (port->irqflags & IRQF_SHARED)
 			enable_irq(port->irq);
-		spin_unlock_irqrestore(&port->lock, flags);
 
 		/*
 		 * If the interrupt is not reasserted, or we otherwise

@@ -2252,9 +2252,8 @@ pl011_console_write(struct console *co, const char *s, unsigned int count)
		clk_disable(uap->clk);
}

static void __init
pl011_console_get_options(struct uart_amba_port *uap, int *baud,
			  int *parity, int *bits)
static void pl011_console_get_options(struct uart_amba_port *uap, int *baud,
				      int *parity, int *bits)
{
	if (pl011_read(uap, REG_CR) & UART01x_CR_UARTEN) {
		unsigned int lcr_h, ibrd, fbrd;
@@ -2287,7 +2286,7 @@ pl011_console_get_options(struct uart_amba_port *uap, int *baud,
	}
}

static int __init pl011_console_setup(struct console *co, char *options)
static int pl011_console_setup(struct console *co, char *options)
{
	struct uart_amba_port *uap;
	int baud = 38400;
@@ -2355,8 +2354,8 @@ static int __init pl011_console_setup(struct console *co, char *options)
 *
 * Returns 0 if console matches; otherwise non-zero to use default matching
 */
static int __init pl011_console_match(struct console *co, char *name, int idx,
				      char *options)
static int pl011_console_match(struct console *co, char *name, int idx,
			       char *options)
{
	unsigned char iotype;
	resource_size_t addr;
@@ -2594,7 +2593,7 @@ static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap,

static int pl011_register_port(struct uart_amba_port *uap)
{
	int ret;
	int ret, i;

	/* Ensure interrupts from this UART are masked and cleared */
	pl011_write(0, uap, REG_IMSC);
@@ -2605,6 +2604,9 @@ static int pl011_register_port(struct uart_amba_port *uap)
	if (ret < 0) {
		dev_err(uap->port.dev,
			"Failed to register AMBA-PL011 driver\n");
		for (i = 0; i < ARRAY_SIZE(amba_ports); i++)
			if (amba_ports[i] == uap)
				amba_ports[i] = NULL;
		return ret;
	}
}

@@ -1755,9 +1755,11 @@ static int s3c24xx_serial_init_port(struct s3c24xx_uart_port *ourport,
		ourport->tx_irq = ret + 1;
	}

	ret = platform_get_irq(platdev, 1);
	if (ret > 0)
		ourport->tx_irq = ret;
	if (!s3c24xx_serial_has_interrupt_mask(port)) {
		ret = platform_get_irq(platdev, 1);
		if (ret > 0)
			ourport->tx_irq = ret;
	}
	/*
	 * DMA is currently supported only on DT platforms, if DMA properties
	 * are specified.

@@ -1199,7 +1199,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
	unsigned int old_rows, old_row_size, first_copied_row;
	unsigned int new_cols, new_rows, new_row_size, new_screen_size;
	unsigned int user;
	unsigned short *newscreen;
	unsigned short *oldscreen, *newscreen;
	struct uni_screen *new_uniscr = NULL;

	WARN_CONSOLE_UNLOCKED();
@@ -1297,10 +1297,11 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
	if (new_scr_end > new_origin)
		scr_memsetw((void *)new_origin, vc->vc_video_erase_char,
			    new_scr_end - new_origin);
	kfree(vc->vc_screenbuf);
	oldscreen = vc->vc_screenbuf;
	vc->vc_screenbuf = newscreen;
	vc->vc_screenbuf_size = new_screen_size;
	set_origin(vc);
	kfree(oldscreen);

	/* do part of a reset_terminal() */
	vc->vc_top = 0;

@@ -893,12 +893,22 @@ int vt_ioctl(struct tty_struct *tty,
			console_lock();
			vcp = vc_cons[i].d;
			if (vcp) {
				int ret;
				int save_scan_lines = vcp->vc_scan_lines;
				int save_font_height = vcp->vc_font.height;

				if (v.v_vlin)
					vcp->vc_scan_lines = v.v_vlin;
				if (v.v_clin)
					vcp->vc_font.height = v.v_clin;
				vcp->vc_resize_user = 1;
				vc_resize(vcp, v.v_cols, v.v_rows);
				ret = vc_resize(vcp, v.v_cols, v.v_rows);
				if (ret) {
					vcp->vc_scan_lines = save_scan_lines;
					vcp->vc_font.height = save_font_height;
					console_unlock();
					return ret;
				}
			}
			console_unlock();
		}

@@ -378,21 +378,19 @@ static void acm_ctrl_irq(struct urb *urb)
	if (current_size < expected_size) {
		/* notification is transmitted fragmented, reassemble */
		if (acm->nb_size < expected_size) {
			if (acm->nb_size) {
				kfree(acm->notification_buffer);
				acm->nb_size = 0;
			}
			u8 *new_buffer;
			alloc_size = roundup_pow_of_two(expected_size);
			/*
			 * kmalloc ensures a valid notification_buffer after a
			 * use of kfree in case the previous allocation was too
			 * small. Final freeing is done on disconnect.
			 */
			acm->notification_buffer =
				kmalloc(alloc_size, GFP_ATOMIC);
			if (!acm->notification_buffer)
			/* Final freeing is done on disconnect. */
			new_buffer = krealloc(acm->notification_buffer,
					      alloc_size, GFP_ATOMIC);
			if (!new_buffer) {
				acm->nb_index = 0;
				goto exit;
			}

			acm->notification_buffer = new_buffer;
			acm->nb_size = alloc_size;
			dr = (struct usb_cdc_notification *)acm->notification_buffer;
		}

		copy_size = min(current_size,

@@ -370,6 +370,10 @@ static const struct usb_device_id usb_quirk_list[] = {
	{ USB_DEVICE(0x0926, 0x0202), .driver_info =
			USB_QUIRK_ENDPOINT_BLACKLIST },

	/* Sound Devices MixPre-D */
	{ USB_DEVICE(0x0926, 0x0208), .driver_info =
			USB_QUIRK_ENDPOINT_BLACKLIST },

	/* Keytouch QWERTY Panel keyboard */
	{ USB_DEVICE(0x0926, 0x3333), .driver_info =
			USB_QUIRK_CONFIG_INTF_STRINGS },
@@ -465,6 +469,8 @@ static const struct usb_device_id usb_quirk_list[] = {

	{ USB_DEVICE(0x2386, 0x3119), .driver_info = USB_QUIRK_NO_LPM },

	{ USB_DEVICE(0x2386, 0x350e), .driver_info = USB_QUIRK_NO_LPM },

	/* DJI CineSSD */
	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },

@@ -509,6 +515,7 @@ static const struct usb_device_id usb_amd_resume_quirk_list[] = {
 */
static const struct usb_device_id usb_endpoint_blacklist[] = {
	{ USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0202, 1), .driver_info = 0x85 },
	{ USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0208, 1), .driver_info = 0x85 },
	{ }
};

@@ -1017,26 +1017,24 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
 * dwc3_prepare_one_trb - setup one TRB from one request
 * @dep: endpoint for which this request is prepared
 * @req: dwc3_request pointer
 * @trb_length: buffer size of the TRB
 * @chain: should this TRB be chained to the next?
 * @node: only for isochronous endpoints. First TRB needs different type.
 */
static void dwc3_prepare_one_trb(struct dwc3_ep *dep,
		struct dwc3_request *req, unsigned chain, unsigned node)
		struct dwc3_request *req, unsigned int trb_length,
		unsigned chain, unsigned node)
{
	struct dwc3_trb *trb;
	unsigned int length;
	dma_addr_t dma;
	unsigned stream_id = req->request.stream_id;
	unsigned short_not_ok = req->request.short_not_ok;
	unsigned no_interrupt = req->request.no_interrupt;

	if (req->request.num_sgs > 0) {
		length = sg_dma_len(req->start_sg);
	if (req->request.num_sgs > 0)
		dma = sg_dma_address(req->start_sg);
	} else {
		length = req->request.length;
	else
		dma = req->request.dma;
	}

	trb = &dep->trb_pool[dep->trb_enqueue];

@@ -1048,7 +1046,7 @@ static void dwc3_prepare_one_trb(struct dwc3_ep *dep,

	req->num_trbs++;

	__dwc3_prepare_one_trb(dep, trb, dma, length, chain, node,
	__dwc3_prepare_one_trb(dep, trb, dma, trb_length, chain, node,
			stream_id, short_not_ok, no_interrupt);
}

@@ -1058,16 +1056,27 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
	struct scatterlist *sg = req->start_sg;
	struct scatterlist *s;
	int i;

	unsigned int length = req->request.length;
	unsigned int remaining = req->request.num_mapped_sgs
		- req->num_queued_sgs;

	/*
	 * If we resume preparing the request, then get the remaining length of
	 * the request and resume where we left off.
	 */
	for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
		length -= sg_dma_len(s);

	for_each_sg(sg, s, remaining, i) {
		unsigned int length = req->request.length;
		unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
		unsigned int rem = length % maxp;
		unsigned int trb_length;
		unsigned chain = true;

		trb_length = min_t(unsigned int, length, sg_dma_len(s));

		length -= trb_length;

		/*
		 * IOMMU driver is coalescing the list of sgs which shares a
		 * page boundary into one and giving it to USB driver. With
@@ -1075,7 +1084,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
		 * sgs passed. So mark the chain bit to false if it is the last
		 * mapped sg.
		 */
		if (i == remaining - 1)
		if ((i == remaining - 1) || !length)
			chain = false;

		if (rem && usb_endpoint_dir_out(dep->endpoint.desc) && !chain) {
@@ -1085,7 +1094,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
			req->needs_extra_trb = true;

			/* prepare normal TRB */
			dwc3_prepare_one_trb(dep, req, true, i);
			dwc3_prepare_one_trb(dep, req, trb_length, true, i);

			/* Now prepare one extra TRB to align transfer size */
			trb = &dep->trb_pool[dep->trb_enqueue];
@@ -1095,8 +1104,37 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
					req->request.stream_id,
					req->request.short_not_ok,
					req->request.no_interrupt);
		} else if (req->request.zero && req->request.length &&
			   !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
			   !rem && !chain) {
			struct dwc3 *dwc = dep->dwc;
			struct dwc3_trb *trb;

			req->needs_extra_trb = true;

			/* Prepare normal TRB */
			dwc3_prepare_one_trb(dep, req, trb_length, true, i);

			/* Prepare one extra TRB to handle ZLP */
			trb = &dep->trb_pool[dep->trb_enqueue];
			req->num_trbs++;
			__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
					!req->direction, 1,
					req->request.stream_id,
					req->request.short_not_ok,
					req->request.no_interrupt);

			/* Prepare one more TRB to handle MPS alignment */
			if (!req->direction) {
				trb = &dep->trb_pool[dep->trb_enqueue];
				req->num_trbs++;
				__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp,
						false, 1, req->request.stream_id,
						req->request.short_not_ok,
						req->request.no_interrupt);
			}
		} else {
			dwc3_prepare_one_trb(dep, req, chain, i);
			dwc3_prepare_one_trb(dep, req, trb_length, chain, i);
		}

		/*
@@ -1111,6 +1149,16 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,

		req->num_queued_sgs++;

		/*
		 * The number of pending SG entries may not correspond to the
		 * number of mapped SG entries. If all the data are queued, then
		 * don't include unused SG entries.
		 */
		if (length == 0) {
			req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
			break;
		}

		if (!dwc3_calc_trbs_left(dep))
			break;
	}

@@ -1130,7 +1178,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
		req->needs_extra_trb = true;

		/* prepare normal TRB */
		dwc3_prepare_one_trb(dep, req, true, 0);
		dwc3_prepare_one_trb(dep, req, length, true, 0);

		/* Now prepare one extra TRB to align transfer size */
		trb = &dep->trb_pool[dep->trb_enqueue];
@@ -1140,6 +1188,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
				req->request.short_not_ok,
				req->request.no_interrupt);
	} else if (req->request.zero && req->request.length &&
		   !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
		   (IS_ALIGNED(req->request.length, maxp))) {
		struct dwc3 *dwc = dep->dwc;
		struct dwc3_trb *trb;
@@ -1147,17 +1196,27 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep,
		req->needs_extra_trb = true;

		/* prepare normal TRB */
		dwc3_prepare_one_trb(dep, req, true, 0);
		dwc3_prepare_one_trb(dep, req, length, true, 0);

		/* Now prepare one extra TRB to handle ZLP */
		/* Prepare one extra TRB to handle ZLP */
		trb = &dep->trb_pool[dep->trb_enqueue];
		req->num_trbs++;
		__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, 0,
				false, 1, req->request.stream_id,
				!req->direction, 1, req->request.stream_id,
				req->request.short_not_ok,
				req->request.no_interrupt);

		/* Prepare one more TRB to handle MPS alignment for OUT */
		if (!req->direction) {
			trb = &dep->trb_pool[dep->trb_enqueue];
			req->num_trbs++;
			__dwc3_prepare_one_trb(dep, trb, dwc->bounce_addr, maxp,
					false, 1, req->request.stream_id,
					req->request.short_not_ok,
					req->request.no_interrupt);
		}
	} else {
		dwc3_prepare_one_trb(dep, req, false, 0);
		dwc3_prepare_one_trb(dep, req, length, false, 0);
	}
}

@@ -2328,8 +2387,17 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
			status);

	if (req->needs_extra_trb) {
		unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);

		ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event,
				status);

		/* Reclaim MPS padding TRB for ZLP */
		if (!req->direction && req->request.zero && req->request.length &&
		    !usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
		    (IS_ALIGNED(req->request.length, maxp)))
			ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event, status);

		req->needs_extra_trb = false;
	}

@@ -1184,12 +1184,15 @@ static int ncm_unwrap_ntb(struct gether *port,
	int ndp_index;
	unsigned dg_len, dg_len2;
	unsigned ndp_len;
	unsigned block_len;
	struct sk_buff *skb2;
	int ret = -EINVAL;
	unsigned max_size = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize);
	unsigned ntb_max = le32_to_cpu(ntb_parameters.dwNtbOutMaxSize);
	unsigned frame_max = le16_to_cpu(ecm_desc.wMaxSegmentSize);
	const struct ndp_parser_opts *opts = ncm->parser_opts;
	unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0;
	int dgram_counter;
	bool ndp_after_header;

	/* dwSignature */
	if (get_unaligned_le32(tmp) != opts->nth_sign) {
@@ -1208,25 +1211,37 @@ static int ncm_unwrap_ntb(struct gether *port,
	}
	tmp++; /* skip wSequence */

	block_len = get_ncm(&tmp, opts->block_length);
	/* (d)wBlockLength */
	if (get_ncm(&tmp, opts->block_length) > max_size) {
	if (block_len > ntb_max) {
		INFO(port->func.config->cdev, "OUT size exceeded\n");
		goto err;
	}

	ndp_index = get_ncm(&tmp, opts->ndp_index);
	ndp_after_header = false;

	/* Run through all the NDP's in the NTB */
	do {
		/* NCM 3.2 */
		if (((ndp_index % 4) != 0) &&
				(ndp_index < opts->nth_size)) {
		/*
		 * NCM 3.2
		 * dwNdpIndex
		 */
		if (((ndp_index % 4) != 0) ||
				(ndp_index < opts->nth_size) ||
				(ndp_index > (block_len -
					      opts->ndp_size))) {
			INFO(port->func.config->cdev, "Bad index: %#X\n",
			     ndp_index);
			goto err;
		}
		if (ndp_index == opts->nth_size)
			ndp_after_header = true;

		/* walk through NDP */
		/*
		 * walk through NDP
		 * dwSignature
		 */
		tmp = (void *)(skb->data + ndp_index);
		if (get_unaligned_le32(tmp) != ncm->ndp_sign) {
			INFO(port->func.config->cdev, "Wrong NDP SIGN\n");
@@ -1237,14 +1252,15 @@ static int ncm_unwrap_ntb(struct gether *port,
		ndp_len = get_unaligned_le16(tmp++);
		/*
		 * NCM 3.3.1
		 * wLength
		 * entry is 2 items
		 * item size is 16/32 bits, opts->dgram_item_len * 2 bytes
		 * minimal: struct usb_cdc_ncm_ndpX + normal entry + zero entry
		 * Each entry is a dgram index and a dgram length.
		 */
		if ((ndp_len < opts->ndp_size
				+ 2 * 2 * (opts->dgram_item_len * 2))
				|| (ndp_len % opts->ndplen_align != 0)) {
				+ 2 * 2 * (opts->dgram_item_len * 2)) ||
				(ndp_len % opts->ndplen_align != 0)) {
			INFO(port->func.config->cdev, "Bad NDP length: %#X\n",
			     ndp_len);
			goto err;
@@ -1261,8 +1277,21 @@ static int ncm_unwrap_ntb(struct gether *port,

		do {
			index = index2;
			/* wDatagramIndex[0] */
			if ((index < opts->nth_size) ||
					(index > block_len - opts->dpe_size)) {
				INFO(port->func.config->cdev,
				     "Bad index: %#X\n", index);
				goto err;
			}

			dg_len = dg_len2;
			if (dg_len < 14 + crc_len) { /* ethernet hdr + crc */
			/*
			 * wDatagramLength[0]
			 * ethernet hdr + crc or larger than max frame size
			 */
			if ((dg_len < 14 + crc_len) ||
					(dg_len > frame_max)) {
				INFO(port->func.config->cdev,
				     "Bad dgram length: %#X\n", dg_len);
				goto err;
@@ -1286,6 +1315,37 @@ static int ncm_unwrap_ntb(struct gether *port,
			index2 = get_ncm(&tmp, opts->dgram_item_len);
			dg_len2 = get_ncm(&tmp, opts->dgram_item_len);

			if (index2 == 0 || dg_len2 == 0)
				break;

			/* wDatagramIndex[1] */
			if (ndp_after_header) {
				if (index2 < opts->nth_size + opts->ndp_size) {
					INFO(port->func.config->cdev,
					     "Bad index: %#X\n", index2);
					goto err;
				}
			} else {
				if (index2 < opts->nth_size + opts->dpe_size) {
					INFO(port->func.config->cdev,
					     "Bad index: %#X\n", index2);
					goto err;
				}
			}
			if (index2 > block_len - opts->dpe_size) {
				INFO(port->func.config->cdev,
				     "Bad index: %#X\n", index2);
				goto err;
			}

			/* wDatagramLength[1] */
			if ((dg_len2 < 14 + crc_len) ||
					(dg_len2 > frame_max)) {
				INFO(port->func.config->cdev,
				     "Bad dgram length: %#X\n", dg_len);
				goto err;
			}

			/*
			 * Copy the data into a new skb.
			 * This ensures the truesize is correct
@@ -1302,9 +1362,6 @@ static int ncm_unwrap_ntb(struct gether *port,
			ndp_len -= 2 * (opts->dgram_item_len * 2);

			dgram_counter++;

			if (index2 == 0 || dg_len2 == 0)
				break;
		} while (ndp_len > 2 * (opts->dgram_item_len * 2));
	} while (ndp_index);

@@ -751,12 +751,13 @@ static int uasp_alloc_stream_res(struct f_uas *fu, struct uas_stream *stream)
		goto err_sts;

	return 0;

err_sts:
	usb_ep_free_request(fu->ep_status, stream->req_status);
	stream->req_status = NULL;
err_out:
	usb_ep_free_request(fu->ep_out, stream->req_out);
	stream->req_out = NULL;
err_out:
	usb_ep_free_request(fu->ep_in, stream->req_in);
	stream->req_in = NULL;
out:
	return -ENOMEM;
}

@@ -14,6 +14,7 @@
#define __U_F_H__

#include <linux/usb/gadget.h>
#include <linux/overflow.h>

/* Variable Length Array Macros **********************************************/
#define vla_group(groupname) size_t groupname##__next = 0
@@ -21,21 +22,36 @@

#define vla_item(groupname, type, name, n) \
	size_t groupname##_##name##__offset = ({ \
		size_t align_mask = __alignof__(type) - 1; \
		size_t offset = (groupname##__next + align_mask) & ~align_mask;\
		size_t size = (n) * sizeof(type); \
		groupname##__next = offset + size; \
		size_t offset = 0; \
		if (groupname##__next != SIZE_MAX) { \
			size_t align_mask = __alignof__(type) - 1; \
			size_t size = array_size(n, sizeof(type)); \
			offset = (groupname##__next + align_mask) & \
				 ~align_mask; \
			if (check_add_overflow(offset, size, \
					       &groupname##__next)) { \
				groupname##__next = SIZE_MAX; \
				offset = 0; \
			} \
		} \
		offset; \
	})

#define vla_item_with_sz(groupname, type, name, n) \
	size_t groupname##_##name##__sz = (n) * sizeof(type); \
	size_t groupname##_##name##__offset = ({ \
		size_t align_mask = __alignof__(type) - 1; \
		size_t offset = (groupname##__next + align_mask) & ~align_mask;\
		size_t size = groupname##_##name##__sz; \
		groupname##__next = offset + size; \
		offset; \
	size_t groupname##_##name##__sz = array_size(n, sizeof(type)); \
	size_t groupname##_##name##__offset = ({ \
		size_t offset = 0; \
		if (groupname##__next != SIZE_MAX) { \
			size_t align_mask = __alignof__(type) - 1; \
			offset = (groupname##__next + align_mask) & \
				 ~align_mask; \
			if (check_add_overflow(offset, groupname##_##name##__sz,\
					       &groupname##__next)) { \
				groupname##__next = SIZE_MAX; \
				offset = 0; \
			} \
		} \
		offset; \
	})

#define vla_ptr(ptr, groupname, name) \

@@ -156,9 +156,8 @@ static int exynos_ohci_probe(struct platform_device *pdev)
	hcd->rsrc_len = resource_size(res);

	irq = platform_get_irq(pdev, 0);
	if (!irq) {
		dev_err(&pdev->dev, "Failed to get IRQ\n");
		err = -ENODEV;
	if (irq < 0) {
		err = irq;
		goto fail_io;
	}

@@ -273,7 +273,7 @@ static int xhci_slot_context_show(struct seq_file *s, void *unused)

static int xhci_endpoint_context_show(struct seq_file *s, void *unused)
{
	int dci;
	int ep_index;
	dma_addr_t dma;
	struct xhci_hcd *xhci;
	struct xhci_ep_ctx *ep_ctx;
@@ -282,9 +282,9 @@ static int xhci_endpoint_context_show(struct seq_file *s, void *unused)

	xhci = hcd_to_xhci(bus_to_hcd(dev->udev->bus));

	for (dci = 1; dci < 32; dci++) {
		ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, dci);
		dma = dev->out_ctx->dma + dci * CTX_SIZE(xhci->hcc_params);
	for (ep_index = 0; ep_index < 31; ep_index++) {
		ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
		dma = dev->out_ctx->dma + (ep_index + 1) * CTX_SIZE(xhci->hcc_params);
		seq_printf(s, "%pad: %s\n", &dma,
			   xhci_decode_ep_context(le32_to_cpu(ep_ctx->ep_info),
						  le32_to_cpu(ep_ctx->ep_info2),

@@ -736,15 +736,6 @@ static void xhci_hub_report_usb3_link_state(struct xhci_hcd *xhci,
{
	u32 pls = status_reg & PORT_PLS_MASK;

	/* resume state is a xHCI internal state.
	 * Do not report it to usb core, instead, pretend to be U3,
	 * thus usb core knows it's not ready for transfer
	 */
	if (pls == XDEV_RESUME) {
		*status |= USB_SS_PORT_LS_U3;
		return;
	}

	/* When the CAS bit is set then warm reset
	 * should be performed on port
	 */
@@ -766,6 +757,16 @@
		 */
		pls |= USB_PORT_STAT_CONNECTION;
	} else {
		/*
		 * Resume state is an xHCI internal state. Do not report it to
		 * usb core, instead, pretend to be U3, thus usb core knows
		 * it's not ready for transfer.
		 */
		if (pls == XDEV_RESUME) {
			*status |= USB_SS_PORT_LS_U3;
			return;
		}

		/*
		 * If CAS bit isn't set but the Port is already at
		 * Compliance Mode, fake a connection so the USB core

@@ -3154,10 +3154,11 @@ static void xhci_endpoint_reset(struct usb_hcd *hcd,

	wait_for_completion(cfg_cmd->completion);

	ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
	xhci_free_command(xhci, cfg_cmd);
cleanup:
	xhci_free_command(xhci, stop_cmd);
	if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
		ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
}

static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,

@@ -429,7 +429,7 @@ static int lvs_rh_probe(struct usb_interface *intf,
			USB_DT_SS_HUB_SIZE, USB_CTRL_GET_TIMEOUT);
	if (ret < (USB_DT_HUB_NONVAR_SIZE + 2)) {
		dev_err(&hdev->dev, "wrong root hub descriptor read %d\n", ret);
		return ret;
		return ret < 0 ? ret : -EINVAL;
	}

	/* submit urb to poll interrupt endpoint */

@@ -761,7 +761,7 @@ static int sisusb_write_mem_bulk(struct sisusb_usb_data *sisusb, u32 addr,
	u8 swap8, fromkern = kernbuffer ? 1 : 0;
	u16 swap16;
	u32 swap32, flag = (length >> 28) & 1;
	char buf[4];
	u8 buf[4];

	/* if neither kernbuffer not userbuffer are given, assume
	 * data in obuf

@@ -492,7 +492,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
	prepare_to_wait(&dev->waitq, &wait, TASK_INTERRUPTIBLE);
	dev_dbg(&dev->interface->dev, "%s - submit %c\n", __func__,
		dev->cntl_buffer[0]);
	retval = usb_submit_urb(dev->cntl_urb, GFP_KERNEL);
	retval = usb_submit_urb(dev->cntl_urb, GFP_ATOMIC);
	if (retval >= 0)
		timeout = schedule_timeout(YUREX_WRITE_TIMEOUT);
	finish_wait(&dev->waitq, &wait);

@@ -2328,7 +2328,7 @@ UNUSUAL_DEV( 0x357d, 0x7788, 0x0114, 0x0114,
		"JMicron",
		"USB to ATA/ATAPI Bridge",
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_BROKEN_FUA ),
		US_FL_BROKEN_FUA | US_FL_IGNORE_UAS ),

/* Reported by Andrey Rahmatullin <wrar@altlinux.org> */
UNUSUAL_DEV( 0x4102, 0x1020, 0x0100, 0x0100,

@@ -28,6 +28,13 @@
 * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org>
 */

/* Reported-by: Till Dörges <doerges@pre-sense.de> */
UNUSUAL_DEV(0x054c, 0x087d, 0x0000, 0x9999,
		"Sony",
		"PSZ-HA*",
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_REPORT_OPCODES),

/* Reported-by: Julian Groß <julian.g@posteo.de> */
UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
		"LaCie",
@@ -80,6 +87,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_BROKEN_FUA),

/* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
UNUSUAL_DEV(0x154b, 0xf00d, 0x0000, 0x9999,
		"PNY",
		"Pro Elite SSD",
		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
		US_FL_NO_ATA_1X),

/* Reported-by: Hans de Goede <hdegoede@redhat.com> */
UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
		"VIA",

@@ -2152,6 +2152,9 @@ static void updatescrollmode(struct display *p,
	}
}

#define PITCH(w) (((w) + 7) >> 3)
#define CALC_FONTSZ(h, p, c) ((h) * (p) * (c)) /* size = height * pitch * charcount */

static int fbcon_resize(struct vc_data *vc, unsigned int width,
			unsigned int height, unsigned int user)
{
@@ -2161,6 +2164,24 @@
	struct fb_var_screeninfo var = info->var;
	int x_diff, y_diff, virt_w, virt_h, virt_fw, virt_fh;

	if (ops->p && ops->p->userfont && FNTSIZE(vc->vc_font.data)) {
		int size;
		int pitch = PITCH(vc->vc_font.width);

		/*
		 * If user font, ensure that a possible change to user font
		 * height or width will not allow a font data out-of-bounds access.
		 * NOTE: must use original charcount in calculation as font
		 * charcount can change and cannot be used to determine the
		 * font data allocated size.
		 */
		if (pitch <= 0)
			return -EINVAL;
		size = CALC_FONTSZ(vc->vc_font.height, pitch, FNTCHARCNT(vc->vc_font.data));
		if (size > FNTSIZE(vc->vc_font.data))
			return -EINVAL;
	}

	virt_w = FBCON_SWAP(ops->rotate, width, height);
	virt_h = FBCON_SWAP(ops->rotate, height, width);
	virt_fw = FBCON_SWAP(ops->rotate, vc->vc_font.width,
@@ -2623,7 +2644,7 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
	int size;
	int i, csum;
	u8 *new_data, *data = font->data;
	int pitch = (font->width+7) >> 3;
	int pitch = PITCH(font->width);

	/* Is there a reason why fbconsole couldn't handle any charcount >256?
	 * If not this check should be changed to charcount < 256 */
@@ -2639,7 +2660,7 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
	if (fbcon_invalid_charcount(info, charcount))
		return -EINVAL;

	size = h * pitch * charcount;
	size = CALC_FONTSZ(h, pitch, charcount);

	new_data = kmalloc(FONT_EXTRA_WORDS * sizeof(int) + size, GFP_USER);

@@ -531,8 +531,11 @@ int dispc_runtime_get(void)
	DSSDBG("dispc_runtime_get\n");

	r = pm_runtime_get_sync(&dispc.pdev->dev);
	WARN_ON(r < 0);
	return r < 0 ? r : 0;
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&dispc.pdev->dev);
		return r;
	}
	return 0;
}
EXPORT_SYMBOL(dispc_runtime_get);

@@ -1148,8 +1148,11 @@ static int dsi_runtime_get(struct platform_device *dsidev)
	DSSDBG("dsi_runtime_get\n");

	r = pm_runtime_get_sync(&dsi->pdev->dev);
	WARN_ON(r < 0);
	return r < 0 ? r : 0;
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&dsi->pdev->dev);
		return r;
	}
	return 0;
}

static void dsi_runtime_put(struct platform_device *dsidev)

@@ -779,8 +779,11 @@ int dss_runtime_get(void)
	DSSDBG("dss_runtime_get\n");

	r = pm_runtime_get_sync(&dss.pdev->dev);
	WARN_ON(r < 0);
	return r < 0 ? r : 0;
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&dss.pdev->dev);
		return r;
	}
	return 0;
}

void dss_runtime_put(void)

@@ -50,9 +50,10 @@ static int hdmi_runtime_get(void)
	DSSDBG("hdmi_runtime_get\n");

	r = pm_runtime_get_sync(&hdmi.pdev->dev);
	WARN_ON(r < 0);
	if (r < 0)
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&hdmi.pdev->dev);
		return r;
	}

	return 0;
}

@@ -54,9 +54,10 @@ static int hdmi_runtime_get(void)
	DSSDBG("hdmi_runtime_get\n");

	r = pm_runtime_get_sync(&hdmi.pdev->dev);
	WARN_ON(r < 0);
	if (r < 0)
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&hdmi.pdev->dev);
		return r;
	}

	return 0;
}

@@ -402,8 +402,11 @@ static int venc_runtime_get(void)
	DSSDBG("venc_runtime_get\n");

	r = pm_runtime_get_sync(&venc.pdev->dev);
	WARN_ON(r < 0);
	return r < 0 ? r : 0;
	if (WARN_ON(r < 0)) {
		pm_runtime_put_sync(&venc.pdev->dev);
		return r;
	}
	return 0;
}

static void venc_runtime_put(void)

@ -154,7 +154,7 @@ int get_evtchn_to_irq(unsigned evtchn)
|
|||
/* Get info for IRQ */
|
||||
struct irq_info *info_for_irq(unsigned irq)
|
||||
{
|
||||
return irq_get_handler_data(irq);
|
||||
return irq_get_chip_data(irq);
|
||||
}
|
||||
|
||||
/* Constructors for packed IRQ information. */
|
||||
|
@@ -375,7 +375,7 @@ static void xen_irq_init(unsigned irq)
 	info->type = IRQT_UNBOUND;
 	info->refcnt = -1;
 
-	irq_set_handler_data(irq, info);
+	irq_set_chip_data(irq, info);
 
 	list_add_tail(&info->list, &xen_irq_list_head);
 }
@@ -424,14 +424,14 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
 
 static void xen_free_irq(unsigned irq)
 {
-	struct irq_info *info = irq_get_handler_data(irq);
+	struct irq_info *info = irq_get_chip_data(irq);
 
 	if (WARN_ON(!info))
 		return;
 
 	list_del(&info->list);
 
-	irq_set_handler_data(irq, NULL);
+	irq_set_chip_data(irq, NULL);
 
 	WARN_ON(info->refcnt > 0);
 
@@ -601,7 +601,7 @@ EXPORT_SYMBOL_GPL(xen_irq_from_gsi);
 static void __unbind_from_irq(unsigned int irq)
 {
 	int evtchn = evtchn_from_irq(irq);
-	struct irq_info *info = irq_get_handler_data(irq);
+	struct irq_info *info = irq_get_chip_data(irq);
 
 	if (info->refcnt > 0) {
 		info->refcnt--;
@@ -1105,7 +1105,7 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 
 void unbind_from_irqhandler(unsigned int irq, void *dev_id)
 {
-	struct irq_info *info = irq_get_handler_data(irq);
+	struct irq_info *info = irq_get_chip_data(irq);
 
 	if (WARN_ON(!info))
 		return;
@@ -1139,7 +1139,7 @@ int evtchn_make_refcounted(unsigned int evtchn)
 	if (irq == -1)
 		return -ENOENT;
 
-	info = irq_get_handler_data(irq);
+	info = irq_get_chip_data(irq);
 
 	if (!info)
 		return -ENOENT;
@@ -1167,7 +1167,7 @@ int evtchn_get(unsigned int evtchn)
 	if (irq == -1)
 		goto done;
 
-	info = irq_get_handler_data(irq);
+	info = irq_get_chip_data(irq);
 
 	if (!info)
 		goto done;
@@ -4444,6 +4444,7 @@ static void btrfs_cleanup_bg_io(struct btrfs_block_group_cache *cache)
 		cache->io_ctl.inode = NULL;
 		iput(inode);
 	}
+	ASSERT(cache->io_ctl.pages == NULL);
 	btrfs_put_block_group(cache);
 }
 
@@ -3010,14 +3010,14 @@ static int btrfs_zero_range(struct inode *inode,
 		if (ret < 0)
 			goto out;
 		space_reserved = true;
-		ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
-						alloc_start, bytes_to_reserve);
-		if (ret)
-			goto out;
 		ret = btrfs_punch_hole_lock_range(inode, lockstart, lockend,
 						  &cached_state);
 		if (ret)
 			goto out;
+		ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
+						alloc_start, bytes_to_reserve);
+		if (ret)
+			goto out;
 		ret = btrfs_prealloc_file_range(inode, mode, alloc_start,
 						alloc_end - alloc_start,
 						i_blocksize(inode),
@@ -1167,7 +1167,6 @@ static int __btrfs_wait_cache_io(struct btrfs_root *root,
 	ret = update_cache_item(trans, root, inode, path, offset,
 				io_ctl->entries, io_ctl->bitmaps);
 out:
-	io_ctl_free(io_ctl);
 	if (ret) {
 		invalidate_inode_pages2(inode->i_mapping);
 		BTRFS_I(inode)->generation = 0;
@@ -1332,6 +1331,7 @@ static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 	 * them out later
 	 */
 	io_ctl_drop_pages(io_ctl);
+	io_ctl_free(io_ctl);
 
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
 			     i_size_read(inode) - 1, &cached_state);
@@ -539,6 +539,7 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 		} else if (strncmp(args[0].from, "lzo", 3) == 0) {
 			compress_type = "lzo";
 			info->compress_type = BTRFS_COMPRESS_LZO;
+			info->compress_level = 0;
 			btrfs_set_opt(info->mount_opt, COMPRESS);
 			btrfs_clear_opt(info->mount_opt, NODATACOW);
 			btrfs_clear_opt(info->mount_opt, NODATASUM);
@@ -3422,11 +3422,13 @@ int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
 	btrfs_free_path(path);
 out_unlock:
 	mutex_unlock(&dir->log_mutex);
-	if (ret == -ENOSPC) {
+	if (err == -ENOSPC) {
 		btrfs_set_log_full_commit(root->fs_info, trans);
-		ret = 0;
-	} else if (ret < 0)
-		btrfs_abort_transaction(trans, ret);
+		err = 0;
+	} else if (err < 0 && err != -ENOENT) {
+		/* ENOENT can be returned if the entry hasn't been fsynced yet */
+		btrfs_abort_transaction(trans, err);
+	}
 
 	btrfs_end_log_trans(root);
@@ -3196,6 +3196,15 @@ int __sync_dirty_buffer(struct buffer_head *bh, int op_flags)
 	WARN_ON(atomic_read(&bh->b_count) < 1);
 	lock_buffer(bh);
 	if (test_clear_buffer_dirty(bh)) {
+		/*
+		 * The bh should be mapped, but it might not be if the
+		 * device was hot-removed. Not much we can do but fail the I/O.
+		 */
+		if (!buffer_mapped(bh)) {
+			unlock_buffer(bh);
+			return -EIO;
+		}
+
 		get_bh(bh);
 		bh->b_end_io = end_buffer_write_sync;
 		ret = submit_bh(REQ_OP_WRITE, op_flags, bh);
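The hunk above makes `__sync_dirty_buffer()` verify `buffer_mapped()` before queuing the write, since a hot-removed device leaves behind a buffer that is still dirty but no longer backed by anything. A rough stand-alone sketch of that validate-before-submit flow (the `struct buf` handle and its fields are illustrative, not kernel types):

```c
#include <assert.h>

#define DEMO_EIO 5 /* stand-in for the errno value */

/* Illustrative buffer handle: "mapped" mirrors buffer_mapped(). */
struct buf {
	int mapped;
	int dirty;
	int writes_submitted;
};

/*
 * Sketch of the fixed flow: a dirty buffer whose backing device has
 * vanished must fail with -EIO instead of being queued for I/O.
 */
static int sync_dirty(struct buf *b)
{
	if (b->dirty) {
		b->dirty = 0;
		if (!b->mapped)
			return -DEMO_EIO; /* device gone: fail the I/O */
		b->writes_submitted++;  /* would be submit_bh() */
	}
	return 0;
}
```

The key property is that no write is ever submitted against an unmapped handle, which is exactly what the added `buffer_mapped()` check guarantees.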
@@ -3615,6 +3615,9 @@ static void delayed_work(struct work_struct *work)
 	dout("mdsc delayed_work\n");
 	ceph_check_delayed_caps(mdsc);
 
+	if (mdsc->stopping)
+		return;
+
 	mutex_lock(&mdsc->mutex);
 	renew_interval = mdsc->mdsmap->m_session_timeout >> 2;
 	renew_caps = time_after_eq(jiffies, HZ*renew_interval +
@@ -3950,7 +3953,16 @@ void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc)
 static void ceph_mdsc_stop(struct ceph_mds_client *mdsc)
 {
 	dout("stop\n");
-	cancel_delayed_work_sync(&mdsc->delayed_work); /* cancel timer */
+	/*
+	 * Make sure the delayed work stopped before releasing
+	 * the resources.
+	 *
+	 * Because the cancel_delayed_work_sync() will only
+	 * guarantee that the work finishes executing. But the
+	 * delayed work will re-arm itself again after that.
+	 */
+	flush_delayed_work(&mdsc->delayed_work);
+
 	if (mdsc->mdsmap)
 		ceph_mdsmap_destroy(mdsc->mdsmap);
 	kfree(mdsc->sessions);
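The ceph fix has two halves: the work function bails out once `mdsc->stopping` is set, and teardown flushes rather than merely cancels, because a one-shot cancel can race with the work re-arming itself. A deterministic sketch of why the flag must be set before the final flush (the single-threaded "scheduler" below is a simulation for illustration, not the kernel workqueue API):

```c
#include <assert.h>

/* Illustrative model of a self-re-arming delayed work item. */
struct mdsc {
	int stopping;
	int armed; /* is the work scheduled to run again? */
	int runs;
};

static void schedule_delayed(struct mdsc *m) { m->armed = 1; }

static void delayed_work(struct mdsc *m)
{
	m->runs++;
	if (m->stopping)
		return;          /* the fix: don't re-arm during teardown */
	schedule_delayed(m);     /* normal operation: re-arm ourselves */
}

/* Run the pending instance, if any (it may re-arm itself). */
static void flush_delayed_work(struct mdsc *m)
{
	if (m->armed) {
		m->armed = 0;
		delayed_work(m);
	}
}

/*
 * Teardown in the spirit of the fixed ceph_mdsc_stop(): with stopping
 * set first, the final run cannot re-arm behind our back.
 */
static void stop(struct mdsc *m)
{
	m->stopping = 1;
	flush_delayed_work(m);
}
```

Without the `stopping` check, the last run would call `schedule_delayed()` again and leave a work item pending after the structure is freed.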
@@ -250,14 +250,6 @@ int ext4_setup_system_zone(struct super_block *sb)
 	int flex_size = ext4_flex_bg_size(sbi);
 	int ret;
 
-	if (!test_opt(sb, BLOCK_VALIDITY)) {
-		if (sbi->system_blks)
-			ext4_release_system_zone(sb);
-		return 0;
-	}
-	if (sbi->system_blks)
-		return 0;
-
 	system_blks = kzalloc(sizeof(*system_blks), GFP_KERNEL);
 	if (!system_blks)
 		return -ENOMEM;
fs/ext4/super.c
@@ -66,10 +66,10 @@ static int ext4_load_journal(struct super_block *, struct ext4_super_block *,
 					     unsigned long journal_devnum);
 static int ext4_show_options(struct seq_file *seq, struct dentry *root);
 static int ext4_commit_super(struct super_block *sb, int sync);
-static void ext4_mark_recovery_complete(struct super_block *sb,
+static int ext4_mark_recovery_complete(struct super_block *sb,
 					struct ext4_super_block *es);
-static void ext4_clear_journal_err(struct super_block *sb,
-				   struct ext4_super_block *es);
+static int ext4_clear_journal_err(struct super_block *sb,
+				  struct ext4_super_block *es);
 static int ext4_sync_fs(struct super_block *sb, int wait);
 static int ext4_remount(struct super_block *sb, int *flags, char *data);
 static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf);
@@ -4629,11 +4629,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 
 	ext4_set_resv_clusters(sb);
 
-	err = ext4_setup_system_zone(sb);
-	if (err) {
-		ext4_msg(sb, KERN_ERR, "failed to initialize system "
-			 "zone (%d)", err);
-		goto failed_mount4a;
+	if (test_opt(sb, BLOCK_VALIDITY)) {
+		err = ext4_setup_system_zone(sb);
+		if (err) {
+			ext4_msg(sb, KERN_ERR, "failed to initialize system "
+				 "zone (%d)", err);
+			goto failed_mount4a;
+		}
 	}
 
 	ext4_ext_init(sb);
@@ -4701,7 +4703,9 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	EXT4_SB(sb)->s_mount_state &= ~EXT4_ORPHAN_FS;
 	if (needs_recovery) {
 		ext4_msg(sb, KERN_INFO, "recovery complete");
-		ext4_mark_recovery_complete(sb, es);
+		err = ext4_mark_recovery_complete(sb, es);
+		if (err)
+			goto failed_mount8;
 	}
 	if (EXT4_SB(sb)->s_journal) {
 		if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA)
@@ -4744,10 +4748,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	ext4_msg(sb, KERN_ERR, "VFS: Can't find ext4 filesystem");
 	goto failed_mount;
 
-#ifdef CONFIG_QUOTA
 failed_mount8:
 	ext4_unregister_sysfs(sb);
-#endif
 failed_mount7:
 	ext4_unregister_li_request(sb);
 failed_mount6:
@@ -4889,7 +4891,8 @@ static journal_t *ext4_get_journal(struct super_block *sb,
 	struct inode *journal_inode;
 	journal_t *journal;
 
-	BUG_ON(!ext4_has_feature_journal(sb));
+	if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
+		return NULL;
 
 	journal_inode = ext4_get_journal_inode(sb, journal_inum);
 	if (!journal_inode)
@@ -4919,7 +4922,8 @@ static journal_t *ext4_get_dev_journal(struct super_block *sb,
 	struct ext4_super_block *es;
 	struct block_device *bdev;
 
-	BUG_ON(!ext4_has_feature_journal(sb));
+	if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
+		return NULL;
 
 	bdev = ext4_blkdev_get(j_dev, sb);
 	if (bdev == NULL)
@@ -5010,8 +5014,10 @@ static int ext4_load_journal(struct super_block *sb,
 	dev_t journal_dev;
 	int err = 0;
 	int really_read_only;
+	int journal_dev_ro;
 
-	BUG_ON(!ext4_has_feature_journal(sb));
+	if (WARN_ON_ONCE(!ext4_has_feature_journal(sb)))
+		return -EFSCORRUPTED;
 
 	if (journal_devnum &&
 	    journal_devnum != le32_to_cpu(es->s_journal_dev)) {
@@ -5021,7 +5027,31 @@ static int ext4_load_journal(struct super_block *sb,
 	} else
 		journal_dev = new_decode_dev(le32_to_cpu(es->s_journal_dev));
 
-	really_read_only = bdev_read_only(sb->s_bdev);
+	if (journal_inum && journal_dev) {
+		ext4_msg(sb, KERN_ERR,
+			 "filesystem has both journal inode and journal device!");
+		return -EINVAL;
+	}
+
+	if (journal_inum) {
+		journal = ext4_get_journal(sb, journal_inum);
+		if (!journal)
+			return -EINVAL;
+	} else {
+		journal = ext4_get_dev_journal(sb, journal_dev);
+		if (!journal)
+			return -EINVAL;
+	}
+
+	journal_dev_ro = bdev_read_only(journal->j_dev);
+	really_read_only = bdev_read_only(sb->s_bdev) | journal_dev_ro;
+
+	if (journal_dev_ro && !sb_rdonly(sb)) {
+		ext4_msg(sb, KERN_ERR,
+			 "journal device read-only, try mounting with '-o ro'");
+		err = -EROFS;
+		goto err_out;
+	}
 
 	/*
 	 * Are we loading a blank journal or performing recovery after a
@@ -5036,27 +5066,14 @@ static int ext4_load_journal(struct super_block *sb,
 				ext4_msg(sb, KERN_ERR, "write access "
 					"unavailable, cannot proceed "
 					"(try mounting with noload)");
-				return -EROFS;
+				err = -EROFS;
+				goto err_out;
 			}
 			ext4_msg(sb, KERN_INFO, "write access will "
 				"be enabled during recovery");
 		}
 	}
 
-	if (journal_inum && journal_dev) {
-		ext4_msg(sb, KERN_ERR, "filesystem has both journal "
-			 "and inode journals!");
-		return -EINVAL;
-	}
-
-	if (journal_inum) {
-		if (!(journal = ext4_get_journal(sb, journal_inum)))
-			return -EINVAL;
-	} else {
-		if (!(journal = ext4_get_dev_journal(sb, journal_dev)))
-			return -EINVAL;
-	}
-
 	if (!(journal->j_flags & JBD2_BARRIER))
 		ext4_msg(sb, KERN_INFO, "barriers disabled");
 
@@ -5076,12 +5093,16 @@ static int ext4_load_journal(struct super_block *sb,
 
 	if (err) {
 		ext4_msg(sb, KERN_ERR, "error loading journal");
-		jbd2_journal_destroy(journal);
-		return err;
+		goto err_out;
 	}
 
 	EXT4_SB(sb)->s_journal = journal;
-	ext4_clear_journal_err(sb, es);
+	err = ext4_clear_journal_err(sb, es);
+	if (err) {
+		EXT4_SB(sb)->s_journal = NULL;
+		jbd2_journal_destroy(journal);
+		return err;
+	}
 
 	if (!really_read_only && journal_devnum &&
 	    journal_devnum != le32_to_cpu(es->s_journal_dev)) {
@@ -5092,6 +5113,10 @@ static int ext4_load_journal(struct super_block *sb,
 	}
 
 	return 0;
+
+err_out:
+	jbd2_journal_destroy(journal);
+	return err;
 }
 
 static int ext4_commit_super(struct super_block *sb, int sync)
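After this series, every failure in `ext4_load_journal()` that occurs once the journal object exists funnels through the single `err_out:` label, so `jbd2_journal_destroy()` runs exactly once on any path. A compact sketch of that goto-based single-exit cleanup idiom (the journal handle, counters, and error codes are illustrative, not ext4's actual types):

```c
#include <assert.h>
#include <stdlib.h>

#define DEMO_EINVAL 22
#define DEMO_EROFS  30

/* Counter so a test can observe that cleanup is balanced. */
static int journals_alive;

struct journal { int dummy; };

static struct journal *journal_create(void)
{
	struct journal *j = malloc(sizeof(*j));
	if (j)
		journals_alive++;
	return j;
}

static void journal_destroy(struct journal *j)
{
	journals_alive--;
	free(j);
}

/*
 * Sketch of the fixed control flow: once the journal exists, every
 * error path uses 'goto err_out' so it is destroyed exactly once.
 */
static int load_journal(int dev_read_only, int want_rw)
{
	struct journal *j = journal_create();
	int err = 0;

	if (!j)
		return -DEMO_EINVAL; /* nothing to clean up yet */

	if (dev_read_only && want_rw) {
		err = -DEMO_EROFS;
		goto err_out;
	}
	/* ... further checks would also 'goto err_out' on failure ... */
	journal_destroy(j); /* demo only: the real success path keeps it */
	return 0;

err_out:
	journal_destroy(j);
	return err;
}
```

The old code returned directly from several spots and destroyed the journal in only some of them, which is exactly the class of leak a single exit label prevents.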
@@ -5103,13 +5128,6 @@ static int ext4_commit_super(struct super_block *sb, int sync)
 	if (!sbh || block_device_ejected(sb))
 		return error;
 
-	/*
-	 * The superblock bh should be mapped, but it might not be if the
-	 * device was hot-removed. Not much we can do but fail the I/O.
-	 */
-	if (!buffer_mapped(sbh))
-		return error;
-
 	/*
 	 * If the file system is mounted read-only, don't update the
 	 * superblock write time. This avoids updating the superblock
@@ -5177,26 +5195,32 @@ static int ext4_commit_super(struct super_block *sb, int sync)
  * remounting) the filesystem readonly, then we will end up with a
  * consistent fs on disk. Record that fact.
  */
-static void ext4_mark_recovery_complete(struct super_block *sb,
-					struct ext4_super_block *es)
+static int ext4_mark_recovery_complete(struct super_block *sb,
+				       struct ext4_super_block *es)
 {
+	int err;
 	journal_t *journal = EXT4_SB(sb)->s_journal;
 
 	if (!ext4_has_feature_journal(sb)) {
-		BUG_ON(journal != NULL);
-		return;
+		if (journal != NULL) {
+			ext4_error(sb, "Journal got removed while the fs was "
+				   "mounted!");
+			return -EFSCORRUPTED;
+		}
+		return 0;
 	}
 	jbd2_journal_lock_updates(journal);
-	if (jbd2_journal_flush(journal) < 0)
+	err = jbd2_journal_flush(journal);
+	if (err < 0)
 		goto out;
 
 	if (ext4_has_feature_journal_needs_recovery(sb) && sb_rdonly(sb)) {
 		ext4_clear_feature_journal_needs_recovery(sb);
 		ext4_commit_super(sb, 1);
 	}
 
 out:
 	jbd2_journal_unlock_updates(journal);
+	return err;
 }
 
 /*
@@ -5204,14 +5228,17 @@ static int ext4_mark_recovery_complete(struct super_block *sb,
  * has recorded an error from a previous lifetime, move that error to the
  * main filesystem now.
  */
-static void ext4_clear_journal_err(struct super_block *sb,
+static int ext4_clear_journal_err(struct super_block *sb,
 				   struct ext4_super_block *es)
 {
 	journal_t *journal;
 	int j_errno;
 	const char *errstr;
 
-	BUG_ON(!ext4_has_feature_journal(sb));
+	if (!ext4_has_feature_journal(sb)) {
+		ext4_error(sb, "Journal got removed while the fs was mounted!");
+		return -EFSCORRUPTED;
+	}
 
 	journal = EXT4_SB(sb)->s_journal;
 
@@ -5236,6 +5263,7 @@ static int ext4_clear_journal_err(struct super_block *sb,
 		jbd2_journal_clear_err(journal);
 		jbd2_journal_update_sb_errno(journal);
 	}
+	return 0;
 }
 
 /*
@@ -5378,7 +5406,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 {
 	struct ext4_super_block *es;
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
-	unsigned long old_sb_flags;
+	unsigned long old_sb_flags, vfs_flags;
 	struct ext4_mount_options old_opts;
 	int enable_quota = 0;
 	ext4_group_t g;
@@ -5421,6 +5449,14 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 	if (sbi->s_journal && sbi->s_journal->j_task->io_context)
 		journal_ioprio = sbi->s_journal->j_task->io_context->ioprio;
 
+	/*
+	 * Some options can be enabled by ext4 and/or by VFS mount flag
+	 * either way we need to make sure it matches in both *flags and
+	 * s_flags. Copy those selected flags from *flags to s_flags
+	 */
+	vfs_flags = SB_LAZYTIME | SB_I_VERSION;
+	sb->s_flags = (sb->s_flags & ~vfs_flags) | (*flags & vfs_flags);
+
 	if (!parse_options(data, sb, NULL, &journal_ioprio, 1)) {
 		err = -EINVAL;
 		goto restore_opts;
@@ -5474,9 +5510,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 		set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
 	}
 
-	if (*flags & SB_LAZYTIME)
-		sb->s_flags |= SB_LAZYTIME;
-
 	if ((bool)(*flags & SB_RDONLY) != sb_rdonly(sb)) {
 		if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED) {
 			err = -EROFS;
@@ -5506,8 +5539,13 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 			    (sbi->s_mount_state & EXT4_VALID_FS))
 				es->s_state = cpu_to_le16(sbi->s_mount_state);
 
-			if (sbi->s_journal)
+			if (sbi->s_journal) {
+				/*
+				 * We let remount-ro finish even if marking fs
+				 * as clean failed...
+				 */
 				ext4_mark_recovery_complete(sb, es);
+			}
 			if (sbi->s_mmp_tsk)
 				kthread_stop(sbi->s_mmp_tsk);
 		} else {
@@ -5555,8 +5593,11 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 			 * been changed by e2fsck since we originally mounted
 			 * the partition.)
 			 */
-			if (sbi->s_journal)
-				ext4_clear_journal_err(sb, es);
+			if (sbi->s_journal) {
+				err = ext4_clear_journal_err(sb, es);
+				if (err)
+					goto restore_opts;
+			}
 			sbi->s_mount_state = le16_to_cpu(es->s_state);
 
 			err = ext4_setup_super(sb, es, 0);
@@ -5586,7 +5627,17 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 		ext4_register_li_request(sb, first_not_zeroed);
 	}
 
-	ext4_setup_system_zone(sb);
+	/*
+	 * Handle creation of system zone data early because it can fail.
+	 * Releasing of existing data is done when we are sure remount will
+	 * succeed.
+	 */
+	if (test_opt(sb, BLOCK_VALIDITY) && !sbi->system_blks) {
+		err = ext4_setup_system_zone(sb);
+		if (err)
+			goto restore_opts;
+	}
+
 	if (sbi->s_journal == NULL && !(old_sb_flags & SB_RDONLY)) {
 		err = ext4_commit_super(sb, 1);
 		if (err)
@@ -5607,8 +5658,16 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 		}
 	}
 #endif
+	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->system_blks)
+		ext4_release_system_zone(sb);
+
+	/*
+	 * Some options can be enabled by ext4 and/or by VFS mount flag
+	 * either way we need to make sure it matches in both *flags and
+	 * s_flags. Copy those selected flags from s_flags to *flags
+	 */
+	*flags = (*flags & ~vfs_flags) | (sb->s_flags & vfs_flags);
 
-	*flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME);
 	ext4_msg(sb, KERN_INFO, "re-mounted. Opts: %s", orig_data);
 	kfree(orig_data);
 	return 0;
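Both remount hunks copy only selected flag bits between `*flags` and `sb->s_flags` with the mask form `dst = (dst & ~mask) | (src & mask)`, which overwrites exactly the bits named in the mask and preserves everything else. A tiny sketch of that idiom (the `FL_*` values are illustrative, not the kernel's `SB_*` constants):

```c
#include <assert.h>

/* Illustrative flag bits standing in for SB_LAZYTIME / SB_I_VERSION. */
#define FL_LAZYTIME  0x1UL
#define FL_I_VERSION 0x2UL
#define FL_RDONLY    0x4UL

/* Copy only the bits in 'mask' from src into dst; leave the rest. */
static unsigned long copy_flags(unsigned long dst, unsigned long src,
				unsigned long mask)
{
	return (dst & ~mask) | (src & mask);
}
```

The same expression works in both directions, which is why the fixed remount path uses it once on entry (VFS flags into `s_flags`) and once on exit (`s_flags` back into `*flags`).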
@@ -5622,6 +5681,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
 	sbi->s_commit_interval = old_opts.s_commit_interval;
 	sbi->s_min_batch_time = old_opts.s_min_batch_time;
 	sbi->s_max_batch_time = old_opts.s_max_batch_time;
+	if (!test_opt(sb, BLOCK_VALIDITY) && sbi->system_blks)
+		ext4_release_system_zone(sb);
 #ifdef CONFIG_QUOTA
 	sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
 	for (i = 0; i < EXT4_MAXQUOTAS; i++) {
|
Some files were not shown because too many files have changed in this diff Show more
Loading…
Add table
Reference in a new issue