Merge LTS tag v4.19.3 into msm-kona
* refs/heads/tmp-73aa1c8:
  Revert "drm/msm: dpu: Allow planes to extend past active display"
  Revert "drm/msm/disp/dpu: Use proper define for drm_encoder_init() 'encoder_type'"
  Linux 4.19.3
  Revert "ACPICA: AML interpreter: add region addresses in global list during initialization"
  CONFIG_XEN_PV breaks xen_create_contiguous_region on ARM
  drm/i915: Fix hpd handling for pins with two encoders
  drm/i915: Fix NULL deref when re-enabling HPD IRQs on systems with MST
  drm/i915: Fix possible race in intel_dp_add_mst_connector()
  drm/i915/execlists: Force write serialisation into context image vs execution
  drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5
  drm/i915: Mark pin flags as u64
  drm/i915: Don't oops during modeset shutdown after lpe audio deinit
  drm/i915: Compare user's 64b GTT offset even on 32b
  drm/i915: Fix ilk+ watermarks when disabling pipes
  drm/i915: Fix error handling for the NV12 fb dimensions check
  drm/i915: Mark up GTT sizes as u64
  drm/i915/hdmi: Add HDMI 2.0 audio clock recovery N values
  drm/i915/icl: Fix the macros for DFLEXDPMLE register bits
  drm/i915/dp: Restrict link retrain workaround to external monitors
  drm/i915/dp: Fix link retraining comment in intel_dp_long_pulse()
  drm/i915: Large page offsets for pread/pwrite
  drm/i915: Skip vcpi allocation for MSTB ports that are gone
  drm/i915: Don't unset intel_connector->mst_port
  drm/i915: Restore vblank interrupts earlier
  drm/i915: Use the correct crtc when sanitizing plane mapping
  drm/i915/dp: Link train Fallback on eDP only if fallback link BW can fit panel's native mode
  drm: panel-orientation-quirks: Add quirk for Acer One 10 (S1003)
  drm/dp_mst: Check if primary mstb is null
  drm/etnaviv: fix bogus fence complete check in timeout handler
  drm/amd/powerplay: Enable/Disable NBPSTATE on On/OFF of UVD
  drm/nouveau: Fix nv50_mstc->best_encoder()
  drm/nouveau: Check backlight IDs are >= 0, not > 0
  drm/amdgpu: Suppress keypresses from ACPI_VIDEO events
  drm/amdgpu: add missing CHIP_HAINAN in amdgpu_ucode_get_load_type
  drm/amdgpu: Fix typo in amdgpu_vmid_mgr_init
  drm/rockchip: Allow driver to be shutdown on reboot/kexec
  scripts/spdxcheck.py: make python3 compliant
  mm: don't reclaim inodes with many attached pages
  efi/arm/libstub: Pack FDT after populating it
  mm/swapfile.c: use kvzalloc for swap_info_struct allocation
  hugetlbfs: fix kernel BUG at fs/hugetlbfs/inode.c:444!
  lib/ubsan.c: don't mark __ubsan_handle_builtin_unreachable as noreturn
  crypto: user - fix leaking uninitialized memory to userspace
  libata: blacklist SAMSUNG MZ7TD256HAFV-000L9 SSD
  gfs2: Fix metadata read-ahead during truncate (2)
  gfs2: Put bitmap buffers in put_super
  selinux: check length properly in SCTP bind hook
  fuse: fix possibly missed wake-up after abort
  fuse: fix leaked notify reply
  fuse: fix use-after-free in fuse_direct_IO()
  rtc: hctosys: Add missing range error reporting
  nfsd: COPY and CLONE operations require the saved filehandle to be set
  NFSv4: Don't exit the state manager without clearing NFS4CLNT_MANAGER_RUNNING
  sunrpc: correct the computation for page_ptr when truncating
  kdb: print real address of pointers instead of hashed addresses
  kdb: use correct pointer when 'btc' calls 'btt'
  ARM: cpuidle: Don't register the driver when back-end init returns -ENXIO
  uapi: fix linux/kfd_ioctl.h userspace compilation errors
  mnt: fix __detach_mounts infinite loop
  mount: Prevent MNT_DETACH from disconnecting locked mounts
  mount: Don't allow copying MNT_UNBINDABLE|MNT_LOCKED mounts
  mount: Retest MNT_LOCKED in do_umount
  ext4: fix buffer leak in __ext4_read_dirblock() on error path
  ext4: fix buffer leak in ext4_expand_extra_isize_ea() on error path
  ext4: fix buffer leak in ext4_xattr_move_to_block() on error path
  ext4: release bs.bh before re-using in ext4_xattr_block_find()
  ext4: fix buffer leak in ext4_xattr_get_block() on error path
  ext4: fix possible leak of s_journal_flag_rwsem in error path
  ext4: fix possible leak of sbi->s_group_desc_leak in error path
  ext4: avoid possible double brelse() in add_new_gdb() on error path
  ext4: fix missing cleanup if ext4_alloc_flex_bg_array() fails while resizing
  ext4: avoid buffer leak in ext4_orphan_add() after prior errors
  ext4: avoid buffer leak on shutdown in ext4_mark_iloc_dirty()
  ext4: fix possible inode leak in the retry loop of ext4_resize_fs()
  ext4: missing !bh check in ext4_xattr_inode_write()
  ext4: avoid potential extra brelse in setup_new_flex_group_blocks()
  ext4: add missing brelse() add_new_gdb_meta_bg()'s error path
  ext4: add missing brelse() in set_flexbg_block_bitmap()'s error path
  ext4: add missing brelse() update_backups()'s error path
  clockevents/drivers/i8253: Add support for PIT shutdown quirk
  btrfs: tree-checker: Fix misleading group system information
  Btrfs: fix data corruption due to cloning of eof block
  Btrfs: fix infinite loop on inode eviction after deduplication of eof block
  Btrfs: fix cur_offset in the error case for nocow
  Btrfs: fix missing data checksums after a ranged fsync (msync)
  btrfs: fix pinned underflow after transaction aborted
  watchdog/core: Add missing prototypes for weak functions
  arch/alpha, termios: implement BOTHER, IBSHIFT and termios2
  termios, tty/tty_baudrate.c: fix buffer overrun
  x86/hyper-v: Enable PIT shutdown quirk
  x86/cpu/vmware: Do not trace vmware_sched_clock()
  of, numa: Validate some distance map rules
  perf intel-pt: Insert callchain context into synthesized callchains
  perf intel-pt/bts: Calculate cpumode for synthesized samples
  perf callchain: Honour the ordering of PERF_CONTEXT_{USER,KERNEL,etc}
  perf stat: Handle different PMU names with common prefix
  perf cs-etm: Correct CPU mode for samples
  hwmon: (core) Fix double-free in __hwmon_device_register()
  mtd: docg3: don't set conflicting BCH_CONST_PARAMS option
  mtd: nand: Fix nanddev_neraseblocks()
  mtd: spi-nor: cadence-quadspi: Return error code in cqspi_direct_read_execute()
  bonding/802.3ad: fix link_failure_count tracking
  ARM: 8809/1: proc-v7: fix Thumb annotation of cpu_v7_hvc_switch_mm
  netfilter: conntrack: fix calculation of next bucket number in early_drop
  memory_hotplug: cond_resched in __remove_pages
  mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE mappings
  ocfs2: free up write context when direct IO failed
  ocfs2: fix a misuse a of brelse after failing ocfs2_check_dir_entry
  soc: ti: QMSS: Fix usage of irq_set_affinity_hint
  Revert "powerpc/8xx: Use L1 entry APG to handle _PAGE_ACCESSED for CONFIG_SWAP"
  SCSI: fix queue cleanup race before queue initialization is done
  scsi: qla2xxx: Initialize port speed to avoid setting lower speed
  vhost/scsi: truncate T10 PI iov_iter to prot_bytes
  crypto: hisilicon - Fix reference after free of memories on error path
  crypto: hisilicon - Fix NULL dereference for same dst and src
  reset: hisilicon: fix potential NULL pointer dereference
  acpi, nfit: Fix ARS overflow continuation
  acpi/nfit, x86/mce: Validate a MCE's address before using it
  acpi/nfit, x86/mce: Handle only uncorrectable machine checks
  mach64: fix image corruption due to reading accelerator registers
  mach64: fix display corruption on big endian machines
  thermal: core: Fix use-after-free in thermal_cooling_device_destroy_sysfs
  Revert "ceph: fix dentry leak in splice_dentry()"
  libceph: bump CEPH_MSG_MAX_DATA_LEN
  clk: rockchip: Fix static checker warning in rockchip_ddrclk_get_parent call
  clk: rockchip: fix wrong mmc sample phase shift for rk3328
  clk: sunxi-ng: h6: fix bus clocks' divider position
  clk: at91: Fix division by zero in PLL recalc_rate()
  clk: s2mps11: Fix matching when built as module and DT node contains compatible
  um: Drop own definition of PTRACE_SYSEMU/_SINGLESTEP
  xtensa: fix boot parameters address translation
  xtensa: make sure bFLT stack is 16 byte aligned
  xtensa: add NOTES section to the linker script
  MIPS: Loongson-3: Fix BRIDGE irq delivery problem
  MIPS: Loongson-3: Fix CPU UART irq delivery problem
  zram: close udev startup race condition as default groups
  clk: meson: axg: mark fdiv2 and fdiv3 as critical
  clk: meson-gxbb: set fclk_div3 as CLK_IS_CRITICAL
  arm64: dts: stratix10: fix multicast filtering
  arm64: dts: stratix10: Support Ethernet Jumbo frame
  drm/msm: fix OF child-node lookup
  fuse: set FR_SENT while locked
  fuse: fix blocked_waitq wakeup
  fuse: Fix use-after-free in fuse_dev_do_write()
  fuse: Fix use-after-free in fuse_dev_do_read()
  vfs: fix FIGETBSZ ioctl on an overlayfs file
  scsi: qla2xxx: Fix driver hang when FC-NVMe LUNs are configured
  scsi: qla2xxx: Fix duplicate switch database entries
  scsi: qla2xxx: Fix NVMe Target discovery
  scsi: qla2xxx: Fix NVMe session hang on unload
  scsi: qla2xxx: Fix for double free of SRB structure
  scsi: qla2xxx: Fix re-using LoopID when handle is in use
  scsi: qla2xxx: Reject bsg request if chip is down.
  scsi: qla2xxx: shutdown chip if reset fail
  scsi: qla2xxx: Fix early srb free on abort
  scsi: qla2xxx: Remove stale debug trace message from tcm_qla2xxx
  scsi: qla2xxx: Fix process response queue for ISP26XX and above
  scsi: qla2xxx: Fix incorrect port speed being set for FC adapters
  serial: sh-sci: Fix could not remove dev_attr_rx_fifo_timeout
  ovl: automatically enable redirect_dir on metacopy=on
  ovl: check whiteout in ovl_create_over_whiteout()
  ovl: fix recursive oi->lock in ovl_link()
  ovl: fix error handling in ovl_verify_set_fh()
  cdrom: fix improper type cast, which can leat to information leak.
  media: ov5640: fix restore of last mode set
  drm/amdgpu: fix integer overflow test in amdgpu_bo_list_create()
  9p: clear dangling pointers in p9stat_free
  media: ov5640: fix mode change regression
  ARM: dts: imx6ull: keep IMX6UL_ prefix for signals on both i.MX6UL and i.MX6ULL
  udf: Prevent write-unsupported filesystem to be remounted read-write
  9p locks: fix glock.client_id leak in do_lock
  staging: most: video: fix registration of an empty comp core_component
  drm/amdgpu: Fix SDMA TO after GPU reset v3
  drm: rcar-du: Update Gen3 output limitations
  staging:iio:ad7606: fix voltage scales
  powerpc/selftests: Wait all threads to join
  media: tvp5150: fix width alignment during set_selection()
  sc16is7xx: Fix for multi-channel stall
  serial: 8250_of: Fix for lack of interrupt support
  staging: erofs: fix a missing endian conversion
  MIPS/PCI: Call pcie_bus_configure_settings() to set MPS/MRRS
  powerpc/memtrace: Remove memory in chunks
  powerpc/boot: Ensure _zimage_start is a weak symbol
  MIPS: kexec: Mark CPU offline before disabling local IRQ
  media: coda: don't overwrite h.264 profile_idc on decoder instance
  media: pci: cx23885: handle adding to list failure
  drm/hisilicon: hibmc: Do not carry error code in HiBMC framebuffer pointer
  drm/amd/display: fix gamma not being applied
  drm/amd/display: Raise dispclk value for dce120 by 15%
  drm/omap: fix memory barrier bug in DMM driver
  powerpc/mm: Don't report hugepage tables as memory leaks when using kmemleak
  drm/msm: dpu: Allow planes to extend past active display
  drm/msm/disp/dpu: Use proper define for drm_encoder_init() 'encoder_type'
  drm/msm/gpu: fix parameters in function msm_gpu_crashstate_capture
  powerpc/nohash: fix undefined behaviour when testing page size support
  ARM: imx_v6_v7_defconfig: Select CONFIG_TMPFS_POSIX_ACL
  drm/amdgpu/powerplay: fix missing break in switch statements
  drm/nouveau/secboot/acr: fix memory leak
  tracing/kprobes: Check the probe on unloaded module correctly
  tty: check name length in tty_find_polling_driver()
  powerpc/eeh: Fix possible null deref in eeh_dump_dev_log()
  powerpc/Makefile: Fix PPC_BOOK3S_64 ASFLAGS
  Input: wm97xx-ts - fix exit path
  drm/amd/display: fix bug of accessing invalid memory
  powerpc/mm: fix always true/false warning in slice.c
  powerpc/mm: Fix page table dump to work on Radix
  powerpc/64/module: REL32 relocation range check
  powerpc/traps: restore recoverability of machine_check interrupts

Change-Id: Id971c3ddeb610be8aee4ff531ec3fb20ad0db58d
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
commit 60b1073aea

193 changed files with 1499 additions and 773 deletions
@@ -286,6 +286,12 @@ pointed by REDIRECT. This should not be possible on local system as setting
"trusted." xattrs will require CAP_SYS_ADMIN. But it should be possible
for untrusted layers like from a pen drive.

Note: redirect_dir={off|nofollow|follow(*)} conflicts with metacopy=on, and
results in an error.

(*) redirect_dir=follow only conflicts with metacopy=on if upperdir=... is
given.

Sharing and copying layers
--------------------------
Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 2
SUBLEVEL = 3
EXTRAVERSION =
NAME = "People's Front"
@@ -73,9 +73,15 @@
})

#define user_termios_to_kernel_termios(k, u) \
	copy_from_user(k, u, sizeof(struct termios))
	copy_from_user(k, u, sizeof(struct termios2))

#define kernel_termios_to_user_termios(u, k) \
	copy_to_user(u, k, sizeof(struct termios2))

#define user_termios_to_kernel_termios_1(k, u) \
	copy_from_user(k, u, sizeof(struct termios))

#define kernel_termios_to_user_termios_1(u, k) \
	copy_to_user(u, k, sizeof(struct termios))

#endif /* _ALPHA_TERMIOS_H */
@@ -32,6 +32,11 @@
#define TCXONC _IO('t', 30)
#define TCFLSH _IO('t', 31)

#define TCGETS2 _IOR('T', 42, struct termios2)
#define TCSETS2 _IOW('T', 43, struct termios2)
#define TCSETSW2 _IOW('T', 44, struct termios2)
#define TCSETSF2 _IOW('T', 45, struct termios2)

#define TIOCSWINSZ _IOW('t', 103, struct winsize)
#define TIOCGWINSZ _IOR('t', 104, struct winsize)
#define TIOCSTART _IO('t', 110) /* start output, like ^Q */
@@ -26,6 +26,19 @@ struct termios {
	speed_t c_ospeed; /* output speed */
};

/* Alpha has identical termios and termios2 */

struct termios2 {
	tcflag_t c_iflag; /* input mode flags */
	tcflag_t c_oflag; /* output mode flags */
	tcflag_t c_cflag; /* control mode flags */
	tcflag_t c_lflag; /* local mode flags */
	cc_t c_cc[NCCS]; /* control characters */
	cc_t c_line; /* line discipline (== c_cc[19]) */
	speed_t c_ispeed; /* input speed */
	speed_t c_ospeed; /* output speed */
};

/* Alpha has matching termios and ktermios */

struct ktermios {
@@ -152,6 +165,7 @@ struct ktermios {
#define B3000000 00034
#define B3500000 00035
#define B4000000 00036
#define BOTHER 00037

#define CSIZE 00001400
#define CS5 00000000
@@ -169,6 +183,9 @@ struct ktermios {
#define CMSPAR 010000000000 /* mark or space (stick) parity */
#define CRTSCTS 020000000000 /* flow control */

#define CIBAUD 07600000
#define IBSHIFT 16

/* c_lflag bits */
#define ISIG 0x00000080
#define ICANON 0x00000100
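The termios2 definitions above give alpha the same arbitrary-baud interface the other architectures already have. A minimal userspace sketch of how that interface is driven, assuming a Linux `<asm/termbits.h>`; `custom_baud_cflag()` and `set_custom_baud()` are hypothetical helper names, not kernel API:

```c
#include <asm/termbits.h>	/* struct termios2, TCGETS2, TCSETS2, BOTHER, IBSHIFT */
#include <sys/ioctl.h>

/* Build a c_cflag that requests an arbitrary rate: clear the legacy
 * Bxxx code in both the output field and the IBSHIFT-shifted input
 * field, then select BOTHER so the rate is taken from c_ispeed and
 * c_ospeed instead. */
static unsigned int custom_baud_cflag(unsigned int cflag)
{
	cflag &= ~(CBAUD | (CBAUD << IBSHIFT));
	cflag |= BOTHER | (BOTHER << IBSHIFT);
	return cflag;
}

/* Apply a non-standard baud rate to an open tty via TCGETS2/TCSETS2. */
int set_custom_baud(int fd, int baud)
{
	struct termios2 tio;

	if (ioctl(fd, TCGETS2, &tio) < 0)
		return -1;
	tio.c_cflag = custom_baud_cflag(tio.c_cflag);
	tio.c_ispeed = baud;
	tio.c_ospeed = baud;
	return ioctl(fd, TCSETS2, &tio);
}
```

`set_custom_baud()` needs a real tty to act on; the flag arithmetic alone is what the BOTHER/IBSHIFT plumbing in this series makes meaningful on alpha.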
@@ -14,14 +14,23 @@
 * The pin function ID is a tuple of
 * <mux_reg conf_reg input_reg mux_mode input_val>
 */
/* signals common for i.MX6UL and i.MX6ULL */
#undef MX6UL_PAD_UART5_TX_DATA__UART5_DTE_RX
#define MX6UL_PAD_UART5_TX_DATA__UART5_DTE_RX 0x00BC 0x0348 0x0644 0x0 0x6
#undef MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX
#define MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX 0x00C0 0x034C 0x0644 0x0 0x7
#undef MX6UL_PAD_ENET1_RX_EN__UART5_DCE_RTS
#define MX6UL_PAD_ENET1_RX_EN__UART5_DCE_RTS 0x00CC 0x0358 0x0640 0x1 0x5
#undef MX6UL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS
#define MX6UL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS 0x00D0 0x035C 0x0640 0x1 0x6
#undef MX6UL_PAD_CSI_DATA02__UART5_DCE_RTS
#define MX6UL_PAD_CSI_DATA02__UART5_DCE_RTS 0x01EC 0x0478 0x0640 0x8 0x7

/* signals for i.MX6ULL only */
#define MX6ULL_PAD_UART1_TX_DATA__UART5_DTE_RX 0x0084 0x0310 0x0644 0x9 0x4
#define MX6ULL_PAD_UART1_RX_DATA__UART5_DCE_RX 0x0088 0x0314 0x0644 0x9 0x5
#define MX6ULL_PAD_UART1_CTS_B__UART5_DCE_RTS 0x008C 0x0318 0x0640 0x9 0x3
#define MX6ULL_PAD_UART1_RTS_B__UART5_DTE_RTS 0x0090 0x031C 0x0640 0x9 0x4
#define MX6ULL_PAD_UART5_TX_DATA__UART5_DTE_RX 0x00BC 0x0348 0x0644 0x0 0x6
#define MX6ULL_PAD_UART5_RX_DATA__UART5_DCE_RX 0x00C0 0x034C 0x0644 0x0 0x7
#define MX6ULL_PAD_ENET1_RX_EN__UART5_DCE_RTS 0x00CC 0x0358 0x0640 0x1 0x5
#define MX6ULL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS 0x00D0 0x035C 0x0640 0x1 0x6
#define MX6ULL_PAD_ENET2_RX_DATA0__EPDC_SDDO08 0x00E4 0x0370 0x0000 0x9 0x0
#define MX6ULL_PAD_ENET2_RX_DATA1__EPDC_SDDO09 0x00E8 0x0374 0x0000 0x9 0x0
#define MX6ULL_PAD_ENET2_RX_EN__EPDC_SDDO10 0x00EC 0x0378 0x0000 0x9 0x0
@@ -55,7 +64,6 @@
#define MX6ULL_PAD_CSI_DATA00__ESAI_TX_HF_CLK 0x01E4 0x0470 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA01__ESAI_RX_HF_CLK 0x01E8 0x0474 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA02__ESAI_RX_FS 0x01EC 0x0478 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA02__UART5_DCE_RTS 0x01EC 0x0478 0x0640 0x8 0x7
#define MX6ULL_PAD_CSI_DATA03__ESAI_RX_CLK 0x01F0 0x047C 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0
@@ -409,6 +409,7 @@ CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_JFFS2_FS=y
CONFIG_UBIFS_FS=y
CONFIG_NFS_FS=y
@@ -112,7 +112,7 @@ ENTRY(cpu_v7_hvc_switch_mm)
	hvc	#0
	ldmfd	sp!, {r0 - r3}
	b	cpu_v7_switch_mm
ENDPROC(cpu_v7_smc_switch_mm)
ENDPROC(cpu_v7_hvc_switch_mm)
#endif
ENTRY(cpu_v7_iciallu_switch_mm)
	mov	r3, #0
@@ -137,6 +137,9 @@
		reset-names = "stmmaceth", "stmmaceth-ocp";
		clocks = <&clkmgr STRATIX10_EMAC0_CLK>;
		clock-names = "stmmaceth";
		tx-fifo-depth = <16384>;
		rx-fifo-depth = <16384>;
		snps,multicast-filter-bins = <256>;
		status = "disabled";
	};

@@ -150,6 +153,9 @@
		reset-names = "stmmaceth", "stmmaceth-ocp";
		clocks = <&clkmgr STRATIX10_EMAC1_CLK>;
		clock-names = "stmmaceth";
		tx-fifo-depth = <16384>;
		rx-fifo-depth = <16384>;
		snps,multicast-filter-bins = <256>;
		status = "disabled";
	};

@@ -163,6 +169,9 @@
		reset-names = "stmmaceth", "stmmaceth-ocp";
		clocks = <&clkmgr STRATIX10_EMAC2_CLK>;
		clock-names = "stmmaceth";
		tx-fifo-depth = <16384>;
		rx-fifo-depth = <16384>;
		snps,multicast-filter-bins = <256>;
		status = "disabled";
	};
@@ -76,7 +76,7 @@
	phy-mode = "rgmii";
	phy-handle = <&phy0>;

	max-frame-size = <3800>;
	max-frame-size = <9000>;

	mdio0 {
		#address-cells = <1>;
@@ -10,7 +10,7 @@
#define MIPS_CPU_IRQ_BASE 56

#define LOONGSON_UART_IRQ (MIPS_CPU_IRQ_BASE + 2) /* UART */
#define LOONGSON_HT1_IRQ (MIPS_CPU_IRQ_BASE + 3) /* HT1 */
#define LOONGSON_BRIDGE_IRQ (MIPS_CPU_IRQ_BASE + 3) /* CASCADE */
#define LOONGSON_TIMER_IRQ (MIPS_CPU_IRQ_BASE + 7) /* CPU Timer */

#define LOONGSON_HT1_CFG_BASE loongson_sysconf.ht_control_base
@@ -36,6 +36,9 @@ static void crash_shutdown_secondary(void *passed_regs)
	if (!cpu_online(cpu))
		return;

	/* We won't be sent IPIs any more. */
	set_cpu_online(cpu, false);

	local_irq_disable();
	if (!cpumask_test_cpu(cpu, &cpus_in_crash))
		crash_save_cpu(regs, cpu);
@@ -118,6 +118,9 @@ machine_kexec(struct kimage *image)
		*ptr = (unsigned long) phys_to_virt(*ptr);
	}

	/* Mark offline BEFORE disabling local irq. */
	set_cpu_online(smp_processor_id(), false);

	/*
	 * we do not want to be bothered.
	 */
@@ -96,51 +96,8 @@ void mach_irq_dispatch(unsigned int pending)
	}
}

static struct irqaction cascade_irqaction = {
	.handler = no_action,
	.flags = IRQF_NO_SUSPEND,
	.name = "cascade",
};

static inline void mask_loongson_irq(struct irq_data *d)
{
	clear_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
	irq_disable_hazard();

	/* Workaround: UART IRQ may deliver to any core */
	if (d->irq == LOONGSON_UART_IRQ) {
		int cpu = smp_processor_id();
		int node_id = cpu_logical_map(cpu) / loongson_sysconf.cores_per_node;
		int core_id = cpu_logical_map(cpu) % loongson_sysconf.cores_per_node;
		u64 intenclr_addr = smp_group[node_id] |
			(u64)(&LOONGSON_INT_ROUTER_INTENCLR);
		u64 introuter_lpc_addr = smp_group[node_id] |
			(u64)(&LOONGSON_INT_ROUTER_LPC);

		*(volatile u32 *)intenclr_addr = 1 << 10;
		*(volatile u8 *)introuter_lpc_addr = 0x10 + (1<<core_id);
	}
}

static inline void unmask_loongson_irq(struct irq_data *d)
{
	/* Workaround: UART IRQ may deliver to any core */
	if (d->irq == LOONGSON_UART_IRQ) {
		int cpu = smp_processor_id();
		int node_id = cpu_logical_map(cpu) / loongson_sysconf.cores_per_node;
		int core_id = cpu_logical_map(cpu) % loongson_sysconf.cores_per_node;
		u64 intenset_addr = smp_group[node_id] |
			(u64)(&LOONGSON_INT_ROUTER_INTENSET);
		u64 introuter_lpc_addr = smp_group[node_id] |
			(u64)(&LOONGSON_INT_ROUTER_LPC);

		*(volatile u32 *)intenset_addr = 1 << 10;
		*(volatile u8 *)introuter_lpc_addr = 0x10 + (1<<core_id);
	}

	set_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
	irq_enable_hazard();
}
static inline void mask_loongson_irq(struct irq_data *d) { }
static inline void unmask_loongson_irq(struct irq_data *d) { }

/* For MIPS IRQs which shared by all cores */
static struct irq_chip loongson_irq_chip = {
@@ -183,12 +140,11 @@ void __init mach_init_irq(void)
	chip->irq_set_affinity = plat_set_irq_affinity;

	irq_set_chip_and_handler(LOONGSON_UART_IRQ,
			&loongson_irq_chip, handle_level_irq);
			&loongson_irq_chip, handle_percpu_irq);
	irq_set_chip_and_handler(LOONGSON_BRIDGE_IRQ,
			&loongson_irq_chip, handle_percpu_irq);

	/* setup HT1 irq */
	setup_irq(LOONGSON_HT1_IRQ, &cascade_irqaction);

	set_c0_status(STATUSF_IP2 | STATUSF_IP6);
	set_c0_status(STATUSF_IP2 | STATUSF_IP3 | STATUSF_IP6);
}

#ifdef CONFIG_HOTPLUG_CPU
@@ -127,8 +127,12 @@ static void pcibios_scanbus(struct pci_controller *hose)
	if (pci_has_flag(PCI_PROBE_ONLY)) {
		pci_bus_claim_resources(bus);
	} else {
		struct pci_bus *child;

		pci_bus_size_bridges(bus);
		pci_bus_assign_resources(bus);
		list_for_each_entry(child, &bus->children, node)
			pcie_bus_configure_settings(child);
	}
	pci_bus_add_devices(bus);
}
@@ -238,7 +238,11 @@ cpu-as-$(CONFIG_4xx) += -Wa,-m405
cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec)
cpu-as-$(CONFIG_E200) += -Wa,-me200
cpu-as-$(CONFIG_E500) += -Wa,-me500
cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4

# When using '-many -mpower4' gas will first try and find a matching power4
# mnemonic and failing that it will allow any valid mnemonic that GAS knows
# about. GCC will pass -many to GAS when assembling, clang does not.
cpu-as-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4 -Wa,-many
cpu-as-$(CONFIG_PPC_E500MC) += $(call as-option,-Wa$(comma)-me500mc)

KBUILD_AFLAGS += $(cpu-as-y)
@@ -47,8 +47,10 @@ p_end: .long _end
p_pstack: .long _platform_stack_top
#endif

	.weak	_zimage_start
	.globl	_zimage_start
	/* Clang appears to require the .weak directive to be after the symbol
	 * is defined. See https://bugs.llvm.org/show_bug.cgi?id=38921 */
	.weak	_zimage_start
_zimage_start:
	.globl	_zimage_start_lib
_zimage_start_lib:
@@ -34,20 +34,12 @@
 * respectively NA for All or X for Supervisor and no access for User.
 * Then we use the APG to say whether accesses are according to Page rules or
 * "all Supervisor" rules (Access to all)
 * We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
 * When that bit is not set access is done iaw "all user"
 * which means no access iaw page rules.
 * Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
 * 0x => No access => 11 (all accesses performed as user iaw page definition)
 * 10 => No user => 01 (all accesses performed according to page definition)
 * 11 => User => 00 (all accesses performed as supervisor iaw page definition)
 * Therefore, we define 2 APG groups. lsb is _PMD_USER
 * 0 => No user => 01 (all accesses performed according to page definition)
 * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
 * We define all 16 groups so that all other bits of APG can take any value
 */
#ifdef CONFIG_SWAP
#define MI_APG_INIT 0xf4f4f4f4
#else
#define MI_APG_INIT 0x44444444
#endif

/* The effective page number register. When read, contains the information
 * about the last instruction TLB miss. When MI_RPN is written, bits in
@@ -115,20 +107,12 @@
 * Supervisor and no access for user and NA for ALL.
 * Then we use the APG to say whether accesses are according to Page rules or
 * "all Supervisor" rules (Access to all)
 * We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
 * When that bit is not set access is done iaw "all user"
 * which means no access iaw page rules.
 * Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
 * 0x => No access => 11 (all accesses performed as user iaw page definition)
 * 10 => No user => 01 (all accesses performed according to page definition)
 * 11 => User => 00 (all accesses performed as supervisor iaw page definition)
 * Therefore, we define 2 APG groups. lsb is _PMD_USER
 * 0 => No user => 01 (all accesses performed according to page definition)
 * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
 * We define all 16 groups so that all other bits of APG can take any value
 */
#ifdef CONFIG_SWAP
#define MD_APG_INIT 0xf4f4f4f4
#else
#define MD_APG_INIT 0x44444444
#endif

/* The effective page number register. When read, contains the information
 * about the last instruction TLB miss. When MD_RPN is written, bits in
@@ -180,12 +164,6 @@
 */
#define SPRN_M_TW 799

/* APGs */
#define M_APG0 0x00000000
#define M_APG1 0x00000020
#define M_APG2 0x00000040
#define M_APG3 0x00000060

#ifdef CONFIG_PPC_MM_SLICES
#include <asm/nohash/32/slice.h>
#define SLICE_ARRAY_SIZE (1 << (32 - SLICE_LOW_SHIFT - 1))
@@ -169,6 +169,11 @@ static size_t eeh_dump_dev_log(struct eeh_dev *edev, char *buf, size_t len)
	int n = 0, l = 0;
	char buffer[128];

	if (!pdn) {
		pr_warn("EEH: Note: No error log for absent device.\n");
		return 0;
	}

	n += scnprintf(buf+n, len-n, "%04x:%02x:%02x.%01x\n",
		pdn->phb->global_number, pdn->busno,
		PCI_SLOT(pdn->devfn), PCI_FUNC(pdn->devfn));
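The `n += scnprintf(buf+n, len-n, ...)` accumulation in eeh_dump_dev_log() above is safe precisely because the kernel's scnprintf() returns the number of bytes actually stored, not the number that would have been stored. A userspace sketch of that helper (the kernel version lives in lib/vsprintf.c; this re-implementation is illustrative only):

```c
#include <stdarg.h>
#include <stdio.h>

/* Like snprintf(), but return the count of characters actually written
 * to buf (excluding the NUL), so a running offset built with
 * n += scnprintf(buf + n, size - n, ...) can never step past the end
 * of the buffer even when output is truncated. */
static int scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;
	return i;
}
```

With plain snprintf() the same idiom can push `n` past `size`, making the later `size - n` wrap around when passed as an unsigned length.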
@@ -353,13 +353,14 @@ _ENTRY(ITLBMiss_cmp)
#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE)
	mtcr	r12
#endif

#ifdef CONFIG_SWAP
	rlwinm	r11, r10, 31, _PAGE_ACCESSED >> 1
#endif
	/* Load the MI_TWC with the attributes for this "segment." */
	mtspr	SPRN_MI_TWC, r11	/* Set segment attributes */

#ifdef CONFIG_SWAP
	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
	and	r11, r11, r10
	rlwimi	r10, r11, 0, _PAGE_PRESENT
#endif
	li	r11, RPN_PATTERN | 0x200
	/* The Linux PTE won't go exactly into the MMU TLB.
	 * Software indicator bits 20 and 23 must be clear.
@@ -470,14 +471,22 @@ _ENTRY(DTLBMiss_jmp)
	 * above.
	 */
	rlwimi	r11, r10, 0, _PAGE_GUARDED
#ifdef CONFIG_SWAP
	/* _PAGE_ACCESSED has to be set. We use second APG bit for that, 0
	 * on that bit will represent a Non Access group
	 */
	rlwinm	r11, r10, 31, _PAGE_ACCESSED >> 1
#endif
	mtspr	SPRN_MD_TWC, r11

	/* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set.
	 * We also need to know if the insn is a load/store, so:
	 * Clear _PAGE_PRESENT and load that which will
	 * trap into DTLB Error with store bit set accordinly.
	 */
	/* PRESENT=0x1, ACCESSED=0x20
	 * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5));
	 * r10 = (r10 & ~PRESENT) | r11;
	 */
#ifdef CONFIG_SWAP
	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
	and	r11, r11, r10
	rlwimi	r10, r11, 0, _PAGE_PRESENT
#endif
	/* The Linux PTE won't go exactly into the MMU TLB.
	 * Software indicator bits 24, 25, 26, and 27 must be
	 * set. All other Linux PTE bits control the behavior
@@ -637,8 +646,8 @@ InstructionBreakpoint:
	 */
DTLBMissIMMR:
	mtcr	r12
	/* Set 512k byte guarded page and mark it valid and accessed */
	li	r10, MD_PS512K | MD_GUARDED | MD_SVALID | M_APG2
	/* Set 512k byte guarded page and mark it valid */
	li	r10, MD_PS512K | MD_GUARDED | MD_SVALID
	mtspr	SPRN_MD_TWC, r10
	mfspr	r10, SPRN_IMMR	/* Get current IMMR */
	rlwinm	r10, r10, 0, 0xfff80000	/* Get 512 kbytes boundary */
@@ -656,8 +665,8 @@ _ENTRY(dtlb_miss_exit_2)

DTLBMissLinear:
	mtcr	r12
	/* Set 8M byte page and mark it valid and accessed */
	li	r11, MD_PS8MEG | MD_SVALID | M_APG2
	/* Set 8M byte page and mark it valid */
	li	r11, MD_PS8MEG | MD_SVALID
	mtspr	SPRN_MD_TWC, r11
	rlwinm	r10, r10, 0, 0x0f800000	/* 8xx supports max 256Mb RAM */
	ori	r10, r10, 0xf0 | MD_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
@@ -675,8 +684,8 @@ _ENTRY(dtlb_miss_exit_3)
#ifndef CONFIG_PIN_TLB_TEXT
ITLBMissLinear:
	mtcr	r12
	/* Set 8M byte page and mark it valid,accessed */
	li	r11, MI_PS8MEG | MI_SVALID | M_APG2
	/* Set 8M byte page and mark it valid */
	li	r11, MI_PS8MEG | MI_SVALID
	mtspr	SPRN_MI_TWC, r11
	rlwinm	r10, r10, 0, 0x0f800000	/* 8xx supports max 256Mb RAM */
	ori	r10, r10, 0xf0 | MI_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
@@ -960,7 +969,7 @@ initial_mmu:
	ori	r8, r8, MI_EVALID	/* Mark it valid */
	mtspr	SPRN_MI_EPN, r8
	li	r8, MI_PS8MEG	/* Set 8M byte page */
	ori	r8, r8, MI_SVALID | M_APG2	/* Make it valid, APG 2 */
	ori	r8, r8, MI_SVALID	/* Make it valid */
	mtspr	SPRN_MI_TWC, r8
	li	r8, MI_BOOTINIT	/* Create RPN for address 0 */
	mtspr	SPRN_MI_RPN, r8	/* Store TLB entry */
@@ -987,7 +996,7 @@ initial_mmu:
	ori	r8, r8, MD_EVALID	/* Mark it valid */
	mtspr	SPRN_MD_EPN, r8
	li	r8, MD_PS512K | MD_GUARDED	/* Set 512k byte page */
	ori	r8, r8, MD_SVALID | M_APG2	/* Make it valid and accessed */
	ori	r8, r8, MD_SVALID	/* Make it valid */
	mtspr	SPRN_MD_TWC, r8
	mr	r8, r9	/* Create paddr for TLB */
	ori	r8, r8, MI_BOOTINIT|0x2	/* Inhibit cache -- Cort */
@@ -680,7 +680,14 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,

case R_PPC64_REL32:
/* 32 bits relative (used by relative exception tables) */
*(u32 *)location = value - (unsigned long)location;
/* Convert value to relative */
value -= (unsigned long)location;
if (value + 0x80000000 > 0xffffffff) {
pr_err("%s: REL32 %li out of range!\n",
me->name, (long int)value);
return -ENOEXEC;
}
*(u32 *)location = value;
break;

case R_PPC64_TOCSAVE:

@@ -767,12 +767,17 @@ void machine_check_exception(struct pt_regs *regs)
if (check_io_access(regs))
goto bail;

die("Machine check", regs, SIGBUS);

/* Must die if the interrupt is not recoverable */
if (!(regs->msr & MSR_RI))
nmi_panic(regs, "Unrecoverable Machine check");

if (!nested)
nmi_exit();

die("Machine check", regs, SIGBUS);

return;

bail:
if (!nested)
nmi_exit();

@@ -79,7 +79,7 @@ void __init MMU_init_hw(void)
for (; i < 32 && mem >= LARGE_PAGE_SIZE_8M; i++) {
mtspr(SPRN_MD_CTR, ctr | (i << 8));
mtspr(SPRN_MD_EPN, (unsigned long)__va(addr) | MD_EVALID);
mtspr(SPRN_MD_TWC, MD_PS8MEG | MD_SVALID | M_APG2);
mtspr(SPRN_MD_TWC, MD_PS8MEG | MD_SVALID);
mtspr(SPRN_MD_RPN, addr | flags | _PAGE_PRESENT);
addr += LARGE_PAGE_SIZE_8M;
mem -= LARGE_PAGE_SIZE_8M;

@@ -418,12 +418,13 @@ static void walk_pagetables(struct pg_state *st)
unsigned int i;
unsigned long addr;

addr = st->start_address;

/*
* Traverse the linux pagetable structure and dump pages that are in
* the hash pagetable.
*/
for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
addr = KERN_VIRT_START + i * PGDIR_SIZE;
for (i = 0; i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
if (!pgd_none(*pgd) && !pgd_huge(*pgd))
/* pgd exists */
walk_pud(st, pgd, addr);

@@ -472,9 +473,14 @@ static int ptdump_show(struct seq_file *m, void *v)
{
struct pg_state st = {
.seq = m,
.start_address = KERN_VIRT_START,
.marker = address_markers,
};

if (radix_enabled())
st.start_address = PAGE_OFFSET;
else
st.start_address = KERN_VIRT_START;

/* Traverse kernel page tables */
walk_pagetables(&st);
note_page(&st, 0, 0, 0);

@@ -19,6 +19,7 @@
#include <linux/moduleparam.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/kmemleak.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>

@@ -112,6 +113,8 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
for (i = i - 1 ; i >= 0; i--, hpdp--)
*hpdp = __hugepd(0);
kmem_cache_free(cachep, new);
} else {
kmemleak_ignore(new);
}
spin_unlock(ptl);
return 0;

@@ -61,6 +61,13 @@ static void slice_print_mask(const char *label, const struct slice_mask *mask) {

#endif

static inline bool slice_addr_is_low(unsigned long addr)
{
u64 tmp = (u64)addr;

return tmp < SLICE_LOW_TOP;
}

static void slice_range_to_mask(unsigned long start, unsigned long len,
struct slice_mask *ret)
{

@@ -70,7 +77,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
if (SLICE_NUM_HIGH)
bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);

if (start < SLICE_LOW_TOP) {
if (slice_addr_is_low(start)) {
unsigned long mend = min(end,
(unsigned long)(SLICE_LOW_TOP - 1));

@@ -78,7 +85,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
- (1u << GET_LOW_SLICE_INDEX(start));
}

if ((start + len) > SLICE_LOW_TOP) {
if (SLICE_NUM_HIGH && !slice_addr_is_low(end)) {
unsigned long start_index = GET_HIGH_SLICE_INDEX(start);
unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
unsigned long count = GET_HIGH_SLICE_INDEX(align_end) - start_index;

@@ -133,7 +140,7 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
if (!slice_low_has_vma(mm, i))
ret->low_slices |= 1u << i;

if (high_limit <= SLICE_LOW_TOP)
if (slice_addr_is_low(high_limit - 1))
return;

for (i = 0; i < GET_HIGH_SLICE_INDEX(high_limit); i++)

@@ -182,7 +189,7 @@ static bool slice_check_range_fits(struct mm_struct *mm,
unsigned long end = start + len - 1;
u64 low_slices = 0;

if (start < SLICE_LOW_TOP) {
if (slice_addr_is_low(start)) {
unsigned long mend = min(end,
(unsigned long)(SLICE_LOW_TOP - 1));

@@ -192,7 +199,7 @@ static bool slice_check_range_fits(struct mm_struct *mm,
if ((low_slices & available->low_slices) != low_slices)
return false;

if (SLICE_NUM_HIGH && ((start + len) > SLICE_LOW_TOP)) {
if (SLICE_NUM_HIGH && !slice_addr_is_low(end)) {
unsigned long start_index = GET_HIGH_SLICE_INDEX(start);
unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
unsigned long count = GET_HIGH_SLICE_INDEX(align_end) - start_index;

@@ -303,7 +310,7 @@ static bool slice_scan_available(unsigned long addr,
int end, unsigned long *boundary_addr)
{
unsigned long slice;
if (addr < SLICE_LOW_TOP) {
if (slice_addr_is_low(addr)) {
slice = GET_LOW_SLICE_INDEX(addr);
*boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
return !!(available->low_slices & (1u << slice));

@@ -706,7 +713,7 @@ unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)

VM_BUG_ON(radix_enabled());

if (addr < SLICE_LOW_TOP) {
if (slice_addr_is_low(addr)) {
psizes = mm->context.low_slices_psize;
index = GET_LOW_SLICE_INDEX(addr);
} else {

@@ -503,6 +503,9 @@ static void setup_page_sizes(void)
for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
struct mmu_psize_def *def = &mmu_psize_defs[psize];

if (!def->shift)
continue;

if (tlb1ps & (1U << (def->shift - 10))) {
def->flags |= MMU_PAGE_SIZE_DIRECT;

@@ -90,17 +90,15 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
change_memblock_state);

lock_device_hotplug();
remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
unlock_device_hotplug();

return true;
}

static u64 memtrace_alloc_node(u32 nid, u64 size)
{
u64 start_pfn, end_pfn, nr_pages;
u64 start_pfn, end_pfn, nr_pages, pfn;
u64 base_pfn;
u64 bytes = memory_block_size_bytes();

if (!node_spanned_pages(nid))
return 0;

@@ -113,8 +111,21 @@ static u64 memtrace_alloc_node(u32 nid, u64 size)
end_pfn = round_down(end_pfn - nr_pages, nr_pages);

for (base_pfn = end_pfn; base_pfn > start_pfn; base_pfn -= nr_pages) {
if (memtrace_offline_pages(nid, base_pfn, nr_pages) == true)
if (memtrace_offline_pages(nid, base_pfn, nr_pages) == true) {
/*
* Remove memory in memory block size chunks so that
* iomem resources are always split to the same size and
* we never try to remove memory that spans two iomem
* resources.
*/
lock_device_hotplug();
end_pfn = base_pfn + nr_pages;
for (pfn = base_pfn; pfn < end_pfn; pfn += bytes >> PAGE_SHIFT) {
remove_memory(nid, pfn << PAGE_SHIFT, bytes);
}
unlock_device_hotplug();
return base_pfn << PAGE_SHIFT;
}
}

return 0;

@@ -216,6 +216,8 @@ static inline int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *s

int mce_available(struct cpuinfo_x86 *c);
bool mce_is_memory_error(struct mce *m);
bool mce_is_correctable(struct mce *m);
int mce_usable_address(struct mce *m);

DECLARE_PER_CPU(unsigned, mce_exception_count);
DECLARE_PER_CPU(unsigned, mce_poll_count);

@@ -485,7 +485,7 @@ static void mce_report_event(struct pt_regs *regs)
* be somewhat complicated (e.g. segment offset would require an instruction
* parser). So only support physical addresses up to page granuality for now.
*/
static int mce_usable_address(struct mce *m)
int mce_usable_address(struct mce *m)
{
if (!(m->status & MCI_STATUS_ADDRV))
return 0;

@@ -505,6 +505,7 @@ static int mce_usable_address(struct mce *m)

return 1;
}
EXPORT_SYMBOL_GPL(mce_usable_address);

bool mce_is_memory_error(struct mce *m)
{

@@ -534,7 +535,7 @@ bool mce_is_memory_error(struct mce *m)
}
EXPORT_SYMBOL_GPL(mce_is_memory_error);

static bool mce_is_correctable(struct mce *m)
bool mce_is_correctable(struct mce *m)
{
if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED)
return false;

@@ -544,6 +545,7 @@ static bool mce_is_correctable(struct mce *m)

return true;
}
EXPORT_SYMBOL_GPL(mce_is_correctable);

static bool cec_add_mce(struct mce *m)
{

@@ -20,6 +20,7 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kexec.h>
#include <linux/i8253.h>
#include <asm/processor.h>
#include <asm/hypervisor.h>
#include <asm/hyperv-tlfs.h>

@@ -285,6 +286,16 @@ static void __init ms_hyperv_init_platform(void)
if (efi_enabled(EFI_BOOT))
x86_platform.get_nmi_reason = hv_get_nmi_reason;

/*
* Hyper-V VMs have a PIT emulation quirk such that zeroing the
* counter register during PIT shutdown restarts the PIT. So it
* continues to interrupt @18.2 HZ. Setting i8253_clear_counter
* to false tells pit_shutdown() not to zero the counter so that
* the PIT really is shutdown. Generation 2 VMs don't have a PIT,
* and setting this value has no effect.
*/
i8253_clear_counter_on_shutdown = false;

#if IS_ENABLED(CONFIG_HYPERV)
/*
* Setup the hook to get control post apic initialization.

@@ -77,7 +77,7 @@ static __init int setup_vmw_sched_clock(char *s)
}
early_param("no-vmw-sched-clock", setup_vmw_sched_clock);

static unsigned long long vmware_sched_clock(void)
static unsigned long long notrace vmware_sched_clock(void)
{
unsigned long long ns;

@@ -10,20 +10,10 @@

static inline void update_debugregs(int seq) {}

/* syscall emulation path in ptrace */

#ifndef PTRACE_SYSEMU
#define PTRACE_SYSEMU 31
#endif

void set_using_sysemu(int value);
int get_using_sysemu(void);
extern int sysemu_supported;

#ifndef PTRACE_SYSEMU_SINGLESTEP
#define PTRACE_SYSEMU_SINGLESTEP 32
#endif

#define UPT_SYSCALL_ARG1(r) UPT_BX(r)
#define UPT_SYSCALL_ARG2(r) UPT_CX(r)
#define UPT_SYSCALL_ARG3(r) UPT_DX(r)

@@ -33,7 +33,7 @@ uImage: $(obj)/uImage
boot-elf boot-redboot: $(addprefix $(obj)/,$(subdir-y))
	$(Q)$(MAKE) $(build)=$(obj)/$@ $(MAKECMDGOALS)

OBJCOPYFLAGS = --strip-all -R .comment -R .note.gnu.build-id -O binary
OBJCOPYFLAGS = --strip-all -R .comment -R .notes -O binary

vmlinux.bin: vmlinux FORCE
	$(call if_changed,objcopy)

@@ -23,7 +23,11 @@
# error Linux requires the Xtensa Windowed Registers Option.
#endif

#define ARCH_SLAB_MINALIGN XCHAL_DATA_WIDTH
/* Xtensa ABI requires stack alignment to be at least 16 */

#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)

#define ARCH_SLAB_MINALIGN STACK_ALIGN

/*
* User space process size: 1 GB.

@@ -88,9 +88,12 @@ _SetupMMU:
initialize_mmu
#if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
rsr a2, excsave1
movi a3, 0x08000000
movi a3, XCHAL_KSEG_PADDR
bltu a2, a3, 1f
sub a2, a2, a3
movi a3, XCHAL_KSEG_SIZE
bgeu a2, a3, 1f
movi a3, 0xd0000000
movi a3, XCHAL_KSEG_CACHED_VADDR
add a2, a2, a3
wsr a2, excsave1
1:

@@ -131,6 +131,7 @@ SECTIONS
.fixup : { *(.fixup) }

EXCEPTION_TABLE(16)
NOTES
/* Data section */

_sdata = .;

@@ -793,9 +793,8 @@ void blk_cleanup_queue(struct request_queue *q)
* dispatch may still be in-progress since we dispatch requests
* from more than one contexts.
*
* No need to quiesce queue if it isn't initialized yet since
* blk_freeze_queue() should be enough for cases of passthrough
* request.
* We rely on driver to deal with the race in case that queue
* initialization isn't done.
*/
if (q->mq_ops && blk_queue_init_done(q))
blk_mq_quiesce_queue(q);

@@ -83,7 +83,7 @@ static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_cipher rcipher;

strlcpy(rcipher.type, "cipher", sizeof(rcipher.type));
strncpy(rcipher.type, "cipher", sizeof(rcipher.type));

rcipher.blocksize = alg->cra_blocksize;
rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;

@@ -102,7 +102,7 @@ static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_comp rcomp;

strlcpy(rcomp.type, "compression", sizeof(rcomp.type));
strncpy(rcomp.type, "compression", sizeof(rcomp.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
sizeof(struct crypto_report_comp), &rcomp))
goto nla_put_failure;

@@ -116,7 +116,7 @@ static int crypto_report_acomp(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_acomp racomp;

strlcpy(racomp.type, "acomp", sizeof(racomp.type));
strncpy(racomp.type, "acomp", sizeof(racomp.type));

if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP,
sizeof(struct crypto_report_acomp), &racomp))

@@ -131,7 +131,7 @@ static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_akcipher rakcipher;

strlcpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));

if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER,
sizeof(struct crypto_report_akcipher), &rakcipher))

@@ -146,7 +146,7 @@ static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg)
{
struct crypto_report_kpp rkpp;

strlcpy(rkpp.type, "kpp", sizeof(rkpp.type));
strncpy(rkpp.type, "kpp", sizeof(rkpp.type));

if (nla_put(skb, CRYPTOCFGA_REPORT_KPP,
sizeof(struct crypto_report_kpp), &rkpp))

@@ -160,10 +160,10 @@ static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg)
static int crypto_report_one(struct crypto_alg *alg,
struct crypto_user_alg *ualg, struct sk_buff *skb)
{
strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
strlcpy(ualg->cru_driver_name, alg->cra_driver_name,
strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
strncpy(ualg->cru_driver_name, alg->cra_driver_name,
sizeof(ualg->cru_driver_name));
strlcpy(ualg->cru_module_name, module_name(alg->cra_module),
strncpy(ualg->cru_module_name, module_name(alg->cra_module),
sizeof(ualg->cru_module_name));

ualg->cru_type = 0;

@@ -176,7 +176,7 @@ static int crypto_report_one(struct crypto_alg *alg,
if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
struct crypto_report_larval rl;

strlcpy(rl.type, "larval", sizeof(rl.type));
strncpy(rl.type, "larval", sizeof(rl.type));
if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL,
sizeof(struct crypto_report_larval), &rl))
goto nla_put_failure;

@@ -417,10 +417,6 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
ACPI_FORMAT_UINT64(obj_desc->region.address),
obj_desc->region.length));

status = acpi_ut_add_address_range(obj_desc->region.space_id,
obj_desc->region.address,
obj_desc->region.length, node);

/* Now the address and length are valid for this opregion */

obj_desc->region.flags |= AOPOBJ_DATA_VALID;

@@ -2845,9 +2845,9 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
return rc;

if (ars_status_process_records(acpi_desc))
return -ENOMEM;
dev_err(acpi_desc->dev, "Failed to process ARS records\n");

return 0;
return rc;
}

static int ars_register(struct acpi_nfit_desc *acpi_desc,

@@ -25,8 +25,12 @@ static int nfit_handle_mce(struct notifier_block *nb, unsigned long val,
struct acpi_nfit_desc *acpi_desc;
struct nfit_spa *nfit_spa;

/* We only care about memory errors */
if (!mce_is_memory_error(mce))
/* We only care about uncorrectable memory errors */
if (!mce_is_memory_error(mce) || mce_is_correctable(mce))
return NOTIFY_DONE;

/* Verify the address reported in the MCE is valid. */
if (!mce_usable_address(mce))
return NOTIFY_DONE;

/*

@@ -4553,7 +4553,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
/* These specific Samsung models/firmware-revs do not handle LPM well */
{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG SSD PM830 mSATA *", "CXM13D1Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
{ "SAMSUNG MZ7TD256HAFV-000L9", NULL, ATA_HORKAGE_NOLPM, },

/* devices that don't properly handle queued TRIM commands */
{ "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |

@@ -1636,6 +1636,11 @@ static const struct attribute_group zram_disk_attr_group = {
.attrs = zram_disk_attrs,
};

static const struct attribute_group *zram_disk_attr_groups[] = {
&zram_disk_attr_group,
NULL,
};

/*
* Allocate and initialize new zram device. the function returns
* '>= 0' device_id upon success, and negative value otherwise.

@@ -1716,24 +1721,15 @@ static int zram_add(void)

zram->disk->queue->backing_dev_info->capabilities |=
(BDI_CAP_STABLE_WRITES | BDI_CAP_SYNCHRONOUS_IO);
disk_to_dev(zram->disk)->groups = zram_disk_attr_groups;
add_disk(zram->disk);

ret = sysfs_create_group(&disk_to_dev(zram->disk)->kobj,
&zram_disk_attr_group);
if (ret < 0) {
pr_err("Error creating sysfs group for device %d\n",
device_id);
goto out_free_disk;
}
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));

zram_debugfs_register(zram);
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;

out_free_disk:
del_gendisk(zram->disk);
put_disk(zram->disk);
out_free_queue:
blk_cleanup_queue(queue);
out_free_idr:

@@ -1762,16 +1758,6 @@ static int zram_remove(struct zram *zram)
mutex_unlock(&bdev->bd_mutex);

zram_debugfs_unregister(zram);
/*
* Remove sysfs first, so no one will perform a disksize
* store while we destroy the devices. This also helps during
* hot_remove -- zram_reset_device() is the last holder of
* ->init_lock, no later/concurrent disksize_store() or any
* other sysfs handlers are possible.
*/
sysfs_remove_group(&disk_to_dev(zram->disk)->kobj,
&zram_disk_attr_group);

/* Make sure all the pending I/O are finished */
fsync_bdev(bdev);
zram_reset_device(zram);

@@ -2445,7 +2445,7 @@ static int cdrom_ioctl_select_disc(struct cdrom_device_info *cdi,
return -ENOSYS;

if (arg != CDSL_CURRENT && arg != CDSL_NONE) {
if ((int)arg >= cdi->capacity)
if (arg >= cdi->capacity)
return -EINVAL;
}

@@ -133,6 +133,9 @@ static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
{
struct clk_pll *pll = to_clk_pll(hw);

if (!pll->div || !pll->mul)
return 0;

return (parent_rate / pll->div) * (pll->mul + 1);
}

@@ -245,6 +245,36 @@ static const struct platform_device_id s2mps11_clk_id[] = {
};
MODULE_DEVICE_TABLE(platform, s2mps11_clk_id);

#ifdef CONFIG_OF
/*
* Device is instantiated through parent MFD device and device matching is done
* through platform_device_id.
*
* However if device's DT node contains proper clock compatible and driver is
* built as a module, then the *module* matching will be done trough DT aliases.
* This requires of_device_id table. In the same time this will not change the
* actual *device* matching so do not add .of_match_table.
*/
static const struct of_device_id s2mps11_dt_match[] = {
{
.compatible = "samsung,s2mps11-clk",
.data = (void *)S2MPS11X,
}, {
.compatible = "samsung,s2mps13-clk",
.data = (void *)S2MPS13X,
}, {
.compatible = "samsung,s2mps14-clk",
.data = (void *)S2MPS14X,
}, {
.compatible = "samsung,s5m8767-clk",
.data = (void *)S5M8767X,
}, {
/* Sentinel */
},
};
MODULE_DEVICE_TABLE(of, s2mps11_dt_match);
#endif

static struct platform_driver s2mps11_clk_driver = {
.driver = {
.name = "s2mps11-clk",

@@ -109,9 +109,8 @@ struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev)
return NULL;

res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
rstc->membase = devm_ioremap(&pdev->dev,
res->start, resource_size(res));
if (!rstc->membase)
rstc->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rstc->membase))
return NULL;

spin_lock_init(&rstc->lock);

@@ -319,6 +319,7 @@ static struct clk_regmap axg_fclk_div2 = {
.ops = &clk_regmap_gate_ops,
.parent_names = (const char *[]){ "fclk_div2_div" },
.num_parents = 1,
.flags = CLK_IS_CRITICAL,
},
};

@@ -343,6 +344,18 @@ static struct clk_regmap axg_fclk_div3 = {
.ops = &clk_regmap_gate_ops,
.parent_names = (const char *[]){ "fclk_div3_div" },
.num_parents = 1,
/*
* FIXME:
* This clock, as fdiv2, is used by the SCPI FW and is required
* by the platform to operate correctly.
* Until the following condition are met, we need this clock to
* be marked as critical:
* a) The SCPI generic driver claims and enable all the clocks
*    it needs
* b) CCF has a clock hand-off mechanism to make the sure the
*    clock stays on until the proper driver comes along
*/
.flags = CLK_IS_CRITICAL,
},
};

@@ -522,6 +522,18 @@ static struct clk_regmap gxbb_fclk_div3 = {
.ops = &clk_regmap_gate_ops,
.parent_names = (const char *[]){ "fclk_div3_div" },
.num_parents = 1,
/*
* FIXME:
* This clock, as fdiv2, is used by the SCPI FW and is required
* by the platform to operate correctly.
* Until the following condition are met, we need this clock to
* be marked as critical:
* a) The SCPI generic driver claims and enable all the clocks
*    it needs
* b) CCF has a clock hand-off mechanism to make the sure the
*    clock stays on until the proper driver comes along
*/
.flags = CLK_IS_CRITICAL,
},
};

@@ -80,16 +80,12 @@ static long rockchip_ddrclk_sip_round_rate(struct clk_hw *hw,
static u8 rockchip_ddrclk_get_parent(struct clk_hw *hw)
{
struct rockchip_ddrclk *ddrclk = to_rockchip_ddrclk_hw(hw);
int num_parents = clk_hw_get_num_parents(hw);
u32 val;

val = clk_readl(ddrclk->reg_base +
ddrclk->mux_offset) >> ddrclk->mux_shift;
val &= GENMASK(ddrclk->mux_width - 1, 0);

if (val >= num_parents)
return -EINVAL;

return val;
}

@@ -813,22 +813,22 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
MMC(SCLK_SDMMC_DRV, "sdmmc_drv", "clk_sdmmc",
RK3328_SDMMC_CON0, 1),
MMC(SCLK_SDMMC_SAMPLE, "sdmmc_sample", "clk_sdmmc",
RK3328_SDMMC_CON1, 1),
RK3328_SDMMC_CON1, 0),

MMC(SCLK_SDIO_DRV, "sdio_drv", "clk_sdio",
RK3328_SDIO_CON0, 1),
MMC(SCLK_SDIO_SAMPLE, "sdio_sample", "clk_sdio",
RK3328_SDIO_CON1, 1),
RK3328_SDIO_CON1, 0),

MMC(SCLK_EMMC_DRV, "emmc_drv", "clk_emmc",
RK3328_EMMC_CON0, 1),
MMC(SCLK_EMMC_SAMPLE, "emmc_sample", "clk_emmc",
RK3328_EMMC_CON1, 1),
RK3328_EMMC_CON1, 0),

MMC(SCLK_SDMMC_EXT_DRV, "sdmmc_ext_drv", "clk_sdmmc_ext",
RK3328_SDMMC_EXT_CON0, 1),
MMC(SCLK_SDMMC_EXT_SAMPLE, "sdmmc_ext_sample", "clk_sdmmc_ext",
RK3328_SDMMC_EXT_CON1, 1),
RK3328_SDMMC_EXT_CON1, 0),
};

static const char *const rk3328_critical_clocks[] __initconst = {

@@ -224,7 +224,7 @@ static SUNXI_CCU_MP_WITH_MUX(psi_ahb1_ahb2_clk, "psi-ahb1-ahb2",
psi_ahb1_ahb2_parents,
0x510,
0, 5, /* M */
16, 2, /* P */
8, 2, /* P */
24, 2, /* mux */
0);

@@ -233,19 +233,19 @@ static const char * const ahb3_apb1_apb2_parents[] = { "osc24M", "osc32k",
"pll-periph0" };
static SUNXI_CCU_MP_WITH_MUX(ahb3_clk, "ahb3", ahb3_apb1_apb2_parents, 0x51c,
0, 5, /* M */
16, 2, /* P */
8, 2, /* P */
24, 2, /* mux */
0);

static SUNXI_CCU_MP_WITH_MUX(apb1_clk, "apb1", ahb3_apb1_apb2_parents, 0x520,
0, 5, /* M */
16, 2, /* P */
8, 2, /* P */
24, 2, /* mux */
0);

static SUNXI_CCU_MP_WITH_MUX(apb2_clk, "apb2", ahb3_apb1_apb2_parents, 0x524,
0, 5, /* M */
16, 2, /* P */
8, 2, /* P */
24, 2, /* mux */
0);

@@ -20,6 +20,13 @@
DEFINE_RAW_SPINLOCK(i8253_lock);
EXPORT_SYMBOL(i8253_lock);

/*
* Handle PIT quirk in pit_shutdown() where zeroing the counter register
* restarts the PIT, negating the shutdown. On platforms with the quirk,
* platform specific code can set this to false.
*/
bool i8253_clear_counter_on_shutdown __ro_after_init = true;

#ifdef CONFIG_CLKSRC_I8253
/*
* Since the PIT overflows every tick, its not very useful

@@ -109,8 +116,11 @@ static int pit_shutdown(struct clock_event_device *evt)
raw_spin_lock(&i8253_lock);

outb_p(0x30, PIT_MODE);
outb_p(0, PIT_CH0);
outb_p(0, PIT_CH0);

if (i8253_clear_counter_on_shutdown) {
outb_p(0, PIT_CH0);
outb_p(0, PIT_CH0);
}

raw_spin_unlock(&i8253_lock);
return 0;

@@ -103,13 +103,6 @@ static int __init arm_idle_init_cpu(int cpu)
goto out_kfree_drv;
}

ret = cpuidle_register_driver(drv);
if (ret) {
if (ret != -EBUSY)
pr_err("Failed to register cpuidle driver\n");
goto out_kfree_drv;
}

/*
* Call arch CPU operations in order to initialize
* idle states suspend back-end specific data

@@ -117,15 +110,20 @@ static int __init arm_idle_init_cpu(int cpu)
ret = arm_cpuidle_init(cpu);

/*
* Skip the cpuidle device initialization if the reported
* Allow the initialization to continue for other CPUs, if the reported
* failure is a HW misconfiguration/breakage (-ENXIO).
*/
if (ret == -ENXIO)
return 0;

if (ret) {
pr_err("CPU %d failed to init idle CPU ops\n", cpu);
goto out_unregister_drv;
ret = ret == -ENXIO ? 0 : ret;
goto out_kfree_drv;
}

ret = cpuidle_register_driver(drv);
if (ret) {
if (ret != -EBUSY)
pr_err("Failed to register cpuidle driver\n");
goto out_kfree_drv;
}

dev = kzalloc(sizeof(*dev), GFP_KERNEL);

@@ -732,6 +732,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
int *splits_in_nents;
int *splits_out_nents = NULL;
struct sec_request_el *el, *temp;
bool split = skreq->src != skreq->dst;

mutex_init(&sec_req->lock);
sec_req->req_base = &skreq->base;

@@ -750,7 +751,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
if (ret)
goto err_free_split_sizes;

if (skreq->src != skreq->dst) {
if (split) {
sec_req->len_out = sg_nents(skreq->dst);
ret = sec_map_and_split_sg(skreq->dst, split_sizes, steps,
&splits_out, &splits_out_nents,

@@ -785,8 +786,9 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
split_sizes[i],
skreq->src != skreq->dst,
splits_in[i], splits_in_nents[i],
splits_out[i],
splits_out_nents[i], info);
split ? splits_out[i] : NULL,
split ? splits_out_nents[i] : 0,
info);
if (IS_ERR(el)) {
ret = PTR_ERR(el);
goto err_free_elements;

@@ -806,13 +808,6 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
* more refined but this is unlikely to happen so no need.
*/

/* Cleanup - all elements in pointer arrays have been coppied */
kfree(splits_in_nents);
kfree(splits_in);
kfree(splits_out_nents);
kfree(splits_out);
kfree(split_sizes);

/* Grab a big lock for a long time to avoid concurrency issues */
mutex_lock(&queue->queuelock);

@@ -827,13 +822,13 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
(!queue->havesoftqueue ||
kfifo_avail(&queue->softqueue) > steps)) ||
!list_empty(&ctx->backlog)) {
ret = -EBUSY;
if ((skreq->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
list_add_tail(&sec_req->backlog_head, &ctx->backlog);
mutex_unlock(&queue->queuelock);
return -EBUSY;
goto out;
}

ret = -EBUSY;
mutex_unlock(&queue->queuelock);
goto err_free_elements;
}

@@ -842,7 +837,15 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
if (ret)
goto err_free_elements;

return -EINPROGRESS;
ret = -EINPROGRESS;
out:
/* Cleanup - all elements in pointer arrays have been copied */
kfree(splits_in_nents);
kfree(splits_in);
kfree(splits_out_nents);
kfree(splits_out);
kfree(split_sizes);
return ret;

err_free_elements:
list_for_each_entry_safe(el, temp, &sec_req->elements, head) {

@@ -854,7 +857,7 @@ static int sec_alg_skcipher_crypto(struct skcipher_request *skreq,
crypto_skcipher_ivsize(atfm),
DMA_BIDIRECTIONAL);
err_unmap_out_sg:
if (skreq->src != skreq->dst)
if (split)
sec_unmap_sg_on_err(skreq->dst, steps, splits_out,
splits_out_nents, sec_req->len_out,
info->dev);

@@ -158,6 +158,10 @@ static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
 			return efi_status;
 		}
 	}

+	/* shrink the FDT back to its minimum size */
+	fdt_pack(fdt);
+
 	return EFI_SUCCESS;

 fdt_set_fail:
@@ -358,7 +358,9 @@ static int amdgpu_atif_get_sbios_requests(struct amdgpu_atif *atif,
  *
  * Checks the acpi event and if it matches an atif event,
  * handles it.
- * Returns NOTIFY code
+ *
+ * Returns:
+ * NOTIFY_BAD or NOTIFY_DONE, depending on the event.
  */
 static int amdgpu_atif_handler(struct amdgpu_device *adev,
 			       struct acpi_bus_event *event)
@@ -372,11 +374,16 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev,
 	if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0)
 		return NOTIFY_DONE;

+	/* Is this actually our event? */
 	if (!atif ||
 	    !atif->notification_cfg.enabled ||
-	    event->type != atif->notification_cfg.command_code)
-		/* Not our event */
-		return NOTIFY_DONE;
+	    event->type != atif->notification_cfg.command_code) {
+		/* These events will generate keypresses otherwise */
+		if (event->type == ACPI_VIDEO_NOTIFY_PROBE)
+			return NOTIFY_BAD;
+		else
+			return NOTIFY_DONE;
+	}

 	if (atif->functions.sbios_requests) {
 		struct atif_sbios_requests req;
@@ -385,7 +392,7 @@ static int amdgpu_atif_handler(struct amdgpu_device *adev,
 		count = amdgpu_atif_get_sbios_requests(atif, &req);

 		if (count <= 0)
-			return NOTIFY_DONE;
+			return NOTIFY_BAD;

 		DRM_DEBUG_DRIVER("ATIF: %d pending SBIOS requests\n", count);
@@ -67,7 +67,8 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
 	unsigned i;
 	int r;

-	if (num_entries > SIZE_MAX / sizeof(struct amdgpu_bo_list_entry))
+	if (num_entries > (SIZE_MAX - sizeof(struct amdgpu_bo_list))
+			/ sizeof(struct amdgpu_bo_list_entry))
 		return -EINVAL;

 	size = sizeof(struct amdgpu_bo_list);
@@ -574,7 +574,7 @@ void amdgpu_vmid_mgr_init(struct amdgpu_device *adev)
 		/* skip over VMID 0, since it is the system VM */
 		for (j = 1; j < id_mgr->num_ids; ++j) {
 			amdgpu_vmid_reset(adev, i, j);
-			amdgpu_sync_create(&id_mgr->ids[i].active);
+			amdgpu_sync_create(&id_mgr->ids[j].active);
 			list_add_tail(&id_mgr->ids[j].list, &id_mgr->ids_lru);
 		}
 	}
@@ -66,6 +66,7 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 	amdgpu_sync_create(&(*job)->sync);
 	amdgpu_sync_create(&(*job)->sched_sync);
 	(*job)->vram_lost_counter = atomic_read(&adev->vram_lost_counter);
+	(*job)->vm_pd_addr = AMDGPU_BO_INVALID_OFFSET;

 	return 0;
 }
@@ -277,6 +277,7 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type)
 	case CHIP_PITCAIRN:
 	case CHIP_VERDE:
 	case CHIP_OLAND:
+	case CHIP_HAINAN:
 		return AMDGPU_FW_LOAD_DIRECT;
 #endif
 #ifdef CONFIG_DRM_AMDGPU_CIK
@@ -714,7 +714,8 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, bool need_
 	}

 	gds_switch_needed &= !!ring->funcs->emit_gds_switch;
-	vm_flush_needed &= !!ring->funcs->emit_vm_flush;
+	vm_flush_needed &= !!ring->funcs->emit_vm_flush &&
+			job->vm_pd_addr != AMDGPU_BO_INVALID_OFFSET;
 	pasid_mapping_needed &= adev->gmc.gmc_funcs->emit_pasid_mapping &&
 		ring->funcs->emit_wreg;
@@ -1120,9 +1120,6 @@ static enum surface_update_type get_plane_info_update_type(const struct dc_surfa
 		 */
 		update_flags->bits.bpp_change = 1;

-	if (u->gamma && dce_use_lut(u->plane_info->format))
-		update_flags->bits.gamma_change = 1;
-
 	if (memcmp(&u->plane_info->tiling_info, &u->surface->tiling_info,
 			sizeof(union dc_tiling_info)) != 0) {
 		update_flags->bits.swizzle_change = 1;
@@ -1139,7 +1136,6 @@ static enum surface_update_type get_plane_info_update_type(const struct dc_surfa
 	if (update_flags->bits.rotation_change
 			|| update_flags->bits.stereo_format_change
 			|| update_flags->bits.pixel_format_change
-			|| update_flags->bits.gamma_change
 			|| update_flags->bits.bpp_change
 			|| update_flags->bits.bandwidth_change
 			|| update_flags->bits.output_tf_change)
@@ -1229,13 +1225,26 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
 	if (u->coeff_reduction_factor)
 		update_flags->bits.coeff_reduction_change = 1;

+	if (u->gamma) {
+		enum surface_pixel_format format = SURFACE_PIXEL_FORMAT_GRPH_BEGIN;
+
+		if (u->plane_info)
+			format = u->plane_info->format;
+		else if (u->surface)
+			format = u->surface->format;
+
+		if (dce_use_lut(format))
+			update_flags->bits.gamma_change = 1;
+	}
+
 	if (update_flags->bits.in_transfer_func_change) {
 		type = UPDATE_TYPE_MED;
 		elevate_update_type(&overall_type, type);
 	}

 	if (update_flags->bits.input_csc_change
-			|| update_flags->bits.coeff_reduction_change) {
+			|| update_flags->bits.coeff_reduction_change
+			|| update_flags->bits.gamma_change) {
 		type = UPDATE_TYPE_FULL;
 		elevate_update_type(&overall_type, type);
 	}
@@ -466,6 +466,9 @@ static void dce12_update_clocks(struct dccg *dccg,
 {
 	struct dm_pp_clock_for_voltage_req clock_voltage_req = {0};

+	/* TODO: Investigate why this is needed to fix display corruption. */
+	new_clocks->dispclk_khz = new_clocks->dispclk_khz * 115 / 100;
+
 	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, dccg->clks.dispclk_khz)) {
 		clock_voltage_req.clk_type = DM_PP_CLOCK_TYPE_DISPLAY_CLK;
 		clock_voltage_req.clocks_in_khz = new_clocks->dispclk_khz;
@@ -1069,10 +1069,14 @@ static void build_evenly_distributed_points(
 	struct dividers dividers)
 {
 	struct gamma_pixel *p = points;
-	struct gamma_pixel *p_last = p + numberof_points - 1;
+	struct gamma_pixel *p_last;

 	uint32_t i = 0;

+	// This function should not gets called with 0 as a parameter
+	ASSERT(numberof_points > 0);
+	p_last = p + numberof_points - 1;
+
 	do {
 		struct fixed31_32 value = dc_fixpt_from_fraction(i,
 			numberof_points - 1);
@@ -1083,7 +1087,7 @@ static void build_evenly_distributed_points(

 		++p;
 		++i;
-	} while (i != numberof_points);
+	} while (i < numberof_points);

 	p->r = dc_fixpt_div(p_last->r, dividers.divider1);
 	p->g = dc_fixpt_div(p_last->g, dividers.divider1);
@@ -1222,14 +1222,17 @@ static int smu8_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,

 static int smu8_dpm_powerdown_uvd(struct pp_hwmgr *hwmgr)
 {
-	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating))
+	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating)) {
+		smu8_nbdpm_pstate_enable_disable(hwmgr, true, true);
 		return smum_send_msg_to_smc(hwmgr, PPSMC_MSG_UVDPowerOFF);
+	}
 	return 0;
 }

 static int smu8_dpm_powerup_uvd(struct pp_hwmgr *hwmgr)
 {
 	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating)) {
+		smu8_nbdpm_pstate_enable_disable(hwmgr, false, true);
 		return smum_send_msg_to_smc_with_parameter(
 			hwmgr,
 			PPSMC_MSG_UVDPowerON,
@@ -2268,11 +2268,13 @@ static uint32_t ci_get_offsetof(uint32_t type, uint32_t member)
 		case DRAM_LOG_BUFF_SIZE:
 			return offsetof(SMU7_SoftRegisters, DRAM_LOG_BUFF_SIZE);
 		}
+		break;
 	case SMU_Discrete_DpmTable:
 		switch (member) {
 		case LowSclkInterruptThreshold:
 			return offsetof(SMU7_Discrete_DpmTable, LowSclkInterruptT);
 		}
+		break;
 	}
 	pr_debug("can't get the offset of type %x member %x\n", type, member);
 	return 0;
@@ -2330,6 +2330,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t member)
 		case DRAM_LOG_BUFF_SIZE:
 			return offsetof(SMU73_SoftRegisters, DRAM_LOG_BUFF_SIZE);
 		}
+		break;
 	case SMU_Discrete_DpmTable:
 		switch (member) {
 		case UvdBootLevel:
@@ -2339,6 +2340,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t member)
 		case LowSclkInterruptThreshold:
 			return offsetof(SMU73_Discrete_DpmTable, LowSclkInterruptThreshold);
 		}
+		break;
 	}
 	pr_warn("can't get the offset of type %x member %x\n", type, member);
 	return 0;
@@ -2236,11 +2236,13 @@ static uint32_t iceland_get_offsetof(uint32_t type, uint32_t member)
 		case DRAM_LOG_BUFF_SIZE:
 			return offsetof(SMU71_SoftRegisters, DRAM_LOG_BUFF_SIZE);
 		}
+		break;
 	case SMU_Discrete_DpmTable:
 		switch (member) {
 		case LowSclkInterruptThreshold:
 			return offsetof(SMU71_Discrete_DpmTable, LowSclkInterruptThreshold);
 		}
+		break;
 	}
 	pr_warn("can't get the offset of type %x member %x\n", type, member);
 	return 0;
@@ -2618,6 +2618,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, uint32_t member)
 		case DRAM_LOG_BUFF_SIZE:
 			return offsetof(SMU72_SoftRegisters, DRAM_LOG_BUFF_SIZE);
 		}
+		break;
 	case SMU_Discrete_DpmTable:
 		switch (member) {
 		case UvdBootLevel:
@@ -2627,6 +2628,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, uint32_t member)
 		case LowSclkInterruptThreshold:
 			return offsetof(SMU72_Discrete_DpmTable, LowSclkInterruptThreshold);
 		}
+		break;
 	}
 	pr_warn("can't get the offset of type %x member %x\n", type, member);
 	return 0;
@@ -2184,6 +2184,7 @@ static uint32_t vegam_get_offsetof(uint32_t type, uint32_t member)
 		case DRAM_LOG_BUFF_SIZE:
 			return offsetof(SMU75_SoftRegisters, DRAM_LOG_BUFF_SIZE);
 		}
+		break;
 	case SMU_Discrete_DpmTable:
 		switch (member) {
 		case UvdBootLevel:
@@ -2193,6 +2194,7 @@ static uint32_t vegam_get_offsetof(uint32_t type, uint32_t member)
 		case LowSclkInterruptThreshold:
 			return offsetof(SMU75_Discrete_DpmTable, LowSclkInterruptThreshold);
 		}
+		break;
 	}
 	pr_warn("can't get the offset of type %x member %x\n", type, member);
 	return 0;
@@ -1274,6 +1274,9 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
 	mutex_lock(&mgr->lock);
 	mstb = mgr->mst_primary;

+	if (!mstb)
+		goto out;
+
 	for (i = 0; i < lct - 1; i++) {
 		int shift = (i % 2) ? 0 : 4;
 		int port_num = (rad[i / 2] >> shift) & 0xf;
@@ -30,6 +30,12 @@ struct drm_dmi_panel_orientation_data {
 	int orientation;
 };

+static const struct drm_dmi_panel_orientation_data acer_s1003 = {
+	.width = 800,
+	.height = 1280,
+	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+};
+
 static const struct drm_dmi_panel_orientation_data asus_t100ha = {
 	.width = 800,
 	.height = 1280,
@@ -67,7 +73,13 @@ static const struct drm_dmi_panel_orientation_data lcd800x1280_rightside_up = {
 };

 static const struct dmi_system_id orientation_data[] = {
-	{ /* Asus T100HA */
+	{ /* Acer One 10 (S1003) */
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
+		},
+		.driver_data = (void *)&acer_s1003,
+	}, { /* Asus T100HA */
 		.matches = {
 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
 			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
@@ -93,7 +93,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
 	 * If the GPU managed to complete this jobs fence, the timout is
 	 * spurious. Bail out.
 	 */
-	if (fence_completed(gpu, submit->out_fence->seqno))
+	if (dma_fence_is_signaled(submit->out_fence))
 		return;

 	/*
@@ -122,6 +122,7 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
 	hi_fbdev->fb = hibmc_framebuffer_init(priv->dev, &mode_cmd, gobj);
 	if (IS_ERR(hi_fbdev->fb)) {
 		ret = PTR_ERR(hi_fbdev->fb);
+		hi_fbdev->fb = NULL;
 		DRM_ERROR("failed to initialize framebuffer: %d\n", ret);
 		goto out_release_fbi;
 	}
@@ -35,7 +35,6 @@
 #define _GVT_GTT_H_

 #define I915_GTT_PAGE_SHIFT 12
-#define I915_GTT_PAGE_MASK (~(I915_GTT_PAGE_SIZE - 1))

 struct intel_vgpu_mm;
@@ -1122,11 +1122,7 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
 	offset = offset_in_page(args->offset);
 	for (idx = args->offset >> PAGE_SHIFT; remain; idx++) {
 		struct page *page = i915_gem_object_get_page(obj, idx);
-		int length;
-
-		length = remain;
-		if (offset + length > PAGE_SIZE)
-			length = PAGE_SIZE - offset;
+		unsigned int length = min_t(u64, remain, PAGE_SIZE - offset);

 		ret = shmem_pread(page, offset, length, user_data,
 				  page_to_phys(page) & obj_do_bit17_swizzling,
@@ -1570,11 +1566,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
 	offset = offset_in_page(args->offset);
 	for (idx = args->offset >> PAGE_SHIFT; remain; idx++) {
 		struct page *page = i915_gem_object_get_page(obj, idx);
-		int length;
-
-		length = remain;
-		if (offset + length > PAGE_SIZE)
-			length = PAGE_SIZE - offset;
+		unsigned int length = min_t(u64, remain, PAGE_SIZE - offset);

 		ret = shmem_pwrite(page, offset, length, user_data,
 				   page_to_phys(page) & obj_do_bit17_swizzling,
@@ -458,7 +458,7 @@ eb_validate_vma(struct i915_execbuffer *eb,
 	 * any non-page-aligned or non-canonical addresses.
 	 */
 	if (unlikely(entry->flags & EXEC_OBJECT_PINNED &&
-		     entry->offset != gen8_canonical_addr(entry->offset & PAGE_MASK)))
+		     entry->offset != gen8_canonical_addr(entry->offset & I915_GTT_PAGE_MASK)))
 		return -EINVAL;

 	/* pad_to_size was once a reserved field, so sanitize it */
@@ -1768,7 +1768,7 @@ static void gen6_dump_ppgtt(struct i915_hw_ppgtt *base, struct seq_file *m)
 		if (i == 4)
 			continue;

-		seq_printf(m, "\t\t(%03d, %04d) %08lx: ",
+		seq_printf(m, "\t\t(%03d, %04d) %08llx: ",
 			   pde, pte,
 			   (pde * GEN6_PTES + pte) * PAGE_SIZE);
 		for (i = 0; i < 4; i++) {
@@ -42,13 +42,15 @@
 #include "i915_selftest.h"
 #include "i915_timeline.h"

-#define I915_GTT_PAGE_SIZE_4K BIT(12)
-#define I915_GTT_PAGE_SIZE_64K BIT(16)
-#define I915_GTT_PAGE_SIZE_2M BIT(21)
+#define I915_GTT_PAGE_SIZE_4K BIT_ULL(12)
+#define I915_GTT_PAGE_SIZE_64K BIT_ULL(16)
+#define I915_GTT_PAGE_SIZE_2M BIT_ULL(21)

 #define I915_GTT_PAGE_SIZE I915_GTT_PAGE_SIZE_4K
 #define I915_GTT_MAX_PAGE_SIZE I915_GTT_PAGE_SIZE_2M

+#define I915_GTT_PAGE_MASK -I915_GTT_PAGE_SIZE
+
 #define I915_GTT_MIN_ALIGNMENT I915_GTT_PAGE_SIZE

 #define I915_FENCE_REG_NONE -1
@@ -662,20 +664,20 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
 			u64 start, u64 end, unsigned int flags);

 /* Flags used by pin/bind&friends. */
-#define PIN_NONBLOCK BIT(0)
-#define PIN_MAPPABLE BIT(1)
-#define PIN_ZONE_4G BIT(2)
-#define PIN_NONFAULT BIT(3)
-#define PIN_NOEVICT BIT(4)
+#define PIN_NONBLOCK BIT_ULL(0)
+#define PIN_MAPPABLE BIT_ULL(1)
+#define PIN_ZONE_4G BIT_ULL(2)
+#define PIN_NONFAULT BIT_ULL(3)
+#define PIN_NOEVICT BIT_ULL(4)

-#define PIN_MBZ BIT(5) /* I915_VMA_PIN_OVERFLOW */
-#define PIN_GLOBAL BIT(6) /* I915_VMA_GLOBAL_BIND */
-#define PIN_USER BIT(7) /* I915_VMA_LOCAL_BIND */
-#define PIN_UPDATE BIT(8)
+#define PIN_MBZ BIT_ULL(5) /* I915_VMA_PIN_OVERFLOW */
+#define PIN_GLOBAL BIT_ULL(6) /* I915_VMA_GLOBAL_BIND */
+#define PIN_USER BIT_ULL(7) /* I915_VMA_LOCAL_BIND */
+#define PIN_UPDATE BIT_ULL(8)

-#define PIN_HIGH BIT(9)
-#define PIN_OFFSET_BIAS BIT(10)
-#define PIN_OFFSET_FIXED BIT(11)
+#define PIN_HIGH BIT_ULL(9)
+#define PIN_OFFSET_BIAS BIT_ULL(10)
+#define PIN_OFFSET_FIXED BIT_ULL(11)
 #define PIN_OFFSET_MASK (-I915_GTT_PAGE_SIZE)

 #endif
@@ -2097,8 +2097,12 @@ enum i915_power_well_id {

 /* ICL PHY DFLEX registers */
 #define PORT_TX_DFLEXDPMLE1 _MMIO(0x1638C0)
-#define DFLEXDPMLE1_DPMLETC_MASK(n) (0xf << (4 * (n)))
-#define DFLEXDPMLE1_DPMLETC(n, x) ((x) << (4 * (n)))
+#define DFLEXDPMLE1_DPMLETC_MASK(tc_port) (0xf << (4 * (tc_port)))
+#define DFLEXDPMLE1_DPMLETC_ML0(tc_port) (1 << (4 * (tc_port)))
+#define DFLEXDPMLE1_DPMLETC_ML1_0(tc_port) (3 << (4 * (tc_port)))
+#define DFLEXDPMLE1_DPMLETC_ML3(tc_port) (8 << (4 * (tc_port)))
+#define DFLEXDPMLE1_DPMLETC_ML3_2(tc_port) (12 << (4 * (tc_port)))
+#define DFLEXDPMLE1_DPMLETC_ML3_0(tc_port) (15 << (4 * (tc_port)))

 /* BXT PHY Ref registers */
 #define _PORT_REF_DW3_A 0x16218C
@@ -144,6 +144,9 @@ static const struct {
 /* HDMI N/CTS table */
 #define TMDS_297M 297000
 #define TMDS_296M 296703
+#define TMDS_594M 594000
+#define TMDS_593M 593407

 static const struct {
 	int sample_rate;
 	int clock;
@@ -164,6 +167,20 @@ static const struct {
 	{ 176400, TMDS_297M, 18816, 247500 },
 	{ 192000, TMDS_296M, 23296, 281250 },
 	{ 192000, TMDS_297M, 20480, 247500 },
+	{ 44100, TMDS_593M, 8918, 937500 },
+	{ 44100, TMDS_594M, 9408, 990000 },
+	{ 48000, TMDS_593M, 5824, 562500 },
+	{ 48000, TMDS_594M, 6144, 594000 },
+	{ 32000, TMDS_593M, 5824, 843750 },
+	{ 32000, TMDS_594M, 3072, 445500 },
+	{ 88200, TMDS_593M, 17836, 937500 },
+	{ 88200, TMDS_594M, 18816, 990000 },
+	{ 96000, TMDS_593M, 11648, 562500 },
+	{ 96000, TMDS_594M, 12288, 594000 },
+	{ 176400, TMDS_593M, 35672, 937500 },
+	{ 176400, TMDS_594M, 37632, 990000 },
+	{ 192000, TMDS_593M, 23296, 562500 },
+	{ 192000, TMDS_594M, 24576, 594000 },
 };

 /* get AUD_CONFIG_PIXEL_CLOCK_HDMI_* value for mode */
@@ -2754,20 +2754,33 @@ intel_set_plane_visible(struct intel_crtc_state *crtc_state,

 	plane_state->base.visible = visible;

-	/* FIXME pre-g4x don't work like this */
-	if (visible) {
+	if (visible)
 		crtc_state->base.plane_mask |= drm_plane_mask(&plane->base);
-		crtc_state->active_planes |= BIT(plane->id);
-	} else {
+	else
 		crtc_state->base.plane_mask &= ~drm_plane_mask(&plane->base);
-		crtc_state->active_planes &= ~BIT(plane->id);
-	}
-
-	DRM_DEBUG_KMS("%s active planes 0x%x\n",
-		      crtc_state->base.crtc->name,
-		      crtc_state->active_planes);
 }

+static void fixup_active_planes(struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
+	struct drm_plane *plane;
+
+	/*
+	 * Active_planes aliases if multiple "primary" or cursor planes
+	 * have been used on the same (or wrong) pipe. plane_mask uses
+	 * unique ids, hence we can use that to reconstruct active_planes.
+	 */
+	crtc_state->active_planes = 0;
+
+	drm_for_each_plane_mask(plane, &dev_priv->drm,
+				crtc_state->base.plane_mask)
+		crtc_state->active_planes |= BIT(to_intel_plane(plane)->id);
+}
+
 static void intel_plane_disable_noatomic(struct intel_crtc *crtc,
 					 struct intel_plane *plane)
 {
@@ -2777,6 +2790,7 @@ static void intel_plane_disable_noatomic(struct intel_crtc *crtc,
 		to_intel_plane_state(plane->base.state);

 	intel_set_plane_visible(crtc_state, plane_state, false);
+	fixup_active_planes(crtc_state);

 	if (plane->id == PLANE_PRIMARY)
 		intel_pre_disable_primary_noatomic(&crtc->base);
@@ -2795,7 +2809,6 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 	struct drm_i915_gem_object *obj;
 	struct drm_plane *primary = intel_crtc->base.primary;
 	struct drm_plane_state *plane_state = primary->state;
-	struct drm_crtc_state *crtc_state = intel_crtc->base.state;
 	struct intel_plane *intel_plane = to_intel_plane(primary);
 	struct intel_plane_state *intel_state =
 		to_intel_plane_state(plane_state);
@@ -2885,10 +2898,6 @@ intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
 	plane_state->fb = fb;
 	plane_state->crtc = &intel_crtc->base;

-	intel_set_plane_visible(to_intel_crtc_state(crtc_state),
-				to_intel_plane_state(plane_state),
-				true);
-
 	atomic_or(to_intel_plane(primary)->frontbuffer_bit,
 		  &obj->frontbuffer_bits);
 }
@@ -12630,17 +12639,12 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
 			intel_check_cpu_fifo_underruns(dev_priv);
 			intel_check_pch_fifo_underruns(dev_priv);

-			if (!new_crtc_state->active) {
-				/*
-				 * Make sure we don't call initial_watermarks
-				 * for ILK-style watermark updates.
-				 *
-				 * No clue what this is supposed to achieve.
-				 */
-				if (INTEL_GEN(dev_priv) >= 9)
-					dev_priv->display.initial_watermarks(intel_state,
-									     to_intel_crtc_state(new_crtc_state));
-			}
+			/* FIXME unify this for all platforms */
+			if (!new_crtc_state->active &&
+			    !HAS_GMCH_DISPLAY(dev_priv) &&
+			    dev_priv->display.initial_watermarks)
+				dev_priv->display.initial_watermarks(intel_state,
+								     to_intel_crtc_state(new_crtc_state));
 		}
 	}

@@ -14573,7 +14577,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
 	     fb->height < SKL_MIN_YUV_420_SRC_H ||
 	     (fb->width % 4) != 0 || (fb->height % 4) != 0)) {
 		DRM_DEBUG_KMS("src dimensions not correct for NV12\n");
-		return -EINVAL;
+		goto err;
 	}

 	for (i = 0; i < fb->format->num_planes; i++) {
@@ -15365,17 +15369,6 @@ void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe)
 	POSTING_READ(DPLL(pipe));
 }

-static bool intel_plane_mapping_ok(struct intel_crtc *crtc,
-				   struct intel_plane *plane)
-{
-	enum pipe pipe;
-
-	if (!plane->get_hw_state(plane, &pipe))
-		return true;
-
-	return pipe == crtc->pipe;
-}
-
 static void
 intel_sanitize_plane_mapping(struct drm_i915_private *dev_priv)
 {
@@ -15387,13 +15380,20 @@ intel_sanitize_plane_mapping(struct drm_i915_private *dev_priv)
 	for_each_intel_crtc(&dev_priv->drm, crtc) {
 		struct intel_plane *plane =
 			to_intel_plane(crtc->base.primary);
+		struct intel_crtc *plane_crtc;
+		enum pipe pipe;

-		if (intel_plane_mapping_ok(crtc, plane))
+		if (!plane->get_hw_state(plane, &pipe))
+			continue;
+
+		if (pipe == crtc->pipe)
 			continue;

 		DRM_DEBUG_KMS("%s attached to the wrong pipe, disabling plane\n",
 			      plane->base.name);
-		intel_plane_disable_noatomic(crtc, plane);
+
+		plane_crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
+		intel_plane_disable_noatomic(plane_crtc, plane);
 	}
 }

@@ -15441,13 +15441,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
 			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
 	}

-	/* restore vblank interrupts to correct state */
-	drm_crtc_vblank_reset(&crtc->base);
 	if (crtc->active) {
 		struct intel_plane *plane;

-		drm_crtc_vblank_on(&crtc->base);
-
 		/* Disable everything but the primary plane */
 		for_each_intel_plane_on_crtc(dev, crtc, plane) {
 			const struct intel_plane_state *plane_state =
@@ -15565,23 +15561,32 @@ void i915_redisable_vga(struct drm_i915_private *dev_priv)
 }

 /* FIXME read out full plane state for all planes */
-static void readout_plane_state(struct intel_crtc *crtc)
+static void readout_plane_state(struct drm_i915_private *dev_priv)
 {
-	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	struct intel_crtc_state *crtc_state =
-		to_intel_crtc_state(crtc->base.state);
 	struct intel_plane *plane;
+	struct intel_crtc *crtc;

-	for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
+	for_each_intel_plane(&dev_priv->drm, plane) {
 		struct intel_plane_state *plane_state =
 			to_intel_plane_state(plane->base.state);
-		enum pipe pipe;
+		struct intel_crtc_state *crtc_state;
+		enum pipe pipe = PIPE_A;
 		bool visible;

 		visible = plane->get_hw_state(plane, &pipe);

+		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
+		crtc_state = to_intel_crtc_state(crtc->base.state);
+
 		intel_set_plane_visible(crtc_state, plane_state, visible);
 	}
+
+	for_each_intel_crtc(&dev_priv->drm, crtc) {
+		struct intel_crtc_state *crtc_state =
+			to_intel_crtc_state(crtc->base.state);
+
+		fixup_active_planes(crtc_state);
+	}
 }

 static void intel_modeset_readout_hw_state(struct drm_device *dev)
@@ -15613,13 +15618,13 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
 		if (crtc_state->base.active)
 			dev_priv->active_crtcs |= 1 << crtc->pipe;

-		readout_plane_state(crtc);
-
 		DRM_DEBUG_KMS("[CRTC:%d:%s] hw state readout: %s\n",
 			      crtc->base.base.id, crtc->base.name,
 			      enableddisabled(crtc_state->base.active));
 	}

+	readout_plane_state(dev_priv);
+
 	for (i = 0; i < dev_priv->num_shared_dpll; i++) {
 		struct intel_shared_dpll *pll = &dev_priv->shared_dplls[i];
@@ -15789,7 +15794,6 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
 			     struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	enum pipe pipe;
 	struct intel_crtc *crtc;
 	struct intel_encoder *encoder;
 	int i;
@@ -15800,15 +15804,23 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
 	/* HW state is read out, now we need to sanitize this mess. */
 	get_encoder_power_domains(dev_priv);

-	intel_sanitize_plane_mapping(dev_priv);
+	/*
+	 * intel_sanitize_plane_mapping() may need to do vblank
+	 * waits, so we need vblank interrupts restored beforehand.
+	 */
+	for_each_intel_crtc(&dev_priv->drm, crtc) {
+		drm_crtc_vblank_reset(&crtc->base);

-	for_each_intel_encoder(dev, encoder) {
-		intel_sanitize_encoder(encoder);
+		if (crtc->active)
+			drm_crtc_vblank_on(&crtc->base);
 	}

-	for_each_pipe(dev_priv, pipe) {
-		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
+	intel_sanitize_plane_mapping(dev_priv);
+
+	for_each_intel_encoder(dev, encoder)
+		intel_sanitize_encoder(encoder);
+
+	for_each_intel_crtc(&dev_priv->drm, crtc) {
 		intel_sanitize_crtc(crtc, ctx);
 		intel_dump_pipe_config(crtc, crtc->config,
 				       "[setup_hw_state]");
@@ -401,6 +401,22 @@ static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
 	return true;
 }

+static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
+						     int link_rate,
+						     uint8_t lane_count)
+{
+	const struct drm_display_mode *fixed_mode =
+		intel_dp->attached_connector->panel.fixed_mode;
+	int mode_rate, max_rate;
+
+	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
+	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
+	if (mode_rate > max_rate)
+		return false;
+
+	return true;
+}
+
 int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
 					    int link_rate, uint8_t lane_count)
 {
@@ -410,9 +426,23 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
 					    intel_dp->num_common_rates,
 					    link_rate);
 	if (index > 0) {
+		if (intel_dp_is_edp(intel_dp) &&
+		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
+							      intel_dp->common_rates[index - 1],
+							      lane_count)) {
+			DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n");
+			return 0;
+		}
 		intel_dp->max_link_rate = intel_dp->common_rates[index - 1];
 		intel_dp->max_link_lane_count = lane_count;
 	} else if (lane_count > 1) {
+		if (intel_dp_is_edp(intel_dp) &&
+		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
+							      intel_dp_max_common_rate(intel_dp),
+							      lane_count >> 1)) {
+			DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n");
+			return 0;
+		}
 		intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
 		intel_dp->max_link_lane_count = lane_count >> 1;
 	} else {
@@ -4709,19 +4739,13 @@ intel_dp_long_pulse(struct intel_connector *connector,
 		 */
 		status = connector_status_disconnected;
 		goto out;
-	} else {
-		/*
-		 * If display is now connected check links status,
-		 * there has been known issues of link loss triggering
-		 * long pulse.
-		 *
-		 * Some sinks (eg. ASUS PB287Q) seem to perform some
-		 * weird HPD ping pong during modesets. So we can apparently
-		 * end up with HPD going low during a modeset, and then
-		 * going back up soon after. And once that happens we must
-		 * retrain the link to get a picture. That's in case no
-		 * userspace component reacted to intermittent HPD dip.
-		 */
 	}

+	/*
+	 * Some external monitors do not signal loss of link synchronization
+	 * with an IRQ_HPD, so force a link status check.
+	 */
+	if (!intel_dp_is_edp(intel_dp)) {
+		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+
 		intel_dp_retrain_link(encoder, ctx);
@@ -352,22 +352,14 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
 	return;

 failure_handling:
-	/* Dont fallback and prune modes if its eDP */
-	if (!intel_dp_is_edp(intel_dp)) {
-		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
-			      intel_connector->base.base.id,
-			      intel_connector->base.name,
-			      intel_dp->link_rate, intel_dp->lane_count);
-		if (!intel_dp_get_link_train_fallback_values(intel_dp,
-							     intel_dp->link_rate,
-							     intel_dp->lane_count))
-			/* Schedule a Hotplug Uevent to userspace to start modeset */
-			schedule_work(&intel_connector->modeset_retry_work);
-	} else {
-		DRM_ERROR("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
-			  intel_connector->base.base.id,
-			  intel_connector->base.name,
-			  intel_dp->link_rate, intel_dp->lane_count);
-	}
+	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
+		      intel_connector->base.base.id,
+		      intel_connector->base.name,
+		      intel_dp->link_rate, intel_dp->lane_count);
+	if (!intel_dp_get_link_train_fallback_values(intel_dp,
+						     intel_dp->link_rate,
+						     intel_dp->lane_count))
+		/* Schedule a Hotplug Uevent to userspace to start modeset */
+		schedule_work(&intel_connector->modeset_retry_work);
 	return;
 }
@@ -38,11 +38,11 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
 	struct intel_digital_port *intel_dig_port = intel_mst->primary;
 	struct intel_dp *intel_dp = &intel_dig_port->dp;
-	struct intel_connector *connector =
-		to_intel_connector(conn_state->connector);
+	struct drm_connector *connector = conn_state->connector;
+	void *port = to_intel_connector(connector)->port;
 	struct drm_atomic_state *state = pipe_config->base.state;
 	int bpp;
-	int lane_count, slots;
+	int lane_count, slots = 0;
 	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
 	int mst_pbn;
 	bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc,
@@ -70,17 +70,23 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 
 	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
 
-	if (drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, connector->port))
+	if (drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, port))
 		pipe_config->has_audio = true;
 
 	mst_pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock, bpp);
 	pipe_config->pbn = mst_pbn;
 
-	slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
-					      connector->port, mst_pbn);
-	if (slots < 0) {
-		DRM_DEBUG_KMS("failed finding vcpi slots:%d\n", slots);
-		return false;
+	/* Zombie connectors can't have VCPI slots */
+	if (READ_ONCE(connector->registered)) {
+		slots = drm_dp_atomic_find_vcpi_slots(state,
+						      &intel_dp->mst_mgr,
+						      port,
+						      mst_pbn);
+		if (slots < 0) {
+			DRM_DEBUG_KMS("failed finding vcpi slots:%d\n",
+				      slots);
+			return false;
+		}
 	}
 
 	intel_link_compute_m_n(bpp, lane_count,
@@ -311,9 +317,8 @@ static int intel_dp_mst_get_ddc_modes(struct drm_connector *connector)
 	struct edid *edid;
 	int ret;
 
-	if (!intel_dp) {
+	if (!READ_ONCE(connector->registered))
 		return intel_connector_update_modes(connector, NULL);
-	}
 
 	edid = drm_dp_mst_get_edid(connector, &intel_dp->mst_mgr, intel_connector->port);
 	ret = intel_connector_update_modes(connector, edid);
@@ -328,9 +333,10 @@ intel_dp_mst_detect(struct drm_connector *connector, bool force)
 	struct intel_connector *intel_connector = to_intel_connector(connector);
 	struct intel_dp *intel_dp = intel_connector->mst_port;
 
-	if (!intel_dp)
+	if (!READ_ONCE(connector->registered))
 		return connector_status_disconnected;
-	return drm_dp_mst_detect_port(connector, &intel_dp->mst_mgr, intel_connector->port);
+	return drm_dp_mst_detect_port(connector, &intel_dp->mst_mgr,
+				      intel_connector->port);
 }
 
 static void
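The `READ_ONCE(connector->registered)` tests in the hunks above replace NULL checks on `intel_connector->mst_port`: a torn-down MST connector keeps its pointers but is flagged unregistered, and that flag is read without holding a lock. A minimal sketch of the pattern, with hypothetical names (this is not the i915 code, just the shape of the check):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the kernel's READ_ONCE(): a volatile load
 * that keeps the compiler from caching or tearing the read. */
#define READ_ONCE_BOOL(x) (*(volatile bool *)&(x))

enum fake_status { STATUS_CONNECTED, STATUS_DISCONNECTED };

struct fake_connector {
	bool registered; /* cleared when the topology removes the port */
	int port_id;     /* stays valid for the connector's lifetime */
};

/* Mirrors the shape of the patched detect path: bail out early on an
 * unregistered ("zombie") connector instead of chasing stale state. */
static enum fake_status fake_detect(struct fake_connector *c)
{
	if (!READ_ONCE_BOOL(c->registered))
		return STATUS_DISCONNECTED;
	return STATUS_CONNECTED;
}
```

The point of the design change is that the liveness test no longer requires taking `connection_mutex` in the destroy path, which is what the later "Don't unset intel_connector->mst_port" hunk removes.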
@@ -370,7 +376,7 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
 	int bpp = 24; /* MST uses fixed bpp */
 	int max_rate, mode_rate, max_lanes, max_link_clock;
 
-	if (!intel_dp)
+	if (!READ_ONCE(connector->registered))
 		return MODE_ERROR;
 
 	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
@@ -402,7 +408,7 @@ static struct drm_encoder *intel_mst_atomic_best_encoder(struct drm_connector *c
 	struct intel_dp *intel_dp = intel_connector->mst_port;
 	struct intel_crtc *crtc = to_intel_crtc(state->crtc);
 
-	if (!intel_dp)
+	if (!READ_ONCE(connector->registered))
 		return NULL;
 	return &intel_dp->mst_encoders[crtc->pipe]->base.base;
 }
@@ -452,6 +458,10 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
 	if (!intel_connector)
 		return NULL;
 
+	intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
+	intel_connector->mst_port = intel_dp;
+	intel_connector->port = port;
+
 	connector = &intel_connector->base;
 	ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs,
 				 DRM_MODE_CONNECTOR_DisplayPort);
@@ -462,10 +472,6 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
 
 	drm_connector_helper_add(connector, &intel_dp_mst_connector_helper_funcs);
 
-	intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
-	intel_connector->mst_port = intel_dp;
-	intel_connector->port = port;
-
 	for_each_pipe(dev_priv, pipe) {
 		struct drm_encoder *enc =
 			&intel_dp->mst_encoders[pipe]->base.base;
@@ -503,7 +509,6 @@ static void intel_dp_register_mst_connector(struct drm_connector *connector)
 static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
 					   struct drm_connector *connector)
 {
-	struct intel_connector *intel_connector = to_intel_connector(connector);
 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 
 	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id, connector->name);
@@ -512,10 +517,6 @@ static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
 	if (dev_priv->fbdev)
 		drm_fb_helper_remove_one_connector(&dev_priv->fbdev->helper,
 						   connector);
-	/* prevent race with the check in ->detect */
-	drm_modeset_lock(&connector->dev->mode_config.connection_mutex, NULL);
-	intel_connector->mst_port = NULL;
-	drm_modeset_unlock(&connector->dev->mode_config.connection_mutex);
 
 	drm_connector_put(connector);
 }
@@ -228,7 +228,9 @@ static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
 		drm_for_each_connector_iter(connector, &conn_iter) {
 			struct intel_connector *intel_connector = to_intel_connector(connector);
 
-			if (intel_connector->encoder->hpd_pin == pin) {
+			/* Don't check MST ports, they don't have pins */
+			if (!intel_connector->mst_port &&
+			    intel_connector->encoder->hpd_pin == pin) {
 				if (connector->polled != intel_connector->polled)
 					DRM_DEBUG_DRIVER("Reenabling HPD on connector %s\n",
 							 connector->name);
@@ -395,37 +397,54 @@ void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 	struct intel_encoder *encoder;
 	bool storm_detected = false;
 	bool queue_dig = false, queue_hp = false;
+	u32 long_hpd_pulse_mask = 0;
+	u32 short_hpd_pulse_mask = 0;
+	enum hpd_pin pin;
 
 	if (!pin_mask)
 		return;
 
 	spin_lock(&dev_priv->irq_lock);
-	for_each_intel_encoder(&dev_priv->drm, encoder) {
-		enum hpd_pin pin = encoder->hpd_pin;
-		bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);
+
+	/*
+	 * Determine whether ->hpd_pulse() exists for each pin, and
+	 * whether we have a short or a long pulse. This is needed
+	 * as each pin may have up to two encoders (HDMI and DP) and
+	 * only the one of them (DP) will have ->hpd_pulse().
+	 */
+	for_each_intel_encoder(&dev_priv->drm, encoder) {
+		bool has_hpd_pulse = intel_encoder_has_hpd_pulse(encoder);
+		enum port port = encoder->port;
+		bool long_hpd;
 
+		pin = encoder->hpd_pin;
 		if (!(BIT(pin) & pin_mask))
 			continue;
 
-		if (has_hpd_pulse) {
-			bool long_hpd = long_mask & BIT(pin);
-			enum port port = encoder->port;
+		if (!has_hpd_pulse)
+			continue;
 
-			DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),
-					 long_hpd ? "long" : "short");
-			/*
-			 * For long HPD pulses we want to have the digital queue happen,
-			 * but we still want HPD storm detection to function.
-			 */
-			queue_dig = true;
-			if (long_hpd) {
-				dev_priv->hotplug.long_port_mask |= (1 << port);
-			} else {
-				/* for short HPD just trigger the digital queue */
-				dev_priv->hotplug.short_port_mask |= (1 << port);
-				continue;
-			}
+		long_hpd = long_mask & BIT(pin);
 
+		DRM_DEBUG_DRIVER("digital hpd port %c - %s\n", port_name(port),
+				 long_hpd ? "long" : "short");
+		queue_dig = true;
 
+		if (long_hpd) {
+			long_hpd_pulse_mask |= BIT(pin);
+			dev_priv->hotplug.long_port_mask |= BIT(port);
+		} else {
+			short_hpd_pulse_mask |= BIT(pin);
+			dev_priv->hotplug.short_port_mask |= BIT(port);
+		}
+	}
+
+	/* Now process each pin just once */
+	for_each_hpd_pin(pin) {
+		bool long_hpd;
+
+		if (!(BIT(pin) & pin_mask))
+			continue;
+
 		if (dev_priv->hotplug.stats[pin].state == HPD_DISABLED) {
 			/*
@@ -442,11 +461,22 @@ void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
 		if (dev_priv->hotplug.stats[pin].state != HPD_ENABLED)
 			continue;
 
-		if (!has_hpd_pulse) {
+		/*
+		 * Delegate to ->hpd_pulse() if one of the encoders for this
+		 * pin has it, otherwise let the hotplug_work deal with this
+		 * pin directly.
+		 */
+		if (((short_hpd_pulse_mask | long_hpd_pulse_mask) & BIT(pin))) {
+			long_hpd = long_hpd_pulse_mask & BIT(pin);
+		} else {
 			dev_priv->hotplug.event_bits |= BIT(pin);
+			long_hpd = true;
 			queue_hp = true;
 		}
 
+		if (!long_hpd)
+			continue;
+
 		if (intel_hpd_irq_storm_detect(dev_priv, pin)) {
 			dev_priv->hotplug.event_bits &= ~BIT(pin);
 			storm_detected = true;
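The two hunks above split HPD handling into two passes because one pin can back two encoders (HDMI and DP) while only DP has `->hpd_pulse()`: the first pass folds per-encoder information into per-pin bitmasks, the second walks each pin exactly once. A hedged sketch of the first pass's bitmask bookkeeping (the encoder table and function here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

struct fake_encoder {
	unsigned int pin;    /* HPD pin this encoder is wired to */
	bool has_hpd_pulse;  /* true for DP, false for HDMI */
};

/*
 * First pass of the reworked handler: for every encoder whose pin fired,
 * record whether the pulse was long or short in per-pin masks, so a
 * second pass can process each pin exactly once.
 */
static void collect_pulse_masks(const struct fake_encoder *enc, size_t n,
				uint32_t pin_mask, uint32_t long_mask,
				uint32_t *long_pulse, uint32_t *short_pulse)
{
	*long_pulse = 0;
	*short_pulse = 0;
	for (size_t i = 0; i < n; i++) {
		if (!(BIT(enc[i].pin) & pin_mask))
			continue;
		if (!enc[i].has_hpd_pulse)
			continue; /* HDMI encoder on a shared pin: skip */
		if (long_mask & BIT(enc[i].pin))
			*long_pulse |= BIT(enc[i].pin);
		else
			*short_pulse |= BIT(enc[i].pin);
	}
}
```

With two encoders sharing a pin, only the DP one contributes to the masks, which is exactly the bug the original single-pass loop had: whichever encoder was visited last decided the pin's fate.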
@@ -297,8 +297,10 @@ void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv)
 	lpe_audio_platdev_destroy(dev_priv);
 
 	irq_free_desc(dev_priv->lpe_audio.irq);
-}
 
+	dev_priv->lpe_audio.irq = -1;
+	dev_priv->lpe_audio.platdev = NULL;
+}
 
 /**
  * intel_lpe_audio_notify() - notify lpe audio event
@@ -424,7 +424,8 @@ static u64 execlists_update_context(struct i915_request *rq)
 
 	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
 
-	/* True 32b PPGTT with dynamic page allocation: update PDP
+	/*
+	 * True 32b PPGTT with dynamic page allocation: update PDP
 	 * registers and point the unallocated PDPs to scratch page.
 	 * PML4 is allocated during ppgtt init, so this is not needed
 	 * in 48-bit mode.
@@ -432,6 +433,17 @@ static u64 execlists_update_context(struct i915_request *rq)
 	if (ppgtt && !i915_vm_is_48bit(&ppgtt->vm))
 		execlists_update_context_pdps(ppgtt, reg_state);
 
+	/*
+	 * Make sure the context image is complete before we submit it to HW.
+	 *
+	 * Ostensibly, writes (including the WCB) should be flushed prior to
+	 * an uncached write such as our mmio register access, the empirical
+	 * evidence (esp. on Braswell) suggests that the WC write into memory
+	 * may not be visible to the HW prior to the completion of the UC
+	 * register write and that we may begin execution from the context
+	 * before its image is complete leading to invalid PD chasing.
+	 */
+	wmb();
 	return ce->lrc_desc;
 }
@@ -91,6 +91,7 @@ static int
 gen4_render_ring_flush(struct i915_request *rq, u32 mode)
 {
 	u32 cmd, *cs;
+	int i;
 
 	/*
 	 * read/write caches:
@@ -127,12 +128,45 @@ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
 			cmd |= MI_INVALIDATE_ISP;
 	}
 
-	cs = intel_ring_begin(rq, 2);
+	i = 2;
+	if (mode & EMIT_INVALIDATE)
+		i += 20;
+
+	cs = intel_ring_begin(rq, i);
 	if (IS_ERR(cs))
 		return PTR_ERR(cs);
 
-	*cs++ = cmd;
-	*cs++ = MI_NOOP;
+	/*
+	 * A random delay to let the CS invalidate take effect? Without this
+	 * delay, the GPU relocation path fails as the CS does not see
+	 * the updated contents. Just as important, if we apply the flushes
+	 * to the EMIT_FLUSH branch (i.e. immediately after the relocation
+	 * write and before the invalidate on the next batch), the relocations
+	 * still fail. This implies that is a delay following invalidation
+	 * that is required to reset the caches as opposed to a delay to
+	 * ensure the memory is written.
+	 */
+	if (mode & EMIT_INVALIDATE) {
+		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
+			PIPE_CONTROL_GLOBAL_GTT;
+		*cs++ = 0;
+		*cs++ = 0;
+
+		for (i = 0; i < 12; i++)
+			*cs++ = MI_FLUSH;
+
+		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
+		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
+			PIPE_CONTROL_GLOBAL_GTT;
+		*cs++ = 0;
+		*cs++ = 0;
+	}
+
+	*cs++ = cmd;
 
 	intel_ring_advance(rq, cs);
 
 	return 0;
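The gen4/gen5 change above grows the ring reservation from 2 dwords to 22 when `EMIT_INVALIDATE` is set, and the reserved count has to match what is actually emitted. A small stand-alone sketch of that invariant, with placeholder command values rather than real MI opcodes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FAKE_EMIT_INVALIDATE 0x1u
#define FAKE_CMD             0x10000000u
#define FAKE_MI_FLUSH        0x02000000u
#define FAKE_PIPE_CTL        0x7a000000u

/*
 * Emit a gen4-style flush into 'ring' and return the number of dwords
 * written. Mirrors the patch's accounting: 2 dwords normally, plus 20
 * (two 4-dword PIPE_CONTROL writes around twelve MI_FLUSHes) when an
 * invalidate is requested.
 */
static size_t fake_ring_flush(uint32_t *ring, unsigned int mode)
{
	uint32_t *cs = ring;
	int j;

	*cs++ = FAKE_CMD;
	if (mode & FAKE_EMIT_INVALIDATE) {
		for (j = 0; j < 4; j++)
			*cs++ = FAKE_PIPE_CTL; /* scratch QW write */
		for (j = 0; j < 12; j++)
			*cs++ = FAKE_MI_FLUSH; /* the empirical delay */
		for (j = 0; j < 4; j++)
			*cs++ = FAKE_PIPE_CTL; /* second scratch write */
	}
	*cs++ = 0; /* NOOP padding keeps the dword count even */
	return (size_t)(cs - ring);
}
```

In the real driver `intel_ring_advance()` asserts that the emitted length equals the reservation, so the `i = 2; i += 20` bookkeeping and the emission path must agree exactly.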
@@ -549,7 +549,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
 		err = igt_check_page_sizes(vma);
 
 		if (vma->page_sizes.gtt != I915_GTT_PAGE_SIZE_4K) {
-			pr_err("page_sizes.gtt=%u, expected %lu\n",
+			pr_err("page_sizes.gtt=%u, expected %llu\n",
 			       vma->page_sizes.gtt, I915_GTT_PAGE_SIZE_4K);
 			err = -EINVAL;
 		}
@@ -1337,7 +1337,7 @@ static int igt_gtt_reserve(void *arg)
 		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 		if (vma->node.start != total ||
 		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
-			pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			pr_err("i915_gem_gtt_reserve (pass 1) placement failed, found (%llx + %llx), expected (%llx + %llx)\n",
 			       vma->node.start, vma->node.size,
 			       total, 2*I915_GTT_PAGE_SIZE);
 			err = -EINVAL;
@@ -1386,7 +1386,7 @@ static int igt_gtt_reserve(void *arg)
 		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 		if (vma->node.start != total ||
 		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
-			pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			pr_err("i915_gem_gtt_reserve (pass 2) placement failed, found (%llx + %llx), expected (%llx + %llx)\n",
 			       vma->node.start, vma->node.size,
 			       total, 2*I915_GTT_PAGE_SIZE);
 			err = -EINVAL;
@@ -1430,7 +1430,7 @@ static int igt_gtt_reserve(void *arg)
 		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 		if (vma->node.start != offset ||
 		    vma->node.size != 2*I915_GTT_PAGE_SIZE) {
-			pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %lx)\n",
+			pr_err("i915_gem_gtt_reserve (pass 3) placement failed, found (%llx + %llx), expected (%llx + %llx)\n",
 			       vma->node.start, vma->node.size,
 			       offset, 2*I915_GTT_PAGE_SIZE);
 			err = -EINVAL;
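The selftest fixes above (from the "Mark up GTT sizes as u64" series) change `%lx` to `%llx` because the arguments are now 64-bit: on a 32-bit build, a `%lx` specifier consuming a 64-bit vararg misaligns every argument after it. In userspace C the portable way to avoid the same trap is the `inttypes.h` macros; a hedged illustration (the helper name is invented):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a 64-bit GTT-style offset. PRIx64 expands to the correct
 * length modifier ("llx" on 32-bit targets, "lx" on LP64), which is
 * exactly what the %lx -> %llx fixes accomplish for the kernel's u64.
 */
static int format_offset(char *buf, size_t n, uint64_t offset)
{
	return snprintf(buf, n, "expected (%" PRIx64 ")", offset);
}
```

With a value above 4 GiB, a mismatched `%lx` on a 32-bit target would print only the low word (or corrupt the following arguments); the fixed-width specifier prints all 64 bits.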
@@ -633,8 +633,7 @@ static int adreno_get_legacy_pwrlevels(struct device *dev)
 	struct device_node *child, *node;
 	int ret;
 
-	node = of_find_compatible_node(dev->of_node, NULL,
-		"qcom,gpu-pwrlevels");
+	node = of_get_compatible_child(dev->of_node, "qcom,gpu-pwrlevels");
 	if (!node) {
 		dev_err(dev, "Could not find the GPU powerlevels\n");
 		return -ENXIO;
@@ -655,6 +654,8 @@ static int adreno_get_legacy_pwrlevels(struct device *dev)
 		dev_pm_opp_add(dev, val, 0);
 	}
 
+	of_node_put(node);
+
 	return 0;
 }
@@ -367,8 +367,8 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 			msm_gpu_devcoredump_read, msm_gpu_devcoredump_free);
 }
 #else
-static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, char *comm,
-		char *cmd)
+static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
+		struct msm_gem_submit *submit, char *comm, char *cmd)
 {
 }
 #endif
@@ -843,22 +843,16 @@ nv50_mstc_atomic_best_encoder(struct drm_connector *connector,
 {
 	struct nv50_head *head = nv50_head(connector_state->crtc);
 	struct nv50_mstc *mstc = nv50_mstc(connector);
-	if (mstc->port) {
-		struct nv50_mstm *mstm = mstc->mstm;
-		return &mstm->msto[head->base.index]->encoder;
-	}
-	return NULL;
+
+	return &mstc->mstm->msto[head->base.index]->encoder;
 }
 
 static struct drm_encoder *
 nv50_mstc_best_encoder(struct drm_connector *connector)
 {
 	struct nv50_mstc *mstc = nv50_mstc(connector);
-	if (mstc->port) {
-		struct nv50_mstm *mstm = mstc->mstm;
-		return &mstm->msto[0]->encoder;
-	}
-	return NULL;
+
+	return &mstc->mstm->msto[0]->encoder;
 }
 
 static enum drm_mode_status
@@ -116,7 +116,7 @@ nv40_backlight_init(struct drm_connector *connector)
 			       &nv40_bl_ops, &props);
 
 	if (IS_ERR(bd)) {
-		if (bl_connector.id > 0)
+		if (bl_connector.id >= 0)
 			ida_simple_remove(&bl_ida, bl_connector.id);
 		return PTR_ERR(bd);
 	}
@@ -249,7 +249,7 @@ nv50_backlight_init(struct drm_connector *connector)
 			    nv_encoder, ops, &props);
 
 	if (IS_ERR(bd)) {
-		if (bl_connector.id > 0)
+		if (bl_connector.id >= 0)
 			ida_simple_remove(&bl_ida, bl_connector.id);
 		return PTR_ERR(bd);
 	}
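The nouveau fix above works because IDA allocations start at 0, so `bl_connector.id > 0` treated the very first backlight ID as "nothing to free" on the error path; failures are negative errnos, making `>= 0` the correct "we own an ID" test. A toy allocator (a stand-in, not the kernel IDA) makes the off-by-one visible:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_IDS 8
static bool ida_used[MAX_IDS];

/* Toy IDA: hand out the smallest free ID starting at 0,
 * return a negative value on exhaustion. */
static int toy_ida_get(void)
{
	for (int i = 0; i < MAX_IDS; i++) {
		if (!ida_used[i]) {
			ida_used[i] = true;
			return i;
		}
	}
	return -1;
}

static void toy_ida_remove(int id)
{
	ida_used[id] = false;
}

/* Error-path cleanup as fixed: id >= 0 means the allocation succeeded
 * and must be released; the old "> 0" test leaked ID 0 forever. */
static void cleanup(int id)
{
	if (id >= 0)
		toy_ida_remove(id);
}
```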
@@ -801,6 +801,7 @@ acr_r352_load(struct nvkm_acr *_acr, struct nvkm_falcon *falcon,
 		bl = acr->hsbl_unload_blob;
 	} else {
 		nvkm_error(_acr->subdev, "invalid secure boot blob!\n");
+		kfree(bl_desc);
 		return -EINVAL;
 	}
@@ -285,6 +285,17 @@ static int dmm_txn_commit(struct dmm_txn *txn, bool wait)
 	}
 
 	txn->last_pat->next_pa = 0;
+	/* ensure that the written descriptors are visible to DMM */
+	wmb();
+
+	/*
+	 * NOTE: the wmb() above should be enough, but there seems to be a bug
+	 * in OMAP's memory barrier implementation, which in some rare cases may
+	 * cause the writes not to be observable after wmb().
+	 */
+
+	/* read back to ensure the data is in RAM */
+	readl(&txn->last_pat->next_pa);
 
 	/* write to PAT_DESCR to clear out any pending transaction */
 	dmm_write(dmm, 0x0, reg[PAT_DESCR][engine->id]);
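The OMAP DMM fix above pairs `wmb()` with a read-back of the last descriptor: on this SoC the barrier alone does not reliably make the write observable before the register kick, so reading the location back forces the write out of the write buffer first. A schematic of the write / read-back / kick ordering, with invented names and no real MMIO involved:

```c
#include <assert.h>
#include <stdint.h>

struct fake_desc {
	uint32_t next_pa;
};

static uint32_t fake_reg_kick;  /* stands in for the PAT_DESCR register */
static uint32_t last_observed;  /* value read back before the kick */

/* Compiler barrier standing in for wmb(); real kernel code needs the
 * CPU write barrier, not just this. */
#define barrier() __asm__ __volatile__("" ::: "memory")

static void commit_txn(struct fake_desc *last)
{
	last->next_pa = 0;  /* terminate the descriptor chain */
	barrier();          /* order the descriptor write... */

	/* ...and read it back so the write is genuinely in memory
	 * before the engine is kicked (the OMAP workaround). */
	last_observed = *(volatile uint32_t *)&last->next_pa;

	fake_reg_kick = 1;  /* kick the engine */
}
```

The same posted-write/read-back idiom shows up throughout driver code: a read from the written location (or from the device) acts as a flush point that a plain barrier cannot always provide on buggy interconnects.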
@@ -516,12 +516,22 @@ int rcar_du_modeset_init(struct rcar_du_device *rcdu)
 
 	dev->mode_config.min_width = 0;
 	dev->mode_config.min_height = 0;
-	dev->mode_config.max_width = 4095;
-	dev->mode_config.max_height = 2047;
 	dev->mode_config.normalize_zpos = true;
 	dev->mode_config.funcs = &rcar_du_mode_config_funcs;
 	dev->mode_config.helper_private = &rcar_du_mode_config_helper;
 
+	if (rcdu->info->gen < 3) {
+		dev->mode_config.max_width = 4095;
+		dev->mode_config.max_height = 2047;
+	} else {
+		/*
+		 * The Gen3 DU uses the VSP1 for memory access, and is limited
+		 * to frame sizes of 8190x8190.
+		 */
+		dev->mode_config.max_width = 8190;
+		dev->mode_config.max_height = 8190;
+	}
+
 	rcdu->num_crtcs = hweight8(rcdu->info->channels_mask);
 
 	ret = rcar_du_properties_init(rcdu);