0cc34620e8
* refs/heads/tmp-204dd19: UPSTREAM: driver core: Avoid deferred probe due to fw_devlink_pause/resume() UPSTREAM: driver core: Rename dev_links_info.defer_sync to defer_hook UPSTREAM: driver core: Don't do deferred probe in parallel with kernel_init thread Restore sdcardfs feature Revert rpmh and usb changes Linux 4.19.136 regmap: debugfs: check count when read regmap file rtnetlink: Fix memory(net_device) leak when ->newlink fails udp: Improve load balancing for SO_REUSEPORT. udp: Copy has_conns in reuseport_grow(). sctp: shrink stream outq when fails to do addstream reconf sctp: shrink stream outq only when new outcnt < old outcnt AX.25: Prevent integer overflows in connect and sendmsg tcp: allow at most one TLP probe per flight rxrpc: Fix sendmsg() returning EPIPE due to recvmsg() returning ENODATA qrtr: orphan socket in qrtr_release() net: udp: Fix wrong clean up for IS_UDPLITE macro net-sysfs: add a newline when printing 'tx_timeout' by sysfs ip6_gre: fix null-ptr-deref in ip6gre_init_net() drivers/net/wan/x25_asy: Fix to make it work dev: Defer free of skbs in flush_backlog AX.25: Prevent out-of-bounds read in ax25_sendmsg() AX.25: Fix out-of-bounds read in ax25_connect() Linux 4.19.135 ath9k: Fix regression with Atheros 9271 ath9k: Fix general protection fault in ath9k_hif_usb_rx_cb dm integrity: fix integrity recalculation that is improperly skipped ASoC: qcom: Drop HAS_DMA dependency to fix link failure ASoC: rt5670: Add new gpio1_is_ext_spk_en quirk and enable it on the Lenovo Miix 2 10 x86, vmlinux.lds: Page-align end of ..page_aligned sections parisc: Add atomic64_set_release() define to avoid CPU soft lockups drm/amd/powerplay: fix a crash when overclocking Vega M drm/amdgpu: Fix NULL dereference in dpm sysfs handlers io-mapping: indicate mapping failure mm: memcg/slab: fix memory leak at non-root kmem_cache destroy mm: memcg/slab: synchronize access to kmem_cache dying flag using a spinlock mm/memcg: fix refcount error while moving and swapping 
Makefile: Fix GCC_TOOLCHAIN_DIR prefix for Clang cross compilation vt: Reject zero-sized screen buffer size. fbdev: Detect integer underflow at "struct fbcon_ops"->clear_margins. serial: 8250_mtk: Fix high-speed baud rates clamping serial: 8250: fix null-ptr-deref in serial8250_start_tx() staging: comedi: addi_apci_1564: check INSN_CONFIG_DIGITAL_TRIG shift staging: comedi: addi_apci_1500: check INSN_CONFIG_DIGITAL_TRIG shift staging: comedi: ni_6527: fix INSN_CONFIG_DIGITAL_TRIG support staging: comedi: addi_apci_1032: check INSN_CONFIG_DIGITAL_TRIG shift staging: wlan-ng: properly check endpoint types Revert "cifs: Fix the target file was deleted when rename failed." usb: xhci: Fix ASM2142/ASM3142 DMA addressing usb: xhci-mtk: fix the failure of bandwidth allocation binder: Don't use mmput() from shrinker function. RISC-V: Upgrade smp_mb__after_spinlock() to iorw,iorw x86: math-emu: Fix up 'cmp' insn for clang ias arm64: Use test_tsk_thread_flag() for checking TIF_SINGLESTEP hwmon: (scmi) Fix potential buffer overflow in scmi_hwmon_probe() hwmon: (adm1275) Make sure we are reading enough data for different chips usb: gadget: udc: gr_udc: fix memleak on error handling path in gr_ep_init() Input: synaptics - enable InterTouch for ThinkPad X1E 1st gen dmaengine: ioat setting ioat timeout as module parameter hwmon: (aspeed-pwm-tacho) Avoid possible buffer overflow regmap: dev_get_regmap_match(): fix string comparison spi: mediatek: use correct SPI_CFG2_REG MACRO Input: add `SW_MACHINE_COVER` dmaengine: tegra210-adma: Fix runtime PM imbalance on error HID: apple: Disable Fn-key key-re-mapping on clone keyboards HID: steam: fixes race in handling device list. 
HID: alps: support devices with report id 2 HID: i2c-hid: add Mediacom FlexBook edge13 to descriptor override scripts/gdb: fix lx-symbols 'gdb.error' while loading modules scripts/decode_stacktrace: strip basepath from all paths serial: exar: Fix GPIO configuration for Sealevel cards based on XR17V35X bonding: check return value of register_netdevice() in bond_newlink() i2c: rcar: always clear ICSAR to avoid side effects net: ethernet: ave: Fix error returns in ave_init ipvs: fix the connection sync failed in some cases qed: suppress "don't support RoCE & iWARP" flooding on HW init mlxsw: destroy workqueue when trap_register in mlxsw_emad_init bonding: check error value of register_netdevice() immediately net: smc91x: Fix possible memory leak in smc_drv_probe() drm: sun4i: hdmi: Fix inverted HPD result ieee802154: fix one possible memleak in adf7242_probe net: dp83640: fix SIOCSHWTSTAMP to update the struct with actual configuration ax88172a: fix ax88172a_unbind() failures hippi: Fix a size used in a 'pci_free_consistent()' in an error handling path fpga: dfl: fix bug in port reset handshake bnxt_en: Fix race when modifying pause settings. 
btrfs: fix page leaks after failure to lock page for delalloc btrfs: fix mount failure caused by race with umount btrfs: fix double free on ulist after backref resolution failure ASoC: rt5670: Correct RT5670_LDO_SEL_MASK ALSA: info: Drop WARN_ON() from buffer NULL sanity check uprobes: Change handle_swbp() to send SIGTRAP with si_code=SI_KERNEL, to fix GDB regression IB/umem: fix reference count leak in ib_umem_odp_get() tipc: clean up skb list lock handling on send path spi: spi-fsl-dspi: Exit the ISR with IRQ_NONE when it's not ours SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion") irqdomain/treewide: Keep firmware node unconditionally allocated fuse: fix weird page warning drivers/firmware/psci: Fix memory leakage in alloc_init_cpu_groups() drm/nouveau/i2c/g94-: increase NV_PMGR_DP_AUXCTL_TRANSACTREQ timeout net: sky2: initialize return of gm_phy_read drivers/net/wan/lapbether: Fixed the value of hard_header_len xtensa: update *pos in cpuinfo_op.next xtensa: fix __sync_fetch_and_{and,or}_4 declarations scsi: scsi_transport_spi: Fix function pointer check mac80211: allow rx of mesh eapol frames with default rx key pinctrl: amd: fix npins for uart0 in kerncz_groups gpio: arizona: put pm_runtime in case of failure gpio: arizona: handle pm_runtime_get_sync failure case soc: qcom: rpmh: Dirt can only make you dirtier, not cleaner ANDROID: build: update ABI definitions ANDROID: update the kernel release format for GKI ANDROID: Incremental fs: magic number compatible 32-bit ANDROID: kbuild: don't merge .*..compoundliteral in modules ANDROID: GKI: preserve ABI for struct sock_cgroup_data Revert "genetlink: remove genl_bind" Revert "arm64/alternatives: use subsections for replacement sequences" Linux 4.19.134 spi: sprd: switch the sequence of setting WDG_LOAD_LOW and _HIGH rxrpc: Fix trace string libceph: don't omit recovery_deletes in target_copy() printk: queue wake_up_klogd irq_work only if per-CPU areas are ready genirq/affinity: 
Handle affinity setting on inactive interrupts correctly sched/fair: handle case of task_h_load() returning 0 sched: Fix unreliable rseq cpu_id for new tasks arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return arm64: ptrace: Consistently use pseudo-singlestep exceptions arm64: ptrace: Override SPSR.SS when single-stepping is enabled thermal/drivers/cpufreq_cooling: Fix wrong frequency converted from power misc: atmel-ssc: lock with mutex instead of spinlock dmaengine: fsl-edma: Fix NULL pointer exception in fsl_edma_tx_handler intel_th: Fix a NULL dereference when hub driver is not loaded intel_th: pci: Add Emmitsburg PCH support intel_th: pci: Add Tiger Lake PCH-H support intel_th: pci: Add Jasper Lake CPU support powerpc/book3s64/pkeys: Fix pkey_access_permitted() for execute disable pkey hwmon: (emc2103) fix unable to change fan pwm1_enable attribute riscv: use 16KB kernel stack on 64-bit MIPS: Fix build for LTS kernel caused by backporting lpj adjustment timer: Fix wheel index calculation on last level timer: Prevent base->clk from moving backward uio_pdrv_genirq: fix use without device tree and no interrupt Input: i8042 - add Lenovo XiaoXin Air 12 to i8042 nomux list mei: bus: don't clean driver pointer Revert "zram: convert remaining CLASS_ATTR() to CLASS_ATTR_RO()" fuse: Fix parameter for FS_IOC_{GET,SET}FLAGS ovl: fix unneeded call to ovl_change_flags() ovl: relax WARN_ON() when decoding lower directory file handle ovl: inode reference leak in ovl_is_inuse true case. 
serial: mxs-auart: add missed iounmap() in probe failure and remove virtio: virtio_console: add missing MODULE_DEVICE_TABLE() for rproc serial virt: vbox: Fix guest capabilities mask check virt: vbox: Fix VBGL_IOCTL_VMMDEV_REQUEST_BIG and _LOG req numbers to match upstream USB: serial: option: add Quectel EG95 LTE modem USB: serial: option: add GosunCn GM500 series USB: serial: ch341: add new Product ID for CH340 USB: serial: cypress_m8: enable Simply Automated UPB PIM USB: serial: iuu_phoenix: fix memory corruption usb: gadget: function: fix missing spinlock in f_uac1_legacy usb: chipidea: core: add wakeup support for extcon usb: dwc2: Fix shutdown callback in platform USB: c67x00: fix use after free in c67x00_giveback_urb ALSA: hda/realtek - Enable Speaker for ASUS UX533 and UX534 ALSA: hda/realtek - change to suitable link model for ASUS platform ALSA: usb-audio: Fix race against the error recovery URB submission ALSA: line6: Sync the pending work cancel at disconnection ALSA: line6: Perform sanity check for each URB creation HID: quirks: Ignore Simply Automated UPB PIM HID: quirks: Always poll Obins Anne Pro 2 keyboard HID: magicmouse: do not set up autorepeat slimbus: core: Fix mismatch in of_node_get/put mtd: rawnand: oxnas: Release all devices in the _remove() path mtd: rawnand: oxnas: Unregister all devices on error mtd: rawnand: oxnas: Keep track of registered devices mtd: rawnand: brcmnand: fix CS0 layout mtd: rawnand: timings: Fix default tR_max and tCCS_min timings mtd: rawnand: marvell: Fix probe error path mtd: rawnand: marvell: Use nand_cleanup() when the device is not yet registered soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request soc: qcom: rpmh-rsc: Clear active mode configuration for wake TCS soc: qcom: rpmh: Invalidate SLEEP and WAKE TCSes before flushing new data soc: qcom: rpmh: Update dirty flag only when data changes perf stat: Zero all the 'ena' and 'run' array slot stats for interval mode apparmor: ensure that dfa state 
tables have entries copy_xstate_to_kernel: Fix typo which caused GDB regression regmap: debugfs: Don't sleep while atomic for fast_io regmaps ARM: dts: socfpga: Align L2 cache-controller nodename with dtschema Revert "thermal: mediatek: fix register index error" staging: comedi: verify array index is correct before using it usb: gadget: udc: atmel: fix uninitialized read in debug printk spi: spi-sun6i: sun6i_spi_transfer_one(): fix setting of clock rate arm64: dts: meson: add missing gxl rng clock phy: sun4i-usb: fix dereference of pointer phy0 before it is null checked iio:health:afe4404 Fix timestamp alignment and prevent data leak. ALSA: usb-audio: Add registration quirk for Kingston HyperX Cloud Flight S ACPI: video: Use native backlight on Acer TravelMate 5735Z Input: mms114 - add extra compatible for mms345l ALSA: usb-audio: Add registration quirk for Kingston HyperX Cloud Alpha S ACPI: video: Use native backlight on Acer Aspire 5783z ALSA: usb-audio: Rewrite registration quirk handling mmc: sdhci: do not enable card detect interrupt for gpio cd type doc: dt: bindings: usb: dwc3: Update entries for disabling SS instances in park mode ALSA: usb-audio: Create a registration quirk for Kingston HyperX Amp (0951:16d8) scsi: sr: remove references to BLK_DEV_SR_VENDOR, leave it enabled ARM: at91: pm: add quirk for sam9x60's ulp1 HID: quirks: Remove ITE 8595 entry from hid_have_special_driver net: sfp: add some quirks for GPON modules net: sfp: add support for module quirks Revert "usb/ehci-platform: Set PM runtime as active on resume" Revert "usb/xhci-plat: Set PM runtime as active on resume" Revert "usb/ohci-platform: Fix a warning when hibernating" of: of_mdio: Correct loop scanning logic net: dsa: bcm_sf2: Fix node reference count spi: spi-fsl-dspi: Fix lockup if device is shutdown during SPI transfer spi: fix initial SPI_SR value in spi-fsl-dspi iio:health:afe4403 Fix timestamp alignment and prevent data leak. 
iio:pressure:ms5611 Fix buffer element alignment iio:humidity:hts221 Fix alignment and data leak issues iio: pressure: zpa2326: handle pm_runtime_get_sync failure iio: mma8452: Add missed iio_device_unregister() call in mma8452_probe() iio: magnetometer: ak8974: Fix runtime PM imbalance on error iio:humidity:hdc100x Fix alignment and data leak issues iio:magnetometer:ak8974: Fix alignment and data leak issues arm64/alternatives: don't patch up internal branches i2c: eg20t: Load module automatically if ID matches gfs2: read-only mounts should grab the sd_freeze_gl glock tpm_tis: extra chip->ops check on error path in tpm_tis_core_init arm64/alternatives: use subsections for replacement sequences m68k: mm: fix node memblock init m68k: nommu: register start of the memory with memblock drm/exynos: fix ref count leak in mic_pre_enable drm/msm: fix potential memleak in error branch vlan: consolidate VLAN parsing code and limit max parsing depth sched: consistently handle layer3 header accesses in the presence of VLANs cgroup: Fix sock_cgroup_data on big-endian. 
cgroup: fix cgroup_sk_alloc() for sk_clone_lock() tcp: md5: allow changing MD5 keys in all socket states tcp: md5: refine tcp_md5_do_add()/tcp_md5_hash_key() barriers tcp: md5: do not send silly options in SYNCOOKIES tcp: md5: add missing memory barriers in tcp_md5_do_add()/tcp_md5_hash_key() tcp: make sure listeners don't initialize congestion-control state tcp: fix SO_RCVLOWAT possible hangs under high mem pressure net: usb: qmi_wwan: add support for Quectel EG95 LTE modem net_sched: fix a memory leak in atm_tc_init() net: Added pointer check for dst->ops->neigh_lookup in dst_neigh_lookup_skb llc: make sure applications use ARPHRD_ETHER l2tp: remove skb_dst_set() from l2tp_xmit_skb() ipv4: fill fl4_icmp_{type,code} in ping_v4_sendmsg genetlink: remove genl_bind net: rmnet: fix lower interface leak perf: Make perf able to build with latest libbfd UPSTREAM: media: v4l2-ctrl: Add H264 profile and levels UPSTREAM: media: v4l2-ctrl: Add control for h.264 chroma qp offset ANDROID: GKI: ASoC: compress: revert some code to avoid race condition ANDROID: GKI: Update the ABI xml representation. ANDROID: GKI: kernel: tick-sched: Add an API for wakeup callbacks ANDROID: ASoC: Compress: Check and set pcm_new driver op Revert "ANDROID: GKI: arm64: gki_defconfig: Disable CONFIG_ARM64_TAGGED_ADDR_ABI" ANDROID: arm64: configs: enabe CONFIG_TMPFS Revert "ALSA: compress: fix partial_drain completion state" ANDROID: GKI: enable CONFIG_EXT4_FS_POSIX_ACL. 
ANDROID: GKI: set CONFIG_STATIC_USERMODEHELPER_PATH Linux 4.19.133 s390/mm: fix huge pte soft dirty copying ARC: elf: use right ELF_ARCH ARC: entry: fix potential EFA clobber when TIF_SYSCALL_TRACE dm: use noio when sending kobject event drm/radeon: fix double free btrfs: fix fatal extent_buffer readahead vs releasepage race Revert "ath9k: Fix general protection fault in ath9k_hif_usb_rx_cb" bpf: Check correct cred for CAP_SYSLOG in bpf_dump_raw_ok() kprobes: Do not expose probe addresses to non-CAP_SYSLOG module: Do not expose section addresses to non-CAP_SYSLOG module: Refactor section attr into bin attribute kernel: module: Use struct_size() helper kallsyms: Refactor kallsyms_show_value() to take cred KVM: x86: Mark CR4.TSD as being possibly owned by the guest KVM: x86: Inject #GP if guest attempts to toggle CR4.LA57 in 64-bit mode KVM: x86: bit 8 of non-leaf PDPEs is not reserved KVM: arm64: Stop clobbering x0 for HVC_SOFT_RESTART KVM: arm64: Fix definition of PAGE_HYP_DEVICE ALSA: usb-audio: add quirk for MacroSilicon MS2109 ALSA: hda - let hs_mic be picked ahead of hp_mic ALSA: opl3: fix infoleak in opl3 mlxsw: spectrum_router: Remove inappropriate usage of WARN_ON() net: macb: mark device wake capable when "magic-packet" property present bnxt_en: fix NULL dereference in case SR-IOV configuration fails cxgb4: fix all-mask IP address comparison nbd: Fix memory leak in nbd_add_socket arm64: kgdb: Fix single-step exception handling oops ALSA: compress: fix partial_drain completion state net: hns3: fix use-after-free when doing self test smsc95xx: avoid memory leak in smsc95xx_bind smsc95xx: check return value of smsc95xx_reset net: cxgb4: fix return error value in t4_prep_fw drm/mediatek: Check plane visibility in atomic_update net: qrtr: Fix an out of bounds read qrtr_endpoint_post() x86/entry: Increase entry_stack size to a full page nvme-rdma: assign completion vector correctly block: release bip in a right way in error path usb: dwc3: pci: Fix reference 
count leak in dwc3_pci_resume_work scsi: mptscsih: Fix read sense data size ARM: imx6: add missing put_device() call in imx6q_suspend_init() cifs: update ctime and mtime during truncate s390/kasan: fix early pgm check handler execution drm: panel-orientation-quirks: Use generic orientation-data for Acer S1003 drm: panel-orientation-quirks: Add quirk for Asus T101HA panel i40e: protect ring accesses with READ- and WRITE_ONCE ixgbe: protect ring accesses with READ- and WRITE_ONCE spi: spidev: fix a potential use-after-free in spidev_release() spi: spidev: fix a race between spidev_release and spidev_remove gpu: host1x: Detach driver on unregister drm/tegra: hub: Do not enable orphaned window group ARM: dts: omap4-droid4: Fix spi configuration and increase rate regmap: fix alignment issue spi: spi-fsl-dspi: Fix external abort on interrupt in resume or exit paths spi: spi-fsl-dspi: use IRQF_SHARED mode to request IRQ spi: spi-fsl-dspi: Fix lockup if device is removed during SPI transfer spi: spi-fsl-dspi: Adding shutdown hook KVM: s390: reduce number of IO pins to 1 ANDROID: GKI: update abi based on padding fields being added ANDROID: GKI: USB: Gadget: add Android ABI padding to struct usb_gadget ANDROID: GKI: sound/usb/card.h: add Android ABI padding to struct snd_usb_endpoint ANDROID: fscrypt: fix DUN contiguity with inline encryption + IV_INO_LBLK_32 policies ANDROID: f2fs: add back compress inode check Linux 4.19.132 efi: Make it possible to disable efivar_ssdt entirely dm zoned: assign max_io_len correctly irqchip/gic: Atomically update affinity MIPS: Add missing EHB in mtc0 -> mfc0 sequence for DSPen cifs: Fix the target file was deleted when rename failed. 
SMB3: Honor lease disabling for multiuser mounts SMB3: Honor persistent/resilient handle flags for multiuser mounts SMB3: Honor 'seal' flag for multiuser mounts Revert "ALSA: usb-audio: Improve frames size computation" nfsd: apply umask on fs without ACL support i2c: mlxcpld: check correct size of maximum RECV_LEN packet i2c: algo-pca: Add 0x78 as SCL stuck low status for PCA9665 nvme: fix a crash in nvme_mpath_add_disk SMB3: Honor 'posix' flag for multiuser mounts virtio-blk: free vblk-vqs in error path of virtblk_probe() drm: sun4i: hdmi: Remove extra HPD polling hwmon: (acpi_power_meter) Fix potential memory leak in acpi_power_meter_add() hwmon: (max6697) Make sure the OVERT mask is set correctly cxgb4: fix SGE queue dump destination buffer context cxgb4: use correct type for all-mask IP address comparison cxgb4: parse TC-U32 key values and masks natively cxgb4: use unaligned conversion for fetching timestamp drm/msm/dpu: fix error return code in dpu_encoder_init crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock() kgdb: Avoid suspicious RCU usage warning nvme-multipath: fix deadlock between ana_work and scan_work nvme-multipath: set bdi capabilities once s390/debug: avoid kernel warning on too large number of pages usb: usbtest: fix missing kfree(dev->buf) in usbtest_disconnect mm/slub: fix stack overruns with SLUB_STATS mm/slub.c: fix corrupted freechain in deactivate_slab() usbnet: smsc95xx: Fix use-after-free after removal EDAC/amd64: Read back the scrub rate PCI register on F15h mm: fix swap cache node allocation mask btrfs: fix a block group ref counter leak after failure to remove block group ANDROID: Update ABI representation for libabigail update ANDROID: Update the ABI representation ANDROID: Update the ABI xml representation ANDROID: GKI: fix ABI diffs caused by GPU heap and pool vmstat additions ANDROID: sched: consider stune boost margin when computing energy ANDROID: GKI: move abi files to android/ ANDROID: GKI: drop 
unneeded "_whitelist" off of symbol filenames UPSTREAM: binder: fix null deref of proc->context ANDROID: cpufreq: schedutil: maintain raw cache when next_f is not changed UPSTREAM: net: bpf: Make bpf_ktime_get_ns() available to non GPL programs UPSTREAM: usb: musb: mediatek: add reset FADDR to zero in reset interrupt handle ANDROID: GKI: scripts: Makefile: update the lz4 command (#2) ANDROID: Update the ABI xml representation Revert "drm/dsi: Fix byte order of DCS set/get brightness" Linux 4.19.131 Revert "tty: hvc: Fix data abort due to race in hvc_open" xfs: add agf freeblocks verify in xfs_agf_verify dm writecache: add cond_resched to loop in persistent_memory_claim() dm writecache: correct uncommitted_block when discarding uncommitted entry NFSv4 fix CLOSE not waiting for direct IO compeletion pNFS/flexfiles: Fix list corruption if the mirror count changes SUNRPC: Properly set the @subbuf parameter of xdr_buf_subsegment() sunrpc: fixed rollback in rpc_gssd_dummy_populate() Staging: rtl8723bs: prevent buffer overflow in update_sta_support_rate() drm/radeon: fix fb_div check in ni_init_smc_spll_table() drm: rcar-du: Fix build error ring-buffer: Zero out time extend if it is nested and not absolute tracing: Fix event trigger to accept redundant spaces arm64: perf: Report the PC value in REGS_ABI_32 mode ocfs2: fix panic on nfs server over ocfs2 ocfs2: fix value of OCFS2_INVALID_SLOT ocfs2: load global_inode_alloc ocfs2: avoid inode removal while nfsd is accessing it mm/slab: use memzero_explicit() in kzfree() btrfs: fix failure of RWF_NOWAIT write into prealloc extent beyond eof btrfs: fix data block group relocation failure due to concurrent scrub x86/asm/64: Align start of __clear_user() loop to 16-bytes KVM: nVMX: Plumb L2 GPA through to PML emulation KVM: X86: Fix MSR range of APIC registers in X2APIC mode erofs: fix partially uninitialized misuse in z_erofs_onlinepage_fixup ACPI: sysfs: Fix pm_profile_attr type ALSA: hda/realtek - Add quirk for MSI GE63 
laptop ALSA: hda: Add NVIDIA codec IDs 9a & 9d through a0 to patch table RISC-V: Don't allow write+exec only page mapping request in mmap blktrace: break out of blktrace setup on concurrent calls kbuild: improve cc-option to clean up all temporary files arm64: sve: Fix build failure when ARM64_SVE=y and SYSCTL=n s390/vdso: fix vDSO clock_getres() s390/ptrace: fix setting syscall number net: alx: fix race condition in alx_remove ibmvnic: Harden device login requests hwrng: ks-sa - Fix runtime PM imbalance on error riscv/atomic: Fix sign extension for RV64I drm/amd/display: Use kfree() to free rgb_user in calculate_user_regamma_ramp() ata/libata: Fix usage of page address by page_address in ata_scsi_mode_select_xlat function sata_rcar: handle pm_runtime_get_sync failure cases sched/core: Fix PI boosting between RT and DEADLINE tasks sched/deadline: Initialize ->dl_boosted i2c: core: check returned size of emulated smbus block read i2c: fsi: Fix the port number field in status register net: bcmgenet: use hardware padding of runt frames netfilter: ipset: fix unaligned atomic access usb: gadget: udc: Potential Oops in error handling code ARM: imx5: add missing put_device() call in imx_suspend_alloc_ocram() cxgb4: move handling L2T ARP failures to caller net: qed: fix excessive QM ILT lines consumption net: qed: fix NVMe login fails over VFs net: qed: fix left elements count calculation RDMA/mad: Fix possible memory leak in ib_mad_post_receive_mads() ASoC: rockchip: Fix a reference count leak. 
RDMA/cma: Protect bind_list and listen_list while finding matching cm id RDMA/qedr: Fix KASAN: use-after-free in ucma_event_handler+0x532 rxrpc: Fix handling of rwind from an ACK packet ARM: dts: NSP: Correct FA2 mailbox node regmap: Fix memory leak from regmap_register_patch x86/resctrl: Fix a NULL vs IS_ERR() static checker warning in rdt_cdp_peer_get() ARM: dts: Fix duovero smsc interrupt for suspend ASoC: fsl_ssi: Fix bclk calculation for mono channel regualtor: pfuze100: correct sw1a/sw2 on pfuze3000 efi/esrt: Fix reference count leak in esre_create_sysfs_entry. ASoC: q6asm: handle EOS correctly xfrm: Fix double ESP trailer insertion in IPsec crypto offload. cifs/smb3: Fix data inconsistent when zero file range cifs/smb3: Fix data inconsistent when punch hole IB/mad: Fix use after free when destroying MAD agent loop: replace kill_bdev with invalidate_bdev cdc-acm: Add DISABLE_ECHO quirk for Microchip/SMSC chip xhci: Return if xHCI doesn't support LPM xhci: Fix enumeration issue when setting max packet size for FS devices. 
xhci: Fix incorrect EP_STATE_MASK scsi: zfcp: Fix panic on ERP timeout for previously dismissed ERP action ALSA: usb-audio: Fix OOB access of mixer element list ALSA: usb-audio: add quirk for Samsung USBC Headset (AKG) ALSA: usb-audio: add quirk for Denon DCD-1500RE usb: typec: tcpci_rt1711h: avoid screaming irq causing boot hangs usb: host: ehci-exynos: Fix error check in exynos_ehci_probe() xhci: Poll for U0 after disabling USB2 LPM usb: host: xhci-mtk: avoid runtime suspend when removing hcd USB: ehci: reopen solution for Synopsys HC bug usb: add USB_QUIRK_DELAY_INIT for Logitech C922 usb: dwc2: Postponed gadget registration to the udc class driver USB: ohci-sm501: Add missed iounmap() in remove net: core: reduce recursion limit value net: Do not clear the sock TX queue in sk_set_socket() net: Fix the arp error in some cases sch_cake: don't call diffserv parsing code when it is not needed tcp_cubic: fix spurious HYSTART_DELAY exit upon drop in min RTT sch_cake: fix a few style nits sch_cake: don't try to reallocate or unshare skb unconditionally ip_tunnel: fix use-after-free in ip_tunnel_lookup() net: phy: Check harder for errors in get_phy_id() ip6_gre: fix use-after-free in ip6gre_tunnel_lookup() tg3: driver sleeps indefinitely when EEH errors exceed eeh_max_freezes tcp: grow window for OOO packets only for SACK flows tcp: don't ignore ECN CWR on pure ACK sctp: Don't advertise IPv4 addresses if ipv6only is set on the socket rxrpc: Fix notification call on completion of discarded calls rocker: fix incorrect error handling in dma_rings_init net: usb: ax88179_178a: fix packet alignment padding net: increment xmit_recursion level in dev_direct_xmit() net: use correct this_cpu primitive in dev_recursion_level net: place xmit recursion in softnet data net: fix memleak in register_netdevice() net: bridge: enfore alignment for ethernet address mld: fix memory leak in ipv6_mc_destroy_dev() ibmveth: Fix max MTU limit apparmor: don't try to replace stale label in 
ptraceme check ALSA: hda/realtek - Enable micmute LED on and HP system ALSA: hda/realtek: Enable mute LED on an HP system ALSA: hda/realtek - Enable the headset of ASUS B9450FA with ALC294 fix a braino in "sparc32: fix register window handling in genregs32_[gs]et()" i2c: tegra: Fix Maximum transfer size i2c: tegra: Add missing kerneldoc for some fields i2c: tegra: Cleanup kerneldoc comments EDAC/amd64: Add Family 17h Model 30h PCI IDs net: sched: export __netdev_watchdog_up() net: bcmgenet: remove HFB_CTRL access mtd: rawnand: marvell: Fix the condition on a return code fanotify: fix ignore mask logic for events on child and on dir block/bio-integrity: don't free 'buf' if bio_integrity_add_page() failed net: be more gentle about silly gso requests coming from user ANDROID: lib/vdso: do not update timespec if clock_getres() fails Revert "ANDROID: fscrypt: add key removal notifier chain" ANDROID: update the ABI xml and qcom whitelist ANDROID: fs: export vfs_{read|write} ANDROID: GKI: update abi definitions now that sdcardfs is gone Revert "ANDROID: sdcardfs: Enable modular sdcardfs" Revert "ANDROID: vfs: Add setattr2 for filesystems with per mount permissions" Revert "ANDROID: vfs: fix export symbol type" Revert "ANDROID: vfs: Add permission2 for filesystems with per mount permissions" Revert "ANDROID: vfs: fix export symbol types" Revert "ANDROID: vfs: add d_canonical_path for stacked filesystem support" Revert "ANDROID: fs: Restore vfs_path_lookup() export" ANDROID: sdcardfs: remove sdcardfs from system Revert "ALSA: usb-audio: Improve frames size computation" ANDROID: Makefile: append BUILD_NUMBER to version string when defined ANDROID: GKI: Update ABI for incremental fs ANDROID: GKI: Update cuttlefish whitelist ANDROID: GKI: Disable INCREMENTAL_FS on x86 too ANDROID: cpufreq: schedutil: drop cache when update skipped due to rate limit Linux 4.19.130 KVM: x86/mmu: Set mmio_value to '0' if reserved #PF can't be generated kvm: x86: Fix reserved bits related 
calculation errors caused by MKTME kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c md: add feature flag MD_FEATURE_RAID0_LAYOUT Revert "dpaa_eth: fix usage as DSA master, try 3" net: core: device_rename: Use rwsem instead of a seqcount sched/rt, net: Use CONFIG_PREEMPTION.patch kretprobe: Prevent triggering kretprobe from within kprobe_flush_task net: octeon: mgmt: Repair filling of RX ring e1000e: Do not wake up the system via WOL if device wakeup is disabled kprobes: Fix to protect kick_kprobe_optimizer() by kprobe_mutex crypto: algboss - don't wait during notifier callback crypto: algif_skcipher - Cap recv SG list at ctx->used drm/i915/icl+: Fix hotplug interrupt disabling after storm detection drm/i915: Whitelist context-local timestamp in the gen9 cmdparser s390: fix syscall_get_error for compat processes mtd: rawnand: tmio: Fix the probe error path mtd: rawnand: mtk: Fix the probe error path mtd: rawnand: plat_nand: Fix the probe error path mtd: rawnand: socrates: Fix the probe error path mtd: rawnand: oxnas: Fix the probe error path mtd: rawnand: oxnas: Add of_node_put() mtd: rawnand: orion: Fix the probe error path mtd: rawnand: xway: Fix the probe error path mtd: rawnand: sharpsl: Fix the probe error path mtd: rawnand: diskonchip: Fix the probe error path mtd: rawnand: Pass a nand_chip object to nand_release() mtd: rawnand: Pass a nand_chip object to nand_scan() block: nr_sects_write(): Disable preemption on seqcount write x86/boot/compressed: Relax sed symbol type regex for LLVM ld.lld drm/dp_mst: Increase ACT retry timeout to 3s ext4: avoid race conditions when remounting with options that change dax ext4: fix partial cluster initialization when splitting extent selinux: fix double free drm/amdgpu: Replace invalid device ID with a valid device ID drm/qxl: Use correct notify port address when creating cursor ring drm/dp_mst: Reformat drm_dp_check_act_status() a bit drm: encoder_slave: fix refcouting error for modules libata: Use per port sync 
for detach arm64: hw_breakpoint: Don't invoke overflow handler on uaccess watchpoints block: Fix use-after-free in blkdev_get() afs: afs_write_end() should change i_size under the right lock afs: Fix non-setting of mtime when writing into mmap bcache: fix potential deadlock problem in btree_gc_coalesce ext4: stop overwrite the errcode in ext4_setup_super perf report: Fix NULL pointer dereference in hists__fprintf_nr_sample_events() usb/ehci-platform: Set PM runtime as active on resume usb: host: ehci-platform: add a quirk to avoid stuck usb/xhci-plat: Set PM runtime as active on resume xdp: Fix xsk_generic_xmit errno net/filter: Permit reading NET in load_bytes_relative when MAC not set x86/idt: Keep spurious entries unset in system_vectors scsi: acornscsi: Fix an error handling path in acornscsi_probe() drm/sun4i: hdmi ddc clk: Fix size of m divider ASoC: rt5645: Add platform-data for Asus T101HA ASoC: Intel: bytcr_rt5640: Add quirk for Toshiba Encore WT10-A tablet ASoC: core: only convert non DPCM link to DPCM link afs: Fix memory leak in afs_put_sysnames() selftests/net: in timestamping, strncpy needs to preserve null byte drivers/perf: hisi: Fix wrong value for all counters enable NTB: ntb_test: Fix bug when counting remote files NTB: perf: Fix race condition when run with ntb_test NTB: perf: Fix support for hardware that doesn't have port numbers NTB: perf: Don't require one more memory window than number of peers NTB: Revert the change to use the NTB device dev for DMA allocations NTB: ntb_tool: reading the link file should not end in a NULL byte ntb_tool: pass correct struct device to dma_alloc_coherent ntb_perf: pass correct struct device to dma_alloc_coherent gfs2: fix use-after-free on transaction ail lists blktrace: fix endianness for blk_log_remap() blktrace: fix endianness in get_pdu_int() blktrace: use errno instead of bi_status selftests/vm/pkeys: fix alloc_random_pkey() to make it really random elfnote: mark all .note sections SHF_ALLOC 
  include/linux/bitops.h: avoid clang shift-count-overflow warnings
  lib/zlib: remove outdated and incorrect pre-increment optimization
  geneve: change from tx_error to tx_dropped on missing metadata
  crypto: omap-sham - add proper load balancing support for multicore
  pinctrl: freescale: imx: Fix an error handling path in 'imx_pinctrl_probe()'
  pinctrl: imxl: Fix an error handling path in 'imx1_pinctrl_core_probe()'
  scsi: ufs: Don't update urgent bkops level when toggling auto bkops
  scsi: iscsi: Fix reference count leak in iscsi_boot_create_kobj
  gfs2: Allow lock_nolock mount to specify jid=X
  openrisc: Fix issue with argument clobbering for clone/fork
  rxrpc: Adjust /proc/net/rxrpc/calls to display call->debug_id not user_ID
  vfio/mdev: Fix reference count leak in add_mdev_supported_type
  ASoC: fsl_asrc_dma: Fix dma_chan leak when config DMA channel failed
  extcon: adc-jack: Fix an error handling path in 'adc_jack_probe()'
  powerpc/4xx: Don't unmap NULL mbase
  of: Fix a refcounting bug in __of_attach_node_sysfs()
  NFSv4.1 fix rpc_call_done assignment for BIND_CONN_TO_SESSION
  net: sunrpc: Fix off-by-one issues in 'rpc_ntop6'
  clk: sprd: return correct type of value for _sprd_pll_recalc_rate
  KVM: PPC: Book3S HV: Ignore kmemleak false positives
  scsi: ufs-qcom: Fix scheduling while atomic issue
  clk: bcm2835: Fix return type of bcm2835_register_gate
  scsi: target: tcmu: Fix a use after free in tcmu_check_expired_queue_cmd()
  ASoC: fix incomplete error-handling in img_i2s_in_probe.
  x86/apic: Make TSC deadline timer detection message visible
  RDMA/iw_cxgb4: cleanup device debugfs entries on ULD remove
  usb: gadget: Fix issue with config_ep_by_speed function
  usb: gadget: fix potential double-free in m66592_probe.
  usb: gadget: lpc32xx_udc: don't dereference ep pointer before null check
  USB: gadget: udc: s3c2410_udc: Remove pointless NULL check in s3c2410_udc_nuke
  usb: dwc2: gadget: move gadget resume after the core is in L0 state
  watchdog: da9062: No need to ping manually before setting timeout
  IB/cma: Fix ports memory leak in cma_configfs
  PCI: dwc: Fix inner MSI IRQ domain registration
  PCI/PTM: Inherit Switch Downstream Port PTM settings from Upstream Port
  dm zoned: return NULL if dmz_get_zone_for_reclaim() fails to find a zone
  powerpc/64s/pgtable: fix an undefined behaviour
  arm64: tegra: Fix ethernet phy-mode for Jetson Xavier
  scsi: target: tcmu: Userspace must not complete queued commands
  clk: samsung: exynos5433: Add IGNORE_UNUSED flag to sclk_i2s1
  fpga: dfl: afu: Corrected error handling levels
  tty: n_gsm: Fix bogus i++ in gsm_data_kick
  USB: host: ehci-mxc: Add error handling in ehci_mxc_drv_probe()
  ASoC: Intel: bytcr_rt5640: Add quirk for Toshiba Encore WT8-A tablet
  drm/msm/mdp5: Fix mdp5_init error path for failed mdp5_kms allocation
  usb/ohci-platform: Fix a warning when hibernating
  vfio-pci: Mask cap zero
  powerpc/ps3: Fix kexec shutdown hang
  powerpc/pseries/ras: Fix FWNMI_VALID off by one
  ipmi: use vzalloc instead of kmalloc for user creation
  HID: Add quirks for Trust Panora Graphic Tablet
  tty: n_gsm: Fix waking up upper tty layer when room available
  tty: n_gsm: Fix SOF skipping
  powerpc/64: Don't initialise init_task->thread.regs
  PCI: Fix pci_register_host_bridge() device_register() error handling
  clk: ti: composite: fix memory leak
  dlm: remove BUG() before panic()
  pinctrl: rockchip: fix memleak in rockchip_dt_node_to_map
  scsi: mpt3sas: Fix double free warnings
  power: supply: smb347-charger: IRQSTAT_D is volatile
  power: supply: lp8788: Fix an error handling path in 'lp8788_charger_probe()'
  scsi: qla2xxx: Fix warning after FC target reset
  PCI/ASPM: Allow ASPM on links to PCIe-to-PCI/PCI-X Bridges
  PCI: rcar: Fix incorrect programming of OB windows
  drivers: base: Fix NULL pointer exception in __platform_driver_probe() if a driver developer is foolish
  serial: amba-pl011: Make sure we initialize the port.lock spinlock
  i2c: pxa: fix i2c_pxa_scream_blue_murder() debug output
  PCI: v3-semi: Fix a memory leak in v3_pci_probe() error handling paths
  staging: sm750fb: add missing case while setting FB_VISUAL
  usb: dwc3: gadget: Properly handle failed kick_transfer
  thermal/drivers/ti-soc-thermal: Avoid dereferencing ERR_PTR
  slimbus: ngd: get drvdata from correct device
  tty: hvc: Fix data abort due to race in hvc_open
  s390/qdio: put thinint indicator after early error
  ALSA: usb-audio: Fix racy list management in output queue
  ALSA: usb-audio: Improve frames size computation
  staging: gasket: Fix mapping refcnt leak when register/store fails
  staging: gasket: Fix mapping refcnt leak when put attribute fails
  firmware: qcom_scm: fix bogous abuse of dma-direct internals
  pinctrl: rza1: Fix wrong array assignment of rza1l_swio_entries
  scsi: qedf: Fix crash when MFW calls for protocol stats while function is still probing
  gpio: dwapb: Append MODULE_ALIAS for platform driver
  ARM: dts: sun8i-h2-plus-bananapi-m2-zero: Fix led polarity
  scsi: qedi: Do not flush offload work if ARP not resolved
  arm64: dts: mt8173: fix unit name warnings
  staging: greybus: fix a missing-check bug in gb_lights_light_config()
  x86/purgatory: Disable various profiling and sanitizing options
  apparmor: fix nnp subset test for unconfined
  scsi: ibmvscsi: Don't send host info in adapter info MAD after LPM
  scsi: sr: Fix sr_probe() missing deallocate of device minor
  ASoC: meson: add missing free_irq() in error path
  apparmor: check/put label on apparmor_sk_clone_security()
  apparmor: fix introspection of of task mode for unconfined tasks
  mksysmap: Fix the mismatch of '.L' symbols in System.map
  NTB: Fix the default port and peer numbers for legacy drivers
  NTB: ntb_pingpong: Choose doorbells based on port number
  yam: fix possible memory leak in yam_init_driver
  pwm: img: Call pm_runtime_put() in pm_runtime_get_sync() failed case
  powerpc/crashkernel: Take "mem=" option into account
  PCI: vmd: Filter resource type bits from shadow register
  nfsd: Fix svc_xprt refcnt leak when setup callback client failed
  powerpc/perf/hv-24x7: Fix inconsistent output values incase multiple hv-24x7 events run
  clk: clk-flexgen: fix clock-critical handling
  scsi: lpfc: Fix lpfc_nodelist leak when processing unsolicited event
  mfd: wm8994: Fix driver operation if loaded as modules
  gpio: dwapb: Call acpi_gpiochip_free_interrupts() on GPIO chip de-registration
  m68k/PCI: Fix a memory leak in an error handling path
  RDMA/mlx5: Add init2init as a modify command
  vfio/pci: fix memory leaks in alloc_perm_bits()
  ps3disk: use the default segment boundary
  PCI: aardvark: Don't blindly enable ASPM L0s and don't write to read-only register
  dm mpath: switch paths in dm_blk_ioctl() code path
  serial: 8250: Fix max baud limit in generic 8250 port
  usblp: poison URBs upon disconnect
  clk: samsung: Mark top ISP and CAM clocks on Exynos542x as critical
  i2c: pxa: clear all master action bits in i2c_pxa_stop_message()
  f2fs: report delalloc reserve as non-free in statfs for project quota
  iio: bmp280: fix compensation of humidity
  scsi: qla2xxx: Fix issue with adapter's stopping state
  PCI: Allow pci_resize_resource() for devices on root bus
  ALSA: isa/wavefront: prevent out of bounds write in ioctl
  ALSA: hda/realtek - Introduce polarity for micmute LED GPIO
  scsi: qedi: Check for buffer overflow in qedi_set_path()
  ARM: integrator: Add some Kconfig selections
  ASoC: davinci-mcasp: Fix dma_chan refcnt leak when getting dma type
  backlight: lp855x: Ensure regulators are disabled on probe failure
  clk: qcom: msm8916: Fix the address location of pll->config_reg
  remoteproc: Fix IDR initialisation in rproc_alloc()
  iio: pressure: bmp280: Tolerate IRQ before registering
  i2c: piix4: Detect secondary SMBus controller on AMD AM4 chipsets
  ASoC: tegra: tegra_wm8903: Support nvidia, headset property
  clk: sunxi: Fix incorrect usage of round_down()
  power: supply: bq24257_charger: Replace depends on REGMAP_I2C with select
  ANDROID: ext4: Optimize match for casefolded encrypted dirs
  ANDROID: ext4: Handle casefolding with encryption
  ANDROID: extcon: Remove redundant EXPORT_SYMBOL_GPL
  ANDROID: update the ABI xml representation
  ANDROID: GKI: cfg80211: add ABI changes for CONFIG_NL80211_TESTMODE
  ANDROID: gki_defconfig: x86: Enable KERNEL_LZ4
  ANDROID: GKI: scripts: Makefile: update the lz4 command
  FROMLIST: f2fs: fix use-after-free when accessing bio->bi_crypt_context
  UPSTREAM: fdt: Update CRC check for rng-seed
  ANDROID: GKI: Update ABI for incremental fs
  ANDROID: GKI: Update whitelist and defconfig for incfs
  ANDROID: Use depmod from the hermetic toolchain
  Linux 4.19.129
  perf symbols: Fix debuginfo search for Ubuntu
  perf probe: Check address correctness by map instead of _etext
  perf probe: Fix to check blacklist address correctly
  perf probe: Do not show the skipped events
  w1: omap-hdq: cleanup to add missing newline for some dev_dbg
  mtd: rawnand: pasemi: Fix the probe error path
  mtd: rawnand: brcmnand: fix hamming oob layout
  sunrpc: clean up properly in gss_mech_unregister()
  sunrpc: svcauth_gss_register_pseudoflavor must reject duplicate registrations.
  kbuild: force to build vmlinux if CONFIG_MODVERSION=y
  powerpc/64s: Save FSCR to init_task.thread.fscr after feature init
  powerpc/64s: Don't let DT CPU features set FSCR_DSCR
  drivers/macintosh: Fix memleak in windfarm_pm112 driver
  ARM: dts: s5pv210: Set keep-power-in-suspend for SDHCI1 on Aries
  ARM: dts: at91: sama5d2_ptc_ek: fix vbus pin
  ARM: dts: exynos: Fix GPIO polarity for thr GalaxyS3 CM36651 sensor's bus
  ARM: tegra: Correct PL310 Auxiliary Control Register initialization
  kernel/cpu_pm: Fix uninitted local in cpu_pm
  alpha: fix memory barriers so that they conform to the specification
  dm crypt: avoid truncating the logical block size
  sparc64: fix misuses of access_process_vm() in genregs32_[sg]et()
  sparc32: fix register window handling in genregs32_[gs]et()
  gnss: sirf: fix error return code in sirf_probe()
  pinctrl: samsung: Save/restore eint_mask over suspend for EINT_TYPE GPIOs
  pinctrl: samsung: Correct setting of eint wakeup mask on s5pv210
  power: vexpress: add suppress_bind_attrs to true
  igb: Report speed and duplex as unknown when device is runtime suspended
  media: ov5640: fix use of destroyed mutex
  b43_legacy: Fix connection problem with WPA3
  b43: Fix connection problem with WPA3
  b43legacy: Fix case where channel status is corrupted
  Bluetooth: hci_bcm: fix freeing not-requested IRQ
  media: go7007: fix a miss of snd_card_free
  carl9170: remove P2P_GO support
  e1000e: Relax condition to trigger reset for ME workaround
  e1000e: Disable TSO for buffer overrun workaround
  PCI: Program MPS for RCiEP devices
  ima: Call ima_calc_boot_aggregate() in ima_eventdigest_init()
  btrfs: fix wrong file range cleanup after an error filling dealloc range
  btrfs: fix error handling when submitting direct I/O bio
  PCI: Generalize multi-function power dependency device links
  PCI: Unify ACS quirk desired vs provided checking
  PCI: Make ACS quirk implementations more uniform
  serial: 8250_pci: Move Pericom IDs to pci_ids.h
  PCI: Add Loongson vendor ID
  x86/amd_nb: Add Family 19h PCI IDs
  PCI: vmd: Add device id for VMD device 8086:9A0B
  PCI: Add Amazon's Annapurna Labs vendor ID
  PCI: Add Genesys Logic, Inc. Vendor ID
  ALSA: lx6464es - add support for LX6464ESe pci express variant
  x86/amd_nb: Add PCI device IDs for family 17h, model 70h
  PCI: mediatek: Add controller support for MT7629
  PCI: Enable NVIDIA HDA controllers
  PCI: Add NVIDIA GPU multi-function power dependencies
  PCI: Add Synopsys endpoint EDDA Device ID
  misc: pci_endpoint_test: Add support to test PCI EP in AM654x
  misc: pci_endpoint_test: Add the layerscape EP device support
  PCI: Move Rohm Vendor ID to generic list
  PCI: Move Synopsys HAPS platform device IDs
  PCI: add USR vendor id and use it in r8169 and w6692 driver
  x86/amd_nb: Add PCI device IDs for family 17h, model 30h
  hwmon/k10temp, x86/amd_nb: Consolidate shared device IDs
  pci:ipmi: Move IPMI PCI class id defines to pci_ids.h
  PCI: Remove unused NFP32xx IDs
  PCI: Add ACS quirk for Intel Root Complex Integrated Endpoints
  PCI: Add ACS quirk for iProc PAXB
  PCI: Avoid FLR for AMD Starship USB 3.0
  PCI: Avoid FLR for AMD Matisse HD Audio & USB 3.0
  PCI: Avoid Pericom USB controller OHCI/EHCI PME# defect
  ext4: fix race between ext4_sync_parent() and rename()
  ext4: fix error pointer dereference
  ext4: fix EXT_MAX_EXTENT/INDEX to check for zeroed eh_max
  evm: Fix possible memory leak in evm_calc_hmac_or_hash()
  ima: Directly assign the ima_default_policy pointer to ima_rules
  ima: Fix ima digest hash table key calculation
  mm: initialize deferred pages with interrupts enabled
  mm: thp: make the THP mapcount atomic against __split_huge_pmd_locked()
  btrfs: send: emit file capabilities after chown
  btrfs: include non-missing as a qualifier for the latest_bdev
  string.h: fix incompatibility between FORTIFY_SOURCE and KASAN
  platform/x86: intel-vbtn: Only blacklist SW_TABLET_MODE on the 9 / "Laptop" chasis-type
  platform/x86: intel-hid: Add a quirk to support HP Spectre X2 (2015)
  platform/x86: hp-wmi: Convert simple_strtoul() to kstrtou32()
  cpuidle: Fix three reference count leaks
  spi: dw: Return any value retrieved from the dma_transfer callback
  mmc: sdhci-esdhc-imx: fix the mask for tuning start point
  ixgbe: fix signed-integer-overflow warning
  mmc: via-sdmmc: Respect the cmd->busy_timeout from the mmc core
  staging: greybus: sdio: Respect the cmd->busy_timeout from the mmc core
  mmc: sdhci-msm: Set SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 quirk
  bcache: fix refcount underflow in bcache_device_free()
  MIPS: Fix IRQ tracing when call handle_fpe() and handle_msa_fpe()
  PCI: Don't disable decoding when mmio_always_on is set
  macvlan: Skip loopback packets in RX handler
  btrfs: qgroup: mark qgroup inconsistent if we're inherting snapshot to a new qgroup
  m68k: mac: Don't call via_flush_cache() on Mac IIfx
  x86/mm: Stop printing BRK addresses
  crypto: stm32/crc32 - fix multi-instance
  crypto: stm32/crc32 - fix run-time self test issue.
  crypto: stm32/crc32 - fix ext4 chksum BUG_ON()
  mips: Add udelay lpj numbers adjustment
  mips: MAAR: Use more precise address mask
  x86/boot: Correct relocation destination on old linkers
  mwifiex: Fix memory corruption in dump_station
  rtlwifi: Fix a double free in _rtl_usb_tx_urb_setup()
  net/mlx5e: IPoIB, Drop multicast packets that this interface sent
  veth: Adjust hard_start offset on redirect XDP frames
  md: don't flush workqueue unconditionally in md_open
  mt76: avoid rx reorder buffer overflow
  net: qed*: Reduce RX and TX default ring count when running inside kdump kernel
  wcn36xx: Fix error handling path in 'wcn36xx_probe()'
  ath10k: Remove msdu from idr when management pkt send fails
  nvme: refine the Qemu Identify CNS quirk
  platform/x86: intel-vbtn: Also handle tablet-mode switch on "Detachable" and "Portable" chassis-types
  platform/x86: intel-vbtn: Do not advertise switches to userspace if they are not there
  platform/x86: intel-vbtn: Split keymap into buttons and switches parts
  platform/x86: intel-vbtn: Use acpi_evaluate_integer()
  xfs: fix duplicate verification from xfs_qm_dqflush()
  xfs: reset buffer write failure state on successful completion
failure state on successful completion kgdb: Fix spurious true from in_dbg_master() mips: cm: Fix an invalid error code of INTVN_*_ERR MIPS: Truncate link address into 32bit for 32bit kernel Crypto/chcr: fix for ccm(aes) failed test xfs: clean up the error handling in xfs_swap_extents powerpc/spufs: fix copy_to_user while atomic net: allwinner: Fix use correct return type for ndo_start_xmit() media: cec: silence shift wrapping warning in __cec_s_log_addrs() net: lpc-enet: fix error return code in lpc_mii_init() drivers/perf: hisi: Fix typo in events attribute array sched/core: Fix illegal RCU from offline CPUs exit: Move preemption fixup up, move blocking operations down lib/mpi: Fix 64-bit MIPS build with Clang net: bcmgenet: set Rx mode before starting netif selftests/bpf: Fix memory leak in extract_build_id() netfilter: nft_nat: return EOPNOTSUPP if type or flags are not supported audit: fix a net reference leak in audit_list_rules_send() Bluetooth: btbcm: Add 2 missing models to subver tables MIPS: Make sparse_init() using top-down allocation media: platform: fcp: Set appropriate DMA parameters media: dvb: return -EREMOTEIO on i2c transfer failure. 
  audit: fix a net reference leak in audit_send_reply()
  dt-bindings: display: mediatek: control dpi pins mode to avoid leakage
  e1000: Distribute switch variables for initialization
  tools api fs: Make xxx__mountpoint() more scalable
  brcmfmac: fix wrong location to get firmware feature
  staging: android: ion: use vmap instead of vm_map_ram
  net: vmxnet3: fix possible buffer overflow caused by bad DMA value in vmxnet3_get_rss()
  x86/kvm/hyper-v: Explicitly align hcall param for kvm_hyperv_exit
  spi: dw: Fix Rx-only DMA transfers
  mmc: meson-mx-sdio: trigger a soft reset after a timeout or CRC error
  batman-adv: Revert "disable ethtool link speed detection when auto negotiation off"
  ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE
  btrfs: do not ignore error from btrfs_next_leaf() when inserting checksums
  clocksource: dw_apb_timer_of: Fix missing clockevent timers
  clocksource: dw_apb_timer: Make CPU-affiliation being optional
  spi: dw: Enable interrupts in accordance with DMA xfer mode
  kgdb: Prevent infinite recursive entries to the debugger
  kgdb: Disable WARN_CONSOLE_UNLOCKED for all kgdb
  Bluetooth: Add SCO fallback for invalid LMP parameters error
  MIPS: Loongson: Build ATI Radeon GPU driver as module
  ixgbe: Fix XDP redirect on archs with PAGE_SIZE above 4K
  arm64: insn: Fix two bugs in encoding 32-bit logical immediates
  spi: dw: Zero DMA Tx and Rx configurations on stack
  arm64: cacheflush: Fix KGDB trap detection
  efi/libstub/x86: Work around LLVM ELF quirk build regression
  net: ena: fix error returning in ena_com_get_hash_function()
  net: atlantic: make hw_get_regs optional
  spi: pxa2xx: Apply CS clk quirk to BXT
  objtool: Ignore empty alternatives
  media: si2157: Better check for running tuner in init
  crypto: ccp -- don't "select" CONFIG_DMADEVICES
  drm: bridge: adv7511: Extend list of audio sample rates
  ACPI: GED: use correct trigger type field in _Exx / _Lxx handling
  KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception
  xen/pvcalls-back: test for errors when calling backend_connect()
  mmc: sdio: Fix potential NULL pointer error in mmc_sdio_init_card()
  ARM: dts: at91: sama5d2_ptc_ek: fix sdmmc0 node description
  mmc: sdhci-msm: Clear tuning done flag while hs400 tuning
  agp/intel: Reinforce the barrier after GTT updates
  perf: Add cond_resched() to task_function_call()
  fat: don't allow to mount if the FAT length == 0
  mm/slub: fix a memory leak in sysfs_slab_add()
  drm/vkms: Hold gem object while still in-use
  Smack: slab-out-of-bounds in vsscanf
  ath9k: Fix general protection fault in ath9k_hif_usb_rx_cb
  ath9x: Fix stack-out-of-bounds Write in ath9k_hif_usb_rx_cb
  ath9k: Fix use-after-free Write in ath9k_htc_rx_msg
  ath9k: Fix use-after-free Read in ath9k_wmi_ctrl_rx
  scsi: megaraid_sas: TM command refire leads to controller firmware crash
  KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  KVM: MIPS: Fix VPN2_MASK definition for variable cpu_vmbits
  KVM: MIPS: Define KVM_ENTRYHI_ASID to cpu_asid_mask(&boot_cpu_data)
  KVM: nVMX: Consult only the "basic" exit reason when routing nested exit
  KVM: nSVM: leave ASID aside in copy_vmcb_control_area
  KVM: nSVM: fix condition for filtering async PF
  video: fbdev: w100fb: Fix a potential double free.
  proc: Use new_inode not new_inode_pseudo
  ovl: initialize error in ovl_copy_xattr
  selftests/net: in rxtimestamp getopt_long needs terminating null entry
  crypto: virtio: Fix dest length calculation in __virtio_crypto_skcipher_do_req()
  crypto: virtio: Fix src/dst scatterlist calculation in __virtio_crypto_skcipher_do_req()
  crypto: virtio: Fix use-after-free in virtio_crypto_skcipher_finalize_req()
  spi: pxa2xx: Fix runtime PM ref imbalance on probe error
  spi: pxa2xx: Balance runtime PM enable/disable on error
  spi: bcm2835: Fix controller unregister order
  spi: pxa2xx: Fix controller unregister order
  spi: Fix controller unregister order
  spi: No need to assign dummy value in spi_unregister_controller()
  x86/speculation: PR_SPEC_FORCE_DISABLE enforcement for indirect branches.
  x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.
  x86/speculation: Add support for STIBP always-on preferred mode
  x86/speculation: Change misspelled STIPB to STIBP
  KVM: x86: only do L1TF workaround on affected processors
  KVM: x86/mmu: Consolidate "is MMIO SPTE" code
  kvm: x86: Fix L1TF mitigation for shadow MMU
  KVM: x86: Fix APIC page invalidation race
  x86/{mce,mm}: Unmap the entire page if the whole page is affected and poisoned
  ALSA: pcm: disallow linking stream to itself
  crypto: cavium/nitrox - Fix 'nitrox_get_first_device()' when ndevlist is fully iterated
  PM: runtime: clk: Fix clk_pm_runtime_get() error path
  spi: bcm-qspi: when tx/rx buffer is NULL set to 0
  spi: bcm2835aux: Fix controller unregister order
  spi: dw: Fix controller unregister order
  nilfs2: fix null pointer dereference at nilfs_segctor_do_construct()
  cgroup, blkcg: Prepare some symbols for module and !CONFIG_CGROUP usages
  ACPI: PM: Avoid using power resources if there are none for D0
  ACPI: GED: add support for _Exx / _Lxx handler methods
  ACPI: CPPC: Fix reference count leak in acpi_cppc_processor_probe()
  ACPI: sysfs: Fix reference count leak in acpi_sysfs_add_hotplug_profile()
  ALSA: usb-audio: Add vendor, product and profile name for HP Thunderbolt Dock
  ALSA: usb-audio: Fix inconsistent card PM state after resume
  ALSA: hda/realtek - add a pintbl quirk for several Lenovo machines
  ALSA: es1688: Add the missed snd_card_free()
  efi/efivars: Add missing kobject_put() in sysfs entry creation error path
  x86/reboot/quirks: Add MacBook6,1 reboot quirk
  x86/speculation: Prevent rogue cross-process SSBD shutdown
  x86/PCI: Mark Intel C620 MROMs as having non-compliant BARs
  x86_64: Fix jiffies ODR violation
  btrfs: tree-checker: Check level for leaves and nodes
  aio: fix async fsync creds
  mm: add kvfree_sensitive() for freeing sensitive data objects
  perf probe: Accept the instance number of kretprobe event
  x86/cpu/amd: Make erratum #1054 a legacy erratum
  RDMA/uverbs: Make the event_queue fds return POLLERR when disassociated
  ath9k_htc: Silence undersized packet warnings
  powerpc/xive: Clear the page tables for the ESB IO mapping
  drivers/net/ibmvnic: Update VNIC protocol version reporting
  Input: synaptics - add a second working PNP_ID for Lenovo T470s
  sched/fair: Don't NUMA balance for kthreads
  ARM: 8977/1: ptrace: Fix mask for thumb breakpoint hook
  Input: mms114 - fix handling of mms345l
  crypto: talitos - fix ECB and CBC algs ivsize
  btrfs: Detect unbalanced tree with empty leaf before crashing btree operations
  btrfs: merge btrfs_find_device and find_device
  lib: Reduce user_access_begin() boundaries in strncpy_from_user() and strnlen_user()
  x86: uaccess: Inhibit speculation past access_ok() in user_access_begin()
  arch/openrisc: Fix issues with access_ok()
  Fix 'acccess_ok()' on alpha and SH
  make 'user_access_begin()' do 'access_ok()'
  selftests: bpf: fix use of undeclared RET_IF macro
  tun: correct header offsets in napi frags mode
  vxlan: Avoid infinite loop when suppressing NS messages with invalid options
  bridge: Avoid infinite loop when suppressing NS messages with invalid options
  net_failover: fixed rollback in net_failover_open()
  ipv6: fix IPV6_ADDRFORM operation logic
  writeback: Drop I_DIRTY_TIME_EXPIRE
  writeback: Fix sync livelock due to b_dirty_time processing
  writeback: Avoid skipping inode writeback
  writeback: Protect inode->i_io_list with inode->i_lock
  Revert "writeback: Avoid skipping inode writeback"
  ANDROID: gki_defconfig: increase vbus_draw to 500mA
  fscrypt: remove stale definition
  fs-verity: remove unnecessary extern keywords
  fs-verity: fix all kerneldoc warnings
  fscrypt: add support for IV_INO_LBLK_32 policies
  fscrypt: make test_dummy_encryption use v2 by default
  fscrypt: support test_dummy_encryption=v2
  fscrypt: add fscrypt_add_test_dummy_key()
  linux/parser.h: add include guards
  fscrypt: remove unnecessary extern keywords
  fscrypt: name all function parameters
  fscrypt: fix all kerneldoc warnings
  ANDROID: Update the ABI
  ANDROID: GKI: power: power-supply: Add POWER_SUPPLY_PROP_CHARGER_STATUS property
  ANDROID: GKI: add dev to usb_gsi_request
  ANDROID: GKI: dma-buf: add dent_count to dma_buf
  ANDROID: Update the ABI xml and whitelist
  ANDROID: GKI: update whitelist
  ANDROID: extcon: Export symbol of `extcon_get_edev_name`
  ANDROID: kbuild: merge more sections with LTO
  UPSTREAM: timekeeping/vsyscall: Update VDSO data unconditionally
  ANDROID: GKI: Revert "genetlink: disallow subscribing to unknown mcast groups"
  BACKPORT: usb: musb: Add support for MediaTek musb controller
  UPSTREAM: usb: musb: Add musb_clearb/w() interface
  UPSTREAM: usb: musb: Add noirq type of dma create interface
  UPSTREAM: usb: musb: Add get/set toggle hooks
  UPSTREAM: dt-bindings: usb: musb: Add support for MediaTek musb controller
  FROMGIT: driver core: Remove unnecessary is_fwnode_dev variable in device_add()
  FROMGIT: driver core: Remove check in driver_deferred_probe_force_trigger()
  FROMGIT: of: platform: Batch fwnode parsing when adding all top level devices
  FROMGIT: BACKPORT: driver core: fw_devlink: Add support for batching fwnode parsing
  BACKPORT: driver core: Look for waiting consumers only for a fwnode's primary device
  BACKPORT: driver core: Add device links from fwnode only for the primary device
  Linux 4.19.128
  Revert "net/mlx5: Annotate mutex destroy for root ns"
  uprobes: ensure that uprobe->offset and ->ref_ctr_offset are properly aligned
  x86/speculation: Add Ivy Bridge to affected list
  x86/speculation: Add SRBDS vulnerability and mitigation documentation
  x86/speculation: Add Special Register Buffer Data Sampling (SRBDS) mitigation
  x86/cpu: Add 'table' argument to cpu_matches()
  x86/cpu: Add a steppings field to struct x86_cpu_id
  nvmem: qfprom: remove incorrect write support
  CDC-ACM: heed quirk also in error handling
  staging: rtl8712: Fix IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK
  tty: hvc_console, fix crashes on parallel open/close
  vt: keyboard: avoid signed integer overflow in k_ascii
  usb: musb: Fix runtime PM imbalance on error
  usb: musb: start session in resume for host port
  iio: vcnl4000: Fix i2c swapped word reading.
  USB: serial: option: add Telit LE910C1-EUX compositions
  USB: serial: usb_wwan: do not resubmit rx urb on fatal errors
  USB: serial: qcserial: add DW5816e QDL support
  net: check untrusted gso_size at kernel entry
  vsock: fix timeout in vsock_accept()
  NFC: st21nfca: add missed kfree_skb() in an error path
  net: usb: qmi_wwan: add Telit LE910C1-EUX composition
  l2tp: do not use inet_hash()/inet_unhash()
  l2tp: add sk_family checks to l2tp_validate_socket
  devinet: fix memleak in inetdev_init()
  Revert "ANDROID: Remove default y on BRIDGE_IGMP_SNOOPING"
  ANDROID: Update the ABI xml and whitelist
  ANDROID: GKI: update whitelist
  ANDROID: arch: arm64: vdso: export the symbols for time()
  ANDROID: Incremental fs: Remove dependency on PKCS7_MESSAGE_PARSER
  ANDROID: dm-bow: Add block_size option
  f2fs: attach IO flags to the missing cases
  f2fs: add node_io_flag for bio flags likewise data_io_flag
  f2fs: remove unused parameter of f2fs_put_rpages_mapping()
  f2fs: handle readonly filesystem in f2fs_ioc_shutdown()
  f2fs: avoid utf8_strncasecmp() with unstable name
  f2fs: don't return vmalloc() memory from f2fs_kmalloc()
  ANDROID: GKI: set CONFIG_BLK_DEV_LOOP_MIN_COUNT to 16
  ANDROID: Incremental fs: Cache successful hash calculations
  ANDROID: Incremental fs: Fix four error-path bugs
  Linux 4.19.127
  net: smsc911x: Fix runtime PM imbalance on error
  net: ethernet: stmmac: Enable interface clocks on probe for IPQ806x
  net/ethernet/freescale: rework quiesce/activate for ucc_geth
  null_blk: return error for invalid zone size
  s390/mm: fix set_huge_pte_at() for empty ptes
  drm/edid: Add Oculus Rift S to non-desktop list
  net: bmac: Fix read of MAC address from ROM
  x86/mmiotrace: Use cpumask_available() for cpumask_var_t variables
  i2c: altera: Fix race between xfer_msg and isr thread
  evm: Fix RCU list related warnings
  ARC: [plat-eznps]: Restrict to CONFIG_ISA_ARCOMPACT
  ARC: Fix ICCM & DCCM runtime size checks
  s390/ftrace: save traced function caller
  spi: dw: use "smp_mb()" to avoid sending spi data error
  powerpc/powernv: Avoid re-registration of imc debugfs directory
  scsi: hisi_sas: Check sas_port before using it
  drm/i915: fix port checks for MST support on gen >= 11
  airo: Fix read overflows sending packets
  net: dsa: mt7530: set CPU port to fallback mode
  scsi: ufs: Release clock if DMA map fails
  mmc: fix compilation of user API
  kernel/relay.c: handle alloc_percpu returning NULL in relay_open
  p54usb: add AirVasT USB stick device-id
  HID: i2c-hid: add Schneider SCL142ALM to descriptor override
  HID: sony: Fix for broken buttons on DS3 USB dongles
  mm: Fix mremap not considering huge pmd devmap
  libnvdimm: Fix endian conversion issues
  Revert "cgroup: Add memory barriers to plug cgroup_rstat_updated() race window"
  f2fs: fix retry logic in f2fs_write_cache_pages()
  ANDROID: Update ABI representation
  Linux 4.19.126
  mm/vmalloc.c: don't dereference possible NULL pointer in __vunmap()
  netfilter: nf_conntrack_pptp: fix compilation warning with W=1 build
  bonding: Fix reference count leak in bond_sysfs_slave_add.
  crypto: chelsio/chtls: properly set tp->lsndtime
  qlcnic: fix missing release in qlcnic_83xx_interrupt_test.
  xsk: Add overflow check for u64 division, stored into u32
  bnxt_en: Fix accumulation of bp->net_stats_prev.
  esp6: get the right proto for transport mode in esp6_gso_encap
  netfilter: nf_conntrack_pptp: prevent buffer overflows in debug code
  netfilter: nfnetlink_cthelper: unbreak userspace helper support
  netfilter: ipset: Fix subcounter update skip
  netfilter: nft_reject_bridge: enable reject with bridge vlan
  ip_vti: receive ipip packet by calling ip_tunnel_rcv
  vti4: eliminated some duplicate code.
  xfrm: fix error in comment
  xfrm: fix a NULL-ptr deref in xfrm_local_error
  xfrm: fix a warning in xfrm_policy_insert_list
  xfrm interface: fix oops when deleting a x-netns interface
  xfrm: call xfrm_output_gso when inner_protocol is set in xfrm_output
  xfrm: allow to accept packets with ipv6 NEXTHDR_HOP in xfrm_input
  copy_xstate_to_kernel(): don't leave parts of destination uninitialized
  x86/dma: Fix max PFN arithmetic overflow on 32 bit systems
  mac80211: mesh: fix discovery timer re-arming issue / crash
  RDMA/core: Fix double destruction of uobject
  mmc: core: Fix recursive locking issue in CQE recovery path
  parisc: Fix kernel panic in mem_init()
  iommu: Fix reference count leak in iommu_group_alloc.
  include/asm-generic/topology.h: guard cpumask_of_node() macro argument
  fs/binfmt_elf.c: allocate initialized memory in fill_thread_core_info()
  mm: remove VM_BUG_ON(PageSlab()) from page_mapcount()
  IB/ipoib: Fix double free of skb in case of multicast traffic in CM mode
  libceph: ignore pool overlay and cache logic on redirects
  ALSA: hda/realtek - Add new codec supported for ALC287
  ALSA: usb-audio: Quirks for Gigabyte TRX40 Aorus Master onboard audio
  exec: Always set cap_ambient in cap_bprm_set_creds
  ALSA: usb-audio: mixer: volume quirk for ESS Technology Asus USB DAC
  ALSA: hda/realtek - Add a model for Thinkpad T570 without DAC workaround
  ALSA: hwdep: fix a left shifting 1 by 31 UB bug
  RDMA/pvrdma: Fix missing pci disable in pvrdma_pci_probe()
  mmc: block: Fix use-after-free issue for rpmb
  ARM: dts: bcm: HR2: Fix PPI interrupt types
  ARM: dts: bcm2835-rpi-zero-w: Fix led polarity
  ARM: dts/imx6q-bx50v3: Set display interface clock parents
  IB/qib: Call kobject_put() when kobject_init_and_add() fails
  gpio: exar: Fix bad handling for ida_simple_get error path
  ARM: uaccess: fix DACR mismatch with nested exceptions
  ARM: uaccess: integrate uaccess_save and uaccess_restore
  ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h
  ARM: 8843/1: use unified assembler in headers
  ARM: 8970/1: decompressor: increase tag size
  Input: synaptics-rmi4 - fix error return code in rmi_driver_probe()
  Input: synaptics-rmi4 - really fix attn_data use-after-free
  Input: i8042 - add ThinkPad S230u to i8042 reset list
  Input: dlink-dir685-touchkeys - fix a typo in driver name
  Input: xpad - add custom init packet for Xbox One S controllers
  Input: evdev - call input_flush_device() on release(), not flush()
  Input: usbtouchscreen - add support for BonXeon TP
  samples: bpf: Fix build error
  cifs: Fix null pointer check in cifs_read
  riscv: stacktrace: Fix undefined reference to `walk_stackframe'
  IB/i40iw: Remove bogus call to netdev_master_upper_dev_get()
  net: freescale: select CONFIG_FIXED_PHY where needed
  usb: gadget: legacy: fix redundant initialization warnings
  usb: dwc3: pci: Enable extcon driver for Intel Merrifield
  cachefiles: Fix race between read_waiter and read_copier involving op->to_do
  gfs2: move privileged user check to gfs2_quota_lock_check
  net: microchip: encx24j600: add missed kthread_stop
  ALSA: usb-audio: add mapping for ASRock TRX40 Creator
  gpio: tegra: mask GPIO IRQs during IRQ shutdown
  ARM: dts: rockchip: fix pinctrl sub nodename for spi in rk322x.dtsi
  ARM: dts: rockchip: swap clock-names of gpu nodes
  arm64: dts: rockchip: swap interrupts interrupt-names rk3399 gpu node
  arm64: dts: rockchip: fix status for &gmac2phy in rk3328-evb.dts
  ARM: dts: rockchip: fix phy nodename for rk3228-evb
  mlxsw: spectrum: Fix use-after-free of split/unsplit/type_set in case reload fails
  net/mlx4_core: fix a memory leak bug.
  net: sun: fix missing release regions in cas_init_one().
net/mlx5: Annotate mutex destroy for root ns net/mlx5e: Update netdev txq on completions during closure sctp: Start shutdown on association restart if in SHUTDOWN-SENT state and socket is closed sctp: Don't add the shutdown timer if its already been added r8152: support additional Microsoft Surface Ethernet Adapter variant net sched: fix reporting the first-time use timestamp net: revert "net: get rid of an signed integer overflow in ip_idents_reserve()" net: qrtr: Fix passing invalid reference to qrtr_local_enqueue() net/mlx5: Add command entry handling completion net: ipip: fix wrong address family in init error path net: inet_csk: Fix so_reuseport bind-address cache in tb->fast* __netif_receive_skb_core: pass skb by reference net: dsa: mt7530: fix roaming from DSA user ports dpaa_eth: fix usage as DSA master, try 3 ax25: fix setsockopt(SO_BINDTODEVICE) ANDROID: modules: fix lockprove warning FROMGIT: USB: dummy-hcd: use configurable endpoint naming scheme UPSTREAM: usb: raw-gadget: fix null-ptr-deref when reenabling endpoints UPSTREAM: usb: raw-gadget: documentation updates UPSTREAM: usb: raw-gadget: support stalling/halting/wedging endpoints UPSTREAM: usb: raw-gadget: fix gadget endpoint selection UPSTREAM: usb: raw-gadget: improve uapi headers comments UPSTREAM: usb: raw-gadget: fix return value of ep read ioctls UPSTREAM: usb: raw-gadget: fix raw_event_queue_fetch locking UPSTREAM: usb: raw-gadget: Fix copy_to/from_user() checks f2fs: fix wrong discard space f2fs: compress: don't compress any datas after cp stop f2fs: remove unneeded return value of __insert_discard_tree() f2fs: fix wrong value of tracepoint parameter f2fs: protect new segment allocation in expand_inode_data f2fs: code cleanup by removing ifdef macro surrounding writeback: Avoid skipping inode writeback ANDROID: GKI: Update the ABI ANDROID: GKI: update whitelist ANDROID: GKI: support mm_event for FS/IO/UFS path ANDROID: net: bpf: permit redirect from ingress L3 to egress L2 devices at near 
max mtu FROMGIT: driver core: Update device link status correctly for SYNC_STATE_ONLY links UPSTREAM: driver core: Fix handling of SYNC_STATE_ONLY + STATELESS device links BACKPORT: driver core: Fix SYNC_STATE_ONLY device link implementation ANDROID: Bulk update the ABI xml and qcom whitelist Revert "ANDROID: Incremental fs: Avoid continually recalculating hashes" f2fs: avoid inifinite loop to wait for flushing node pages at cp_error f2fs: compress: fix zstd data corruption f2fs: add compressed/gc data read IO stat f2fs: fix potential use-after-free issue f2fs: compress: don't handle non-compressed data in workqueue f2fs: remove redundant assignment to variable err f2fs: refactor resize_fs to avoid meta updates in progress f2fs: use round_up to enhance calculation f2fs: introduce F2FS_IOC_RESERVE_COMPRESS_BLOCKS f2fs: Avoid double lock for cp_rwsem during checkpoint f2fs: report delalloc reserve as non-free in statfs for project quota f2fs: Fix wrong stub helper update_sit_info f2fs: compress: let lz4 compressor handle output buffer budget properly f2fs: remove blk_plugging in block_operations f2fs: introduce F2FS_IOC_RELEASE_COMPRESS_BLOCKS f2fs: shrink spinlock coverage f2fs: correctly fix the parent inode number during fsync() f2fs: introduce mempool for {,de}compress intermediate page allocation f2fs: introduce f2fs_bmap_compress() f2fs: support fiemap on compressed inode f2fs: support partial truncation on compressed inode f2fs: remove redundant compress inode check f2fs: use strcmp() in parse_options() f2fs: Use the correct style for SPDX License Identifier Conflicts: Documentation/devicetree/bindings Documentation/devicetree/bindings/display/mediatek/mediatek,dpi.txt Documentation/devicetree/bindings/usb/dwc3.txt drivers/media/v4l2-core/v4l2-ctrls.c drivers/mmc/core/queue.c drivers/mmc/host/sdhci-msm.c drivers/scsi/ufs/ufs-qcom.c drivers/slimbus/qcom-ngd-ctrl.c drivers/usb/gadget/composite.c fs/crypto/keyring.c fs/f2fs/data.c include/linux/fs.h 
include/linux/usb/gadget.h include/uapi/linux/v4l2-controls.h kernel/sched/cpufreq_schedutil.c kernel/sched/fair.c kernel/time/tick-sched.c mm/vmalloc.c net/netlink/genetlink.c net/qrtr/qrtr.c sound/core/compress_offload.c sound/soc/soc-compress.c Fixed errors: drivers/scsi/ufs/ufshcd.c drivers/soc/qcom/rq_stats.c Change-Id: I06ea6a6c3f239045e2947f27af617aa6f523bfdb Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
/*
 * Copyright (C) 2009  Red Hat, Inc.
 *
 * This work is licensed under the terms of the GNU GPL, version 2. See
 * the COPYING file in the top-level directory.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/sched/coredump.h>
#include <linux/sched/numa_balancing.h>
#include <linux/highmem.h>
#include <linux/hugetlb.h>
#include <linux/mmu_notifier.h>
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/shrinker.h>
#include <linux/mm_inline.h>
#include <linux/swapops.h>
#include <linux/dax.h>
#include <linux/khugepaged.h>
#include <linux/freezer.h>
#include <linux/pfn_t.h>
#include <linux/mman.h>
#include <linux/memremap.h>
#include <linux/pagemap.h>
#include <linux/debugfs.h>
#include <linux/migrate.h>
#include <linux/hashtable.h>
#include <linux/userfaultfd_k.h>
#include <linux/page_idle.h>
#include <linux/shmem_fs.h>
#include <linux/oom.h>
#include <linux/page_owner.h>

#include <asm/tlb.h>
#include <asm/pgalloc.h>
#include "internal.h"

/*
 * By default, transparent hugepage support is disabled in order to avoid
 * risking an increased memory footprint for applications that are not
 * guaranteed to benefit from it. When transparent hugepage support is
 * enabled, it is for all mappings, and khugepaged scans all mappings.
 * Defrag is invoked by khugepaged hugepage allocations and by page faults
 * for all hugepage allocations.
 */
unsigned long transparent_hugepage_flags __read_mostly =
#ifdef CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS
	(1<<TRANSPARENT_HUGEPAGE_FLAG)|
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE_MADVISE
	(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)|
#endif
	(1<<TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG)|
	(1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG)|
	(1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);

static struct shrinker deferred_split_shrinker;

static atomic_t huge_zero_refcount;
struct page *huge_zero_page __read_mostly;

bool transparent_hugepage_enabled(struct vm_area_struct *vma)
{
	if (vma_is_anonymous(vma))
		return __transparent_hugepage_enabled(vma);
	if (vma_is_shmem(vma) && shmem_huge_enabled(vma))
		return __transparent_hugepage_enabled(vma);

	return false;
}

static struct page *get_huge_zero_page(void)
{
	struct page *zero_page;
retry:
	if (likely(atomic_inc_not_zero(&huge_zero_refcount)))
		return READ_ONCE(huge_zero_page);

	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
			HPAGE_PMD_ORDER);
	if (!zero_page) {
		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
		return NULL;
	}
	count_vm_event(THP_ZERO_PAGE_ALLOC);
	preempt_disable();
	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
		preempt_enable();
		__free_pages(zero_page, compound_order(zero_page));
		goto retry;
	}

	/* We take additional reference here. It will be put back by shrinker */
	atomic_set(&huge_zero_refcount, 2);
	preempt_enable();
	return READ_ONCE(huge_zero_page);
}

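/*
 * get_huge_zero_page() above uses a publish-or-retry pattern: a lock-free
 * fast path that takes a reference only while the count is nonzero, a
 * single cmpxchg to publish the freshly allocated page, and a free-and-retry
 * for the loser of the race. The same scheme can be sketched in userspace
 * with C11 atomics; the names below (get_shared_obj, shared_obj, refcount)
 * are hypothetical and stand in for the kernel's huge_zero_page machinery.
 */

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical userspace model of get_huge_zero_page()'s lazy-init scheme. */
static _Atomic(int *) shared_obj;
static atomic_int refcount;

static int *get_shared_obj(void)
{
	int *obj, *expected;

retry:
	/* Fast path: take a reference only if the count is already nonzero
	 * (the userspace analogue of atomic_inc_not_zero()). */
	for (int c = atomic_load(&refcount); c != 0; ) {
		if (atomic_compare_exchange_weak(&refcount, &c, c + 1))
			return atomic_load(&shared_obj);
	}

	obj = malloc(sizeof(*obj));
	if (!obj)
		return NULL;
	*obj = 42;

	expected = NULL;
	if (!atomic_compare_exchange_strong(&shared_obj, &expected, obj)) {
		free(obj);	/* lost the publish race: drop our copy */
		goto retry;
	}
	/* One reference for the caller, one kept back (the kernel's is
	 * released later by the shrinker). */
	atomic_store(&refcount, 2);
	return obj;
}
```

The kernel additionally disables preemption around the publish so the
shrinker cannot observe a half-initialized state; the sketch omits that.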
static void put_huge_zero_page(void)
{
	/*
	 * Counter should never go to zero here. Only shrinker can put
	 * last reference.
	 */
	BUG_ON(atomic_dec_and_test(&huge_zero_refcount));
}

struct page *mm_get_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		return READ_ONCE(huge_zero_page);

	if (!get_huge_zero_page())
		return NULL;

	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();

	return READ_ONCE(huge_zero_page);
}

void mm_put_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();
}

static unsigned long shrink_huge_zero_page_count(struct shrinker *shrink,
					struct shrink_control *sc)
{
	/* we can free zero page only if last reference remains */
	return atomic_read(&huge_zero_refcount) == 1 ? HPAGE_PMD_NR : 0;
}

static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
		struct page *zero_page = xchg(&huge_zero_page, NULL);
		BUG_ON(zero_page == NULL);
		__free_pages(zero_page, compound_order(zero_page));
		return HPAGE_PMD_NR;
	}

	return 0;
}

static struct shrinker huge_zero_page_shrinker = {
	.count_objects = shrink_huge_zero_page_count,
	.scan_objects = shrink_huge_zero_page_scan,
	.seeks = DEFAULT_SEEKS,
};

#ifdef CONFIG_SYSFS
static ssize_t enabled_show(struct kobject *kobj,
			    struct kobj_attribute *attr, char *buf)
{
	if (test_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "[always] madvise never\n");
	else if (test_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "always [madvise] never\n");
	else
		return sprintf(buf, "always madvise [never]\n");
}

static ssize_t enabled_store(struct kobject *kobj,
			     struct kobj_attribute *attr,
			     const char *buf, size_t count)
{
	ssize_t ret = count;

	if (sysfs_streq(buf, "always")) {
		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "madvise")) {
		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "never")) {
		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
	} else
		ret = -EINVAL;

	if (ret > 0) {
		int err = start_stop_khugepaged();
		if (err)
			ret = err;
	}
	return ret;
}
static struct kobj_attribute enabled_attr =
	__ATTR(enabled, 0644, enabled_show, enabled_store);

ssize_t single_hugepage_flag_show(struct kobject *kobj,
				struct kobj_attribute *attr, char *buf,
				enum transparent_hugepage_flag flag)
{
	return sprintf(buf, "%d\n",
		       !!test_bit(flag, &transparent_hugepage_flags));
}

ssize_t single_hugepage_flag_store(struct kobject *kobj,
				 struct kobj_attribute *attr,
				 const char *buf, size_t count,
				 enum transparent_hugepage_flag flag)
{
	unsigned long value;
	int ret;

	ret = kstrtoul(buf, 10, &value);
	if (ret < 0)
		return ret;
	if (value > 1)
		return -EINVAL;

	if (value)
		set_bit(flag, &transparent_hugepage_flags);
	else
		clear_bit(flag, &transparent_hugepage_flags);

	return count;
}

static ssize_t defrag_show(struct kobject *kobj,
			   struct kobj_attribute *attr, char *buf)
{
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "[always] defer defer+madvise madvise never\n");
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "always [defer] defer+madvise madvise never\n");
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "always defer [defer+madvise] madvise never\n");
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
		return sprintf(buf, "always defer defer+madvise [madvise] never\n");
	return sprintf(buf, "always defer defer+madvise madvise [never]\n");
}

static ssize_t defrag_store(struct kobject *kobj,
			    struct kobj_attribute *attr,
			    const char *buf, size_t count)
{
	if (sysfs_streq(buf, "always")) {
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "defer+madvise")) {
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "defer")) {
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "madvise")) {
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
	} else if (sysfs_streq(buf, "never")) {
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags);
	} else
		return -EINVAL;

	return count;
}
static struct kobj_attribute defrag_attr =
	__ATTR(defrag, 0644, defrag_show, defrag_store);

static ssize_t use_zero_page_show(struct kobject *kobj,
		struct kobj_attribute *attr, char *buf)
{
	return single_hugepage_flag_show(kobj, attr, buf,
				TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
}
static ssize_t use_zero_page_store(struct kobject *kobj,
		struct kobj_attribute *attr, const char *buf, size_t count)
{
	return single_hugepage_flag_store(kobj, attr, buf, count,
				 TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
}
static struct kobj_attribute use_zero_page_attr =
	__ATTR(use_zero_page, 0644, use_zero_page_show, use_zero_page_store);

static ssize_t hpage_pmd_size_show(struct kobject *kobj,
		struct kobj_attribute *attr, char *buf)
{
	return sprintf(buf, "%lu\n", HPAGE_PMD_SIZE);
}
static struct kobj_attribute hpage_pmd_size_attr =
	__ATTR_RO(hpage_pmd_size);

#ifdef CONFIG_DEBUG_VM
static ssize_t debug_cow_show(struct kobject *kobj,
				struct kobj_attribute *attr, char *buf)
{
	return single_hugepage_flag_show(kobj, attr, buf,
				TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG);
}
static ssize_t debug_cow_store(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       const char *buf, size_t count)
{
	return single_hugepage_flag_store(kobj, attr, buf, count,
				 TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG);
}
static struct kobj_attribute debug_cow_attr =
	__ATTR(debug_cow, 0644, debug_cow_show, debug_cow_store);
#endif /* CONFIG_DEBUG_VM */

static struct attribute *hugepage_attr[] = {
	&enabled_attr.attr,
	&defrag_attr.attr,
	&use_zero_page_attr.attr,
	&hpage_pmd_size_attr.attr,
#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
	&shmem_enabled_attr.attr,
#endif
#ifdef CONFIG_DEBUG_VM
	&debug_cow_attr.attr,
#endif
	NULL,
};

static const struct attribute_group hugepage_attr_group = {
	.attrs = hugepage_attr,
};

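/*
 * enabled_store() above encodes three mutually exclusive states ("always",
 * "madvise", "never") in two bits of one flags word, clearing one bit
 * before setting the other so at most one is ever set. A minimal userspace
 * sketch of the same encoding follows; the names (thp_set_enabled,
 * THP_ALWAYS_BIT, ...) are hypothetical stand-ins for the kernel's
 * set_bit()/clear_bit() on transparent_hugepage_flags.
 */

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the enabled_show()/enabled_store() state machine. */
enum { THP_ALWAYS_BIT = 0, THP_MADVISE_BIT = 1 };

static unsigned long thp_flags;

static int thp_set_enabled(const char *val)
{
	if (strcmp(val, "always") == 0) {
		thp_flags &= ~(1UL << THP_MADVISE_BIT);	/* clear first, */
		thp_flags |= 1UL << THP_ALWAYS_BIT;	/* then set */
	} else if (strcmp(val, "madvise") == 0) {
		thp_flags &= ~(1UL << THP_ALWAYS_BIT);
		thp_flags |= 1UL << THP_MADVISE_BIT;
	} else if (strcmp(val, "never") == 0) {
		thp_flags &= ~(1UL << THP_ALWAYS_BIT);
		thp_flags &= ~(1UL << THP_MADVISE_BIT);
	} else {
		return -1;	/* mirrors the kernel's -EINVAL */
	}
	return 0;
}

static const char *thp_show_enabled(void)
{
	if (thp_flags & (1UL << THP_ALWAYS_BIT))
		return "[always] madvise never";
	if (thp_flags & (1UL << THP_MADVISE_BIT))
		return "always [madvise] never";
	return "always madvise [never]";
}
```

An unrecognized value leaves the flags untouched, which is exactly why the
kernel version can simply return -EINVAL without any rollback.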
static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
{
	int err;

	*hugepage_kobj = kobject_create_and_add("transparent_hugepage", mm_kobj);
	if (unlikely(!*hugepage_kobj)) {
		pr_err("failed to create transparent hugepage kobject\n");
		return -ENOMEM;
	}

	err = sysfs_create_group(*hugepage_kobj, &hugepage_attr_group);
	if (err) {
		pr_err("failed to register transparent hugepage group\n");
		goto delete_obj;
	}

	err = sysfs_create_group(*hugepage_kobj, &khugepaged_attr_group);
	if (err) {
		pr_err("failed to register transparent hugepage group\n");
		goto remove_hp_group;
	}

	return 0;

remove_hp_group:
	sysfs_remove_group(*hugepage_kobj, &hugepage_attr_group);
delete_obj:
	kobject_put(*hugepage_kobj);
	return err;
}

static void __init hugepage_exit_sysfs(struct kobject *hugepage_kobj)
{
	sysfs_remove_group(hugepage_kobj, &khugepaged_attr_group);
	sysfs_remove_group(hugepage_kobj, &hugepage_attr_group);
	kobject_put(hugepage_kobj);
}
#else
static inline int hugepage_init_sysfs(struct kobject **hugepage_kobj)
{
	return 0;
}

static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
{
}
#endif /* CONFIG_SYSFS */

static int __init hugepage_init(void)
{
	int err;
	struct kobject *hugepage_kobj;

	if (!has_transparent_hugepage()) {
		transparent_hugepage_flags = 0;
		return -EINVAL;
	}

	/*
	 * hugepages can't be allocated by the buddy allocator
	 */
	MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER >= MAX_ORDER);
	/*
	 * we use page->mapping and page->index in second tail page
	 * as list_head: assuming THP order >= 2
	 */
	MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER < 2);

	err = hugepage_init_sysfs(&hugepage_kobj);
	if (err)
		goto err_sysfs;

	err = khugepaged_init();
	if (err)
		goto err_slab;

	err = register_shrinker(&huge_zero_page_shrinker);
	if (err)
		goto err_hzp_shrinker;
	err = register_shrinker(&deferred_split_shrinker);
	if (err)
		goto err_split_shrinker;

	/*
	 * By default disable transparent hugepages on smaller systems,
	 * where the extra memory used could hurt more than TLB overhead
	 * is likely to save. The admin can still enable it through /sys.
	 */
	if (totalram_pages < (512 << (20 - PAGE_SHIFT))) {
		transparent_hugepage_flags = 0;
		return 0;
	}

	err = start_stop_khugepaged();
	if (err)
		goto err_khugepaged;

	return 0;
err_khugepaged:
	unregister_shrinker(&deferred_split_shrinker);
err_split_shrinker:
	unregister_shrinker(&huge_zero_page_shrinker);
err_hzp_shrinker:
	khugepaged_destroy();
err_slab:
	hugepage_exit_sysfs(hugepage_kobj);
err_sysfs:
	return err;
}
subsys_initcall(hugepage_init);

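/*
 * The small-system cutoff in hugepage_init() compares totalram_pages
 * (a page count) against 512 MB expressed in pages: 512 << (20 - PAGE_SHIFT).
 * A quick sketch of that arithmetic; mb_to_pages() is a hypothetical helper
 * for illustration, not a kernel API.
 */

```c
#include <assert.h>

/* mb << 20 would be the size in bytes; dividing by the page size
 * (>> page_shift) gives pages, so the two shifts fold into one:
 * mb << (20 - page_shift) pages. */
static unsigned long mb_to_pages(unsigned long mb, unsigned int page_shift)
{
	return mb << (20 - page_shift);
}
```

With 4 KiB pages (PAGE_SHIFT == 12) the threshold is 512 << 8 == 131072
pages; with 64 KiB pages it is only 8192 pages.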
static int __init setup_transparent_hugepage(char *str)
{
	int ret = 0;
	if (!str)
		goto out;
	if (!strcmp(str, "always")) {
		set_bit(TRANSPARENT_HUGEPAGE_FLAG,
			&transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
			  &transparent_hugepage_flags);
		ret = 1;
	} else if (!strcmp(str, "madvise")) {
		clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
			  &transparent_hugepage_flags);
		set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
			&transparent_hugepage_flags);
		ret = 1;
	} else if (!strcmp(str, "never")) {
		clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
			  &transparent_hugepage_flags);
		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
			  &transparent_hugepage_flags);
		ret = 1;
	}
out:
	if (!ret)
		pr_warn("transparent_hugepage= cannot parse, ignored\n");
	return ret;
}
__setup("transparent_hugepage=", setup_transparent_hugepage);

pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
{
	if (likely(vma->vm_flags & VM_WRITE))
		pmd = pmd_mkwrite(pmd);
	return pmd;
}

static inline struct list_head *page_deferred_list(struct page *page)
{
	/* ->lru in the tail pages is occupied by compound_head. */
	return &page[2].deferred_list;
}

void prep_transhuge_page(struct page *page)
{
	/*
	 * we use page->mapping and page->index in second tail page
	 * as list_head: assuming THP order >= 2
	 */

	INIT_LIST_HEAD(page_deferred_list(page));
	set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
}

static unsigned long __thp_get_unmapped_area(struct file *filp,
		unsigned long addr, unsigned long len,
		loff_t off, unsigned long flags, unsigned long size)
{
	loff_t off_end = off + len;
	loff_t off_align = round_up(off, size);
	unsigned long len_pad, ret;

	if (off_end <= off_align || (off_end - off_align) < size)
		return 0;

	len_pad = len + size;
	if (len_pad < len || (off + len_pad) < off)
		return 0;

	ret = current->mm->get_unmapped_area(filp, addr, len_pad,
					      off >> PAGE_SHIFT, flags);

	/*
	 * The failure might be due to length padding. The caller will retry
	 * without the padding.
	 */
	if (IS_ERR_VALUE(ret))
		return 0;

	/*
	 * Do not try to align to THP boundary if allocation at the address
	 * hint succeeds.
	 */
	if (ret == addr)
		return addr;

	ret += (off - ret) & (size - 1);
	return ret;
}

unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	unsigned long ret;
	loff_t off = (loff_t)pgoff << PAGE_SHIFT;

	if (!IS_DAX(filp->f_mapping->host) || !IS_ENABLED(CONFIG_FS_DAX_PMD))
		goto out;

	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
	if (ret)
		return ret;
out:
	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);

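/*
 * The final adjustment in __thp_get_unmapped_area(),
 * ret += (off - ret) & (size - 1), bumps the returned address within the
 * padded region so that the mapping address and the file offset share
 * alignment modulo "size" (a power of two such as PMD_SIZE); only then can
 * the range be mapped with huge PMD entries. A hypothetical standalone
 * helper (thp_align) models just that arithmetic.
 */

```c
#include <assert.h>

/* Model of the alignment step: advance "ret" by the distance (mod size)
 * from ret's phase to off's phase, so (ret' - off) % size == 0. */
static unsigned long thp_align(unsigned long ret, unsigned long off,
			       unsigned long size)
{
	return ret + ((off - ret) & (size - 1));
}
```

Because the caller asked for len + size bytes, there is always room to move
forward by up to size - 1 bytes without leaving the reserved area.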
static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
			struct page *page, gfp_t gfp)
{
	struct vm_area_struct *vma = vmf->vma;
	struct mem_cgroup *memcg;
	pgtable_t pgtable;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	vm_fault_t ret = 0;

	VM_BUG_ON_PAGE(!PageCompound(page), page);

	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
		put_page(page);
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}

	pgtable = pte_alloc_one(vma->vm_mm, haddr);
	if (unlikely(!pgtable)) {
		ret = VM_FAULT_OOM;
		goto release;
	}

	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
	/*
	 * The memory barrier inside __SetPageUptodate makes sure that
	 * clear_huge_page writes become visible before the set_pmd_at()
	 * write.
	 */
	__SetPageUptodate(page);

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	if (unlikely(!pmd_none(*vmf->pmd))) {
		goto unlock_release;
	} else {
		pmd_t entry;

		ret = check_stable_address_space(vma->vm_mm);
		if (ret)
			goto unlock_release;

		/* Deliver the page fault to userland */
		if (userfaultfd_missing(vma)) {
			vm_fault_t ret2;

			spin_unlock(vmf->ptl);
			mem_cgroup_cancel_charge(page, memcg, true);
			put_page(page);
			pte_free(vma->vm_mm, pgtable);
			ret2 = handle_userfault(vmf, VM_UFFD_MISSING);
			VM_BUG_ON(ret2 & VM_FAULT_FALLBACK);
			return ret2;
		}

		entry = mk_huge_pmd(page, vma->vm_page_prot);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		page_add_new_anon_rmap(page, vma, haddr, true);
		mem_cgroup_commit_charge(page, memcg, false, true);
		lru_cache_add_active_or_unevictable(page, vma);
		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
		mm_inc_nr_ptes(vma->vm_mm);
		spin_unlock(vmf->ptl);
		count_vm_event(THP_FAULT_ALLOC);
	}

	return 0;
unlock_release:
	spin_unlock(vmf->ptl);
release:
	if (pgtable)
		pte_free(vma->vm_mm, pgtable);
	mem_cgroup_cancel_charge(page, memcg, true);
	put_page(page);
	return ret;
}

/*
 * always: directly stall for all thp allocations
 * defer: wake kswapd and fail if not immediately available
 * defer+madvise: wake kswapd and directly stall for MADV_HUGEPAGE, otherwise
 *		  fail if not immediately available
 * madvise: directly stall for MADV_HUGEPAGE, otherwise fail if not immediately
 *	    available
 * never: never stall for any thp allocation
 */
static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
{
	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);

	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
		return GFP_TRANSHUGE | (vma_madvised ? 0 : __GFP_NORETRY);
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_FLAG, &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT | __GFP_KSWAPD_RECLAIM;
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG, &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
							     __GFP_KSWAPD_RECLAIM);
	if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG, &transparent_hugepage_flags))
		return GFP_TRANSHUGE_LIGHT | (vma_madvised ? __GFP_DIRECT_RECLAIM :
							     0);
	return GFP_TRANSHUGE_LIGHT;
}

/* Caller must hold page table lock. */
static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
		struct page *zero_page)
{
	pmd_t entry;
	if (!pmd_none(*pmd))
		return false;
	entry = mk_pmd(zero_page, vma->vm_page_prot);
	entry = pmd_mkhuge(entry);
	if (pgtable)
		pgtable_trans_huge_deposit(mm, pmd, pgtable);
	set_pmd_at(mm, haddr, pmd, entry);
	mm_inc_nr_ptes(mm);
	return true;
}

vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	gfp_t gfp;
	struct page *page;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
		return VM_FAULT_FALLBACK;
	if (unlikely(anon_vma_prepare(vma)))
		return VM_FAULT_OOM;
	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
		return VM_FAULT_OOM;
	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
			!mm_forbids_zeropage(vma->vm_mm) &&
			transparent_hugepage_use_zero_page()) {
		pgtable_t pgtable;
		struct page *zero_page;
		bool set;
		vm_fault_t ret;
		pgtable = pte_alloc_one(vma->vm_mm, haddr);
		if (unlikely(!pgtable))
			return VM_FAULT_OOM;
		zero_page = mm_get_huge_zero_page(vma->vm_mm);
		if (unlikely(!zero_page)) {
			pte_free(vma->vm_mm, pgtable);
			count_vm_event(THP_FAULT_FALLBACK);
			return VM_FAULT_FALLBACK;
		}
		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
		ret = 0;
		set = false;
		if (pmd_none(*vmf->pmd)) {
			ret = check_stable_address_space(vma->vm_mm);
			if (ret) {
				spin_unlock(vmf->ptl);
			} else if (userfaultfd_missing(vma)) {
				spin_unlock(vmf->ptl);
				ret = handle_userfault(vmf, VM_UFFD_MISSING);
				VM_BUG_ON(ret & VM_FAULT_FALLBACK);
			} else {
				set_huge_zero_page(pgtable, vma->vm_mm, vma,
						   haddr, vmf->pmd, zero_page);
				spin_unlock(vmf->ptl);
				set = true;
			}
		} else
			spin_unlock(vmf->ptl);
		if (!set)
			pte_free(vma->vm_mm, pgtable);
		return ret;
	}
	gfp = alloc_hugepage_direct_gfpmask(vma);
	page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
	if (unlikely(!page)) {
		count_vm_event(THP_FAULT_FALLBACK);
		return VM_FAULT_FALLBACK;
	}
	prep_transhuge_page(page);
	return __do_huge_pmd_anonymous_page(vmf, page, gfp);
}

static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
		pgtable_t pgtable)
{
	struct mm_struct *mm = vma->vm_mm;
	pmd_t entry;
	spinlock_t *ptl;

	ptl = pmd_lock(mm, pmd);
	if (!pmd_none(*pmd)) {
		if (write) {
			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
				goto out_unlock;
			}
			entry = pmd_mkyoung(*pmd);
			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
				update_mmu_cache_pmd(vma, addr, pmd);
		}

		goto out_unlock;
	}

	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
	if (pfn_t_devmap(pfn))
		entry = pmd_mkdevmap(entry);
	if (write) {
		entry = pmd_mkyoung(pmd_mkdirty(entry));
		entry = maybe_pmd_mkwrite(entry, vma);
	}

	if (pgtable) {
		pgtable_trans_huge_deposit(mm, pmd, pgtable);
		mm_inc_nr_ptes(mm);
		pgtable = NULL;
	}

	set_pmd_at(mm, addr, pmd, entry);
	update_mmu_cache_pmd(vma, addr, pmd);

out_unlock:
	spin_unlock(ptl);
	if (pgtable)
		pte_free(mm, pgtable);
}

vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
{
	unsigned long addr = vmf->address & PMD_MASK;
	struct vm_area_struct *vma = vmf->vma;
	pgprot_t pgprot = vma->vm_page_prot;
	pgtable_t pgtable = NULL;

	/*
	 * If we had pmd_special, we could avoid all these restrictions,
	 * but we need to be consistent with PTEs and architectures that
	 * can't support a 'special' bit.
	 */
	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
			!pfn_t_devmap(pfn));
	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
						(VM_PFNMAP|VM_MIXEDMAP));
	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));

	if (addr < vma->vm_start || addr >= vma->vm_end)
		return VM_FAULT_SIGBUS;

	if (arch_needs_pgtable_deposit()) {
		pgtable = pte_alloc_one(vma->vm_mm, addr);
		if (!pgtable)
			return VM_FAULT_OOM;
	}

	track_pfn_insert(vma, &pgprot, pfn);

	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
	return VM_FAULT_NOPAGE;
}
EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
{
	if (likely(vma->vm_flags & VM_WRITE))
		pud = pud_mkwrite(pud);
	return pud;
}

static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
		pud_t *pud, pfn_t pfn, pgprot_t prot, bool write)
{
	struct mm_struct *mm = vma->vm_mm;
	pud_t entry;
	spinlock_t *ptl;

	ptl = pud_lock(mm, pud);
	if (!pud_none(*pud)) {
		if (write) {
			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
				goto out_unlock;
			}
			entry = pud_mkyoung(*pud);
			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
				update_mmu_cache_pud(vma, addr, pud);
		}
		goto out_unlock;
	}

	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
	if (pfn_t_devmap(pfn))
		entry = pud_mkdevmap(entry);
	if (write) {
		entry = pud_mkyoung(pud_mkdirty(entry));
		entry = maybe_pud_mkwrite(entry, vma);
	}
	set_pud_at(mm, addr, pud, entry);
	update_mmu_cache_pud(vma, addr, pud);

out_unlock:
	spin_unlock(ptl);
}

vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
{
	unsigned long addr = vmf->address & PUD_MASK;
	struct vm_area_struct *vma = vmf->vma;
	pgprot_t pgprot = vma->vm_page_prot;

	/*
	 * If we had pud_special, we could avoid all these restrictions,
	 * but we need to be consistent with PTEs and architectures that
	 * can't support a 'special' bit.
	 */
	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
			!pfn_t_devmap(pfn));
	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
						(VM_PFNMAP|VM_MIXEDMAP));
	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));

	if (addr < vma->vm_start || addr >= vma->vm_end)
		return VM_FAULT_SIGBUS;

	track_pfn_insert(vma, &pgprot, pfn);

	insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
	return VM_FAULT_NOPAGE;
}
EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t *pmd, int flags)
{
	pmd_t _pmd;

	_pmd = pmd_mkyoung(*pmd);
	if (flags & FOLL_WRITE)
		_pmd = pmd_mkdirty(_pmd);
	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
				pmd, _pmd, flags & FOLL_WRITE))
		update_mmu_cache_pmd(vma, addr, pmd);
}

struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t *pmd, int flags)
{
	unsigned long pfn = pmd_pfn(*pmd);
	struct mm_struct *mm = vma->vm_mm;
	struct dev_pagemap *pgmap;
	struct page *page;

	assert_spin_locked(pmd_lockptr(mm, pmd));

	/*
	 * When we COW a devmap PMD entry, we split it into PTEs, so we should
	 * not be in this function with `flags & FOLL_COW` set.
	 */
	WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set");

	if (flags & FOLL_WRITE && !pmd_write(*pmd))
		return NULL;

	if (pmd_present(*pmd) && pmd_devmap(*pmd))
		/* pass */;
	else
		return NULL;

	if (flags & FOLL_TOUCH)
		touch_pmd(vma, addr, pmd, flags);

	/*
	 * device mapped pages can only be returned if the
	 * caller will manage the page reference count.
	 */
	if (!(flags & FOLL_GET))
		return ERR_PTR(-EEXIST);

	pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
	pgmap = get_dev_pagemap(pfn, NULL);
	if (!pgmap)
		return ERR_PTR(-EFAULT);
	page = pfn_to_page(pfn);
	get_page(page);
	put_dev_pagemap(pgmap);

	return page;
}

int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
		  struct vm_area_struct *vma)
{
	spinlock_t *dst_ptl, *src_ptl;
	struct page *src_page;
	pmd_t pmd;
	pgtable_t pgtable = NULL;
	int ret = -ENOMEM;

	/* Skip if can be re-filled on fault */
	if (!vma_is_anonymous(vma))
		return 0;

	pgtable = pte_alloc_one(dst_mm, addr);
	if (unlikely(!pgtable))
		goto out;

	dst_ptl = pmd_lock(dst_mm, dst_pmd);
	src_ptl = pmd_lockptr(src_mm, src_pmd);
	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);

	ret = -EAGAIN;
	pmd = *src_pmd;

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
	if (unlikely(is_swap_pmd(pmd))) {
		swp_entry_t entry = pmd_to_swp_entry(pmd);

		VM_BUG_ON(!is_pmd_migration_entry(pmd));
		if (is_write_migration_entry(entry)) {
			make_migration_entry_read(&entry);
			pmd = swp_entry_to_pmd(entry);
			if (pmd_swp_soft_dirty(*src_pmd))
				pmd = pmd_swp_mksoft_dirty(pmd);
			set_pmd_at(src_mm, addr, src_pmd, pmd);
		}
		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
		mm_inc_nr_ptes(dst_mm);
		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
		ret = 0;
		goto out_unlock;
	}
#endif

	if (unlikely(!pmd_trans_huge(pmd))) {
		pte_free(dst_mm, pgtable);
		goto out_unlock;
	}
	/*
	 * When page table lock is held, the huge zero pmd should not be
	 * under splitting since we don't split the page itself, only pmd to
	 * a page table.
	 */
	if (is_huge_zero_pmd(pmd)) {
		struct page *zero_page;
		/*
		 * get_huge_zero_page() will never allocate a new page here,
		 * since we already have a zero page to copy. It just takes a
		 * reference.
		 */
		zero_page = mm_get_huge_zero_page(dst_mm);
		set_huge_zero_page(pgtable, dst_mm, vma, addr, dst_pmd,
				zero_page);
		ret = 0;
		goto out_unlock;
	}

	src_page = pmd_page(pmd);
	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
	get_page(src_page);
	page_dup_rmap(src_page, true);
	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
	mm_inc_nr_ptes(dst_mm);
	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);

	pmdp_set_wrprotect(src_mm, addr, src_pmd);
	pmd = pmd_mkold(pmd_wrprotect(pmd));
	set_pmd_at(dst_mm, addr, dst_pmd, pmd);

	ret = 0;
out_unlock:
	spin_unlock(src_ptl);
	spin_unlock(dst_ptl);
out:
	return ret;
}

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
		pud_t *pud, int flags)
{
	pud_t _pud;

	_pud = pud_mkyoung(*pud);
	if (flags & FOLL_WRITE)
		_pud = pud_mkdirty(_pud);
	if (pudp_set_access_flags(vma, addr & HPAGE_PUD_MASK,
				pud, _pud, flags & FOLL_WRITE))
		update_mmu_cache_pud(vma, addr, pud);
}

struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
		pud_t *pud, int flags)
{
	unsigned long pfn = pud_pfn(*pud);
	struct mm_struct *mm = vma->vm_mm;
	struct dev_pagemap *pgmap;
	struct page *page;

	assert_spin_locked(pud_lockptr(mm, pud));

	if (flags & FOLL_WRITE && !pud_write(*pud))
		return NULL;

	if (pud_present(*pud) && pud_devmap(*pud))
		/* pass */;
	else
		return NULL;

	if (flags & FOLL_TOUCH)
		touch_pud(vma, addr, pud, flags);

	/*
	 * device mapped pages can only be returned if the
	 * caller will manage the page reference count.
	 */
	if (!(flags & FOLL_GET))
		return ERR_PTR(-EEXIST);

	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
	pgmap = get_dev_pagemap(pfn, NULL);
	if (!pgmap)
		return ERR_PTR(-EFAULT);
	page = pfn_to_page(pfn);
	get_page(page);
	put_dev_pagemap(pgmap);

	return page;
}

int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
		  struct vm_area_struct *vma)
{
	spinlock_t *dst_ptl, *src_ptl;
	pud_t pud;
	int ret;

	dst_ptl = pud_lock(dst_mm, dst_pud);
	src_ptl = pud_lockptr(src_mm, src_pud);
	spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);

	ret = -EAGAIN;
	pud = *src_pud;
	if (unlikely(!pud_trans_huge(pud) && !pud_devmap(pud)))
		goto out_unlock;

	/*
	 * When page table lock is held, the huge zero pud should not be
	 * under splitting since we don't split the page itself, only pud to
	 * a page table.
	 */
	if (is_huge_zero_pud(pud)) {
		/* No huge zero pud yet */
	}

	pudp_set_wrprotect(src_mm, addr, src_pud);
	pud = pud_mkold(pud_wrprotect(pud));
	set_pud_at(dst_mm, addr, dst_pud, pud);

	ret = 0;
out_unlock:
	spin_unlock(src_ptl);
	spin_unlock(dst_ptl);
	return ret;
}

void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
{
	pud_t entry;
	unsigned long haddr;
	bool write = vmf->flags & FAULT_FLAG_WRITE;

	vmf->ptl = pud_lock(vmf->vma->vm_mm, vmf->pud);
	if (unlikely(!pud_same(*vmf->pud, orig_pud)))
		goto unlock;

	entry = pud_mkyoung(orig_pud);
	if (write)
		entry = pud_mkdirty(entry);
	haddr = vmf->address & HPAGE_PUD_MASK;
	if (pudp_set_access_flags(vmf->vma, haddr, vmf->pud, entry, write))
		update_mmu_cache_pud(vmf->vma, vmf->address, vmf->pud);

unlock:
	spin_unlock(vmf->ptl);
}
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
{
	pmd_t entry;
	unsigned long haddr;
	bool write = vmf->flags & FAULT_FLAG_WRITE;

	vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
		goto unlock;

	entry = pmd_mkyoung(orig_pmd);
	if (write)
		entry = pmd_mkdirty(entry);
	haddr = vmf->address & HPAGE_PMD_MASK;
	if (pmdp_set_access_flags(vmf->vma, haddr, vmf->pmd, entry, write))
		update_mmu_cache_pmd(vmf->vma, vmf->address, vmf->pmd);

unlock:
	spin_unlock(vmf->ptl);
}

static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
		pmd_t orig_pmd, struct page *page)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	struct mem_cgroup *memcg;
	pgtable_t pgtable;
	pmd_t _pmd;
	int i;
	vm_fault_t ret = 0;
	struct page **pages;
	unsigned long mmun_start;	/* For mmu_notifiers */
	unsigned long mmun_end;		/* For mmu_notifiers */

	pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *),
			      GFP_KERNEL);
	if (unlikely(!pages)) {
		ret |= VM_FAULT_OOM;
		goto out;
	}

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		pages[i] = alloc_page_vma_node(GFP_HIGHUSER_MOVABLE, vma,
					       vmf->address, page_to_nid(page));
		if (unlikely(!pages[i] ||
			     mem_cgroup_try_charge_delay(pages[i], vma->vm_mm,
				     GFP_KERNEL, &memcg, false))) {
			if (pages[i])
				put_page(pages[i]);
			while (--i >= 0) {
				memcg = (void *)page_private(pages[i]);
				set_page_private(pages[i], 0);
				mem_cgroup_cancel_charge(pages[i], memcg,
						false);
				put_page(pages[i]);
			}
			kfree(pages);
			ret |= VM_FAULT_OOM;
			goto out;
		}
		set_page_private(pages[i], (unsigned long)memcg);
	}

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		copy_user_highpage(pages[i], page + i,
				   haddr + PAGE_SIZE * i, vma);
		__SetPageUptodate(pages[i]);
		cond_resched();
	}

	mmun_start = haddr;
	mmun_end   = haddr + HPAGE_PMD_SIZE;
	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
		goto out_free_pages;
	VM_BUG_ON_PAGE(!PageHead(page), page);

	/*
	 * Leave pmd empty until pte is filled. Note that we must notify here,
	 * as a concurrent CPU thread might write to the new page before the
	 * call to mmu_notifier_invalidate_range_end() happens, which can lead
	 * to a device seeing memory writes in a different order than the CPU.
	 *
	 * See Documentation/vm/mmu_notifier.rst
	 */
	pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);

	pgtable = pgtable_trans_huge_withdraw(vma->vm_mm, vmf->pmd);
	pmd_populate(vma->vm_mm, &_pmd, pgtable);

	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
		pte_t entry;
		entry = mk_pte(pages[i], vmf->vma_page_prot);
		entry = maybe_mkwrite(pte_mkdirty(entry), vmf->vma_flags);
		memcg = (void *)page_private(pages[i]);
		set_page_private(pages[i], 0);
		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
		mem_cgroup_commit_charge(pages[i], memcg, false, false);
		lru_cache_add_active_or_unevictable(pages[i], vma);
		vmf->pte = pte_offset_map(&_pmd, haddr);
		VM_BUG_ON(!pte_none(*vmf->pte));
		set_pte_at(vma->vm_mm, haddr, vmf->pte, entry);
		pte_unmap(vmf->pte);
	}
	kfree(pages);

	smp_wmb(); /* make pte visible before pmd */
	pmd_populate(vma->vm_mm, vmf->pmd, pgtable);
	page_remove_rmap(page, true);
	spin_unlock(vmf->ptl);

	/*
	 * No need to double call mmu_notifier->invalidate_range() callback as
	 * the above pmdp_huge_clear_flush_notify() did already call it.
	 */
	mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start,
					       mmun_end);

	ret |= VM_FAULT_WRITE;
	put_page(page);

out:
	return ret;

out_free_pages:
	spin_unlock(vmf->ptl);
	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		memcg = (void *)page_private(pages[i]);
		set_page_private(pages[i], 0);
		mem_cgroup_cancel_charge(pages[i], memcg, false);
		put_page(pages[i]);
	}
	kfree(pages);
	goto out;
}

vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
{
	struct vm_area_struct *vma = vmf->vma;
	struct page *page = NULL, *new_page;
	struct mem_cgroup *memcg;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	unsigned long mmun_start;	/* For mmu_notifiers */
	unsigned long mmun_end;		/* For mmu_notifiers */
	gfp_t huge_gfp;			/* for allocation and charge */
	vm_fault_t ret = 0;

	vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
	VM_BUG_ON_VMA(!vma->anon_vma, vma);
	if (is_huge_zero_pmd(orig_pmd))
		goto alloc;
	spin_lock(vmf->ptl);
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
		goto out_unlock;

	page = pmd_page(orig_pmd);
	VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page);
	/*
	 * We can only reuse the page if nobody else maps the huge page or
	 * part of it.
	 */
	if (!trylock_page(page)) {
		get_page(page);
		spin_unlock(vmf->ptl);
		lock_page(page);
		spin_lock(vmf->ptl);
		if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
			unlock_page(page);
			put_page(page);
			goto out_unlock;
		}
		put_page(page);
	}
	if (reuse_swap_page(page, NULL)) {
		pmd_t entry;
		entry = pmd_mkyoung(orig_pmd);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		ret |= VM_FAULT_WRITE;
		unlock_page(page);
		goto out_unlock;
	}
	unlock_page(page);
	get_page(page);
	spin_unlock(vmf->ptl);
alloc:
	if (__transparent_hugepage_enabled(vma) &&
	    !transparent_hugepage_debug_cow()) {
		huge_gfp = alloc_hugepage_direct_gfpmask(vma);
		new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER);
	} else
		new_page = NULL;

	if (likely(new_page)) {
		prep_transhuge_page(new_page);
	} else {
		if (!page) {
			split_huge_pmd(vma, vmf->pmd, vmf->address);
			ret |= VM_FAULT_FALLBACK;
		} else {
			ret = do_huge_pmd_wp_page_fallback(vmf, orig_pmd, page);
			if (ret & VM_FAULT_OOM) {
				split_huge_pmd(vma, vmf->pmd, vmf->address);
				ret |= VM_FAULT_FALLBACK;
			}
			put_page(page);
		}
		count_vm_event(THP_FAULT_FALLBACK);
		goto out;
	}

	if (unlikely(mem_cgroup_try_charge_delay(new_page, vma->vm_mm,
					huge_gfp, &memcg, true))) {
		put_page(new_page);
		split_huge_pmd(vma, vmf->pmd, vmf->address);
		if (page)
			put_page(page);
		ret |= VM_FAULT_FALLBACK;
		count_vm_event(THP_FAULT_FALLBACK);
		goto out;
	}

	count_vm_event(THP_FAULT_ALLOC);

	if (!page)
		clear_huge_page(new_page, vmf->address, HPAGE_PMD_NR);
	else
		copy_user_huge_page(new_page, page, vmf->address,
				    vma, HPAGE_PMD_NR);
	__SetPageUptodate(new_page);

	mmun_start = haddr;
	mmun_end   = haddr + HPAGE_PMD_SIZE;
	mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);

	spin_lock(vmf->ptl);
	if (page)
		put_page(page);
	if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
		spin_unlock(vmf->ptl);
		mem_cgroup_cancel_charge(new_page, memcg, true);
		put_page(new_page);
		goto out_mn;
	} else {
		pmd_t entry;
		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
		page_add_new_anon_rmap(new_page, vma, haddr, true);
		mem_cgroup_commit_charge(new_page, memcg, false, true);
		lru_cache_add_active_or_unevictable(new_page, vma);
		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		if (!page) {
			add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
		} else {
			VM_BUG_ON_PAGE(!PageHead(page), page);
			page_remove_rmap(page, true);
			put_page(page);
		}
		ret |= VM_FAULT_WRITE;
	}
	spin_unlock(vmf->ptl);
out_mn:
	/*
	 * No need to double call mmu_notifier->invalidate_range() callback as
	 * the above pmdp_huge_clear_flush_notify() did already call it.
	 */
	mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start,
					       mmun_end);
out:
	return ret;
out_unlock:
	spin_unlock(vmf->ptl);
	return ret;
}

/*
 * FOLL_FORCE can write to even unwritable pmd's, but only
 * after we've gone through a COW cycle and they are dirty.
 */
static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
{
	return pmd_write(pmd) ||
	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
}

struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
				   unsigned long addr,
				   pmd_t *pmd,
				   unsigned int flags)
{
	struct mm_struct *mm = vma->vm_mm;
	struct page *page = NULL;

	assert_spin_locked(pmd_lockptr(mm, pmd));

	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
		goto out;

	/* Avoid dumping huge zero page */
	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
		return ERR_PTR(-EFAULT);

	/* Full NUMA hinting faults to serialise migration in fault paths */
	if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
		goto out;

	page = pmd_page(*pmd);
	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
	if (flags & FOLL_TOUCH)
		touch_pmd(vma, addr, pmd, flags);
	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
		/*
		 * We don't mlock() pte-mapped THPs. This way we can avoid
		 * leaking mlocked pages into non-VM_LOCKED VMAs.
		 *
		 * For anon THP:
		 *
		 * In most cases the pmd is the only mapping of the page as we
		 * break COW for the mlock() -- see gup_flags |= FOLL_WRITE for
		 * writable private mappings in populate_vma_page_range().
		 *
		 * The only scenario when we have the page shared here is if we
		 * are mlocking a read-only mapping shared over fork(). We skip
		 * mlocking such pages.
		 *
		 * For file THP:
		 *
		 * We can expect PageDoubleMap() to be stable under page lock:
		 * for file pages we set it in page_add_file_rmap(), which
		 * requires page to be locked.
		 */

		if (PageAnon(page) && compound_mapcount(page) != 1)
			goto skip_mlock;
		if (PageDoubleMap(page) || !page->mapping)
			goto skip_mlock;
		if (!trylock_page(page))
			goto skip_mlock;
		lru_add_drain();
		if (page->mapping && !PageDoubleMap(page))
			mlock_vma_page(page);
		unlock_page(page);
	}
skip_mlock:
	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
	if (flags & FOLL_GET)
		get_page(page);

out:
	return page;
}

/* NUMA hinting page fault entry point for trans huge pmds */
vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
{
	struct vm_area_struct *vma = vmf->vma;
	struct anon_vma *anon_vma = NULL;
	struct page *page;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	int page_nid = -1, this_nid = numa_node_id();
	int target_nid, last_cpupid = -1;
	bool page_locked;
	bool migrated = false;
	bool was_writable;
	int flags = 0;

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	if (unlikely(!pmd_same(pmd, *vmf->pmd)))
		goto out_unlock;

	/*
	 * If there are potential migrations, wait for completion and retry
	 * without disrupting NUMA hinting information. Do not relock and
	 * check_same as the page may no longer be mapped.
	 */
	if (unlikely(pmd_trans_migrating(*vmf->pmd))) {
		page = pmd_page(*vmf->pmd);
		if (!get_page_unless_zero(page))
			goto out_unlock;
		spin_unlock(vmf->ptl);
		put_and_wait_on_page_locked(page);
		goto out;
	}

	page = pmd_page(pmd);
	BUG_ON(is_huge_zero_page(page));
	page_nid = page_to_nid(page);
	last_cpupid = page_cpupid_last(page);
	count_vm_numa_event(NUMA_HINT_FAULTS);
	if (page_nid == this_nid) {
		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
		flags |= TNF_FAULT_LOCAL;
	}

	/* See similar comment in do_numa_page for explanation */
	if (!pmd_savedwrite(pmd))
		flags |= TNF_NO_GROUP;

	/*
	 * Acquire the page lock to serialise THP migrations but avoid dropping
	 * page_table_lock if at all possible
	 */
	page_locked = trylock_page(page);
	target_nid = mpol_misplaced(page, vma, haddr);
	if (target_nid == -1) {
		/* If the page was locked, there are no parallel migrations */
		if (page_locked)
			goto clear_pmdnuma;
	}

	/* Migration could have started since the pmd_trans_migrating check */
	if (!page_locked) {
		page_nid = -1;
		if (!get_page_unless_zero(page))
			goto out_unlock;
		spin_unlock(vmf->ptl);
		put_and_wait_on_page_locked(page);
		goto out;
	}

	/*
	 * Page is misplaced. Page lock serialises migrations. Acquire anon_vma
	 * to serialise splits
	 */
	get_page(page);
	spin_unlock(vmf->ptl);
	anon_vma = page_lock_anon_vma_read(page);

	/* Confirm the PMD did not change while page_table_lock was released */
	spin_lock(vmf->ptl);
	if (unlikely(!pmd_same(pmd, *vmf->pmd))) {
		unlock_page(page);
		put_page(page);
		page_nid = -1;
		goto out_unlock;
	}

	/* Bail if we fail to protect against THP splits for any reason */
	if (unlikely(!anon_vma)) {
		put_page(page);
		page_nid = -1;
		goto clear_pmdnuma;
	}

	/*
	 * Since we took the NUMA fault, we must have observed the !accessible
	 * bit. Make sure all other CPUs agree with that, to avoid them
	 * modifying the page we're about to migrate.
	 *
	 * Must be done under PTL such that we'll observe the relevant
	 * inc_tlb_flush_pending().
	 *
	 * We are not sure a pending tlb flush here is for a huge page
	 * mapping or not. Hence use the tlb range variant
	 */
	if (mm_tlb_flush_pending(vma->vm_mm))
		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);

	/*
	 * Migrate the THP to the requested node, returns with page unlocked
	 * and access rights restored.
	 */
	spin_unlock(vmf->ptl);

	migrated = migrate_misplaced_transhuge_page(vma->vm_mm, vma,
				vmf->pmd, pmd, vmf->address, page, target_nid);
	if (migrated) {
		flags |= TNF_MIGRATED;
		page_nid = target_nid;
	} else
		flags |= TNF_MIGRATE_FAIL;

	goto out;
clear_pmdnuma:
	BUG_ON(!PageLocked(page));
	was_writable = pmd_savedwrite(pmd);
	pmd = pmd_modify(pmd, vma->vm_page_prot);
	pmd = pmd_mkyoung(pmd);
	if (was_writable)
		pmd = pmd_mkwrite(pmd);
	set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);
	update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
	unlock_page(page);
out_unlock:
	spin_unlock(vmf->ptl);

out:
	if (anon_vma)
		page_unlock_anon_vma_read(anon_vma);

	if (page_nid != -1)
		task_numa_fault(last_cpupid, page_nid, HPAGE_PMD_NR,
				flags);

	return 0;
}

/*
 * Return true if we do MADV_FREE successfully on entire pmd page.
 * Otherwise, return false.
 */
bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
		pmd_t *pmd, unsigned long addr, unsigned long next)
{
	spinlock_t *ptl;
	pmd_t orig_pmd;
	struct page *page;
	struct mm_struct *mm = tlb->mm;
	bool ret = false;

	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);

	ptl = pmd_trans_huge_lock(pmd, vma);
	if (!ptl)
		goto out_unlocked;

	orig_pmd = *pmd;
	if (is_huge_zero_pmd(orig_pmd))
		goto out;

	if (unlikely(!pmd_present(orig_pmd))) {
		VM_BUG_ON(thp_migration_supported() &&
			  !is_pmd_migration_entry(orig_pmd));
		goto out;
	}

	page = pmd_page(orig_pmd);
	/*
	 * If other processes are mapping this page, we can't discard
	 * the page unless they all do MADV_FREE, so let's skip the page.
	 */
	if (page_mapcount(page) != 1)
		goto out;

	if (!trylock_page(page))
		goto out;

	/*
	 * If the user wants to discard part of the pages of a THP, split it
	 * so MADV_FREE will deactivate only them.
	 */
	if (next - addr != HPAGE_PMD_SIZE) {
		get_page(page);
		spin_unlock(ptl);
		split_huge_page(page);
		unlock_page(page);
		put_page(page);
		goto out_unlocked;
	}

	if (PageDirty(page))
		ClearPageDirty(page);
	unlock_page(page);

	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
		pmdp_invalidate(vma, addr, pmd);
		orig_pmd = pmd_mkold(orig_pmd);
		orig_pmd = pmd_mkclean(orig_pmd);

		set_pmd_at(mm, addr, pmd, orig_pmd);
		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
	}

	mark_page_lazyfree(page);
	ret = true;
out:
	spin_unlock(ptl);
out_unlocked:
	return ret;
}

static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
{
	pgtable_t pgtable;

	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
	pte_free(mm, pgtable);
	mm_dec_nr_ptes(mm);
}

int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
		 pmd_t *pmd, unsigned long addr)
{
	pmd_t orig_pmd;
	spinlock_t *ptl;

	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);

	ptl = __pmd_trans_huge_lock(pmd, vma);
	if (!ptl)
		return 0;
	/*
	 * For architectures like ppc64 we look at deposited pgtable
	 * when calling pmdp_huge_get_and_clear. So do the
	 * pgtable_trans_huge_withdraw after finishing pmdp related
	 * operations.
	 */
	orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
			tlb->fullmm);
	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
	if (vma_is_dax(vma)) {
		if (arch_needs_pgtable_deposit())
			zap_deposited_table(tlb->mm, pmd);
		spin_unlock(ptl);
		if (is_huge_zero_pmd(orig_pmd))
			tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
	} else if (is_huge_zero_pmd(orig_pmd)) {
		zap_deposited_table(tlb->mm, pmd);
		spin_unlock(ptl);
		tlb_remove_page_size(tlb, pmd_page(orig_pmd), HPAGE_PMD_SIZE);
	} else {
		struct page *page = NULL;
		int flush_needed = 1;

		if (pmd_present(orig_pmd)) {
			page = pmd_page(orig_pmd);
			page_remove_rmap(page, true);
			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
			VM_BUG_ON_PAGE(!PageHead(page), page);
		} else if (thp_migration_supported()) {
			swp_entry_t entry;

			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
			entry = pmd_to_swp_entry(orig_pmd);
			page = pfn_to_page(swp_offset(entry));
			flush_needed = 0;
		} else
			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");

		if (PageAnon(page)) {
			zap_deposited_table(tlb->mm, pmd);
			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
		} else {
			if (arch_needs_pgtable_deposit())
				zap_deposited_table(tlb->mm, pmd);
			add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);
		}

		spin_unlock(ptl);
		if (flush_needed)
			tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
	}
	return 1;
}

#ifndef pmd_move_must_withdraw
static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
					 spinlock_t *old_pmd_ptl,
					 struct vm_area_struct *vma)
{
	/*
	 * With split pmd lock we also need to move preallocated
	 * PTE page table if new_pmd is on different PMD page table.
	 *
	 * We also don't deposit and withdraw tables for file pages.
	 */
	return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);
}
#endif

static pmd_t move_soft_dirty_pmd(pmd_t pmd)
{
#ifdef CONFIG_MEM_SOFT_DIRTY
	if (unlikely(is_pmd_migration_entry(pmd)))
		pmd = pmd_swp_mksoft_dirty(pmd);
	else if (pmd_present(pmd))
		pmd = pmd_mksoft_dirty(pmd);
#endif
	return pmd;
}

bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
		  unsigned long new_addr, unsigned long old_end,
		  pmd_t *old_pmd, pmd_t *new_pmd)
{
	spinlock_t *old_ptl, *new_ptl;
	pmd_t pmd;
	struct mm_struct *mm = vma->vm_mm;
	bool force_flush = false;

	if ((old_addr & ~HPAGE_PMD_MASK) ||
	    (new_addr & ~HPAGE_PMD_MASK) ||
	    old_end - old_addr < HPAGE_PMD_SIZE)
		return false;

	/*
	 * The destination pmd shouldn't be established; free_pgtables()
	 * should have released it.
	 */
	if (WARN_ON(!pmd_none(*new_pmd))) {
		VM_BUG_ON(pmd_trans_huge(*new_pmd));
		return false;
	}

	/*
	 * We don't have to worry about the ordering of src and dst
	 * ptlocks because exclusive mmap_sem prevents deadlock.
	 */
	old_ptl = __pmd_trans_huge_lock(old_pmd, vma);
	if (old_ptl) {
		new_ptl = pmd_lockptr(mm, new_pmd);
		if (new_ptl != old_ptl)
			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
		pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
		if (pmd_present(pmd))
			force_flush = true;
		VM_BUG_ON(!pmd_none(*new_pmd));

		if (pmd_move_must_withdraw(new_ptl, old_ptl, vma)) {
			pgtable_t pgtable;
			pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
			pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
		}
		pmd = move_soft_dirty_pmd(pmd);
		set_pmd_at(mm, new_addr, new_pmd, pmd);
		if (force_flush)
			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
		if (new_ptl != old_ptl)
			spin_unlock(new_ptl);
		spin_unlock(old_ptl);
		return true;
	}
	return false;
}

/*
 * Returns
 *  - 0 if PMD could not be locked
 *  - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
 *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
 */
int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
		unsigned long addr, pgprot_t newprot, int prot_numa)
{
	struct mm_struct *mm = vma->vm_mm;
	spinlock_t *ptl;
	pmd_t entry;
	bool preserve_write;
	int ret;

	ptl = __pmd_trans_huge_lock(pmd, vma);
	if (!ptl)
		return 0;

	preserve_write = prot_numa && pmd_write(*pmd);
	ret = 1;

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
	if (is_swap_pmd(*pmd)) {
		swp_entry_t entry = pmd_to_swp_entry(*pmd);

		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
		if (is_write_migration_entry(entry)) {
			pmd_t newpmd;
			/*
			 * A protection check is difficult so
			 * just be safe and disable write
			 */
			make_migration_entry_read(&entry);
			newpmd = swp_entry_to_pmd(entry);
			if (pmd_swp_soft_dirty(*pmd))
				newpmd = pmd_swp_mksoft_dirty(newpmd);
			set_pmd_at(mm, addr, pmd, newpmd);
		}
		goto unlock;
	}
#endif

	/*
	 * Avoid trapping faults against the zero page. The read-only
	 * data is likely to be read-cached on the local CPU and
	 * local/remote hits to the zero page are not interesting.
	 */
	if (prot_numa && is_huge_zero_pmd(*pmd))
		goto unlock;

	if (prot_numa && pmd_protnone(*pmd))
		goto unlock;

	/*
	 * In case prot_numa, we are under down_read(mmap_sem). It's critical
	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
	 * which is also under down_read(mmap_sem):
	 *
	 *	CPU0:				CPU1:
	 *				change_huge_pmd(prot_numa=1)
	 *				 pmdp_huge_get_and_clear_notify()
	 * madvise_dontneed()
	 *  zap_pmd_range()
	 *   pmd_trans_huge(*pmd) == 0 (without ptl)
	 *   // skip the pmd
	 *				 set_pmd_at();
	 *				 // pmd is re-established
	 *
	 * The race makes MADV_DONTNEED miss the huge pmd and fail to clear it,
	 * which may break userspace.
	 *
	 * pmdp_invalidate() is required to make sure we don't miss
	 * dirty/young flags set by hardware.
	 */
	entry = pmdp_invalidate(vma, addr, pmd);

	entry = pmd_modify(entry, newprot);
	if (preserve_write)
		entry = pmd_mk_savedwrite(entry);
	ret = HPAGE_PMD_NR;
	set_pmd_at(mm, addr, pmd, entry);
	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
unlock:
	spin_unlock(ptl);
	return ret;
}

/*
 * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
 *
 * Note that if it returns page table lock pointer, this routine returns without
 * unlocking page table lock. So callers must unlock it.
 */
spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
{
	spinlock_t *ptl;
	ptl = pmd_lock(vma->vm_mm, pmd);
	if (likely(is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) ||
			pmd_devmap(*pmd)))
		return ptl;
	spin_unlock(ptl);
	return NULL;
}

/*
 * Returns page table lock pointer if a given pud maps a thp, NULL otherwise.
 *
 * Note that if it returns page table lock pointer, this routine returns without
 * unlocking page table lock. So callers must unlock it.
 */
spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
{
	spinlock_t *ptl;

	ptl = pud_lock(vma->vm_mm, pud);
	if (likely(pud_trans_huge(*pud) || pud_devmap(*pud)))
		return ptl;
	spin_unlock(ptl);
	return NULL;
}

#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
		 pud_t *pud, unsigned long addr)
{
	pud_t orig_pud;
	spinlock_t *ptl;

	ptl = __pud_trans_huge_lock(pud, vma);
	if (!ptl)
		return 0;
	/*
	 * For architectures like ppc64 we look at deposited pgtable
	 * when calling pudp_huge_get_and_clear. So do the
	 * pgtable_trans_huge_withdraw after finishing pudp related
	 * operations.
	 */
	orig_pud = pudp_huge_get_and_clear_full(tlb->mm, addr, pud,
			tlb->fullmm);
	tlb_remove_pud_tlb_entry(tlb, pud, addr);
	if (vma_is_dax(vma)) {
		spin_unlock(ptl);
		/* No zero page support yet */
	} else {
		/* No support for anonymous PUD pages yet */
		BUG();
	}
	return 1;
}

static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
		unsigned long haddr)
{
	VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PUD_SIZE, vma);
	VM_BUG_ON(!pud_trans_huge(*pud) && !pud_devmap(*pud));

	count_vm_event(THP_SPLIT_PUD);

	pudp_huge_clear_flush_notify(vma, haddr, pud);
}

void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
		unsigned long address)
{
	spinlock_t *ptl;
	struct mm_struct *mm = vma->vm_mm;
	unsigned long haddr = address & HPAGE_PUD_MASK;

	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PUD_SIZE);
	ptl = pud_lock(mm, pud);
	if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud)))
		goto out;
	__split_huge_pud_locked(vma, pud, haddr);

out:
	spin_unlock(ptl);
	/*
	 * No need to double call mmu_notifier->invalidate_range() callback as
	 * the above pudp_huge_clear_flush_notify() did already call it.
	 */
	mmu_notifier_invalidate_range_only_end(mm, haddr, haddr +
					       HPAGE_PUD_SIZE);
}
#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
		unsigned long haddr, pmd_t *pmd)
{
	struct mm_struct *mm = vma->vm_mm;
	pgtable_t pgtable;
	pmd_t _pmd;
	int i;

	/*
	 * Leave pmd empty until pte is filled. Note that it is fine to delay
	 * the notification until mmu_notifier_invalidate_range_end() as we are
	 * replacing a zero pmd write protected page with a zero pte write
	 * protected page.
	 *
	 * See Documentation/vm/mmu_notifier.rst
	 */
	pmdp_huge_clear_flush(vma, haddr, pmd);

	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
	pmd_populate(mm, &_pmd, pgtable);

	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
		pte_t *pte, entry;
		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
		entry = pte_mkspecial(entry);
		pte = pte_offset_map(&_pmd, haddr);
		VM_BUG_ON(!pte_none(*pte));
		set_pte_at(mm, haddr, pte, entry);
		pte_unmap(pte);
	}
	smp_wmb(); /* make pte visible before pmd */
	pmd_populate(mm, pmd, pgtable);
}

static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
		unsigned long haddr, bool freeze)
{
	struct mm_struct *mm = vma->vm_mm;
	struct page *page;
	pgtable_t pgtable;
	pmd_t old_pmd, _pmd;
	bool young, write, soft_dirty, pmd_migration = false;
	unsigned long addr;
	int i;

	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd)
				&& !pmd_devmap(*pmd));

	count_vm_event(THP_SPLIT_PMD);

	if (!vma_is_anonymous(vma)) {
		_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
		/*
		 * We are going to unmap this huge page. So
		 * just go ahead and zap it
		 */
		if (arch_needs_pgtable_deposit())
			zap_deposited_table(mm, pmd);
		if (vma_is_dax(vma))
			return;
		page = pmd_page(_pmd);
		if (!PageDirty(page) && pmd_dirty(_pmd))
			set_page_dirty(page);
		if (!PageReferenced(page) && pmd_young(_pmd))
			SetPageReferenced(page);
		page_remove_rmap(page, true);
		put_page(page);
		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
		return;
	} else if (is_huge_zero_pmd(*pmd)) {
		/*
		 * FIXME: Do we want to invalidate secondary mmu by calling
		 * mmu_notifier_invalidate_range()? See comments below inside
		 * __split_huge_pmd().
		 *
		 * We are going from a zero huge page write protected to zero
		 * small pages also write protected, so it does not seem useful
		 * to invalidate secondary mmu at this time.
		 */
		return __split_huge_zero_page_pmd(vma, haddr, pmd);
	}

	/*
	 * Up to this point the pmd is present and huge and userland has the
	 * whole access to the hugepage during the split (which happens in
	 * place). If we overwrite the pmd with the not-huge version pointing
	 * to the pte here (which of course we could if all CPUs were bug
	 * free), userland could trigger a small page size TLB miss on the
	 * small sized TLB while the hugepage TLB entry is still established in
	 * the huge TLB. Some CPUs don't like that.
	 * See http://support.amd.com/us/Processor_TechDocs/41322.pdf, Erratum
	 * 383 on page 93. Intel should be safe, but it also warns that it's
	 * only safe if the permission and cache attributes of the two entries
	 * loaded in the two TLBs are identical (which should be the case
	 * here). But it is generally safer to never allow small and huge TLB
	 * entries for the same virtual address to be loaded simultaneously. So
	 * instead of doing "pmd_populate(); flush_pmd_tlb_range();" we first
	 * mark the current pmd notpresent (atomically because here the
	 * pmd_trans_huge must remain set at all times on the pmd until the
	 * split is complete for this pmd), then we flush the SMP TLB and
	 * finally we write the non-huge version of the pmd entry with
	 * pmd_populate.
	 */
	old_pmd = pmdp_invalidate(vma, haddr, pmd);

	pmd_migration = is_pmd_migration_entry(old_pmd);
	if (unlikely(pmd_migration)) {
		swp_entry_t entry;

		entry = pmd_to_swp_entry(old_pmd);
		page = pfn_to_page(swp_offset(entry));
		write = is_write_migration_entry(entry);
		young = false;
		soft_dirty = pmd_swp_soft_dirty(old_pmd);
	} else {
		page = pmd_page(old_pmd);
		if (pmd_dirty(old_pmd))
			SetPageDirty(page);
		write = pmd_write(old_pmd);
		young = pmd_young(old_pmd);
		soft_dirty = pmd_soft_dirty(old_pmd);
	}
	VM_BUG_ON_PAGE(!page_count(page), page);
	page_ref_add(page, HPAGE_PMD_NR - 1);

	/*
	 * Withdraw the table only after we mark the pmd entry invalid.
	 * This is critical for some architectures (Power).
	 */
	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
	pmd_populate(mm, &_pmd, pgtable);

	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
		pte_t entry, *pte;
		/*
		 * Note that NUMA hinting access restrictions are not
		 * transferred to avoid any possibility of altering
		 * permissions across VMAs.
		 */
		if (freeze || pmd_migration) {
			swp_entry_t swp_entry;
			swp_entry = make_migration_entry(page + i, write);
			entry = swp_entry_to_pte(swp_entry);
			if (soft_dirty)
				entry = pte_swp_mksoft_dirty(entry);
		} else {
			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
			entry = maybe_mkwrite(entry, vma->vm_flags);
			if (!write)
				entry = pte_wrprotect(entry);
			if (!young)
				entry = pte_mkold(entry);
			if (soft_dirty)
				entry = pte_mksoft_dirty(entry);
		}
		pte = pte_offset_map(&_pmd, addr);
		BUG_ON(!pte_none(*pte));
		set_pte_at(mm, addr, pte, entry);
		atomic_inc(&page[i]._mapcount);
		pte_unmap(pte);
	}

	/*
	 * Set PG_double_map before dropping compound_mapcount to avoid
	 * false-negative page_mapped().
	 */
	if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
		for (i = 0; i < HPAGE_PMD_NR; i++)
			atomic_inc(&page[i]._mapcount);
	}

	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
		/* Last compound_mapcount is gone. */
		__dec_node_page_state(page, NR_ANON_THPS);
		if (TestClearPageDoubleMap(page)) {
			/* No need for the mapcount reference anymore */
			for (i = 0; i < HPAGE_PMD_NR; i++)
				atomic_dec(&page[i]._mapcount);
		}
	}

	smp_wmb(); /* make pte visible before pmd */
	pmd_populate(mm, pmd, pgtable);

	if (freeze) {
		for (i = 0; i < HPAGE_PMD_NR; i++) {
			page_remove_rmap(page + i, false);
			put_page(page + i);
		}
	}
}

void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
		unsigned long address, bool freeze, struct page *page)
{
	spinlock_t *ptl;
	struct mm_struct *mm = vma->vm_mm;
	unsigned long haddr = address & HPAGE_PMD_MASK;
	bool was_locked = false;
	pmd_t _pmd;

	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
	ptl = pmd_lock(mm, pmd);

	/*
	 * If the caller asks us to set up migration entries, we need a page
	 * to check the pmd against. Otherwise we can end up replacing the
	 * wrong page.
	 */
	VM_BUG_ON(freeze && !page);
	if (page) {
		VM_WARN_ON_ONCE(!PageLocked(page));
		was_locked = true;
		if (page != pmd_page(*pmd))
			goto out;
	}

repeat:
	if (pmd_trans_huge(*pmd)) {
		if (!page) {
			page = pmd_page(*pmd);
			if (unlikely(!trylock_page(page))) {
				get_page(page);
				_pmd = *pmd;
				spin_unlock(ptl);
				lock_page(page);
				spin_lock(ptl);
				if (unlikely(!pmd_same(*pmd, _pmd))) {
					unlock_page(page);
					put_page(page);
					page = NULL;
					goto repeat;
				}
				put_page(page);
			}
		}
		if (PageMlocked(page))
			clear_page_mlock(page);
	} else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd)))
		goto out;
	__split_huge_pmd_locked(vma, pmd, haddr, freeze);
out:
	spin_unlock(ptl);
	if (!was_locked && page)
		unlock_page(page);
	/*
	 * No need to double call mmu_notifier->invalidate_range() callback.
	 * There are 3 cases to consider inside __split_huge_pmd_locked():
	 * 1) pmdp_huge_clear_flush_notify() calls invalidate_range(), the
	 *    obvious case
	 * 2) __split_huge_zero_page_pmd() splits a read-only zero page, and
	 *    any write fault will trigger a flush_notify before pointing to a
	 *    new page (it is fine if the secondary mmu keeps pointing to the
	 *    old zero page in the meantime)
	 * 3) Splitting a huge pmd into ptes pointing to the same page. No need
	 *    to invalidate secondary tlb entries; they are all still valid.
	 *    Any further changes to individual ptes will notify. So no need
	 *    to call mmu_notifier->invalidate_range()
	 */
	mmu_notifier_invalidate_range_only_end(mm, haddr, haddr +
					       HPAGE_PMD_SIZE);
}

void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
		bool freeze, struct page *page)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	pgd = pgd_offset(vma->vm_mm, address);
	if (!pgd_present(*pgd))
		return;

	p4d = p4d_offset(pgd, address);
	if (!p4d_present(*p4d))
		return;

	pud = pud_offset(p4d, address);
	if (!pud_present(*pud))
		return;

	pmd = pmd_offset(pud, address);

	__split_huge_pmd(vma, pmd, address, freeze, page);
}

void vma_adjust_trans_huge(struct vm_area_struct *vma,
			     unsigned long start,
			     unsigned long end,
			     long adjust_next)
{
	/*
	 * If the new start address isn't hpage aligned and it could
	 * previously contain a hugepage: check if we need to split
	 * a huge pmd.
	 */
	if (start & ~HPAGE_PMD_MASK &&
	    (start & HPAGE_PMD_MASK) >= vma->vm_start &&
	    (start & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= vma->vm_end)
		split_huge_pmd_address(vma, start, false, NULL);

	/*
	 * If the new end address isn't hpage aligned and it could
	 * previously contain a hugepage: check if we need to split
	 * a huge pmd.
	 */
	if (end & ~HPAGE_PMD_MASK &&
	    (end & HPAGE_PMD_MASK) >= vma->vm_start &&
	    (end & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= vma->vm_end)
		split_huge_pmd_address(vma, end, false, NULL);

	/*
	 * If we're also updating the vma->vm_next->vm_start, if the new
	 * vm_next->vm_start isn't hpage aligned and it could previously
	 * contain a hugepage: check if we need to split a huge pmd.
	 */
	if (adjust_next > 0) {
		struct vm_area_struct *next = vma->vm_next;
		unsigned long nstart = next->vm_start;
		nstart += adjust_next << PAGE_SHIFT;
		if (nstart & ~HPAGE_PMD_MASK &&
		    (nstart & HPAGE_PMD_MASK) >= next->vm_start &&
		    (nstart & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= next->vm_end)
			split_huge_pmd_address(next, nstart, false, NULL);
	}
}

static void unmap_page(struct page *page)
{
	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
	bool unmap_success;

	VM_BUG_ON_PAGE(!PageHead(page), page);

	if (PageAnon(page))
		ttu_flags |= TTU_SPLIT_FREEZE;

	unmap_success = try_to_unmap(page, ttu_flags);
	VM_BUG_ON_PAGE(!unmap_success, page);
}

static void remap_page(struct page *page)
{
	int i;
	if (PageTransHuge(page)) {
		remove_migration_ptes(page, page, true);
	} else {
		for (i = 0; i < HPAGE_PMD_NR; i++)
			remove_migration_ptes(page + i, page + i, true);
	}
}

static void __split_huge_page_tail(struct page *head, int tail,
		struct lruvec *lruvec, struct list_head *list)
{
	struct page *page_tail = head + tail;

	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);

	/*
	 * Clone page flags before unfreezing refcount.
	 *
	 * After a successful get_page_unless_zero() a flags change might
	 * follow, for example lock_page(), which sets PG_waiters.
	 */
	page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
	page_tail->flags |= (head->flags &
			((1L << PG_referenced) |
			 (1L << PG_swapbacked) |
			 (1L << PG_swapcache) |
			 (1L << PG_mlocked) |
			 (1L << PG_uptodate) |
			 (1L << PG_active) |
			 (1L << PG_workingset) |
			 (1L << PG_locked) |
			 (1L << PG_unevictable) |
			 (1L << PG_dirty)));

	/* ->mapping in first tail page is compound_mapcount */
	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
			page_tail);
	page_tail->mapping = head->mapping;
	page_tail->index = head->index + tail;

	/* Page flags must be visible before we make the page non-compound. */
	smp_wmb();

	/*
	 * Clear PageTail before unfreezing page refcount.
	 *
	 * After a successful get_page_unless_zero() a put_page() might
	 * follow, which needs correct compound_head().
	 */
	clear_compound_head(page_tail);

	/* Finally unfreeze refcount. Additional reference from page cache. */
	page_ref_unfreeze(page_tail, 1 + (!PageAnon(head) ||
					  PageSwapCache(head)));

	if (page_is_young(head))
		set_page_young(page_tail);
	if (page_is_idle(head))
		set_page_idle(page_tail);

	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));

	/*
	 * always add to the tail because some iterators expect new
	 * pages to show after the currently processed elements - e.g.
	 * migrate_pages
	 */
	lru_add_page_tail(head, page_tail, lruvec, list);
}

static void __split_huge_page(struct page *page, struct list_head *list,
		pgoff_t end, unsigned long flags)
{
	struct page *head = compound_head(page);
	struct zone *zone = page_zone(head);
	struct lruvec *lruvec;
	int i;

	lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat);

	/* complete memcg work before adding pages to the LRU */
	mem_cgroup_split_huge_fixup(head);

	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
		__split_huge_page_tail(head, i, lruvec, list);
		/* Some pages can be beyond i_size: drop them from page cache */
		if (head[i].index >= end) {
			ClearPageDirty(head + i);
			__delete_from_page_cache(head + i, NULL);
			if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
				shmem_uncharge(head->mapping->host, 1);
			put_page(head + i);
		}
	}

	ClearPageCompound(head);

	split_page_owner(head, HPAGE_PMD_ORDER);

	/* See comment in __split_huge_page_tail() */
	if (PageAnon(head)) {
		/* Additional pin to radix tree of swap cache */
		if (PageSwapCache(head))
			page_ref_add(head, 2);
		else
			page_ref_inc(head);
	} else {
		/* Additional pin to radix tree */
		page_ref_add(head, 2);
		xa_unlock(&head->mapping->i_pages);
	}

	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);

	remap_page(head);

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		struct page *subpage = head + i;
		if (subpage == page)
			continue;
		unlock_page(subpage);

		/*
		 * Subpages may be freed if there wasn't any mapping,
		 * like if add_to_swap() is running on a lru page that
		 * had its mapping zapped. And freeing these pages
		 * requires taking the lru_lock, so we do the put_page
		 * of the tail pages after the split is complete.
		 */
		put_page(subpage);
	}
}

int total_mapcount(struct page *page)
{
	int i, compound, ret;

	VM_BUG_ON_PAGE(PageTail(page), page);

	if (likely(!PageCompound(page)))
		return atomic_read(&page->_mapcount) + 1;

	compound = compound_mapcount(page);
	if (PageHuge(page))
		return compound;
	ret = compound;
	for (i = 0; i < HPAGE_PMD_NR; i++)
		ret += atomic_read(&page[i]._mapcount) + 1;
	/* File pages have compound_mapcount included in _mapcount */
	if (!PageAnon(page))
		return ret - compound * HPAGE_PMD_NR;
	if (PageDoubleMap(page))
		ret -= HPAGE_PMD_NR;
	return ret;
}

/*
 * This calculates accurately how many mappings a transparent hugepage
 * has (unlike page_mapcount() which isn't fully accurate). This full
 * accuracy is primarily needed to know if copy-on-write faults can
 * reuse the page and change the mapping to read-write instead of
 * copying it. At the same time this returns the total_mapcount too.
 *
 * The function returns the highest mapcount any one of the subpages
 * has. If the return value is one, even if different processes are
 * mapping different subpages of the transparent hugepage, they can
 * all reuse it, because each process is reusing a different subpage.
 *
 * The total_mapcount is instead counting all virtual mappings of the
 * subpages. If the total_mapcount is equal to "one", it tells the
 * caller all mappings belong to the same "mm" and in turn the
 * anon_vma of the transparent hugepage can become the vma->anon_vma
 * local one as no other process may be mapping any of the subpages.
 *
 * It would be more accurate to replace page_mapcount() with
 * page_trans_huge_mapcount(), however we only use
 * page_trans_huge_mapcount() in the copy-on-write faults where we
 * need full accuracy to avoid breaking page pinning, because
 * page_trans_huge_mapcount() is slower than page_mapcount().
 */
int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
{
	int i, ret, _total_mapcount, mapcount;

	/* hugetlbfs shouldn't call it */
	VM_BUG_ON_PAGE(PageHuge(page), page);

	if (likely(!PageTransCompound(page))) {
		mapcount = atomic_read(&page->_mapcount) + 1;
		if (total_mapcount)
			*total_mapcount = mapcount;
		return mapcount;
	}

	page = compound_head(page);

	_total_mapcount = ret = 0;
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		mapcount = atomic_read(&page[i]._mapcount) + 1;
		ret = max(ret, mapcount);
		_total_mapcount += mapcount;
	}
	if (PageDoubleMap(page)) {
		ret -= 1;
		_total_mapcount -= HPAGE_PMD_NR;
	}
	mapcount = compound_mapcount(page);
	ret += mapcount;
	_total_mapcount += mapcount;
	if (total_mapcount)
		*total_mapcount = _total_mapcount;
	return ret;
}

/* Racy check whether the huge page can be split */
bool can_split_huge_page(struct page *page, int *pextra_pins)
{
	int extra_pins;

	/* Additional pins from radix tree */
	if (PageAnon(page))
		extra_pins = PageSwapCache(page) ? HPAGE_PMD_NR : 0;
	else
		extra_pins = HPAGE_PMD_NR;
	if (pextra_pins)
		*pextra_pins = extra_pins;
	return total_mapcount(page) == page_count(page) - extra_pins - 1;
}

/*
 * This function splits a huge page into normal pages. @page can point to any
 * subpage of the huge page to split. Split doesn't change the position of
 * @page.
 *
 * The caller must hold a pin on the @page, otherwise the split fails with
 * -EBUSY. The huge page must be locked.
 *
 * If @list is null, tail pages will be added to LRU list, otherwise, to @list.
 *
 * Both head page and tail pages will inherit mapping, flags, and so on from
 * the hugepage.
 *
 * GUP pin and PG_locked transferred to @page. The rest of the subpages can be
 * freed if they are not mapped.
 *
 * Returns 0 if the hugepage is split successfully.
 * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under
 * us.
 */
int split_huge_page_to_list(struct page *page, struct list_head *list)
{
	struct page *head = compound_head(page);
	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
	struct anon_vma *anon_vma = NULL;
	struct address_space *mapping = NULL;
	int count, mapcount, extra_pins, ret;
	bool mlocked;
	unsigned long flags;
	pgoff_t end;

	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
	VM_BUG_ON_PAGE(!PageLocked(page), page);
	VM_BUG_ON_PAGE(!PageCompound(page), page);

	if (PageWriteback(page))
		return -EBUSY;

	if (PageAnon(head)) {
		/*
		 * The caller does not necessarily hold an mmap_sem that would
		 * prevent the anon_vma disappearing, so we first take a
		 * reference to it and then lock the anon_vma for write. This
		 * is similar to page_lock_anon_vma_read() except the write
		 * lock is taken to serialise against parallel split or
		 * collapse operations.
		 */
		anon_vma = page_get_anon_vma(head);
		if (!anon_vma) {
			ret = -EBUSY;
			goto out;
		}
		end = -1;
		mapping = NULL;
		anon_vma_lock_write(anon_vma);
	} else {
		mapping = head->mapping;

		/* Truncated ? */
		if (!mapping) {
			ret = -EBUSY;
			goto out;
		}

		anon_vma = NULL;
		i_mmap_lock_read(mapping);

		/*
		 * __split_huge_page() may need to trim off pages beyond EOF:
		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
		 * which cannot be nested inside the page tree lock. So note
		 * end now: i_size itself may be changed at any moment, but
		 * head page lock is good enough to serialize the trimming.
		 */
		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
	}

	/*
	 * Racy check if we can split the page, before unmap_page() will
	 * split the PMDs
	 */
	if (!can_split_huge_page(head, &extra_pins)) {
		ret = -EBUSY;
		goto out_unlock;
	}

	mlocked = PageMlocked(page);
	unmap_page(head);
	VM_BUG_ON_PAGE(compound_mapcount(head), head);

	/* Make sure the page is not on per-CPU pagevec as it takes pin */
	if (mlocked)
		lru_add_drain();

	/* prevent PageLRU from going away from under us, and freeze lru stats */
	spin_lock_irqsave(zone_lru_lock(page_zone(head)), flags);

	if (mapping) {
		void **pslot;

		xa_lock(&mapping->i_pages);
		pslot = radix_tree_lookup_slot(&mapping->i_pages,
				page_index(head));
		/*
		 * Check if the head page is present in radix tree.
		 * We assume all tail pages are present too, if head is there.
		 */
		if (radix_tree_deref_slot_protected(pslot,
					&mapping->i_pages.xa_lock) != head)
			goto fail;
	}

	/* Prevent deferred_split_scan() touching ->_refcount */
	spin_lock(&pgdata->split_queue_lock);
	count = page_count(head);
	mapcount = total_mapcount(head);
	if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
		if (!list_empty(page_deferred_list(head))) {
			pgdata->split_queue_len--;
			list_del(page_deferred_list(head));
		}
		if (mapping)
			__dec_node_page_state(page, NR_SHMEM_THPS);
		spin_unlock(&pgdata->split_queue_lock);
		__split_huge_page(page, list, end, flags);
		if (PageSwapCache(head)) {
			swp_entry_t entry = { .val = page_private(head) };

			ret = split_swap_cluster(entry);
		} else
			ret = 0;
	} else {
		if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
			pr_alert("total_mapcount: %u, page_count(): %u\n",
					mapcount, count);
			if (PageTail(page))
				dump_page(head, NULL);
			dump_page(page, "total_mapcount(head) > 0");
			BUG();
		}
		spin_unlock(&pgdata->split_queue_lock);
fail:		if (mapping)
			xa_unlock(&mapping->i_pages);
		spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
		remap_page(head);
		ret = -EBUSY;
	}

out_unlock:
	if (anon_vma) {
		anon_vma_unlock_write(anon_vma);
		put_anon_vma(anon_vma);
	}
	if (mapping)
		i_mmap_unlock_read(mapping);
out:
	count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
	return ret;
}

void free_transhuge_page(struct page *page)
{
	struct pglist_data *pgdata = NODE_DATA(page_to_nid(page));
	unsigned long flags;

	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
	if (!list_empty(page_deferred_list(page))) {
		pgdata->split_queue_len--;
		list_del(page_deferred_list(page));
	}
	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
	free_compound_page(page);
}

void deferred_split_huge_page(struct page *page)
{
	struct pglist_data *pgdata = NODE_DATA(page_to_nid(page));
	unsigned long flags;

	VM_BUG_ON_PAGE(!PageTransHuge(page), page);

	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
	if (list_empty(page_deferred_list(page))) {
		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
		list_add_tail(page_deferred_list(page), &pgdata->split_queue);
		pgdata->split_queue_len++;
	}
	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);
}

static unsigned long deferred_split_count(struct shrinker *shrink,
		struct shrink_control *sc)
{
	struct pglist_data *pgdata = NODE_DATA(sc->nid);
	return READ_ONCE(pgdata->split_queue_len);
}

static unsigned long deferred_split_scan(struct shrinker *shrink,
		struct shrink_control *sc)
{
	struct pglist_data *pgdata = NODE_DATA(sc->nid);
	unsigned long flags;
	LIST_HEAD(list), *pos, *next;
	struct page *page;
	int split = 0;

	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
	/* Take pin on all head pages to avoid freeing them under us */
	list_for_each_safe(pos, next, &pgdata->split_queue) {
		page = list_entry((void *)pos, struct page, mapping);
		page = compound_head(page);
		if (get_page_unless_zero(page)) {
			list_move(page_deferred_list(page), &list);
		} else {
			/* We lost race with put_compound_page() */
			list_del_init(page_deferred_list(page));
			pgdata->split_queue_len--;
		}
		if (!--sc->nr_to_scan)
			break;
	}
	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);

	list_for_each_safe(pos, next, &list) {
		page = list_entry((void *)pos, struct page, mapping);
		if (!trylock_page(page))
			goto next;
		/* split_huge_page() removes page from list on success */
		if (!split_huge_page(page))
			split++;
		unlock_page(page);
next:
		put_page(page);
	}

	spin_lock_irqsave(&pgdata->split_queue_lock, flags);
	list_splice_tail(&list, &pgdata->split_queue);
	spin_unlock_irqrestore(&pgdata->split_queue_lock, flags);

	/*
	 * Stop shrinker if we didn't split any page, but the queue is empty.
	 * This can happen if pages were freed under us.
	 */
	if (!split && list_empty(&pgdata->split_queue))
		return SHRINK_STOP;
	return split;
}

static struct shrinker deferred_split_shrinker = {
	.count_objects = deferred_split_count,
	.scan_objects = deferred_split_scan,
	.seeks = DEFAULT_SEEKS,
	.flags = SHRINKER_NUMA_AWARE,
};

#ifdef CONFIG_DEBUG_FS
static int split_huge_pages_set(void *data, u64 val)
{
	struct zone *zone;
	struct page *page;
	unsigned long pfn, max_zone_pfn;
	unsigned long total = 0, split = 0;

	if (val != 1)
		return -EINVAL;

	for_each_populated_zone(zone) {
		max_zone_pfn = zone_end_pfn(zone);
		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
			if (!pfn_valid(pfn))
				continue;

			page = pfn_to_page(pfn);
			if (!get_page_unless_zero(page))
				continue;

			if (zone != page_zone(page))
				goto next;

			if (!PageHead(page) || PageHuge(page) || !PageLRU(page))
				goto next;

			total++;
			lock_page(page);
			if (!split_huge_page(page))
				split++;
			unlock_page(page);
next:
			put_page(page);
		}
	}

	pr_info("%lu of %lu THP split\n", split, total);

	return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(split_huge_pages_fops, NULL, split_huge_pages_set,
		"%llu\n");

static int __init split_huge_pages_debugfs(void)
{
	void *ret;

	ret = debugfs_create_file("split_huge_pages", 0200, NULL, NULL,
			&split_huge_pages_fops);
	if (!ret)
		pr_warn("Failed to create split_huge_pages in debugfs\n");
	return 0;
}
late_initcall(split_huge_pages_debugfs);
#endif

#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
		struct page *page)
{
	struct vm_area_struct *vma = pvmw->vma;
	struct mm_struct *mm = vma->vm_mm;
	unsigned long address = pvmw->address;
	pmd_t pmdval;
	swp_entry_t entry;
	pmd_t pmdswp;

	if (!(pvmw->pmd && !pvmw->pte))
		return;

	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
	if (pmd_dirty(pmdval))
		set_page_dirty(page);
	entry = make_migration_entry(page, pmd_write(pmdval));
	pmdswp = swp_entry_to_pmd(entry);
	if (pmd_soft_dirty(pmdval))
		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
	page_remove_rmap(page, true);
	put_page(page);
}

void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
{
	struct vm_area_struct *vma = pvmw->vma;
	struct mm_struct *mm = vma->vm_mm;
	unsigned long address = pvmw->address;
	unsigned long mmun_start = address & HPAGE_PMD_MASK;
	pmd_t pmde;
	swp_entry_t entry;

	if (!(pvmw->pmd && !pvmw->pte))
		return;

	entry = pmd_to_swp_entry(*pvmw->pmd);
	get_page(new);
	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
	if (pmd_swp_soft_dirty(*pvmw->pmd))
		pmde = pmd_mksoft_dirty(pmde);
	if (is_write_migration_entry(entry))
		pmde = maybe_pmd_mkwrite(pmde, vma);

	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
	if (PageAnon(new))
		page_add_anon_rmap(new, vma, mmun_start, true);
	else
		page_add_file_rmap(new, true);
	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
		mlock_vma_page(new);
	update_mmu_cache_pmd(vma, address, pvmw->pmd);
}
#endif