Merge android-4.19 into android-4.19-stable

Signed-off-by: Ram Muthiah <rammuthiah@google.com>
Change-Id: If63a8097b5a21cf4b2abadf5c259f50262329840
Author: Ram Muthiah
Date:   2020-04-02 13:59:57 -07:00
Commit: 2b82910d12

509 changed files with 86443 additions and 75463 deletions


@@ -136,6 +136,10 @@
 			dynamic table installation which will install SSDT
 			tables to /sys/firmware/acpi/tables/dynamic.

+	acpi_no_watchdog	[HW,ACPI,WDT]
+			Ignore the ACPI-based watchdog interface (WDAT) and let
+			a native driver control the watchdog device instead.
+
 	acpi_rsdp=	[ACPI,EFI,KEXEC]
 			Pass the RSDP address to the kernel, mostly used
 			on machines running EFI runtime service to boot the


@@ -44,8 +44,15 @@ The AArch64 Tagged Address ABI has two stages of relaxation depending
 how the user addresses are used by the kernel:

 1. User addresses not accessed by the kernel but used for address space
-   management (e.g. ``mmap()``, ``mprotect()``, ``madvise()``). The use
-   of valid tagged pointers in this context is always allowed.
+   management (e.g. ``mprotect()``, ``madvise()``). The use of valid
+   tagged pointers in this context is allowed with the exception of
+   ``brk()``, ``mmap()`` and the ``new_address`` argument to
+   ``mremap()`` as these have the potential to alias with existing
+   user addresses.
+
+   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+   incorrectly accept valid tagged pointers for the ``brk()``,
+   ``mmap()`` and ``mremap()`` system calls.

 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to


@@ -165,6 +165,11 @@ Optional property:
 			2000mW, while on a 10'' tablet is around
 			4500mW.

+- tracks-low:		Indicates that the temperature sensor tracks the low
+  Type: bool		thresholds, so the governors may mitigate by ensuring
+			timing closures and other low temperature operating
+			issues.
+
 Note: The delay properties are bound to the maximum dT/dt (temperature
 derivative over time) in two situations for a thermal zone:
 (i)  - when passive cooling is activated (polling-delay-passive); and


@@ -25,8 +25,8 @@ suspend/resume and shutdown ordering.

 Device links allow representation of such dependencies in the driver core.

-In its standard form, a device link combines *both* dependency types:
-It guarantees correct suspend/resume and shutdown ordering between a
+In its standard or *managed* form, a device link combines *both* dependency
+types: It guarantees correct suspend/resume and shutdown ordering between a
 "supplier" device and its "consumer" devices, and it guarantees driver
 presence on the supplier. The consumer devices are not probed before the
 supplier is bound to a driver, and they're unbound before the supplier

@@ -59,18 +59,24 @@ device ``->probe`` callback or a boot-time PCI quirk.

 Another example for an inconsistent state would be a device link that
 represents a driver presence dependency, yet is added from the consumer's
-``->probe`` callback while the supplier hasn't probed yet: Had the driver
-core known about the device link earlier, it wouldn't have probed the
+``->probe`` callback while the supplier hasn't started to probe yet: Had the
+driver core known about the device link earlier, it wouldn't have probed the
 consumer in the first place. The onus is thus on the consumer to check
 presence of the supplier after adding the link, and defer probing on
-non-presence.
+non-presence. [Note that it is valid to create a link from the consumer's
+``->probe`` callback while the supplier is still probing, but the consumer must
+know that the supplier is functional already at the link creation time (that is
+the case, for instance, if the consumer has just acquired some resources that
+would not have been available had the supplier not been functional then).]

-If a device link is added in the ``->probe`` callback of the supplier or
-consumer driver, it is typically deleted in its ``->remove`` callback for
-symmetry. That way, if the driver is compiled as a module, the device
-link is added on module load and orderly deleted on unload. The same
-restrictions that apply to device link addition (e.g. exclusion of a
-parallel suspend/resume transition) apply equally to deletion.
+If a device link with ``DL_FLAG_STATELESS`` set (i.e. a stateless device link)
+is added in the ``->probe`` callback of the supplier or consumer driver, it is
+typically deleted in its ``->remove`` callback for symmetry. That way, if the
+driver is compiled as a module, the device link is added on module load and
+orderly deleted on unload. The same restrictions that apply to device link
+addition (e.g. exclusion of a parallel suspend/resume transition) apply equally
+to deletion. Device links managed by the driver core are deleted automatically
+by it.

@@ -83,22 +89,37 @@ link is added from the consumer's ``->probe`` callback: ``DL_FLAG_RPM_ACTIVE``
 can be specified to runtime resume the supplier upon addition of the
 device link. ``DL_FLAG_AUTOREMOVE_CONSUMER`` causes the device link to be
 automatically purged when the consumer fails to probe or later unbinds.
+This obviates the need to explicitly delete the link in the ``->remove``
+callback or in the error path of the ``->probe`` callback.
+
 Similarly, when the device link is added from supplier's ``->probe`` callback,
 ``DL_FLAG_AUTOREMOVE_SUPPLIER`` causes the device link to be automatically
 purged when the supplier fails to probe or later unbinds.
+
+If neither ``DL_FLAG_AUTOREMOVE_CONSUMER`` nor ``DL_FLAG_AUTOREMOVE_SUPPLIER``
+is set, ``DL_FLAG_AUTOPROBE_CONSUMER`` can be used to request the driver core
+to probe for a driver for the consumer driver on the link automatically after
+a driver has been bound to the supplier device.
+
+Note, however, that any combinations of ``DL_FLAG_AUTOREMOVE_CONSUMER``,
+``DL_FLAG_AUTOREMOVE_SUPPLIER`` or ``DL_FLAG_AUTOPROBE_CONSUMER`` with
+``DL_FLAG_STATELESS`` are invalid and cannot be used.

 Limitations
 ===========

-Driver authors should be aware that a driver presence dependency (i.e. when
-``DL_FLAG_STATELESS`` is not specified on link addition) may cause probing of
-the consumer to be deferred indefinitely. This can become a problem if the
-consumer is required to probe before a certain initcall level is reached.
-Worse, if the supplier driver is blacklisted or missing, the consumer will
-never be probed.
+Driver authors should be aware that a driver presence dependency for managed
+device links (i.e. when ``DL_FLAG_STATELESS`` is not specified on link addition)
+may cause probing of the consumer to be deferred indefinitely. This can become
+a problem if the consumer is required to probe before a certain initcall level
+is reached. Worse, if the supplier driver is blacklisted or missing, the
+consumer will never be probed.
+
+Moreover, managed device links cannot be deleted directly. They are deleted
+by the driver core when they are not necessary any more in accordance with the
+``DL_FLAG_AUTOREMOVE_CONSUMER`` and ``DL_FLAG_AUTOREMOVE_SUPPLIER`` flags.
+However, stateless device links (i.e. device links with ``DL_FLAG_STATELESS``
+set) are expected to be removed by whoever called :c:func:`device_link_add()`
+to add them with the help of either :c:func:`device_link_del()` or
+:c:func:`device_link_remove()`.

 Sometimes drivers depend on optional resources. They are able to operate
 in a degraded mode (reduced feature set or performance) when those resources

@@ -283,4 +304,4 @@ API
 ===

 .. kernel-doc:: drivers/base/core.c
-   :functions: device_link_add device_link_del
+   :functions: device_link_add device_link_del device_link_remove
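As a rough illustration of the managed-link rules described above, a consumer's ``->probe`` callback might create a link as sketched below. This is an editor's kernel-side sketch, not code from this commit: ``consumer_probe()`` and its ``sup`` argument are hypothetical, and the snippet is only meaningful inside a kernel module.

```c
#include <linux/device.h>
#include <linux/errno.h>

/* Hypothetical consumer ->probe: create a managed link so the driver
 * core enforces probe ordering and unbind ordering against the supplier
 * device "sup", which the driver has already looked up. */
static int consumer_probe(struct device *dev, struct device *sup)
{
	struct device_link *link;

	/* DL_FLAG_AUTOREMOVE_CONSUMER: the driver core purges the link
	 * when this consumer fails to probe or later unbinds, so the
	 * ->remove callback need not delete it explicitly. */
	link = device_link_add(dev, sup, DL_FLAG_AUTOREMOVE_CONSUMER);
	if (!link)
		return -EINVAL;

	/* Per the text above: check supplier presence after adding the
	 * link and defer probing if it has not bound to a driver yet. */
	if (!sup->driver)
		return -EPROBE_DEFER;

	return 0;
}
```

A stateless link (``DL_FLAG_STATELESS``) created here would instead have to be dropped in ``->remove`` via device_link_del() or device_link_remove() for symmetry.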


@@ -633,6 +633,17 @@ from a passphrase or other low-entropy user credential.
 FS_IOC_GET_ENCRYPTION_PWSALT is deprecated.  Instead, prefer to
 generate and manage any needed salt(s) in userspace.

+Getting a file's encryption nonce
+---------------------------------
+
+Since Linux v5.7, the ioctl FS_IOC_GET_ENCRYPTION_NONCE is supported.
+On encrypted files and directories it gets the inode's 16-byte nonce.
+On unencrypted files and directories, it fails with ENODATA.
+
+This ioctl can be useful for automated tests which verify that the
+encryption is being done correctly.  It is not needed for normal use
+of fscrypt.
+
 Adding keys
 -----------


@@ -627,3 +627,10 @@ in your dentry operations instead.
 	DCACHE_RCUACCESS is gone; having an RCU delay on dentry freeing is the
 	default.  DCACHE_NORCU opts out, and only d_alloc_pseudo() has any
 	business doing so.
+--
+[mandatory]
+	[should've been added in 2016] stale comment in finish_open()
+	nonwithstanding, failure exits in ->atomic_open() instances should
+	*NOT* fput() the file, no matter what.  Everything is handled by the
+	caller.


@@ -76,7 +76,7 @@ flowtable and add one rule to your forward chain.
         table inet x {
 		flowtable f {
-			hook ingress priority 0 devices = { eth0, eth1 };
+			hook ingress priority 0; devices = { eth0, eth1 };
 		}
                 chain y {
                         type filter hook forward priority 0; policy accept;


@@ -64,6 +64,7 @@ Currently, these files are in /proc/sys/vm:
 - swappiness
 - user_reserve_kbytes
 - vfs_cache_pressure
+- watermark_boost_factor
 - watermark_scale_factor
 - zone_reclaim_mode

@@ -872,6 +873,26 @@ ten times more freeable objects than there are.

 =============================================================

+watermark_boost_factor:
+
+This factor controls the level of reclaim when memory is being fragmented.
+It defines the percentage of the high watermark of a zone that will be
+reclaimed if pages of different mobility are being mixed within pageblocks.
+The intent is that compaction has less work to do in the future and to
+increase the success rate of future high-order allocations such as SLUB
+allocations, THP and hugetlbfs pages.
+
+To make it sensible with respect to the watermark_scale_factor parameter,
+the unit is in fractions of 10,000. The default value of 15,000 means
+that up to 150% of the high watermark will be reclaimed in the event of
+a pageblock being mixed due to fragmentation. The level of reclaim is
+determined by the number of fragmentation events that occurred in the
+recent past. If this value is smaller than a pageblock then a pageblocks
+worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
+of 0 will disable the feature.
+
+=============================================================
+
 watermark_scale_factor:

 This factor controls the aggressiveness of kswapd.  It defines the


@@ -0,0 +1,61 @@
+==============
+USB Raw Gadget
+==============
+
+USB Raw Gadget is a kernel module that provides a userspace interface for
+the USB Gadget subsystem. Essentially it allows to emulate USB devices
+from userspace. Enabled with CONFIG_USB_RAW_GADGET. Raw Gadget is
+currently a strictly debugging feature and shouldn't be used in
+production, use GadgetFS instead.
+
+Comparison to GadgetFS
+~~~~~~~~~~~~~~~~~~~~~~
+
+Raw Gadget is similar to GadgetFS, but provides a more low-level and
+direct access to the USB Gadget layer for the userspace. The key
+differences are:
+
+1. Every USB request is passed to the userspace to get a response, while
+   GadgetFS responds to some USB requests internally based on the provided
+   descriptors. However note, that the UDC driver might respond to some
+   requests on its own and never forward them to the Gadget layer.
+
+2. GadgetFS performs some sanity checks on the provided USB descriptors,
+   while Raw Gadget allows you to provide arbitrary data as responses to
+   USB requests.
+
+3. Raw Gadget provides a way to select a UDC device/driver to bind to,
+   while GadgetFS currently binds to the first available UDC.
+
+4. Raw Gadget uses predictable endpoint names (handles) across different
+   UDCs (as long as UDCs have enough endpoints of each required transfer
+   type).
+
+5. Raw Gadget has ioctl-based interface instead of a filesystem-based one.
+
+Userspace interface
+~~~~~~~~~~~~~~~~~~~
+
+To create a Raw Gadget instance open /dev/raw-gadget. Multiple raw-gadget
+instances (bound to different UDCs) can be used at the same time. The
+interaction with the opened file happens through the ioctl() calls, see
+comments in include/uapi/linux/usb/raw_gadget.h for details.
+
+The typical usage of Raw Gadget looks like:
+
+1. Open Raw Gadget instance via /dev/raw-gadget.
+2. Initialize the instance via USB_RAW_IOCTL_INIT.
+3. Launch the instance with USB_RAW_IOCTL_RUN.
+4. In a loop issue USB_RAW_IOCTL_EVENT_FETCH calls to receive events from
+   Raw Gadget and react to those depending on what kind of USB device
+   needs to be emulated.
+
+Potential future improvements
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Implement ioctl's for setting/clearing halt status on endpoints.
+
+- Reporting more events (suspend, resume, etc.) through
+  USB_RAW_IOCTL_EVENT_FETCH.
+
+- Support O_NONBLOCK I/O.


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 107
+SUBLEVEL = 113
 EXTRAVERSION =
 NAME = "People's Front"

@@ -827,7 +827,7 @@ LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
 KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
 KBUILD_LDFLAGS_MODULE += $(LD_FLAGS_LTO_CLANG)
-KBUILD_LDS_MODULE += $(srctree)/scripts/module-lto.lds
+KBUILD_LDFLAGS_MODULE += -T $(srctree)/scripts/module-lto.lds

 # allow disabling only clang LTO where needed
 DISABLE_LTO_CLANG := -fno-lto

@@ -843,7 +843,8 @@ export LTO_CFLAGS DISABLE_LTO
 endif

 ifdef CONFIG_CFI_CLANG
-cfi-clang-flags	+= -fsanitize=cfi -fno-sanitize-cfi-canonical-jump-tables
+cfi-clang-flags	+= -fsanitize=cfi -fno-sanitize-cfi-canonical-jump-tables \
+		   -fno-sanitize-blacklist
 DISABLE_CFI_CLANG := -fno-sanitize=cfi
 ifdef CONFIG_MODULES
 cfi-clang-flags	+= -fsanitize-cfi-cross-dso

@@ -1104,8 +1105,8 @@ endif
 autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h)

 quiet_cmd_autoksyms_h = GEN     $@
-      cmd_autoksyms_h = mkdir -p $(dir $@); $(CONFIG_SHELL) \
-      $(srctree)/scripts/gen_autoksyms.sh $@
+      cmd_autoksyms_h = mkdir -p $(dir $@); \
+      $(CONFIG_SHELL) $(srctree)/scripts/gen_autoksyms.sh $@

 $(autoksyms_h):
 	$(call cmd,autoksyms_h)

[File diff suppressed because it is too large]


@@ -17,9 +17,11 @@
   __arch_copy_to_user
   arch_setup_dma_ops
   arch_timer_read_ool_enabled
+  arm64_const_caps_ready
   atomic_notifier_call_chain
   atomic_notifier_chain_register
   atomic_notifier_chain_unregister
+  autoremove_wake_function
   bin2hex
   __bitmap_clear
   bitmap_find_next_zero_area_off

@@ -89,14 +91,17 @@
   contig_page_data
   cpu_bit_bitmap
   __cpuhp_setup_state
+  cpu_hwcap_keys
+  cpu_hwcaps
   __cpu_isolated_mask
   cpumask_next
   cpu_number
   __cpu_online_mask
   __cpu_possible_mask
   cpu_subsys
-  create_votable
+  crypto_alloc_shash
   crypto_destroy_tfm
+  crypto_shash_setkey
   _ctype
   debugfs_attr_read
   debugfs_attr_write
@@ -117,7 +122,6 @@
   delayed_work_timer_fn
   del_timer
   del_timer_sync
-  destroy_votable
   destroy_workqueue
   _dev_crit
   dev_driver_string

@@ -193,6 +197,7 @@
   __devm_request_region
   devm_request_threaded_irq
   __devm_reset_control_get
+  devm_reset_controller_register
   devm_snd_soc_register_component
   devm_thermal_zone_of_sensor_register
   devm_usb_get_phy_by_phandle

@@ -210,8 +215,10 @@
   dma_alloc_from_dev_coherent
   dma_buf_attach
   dma_buf_begin_cpu_access
+  dma_buf_begin_cpu_access_partial
   dma_buf_detach
   dma_buf_end_cpu_access
+  dma_buf_end_cpu_access_partial
   dma_buf_fd
   dma_buf_get
   dma_buf_get_flags

@@ -236,7 +243,6 @@
   dma_release_from_dev_coherent
   dma_request_slave_channel
   do_exit
-  do_gettimeofday
   down_read
   down_write
   drain_workqueue

@@ -244,6 +250,7 @@
   driver_unregister
   drm_panel_notifier_register
   drm_panel_notifier_unregister
+  dst_release
   dummy_dma_ops
   __dynamic_dev_dbg
   __dynamic_pr_debug

@@ -264,7 +271,6 @@
   find_next_bit
   find_next_zero_bit
   find_vma
-  find_votable
   finish_wait
   flush_delayed_work
   flush_work

@@ -300,11 +306,8 @@
   gen_pool_create
   gen_pool_destroy
   gen_pool_free
-  get_client_vote
   get_cpu_device
   get_device
-  get_effective_result
-  get_effective_result_locked
   __get_free_pages
   get_pid_task
   get_random_bytes
@@ -367,12 +370,15 @@
   input_close_device
   input_event
   input_free_device
+  input_mt_init_slots
+  input_mt_report_slot_state
   input_open_device
   input_register_device
   input_register_handle
   input_register_handler
   input_set_abs_params
   input_set_capability
+  input_set_timestamp
   input_unregister_device
   input_unregister_handle
   input_unregister_handler

@@ -395,6 +401,8 @@
   ipc_log_context_create
   ipc_log_context_destroy
   ipc_log_string
+  ip_route_output_flow
+  __ipv6_addr_type
   irq_chip_disable_parent
   irq_chip_enable_parent
   irq_chip_eoi_parent

@@ -481,6 +489,7 @@
   list_sort
   __local_bh_disable_ip
   __local_bh_enable_ip
+  lock_sock_nested
   mbox_client_txdone
   mbox_controller_register
   mbox_controller_unregister

@@ -512,6 +521,7 @@
   mod_node_page_state
   mod_timer
   module_kset
+  module_layout
   module_put
   __msecs_to_jiffies
   msleep

@@ -537,11 +547,16 @@
   netif_rx_ni
   netif_tx_wake_queue
   netlink_unicast
+  net_ratelimit
+  nf_register_net_hooks
+  nf_unregister_net_hooks
   nla_memcpy
   nla_put
+  __nlmsg_put
   no_llseek
   nonseekable_open
   nr_cpu_ids
+  ns_capable
   ns_to_timespec
   nvmem_cell_get
   nvmem_cell_put

@@ -551,7 +566,6 @@
   nvmem_device_write
   of_address_to_resource
   of_alias_get_id
-  of_batterydata_get_best_profile
   of_clk_add_provider
   of_clk_get
   of_clk_src_onecell_get
@@ -690,6 +704,7 @@
   print_hex_dump
   printk
   proc_dointvec
+  proc_mkdir_data
   pskb_expand_head
   __pskb_pull_tail
   put_device

@@ -710,11 +725,6 @@
   qmi_txn_cancel
   qmi_txn_init
   qmi_txn_wait
-  qtee_shmbridge_allocate_shm
-  qtee_shmbridge_deregister
-  qtee_shmbridge_free_shm
-  qtee_shmbridge_is_enabled
-  qtee_shmbridge_register
   queue_delayed_work_on
   queue_work_on
   ___ratelimit

@@ -731,6 +741,10 @@
   _raw_spin_unlock_bh
   _raw_spin_unlock_irq
   _raw_spin_unlock_irqrestore
+  _raw_write_lock
+  _raw_write_lock_bh
+  _raw_write_unlock
+  _raw_write_unlock_bh
   rb_erase
   rb_first
   rb_insert_color
@@ -738,19 +752,25 @@
   __rcu_read_lock
   __rcu_read_unlock
   rdev_get_drvdata
+  refcount_add_checked
   refcount_dec_and_test_checked
   refcount_dec_checked
   refcount_inc_checked
   refcount_inc_not_zero_checked
+  refcount_sub_and_test_checked
   __refrigerator
   regcache_cache_only
   regcache_mark_dirty
   regcache_sync
   regcache_sync_region
   __register_chrdev
+  register_inet6addr_notifier
+  register_inetaddr_notifier
   register_netdev
   register_netdevice
   register_netdevice_notifier
+  register_net_sysctl
+  register_pernet_subsys
   register_pm_notifier
   register_shrinker
   register_syscore_ops

@@ -774,12 +794,13 @@
   regulator_set_voltage
   regulator_sync_state
   release_firmware
+  release_sock
   remap_pfn_range
   remove_proc_entry
   request_firmware
+  request_firmware_into_buf
   request_firmware_nowait
   request_threaded_irq
-  rerun_election
   reset_control_assert
   reset_control_deassert
   rtc_time64_to_tm

@@ -794,12 +815,6 @@
   sched_setscheduler
   schedule
   schedule_timeout
-  scm_call2
-  scm_call2_atomic
-  scm_call2_noretry
-  scm_io_read
-  scm_io_write
-  scm_is_call_available
   scnprintf
   se_config_packing
   se_geni_clks_off
@@ -835,6 +850,8 @@
   skb_add_rx_frag
   skb_clone
   skb_copy
+  skb_copy_bits
+  skb_copy_expand
   skb_dequeue
   skb_pull
   skb_push

@@ -880,6 +897,8 @@
   snd_soc_rtdcom_lookup
   snd_soc_unregister_component
   snprintf
+  sock_create
+  sock_release
   sort
   __spi_register_driver
   spi_setup

@@ -894,6 +913,7 @@
   __stack_chk_fail
   __stack_chk_guard
   strcasecmp
+  strchr
   strcmp
   strcpy
   strim

@@ -922,6 +942,7 @@
   sysfs_create_files
   sysfs_create_group
   sysfs_create_groups
+  sysfs_create_link
   sysfs_notify
   sysfs_remove_file_ns
   sysfs_remove_group

@@ -937,6 +958,10 @@
   tasklet_init
   tasklet_kill
   __tasklet_schedule
+  tbn_cleanup
+  tbn_init
+  tbn_release_bus
+  tbn_request_bus
   thermal_cdev_update
   thermal_cooling_device_unregister
   thermal_of_cooling_device_register
@@ -961,13 +986,22 @@
   trace_raw_output_prep
   trace_seq_printf
   try_module_get
+  typec_register_partner
+  typec_register_port
+  typec_set_data_role
+  typec_set_pwr_role
+  typec_unregister_partner
   __udelay
   uncached_logk
   __unregister_chrdev
   unregister_chrdev_region
+  unregister_inet6addr_notifier
+  unregister_inetaddr_notifier
   unregister_netdev
   unregister_netdevice_notifier
   unregister_netdevice_queue
+  unregister_net_sysctl_table
+  unregister_pernet_subsys
   unregister_pm_notifier
   update_devfreq
   up_read

@@ -1013,7 +1047,6 @@
   vmap
   vm_mmap
   vm_munmap
-  vote
   vscnprintf
   vsnprintf
   vunmap

@@ -1088,7 +1121,6 @@
   pci_request_acs
   regulator_disable_deferred
   report_iommu_fault
-  scm_restore_sec_cfg
   __tracepoint_smmu_init
   __tracepoint_tlbi_end
   __tracepoint_tlbi_start
@@ -1110,22 +1142,14 @@
   br_dev_queue_push_xmit
   br_forward_finish
   br_handle_frame_finish
-  dst_release
   ip_do_fragment
   ip_route_input_noref
-  ip_route_output_flow
   neigh_destroy
   nf_br_ops
   nf_hook_slow
   nf_ipv6_ops
-  nf_register_net_hooks
-  nf_unregister_net_hooks
   pskb_trim_rcsum_slow
-  register_net_sysctl
-  register_pernet_subsys
   skb_pull_rcsum
-  unregister_net_sysctl_table
-  unregister_pernet_subsys

 # required by cam-sync.ko
   media_device_cleanup

@@ -1177,7 +1201,6 @@
   clk_unvote_rate_vdd
   clk_vote_rate_vdd
   devm_add_action
-  devm_reset_controller_register
   divider_get_val
   divider_recalc_rate
   divider_ro_round_rate_parent

@@ -1227,11 +1250,7 @@
   mempool_free
   mempool_kfree
   mempool_kmalloc
-  _raw_write_lock_bh
-  _raw_write_unlock_bh
   send_sig_info
-  sock_create
-  sock_release
   time64_to_tm

 # required by dm-default-key.ko
@@ -1293,6 +1312,38 @@
   usb_gadget_vbus_draw
   usb_get_maximum_speed

+# required by early_random.ko
+  add_hwgenerator_randomness
+
+# required by ebtable_broute.ko
+  br_should_route_hook
+  synchronize_net
+
+# required by ebtables.ko
+  audit_enabled
+  audit_log
+  nf_register_sockopt
+  nf_unregister_sockopt
+  __request_module
+  strscpy
+  __vmalloc
+  xt_check_match
+  xt_check_target
+  xt_compat_add_offset
+  xt_compat_calc_jump
+  xt_compat_flush_offsets
+  xt_compat_init_offsets
+  xt_compat_lock
+  xt_compat_match_offset
+  xt_compat_target_offset
+  xt_compat_unlock
+  xt_data_to_user
+  xt_find_match
+  xt_register_target
+  xt_request_find_match
+  xt_request_find_target
+  xt_unregister_target
+
 # required by eud.ko
   tty_flip_buffer_push
   uart_add_one_port

@@ -1305,15 +1356,7 @@
   of_find_i2c_device_by_node

 # required by ftm5.ko
-  input_mt_init_slots
-  input_mt_report_slot_state
-  input_set_timestamp
   proc_create
-  proc_mkdir_data
-  tbn_cleanup
-  tbn_init
-  tbn_release_bus
-  tbn_request_bus

 # required by google-battery.ko
   simple_strtoull
@@ -1382,14 +1425,13 @@
   add_wait_queue
   alloc_etherdev_mqs
   eth_mac_addr
-  ns_capable
   pci_clear_master
   pci_disable_device
   pci_enable_device
   pci_release_region
   pci_request_region
   remove_wait_queue
-  skb_copy_expand
+  vm_iomap_memory
   wait_woken
   woken_wake_function

@@ -1407,9 +1449,7 @@
   force_sig
   kgdb_connected
   kick_all_cpus_sync
-  refcount_add_checked
   refcount_add_not_zero_checked
-  refcount_sub_and_test_checked
   register_kprobe
   unregister_kprobe

@@ -1468,8 +1508,6 @@
   register_die_notifier

 # required by msm-vidc.ko
-  dma_buf_begin_cpu_access_partial
-  dma_buf_end_cpu_access_partial
   v4l2_ctrl_find
   v4l2_ctrl_get_name
   v4l2_ctrl_handler_free
@@ -1511,7 +1549,6 @@
   dma_fence_remove_callback
   getboottime64
   get_random_u32
-  get_seconds
   get_task_mm
   get_unmapped_area
   get_user_pages

@@ -1521,20 +1558,18 @@
   iterate_fd
   kern_addr_valid
   kernfs_create_link
+  ktime_get_real_seconds
   mmap_min_addr
   mmput
   noop_llseek
   of_devfreq_cooling_register
   plist_del
-  _raw_write_lock
-  _raw_write_unlock
   rb_last
   rb_prev
   security_mmap_addr
   set_page_dirty_lock
   sg_alloc_table_from_pages
   sysfs_create_bin_file
-  sysfs_create_link
   sysfs_remove_bin_file
   sysfs_remove_files
   trace_print_symbols_seq

@@ -1555,7 +1590,6 @@
 # required by msm_drm.ko
   adjust_managed_page_count
-  autoremove_wake_function
   bpf_trace_run11
   bpf_trace_run12
   __clk_get_hw
@@ -1788,6 +1822,7 @@
 invalidate_mapping_pages
 ioremap_page_range
 irq_domain_xlate_onecell
+irq_set_affinity_notifier
 kernfs_notify
 kernfs_put
 kthread_cancel_delayed_work_sync
@@ -1848,6 +1883,7 @@
 # required by msm_pm.ko
 arm_cpuidle_suspend
 clock_debug_print_enabled
+cpu_do_idle
 cpuidle_dev
 cpuidle_register_device
 cpuidle_register_driver
@@ -1891,6 +1927,9 @@
 arch_timer_read_counter
 set_uncached_logk_func
+
+# required by msm_scm.ko
+__arm_smccc_smc
 # required by msm_sharedmem.ko
 __uio_register_device
 uio_unregister_device
@@ -1902,7 +1941,6 @@
 __iowrite32_copy
 memblock_overlaps_memory
 of_prop_next_u32
-request_firmware_into_buf
 # required by phy-generic.ko
 regulator_set_current_limit
@@ -2049,8 +2087,6 @@
 # required by qpnp-battery.ko
 __class_register
-is_override_vote_enabled
-is_override_vote_enabled_locked
 # required by qpnp-power-on.ko
 boot_reason
@@ -2059,22 +2095,13 @@
 devm_input_allocate_device
 # required by qpnp-qgauge.ko
-of_batterydata_get_aged_profile_count
-of_batterydata_get_best_aged_profile
-of_batterydata_read_soh_aged_profiles
 rtc_class_close
 rtc_class_open
 rtc_read_time
 # required by qpnp-smb5-charger.ko
-get_client_vote_locked
 iio_channel_release
-is_client_vote_enabled
-is_client_vote_enabled_locked
-lock_votable
 of_find_node_by_phandle
-unlock_votable
-vote_override
 # required by qpnp_pdphy.ko
 device_get_named_child_node
@@ -2089,9 +2116,15 @@
 __arch_copy_in_user
 firmware_request_nowarn
 get_option
-qtee_shmbridge_query
 sigprocmask
+
+# required by qtee_shm_bridge.ko
+do_tlb_conf_fault_cb
+__flush_dcache_area
+gen_pool_best_fit
+gen_pool_set_algo
+gen_pool_virt_to_phys
 # required by regmap-spmi.ko
 spmi_ext_register_read
 spmi_ext_register_readl
@@ -2134,6 +2167,9 @@
 trace_print_hex_seq
 unregister_netdevice_many
+
+# required by rndis.ko
+dev_get_stats
 # required by roles.ko
 class_find_device
 device_connection_find_match
@@ -2151,8 +2187,164 @@
 devm_rtc_device_register
 rtc_update_irq
+# required by sctp.ko
+__bitmap_shift_right
+__bitmap_weight
+call_rcu
+compat_ip_getsockopt
+compat_ip_setsockopt
+compat_ipv6_getsockopt
+compat_ipv6_setsockopt
+compat_sock_common_getsockopt
+compat_sock_common_setsockopt
+_copy_from_iter_full
+crc32c
+crc32c_csum_stub
+__crc32c_le_shift
+crypto_shash_digest
+dev_get_by_index_rcu
+fl6_sock_lookup
+fl6_update_dst
+flex_array_alloc
+flex_array_free
+flex_array_get
+flex_array_prealloc
+flex_array_put
+icmp_err_convert
+icmpv6_err_convert
+in6_dev_finish_destroy
+inet6_add_offload
+inet6_add_protocol
+inet6_bind
+inet6_del_protocol
+inet6_destroy_sock
+inet6_getname
+inet6_ioctl
+inet6_register_protosw
+inet6_release
+inet6_unregister_protosw
+inet_accept
+inet_add_offload
+inet_add_protocol
+inet_addr_type
+inet_bind
+inet_ctl_sock_create
+inet_del_offload
+inet_del_protocol
+inet_get_local_port_range
+inet_getname
+inet_ioctl
+inet_recvmsg
+inet_register_protosw
+inet_release
+inet_sendmsg
+inet_shutdown
+inet_sk_set_state
+inet_sock_destruct
+inet_unregister_protosw
+iov_iter_revert
+ip6_dst_lookup_flow
+ip6_xmit
+__ip_dev_find
+ip_getsockopt
+__ip_queue_xmit
+ip_setsockopt
+ipv6_chk_addr
+ipv6_dup_options
+ipv6_getsockopt
+ipv6_setsockopt
+kfree_call_rcu
+napi_busy_loop
+net_enable_timestamp
+nf_conntrack_destroy
+nr_free_buffer_pages
+overflowuid
+percpu_counter_add_batch
+percpu_counter_batch
+percpu_counter_destroy
+__percpu_counter_init
+prandom_u32
+prepare_to_wait
+prepare_to_wait_exclusive
+proc_create_net_data
+proc_create_net_single
+proc_dointvec_minmax
+proc_dostring
+proc_doulongvec_minmax
+proto_register
+proto_unregister
+put_cmsg
+rcu_barrier
+remove_proc_subtree
+rfs_needed
+rhashtable_free_and_destroy
+rhashtable_insert_slow
+rhashtable_walk_enter
+rhashtable_walk_exit
+rhashtable_walk_next
+rhashtable_walk_start_check
+rhashtable_walk_stop
+rhltable_init
+rht_bucket_nested
+rht_bucket_nested_insert
+rps_cpu_mask
+rps_sock_flow_table
+security_inet_conn_established
+security_sctp_assoc_request
+security_sctp_bind_connect
+security_sctp_sk_clone
+send_sig
+sk_alloc
+__skb_checksum
+skb_copy_datagram_iter
+skb_queue_head
+skb_segment
+skb_set_owner_w
+sk_busy_loop_end
+sk_common_release
+sk_filter_trim_cap
+sk_free
+__sk_mem_reclaim
+__sk_mem_schedule
+sk_setup_caps
+snmp_get_cpu_field
+sock_alloc_file
+sock_common_getsockopt
+sock_common_setsockopt
+sock_i_ino
+sock_init_data
+sock_i_uid
+sock_kmalloc
+sock_no_mmap
+sock_no_sendpage
+sock_no_socketpair
+sock_prot_inuse_add
+__sock_recv_ts_and_drops
+sock_wake_async
+sock_wfree
+__wake_up_sync_key
+__xfrm_policy_check
+
+# required by sctp_diag.ko
+inet_diag_msg_attrs_fill
+inet_diag_msg_common_fill
+inet_diag_register
+inet_diag_unregister
+netlink_net_capable
+nla_reserve_64bit
+nla_reserve
+sock_diag_check_cookie
+sock_diag_save_cookie
+
+# required by sec_touch.ko
+filp_close
+filp_open
+input_mt_destroy_slots
+strncat
+sysfs_remove_link
+vfs_read
 # required by secure_buffer.ko
-scm_get_feat_version
 trace_print_array_seq
 # required by slg51000-regulator.ko
@@ -2259,16 +2451,11 @@
 typec_partner_register_altmode
 typec_partner_set_identity
 typec_port_register_altmode
-typec_register_partner
-typec_register_port
-typec_set_data_role
 typec_set_mode
 typec_set_orientation
 typec_set_pwr_opmode
-typec_set_pwr_role
 typec_set_vconn_role
 typec_unregister_altmode
-typec_unregister_partner
 typec_unregister_port
 usb_debug_root
@@ -2339,10 +2526,8 @@
 crypto_aead_setkey
 crypto_alloc_aead
 crypto_alloc_base
-crypto_alloc_shash
 crypto_alloc_skcipher
 crypto_shash_final
-crypto_shash_setkey
 crypto_shash_update
 default_llseek
 deregister_cld_cmd_cb
@@ -2353,7 +2538,6 @@
 ieee80211_frequency_to_channel
 ieee80211_get_channel
 ieee80211_hdrlen
-__ipv6_addr_type
 irq_set_affinity_hint
 mac_pton
 netif_tx_stop_all_queues
@@ -2363,7 +2547,6 @@
 nla_parse
 nla_put_64bit
 nla_strlcpy
-__nlmsg_put
 param_get_string
 param_ops_byte
 param_set_copystring
@@ -2379,8 +2562,6 @@
 proc_mkdir
 _raw_spin_trylock
 register_cld_cmd_cb
-register_inet6addr_notifier
-register_inetaddr_notifier
 register_netevent_notifier
 register_sysctl_table
 regulatory_set_wiphy_regd
@@ -2389,13 +2570,9 @@
 schedule_timeout_interruptible
 seq_vprintf
 set_cpus_allowed_ptr
-skb_copy_bits
 skb_queue_purge
 skip_spaces
-strchr
 strchrnul
-unregister_inet6addr_notifier
-unregister_inetaddr_notifier
 unregister_netevent_notifier
 unregister_sysctl_table
 vprintk
@@ -2442,20 +2619,6 @@
 # required by usb_f_gsi.ko
 dev_get_by_name
 kstrtou16_from_user
-rndis_deregister
-rndis_flow_control
-rndis_free_response
-rndis_get_next_response
-rndis_msg_parser
-rndis_register
-rndis_set_host_mac
-rndis_set_max_pkt_xfer
-rndis_set_param_dev
-rndis_set_param_medium
-rndis_set_param_vendor
-rndis_set_pkt_alignment_factor
-rndis_signal_connect
-rndis_uninit
 usb_composite_setup_continue
 usb_ep_autoconfig_by_name
 usb_ep_set_halt

View file

@@ -1,2 +1,4 @@
 [abi_whitelist]
-dummy_symbol
+# commonly used symbols
+module_layout
+__put_task_struct

View file

@@ -14,6 +14,8 @@
 #ifdef __ASSEMBLY__
 #define ASM_NL ` /* use '`' to mark new line in macro */
+#define __ALIGN .align 4
+#define __ALIGN_STR __stringify(__ALIGN)
 /* annotation for data we want in DCCM - if enabled in .config */
 .macro ARCFP_DATA nm

View file

@@ -525,11 +525,11 @@
 * Supply voltage supervisor on board will not allow opp50 so
 * disable it and set opp100 as suspend OPP.
 */
-opp50@300000000 {
+opp50-300000000 {
 status = "disabled";
 };
-opp100@600000000 {
+opp100-600000000 {
 opp-suspend;
 };
 };

View file

@@ -324,6 +324,7 @@
 device_type = "pci";
 ranges = <0x81000000 0 0 0x03000 0 0x00010000
 0x82000000 0 0x20013000 0x13000 0 0xffed000>;
+dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
 bus-range = <0x00 0xff>;
 #interrupt-cells = <1>;
 num-lanes = <1>;
@@ -376,6 +377,7 @@
 device_type = "pci";
 ranges = <0x81000000 0 0 0x03000 0 0x00010000
 0x82000000 0 0x30013000 0x13000 0 0xffed000>;
+dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
 bus-range = <0x00 0xff>;
 #interrupt-cells = <1>;
 num-lanes = <1>;

View file

@@ -81,3 +81,8 @@
 reg = <0x3fc>;
 };
 };
+
+&mmc3 {
+/* dra76x is not affected by i887 */
+max-frequency = <96000000>;
+};

View file

@@ -183,7 +183,6 @@
 pinctrl-0 = <&pinctrl_usdhc4>;
 bus-width = <8>;
 non-removable;
-vmmc-supply = <&vdd_emmc_1p8>;
 status = "disabled";
 };

View file

@@ -319,7 +319,6 @@
 assigned-clock-rates = <400000000>;
 bus-width = <8>;
 fsl,tuning-step = <2>;
-max-frequency = <100000000>;
 vmmc-supply = <&reg_module_3v3>;
 vqmmc-supply = <&reg_DCDC3>;
 non-removable;

View file

@@ -584,7 +584,7 @@
 };
 mdio0: mdio@2d24000 {
-compatible = "fsl,etsec2-mdio";
+compatible = "gianfar";
 device_type = "mdio";
 #address-cells = <1>;
 #size-cells = <0>;
@@ -593,7 +593,7 @@
 };
 mdio1: mdio@2d64000 {
-compatible = "fsl,etsec2-mdio";
+compatible = "gianfar";
 device_type = "mdio";
 #address-cells = <1>;
 #size-cells = <0>;

View file

@@ -45,7 +45,7 @@
 /* DAC */
 format = "i2s";
 mclk-fs = <256>;
-frame-inversion = <1>;
+frame-inversion;
 cpu {
 sound-dai = <&sti_uni_player2>;
 };

View file

@@ -103,6 +103,8 @@ static bool __init cntvct_functional(void)
 * this.
 */
 np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+if (!np)
+np = of_find_compatible_node(NULL, NULL, "arm,armv8-timer");
 if (!np)
 goto out_put;

View file

@@ -89,6 +89,8 @@ AFLAGS_suspend-imx6.o :=-Wa,-march=armv7-a
 obj-$(CONFIG_SOC_IMX6) += suspend-imx6.o
 obj-$(CONFIG_SOC_IMX53) += suspend-imx53.o
 endif
+AFLAGS_resume-imx6.o :=-Wa,-march=armv7-a
+obj-$(CONFIG_SOC_IMX6) += resume-imx6.o
 obj-$(CONFIG_SOC_IMX6) += pm-imx6.o
 obj-$(CONFIG_SOC_IMX1) += mach-imx1.o

View file

@@ -103,17 +103,17 @@ void imx_cpu_die(unsigned int cpu);
 int imx_cpu_kill(unsigned int cpu);
 #ifdef CONFIG_SUSPEND
-void v7_cpu_resume(void);
 void imx53_suspend(void __iomem *ocram_vbase);
 extern const u32 imx53_suspend_sz;
 void imx6_suspend(void __iomem *ocram_vbase);
 #else
-static inline void v7_cpu_resume(void) {}
 static inline void imx53_suspend(void __iomem *ocram_vbase) {}
 static const u32 imx53_suspend_sz;
 static inline void imx6_suspend(void __iomem *ocram_vbase) {}
 #endif
+void v7_cpu_resume(void);
 void imx6_pm_ccm_init(const char *ccm_compat);
 void imx6q_pm_init(void);
 void imx6dl_pm_init(void);

View file

@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright 2014 Freescale Semiconductor, Inc.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/asm-offsets.h>
+#include <asm/hardware/cache-l2x0.h>
+#include "hardware.h"
+
+/*
+ * The following code must assume it is running from physical address
+ * where absolute virtual addresses to the data section have to be
+ * turned into relative ones.
+ */
+
+ENTRY(v7_cpu_resume)
+bl v7_invalidate_l1
+#ifdef CONFIG_CACHE_L2X0
+bl l2c310_early_resume
+#endif
+b cpu_resume
+ENDPROC(v7_cpu_resume)

View file

@@ -333,17 +333,3 @@ resume:
 ret lr
 ENDPROC(imx6_suspend)
-
-/*
- * The following code must assume it is running from physical address
- * where absolute virtual addresses to the data section have to be
- * turned into relative ones.
- */
-
-ENTRY(v7_cpu_resume)
-bl v7_invalidate_l1
-#ifdef CONFIG_CACHE_L2X0
-bl l2c310_early_resume
-#endif
-b cpu_resume
-ENDPROC(v7_cpu_resume)

View file

@@ -33,12 +33,12 @@ CONFIG_BPF_SYSCALL=y
 CONFIG_BPF_JIT_ALWAYS_ON=y
 # CONFIG_RSEQ is not set
 CONFIG_EMBEDDED=y
-# CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
 # CONFIG_SLAB_MERGE_DEFAULT is not set
 CONFIG_SLAB_FREELIST_RANDOM=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_PROFILING=y
+# CONFIG_ZONE_DMA32 is not set
 CONFIG_ARCH_HISI=y
 CONFIG_ARCH_QCOM=y
 CONFIG_PCI=y
@@ -77,8 +77,9 @@ CONFIG_ARM_SCPI_PROTOCOL=y
 CONFIG_ARM64_CRYPTO=y
 CONFIG_CRYPTO_SHA2_ARM64_CE=y
 CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
-CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
 CONFIG_LTO_CLANG=y
+CONFIG_CFI_CLANG=y
 CONFIG_SHADOW_CALL_STACK=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
@@ -88,6 +89,7 @@ CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_GKI_HACKS_TO_FIX=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=y
+CONFIG_CLEANCACHE=y
 CONFIG_CMA=y
 CONFIG_CMA_AREAS=16
 CONFIG_ZSMALLOC=y
@@ -206,7 +208,6 @@ CONFIG_RFKILL=y
 # CONFIG_UEVENT_HELPER is not set
 # CONFIG_FW_CACHE is not set
 # CONFIG_ALLOW_DEV_COREDUMP is not set
-CONFIG_DEBUG_DEVRES=y
 CONFIG_DMA_CMA=y
 CONFIG_GNSS=y
 CONFIG_ZRAM=y
@@ -301,6 +302,7 @@ CONFIG_THERMAL_GOV_USER_SPACE=y
 CONFIG_CPU_THERMAL=y
 CONFIG_DEVFREQ_THERMAL=y
 CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
 CONFIG_MFD_ACT8945A=y
 CONFIG_MFD_SYSCON=y
 CONFIG_REGULATOR=y
@@ -311,6 +313,10 @@ CONFIG_MEDIA_CONTROLLER=y
 # CONFIG_VGA_ARB is not set
 CONFIG_DRM=y
 # CONFIG_DRM_FBDEV_EMULATION is not set
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+# CONFIG_BACKLIGHT_GENERIC is not set
 CONFIG_SOUND=y
 CONFIG_SND=y
 CONFIG_SND_HRTIMER=y
@@ -335,10 +341,12 @@ CONFIG_USB_OTG=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_CONFIGFS=y
 CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
 CONFIG_USB_CONFIGFS_F_FS=y
 CONFIG_USB_CONFIGFS_F_ACC=y
 CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
 CONFIG_USB_CONFIGFS_F_MIDI=y
+CONFIG_TYPEC=y
 CONFIG_MMC=y
 # CONFIG_PWRSEQ_EMMC is not set
 # CONFIG_PWRSEQ_SIMPLE is not set
@@ -366,7 +374,6 @@ CONFIG_DEVFREQ_GOV_PERFORMANCE=y
 CONFIG_DEVFREQ_GOV_POWERSAVE=y
 CONFIG_DEVFREQ_GOV_USERSPACE=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
-CONFIG_EXTCON=y
 CONFIG_IIO=y
 CONFIG_PWM=y
 CONFIG_QCOM_PDC=y
@@ -450,6 +457,7 @@ CONFIG_NLS_MAC_TURKISH=y
 CONFIG_NLS_UTF8=y
 CONFIG_UNICODE=y
 CONFIG_SECURITY=y
+CONFIG_SECURITYFS=y
 CONFIG_SECURITY_NETWORK=y
 CONFIG_HARDENED_USERCOPY=y
 CONFIG_SECURITY_SELINUX=y

View file

@@ -220,7 +220,7 @@ static inline unsigned long kaslr_offset(void)
 ((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))
 #define untagged_addr(addr) ({ \
-u64 __addr = (__force u64)addr; \
+u64 __addr = (__force u64)(addr); \
 __addr &= __untagged_addr(__addr); \
 (__force __typeof__(addr))__addr; \
 })

View file

@@ -32,6 +32,10 @@ extern void __cpu_copy_user_page(void *to, const void *from,
 extern void copy_page(void *to, const void *from);
 extern void clear_page(void *to);
+#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
+alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+
 #define clear_user_page(addr,vaddr,pg) __cpu_clear_user_page(addr, vaddr)
 #define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)

View file

@@ -935,11 +935,22 @@ void tick_broadcast(const struct cpumask *mask)
 }
 #endif
+
+/*
+ * The number of CPUs online, not counting this CPU (which may not be
+ * fully online and so not counted in num_online_cpus()).
+ */
+static inline unsigned int num_other_online_cpus(void)
+{
+unsigned int this_cpu_online = cpu_online(smp_processor_id());
+
+return num_online_cpus() - this_cpu_online;
+}
 void smp_send_stop(void)
 {
 unsigned long timeout;
-if (num_online_cpus() > 1) {
+if (num_other_online_cpus()) {
 cpumask_t mask;
 cpumask_copy(&mask, cpu_online_mask);
@@ -952,10 +963,10 @@ void smp_send_stop(void)
 /* Wait up to one second for other CPUs to stop */
 timeout = USEC_PER_SEC;
-while (num_online_cpus() > 1 && timeout--)
+while (num_other_online_cpus() && timeout--)
 udelay(1);
-if (num_online_cpus() > 1)
+if (num_other_online_cpus())
 pr_warning("SMP: failed to stop secondary CPUs %*pbl\n",
 cpumask_pr_args(cpu_online_mask));
@@ -978,7 +989,11 @@ void crash_smp_send_stop(void)
 cpus_stopped = 1;
-if (num_online_cpus() == 1) {
+/*
+ * If this cpu is the only one alive at this point in time, online or
+ * not, there are no stop messages to be sent around, so just back out.
+ */
+if (num_other_online_cpus() == 0) {
 sdei_mask_local_cpu();
 return;
 }
@@ -986,7 +1001,7 @@ void crash_smp_send_stop(void)
 cpumask_copy(&mask, cpu_online_mask);
 cpumask_clear_cpu(smp_processor_id(), &mask);
-atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
+atomic_set(&waiting_for_crash_ipi, num_other_online_cpus());
 pr_crit("SMP: stopping secondary CPUs\n");
 smp_cross_call(&mask, IPI_CPU_CRASH_STOP);

View file

@@ -28,9 +28,11 @@
 #include <linux/dma-contiguous.h>
 #include <linux/vmalloc.h>
 #include <linux/swiotlb.h>
+#include <linux/dma-removed.h>
 #include <linux/pci.h>
 #include <asm/cacheflush.h>
+#include <asm/tlbflush.h>
 static int swiotlb __ro_after_init;
@@ -321,6 +323,56 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 return 1;
 }
+static void *arm64_dma_remap(struct device *dev, void *cpu_addr,
+dma_addr_t handle, size_t size,
+unsigned long attrs)
+{
+struct page *page = phys_to_page(dma_to_phys(dev, handle));
+bool coherent = is_device_dma_coherent(dev);
+pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
+unsigned long offset = handle & ~PAGE_MASK;
+struct vm_struct *area;
+unsigned long addr;
+
+size = PAGE_ALIGN(size + offset);
+
+/*
+ * DMA allocation can be mapped to user space, so lets
+ * set VM_USERMAP flags too.
+ */
+area = get_vm_area(size, VM_USERMAP);
+if (!area)
+return NULL;
+
+addr = (unsigned long)area->addr;
+area->phys_addr = __pfn_to_phys(page_to_pfn(page));
+
+if (ioremap_page_range(addr, addr + size, area->phys_addr, prot)) {
+vunmap((void *)addr);
+return NULL;
+}
+
+return (void *)addr + offset;
+}
+
+static void arm64_dma_unremap(struct device *dev, void *remapped_addr,
+size_t size)
+{
+struct vm_struct *area;
+
+size = PAGE_ALIGN(size);
+remapped_addr = (void *)((unsigned long)remapped_addr & PAGE_MASK);
+
+area = find_vm_area(remapped_addr);
+if (!area) {
+WARN(1, "trying to free invalid coherent area: %pK\n",
+remapped_addr);
+return;
+}
+vunmap(remapped_addr);
+flush_tlb_kernel_range((unsigned long)remapped_addr,
+(unsigned long)(remapped_addr + size));
+}
+
 static int __swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t addr)
 {
 if (swiotlb)
@@ -343,6 +395,8 @@ static const struct dma_map_ops arm64_swiotlb_dma_ops = {
 .sync_sg_for_device = __swiotlb_sync_sg_for_device,
 .dma_supported = __swiotlb_dma_supported,
 .mapping_error = __swiotlb_dma_mapping_error,
+.remap = arm64_dma_remap,
+.unremap = arm64_dma_unremap,
 };
 static int __init atomic_pool_init(void)
@@ -888,8 +942,12 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 const struct iommu_ops *iommu, bool coherent)
 {
-if (!dev->dma_ops)
-dev->dma_ops = &arm64_swiotlb_dma_ops;
+if (!dev->dma_ops) {
+if (dev->removed_mem)
+set_dma_ops(dev, &removed_dma_ops);
+else
+dev->dma_ops = &arm64_swiotlb_dma_ops;
+}
 dev->archdata.dma_coherent = coherent;
 __iommu_setup_dma_ops(dev, dma_base, size, iommu);

View file

@@ -361,6 +361,27 @@ static phys_addr_t pgd_pgtable_alloc(void)
 return __pa(ptr);
 }
+/**
+ * create_pgtable_mapping - create a pagetable mapping for given
+ * physical start and end addresses.
+ * @start: physical start address.
+ * @end: physical end address.
+ */
+void create_pgtable_mapping(phys_addr_t start, phys_addr_t end)
+{
+unsigned long virt = (unsigned long)phys_to_virt(start);
+
+if (virt < VMALLOC_START) {
+pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
+&start, virt);
+return;
+}
+
+__create_pgd_mapping(init_mm.pgd, start, virt, end - start,
+PAGE_KERNEL, NULL, 0);
+}
+EXPORT_SYMBOL_GPL(create_pgtable_mapping);
+
 /*
 * This function can only be used to modify existing table entries,
 * without allocating new levels of table. Note that this permits the

View file

@@ -134,7 +134,7 @@ void release_vpe(struct vpe *v)
 {
 list_del(&v->list);
 if (v->load_addr)
-release_progmem(v);
+release_progmem(v->load_addr);
 kfree(v);
 }

View file

@@ -2188,11 +2188,13 @@ static struct cpu_spec * __init setup_cpu_spec(unsigned long offset,
 * oprofile_cpu_type already has a value, then we are
 * possibly overriding a real PVR with a logical one,
 * and, in that case, keep the current value for
-* oprofile_cpu_type.
+* oprofile_cpu_type. Futhermore, let's ensure that the
+* fix for the PMAO bug is enabled on compatibility mode.
 */
 if (old.oprofile_cpu_type != NULL) {
 t->oprofile_cpu_type = old.oprofile_cpu_type;
 t->oprofile_type = old.oprofile_type;
+t->cpu_features |= old.cpu_features & CPU_FTR_PMAO_BUG;
 }
 }

View file

@@ -322,6 +322,12 @@ SECTIONS
 *(.branch_lt)
 }
+#ifdef CONFIG_DEBUG_INFO_BTF
+.BTF : AT(ADDR(.BTF) - LOAD_OFFSET) {
+*(.BTF)
+}
+#endif
+
 .opd : AT(ADDR(.opd) - LOAD_OFFSET) {
 __start_opd = .;
 KEEP(*(.opd))

View file

@@ -16,6 +16,10 @@
 #include <linux/err.h>
 #include <linux/errno.h>
 #include <linux/moduleloader.h>
+#include <linux/vmalloc.h>
+#include <linux/sizes.h>
+#include <asm/pgtable.h>
+#include <asm/sections.h>
 static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
 {
@@ -394,3 +398,15 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
 return 0;
 }
+
+#if defined(CONFIG_MMU) && defined(CONFIG_64BIT)
+#define VMALLOC_MODULE_START \
+max(PFN_ALIGN((unsigned long)&_end - SZ_2G), VMALLOC_START)
+void *module_alloc(unsigned long size)
+{
+return __vmalloc_node_range(size, 1, VMALLOC_MODULE_START,
+VMALLOC_END, GFP_KERNEL,
+PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+__builtin_return_address(0));
+}
+#endif

View file

@@ -140,7 +140,7 @@ all: bzImage
 #KBUILD_IMAGE is necessary for packaging targets like rpm-pkg, deb-pkg...
 KBUILD_IMAGE := $(boot)/bzImage
-install: vmlinux
+install:
 $(Q)$(MAKE) $(build)=$(boot) $@
 bzImage: vmlinux

View file

@@ -46,7 +46,7 @@ quiet_cmd_ar = AR $@
 $(obj)/startup.a: $(OBJECTS) FORCE
 $(call if_changed,ar)
-install: $(CONFIGURE) $(obj)/bzImage
+install:
 sh -x $(srctree)/$(obj)/install.sh $(KERNELRELEASE) $(obj)/bzImage \
 System.map "$(INSTALL_PATH)"

View file

@@ -228,7 +228,7 @@ struct qdio_buffer {
 * @sbal: absolute SBAL address
 */
 struct sl_element {
-unsigned long sbal;
+u64 sbal;
 } __attribute__ ((packed));
 /**

View file

@@ -29,9 +29,6 @@
 #define __PAGE_OFFSET __PAGE_OFFSET_BASE
 #include "../../mm/ident_map.c"
-/* Used by pgtable.h asm code to force instruction serialization. */
-unsigned long __force_order;
-
 /* Used to track our page table allocation area. */
 struct alloc_pgt_data {
 unsigned char *pgt_buf;

View file

@@ -33,7 +33,6 @@ CONFIG_BPF_SYSCALL=y
 CONFIG_BPF_JIT_ALWAYS_ON=y
 # CONFIG_RSEQ is not set
 CONFIG_EMBEDDED=y
-# CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_COMPAT_BRK is not set
 # CONFIG_SLAB_MERGE_DEFAULT is not set
 CONFIG_PROFILING=y
@@ -50,7 +49,7 @@ CONFIG_CPU_FREQ_GOV_POWERSAVE=y
 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
 CONFIG_PCI_MSI=y
 CONFIG_IA32_EMULATION=y
-CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
 CONFIG_LTO_CLANG=y
 CONFIG_CFI_CLANG=y
 CONFIG_MODULES=y
@@ -61,6 +60,7 @@ CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_GKI_HACKS_TO_FIX=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=y
+CONFIG_CLEANCACHE=y
 CONFIG_ZSMALLOC=y
 CONFIG_NET=y
 CONFIG_PACKET=y
@@ -177,7 +177,6 @@ CONFIG_RFKILL=y
 # CONFIG_UEVENT_HELPER is not set
 # CONFIG_FW_CACHE is not set
 # CONFIG_ALLOW_DEV_COREDUMP is not set
-CONFIG_DEBUG_DEVRES=y
 CONFIG_GNSS=y
 CONFIG_OF=y
 CONFIG_ZRAM=y
@@ -262,6 +261,8 @@ CONFIG_GPIOLIB=y
 CONFIG_DEVFREQ_THERMAL=y
 # CONFIG_X86_PKG_TEMP_THERMAL is not set
 CONFIG_WATCHDOG=y
+CONFIG_WATCHDOG_CORE=y
+CONFIG_MFD_SYSCON=y
 CONFIG_REGULATOR=y
 CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_MEDIA_SUPPORT=y
@@ -295,6 +296,7 @@ CONFIG_USB_HIDDEV=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_CONFIGFS=y
 CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
 CONFIG_USB_CONFIGFS_F_FS=y
 CONFIG_USB_CONFIGFS_F_ACC=y
 CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
@@ -393,6 +395,7 @@ CONFIG_NLS_MAC_TURKISH=y
 CONFIG_NLS_UTF8=y
 CONFIG_UNICODE=y
 CONFIG_SECURITY=y
+CONFIG_SECURITYFS=y
 CONFIG_SECURITY_NETWORK=y
 CONFIG_HARDENED_USERCOPY=y
 CONFIG_SECURITY_SELINUX=y

View file

@@ -193,20 +193,18 @@ static int amd_uncore_event_init(struct perf_event *event)
 
 	/*
 	 * NB and Last level cache counters (MSRs) are shared across all cores
-	 * that share the same NB / Last level cache. Interrupts can be directed
-	 * to a single target core, however, event counts generated by processes
-	 * running on other cores cannot be masked out. So we do not support
-	 * sampling and per-thread events.
+	 * that share the same NB / Last level cache. On family 16h and below,
+	 * Interrupts can be directed to a single target core, however, event
+	 * counts generated by processes running on other cores cannot be masked
+	 * out. So we do not support sampling and per-thread events via
+	 * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
 	 */
-	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
-		return -EINVAL;
 
 	/* NB and Last level cache counters do not have usr/os/guest/host bits */
 	if (event->attr.exclude_user || event->attr.exclude_kernel ||
 	    event->attr.exclude_host || event->attr.exclude_guest)
 		return -EINVAL;
 
-	/* and we do not enable counter overflow interrupts */
 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
 	hwc->idx = -1;
@@ -314,6 +312,7 @@ static struct pmu amd_nb_pmu = {
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
+	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
 };
 
 static struct pmu amd_llc_pmu = {
@@ -324,6 +323,7 @@ static struct pmu amd_llc_pmu = {
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
+	.capabilities	= PERF_PMU_CAP_NO_INTERRUPT,
 };
 
 static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)

View file

@@ -387,7 +387,7 @@ static __always_inline void setup_pku(struct cpuinfo_x86 *c)
 	 * cpuid bit to be set.  We need to ensure that we
 	 * update that bit in this CPU's "cpu_info".
 	 */
-	get_cpu_cap(c);
+	set_cpu_cap(c, X86_FEATURE_OSPKE);
 }
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS

View file

@@ -489,17 +489,18 @@ static void intel_ppin_init(struct cpuinfo_x86 *c)
 			return;
 
 		if ((val & 3UL) == 1UL) {
-			/* PPIN available but disabled: */
+			/* PPIN locked in disabled mode */
 			return;
 		}
 
-		/* If PPIN is disabled, but not locked, try to enable: */
-		if (!(val & 3UL)) {
+		/* If PPIN is disabled, try to enable */
+		if (!(val & 2UL)) {
 			wrmsrl_safe(MSR_PPIN_CTL, val | 2UL);
 			rdmsrl_safe(MSR_PPIN_CTL, &val);
 		}
 
-		if ((val & 3UL) == 2UL)
+		/* Is the enable bit set? */
+		if (val & 2UL)
 			set_cpu_cap(c, X86_FEATURE_INTEL_PPIN);
 	}
 }

View file

@@ -5112,6 +5112,7 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
 	ctxt->fetch.ptr = ctxt->fetch.data;
 	ctxt->fetch.end = ctxt->fetch.data + insn_len;
 	ctxt->opcode_len = 1;
+	ctxt->intercept = x86_intercept_none;
 	if (insn_len > 0)
 		memcpy(ctxt->fetch.data, insn, insn_len);
 	else {

View file

@@ -1298,6 +1298,47 @@ static void shrink_ple_window(struct kvm_vcpu *vcpu)
 			    control->pause_filter_count, old);
 }
 
+/*
+ * The default MMIO mask is a single bit (excluding the present bit),
+ * which could conflict with the memory encryption bit. Check for
+ * memory encryption support and override the default MMIO mask if
+ * memory encryption is enabled.
+ */
+static __init void svm_adjust_mmio_mask(void)
+{
+	unsigned int enc_bit, mask_bit;
+	u64 msr, mask;
+
+	/* If there is no memory encryption support, use existing mask */
+	if (cpuid_eax(0x80000000) < 0x8000001f)
+		return;
+
+	/* If memory encryption is not enabled, use existing mask */
+	rdmsrl(MSR_K8_SYSCFG, msr);
+	if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+		return;
+
+	enc_bit = cpuid_ebx(0x8000001f) & 0x3f;
+	mask_bit = boot_cpu_data.x86_phys_bits;
+
+	/* Increment the mask bit if it is the same as the encryption bit */
+	if (enc_bit == mask_bit)
+		mask_bit++;
+
+	/*
+	 * If the mask bit location is below 52, then some bits above the
+	 * physical addressing limit will always be reserved, so use the
+	 * rsvd_bits() function to generate the mask. This mask, along with
+	 * the present bit, will be used to generate a page fault with
+	 * PFER.RSV = 1.
+	 *
+	 * If the mask bit location is 52 (or above), then clear the mask.
+	 */
+	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;
+
+	kvm_mmu_set_mmio_spte_mask(mask, mask);
+}
+
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
@@ -1352,6 +1393,8 @@ static __init int svm_hardware_setup(void)
 		}
 	}
 
+	svm_adjust_mmio_mask();
+
 	for_each_possible_cpu(cpu) {
 		r = svm_cpu_init(cpu);
 		if (r)

View file

@@ -13724,6 +13724,7 @@ static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
 	else
 		intercept = nested_vmx_check_io_bitmaps(vcpu, port, size);
 
+	/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED. */
 	return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
 }
 
@@ -13753,6 +13754,20 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
 	case x86_intercept_outs:
 		return vmx_check_intercept_io(vcpu, info);
 
+	case x86_intercept_lgdt:
+	case x86_intercept_lidt:
+	case x86_intercept_lldt:
+	case x86_intercept_ltr:
+	case x86_intercept_sgdt:
+	case x86_intercept_sidt:
+	case x86_intercept_sldt:
+	case x86_intercept_str:
+		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_DESC))
+			return X86EMUL_CONTINUE;
+
+		/* FIXME: produce nested vmexit and return X86EMUL_INTERCEPTED. */
+		break;
+
 	/* TODO: check more intercepts... */
 	default:
 		break;

View file

@@ -8693,12 +8693,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.apf.msr_val = 0;
-
-	vcpu_load(vcpu);
-	kvm_mmu_unload(vcpu);
-	vcpu_put(vcpu);
-
 	kvm_arch_vcpu_free(vcpu);
 }

View file

@@ -273,7 +273,7 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 	return pmd_k;
 }
 
-void vmalloc_sync_all(void)
+static void vmalloc_sync(void)
 {
 	unsigned long address;
 
@@ -300,6 +300,16 @@ void vmalloc_sync_all(void)
 	}
 }
 
+void vmalloc_sync_mappings(void)
+{
+	vmalloc_sync();
+}
+
+void vmalloc_sync_unmappings(void)
+{
+	vmalloc_sync();
+}
+
 /*
  * 32-bit:
  *
@@ -402,11 +412,23 @@ static void dump_pagetable(unsigned long address)
 
 #else /* CONFIG_X86_64: */
 
-void vmalloc_sync_all(void)
+void vmalloc_sync_mappings(void)
 {
+	/*
+	 * 64-bit mappings might allocate new p4d/pud pages
+	 * that need to be propagated to all tasks' PGDs.
+	 */
 	sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END);
 }
 
+void vmalloc_sync_unmappings(void)
+{
+	/*
+	 * Unmappings never allocate or free p4d/pud pages.
+	 * No work is required here.
+	 */
+}
+
 /*
  * 64-bit:
  *

View file

@@ -313,7 +313,7 @@ void efi_sync_low_kernel_mappings(void)
 static inline phys_addr_t
 virt_to_phys_or_null_size(void *va, unsigned long size)
 {
-	bool bad_size;
+	phys_addr_t pa;
 
 	if (!va)
 		return 0;
@@ -321,16 +321,13 @@ virt_to_phys_or_null_size(void *va, unsigned long size)
 	if (virt_addr_valid(va))
 		return virt_to_phys(va);
 
-	/*
-	 * A fully aligned variable on the stack is guaranteed not to
-	 * cross a page bounary. Try to catch strings on the stack by
-	 * checking that 'size' is a power of two.
-	 */
-	bad_size = size > PAGE_SIZE || !is_power_of_2(size);
+	pa = slow_virt_to_phys(va);
 
-	WARN_ON(!IS_ALIGNED((unsigned long)va, size) || bad_size);
+	/* check if the object crosses a page boundary */
+	if (WARN_ON((pa ^ (pa + size - 1)) & PAGE_MASK))
+		return 0;
 
-	return slow_virt_to_phys(va);
+	return pa;
 }
 
 #define virt_to_phys_or_null(addr)
@@ -790,6 +787,8 @@ static efi_status_t
 efi_thunk_get_variable(efi_char16_t *name, efi_guid_t *vendor,
 		       u32 *attr, unsigned long *data_size, void *data)
 {
+	u8 buf[24] __aligned(8);
+	efi_guid_t *vnd = PTR_ALIGN((efi_guid_t *)buf, sizeof(*vnd));
 	efi_status_t status;
 	u32 phys_name, phys_vendor, phys_attr;
 	u32 phys_data_size, phys_data;
@@ -797,14 +796,19 @@ efi_thunk_get_variable(efi_char16_t *name, efi_guid_t *vendor,
 
 	spin_lock_irqsave(&efi_runtime_lock, flags);
 
+	*vnd = *vendor;
+
 	phys_data_size = virt_to_phys_or_null(data_size);
-	phys_vendor = virt_to_phys_or_null(vendor);
+	phys_vendor = virt_to_phys_or_null(vnd);
 	phys_name = virt_to_phys_or_null_size(name, efi_name_size(name));
 	phys_attr = virt_to_phys_or_null(attr);
 	phys_data = virt_to_phys_or_null_size(data, *data_size);
 
-	status = efi_thunk(get_variable, phys_name, phys_vendor,
-			   phys_attr, phys_data_size, phys_data);
+	if (!phys_name || (data && !phys_data))
+		status = EFI_INVALID_PARAMETER;
+	else
+		status = efi_thunk(get_variable, phys_name, phys_vendor,
+				   phys_attr, phys_data_size, phys_data);
 
 	spin_unlock_irqrestore(&efi_runtime_lock, flags);
 
@@ -815,19 +819,25 @@ static efi_status_t
 efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor,
 		       u32 attr, unsigned long data_size, void *data)
 {
+	u8 buf[24] __aligned(8);
+	efi_guid_t *vnd = PTR_ALIGN((efi_guid_t *)buf, sizeof(*vnd));
 	u32 phys_name, phys_vendor, phys_data;
 	efi_status_t status;
 	unsigned long flags;
 
 	spin_lock_irqsave(&efi_runtime_lock, flags);
 
+	*vnd = *vendor;
+
 	phys_name = virt_to_phys_or_null_size(name, efi_name_size(name));
-	phys_vendor = virt_to_phys_or_null(vendor);
+	phys_vendor = virt_to_phys_or_null(vnd);
 	phys_data = virt_to_phys_or_null_size(data, data_size);
 
-	/* If data_size is > sizeof(u32) we've got problems */
-	status = efi_thunk(set_variable, phys_name, phys_vendor,
-			   attr, data_size, phys_data);
+	if (!phys_name || !phys_data)
+		status = EFI_INVALID_PARAMETER;
+	else
+		status = efi_thunk(set_variable, phys_name, phys_vendor,
+				   attr, data_size, phys_data);
 
 	spin_unlock_irqrestore(&efi_runtime_lock, flags);
 
@@ -839,6 +849,8 @@ efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor,
 				   u32 attr, unsigned long data_size,
 				   void *data)
 {
+	u8 buf[24] __aligned(8);
+	efi_guid_t *vnd = PTR_ALIGN((efi_guid_t *)buf, sizeof(*vnd));
 	u32 phys_name, phys_vendor, phys_data;
 	efi_status_t status;
 	unsigned long flags;
@@ -846,13 +858,17 @@ efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor,
 	if (!spin_trylock_irqsave(&efi_runtime_lock, flags))
 		return EFI_NOT_READY;
 
+	*vnd = *vendor;
+
 	phys_name = virt_to_phys_or_null_size(name, efi_name_size(name));
-	phys_vendor = virt_to_phys_or_null(vendor);
+	phys_vendor = virt_to_phys_or_null(vnd);
 	phys_data = virt_to_phys_or_null_size(data, data_size);
 
-	/* If data_size is > sizeof(u32) we've got problems */
-	status = efi_thunk(set_variable, phys_name, phys_vendor,
-			   attr, data_size, phys_data);
+	if (!phys_name || !phys_data)
+		status = EFI_INVALID_PARAMETER;
+	else
+		status = efi_thunk(set_variable, phys_name, phys_vendor,
+				   attr, data_size, phys_data);
 
 	spin_unlock_irqrestore(&efi_runtime_lock, flags);
 
@@ -864,21 +880,29 @@ efi_thunk_get_next_variable(unsigned long *name_size,
 			    efi_char16_t *name,
 			    efi_guid_t *vendor)
 {
+	u8 buf[24] __aligned(8);
+	efi_guid_t *vnd = PTR_ALIGN((efi_guid_t *)buf, sizeof(*vnd));
 	efi_status_t status;
 	u32 phys_name_size, phys_name, phys_vendor;
 	unsigned long flags;
 
 	spin_lock_irqsave(&efi_runtime_lock, flags);
 
+	*vnd = *vendor;
+
 	phys_name_size = virt_to_phys_or_null(name_size);
-	phys_vendor = virt_to_phys_or_null(vendor);
+	phys_vendor = virt_to_phys_or_null(vnd);
 	phys_name = virt_to_phys_or_null_size(name, *name_size);
 
-	status = efi_thunk(get_next_variable, phys_name_size,
-			   phys_name, phys_vendor);
+	if (!phys_name)
+		status = EFI_INVALID_PARAMETER;
+	else
+		status = efi_thunk(get_next_variable, phys_name_size,
+				   phys_name, phys_vendor);
 
 	spin_unlock_irqrestore(&efi_runtime_lock, flags);
 
+	*vendor = *vnd;
+
 	return status;
 }

View file

@@ -908,14 +908,15 @@ static u64 xen_read_msr_safe(unsigned int msr, int *err)
 static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 {
 	int ret;
+#ifdef CONFIG_X86_64
+	unsigned int which;
+	u64 base;
+#endif
 
 	ret = 0;
 
 	switch (msr) {
 #ifdef CONFIG_X86_64
-	unsigned which;
-	u64 base;
-
 	case MSR_FS_BASE:		which = SEGBASE_FS; goto set;
 	case MSR_KERNEL_GS_BASE:	which = SEGBASE_GS_USER; goto set;
 	case MSR_GS_BASE:		which = SEGBASE_GS_KERNEL; goto set;

View file

@@ -525,12 +525,13 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
 	 */
 	entity = &bfqg->entity;
 	for_each_entity(entity) {
-		bfqg = container_of(entity, struct bfq_group, entity);
-		if (bfqg != bfqd->root_group) {
-			parent = bfqg_parent(bfqg);
+		struct bfq_group *curr_bfqg = container_of(entity,
+						struct bfq_group, entity);
+		if (curr_bfqg != bfqd->root_group) {
+			parent = bfqg_parent(curr_bfqg);
 			if (!parent)
 				parent = bfqd->root_group;
-			bfq_group_set_parent(bfqg, parent);
+			bfq_group_set_parent(curr_bfqg, parent);
 		}
 	}

View file

@@ -0,0 +1,4 @@
+. ${ROOT_DIR}/common/build.config.common
+. ${ROOT_DIR}/common/build.config.arm
+. ${ROOT_DIR}/common/build.config.allmodconfig

build.config.arm Normal file
View file

@@ -0,0 +1,12 @@
+ARCH=arm
+CLANG_TRIPLE=arm-linux-gnueabi-
+CROSS_COMPILE=arm-linux-androidkernel-
+LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/arm/arm-linux-androideabi-4.9/bin
+FILES="
+arch/arm/boot/Image.gz
+arch/arm/boot/Image
+vmlinux
+System.map
+"

View file

@@ -5,7 +5,7 @@ CC=clang
 LD=ld.lld
 NM=llvm-nm
 OBJCOPY=llvm-objcopy
-CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r377782b/bin
+CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r377782c/bin
 EXTRA_CMDS=''
 STOP_SHIP_TRACEPRINTK=1

View file

@@ -0,0 +1,2 @@
+. ${ROOT_DIR}/common/build.config.gki.aarch64
+TRIM_NONLISTED_KMI=""

View file

@@ -0,0 +1,2 @@
+. ${ROOT_DIR}/common/build.config.gki.x86_64
+TRIM_NONLISTED_KMI=""

View file

@@ -967,6 +967,30 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_skcipher);
 
+struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(
+				const char *alg_name, u32 type, u32 mask)
+{
+	struct crypto_skcipher *tfm;
+
+	/* Only sync algorithms allowed. */
+	mask |= CRYPTO_ALG_ASYNC;
+
+	tfm = crypto_alloc_tfm(alg_name, &crypto_skcipher_type2, type, mask);
+
+	/*
+	 * Make sure we do not allocate something that might get used with
+	 * an on-stack request: check the request size.
+	 */
+	if (!IS_ERR(tfm) && WARN_ON(crypto_skcipher_reqsize(tfm) >
+				    MAX_SYNC_SKCIPHER_REQSIZE)) {
+		crypto_free_skcipher(tfm);
+		return ERR_PTR(-EINVAL);
+	}
+
+	return (struct crypto_sync_skcipher *)tfm;
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_sync_skcipher);
+
 int crypto_has_skcipher2(const char *alg_name, u32 type, u32 mask)
 {
 	return crypto_type_has_alg(alg_name, &crypto_skcipher_type2,

View file

@@ -58,12 +58,14 @@ static bool acpi_watchdog_uses_rtc(const struct acpi_table_wdat *wdat)
 }
 #endif
 
+static bool acpi_no_watchdog;
+
 static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
 {
 	const struct acpi_table_wdat *wdat = NULL;
 	acpi_status status;
 
-	if (acpi_disabled)
+	if (acpi_disabled || acpi_no_watchdog)
 		return NULL;
 
 	status = acpi_get_table(ACPI_SIG_WDAT, 0,
@@ -91,6 +93,14 @@ bool acpi_has_watchdog(void)
 }
 EXPORT_SYMBOL_GPL(acpi_has_watchdog);
 
+/* ACPI watchdog can be disabled on boot command line */
+static int __init disable_acpi_watchdog(char *str)
+{
+	acpi_no_watchdog = true;
+	return 1;
+}
+__setup("acpi_no_watchdog", disable_acpi_watchdog);
+
 void __init acpi_watchdog_init(void)
 {
 	const struct acpi_wdat_entry *entries;
@@ -129,12 +139,11 @@ void __init acpi_watchdog_init(void)
 		gas = &entries[i].register_region;
 
 		res.start = gas->address;
+		res.end = res.start + ACPI_ACCESS_BYTE_WIDTH(gas->access_width) - 1;
 		if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
 			res.flags = IORESOURCE_MEM;
-			res.end = res.start + ALIGN(gas->access_width, 4) - 1;
 		} else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
 			res.flags = IORESOURCE_IO;
-			res.end = res.start + gas->access_width - 1;
 		} else {
 			pr_warn("Unsupported address space: %u\n",
 				gas->space_id);

View file

@@ -201,7 +201,7 @@ static int ghes_estatus_pool_expand(unsigned long len)
 	 * New allocation must be visible in all pgd before it can be found by
 	 * an NMI allocating from the pool.
 	 */
-	vmalloc_sync_all();
+	vmalloc_sync_mappings();
 
 	return gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
 }

View file

@@ -5215,6 +5215,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
 		binder_dev = container_of(filp->private_data,
 					  struct binder_device, miscdev);
 	}
+	refcount_inc(&binder_dev->ref);
 	proc->context = &binder_dev->context;
 	binder_alloc_init(&proc->alloc);
 
@@ -5392,6 +5393,7 @@ static int binder_node_release(struct binder_node *node, int refs)
 static void binder_deferred_release(struct binder_proc *proc)
 {
 	struct binder_context *context = proc->context;
+	struct binder_device *device;
 	struct rb_node *n;
 	int threads, nodes, incoming_refs, outgoing_refs, active_transactions;
 
@@ -5410,6 +5412,12 @@ static void binder_deferred_release(struct binder_proc *proc)
 		context->binder_context_mgr_node = NULL;
 	}
 	mutex_unlock(&context->context_mgr_node_lock);
+	device = container_of(proc->context, struct binder_device, context);
+	if (refcount_dec_and_test(&device->ref)) {
+		kfree(context->name);
+		kfree(device);
+	}
+	proc->context = NULL;
 	binder_inner_proc_lock(proc);
 	/*
 	 * Make sure proc stays alive after we
@@ -6081,6 +6089,7 @@ static int __init init_binder_device(const char *name)
 	binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
 	binder_device->miscdev.name = name;
 
+	refcount_set(&binder_device->ref, 1);
 	binder_device->context.binder_context_mgr_uid = INVALID_UID;
 	binder_device->context.name = name;
 	mutex_init(&binder_device->context.context_mgr_node_lock);

View file

@@ -8,6 +8,7 @@
 #include <linux/list.h>
 #include <linux/miscdevice.h>
 #include <linux/mutex.h>
+#include <linux/refcount.h>
 #include <linux/stddef.h>
 #include <linux/types.h>
 #include <linux/uidgid.h>
@@ -33,6 +34,7 @@ struct binder_device {
 	struct miscdevice miscdev;
 	struct binder_context context;
 	struct inode *binderfs_inode;
+	refcount_t ref;
 };
 
 /**
/** /**

View file

@@ -154,6 +154,7 @@ static int binderfs_binder_device_create(struct inode *ref_inode,
 	if (!name)
 		goto err;
 
+	refcount_set(&device->ref, 1);
 	device->binderfs_inode = inode;
 	device->context.binder_context_mgr_uid = INVALID_UID;
 	device->context.name = name;
@@ -257,8 +258,10 @@ static void binderfs_evict_inode(struct inode *inode)
 	ida_free(&binderfs_minors, device->miscdev.minor);
 	mutex_unlock(&binderfs_minors_mutex);
 
-	kfree(device->context.name);
-	kfree(device);
+	if (refcount_dec_and_test(&device->ref)) {
+		kfree(device->context.name);
+		kfree(device);
+	}
 }
 
 /**

View file

@ -118,7 +118,7 @@ static int device_is_dependent(struct device *dev, void *target)
return ret; return ret;
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags == DL_FLAG_SYNC_STATE_ONLY) if (link->flags == (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
continue; continue;
if (link->consumer == target) if (link->consumer == target)
@ -131,6 +131,50 @@ static int device_is_dependent(struct device *dev, void *target)
return ret; return ret;
} }
static void device_link_init_status(struct device_link *link,
struct device *consumer,
struct device *supplier)
{
switch (supplier->links.status) {
case DL_DEV_PROBING:
switch (consumer->links.status) {
case DL_DEV_PROBING:
/*
* A consumer driver can create a link to a supplier
* that has not completed its probing yet as long as it
* knows that the supplier is already functional (for
* example, it has just acquired some resources from the
* supplier).
*/
link->status = DL_STATE_CONSUMER_PROBE;
break;
default:
link->status = DL_STATE_DORMANT;
break;
}
break;
case DL_DEV_DRIVER_BOUND:
switch (consumer->links.status) {
case DL_DEV_PROBING:
link->status = DL_STATE_CONSUMER_PROBE;
break;
case DL_DEV_DRIVER_BOUND:
link->status = DL_STATE_ACTIVE;
break;
default:
link->status = DL_STATE_AVAILABLE;
break;
}
break;
case DL_DEV_UNBINDING:
link->status = DL_STATE_SUPPLIER_UNBIND;
break;
default:
link->status = DL_STATE_DORMANT;
break;
}
}
static int device_reorder_to_tail(struct device *dev, void *not_used) static int device_reorder_to_tail(struct device *dev, void *not_used)
{ {
struct device_link *link; struct device_link *link;
@ -147,7 +191,7 @@ static int device_reorder_to_tail(struct device *dev, void *not_used)
device_for_each_child(dev, NULL, device_reorder_to_tail); device_for_each_child(dev, NULL, device_reorder_to_tail);
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags == DL_FLAG_SYNC_STATE_ONLY) if (link->flags == (DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
continue; continue;
device_reorder_to_tail(link->consumer, NULL); device_reorder_to_tail(link->consumer, NULL);
} }
@ -175,6 +219,14 @@ void device_pm_move_to_tail(struct device *dev)
device_links_read_unlock(idx); device_links_read_unlock(idx);
} }
#define DL_MANAGED_LINK_FLAGS (DL_FLAG_AUTOREMOVE_CONSUMER | \
DL_FLAG_AUTOREMOVE_SUPPLIER | \
DL_FLAG_AUTOPROBE_CONSUMER | \
DL_FLAG_SYNC_STATE_ONLY)
#define DL_ADD_VALID_FLAGS (DL_MANAGED_LINK_FLAGS | DL_FLAG_STATELESS | \
DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE)
/** /**
* device_link_add - Create a link between two devices. * device_link_add - Create a link between two devices.
* @consumer: Consumer end of the link. * @consumer: Consumer end of the link.
@ -189,14 +241,38 @@ void device_pm_move_to_tail(struct device *dev)
* of the link. If DL_FLAG_PM_RUNTIME is not set, DL_FLAG_RPM_ACTIVE will be * of the link. If DL_FLAG_PM_RUNTIME is not set, DL_FLAG_RPM_ACTIVE will be
* ignored. * ignored.
* *
* If the DL_FLAG_AUTOREMOVE_CONSUMER flag is set, the link will be removed * If DL_FLAG_STATELESS is set in @flags, the caller of this function is
* automatically when the consumer device driver unbinds from it. Analogously, * expected to release the link returned by it directly with the help of either
* if DL_FLAG_AUTOREMOVE_SUPPLIER is set in @flags, the link will be removed * device_link_del() or device_link_remove().
* automatically when the supplier device driver unbinds from it.
* *
* The combination of DL_FLAG_STATELESS and either DL_FLAG_AUTOREMOVE_CONSUMER * If that flag is not set, however, the caller of this function is handing the
* or DL_FLAG_AUTOREMOVE_SUPPLIER set in @flags at the same time is invalid and * management of the link over to the driver core entirely and its return value
* will cause NULL to be returned upfront. * can only be used to check whether or not the link is present. In that case,
* the DL_FLAG_AUTOREMOVE_CONSUMER and DL_FLAG_AUTOREMOVE_SUPPLIER device link
* flags can be used to indicate to the driver core when the link can be safely
* deleted. Namely, setting one of them in @flags indicates to the driver core
* that the link is not going to be used (by the given caller of this function)
* after unbinding the consumer or supplier driver, respectively, from its
* device, so the link can be deleted at that point. If none of them is set,
* the link will be maintained until one of the devices pointed to by it (either
* the consumer or the supplier) is unregistered.
*
* Also, if DL_FLAG_STATELESS, DL_FLAG_AUTOREMOVE_CONSUMER and
* DL_FLAG_AUTOREMOVE_SUPPLIER are not set in @flags (that is, a persistent
* managed device link is being added), the DL_FLAG_AUTOPROBE_CONSUMER flag can
* be used to request the driver core to automaticall probe for a consmer
* driver after successfully binding a driver to the supplier device.
*
* The combination of DL_FLAG_STATELESS and one of DL_FLAG_AUTOREMOVE_CONSUMER,
* DL_FLAG_AUTOREMOVE_SUPPLIER, or DL_FLAG_AUTOPROBE_CONSUMER set in @flags at
* the same time is invalid and will cause NULL to be returned upfront.
* However, if a device link between the given @consumer and @supplier pair
* exists already when this function is called for them, the existing link will
* be returned regardless of its current type and status (the link's flags may
* be modified then). The caller of this function is then expected to treat
* the link as though it has just been created, so (in particular) if
* DL_FLAG_STATELESS was passed in @flags, the link needs to be released
* explicitly when not needed any more (as stated above).
* *
* A side effect of the link creation is re-ordering of dpm_list and the * A side effect of the link creation is re-ordering of dpm_list and the
* devices_kset list by moving the consumer device and all devices depending * devices_kset list by moving the consumer device and all devices depending
@ -212,11 +288,13 @@ struct device_link *device_link_add(struct device *consumer,
{ {
struct device_link *link; struct device_link *link;
if (!consumer || !supplier || if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS ||
(flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
(flags & DL_FLAG_SYNC_STATE_ONLY && (flags & DL_FLAG_SYNC_STATE_ONLY &&
flags != DL_FLAG_SYNC_STATE_ONLY) || flags != DL_FLAG_SYNC_STATE_ONLY) ||
(flags & DL_FLAG_STATELESS && (flags & DL_FLAG_AUTOPROBE_CONSUMER &&
flags & (DL_FLAG_AUTOREMOVE_CONSUMER | DL_FLAG_AUTOREMOVE_SUPPLIER))) flags & (DL_FLAG_AUTOREMOVE_CONSUMER |
DL_FLAG_AUTOREMOVE_SUPPLIER)))
return NULL; return NULL;
if (flags & DL_FLAG_PM_RUNTIME && flags & DL_FLAG_RPM_ACTIVE) { if (flags & DL_FLAG_PM_RUNTIME && flags & DL_FLAG_RPM_ACTIVE) {
@ -226,6 +304,9 @@ struct device_link *device_link_add(struct device *consumer,
} }
} }
if (!(flags & DL_FLAG_STATELESS))
flags |= DL_FLAG_MANAGED;
device_links_write_lock(); device_links_write_lock();
device_pm_lock(); device_pm_lock();
@@ -243,25 +324,18 @@ struct device_link *device_link_add(struct device *consumer,
 		goto out;
 	}

-	/*
-	 * DL_FLAG_AUTOREMOVE_SUPPLIER indicates that the link will be needed
-	 * longer than for DL_FLAG_AUTOREMOVE_CONSUMER and setting them both
-	 * together doesn't make sense, so prefer DL_FLAG_AUTOREMOVE_SUPPLIER.
-	 */
-	if (flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
-		flags &= ~DL_FLAG_AUTOREMOVE_CONSUMER;
-
 	list_for_each_entry(link, &supplier->links.consumers, s_node) {
 		if (link->consumer != consumer)
 			continue;

-		/*
-		 * Don't return a stateless link if the caller wants a stateful
-		 * one and vice versa.
-		 */
-		if (WARN_ON((flags & DL_FLAG_STATELESS) != (link->flags & DL_FLAG_STATELESS))) {
-			link = NULL;
-			goto out;
-		}
-
-		if (flags & DL_FLAG_AUTOREMOVE_CONSUMER)
-			link->flags |= DL_FLAG_AUTOREMOVE_CONSUMER;
-
-		if (flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
-			link->flags |= DL_FLAG_AUTOREMOVE_SUPPLIER;
-
 		if (flags & DL_FLAG_PM_RUNTIME) {
 			if (!(link->flags & DL_FLAG_PM_RUNTIME)) {
 				pm_runtime_new_link(consumer);
@@ -271,13 +345,42 @@ struct device_link *device_link_add(struct device *consumer,
 			refcount_inc(&link->rpm_active);
 		}

-		kref_get(&link->kref);
+		if (flags & DL_FLAG_STATELESS) {
+			kref_get(&link->kref);
+			if (link->flags & DL_FLAG_SYNC_STATE_ONLY &&
+			    !(link->flags & DL_FLAG_STATELESS)) {
+				link->flags |= DL_FLAG_STATELESS;
+				goto reorder;
+			} else {
+				goto out;
+			}
+		}
+
+		/*
+		 * If the life time of the link following from the new flags is
+		 * longer than indicated by the flags of the existing link,
+		 * update the existing link to stay around longer.
+		 */
+		if (flags & DL_FLAG_AUTOREMOVE_SUPPLIER) {
+			if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER) {
+				link->flags &= ~DL_FLAG_AUTOREMOVE_CONSUMER;
+				link->flags |= DL_FLAG_AUTOREMOVE_SUPPLIER;
+			}
+		} else if (!(flags & DL_FLAG_AUTOREMOVE_CONSUMER)) {
+			link->flags &= ~(DL_FLAG_AUTOREMOVE_CONSUMER |
+					 DL_FLAG_AUTOREMOVE_SUPPLIER);
+		}
+
+		if (!(link->flags & DL_FLAG_MANAGED)) {
+			kref_get(&link->kref);
+			link->flags |= DL_FLAG_MANAGED;
+			device_link_init_status(link, consumer, supplier);
+		}

 		if (link->flags & DL_FLAG_SYNC_STATE_ONLY &&
 		    !(flags & DL_FLAG_SYNC_STATE_ONLY)) {
 			link->flags &= ~DL_FLAG_SYNC_STATE_ONLY;
 			goto reorder;
 		}

 		goto out;
 	}
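The lifetime-extension rule above can be isolated as a pure function over the flag bits. A sketch, assuming illustrative flag values rather than the kernel's real ones: if the new request implies a longer-lived link than the existing one, the autoremove bits on the existing link are weakened or dropped so it stays around at least as long.

```c
/* Illustrative flag bits; not the kernel's actual DL_FLAG_* values. */
enum {
	DL_FLAG_AUTOREMOVE_CONSUMER = 1 << 1,
	DL_FLAG_AUTOREMOVE_SUPPLIER = 1 << 4,
};

/*
 * Sketch of the update applied to an existing link's flags when
 * device_link_add() is called again with new request flags.
 */
static unsigned int update_link_flags(unsigned int existing,
				      unsigned int requested)
{
	if (requested & DL_FLAG_AUTOREMOVE_SUPPLIER) {
		/* supplier-scoped removal outlives consumer-scoped removal */
		if (existing & DL_FLAG_AUTOREMOVE_CONSUMER) {
			existing &= ~DL_FLAG_AUTOREMOVE_CONSUMER;
			existing |= DL_FLAG_AUTOREMOVE_SUPPLIER;
		}
	} else if (!(requested & DL_FLAG_AUTOREMOVE_CONSUMER)) {
		/* caller wants a permanent link: drop both autoremove bits */
		existing &= ~(DL_FLAG_AUTOREMOVE_CONSUMER |
			      DL_FLAG_AUTOREMOVE_SUPPLIER);
	}
	return existing;
}
```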
@@ -304,42 +407,25 @@ struct device_link *device_link_add(struct device *consumer,
 	kref_init(&link->kref);

 	/* Determine the initial link state. */
-	if (flags & DL_FLAG_STATELESS) {
+	if (flags & DL_FLAG_STATELESS)
 		link->status = DL_STATE_NONE;
-	} else {
-		switch (supplier->links.status) {
-		case DL_DEV_DRIVER_BOUND:
-			switch (consumer->links.status) {
-			case DL_DEV_PROBING:
-				/*
-				 * Some callers expect the link creation during
-				 * consumer driver probe to resume the supplier
-				 * even without DL_FLAG_RPM_ACTIVE.
-				 */
-				if (flags & DL_FLAG_PM_RUNTIME)
-					pm_runtime_resume(supplier);
-
-				link->status = DL_STATE_CONSUMER_PROBE;
-				break;
-			case DL_DEV_DRIVER_BOUND:
-				link->status = DL_STATE_ACTIVE;
-				break;
-			default:
-				link->status = DL_STATE_AVAILABLE;
-				break;
-			}
-			break;
-		case DL_DEV_UNBINDING:
-			link->status = DL_STATE_SUPPLIER_UNBIND;
-			break;
-		default:
-			link->status = DL_STATE_DORMANT;
-			break;
-		}
-	}
+	else
+		device_link_init_status(link, consumer, supplier);
+
+	/*
+	 * Some callers expect the link creation during consumer driver probe to
+	 * resume the supplier even without DL_FLAG_RPM_ACTIVE.
+	 */
+	if (link->status == DL_STATE_CONSUMER_PROBE &&
+	    flags & DL_FLAG_PM_RUNTIME)
+		pm_runtime_resume(supplier);

-	if (flags & DL_FLAG_SYNC_STATE_ONLY)
+	if (flags & DL_FLAG_SYNC_STATE_ONLY) {
+		dev_dbg(consumer,
+			"Linked as a sync state only consumer to %s\n",
+			dev_name(supplier));
 		goto out;
+	}

 reorder:
 	/*
 	 * Move the consumer and all of the devices depending on it to the end
reorder: reorder:
/* /*
* Move the consumer and all of the devices depending on it to the end * Move the consumer and all of the devices depending on it to the end
@@ -424,9 +510,13 @@ static void device_link_add_missing_supplier_links(void)

 	mutex_lock(&wfs_lock);
 	list_for_each_entry_safe(dev, tmp, &wait_for_suppliers,
-				 links.needs_suppliers)
-		if (!fwnode_call_int_op(dev->fwnode, add_links, dev))
+				 links.needs_suppliers) {
+		int ret = fwnode_call_int_op(dev->fwnode, add_links, dev);
+
+		if (!ret)
 			list_del_init(&dev->links.needs_suppliers);
+		else if (ret != -ENODEV)
+			dev->links.need_for_probe = false;
+	}
 	mutex_unlock(&wfs_lock);
 }
@ -477,8 +567,16 @@ static void __device_link_del(struct kref *kref)
} }
#endif /* !CONFIG_SRCU */ #endif /* !CONFIG_SRCU */
static void device_link_put_kref(struct device_link *link)
{
if (link->flags & DL_FLAG_STATELESS)
kref_put(&link->kref, __device_link_del);
else
WARN(1, "Unable to drop a managed device link reference\n");
}
/** /**
* device_link_del - Delete a link between two devices. * device_link_del - Delete a stateless link between two devices.
* @link: Device link to delete. * @link: Device link to delete.
* *
* The caller must ensure proper synchronization of this function with runtime * The caller must ensure proper synchronization of this function with runtime
@ -490,14 +588,14 @@ void device_link_del(struct device_link *link)
{ {
device_links_write_lock(); device_links_write_lock();
device_pm_lock(); device_pm_lock();
kref_put(&link->kref, __device_link_del); device_link_put_kref(link);
device_pm_unlock(); device_pm_unlock();
device_links_write_unlock(); device_links_write_unlock();
} }
EXPORT_SYMBOL_GPL(device_link_del); EXPORT_SYMBOL_GPL(device_link_del);
/** /**
* device_link_remove - remove a link between two devices. * device_link_remove - Delete a stateless link between two devices.
* @consumer: Consumer end of the link. * @consumer: Consumer end of the link.
* @supplier: Supplier end of the link. * @supplier: Supplier end of the link.
* *
@ -516,7 +614,7 @@ void device_link_remove(void *consumer, struct device *supplier)
list_for_each_entry(link, &supplier->links.consumers, s_node) { list_for_each_entry(link, &supplier->links.consumers, s_node) {
if (link->consumer == consumer) { if (link->consumer == consumer) {
kref_put(&link->kref, __device_link_del); device_link_put_kref(link);
break; break;
} }
} }
@ -549,7 +647,7 @@ static void device_links_missing_supplier(struct device *dev)
* mark the link as "consumer probe in progress" to make the supplier removal * mark the link as "consumer probe in progress" to make the supplier removal
* wait for us to complete (or bad things may happen). * wait for us to complete (or bad things may happen).
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
int device_links_check_suppliers(struct device *dev) int device_links_check_suppliers(struct device *dev)
{ {
@ -571,7 +669,7 @@ int device_links_check_suppliers(struct device *dev)
device_links_write_lock(); device_links_write_lock();
list_for_each_entry(link, &dev->links.suppliers, c_node) { list_for_each_entry(link, &dev->links.suppliers, c_node) {
if (link->flags & DL_FLAG_STATELESS || if (!(link->flags & DL_FLAG_MANAGED) ||
link->flags & DL_FLAG_SYNC_STATE_ONLY) link->flags & DL_FLAG_SYNC_STATE_ONLY)
continue; continue;
@ -615,7 +713,7 @@ static void __device_links_queue_sync_state(struct device *dev,
return; return;
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue; continue;
if (link->status != DL_STATE_ACTIVE) if (link->status != DL_STATE_ACTIVE)
return; return;
@@ -638,25 +736,31 @@
 /**
  * device_links_flush_sync_list - Call sync_state() on a list of devices
  * @list: List of devices to call sync_state() on
+ * @dont_lock_dev: Device for which lock is already held by the caller
  *
  * Calls sync_state() on all the devices that have been queued for it. This
- * function is used in conjunction with __device_links_queue_sync_state().
+ * function is used in conjunction with __device_links_queue_sync_state(). The
+ * @dont_lock_dev parameter is useful when this function is called from a
+ * context where a device lock is already held.
  */
-static void device_links_flush_sync_list(struct list_head *list)
+static void device_links_flush_sync_list(struct list_head *list,
+					 struct device *dont_lock_dev)
 {
 	struct device *dev, *tmp;

 	list_for_each_entry_safe(dev, tmp, list, links.defer_sync) {
 		list_del_init(&dev->links.defer_sync);

-		device_lock(dev);
+		if (dev != dont_lock_dev)
+			device_lock(dev);

 		if (dev->bus->sync_state)
 			dev->bus->sync_state(dev);
 		else if (dev->driver && dev->driver->sync_state)
 			dev->driver->sync_state(dev);

-		device_unlock(dev);
+		if (dev != dont_lock_dev)
+			device_unlock(dev);

 		put_device(dev);
 	}
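The `@dont_lock_dev` parameter avoids a self-deadlock when the caller already holds one element's lock. A minimal userspace sketch of the pattern, using a pthread mutex in place of the kernel's `device_lock()` (the `struct dev` and `flush_one()` names are illustrative):

```c
#include <pthread.h>

struct dev {
	pthread_mutex_t lock;
	int synced;
};

/*
 * Lock every element except the one whose lock the caller already
 * holds; re-locking a held, non-recursive mutex would deadlock.
 */
static void flush_one(struct dev *d, struct dev *dont_lock_dev)
{
	if (d != dont_lock_dev)
		pthread_mutex_lock(&d->lock);
	d->synced = 1;		/* stands in for ->sync_state(dev) */
	if (d != dont_lock_dev)
		pthread_mutex_unlock(&d->lock);
}
```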
@ -694,7 +798,7 @@ void device_links_supplier_sync_state_resume(void)
out: out:
device_links_write_unlock(); device_links_write_unlock();
device_links_flush_sync_list(&sync_list); device_links_flush_sync_list(&sync_list, NULL);
} }
static int sync_state_resume_initcall(void) static int sync_state_resume_initcall(void)
@ -719,7 +823,7 @@ static void __device_links_supplier_defer_sync(struct device *sup)
* *
* Also change the status of @dev's links to suppliers to "active". * Also change the status of @dev's links to suppliers to "active".
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
void device_links_driver_bound(struct device *dev) void device_links_driver_bound(struct device *dev)
{ {
@ -738,15 +842,33 @@ void device_links_driver_bound(struct device *dev)
device_links_write_lock(); device_links_write_lock();
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue;
/*
* Links created during consumer probe may be in the "consumer
* probe" state to start with if the supplier is still probing
* when they are created and they may become "active" if the
* consumer probe returns first. Skip them here.
*/
if (link->status == DL_STATE_CONSUMER_PROBE ||
link->status == DL_STATE_ACTIVE)
continue; continue;
WARN_ON(link->status != DL_STATE_DORMANT); WARN_ON(link->status != DL_STATE_DORMANT);
WRITE_ONCE(link->status, DL_STATE_AVAILABLE); WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
if (link->flags & DL_FLAG_AUTOPROBE_CONSUMER)
driver_deferred_probe_add(link->consumer);
} }
if (defer_sync_state_count)
__device_links_supplier_defer_sync(dev);
else
__device_links_queue_sync_state(dev, &sync_list);
list_for_each_entry(link, &dev->links.suppliers, c_node) { list_for_each_entry(link, &dev->links.suppliers, c_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue; continue;
WARN_ON(link->status != DL_STATE_CONSUMER_PROBE); WARN_ON(link->status != DL_STATE_CONSUMER_PROBE);
@@ -763,7 +885,14 @@ void device_links_driver_bound(struct device *dev)

 	device_links_write_unlock();

-	device_links_flush_sync_list(&sync_list);
+	device_links_flush_sync_list(&sync_list, dev);
+}
+
+static void device_link_drop_managed(struct device_link *link)
+{
+	link->flags &= ~DL_FLAG_MANAGED;
+	WRITE_ONCE(link->status, DL_STATE_NONE);
+	kref_put(&link->kref, __device_link_del);
 }
/** /**
@ -776,29 +905,60 @@ void device_links_driver_bound(struct device *dev)
* unless they already are in the "supplier unbind in progress" state in which * unless they already are in the "supplier unbind in progress" state in which
* case they need not be updated. * case they need not be updated.
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
 static void __device_links_no_driver(struct device *dev)
 {
 	struct device_link *link, *ln;

 	list_for_each_entry_safe_reverse(link, ln, &dev->links.suppliers, c_node) {
-		if (link->flags & DL_FLAG_STATELESS)
+		if (!(link->flags & DL_FLAG_MANAGED))
 			continue;

 		if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER)
-			kref_put(&link->kref, __device_link_del);
-		else if (link->status != DL_STATE_SUPPLIER_UNBIND)
+			device_link_drop_managed(link);
+		else if (link->status == DL_STATE_CONSUMER_PROBE ||
+			 link->status == DL_STATE_ACTIVE)
 			WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
 	}

 	dev->links.status = DL_DEV_NO_DRIVER;
 }
/**
* device_links_no_driver - Update links after failing driver probe.
* @dev: Device whose driver has just failed to probe.
*
* Clean up leftover links to consumers for @dev and invoke
* %__device_links_no_driver() to update links to suppliers for it as
* appropriate.
*
* Links without the DL_FLAG_MANAGED flag set are ignored.
*/
void device_links_no_driver(struct device *dev) void device_links_no_driver(struct device *dev)
{ {
struct device_link *link;
device_links_write_lock(); device_links_write_lock();
list_for_each_entry(link, &dev->links.consumers, s_node) {
if (!(link->flags & DL_FLAG_MANAGED))
continue;
/*
* The probe has failed, so if the status of the link is
* "consumer probe" or "active", it must have been added by
* a probing consumer while this device was still probing.
* Change its state to "dormant", as it represents a valid
* relationship, but it is not functionally meaningful.
*/
if (link->status == DL_STATE_CONSUMER_PROBE ||
link->status == DL_STATE_ACTIVE)
WRITE_ONCE(link->status, DL_STATE_DORMANT);
}
__device_links_no_driver(dev); __device_links_no_driver(dev);
device_links_write_unlock(); device_links_write_unlock();
} }
@ -810,7 +970,7 @@ void device_links_no_driver(struct device *dev)
* invoke %__device_links_no_driver() to update links to suppliers for it as * invoke %__device_links_no_driver() to update links to suppliers for it as
* appropriate. * appropriate.
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
void device_links_driver_cleanup(struct device *dev) void device_links_driver_cleanup(struct device *dev)
{ {
@ -819,7 +979,7 @@ void device_links_driver_cleanup(struct device *dev)
device_links_write_lock(); device_links_write_lock();
list_for_each_entry_safe(link, ln, &dev->links.consumers, s_node) { list_for_each_entry_safe(link, ln, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue; continue;
WARN_ON(link->flags & DL_FLAG_AUTOREMOVE_CONSUMER); WARN_ON(link->flags & DL_FLAG_AUTOREMOVE_CONSUMER);
@ -832,7 +992,7 @@ void device_links_driver_cleanup(struct device *dev)
*/ */
if (link->status == DL_STATE_SUPPLIER_UNBIND && if (link->status == DL_STATE_SUPPLIER_UNBIND &&
link->flags & DL_FLAG_AUTOREMOVE_SUPPLIER) link->flags & DL_FLAG_AUTOREMOVE_SUPPLIER)
kref_put(&link->kref, __device_link_del); device_link_drop_managed(link);
WRITE_ONCE(link->status, DL_STATE_DORMANT); WRITE_ONCE(link->status, DL_STATE_DORMANT);
} }
@ -855,7 +1015,7 @@ void device_links_driver_cleanup(struct device *dev)
* *
* Return 'false' if there are no probing or active consumers. * Return 'false' if there are no probing or active consumers.
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
bool device_links_busy(struct device *dev) bool device_links_busy(struct device *dev)
{ {
@ -865,7 +1025,7 @@ bool device_links_busy(struct device *dev)
device_links_write_lock(); device_links_write_lock();
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue; continue;
if (link->status == DL_STATE_CONSUMER_PROBE if (link->status == DL_STATE_CONSUMER_PROBE
@ -895,7 +1055,7 @@ bool device_links_busy(struct device *dev)
* driver to unbind and start over (the consumer will not re-probe as we have * driver to unbind and start over (the consumer will not re-probe as we have
* changed the state of the link already). * changed the state of the link already).
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links without the DL_FLAG_MANAGED flag set are ignored.
*/ */
void device_links_unbind_consumers(struct device *dev) void device_links_unbind_consumers(struct device *dev)
{ {
@ -907,7 +1067,7 @@ void device_links_unbind_consumers(struct device *dev)
list_for_each_entry(link, &dev->links.consumers, s_node) { list_for_each_entry(link, &dev->links.consumers, s_node) {
enum device_link_state status; enum device_link_state status;
if (link->flags & DL_FLAG_STATELESS || if (!(link->flags & DL_FLAG_MANAGED) ||
link->flags & DL_FLAG_SYNC_STATE_ONLY) link->flags & DL_FLAG_SYNC_STATE_ONLY)
continue; continue;

View file

@ -116,7 +116,7 @@ static void deferred_probe_work_func(struct work_struct *work)
} }
static DECLARE_WORK(deferred_probe_work, deferred_probe_work_func); static DECLARE_WORK(deferred_probe_work, deferred_probe_work_func);
static void driver_deferred_probe_add(struct device *dev) void driver_deferred_probe_add(struct device *dev)
{ {
mutex_lock(&deferred_probe_mutex); mutex_lock(&deferred_probe_mutex);
if (list_empty(&dev->p->deferred_probe)) { if (list_empty(&dev->p->deferred_probe)) {

View file

@ -1531,7 +1531,7 @@ void pm_runtime_remove(struct device *dev)
* runtime PM references to the device, drop the usage counter of the device * runtime PM references to the device, drop the usage counter of the device
* (as many times as needed). * (as many times as needed).
* *
* Links with the DL_FLAG_STATELESS flag set are ignored. * Links with the DL_FLAG_MANAGED flag unset are ignored.
* *
* Since the device is guaranteed to be runtime-active at the point this is * Since the device is guaranteed to be runtime-active at the point this is
* called, nothing else needs to be done here. * called, nothing else needs to be done here.
@ -1548,7 +1548,7 @@ void pm_runtime_clean_up_links(struct device *dev)
idx = device_links_read_lock(); idx = device_links_read_lock();
list_for_each_entry_rcu(link, &dev->links.consumers, s_node) { list_for_each_entry_rcu(link, &dev->links.consumers, s_node) {
if (link->flags & DL_FLAG_STATELESS) if (!(link->flags & DL_FLAG_MANAGED))
continue; continue;
while (refcount_dec_not_one(&link->rpm_active)) while (refcount_dec_not_one(&link->rpm_active))

View file

@@ -271,10 +271,12 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
 	if (err) {
 		virtqueue_kick(vblk->vqs[qid].vq);
-		blk_mq_stop_hw_queue(hctx);
+		/* Don't stop the queue if -ENOMEM: we may have failed to
+		 * bounce the buffer due to global resource outage.
+		 */
+		if (err == -ENOSPC)
+			blk_mq_stop_hw_queue(hctx);
 		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
-		/* Out of mem doesn't actually happen, since we fall back
-		 * to direct descriptors */
 		if (err == -ENOMEM || err == -ENOSPC)
 			return BLK_STS_DEV_RESOURCE;
 		return BLK_STS_IOERR;
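The fix separates two decisions: whether to stop the hardware queue (only when the virtqueue is genuinely full, `-ENOSPC`) and which block-layer status to return. A sketch of that dispatch as a pure function, with a stand-in `enum blk_status` since the real `blk_status_t` values are opaque kernel types:

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for the kernel's blk_status_t codes. */
enum blk_status { BLK_STS_OK, BLK_STS_DEV_RESOURCE, BLK_STS_IOERR };

/*
 * Sketch of the corrected error handling in virtio_queue_rq():
 * -ENOSPC means the ring is full, so stop the queue until completions
 * free space; -ENOMEM is a transient allocation failure, so retry
 * without stopping the queue (which could otherwise stall forever).
 */
static enum blk_status classify_add_req_error(int err, bool *stop_queue)
{
	*stop_queue = (err == -ENOSPC);
	if (err == -ENOMEM || err == -ENOSPC)
		return BLK_STS_DEV_RESOURCE;	/* block layer retries */
	return BLK_STS_IOERR;
}
```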

View file

@@ -735,10 +735,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
 	msg = ssif_info->curr_msg;
 	if (msg) {
+		if (data) {
+			if (len > IPMI_MAX_MSG_LENGTH)
+				len = IPMI_MAX_MSG_LENGTH;
+			memcpy(msg->rsp, data, len);
+		} else {
+			len = 0;
+		}
 		msg->rsp_size = len;
-		if (msg->rsp_size > IPMI_MAX_MSG_LENGTH)
-			msg->rsp_size = IPMI_MAX_MSG_LENGTH;
-		memcpy(msg->rsp, data, msg->rsp_size);
 		ssif_info->curr_msg = NULL;
 	}
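The old code clamped `rsp_size` only after an unbounded `memcpy()` had already overrun `msg->rsp`; the fix clamps the length before copying and tolerates a NULL payload. A self-contained sketch of the corrected pattern (`copy_response` and `MAX_MSG_LEN` are illustrative stand-ins for the driver's code and `IPMI_MAX_MSG_LENGTH`):

```c
#include <stddef.h>
#include <string.h>

#define MAX_MSG_LEN 272	/* stand-in for IPMI_MAX_MSG_LENGTH */

/*
 * Clamp the length *before* the copy and handle a NULL payload,
 * rather than clamping the stored size after memcpy() has already
 * written past the end of the response buffer.
 */
static size_t copy_response(unsigned char *rsp, const unsigned char *data,
			    size_t len)
{
	if (!data)
		return 0;
	if (len > MAX_MSG_LEN)
		len = MAX_MSG_LEN;
	memcpy(rsp, data, len);
	return len;
}
```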

View file

@ -600,7 +600,6 @@ struct devfreq *devfreq_add_device(struct device *dev,
{ {
struct devfreq *devfreq; struct devfreq *devfreq;
struct devfreq_governor *governor; struct devfreq_governor *governor;
static atomic_t devfreq_no = ATOMIC_INIT(-1);
int err = 0; int err = 0;
if (!dev || !profile || !governor_name) { if (!dev || !profile || !governor_name) {
@@ -661,8 +660,7 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	}
 	devfreq->max_freq = devfreq->scaling_max_freq;

-	dev_set_name(&devfreq->dev, "devfreq%d",
-		     atomic_inc_return(&devfreq_no));
+	dev_set_name(&devfreq->dev, "%s", dev_name(dev));
 	err = device_register(&devfreq->dev);
 	if (err) {
 		mutex_unlock(&devfreq->lock);
if (err) { if (err) {
mutex_unlock(&devfreq->lock); mutex_unlock(&devfreq->lock);

View file

@@ -55,10 +55,10 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
 	size_t ret = 0;

 	dmabuf = dentry->d_fsdata;
-	mutex_lock(&dmabuf->lock);
+	spin_lock(&dmabuf->name_lock);
 	if (dmabuf->name)
 		ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
-	mutex_unlock(&dmabuf->lock);
+	spin_unlock(&dmabuf->name_lock);

 	return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
 			     dentry->d_name.name, ret > 0 ? name : "");
@ -86,6 +86,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_release(struct inode *inode, struct file *file) static int dma_buf_release(struct inode *inode, struct file *file)
{ {
struct dma_buf *dmabuf; struct dma_buf *dmabuf;
int dtor_ret = 0;
if (!is_dma_buf_file(file)) if (!is_dma_buf_file(file))
return -EINVAL; return -EINVAL;
@ -104,12 +105,19 @@ static int dma_buf_release(struct inode *inode, struct file *file)
*/ */
BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active); BUG_ON(dmabuf->cb_shared.active || dmabuf->cb_excl.active);
dmabuf->ops->release(dmabuf);
mutex_lock(&db_list.lock); mutex_lock(&db_list.lock);
list_del(&dmabuf->list_node); list_del(&dmabuf->list_node);
mutex_unlock(&db_list.lock); mutex_unlock(&db_list.lock);
if (dmabuf->dtor)
dtor_ret = dmabuf->dtor(dmabuf, dmabuf->dtor_data);
if (!dtor_ret)
dmabuf->ops->release(dmabuf);
else
pr_warn_ratelimited("Leaking dmabuf %s because destructor failed error:%d\n",
dmabuf->name, dtor_ret);
if (dmabuf->resv == (struct reservation_object *)&dmabuf[1]) if (dmabuf->resv == (struct reservation_object *)&dmabuf[1])
reservation_object_fini(dmabuf->resv); reservation_object_fini(dmabuf->resv);
@ -337,6 +345,7 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
return PTR_ERR(name); return PTR_ERR(name);
mutex_lock(&dmabuf->lock); mutex_lock(&dmabuf->lock);
spin_lock(&dmabuf->name_lock);
if (!list_empty(&dmabuf->attachments)) { if (!list_empty(&dmabuf->attachments)) {
ret = -EBUSY; ret = -EBUSY;
kfree(name); kfree(name);
@ -346,16 +355,24 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
dmabuf->name = name; dmabuf->name = name;
out_unlock: out_unlock:
spin_unlock(&dmabuf->name_lock);
mutex_unlock(&dmabuf->lock); mutex_unlock(&dmabuf->lock);
return ret; return ret;
} }
static int dma_buf_begin_cpu_access_umapped(struct dma_buf *dmabuf,
enum dma_data_direction direction);
static int dma_buf_end_cpu_access_umapped(struct dma_buf *dmabuf,
enum dma_data_direction direction);
 static long dma_buf_ioctl(struct file *file,
 			  unsigned int cmd, unsigned long arg)
 {
 	struct dma_buf *dmabuf;
 	struct dma_buf_sync sync;
-	enum dma_data_direction direction;
+	enum dma_data_direction dir;
 	int ret;

 	dmabuf = file->private_data;
@@ -370,22 +387,30 @@ static long dma_buf_ioctl(struct file *file,

 	switch (sync.flags & DMA_BUF_SYNC_RW) {
 	case DMA_BUF_SYNC_READ:
-		direction = DMA_FROM_DEVICE;
+		dir = DMA_FROM_DEVICE;
 		break;
 	case DMA_BUF_SYNC_WRITE:
-		direction = DMA_TO_DEVICE;
+		dir = DMA_TO_DEVICE;
 		break;
 	case DMA_BUF_SYNC_RW:
-		direction = DMA_BIDIRECTIONAL;
+		dir = DMA_BIDIRECTIONAL;
 		break;
 	default:
 		return -EINVAL;
 	}

 	if (sync.flags & DMA_BUF_SYNC_END)
-		ret = dma_buf_end_cpu_access(dmabuf, direction);
+		if (sync.flags & DMA_BUF_SYNC_USER_MAPPED)
+			ret = dma_buf_end_cpu_access_umapped(dmabuf,
+							     dir);
+		else
+			ret = dma_buf_end_cpu_access(dmabuf, dir);
 	else
-		ret = dma_buf_begin_cpu_access(dmabuf, direction);
+		if (sync.flags & DMA_BUF_SYNC_USER_MAPPED)
+			ret = dma_buf_begin_cpu_access_umapped(dmabuf,
+							       dir);
+		else
+			ret = dma_buf_begin_cpu_access(dmabuf, dir);

 	return ret;
@ -405,10 +430,10 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
/* Don't count the temporary reference taken inside procfs seq_show */ /* Don't count the temporary reference taken inside procfs seq_show */
seq_printf(m, "count:\t%ld\n", file_count(dmabuf->file) - 1); seq_printf(m, "count:\t%ld\n", file_count(dmabuf->file) - 1);
seq_printf(m, "exp_name:\t%s\n", dmabuf->exp_name); seq_printf(m, "exp_name:\t%s\n", dmabuf->exp_name);
mutex_lock(&dmabuf->lock); spin_lock(&dmabuf->name_lock);
if (dmabuf->name) if (dmabuf->name)
seq_printf(m, "name:\t%s\n", dmabuf->name); seq_printf(m, "name:\t%s\n", dmabuf->name);
mutex_unlock(&dmabuf->lock); spin_unlock(&dmabuf->name_lock);
} }
static const struct file_operations dma_buf_fops = { static const struct file_operations dma_buf_fops = {
@ -563,6 +588,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
dmabuf->file = file; dmabuf->file = file;
mutex_init(&dmabuf->lock); mutex_init(&dmabuf->lock);
spin_lock_init(&dmabuf->name_lock);
INIT_LIST_HEAD(&dmabuf->attachments); INIT_LIST_HEAD(&dmabuf->attachments);
mutex_lock(&db_list.lock); mutex_lock(&db_list.lock);
@ -851,7 +877,8 @@ EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
* - for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write * - for each drawing/upload cycle in CPU 1. SYNC_START ioctl, 2. read/write
* to mmap area 3. SYNC_END ioctl. This can be repeated as often as you * to mmap area 3. SYNC_END ioctl. This can be repeated as often as you
* want (with the new data being consumed by say the GPU or the scanout * want (with the new data being consumed by say the GPU or the scanout
* device) * device). Optionally SYNC_USER_MAPPED can be set to restrict cache
* maintenance to only the parts of the buffer which are mmap(ed).
* - munmap once you don't need the buffer any more * - munmap once you don't need the buffer any more
* *
* For correctness and optimal performance, it is always required to use * For correctness and optimal performance, it is always required to use
@ -938,6 +965,51 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
} }
EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access); EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
static int dma_buf_begin_cpu_access_umapped(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
int ret = 0;
if (WARN_ON(!dmabuf))
return -EINVAL;
if (dmabuf->ops->begin_cpu_access_umapped)
ret = dmabuf->ops->begin_cpu_access_umapped(dmabuf, direction);
/* Ensure that all fences are waited upon - but we first allow
* the native handler the chance to do so more efficiently if it
* chooses. A double invocation here will be reasonably cheap no-op.
*/
if (ret == 0)
ret = __dma_buf_begin_cpu_access(dmabuf, direction);
return ret;
}
int dma_buf_begin_cpu_access_partial(struct dma_buf *dmabuf,
enum dma_data_direction direction,
unsigned int offset, unsigned int len)
{
int ret = 0;
if (WARN_ON(!dmabuf))
return -EINVAL;
if (dmabuf->ops->begin_cpu_access_partial)
ret = dmabuf->ops->begin_cpu_access_partial(dmabuf, direction,
offset, len);
/* Ensure that all fences are waited upon - but we first allow
* the native handler the chance to do so more efficiently if it
* chooses. A double invocation here will be reasonably cheap no-op.
*/
if (ret == 0)
ret = __dma_buf_begin_cpu_access(dmabuf, direction);
return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access_partial);
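Both new entry points follow the same optional-callback shape: invoke the exporter's specialised hook if it is provided, then run the generic fence-wait path unless the hook failed. A minimal userspace sketch of that dispatch (all names here, `buf_ops`, `begin_cpu_access_partial`, and the hooks, are illustrative stand-ins for the dma-buf structures):

```c
#include <stddef.h>

struct buf_ops {
	/* may be NULL if the exporter has no partial-access hook */
	int (*begin_partial)(unsigned int offset, unsigned int len);
};

static int generic_begin(void)
{
	return 0;	/* stands in for __dma_buf_begin_cpu_access() */
}

/*
 * Try the specialised hook first; on success (or when absent), still
 * run the generic path so fences are always waited upon.
 */
static int begin_cpu_access_partial(const struct buf_ops *ops,
				    unsigned int offset, unsigned int len)
{
	int ret = 0;

	if (ops->begin_partial)
		ret = ops->begin_partial(offset, len);
	if (ret == 0)
		ret = generic_begin();
	return ret;
}

static int ok_hook(unsigned int offset, unsigned int len)
{
	(void)offset; (void)len;
	return 0;
}

static const struct buf_ops with_hook = { ok_hook };
static const struct buf_ops without_hook = { NULL };
```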
/** /**
* dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
* cpu in the kernel context. Calls end_cpu_access to allow exporter-specific * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
@ -964,6 +1036,35 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
} }
EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access); EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
static int dma_buf_end_cpu_access_umapped(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
int ret = 0;
WARN_ON(!dmabuf);
if (dmabuf->ops->end_cpu_access_umapped)
ret = dmabuf->ops->end_cpu_access_umapped(dmabuf, direction);
return ret;
}
int dma_buf_end_cpu_access_partial(struct dma_buf *dmabuf,
enum dma_data_direction direction,
unsigned int offset, unsigned int len)
{
int ret = 0;
WARN_ON(!dmabuf);
if (dmabuf->ops->end_cpu_access_partial)
ret = dmabuf->ops->end_cpu_access_partial(dmabuf, direction,
offset, len);
return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access_partial);
/** /**
* dma_buf_kmap - Map a page of the buffer object into kernel address space. The * dma_buf_kmap - Map a page of the buffer object into kernel address space. The
* same restrictions as for kmap and friends apply. * same restrictions as for kmap and friends apply.
@ -1125,6 +1226,20 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
} }
EXPORT_SYMBOL_GPL(dma_buf_vunmap); EXPORT_SYMBOL_GPL(dma_buf_vunmap);
int dma_buf_get_flags(struct dma_buf *dmabuf, unsigned long *flags)
{
int ret = 0;
if (WARN_ON(!dmabuf) || !flags)
return -EINVAL;
if (dmabuf->ops->get_flags)
ret = dmabuf->ops->get_flags(dmabuf, flags);
return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_get_flags);
#ifdef CONFIG_DEBUG_FS #ifdef CONFIG_DEBUG_FS
static int dma_buf_debug_show(struct seq_file *s, void *unused) static int dma_buf_debug_show(struct seq_file *s, void *unused)
{ {

View file

@ -1944,8 +1944,6 @@ static void dma_tc_handle(struct coh901318_chan *cohc)
return; return;
} }
spin_lock(&cohc->lock);
/* /*
* When we reach this point, at least one queue item * When we reach this point, at least one queue item
* should have been moved over from cohc->queue to * should have been moved over from cohc->queue to
@ -1966,8 +1964,6 @@ static void dma_tc_handle(struct coh901318_chan *cohc)
if (coh901318_queue_start(cohc) == NULL) if (coh901318_queue_start(cohc) == NULL)
cohc->busy = 0; cohc->busy = 0;
spin_unlock(&cohc->lock);
/* /*
* This tasklet will remove items from cohc->active * This tasklet will remove items from cohc->active
* and thus terminates them. * and thus terminates them.

View file

@@ -335,6 +335,7 @@ struct sdma_desc {
  * @sdma: pointer to the SDMA engine for this channel
  * @channel: the channel number, matches dmaengine chan_id + 1
  * @direction: transfer type. Needed for setting SDMA script
+ * @slave_config Slave configuration
  * @peripheral_type: Peripheral type. Needed for setting SDMA script
  * @event_id0: aka dma request line
  * @event_id1: for channels that use 2 events
@@ -362,6 +363,7 @@ struct sdma_channel {
 	struct sdma_engine *sdma;
 	unsigned int channel;
 	enum dma_transfer_direction direction;
+	struct dma_slave_config slave_config;
 	enum sdma_peripheral_type peripheral_type;
 	unsigned int event_id0;
 	unsigned int event_id1;
@@ -440,6 +442,10 @@ struct sdma_engine {
 	struct sdma_buffer_descriptor *bd0;
 };
 
+static int sdma_config_write(struct dma_chan *chan,
+		       struct dma_slave_config *dmaengine_cfg,
+		       enum dma_transfer_direction direction);
+
 static struct sdma_driver_data sdma_imx31 = {
 	.chnenbl0 = SDMA_CHNENBL0_IMX31,
 	.num_events = 32,
@@ -1122,18 +1128,6 @@ static int sdma_config_channel(struct dma_chan *chan)
 	sdmac->shp_addr = 0;
 	sdmac->per_addr = 0;
 
-	if (sdmac->event_id0) {
-		if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
-			return -EINVAL;
-		sdma_event_enable(sdmac, sdmac->event_id0);
-	}
-
-	if (sdmac->event_id1) {
-		if (sdmac->event_id1 >= sdmac->sdma->drvdata->num_events)
-			return -EINVAL;
-		sdma_event_enable(sdmac, sdmac->event_id1);
-	}
-
 	switch (sdmac->peripheral_type) {
 	case IMX_DMATYPE_DSP:
 		sdma_config_ownership(sdmac, false, true, true);
@@ -1431,6 +1425,8 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
 	struct scatterlist *sg;
 	struct sdma_desc *desc;
 
+	sdma_config_write(chan, &sdmac->slave_config, direction);
+
 	desc = sdma_transfer_init(sdmac, direction, sg_len);
 	if (!desc)
 		goto err_out;
@@ -1515,6 +1511,8 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 	dev_dbg(sdma->dev, "%s channel: %d\n", __func__, channel);
 
+	sdma_config_write(chan, &sdmac->slave_config, direction);
+
 	desc = sdma_transfer_init(sdmac, direction, num_periods);
 	if (!desc)
 		goto err_out;
@@ -1570,17 +1568,18 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 	return NULL;
 }
 
-static int sdma_config(struct dma_chan *chan,
-		       struct dma_slave_config *dmaengine_cfg)
+static int sdma_config_write(struct dma_chan *chan,
+		       struct dma_slave_config *dmaengine_cfg,
+		       enum dma_transfer_direction direction)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
 
-	if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+	if (direction == DMA_DEV_TO_MEM) {
 		sdmac->per_address = dmaengine_cfg->src_addr;
 		sdmac->watermark_level = dmaengine_cfg->src_maxburst *
 			dmaengine_cfg->src_addr_width;
 		sdmac->word_size = dmaengine_cfg->src_addr_width;
-	} else if (dmaengine_cfg->direction == DMA_DEV_TO_DEV) {
+	} else if (direction == DMA_DEV_TO_DEV) {
 		sdmac->per_address2 = dmaengine_cfg->src_addr;
 		sdmac->per_address = dmaengine_cfg->dst_addr;
 		sdmac->watermark_level = dmaengine_cfg->src_maxburst &
@@ -1594,10 +1593,33 @@ static int sdma_config(struct dma_chan *chan,
 			dmaengine_cfg->dst_addr_width;
 		sdmac->word_size = dmaengine_cfg->dst_addr_width;
 	}
-	sdmac->direction = dmaengine_cfg->direction;
+	sdmac->direction = direction;
 	return sdma_config_channel(chan);
 }
 
+static int sdma_config(struct dma_chan *chan,
+		       struct dma_slave_config *dmaengine_cfg)
+{
+	struct sdma_channel *sdmac = to_sdma_chan(chan);
+
+	memcpy(&sdmac->slave_config, dmaengine_cfg, sizeof(*dmaengine_cfg));
+
+	/* Set ENBLn earlier to make sure dma request triggered after that */
+	if (sdmac->event_id0) {
+		if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
+			return -EINVAL;
+		sdma_event_enable(sdmac, sdmac->event_id0);
+	}
+
+	if (sdmac->event_id1) {
+		if (sdmac->event_id1 >= sdmac->sdma->drvdata->num_events)
+			return -EINVAL;
+		sdma_event_enable(sdmac, sdmac->event_id1);
+	}
+
+	return 0;
+}
+
 static enum dma_status sdma_tx_status(struct dma_chan *chan,
 				      dma_cookie_t cookie,
 				      struct dma_tx_state *txstate)

View file

@@ -288,7 +288,7 @@ static struct tegra_dma_desc *tegra_dma_desc_get(
 	/* Do not allocate if desc are waiting for ack */
 	list_for_each_entry(dma_desc, &tdc->free_dma_desc, node) {
-		if (async_tx_test_ack(&dma_desc->txd)) {
+		if (async_tx_test_ack(&dma_desc->txd) && !dma_desc->cb_count) {
 			list_del(&dma_desc->node);
 			spin_unlock_irqrestore(&tdc->lock, flags);
 			dma_desc->txd.flags = 0;
@@ -756,10 +756,6 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
 	bool was_busy;
 
 	spin_lock_irqsave(&tdc->lock, flags);
-	if (list_empty(&tdc->pending_sg_req)) {
-		spin_unlock_irqrestore(&tdc->lock, flags);
-		return 0;
-	}
 
 	if (!tdc->busy)
 		goto skip_dma_stop;

View file

@@ -2863,6 +2863,7 @@ static int init_csrows(struct mem_ctl_info *mci)
 			dimm = csrow->channels[j]->dimm;
 			dimm->mtype = pvt->dram_type;
 			dimm->edac_mode = edac_mode;
+			dimm->grain = 64;
 		}
 	}

View file

@@ -925,6 +925,22 @@ int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
 }
 EXPORT_SYMBOL_GPL(extcon_register_notifier);
 
+int extcon_register_blocking_notifier(struct extcon_dev *edev, unsigned int id,
+			struct notifier_block *nb)
+{
+	int idx = -EINVAL;
+
+	if (!edev || !nb)
+		return -EINVAL;
+
+	idx = find_cable_index_by_id(edev, id);
+	if (idx < 0)
+		return idx;
+
+	return blocking_notifier_chain_register(&edev->bnh[idx], nb);
+}
+EXPORT_SYMBOL(extcon_register_blocking_notifier);
+
 /**
  * extcon_unregister_notifier() - Unregister a notifier block from the extcon.
  * @edev:	the extcon device

View file

@@ -48,6 +48,7 @@ struct extcon_dev {
 	struct device dev;
 	struct raw_notifier_head nh_all;
 	struct raw_notifier_head *nh;
+	struct blocking_notifier_head *bnh;
 	struct list_head entry;
 	int max_supported;
 	spinlock_t lock;	/* could be called by irq handler */

View file

@@ -139,13 +139,16 @@ static ssize_t
 efivar_attr_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
 	char *str = buf;
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	if (var->Attributes & EFI_VARIABLE_NON_VOLATILE)
@@ -172,13 +175,16 @@ static ssize_t
 efivar_size_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
 	char *str = buf;
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	str += sprintf(str, "0x%lx\n", var->DataSize);
@@ -189,12 +195,15 @@ static ssize_t
 efivar_data_read(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
+	unsigned long size = sizeof(var->Data);
+	int ret;
 
 	if (!entry || !buf)
 		return -EINVAL;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
+	var->DataSize = size;
+	if (ret)
 		return -EIO;
 
 	memcpy(buf, var->Data, var->DataSize);
@@ -263,6 +272,9 @@ efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count)
 	u8 *data;
 	int err;
 
+	if (!entry || !buf)
+		return -EINVAL;
+
 	if (is_compat()) {
 		struct compat_efi_variable *compat;
 
@@ -314,14 +326,16 @@ efivar_show_raw(struct efivar_entry *entry, char *buf)
 {
 	struct efi_variable *var = &entry->var;
 	struct compat_efi_variable *compat;
+	unsigned long datasize = sizeof(var->Data);
 	size_t size;
+	int ret;
 
 	if (!entry || !buf)
 		return 0;
 
-	var->DataSize = 1024;
-	if (efivar_entry_get(entry, &entry->var.Attributes,
-			     &entry->var.DataSize, entry->var.Data))
+	ret = efivar_entry_get(entry, &var->Attributes, &datasize, var->Data);
+	var->DataSize = datasize;
+	if (ret)
 		return -EIO;
 
 	if (is_compat()) {

View file

@@ -45,39 +45,7 @@
 #define __efi_call_virt(f, args...) \
 	__efi_call_virt_pointer(efi.systab->runtime, f, args)
 
-/* efi_runtime_service() function identifiers */
-enum efi_rts_ids {
-	GET_TIME,
-	SET_TIME,
-	GET_WAKEUP_TIME,
-	SET_WAKEUP_TIME,
-	GET_VARIABLE,
-	GET_NEXT_VARIABLE,
-	SET_VARIABLE,
-	QUERY_VARIABLE_INFO,
-	GET_NEXT_HIGH_MONO_COUNT,
-	UPDATE_CAPSULE,
-	QUERY_CAPSULE_CAPS,
-};
-
-/*
- * efi_runtime_work:	Details of EFI Runtime Service work
- * @arg<1-5>:		EFI Runtime Service function arguments
- * @status:		Status of executing EFI Runtime Service
- * @efi_rts_id:		EFI Runtime Service function identifier
- * @efi_rts_comp:	Struct used for handling completions
- */
-struct efi_runtime_work {
-	void *arg1;
-	void *arg2;
-	void *arg3;
-	void *arg4;
-	void *arg5;
-	efi_status_t status;
-	struct work_struct work;
-	enum efi_rts_ids efi_rts_id;
-	struct completion efi_rts_comp;
-};
+struct efi_runtime_work efi_rts_work;
 
 /*
  * efi_queue_work:	Queue efi_runtime_service() and wait until it's done
@@ -91,11 +59,10 @@ struct efi_runtime_work {
  */
 #define efi_queue_work(_rts, _arg1, _arg2, _arg3, _arg4, _arg5)		\
 ({									\
-	struct efi_runtime_work efi_rts_work;				\
 	efi_rts_work.status = EFI_ABORTED;				\
 									\
 	init_completion(&efi_rts_work.efi_rts_comp);			\
-	INIT_WORK_ONSTACK(&efi_rts_work.work, efi_call_rts);		\
+	INIT_WORK(&efi_rts_work.work, efi_call_rts);			\
 	efi_rts_work.arg1 = _arg1;					\
 	efi_rts_work.arg2 = _arg2;					\
 	efi_rts_work.arg3 = _arg3;					\
@@ -191,18 +158,16 @@ extern struct semaphore __efi_uv_runtime_lock __alias(efi_runtime_lock);
  */
 static void efi_call_rts(struct work_struct *work)
 {
-	struct efi_runtime_work *efi_rts_work;
 	void *arg1, *arg2, *arg3, *arg4, *arg5;
 	efi_status_t status = EFI_NOT_FOUND;
 
-	efi_rts_work = container_of(work, struct efi_runtime_work, work);
-	arg1 = efi_rts_work->arg1;
-	arg2 = efi_rts_work->arg2;
-	arg3 = efi_rts_work->arg3;
-	arg4 = efi_rts_work->arg4;
-	arg5 = efi_rts_work->arg5;
+	arg1 = efi_rts_work.arg1;
+	arg2 = efi_rts_work.arg2;
+	arg3 = efi_rts_work.arg3;
+	arg4 = efi_rts_work.arg4;
+	arg5 = efi_rts_work.arg5;
 
-	switch (efi_rts_work->efi_rts_id) {
+	switch (efi_rts_work.efi_rts_id) {
 	case GET_TIME:
 		status = efi_call_virt(get_time, (efi_time_t *)arg1,
 				       (efi_time_cap_t *)arg2);
@@ -260,8 +225,8 @@ static void efi_call_rts(struct work_struct *work)
 	 */
 		pr_err("Requested executing invalid EFI Runtime Service.\n");
 	}
-	efi_rts_work->status = status;
-	complete(&efi_rts_work->efi_rts_comp);
+	efi_rts_work.status = status;
+	complete(&efi_rts_work.efi_rts_comp);
 }
 
 static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)

View file

@@ -1909,7 +1909,11 @@ static int gpiochip_add_irqchip(struct gpio_chip *gpiochip,
 		type = IRQ_TYPE_NONE;
 	}
 
-	gpiochip->to_irq = gpiochip_to_irq;
+#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+	if (!gpiochip->to_irq)
+#endif
+		gpiochip->to_irq = gpiochip_to_irq;
+
 	gpiochip->irq.default_type = type;
 	gpiochip->irq.lock_key = lock_key;
 	gpiochip->irq.request_key = request_key;
@@ -1919,9 +1923,16 @@ static int gpiochip_add_irqchip(struct gpio_chip *gpiochip,
 	else
 		ops = &gpiochip_domain_ops;
 
-	gpiochip->irq.domain = irq_domain_add_simple(np, gpiochip->ngpio,
-						     gpiochip->irq.first,
-						     ops, gpiochip);
+#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+	if (gpiochip->irq.parent_domain)
+		gpiochip->irq.domain = irq_domain_add_hierarchy(gpiochip->irq.parent_domain,
+								0, gpiochip->ngpio,
+								np, ops, gpiochip);
+	else
+#endif
+		gpiochip->irq.domain = irq_domain_add_simple(np, gpiochip->ngpio,
+							     gpiochip->irq.first,
+							     ops, gpiochip);
 	if (!gpiochip->irq.domain)
 		return -EINVAL;

View file

@@ -364,8 +364,7 @@ bool amdgpu_atombios_get_connector_info_from_object_table(struct amdgpu_device *
 		router.ddc_valid = false;
 		router.cd_valid = false;
 		for (j = 0; j < ((le16_to_cpu(path->usSize) - 8) / 2); j++) {
-			uint8_t grph_obj_type=
-			grph_obj_type =
+			uint8_t grph_obj_type =
 			(le16_to_cpu(path->usGraphicObjIds[j]) &
 			 OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;

View file

@@ -694,11 +694,11 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
 	ssize_t result = 0;
 	uint32_t offset, se, sh, cu, wave, simd, thread, bank, *data;
 
-	if (size & 3 || *pos & 3)
+	if (size > 4096 || size & 3 || *pos & 3)
 		return -EINVAL;
 
 	/* decode offset */
-	offset = *pos & GENMASK_ULL(11, 0);
+	offset = (*pos & GENMASK_ULL(11, 0)) >> 2;
 	se = (*pos & GENMASK_ULL(19, 12)) >> 12;
 	sh = (*pos & GENMASK_ULL(27, 20)) >> 20;
 	cu = (*pos & GENMASK_ULL(35, 28)) >> 28;
@@ -729,7 +729,7 @@ static ssize_t amdgpu_debugfs_gpr_read(struct file *f, char __user *buf,
 	while (size) {
 		uint32_t value;
 
-		value = data[offset++];
+		value = data[result >> 2];
 		r = put_user(value, (uint32_t *)buf);
 		if (r) {
 			result = r;

View file

@@ -97,6 +97,7 @@ struct amdgpu_gmc {
 	uint32_t		srbm_soft_reset;
 	bool			prt_warning;
 	uint64_t		stolen_size;
+	uint32_t		sdpif_register;
 	/* apertures */
 	u64			shared_aperture_start;
 	u64			shared_aperture_end;

View file

@@ -992,6 +992,19 @@ static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
 	}
 }
 
+/**
+ * gmc_v9_0_restore_registers - restores regs
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * This restores register values, saved at suspend.
+ */
+static void gmc_v9_0_restore_registers(struct amdgpu_device *adev)
+{
+	if (adev->asic_type == CHIP_RAVEN)
+		WREG32(mmDCHUBBUB_SDPIF_MMIO_CNTRL_0, adev->gmc.sdpif_register);
+}
+
 /**
  * gmc_v9_0_gart_enable - gart enable
  *
@@ -1080,6 +1093,20 @@ static int gmc_v9_0_hw_init(void *handle)
 	return r;
 }
 
+/**
+ * gmc_v9_0_save_registers - saves regs
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * This saves potential register values that should be
+ * restored upon resume
+ */
+static void gmc_v9_0_save_registers(struct amdgpu_device *adev)
+{
+	if (adev->asic_type == CHIP_RAVEN)
+		adev->gmc.sdpif_register = RREG32(mmDCHUBBUB_SDPIF_MMIO_CNTRL_0);
+}
+
 /**
  * gmc_v9_0_gart_disable - gart disable
  *
@@ -1112,9 +1139,16 @@ static int gmc_v9_0_hw_fini(void *handle)
 static int gmc_v9_0_suspend(void *handle)
 {
+	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	return gmc_v9_0_hw_fini(adev);
+	r = gmc_v9_0_hw_fini(adev);
+	if (r)
+		return r;
+
+	gmc_v9_0_save_registers(adev);
+
+	return 0;
 }
 
 static int gmc_v9_0_resume(void *handle)
@@ -1122,6 +1156,7 @@ static int gmc_v9_0_resume(void *handle)
 	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	gmc_v9_0_restore_registers(adev);
 	r = gmc_v9_0_hw_init(adev);
 	if (r)
 		return r;

View file

@@ -419,6 +419,7 @@ static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
 		dc_link_remove_remote_sink(aconnector->dc_link, aconnector->dc_sink);
 		dc_sink_release(aconnector->dc_sink);
 		aconnector->dc_sink = NULL;
+		aconnector->dc_link->cur_link_settings.lane_count = 0;
 	}
 
 	drm_connector_unregister(connector);

View file

@@ -684,8 +684,8 @@ static void hubbub1_det_request_size(
 	hubbub1_get_blk256_size(&blk256_width, &blk256_height, bpe);
 
-	swath_bytes_horz_wc = height * blk256_height * bpe;
-	swath_bytes_vert_wc = width * blk256_width * bpe;
+	swath_bytes_horz_wc = width * blk256_height * bpe;
+	swath_bytes_vert_wc = height * blk256_width * bpe;
 
 	*req128_horz_wc = (2 * swath_bytes_horz_wc <= detile_buf_size) ?
 			false : /* full 256B request */

View file

@@ -7376,6 +7376,8 @@
 #define mmCRTC4_CRTC_DRR_CONTROL 0x0f3e
 #define mmCRTC4_CRTC_DRR_CONTROL_BASE_IDX 2
 
+#define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0 0x395d
+#define mmDCHUBBUB_SDPIF_MMIO_CNTRL_0_BASE_IDX 2
 
 // addressBlock: dce_dc_fmt4_dispdec
 // base address: 0x2000

View file

@@ -1364,28 +1364,34 @@ static void hdmi_config_AVI(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
 		frame.colorspace = HDMI_COLORSPACE_RGB;
 
 	/* Set up colorimetry */
-	switch (hdmi->hdmi_data.enc_out_encoding) {
-	case V4L2_YCBCR_ENC_601:
-		if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601)
-			frame.colorimetry = HDMI_COLORIMETRY_EXTENDED;
-		else
-			frame.colorimetry = HDMI_COLORIMETRY_ITU_601;
-		frame.extended_colorimetry =
-				HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
-		break;
-	case V4L2_YCBCR_ENC_709:
-		if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709)
-			frame.colorimetry = HDMI_COLORIMETRY_EXTENDED;
-		else
-			frame.colorimetry = HDMI_COLORIMETRY_ITU_709;
-		frame.extended_colorimetry =
-				HDMI_EXTENDED_COLORIMETRY_XV_YCC_709;
-		break;
-	default: /* Carries no data */
-		frame.colorimetry = HDMI_COLORIMETRY_ITU_601;
-		frame.extended_colorimetry =
-				HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
-		break;
+	if (!hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format)) {
+		switch (hdmi->hdmi_data.enc_out_encoding) {
+		case V4L2_YCBCR_ENC_601:
+			if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601)
+				frame.colorimetry = HDMI_COLORIMETRY_EXTENDED;
+			else
+				frame.colorimetry = HDMI_COLORIMETRY_ITU_601;
+			frame.extended_colorimetry =
+					HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
+			break;
+		case V4L2_YCBCR_ENC_709:
+			if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709)
+				frame.colorimetry = HDMI_COLORIMETRY_EXTENDED;
+			else
+				frame.colorimetry = HDMI_COLORIMETRY_ITU_709;
+			frame.extended_colorimetry =
+					HDMI_EXTENDED_COLORIMETRY_XV_YCC_709;
+			break;
+		default: /* Carries no data */
+			frame.colorimetry = HDMI_COLORIMETRY_ITU_601;
+			frame.extended_colorimetry =
+					HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
+			break;
+		}
+	} else {
+		frame.colorimetry = HDMI_COLORIMETRY_NONE;
+		frame.extended_colorimetry =
+			HDMI_EXTENDED_COLORIMETRY_XV_YCC_601;
 	}
 
 	frame.scan_mode = HDMI_SCAN_MODE_NONE;

View file

@@ -545,10 +545,12 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
 	}
 
 	DRM_DEBUG_LEASE("Creating lease\n");
+	/* lessee will take the ownership of leases */
 	lessee = drm_lease_create(lessor, &leases);
 
 	if (IS_ERR(lessee)) {
 		ret = PTR_ERR(lessee);
+		idr_destroy(&leases);
 		goto out_leases;
 	}
 
@@ -583,7 +585,6 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
 out_leases:
 	put_unused_fd(fd);
-	idr_destroy(&leases);
 
 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret);
 	return ret;

View file

@@ -1722,8 +1722,9 @@ static int exynos_dsi_probe(struct platform_device *pdev)
 	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(dsi->supplies),
 				      dsi->supplies);
 	if (ret) {
-		dev_info(dev, "failed to get regulators: %d\n", ret);
-		return -EPROBE_DEFER;
+		if (ret != -EPROBE_DEFER)
+			dev_info(dev, "failed to get regulators: %d\n", ret);
+		return ret;
 	}
 
 	dsi->clks = devm_kcalloc(dev,
@@ -1736,9 +1737,10 @@ static int exynos_dsi_probe(struct platform_device *pdev)
 		dsi->clks[i] = devm_clk_get(dev, clk_names[i]);
 		if (IS_ERR(dsi->clks[i])) {
 			if (strcmp(clk_names[i], "sclk_mipi") == 0) {
-				strcpy(clk_names[i], OLD_SCLK_MIPI_CLK_NAME);
-				i--;
-				continue;
+				dsi->clks[i] = devm_clk_get(dev,
+							OLD_SCLK_MIPI_CLK_NAME);
+				if (!IS_ERR(dsi->clks[i]))
+					continue;
 			}
 
 			dev_info(dev, "failed to get the clock: %s\n",

View file

@@ -95,12 +95,12 @@ static void dmabuf_gem_object_free(struct kref *kref)
 			dmabuf_obj = container_of(pos,
 					struct intel_vgpu_dmabuf_obj, list);
 			if (dmabuf_obj == obj) {
+				list_del(pos);
 				intel_gvt_hypervisor_put_vfio_device(vgpu);
 				idr_remove(&vgpu->object_idr,
 					   dmabuf_obj->dmabuf_id);
 				kfree(dmabuf_obj->info);
 				kfree(dmabuf_obj);
-				list_del(pos);
 				break;
 			}
 		}

View file

@@ -272,10 +272,17 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 {
 	struct intel_gvt *gvt = vgpu->gvt;
 
-	mutex_lock(&vgpu->vgpu_lock);
 	WARN(vgpu->active, "vGPU is still active!\n");
 
+	/*
+	 * remove idr first so later clean can judge if need to stop
+	 * service if no active vgpu.
+	 */
+	mutex_lock(&gvt->lock);
+	idr_remove(&gvt->vgpu_idr, vgpu->id);
+	mutex_unlock(&gvt->lock);
+
+	mutex_lock(&vgpu->vgpu_lock);
 	intel_gvt_debugfs_remove_vgpu(vgpu);
 	intel_vgpu_clean_sched_policy(vgpu);
 	intel_vgpu_clean_submission(vgpu);
@@ -290,7 +297,6 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 	mutex_unlock(&vgpu->vgpu_lock);
 
 	mutex_lock(&gvt->lock);
-	idr_remove(&gvt->vgpu_idr, vgpu->id);
 	if (idr_is_empty(&gvt->vgpu_idr))
 		intel_gvt_clean_irq(gvt);
 	intel_gvt_update_vgpu_types(gvt);
@@ -556,9 +562,9 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 	intel_vgpu_reset_mmio(vgpu, dmlr);
 	populate_pvinfo_page(vgpu);
+	intel_vgpu_reset_display(vgpu);
 
 	if (dmlr) {
-		intel_vgpu_reset_display(vgpu);
 		intel_vgpu_reset_cfg_space(vgpu);
 		/* only reset the failsafe mode when dmlr reset */
 		vgpu->failsafe = false;

View file

@@ -506,10 +506,18 @@ static const struct drm_crtc_helper_funcs mtk_crtc_helper_funcs = {
 static int mtk_drm_crtc_init(struct drm_device *drm,
 			     struct mtk_drm_crtc *mtk_crtc,
-			     struct drm_plane *primary,
-			     struct drm_plane *cursor, unsigned int pipe)
+			     unsigned int pipe)
 {
-	int ret;
+	struct drm_plane *primary = NULL;
+	struct drm_plane *cursor = NULL;
+	int i, ret;
+
+	for (i = 0; i < mtk_crtc->layer_nr; i++) {
+		if (mtk_crtc->planes[i].type == DRM_PLANE_TYPE_PRIMARY)
+			primary = &mtk_crtc->planes[i];
+		else if (mtk_crtc->planes[i].type == DRM_PLANE_TYPE_CURSOR)
+			cursor = &mtk_crtc->planes[i];
+	}
 
 	ret = drm_crtc_init_with_planes(drm, &mtk_crtc->base, primary, cursor,
 					&mtk_crtc_funcs, NULL);
@@ -622,9 +630,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
 		goto unprepare;
 	}
 
-	ret = mtk_drm_crtc_init(drm_dev, mtk_crtc, &mtk_crtc->planes[0],
-				mtk_crtc->layer_nr > 1 ? &mtk_crtc->planes[1] :
-				NULL, pipe);
+	ret = mtk_drm_crtc_init(drm_dev, mtk_crtc, pipe);
 	if (ret < 0)
 		goto unprepare;
 	drm_mode_crtc_set_gamma_size(&mtk_crtc->base, MTK_LUT_SIZE);

View file

@@ -1118,8 +1118,8 @@ static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc)
 	ret = wait_for_completion_timeout(&mdp5_crtc->pp_completion,
 					  msecs_to_jiffies(50));
 	if (ret == 0)
-		dev_warn(dev->dev, "pp done time out, lm=%d\n",
-			 mdp5_cstate->pipeline.mixer->lm);
+		dev_warn_ratelimited(dev->dev, "pp done time out, lm=%d\n",
+				     mdp5_cstate->pipeline.mixer->lm);
 }
 
 static void mdp5_crtc_wait_for_flush_done(struct drm_crtc *crtc)

View file

@@ -328,7 +328,7 @@ static int dsi_mgr_connector_get_modes(struct drm_connector *connector)
 	return num;
 }
 
-static int dsi_mgr_connector_mode_valid(struct drm_connector *connector,
+static enum drm_mode_status dsi_mgr_connector_mode_valid(struct drm_connector *connector,
 				struct drm_display_mode *mode)
 {
 	int id = dsi_mgr_connector_get_id(connector);
@@ -471,6 +471,7 @@ static void dsi_mgr_bridge_post_disable(struct drm_bridge *bridge)
 	struct msm_dsi *msm_dsi1 = dsi_mgr_get_dsi(DSI_1);
 	struct mipi_dsi_host *host = msm_dsi->host;
 	struct drm_panel *panel = msm_dsi->panel;
+	struct msm_dsi_pll *src_pll;
 	bool is_dual_dsi = IS_DUAL_DSI();
 	int ret;
 
@@ -511,6 +512,10 @@ static void dsi_mgr_bridge_post_disable(struct drm_bridge *bridge)
 						id, ret);
 	}
 
+	/* Save PLL status if it is a clock source */
+	src_pll = msm_dsi_phy_get_pll(msm_dsi->phy);
+	msm_dsi_pll_save_state(src_pll);
+
 	ret = msm_dsi_host_power_off(host);
 	if (ret)
 		pr_err("%s: host %d power off failed,%d\n", __func__, id, ret);

View file

@@ -726,10 +726,6 @@ void msm_dsi_phy_disable(struct msm_dsi_phy *phy)
 	if (!phy || !phy->cfg->ops.disable)
 		return;
 
-	/* Save PLL status if it is a clock source */
-	if (phy->usecase != MSM_DSI_PHY_SLAVE)
-		msm_dsi_pll_save_state(phy->pll);
-
 	phy->cfg->ops.disable(phy);
 
 	dsi_phy_regulator_disable(phy);

View file

@@ -406,6 +406,12 @@ static int dsi_pll_10nm_vco_prepare(struct clk_hw *hw)
 	if (pll_10nm->slave)
 		dsi_pll_enable_pll_bias(pll_10nm->slave);
 
+	rc = dsi_pll_10nm_vco_set_rate(hw,pll_10nm->vco_current_rate, 0);
+	if (rc) {
+		pr_err("vco_set_rate failed, rc=%d\n", rc);
+		return rc;
+	}
+
 	/* Start PLL */
 	pll_write(pll_10nm->phy_cmn_mmio + REG_DSI_10nm_PHY_CMN_PLL_CNTRL,
 		  0x01);

View file

@@ -492,6 +492,14 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	if (ret)
 		goto err_msm_uninit;
 
+	if (!dev->dma_parms) {
+		dev->dma_parms = devm_kzalloc(dev, sizeof(*dev->dma_parms),
+					      GFP_KERNEL);
+		if (!dev->dma_parms)
+			return -ENOMEM;
+	}
+	dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
+
 	msm_gem_shrinker_init(ddev);
 
 	switch (get_mdp_ver(pdev)) {

View file

@ -110,48 +110,104 @@ static const struct de2_fmt_info de2_formats[] = {
.rgb = true, .rgb = true,
.csc = SUN8I_CSC_MODE_OFF, .csc = SUN8I_CSC_MODE_OFF,
}, },
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_XRGB4444,
.de2_fmt = SUN8I_MIXER_FBFMT_ARGB4444,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{ {
.drm_fmt = DRM_FORMAT_ABGR4444, .drm_fmt = DRM_FORMAT_ABGR4444,
.de2_fmt = SUN8I_MIXER_FBFMT_ABGR4444, .de2_fmt = SUN8I_MIXER_FBFMT_ABGR4444,
.rgb = true, .rgb = true,
.csc = SUN8I_CSC_MODE_OFF, .csc = SUN8I_CSC_MODE_OFF,
}, },
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_XBGR4444,
.de2_fmt = SUN8I_MIXER_FBFMT_ABGR4444,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{ {
.drm_fmt = DRM_FORMAT_RGBA4444, .drm_fmt = DRM_FORMAT_RGBA4444,
.de2_fmt = SUN8I_MIXER_FBFMT_RGBA4444, .de2_fmt = SUN8I_MIXER_FBFMT_RGBA4444,
.rgb = true, .rgb = true,
.csc = SUN8I_CSC_MODE_OFF, .csc = SUN8I_CSC_MODE_OFF,
}, },
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_RGBX4444,
.de2_fmt = SUN8I_MIXER_FBFMT_RGBA4444,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_BGRA4444,
.de2_fmt = SUN8I_MIXER_FBFMT_BGRA4444,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_BGRX4444,
.de2_fmt = SUN8I_MIXER_FBFMT_BGRA4444,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_ARGB1555,
.de2_fmt = SUN8I_MIXER_FBFMT_ARGB1555,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_XRGB1555,
.de2_fmt = SUN8I_MIXER_FBFMT_ARGB1555,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_ABGR1555,
.de2_fmt = SUN8I_MIXER_FBFMT_ABGR1555,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_XBGR1555,
.de2_fmt = SUN8I_MIXER_FBFMT_ABGR1555,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_RGBA5551,
.de2_fmt = SUN8I_MIXER_FBFMT_RGBA5551,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_RGBX5551,
.de2_fmt = SUN8I_MIXER_FBFMT_RGBA5551,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_BGRA5551,
.de2_fmt = SUN8I_MIXER_FBFMT_BGRA5551,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
/* for DE2 VI layer which ignores alpha */
.drm_fmt = DRM_FORMAT_BGRX5551,
.de2_fmt = SUN8I_MIXER_FBFMT_BGRA5551,
.rgb = true,
.csc = SUN8I_CSC_MODE_OFF,
},
{
.drm_fmt = DRM_FORMAT_UYVY,
.de2_fmt = SUN8I_MIXER_FBFMT_UYVY,
@@ -200,12 +256,6 @@ static const struct de2_fmt_info de2_formats[] = {
.rgb = false,
.csc = SUN8I_CSC_MODE_YUV2RGB,
},
{
.drm_fmt = DRM_FORMAT_YUV444,
.de2_fmt = SUN8I_MIXER_FBFMT_RGB888,
.rgb = true,
.csc = SUN8I_CSC_MODE_YUV2RGB,
},
{
.drm_fmt = DRM_FORMAT_YUV422,
.de2_fmt = SUN8I_MIXER_FBFMT_YUV422,
@@ -224,12 +274,6 @@ static const struct de2_fmt_info de2_formats[] = {
.rgb = false,
.csc = SUN8I_CSC_MODE_YUV2RGB,
},
{
.drm_fmt = DRM_FORMAT_YVU444,
.de2_fmt = SUN8I_MIXER_FBFMT_RGB888,
.rgb = true,
.csc = SUN8I_CSC_MODE_YVU2RGB,
},
{
.drm_fmt = DRM_FORMAT_YVU422,
.de2_fmt = SUN8I_MIXER_FBFMT_YUV422,


@@ -330,26 +330,26 @@ static const struct drm_plane_funcs sun8i_vi_layer_funcs = {
};
/*
* While all RGB formats are supported, VI planes don't support
* alpha blending, so there is no point having formats with alpha
* channel if their opaque analog exist.
*/
/*
* While DE2 VI layer supports same RGB formats as UI layer, alpha
* channel is ignored. This structure lists all unique variants
* where alpha channel is replaced with "don't care" (X) channel.
*/
static const u32 sun8i_vi_layer_formats[] = {
DRM_FORMAT_ABGR1555,
DRM_FORMAT_ABGR4444,
DRM_FORMAT_ARGB1555,
DRM_FORMAT_ARGB4444,
DRM_FORMAT_BGR565,
DRM_FORMAT_BGR888,
DRM_FORMAT_BGRA5551,
DRM_FORMAT_BGRA4444,
DRM_FORMAT_BGRX4444,
DRM_FORMAT_BGRX5551,
DRM_FORMAT_BGRX8888,
DRM_FORMAT_RGB565,
DRM_FORMAT_RGB888,
DRM_FORMAT_RGBA4444,
DRM_FORMAT_RGBA5551,
DRM_FORMAT_RGBX4444,
DRM_FORMAT_RGBX5551,
DRM_FORMAT_RGBX8888,
DRM_FORMAT_XBGR1555,
DRM_FORMAT_XBGR4444,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_XRGB1555,
DRM_FORMAT_XRGB4444,
DRM_FORMAT_XRGB8888,
DRM_FORMAT_NV16,
@@ -363,11 +363,9 @@ static const u32 sun8i_vi_layer_formats[] = {
DRM_FORMAT_YUV411,
DRM_FORMAT_YUV420,
DRM_FORMAT_YUV422,
DRM_FORMAT_YUV444,
DRM_FORMAT_YVU411,
DRM_FORMAT_YVU420,
DRM_FORMAT_YVU422,
DRM_FORMAT_YVU444,
};
struct sun8i_vi_layer *sun8i_vi_layer_init_one(struct drm_device *drm,
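The format-table changes above all alias an "X" (don't-care alpha) DRM fourcc to the same hardware register value as its "A" counterpart, since the DE2 VI layer ignores the alpha bits anyway. A minimal userspace sketch of that lookup idea, using hypothetical enum values rather than the real DRM fourccs or mixer register encodings:

```c
#include <stddef.h>

/* Hypothetical stand-ins for DRM fourccs and mixer register values. */
enum drm_fmt { FMT_ARGB4444, FMT_XRGB4444, FMT_ABGR4444, FMT_XBGR4444 };
enum de2_fmt { FBFMT_ARGB4444 = 0x0a, FBFMT_ABGR4444 = 0x0b };

struct fmt_info { enum drm_fmt drm; enum de2_fmt de2; };

/* X variants map to the same hardware format as their A counterparts:
 * the VI layer ignores alpha, so the pixel encoding is identical. */
static const struct fmt_info de2_formats[] = {
	{ FMT_ARGB4444, FBFMT_ARGB4444 },
	{ FMT_XRGB4444, FBFMT_ARGB4444 }, /* alpha ignored */
	{ FMT_ABGR4444, FBFMT_ABGR4444 },
	{ FMT_XBGR4444, FBFMT_ABGR4444 }, /* alpha ignored */
};

/* Linear scan over the table; returns -1 for unsupported formats. */
static int de2_fmt_lookup(enum drm_fmt f)
{
	for (size_t i = 0; i < sizeof(de2_formats) / sizeof(de2_formats[0]); i++)
		if (de2_formats[i].drm == f)
			return de2_formats[i].de2;
	return -1;
}
```

The driver keeps one table entry per DRM format for exactly this reason: two fourccs can legitimately share one hardware encoding.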


@@ -734,7 +734,7 @@ static int alps_input_configured(struct hid_device *hdev, struct hid_input *hi)
if (data->has_sp) {
input2 = input_allocate_device();
if (!input2) {
input_free_device(input2);
ret = -ENOMEM;
goto exit;
}
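The alps hunk above fixes an error path that called `input_free_device()` on the very pointer whose allocation had just failed (a pointless free of NULL) while leaving the success status in `ret`. The corrected shape, sketched in userspace C with hypothetical names standing in for the HID driver's:

```c
#include <stdlib.h>

struct input_dev { int id; };

static struct input_dev *input_alloc(void)
{
	return calloc(1, sizeof(struct input_dev));
}

/* On allocation failure, record an error code and jump to the common
 * cleanup label; there is nothing to free because nothing was allocated.
 * The 'fail' flag simulates an allocation failure for demonstration. */
static int configure(int fail)
{
	int ret = 0;
	struct input_dev *input2 = fail ? NULL : input_alloc();

	if (!input2) {
		ret = -12; /* -ENOMEM; do NOT "free" the NULL result */
		goto exit;
	}
	/* ... register the second input device here ... */
exit:
	free(input2); /* free(NULL) is a safe no-op */
	return ret;
}
```

The key point of the fix is that the caller now sees `-ENOMEM` instead of a stale success code when the allocation fails.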
