Merge branch 'android14-5.15' into branch 'android14-5.15-lts'

A back-merge to catch up with the latest changes in the android14-5.15
branch.  This contains the following commits:

* 1982cd32ae ANDROID: GKI: Remove CONFIG_MEDIA_CEC_RC
*   270ce44fdb Merge "Merge tag 'android14-5.15.144_r00' into branch 'android14-5.15'" into android14-5.15
|\
| * f26b6b0390 Merge tag 'android14-5.15.144_r00' into branch 'android14-5.15'
* | 9a196e8051 ANDROID: uid_sys_stats: Drop CONFIG_UID_SYS_STATS_DEBUG logic
* | debd0f470b ANDROID: uid_sys_stats: Fully initialize uid_entry_tmp value
* | cf1268f696 UPSTREAM: usb: gadget: uvc: Remove nested locking
* | c9395ddbaa UPSTREAM: usb: gadget: uvc: Fix use are free during STREAMOFF
* | 7a71ed71fc ANDROID: fuse-bpf: Fix the issue of abnormal lseek system calls
* | da99db444b ANDROID: Update the ABI symbol list
* | 4f66f3be95 ANDROID: Export sysctl_sched_min_granularity
* | e32aeb03b9 UPSTREAM: sched/fair: Limit sched slice duration
* | b9e9a2c009 FROMGIT: mm: update mark_victim tracepoints fields
* | 61d5f76a90 UPSTREAM: mmc: core: Do not force a retune before RPMB switch
* | d85e6cb679 ANDROID: GKI: add symbol list for etm driver
* | eb31cad2df UPSTREAM: coresight: tmc: Don't enable TMC when it's not ready.
* | 57ddb1ecd7 UPSTREAM: netfilter: nf_tables: bail out on mismatching dynset and set expressions
|/
* a4da62d21c FROMGIT: usb: dwc3: gadget: Handle EP0 request dequeuing properly
* 0d3c49180f UPSTREAM: usb: dwc3: gadget: Refactor EP0 forced stall/restart into a separate API
* 154a4394d0 UPSTREAM: usb: dwc3: gadget: Execute gadget stop after halting the controller
* 0b1767af48 BACKPORT: usb: dwc3: gadget: Stall and restart EP0 if host is unresponsive
* b68fafef56 UPSTREAM: usb: dwc3: gadget: Submit endxfer command if delayed during disconnect
* 02524b7519 UPSTREAM: usb: dwc3: ep0: Don't prepare beyond Setup stage
* 149320568d ANDROID: arm64: mm: perform clean & invalidation in __dma_map_area
* d0cdb904f9 ANDROID: GKI: Update symbol list for Amlogic
* a95b355479 UPSTREAM: bcache: move uapi header bcache.h to bcache code directory
* 6246b8f3ef UPSTREAM: netfilter: nf_tables: skip set commit for deleted/destroyed sets
* 6048991845 ANDROID: KVM: arm64: Avoid BUG-ing from the host abort path
* 38cb0b1181 ANDROID: GKI: Update symbol list for lenovo
* 3cd672870a ANDROID: binder: fix KMI-break due to alloc->lock
* 63f7ddea2e ANDROID: binder: fix KMI-break due to address type change
* 74ecd99c15 BACKPORT: FROMGIT: binder: switch alloc->mutex to spinlock_t
* 5a8658eac3 BACKPORT: FROMGIT: binder: reverse locking order in shrinker callback
* f0667c870c FROMGIT: binder: avoid user addresses in debug logs
* b93c9f8565 FROMGIT: binder: refactor binder_delete_free_buffer()
* f6b1c043ae FROMGIT: binder: collapse print_binder_buffer() into caller
* 683f84a35f FROMGIT: binder: document the final page calculation
* 4c82cbad43 BACKPORT: FROMGIT: binder: rename lru shrinker utilities
* eba1fb9603 UPSTREAM: drivers/android: remove redundant ret variable
* 356047fe2a FROMGIT: binder: make oversized buffer code more readable
* f7476dca31 FROMGIT: binder: remove redundant debug log
* 477e8e8453 BACKPORT: FROMGIT: binder: perform page installation outside of locks
* af71193412 FROMGIT: binder: initialize lru pages in mmap callback
* ef524f4dd4 FROMGIT: binder: malloc new_buffer outside of locks
* b23dbdbf19 BACKPORT: FROMGIT: binder: refactor page range allocation
* 59e0d62fc8 BACKPORT: FROMGIT: binder: relocate binder_alloc_clear_buf()
* 081ddad216 FROMGIT: binder: relocate low space calculation
* e1d195e94d FROMGIT: binder: separate the no-space debugging logic
* 26d06d9349 FROMGIT: binder: remove pid param in binder_alloc_new_buf()
* d5c44f9065 FROMGIT: binder: do unlocked work in binder_alloc_new_buf()
* 0b24368fff FROMGIT: binder: split up binder_update_page_range()
* c38a89805f FROMGIT: binder: keep vma addresses type as unsigned long
* ca5c7be9e0 FROMGIT: binder: remove extern from function prototypes
* 2a250a1528 FROMGIT: binder: fix comment on binder_alloc_new_buf() return value
* 26f0c01348 FROMGIT: binder: fix trivial typo of binder_free_buf_locked()
* 11ca07657c FROMGIT: binder: fix unused alloc->free_async_space
* 65cf1585ea FROMGIT: binder: fix async space check for 0-sized buffers
* 1787dddd97 FROMGIT: binder: fix race between mmput() and do_exit()
* 8dce2880bc FROMGIT: binder: fix use-after-free in shinker's callback
* 3c4732563e FROMGIT: binder: use EPOLLERR from eventpoll.h
* 486b17a096 ANDROID: GKI: Update symbol list for Amlogic
* bfefe25dfa UPSTREAM: bpf: Fix prog_array_map_poke_run map poke update
* 274748592e ANDROID: gki_defconfig: Set CONFIG_IDLE_INJECT and CONFIG_CPU_IDLE_THERMAL into y
* 597362d44f ANDROID: KVM: arm64: Don't prepopulate MMIO regions for host stage-2
* 45d542adb4 ANDROID: KVM: arm64: Fix host_smc print typo
* 38bb85f2fb ANDROID: KVM: arm64: Fix hyp event alignment
* c09c7c05d0 ANDROID: KVM: arm64: Document module_change_host_prot_range
* 80d91f64ba BACKPORT: USB: gadget: core: adjust uevent timing on gadget unbind
* 19a4494b2b ANDROID: GKI: Update RTK STB KMI symbol list
* 55f6a96975 UPSTREAM: dm verity: don't perform FEC for failed readahead IO
* 781393c0a2 UPSTREAM: ipv4: igmp: fix refcnt uaf issue when receiving igmp query packet
* 97c69470fe UPSTREAM: netfilter: nft_set_pipapo: skip inactive elements during set walk
* 16ea59408c ANDROID: fuse-bpf: Follow mounts in lookups
* abea1cb16e ANDROID: Snapshot Mainline's version of checkpatch.pl
* 9f6f0c1de5 ANDROID: GKI: Update symbol list for Amlogic
* a5123cff8d ANDROID: KVM: arm64: Skip prefaulting ptes which will be modified later
* 97bbbf4497 ANDROID: KVM: arm64: Introduce module_change_host_prot_range
* 1dc2dbbb57 ANDROID: KVM: arm64: Relax checks in module_change_host_page_prot
* b43baa770b ANDROID: KVM: arm64: Optimise module_change_host_page_prot
* dc87f3522e ANDROID: KVM: arm64: Prefault entries when splitting a block mapping
* ab6f88aebe ANDROID: GKI: Update symbol list for transsion
* 896cff8734 ANDROID: Add vendor_hooks to workaround CONFIG_TASK_DELAY_ACCT
* 3d3f9377b2 ANDROID:  Add missing symbol for QCOM
* 5feaf92b24 UPSTREAM: binder: fix memory leaks of spam and pending work
* 92bef7c6af UPSTREAM: mm,kfence: decouple kfence from page granularity mapping judgement
* e6ffb329ee UPSTREAM: arm64/mm: fold check for KFENCE into can_set_direct_map()
* b39c28c44c ANDROID: GKI: db845c: Update symbols list and ABI on rpmsg_register_device_override
* d0b481d97e ANDROID: fix up rpmsg_device ABI break
* 6bfb30205b ANDROID: fix up platform_device ABI break
* 649e9135df UPSTREAM: rpmsg: Fix possible refcount leak in rpmsg_register_device_override()
* b1e39deac4 UPSTREAM: rpmsg: glink: Release driver_override
* 9697a16480 BACKPORT: rpmsg: Fix calling device_lock() on non-initialized device
* 01b4519a41 BACKPORT: rpmsg: Fix kfree() of static memory on setting driver_override
* d82ae69002 UPSTREAM: rpmsg: Constify local variable in field store macro
* 341acb7bac UPSTREAM: driver: platform: Add helper for safer setting of driver_override
* 81051a615f Revert "ANDROID: Enable CONFIG_KUNIT=y."
* 1d5461bec0 Revert "ANDROID: Add kunit targets."
* 12ab8f1569 UPSTREAM: io_uring/fdinfo: lock SQ thread while retrieving thread cpu/pid
* 3a7b8e544b ANDROID: arm64: Remove a bunch of duplicate errata hunks
* 4ba6c3197c ANDROID: arm64: Disable workaround for CPU errata 2441007 and 2441009
* 0bea71c862 ANDROID: abi_gki_aarch64_qcom: Add GIC and hibernation APIs
* 3923e9952d ANDROID: irqchip/irq-gic-v3: Add vendor hook for gic suspend
* bd58836882 ANDROID: Update the ABI representation
* 16a47663f5 BACKPORT: fscrypt: support crypto data unit size less than filesystem block size
* a934b92361 ANDROID: mm: do not allow file-backed pages from CMA
* 35482d0d38 UPSTREAM: netfilter: nf_tables: remove catchall element in GC sync path
* e19a3cd1ce ANDROID: fuse-bpf: Ignore readaheads unless they go to the daemon
* 5421e17c17 ANDROID: Update the ABI symbol list
* 46f8b2ca58 ANDROID: GKI: add a vendor hook in ptep_clear_flush_young()
* 0add0e52ef UPSTREAM: fs: drop_caches: draining pages before dropping caches
*   0d1f309e44 Merge "Merge tag 'android14-5.15.137_r00' into branch 'android14-5.15'" into android14-5.15
|\
| * 6dfd4d406c Merge tag 'android14-5.15.137_r00' into branch 'android14-5.15'
* | 73c2c0d53d ANDROID: Update the ABI symbol list
* | 87344b2ab7 ANDROID: sched: Add trace_android_rvh_set_user_nice_locked
|/
* bd1e76c09b ANDROID: GKI: update symbol list
* fdaddcab76 ANDROID: GKI: vendor code needs __balance_callbacks access
* e2fbc5cc3a ANDROID: KVM: arm64: pkvm_module_ops documentation
* bf291bdd70 UPSTREAM: usb: typec: tcpm: Fix NULL pointer dereference in tcpm_pd_svdm()
* 52ecdc264d UPSTREAM: USB: core: Fix race by not overwriting udev->descriptor in hub_port_init()
* a7f103722b UPSTREAM: USB: core: Change usb_get_device_descriptor() API
* 28e703ec05 UPSTREAM: USB: core: Unite old scheme and new scheme descriptor reads
* e5f9357102 ANDROID: GKI: Update symbol list for lenovo
* dcf95aa0af FROMGIT: usb:gadget:uvc Do not use worker thread to pump isoc usb requests
* 8078e50f5e FROMGIT: usb: gadget: uvc: Fix use-after-free for inflight usb_requests
* 0041748215 FROMGIT: usb: gadget: uvc: move video disable logic to its own function
* 563283055b FROMGIT: usb: gadget: uvc: Allocate uvc_requests one at a time
* 848fa308a9 FROMGIT: usb: gadget: uvc: prevent use of disabled endpoint
* 24e0a18cb4 ANDROID: abi_gki_aarch64_qcom: Update symbol list
* 8ab43e8a3e ANDROID: arch_topology: Add android_rvh_update_thermal_stats
* 6ac8b4fbb4 ANDROID: fuse-bpf: Add NULL pointer check in fuse_release_in
* 4030b1eeed ANDROID: GKI: update symbol list for lenovo
* 71647086d7 ANDROID: GKI: add a vendor hook in cpufreq_online
* 2cecfa7378 FROMGIT: Input: uinput - allow injecting event times
* e9a7a2060a ANDROID: Update the ABI symbol list
* 984523c368 ANDROID: sched: Add vendor hook for update_load_sum
* f88c9605bd ANDROID: GKI: update mtktv symbol
* b6fd46aaf1 ANDROID: GKI: Add symbol list for Transsion
* 17d202d85b ANDROID: KVM: arm64: mount procfs for pKVM module loading
* be8f9c8bf9 ANDROID: GKI: Update symbol list for Amlogic
* 76fcf197f2 UPSTREAM: ASoC: soc-compress: Fix deadlock in soc_compr_open_fe
* 7f194d670f BACKPORT: ASoC: add snd_soc_card_mutex_lock/unlock()
* 8cdc41d9b8 BACKPORT: ASoC: expand snd_soc_dpcm_mutex_lock/unlock()
* 2721efad0a BACKPORT: ASoC: expand snd_soc_dapm_mutex_lock/unlock()
* b7bcf839e1 ANDROID: KVM: arm64: Fix error path in pkvm_mem_abort()
* 27aaaa9af5 ANDROID: GKI: Update symbol list for Amlogic
* 0681d570ba ANDROID: mm: add vendor hook in isolate_freepages()
* f206a8c31c ANDROID: GKI: Update symbol list for Amlogic
* 88b56693e9 ANDROID: GKI: Update symbol list for rtktv
* dbeed23196 ANDROID: fs/passthrough: Fix compatibility with R/O file system
* bde02310d4 ANDROID: Update the ABI symbol list
* 71320d0c1e Revert "ANDROID: KVM: arm64: Don't allocate from handle_host_mem_abort"
* 7b356cb300 UPSTREAM: netfilter: ipset: add the missing IP_SET_HASH_WITH_NET0 macro for ip_set_hash_netportnet.c
* 7d75f8038b UPSTREAM: vringh: don't use vringh_kiov_advance() in vringh_iov_xfer()
* 484868e95c BACKPORT: usb: gadget: uvc: Add missing initialization of ssp config descriptor
* ae71397d90 BACKPORT: usb: gadget: unconditionally allocate hs/ss descriptor in bind operation
* f5beeb23ed UPSTREAM: usb: gadget: f_uvc: change endpoint allocation in uvc_function_bind()
* 65aa3dd94a UPSTREAM: usb: gadget: function: Remove unused declarations
* 81bffd1c1a UPSTREAM: usb: gadget: uvc: clean up comments and styling in video_pump
* 0772c040ae UPSTREAM: ravb: Fix use-after-free issue in ravb_tx_timeout_work()
* 069cc1491e UPSTREAM: ravb: Fix up dma_free_coherent() call in ravb_remove()
* 501954e892 BACKPORT: usb: typec: altmodes/displayport: Signal hpd low when exiting mode
* b39c40b693 ANDROID: Update the ABI symbol list
* 858e65da9c ANDROID: f2fs: Fix the calculation of the number of zones
* 2eb175772d ANDROID: GKI: Update symbol list for lenovo
* 0a25bb8f1c ANDROID: KVM: arm64: Fix KVM_HOST_S2_DEFAULT_MMIO_PTE encoding

Change-Id: I7d833acbf81f15f2b96f130d293febe6a8b3fca2
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Greg Kroah-Hartman
2024-01-26 03:05:39 +00:00
115 changed files with 5366 additions and 3762 deletions


@@ -96,7 +96,6 @@ filegroup(
         "android/abi_gki_aarch64_db845c",
         "android/abi_gki_aarch64_exynos",
         "android/abi_gki_aarch64_fips140",
-        "android/abi_gki_aarch64_kunit",
         "android/abi_gki_aarch64_lenovo",
         "android/abi_gki_aarch64_mtkott",
         "android/abi_gki_aarch64_mtktv",
@@ -105,6 +104,7 @@ filegroup(
         "android/abi_gki_aarch64_qcom",
         "android/abi_gki_aarch64_rtkstb",
         "android/abi_gki_aarch64_rtktv",
+        "android/abi_gki_aarch64_transsion",
         "android/abi_gki_aarch64_tuxera",
         "android/abi_gki_aarch64_virtual_device",
         "android/abi_gki_aarch64_xiaomi",
@@ -615,70 +615,6 @@ kernel_build(
     visibility = ["//visibility:private"],
 )
 
-# KUnit test targets
-# Modules defined by tools/testing/kunit/configs/android/kunit_defconfig
-_KUNIT_COMMON_MODULES = [
-    # keep sorted
-    "drivers/rtc/lib_test.ko",
-    "fs/ext4/ext4-inode-test.ko",
-    "fs/fat/fat_test.ko",
-    "kernel/time/time_test.ko",
-    "lib/kunit/kunit-example-test.ko",
-    "lib/kunit/kunit-test.ko",
-    "mm/kfence/kfence_test.ko",
-    "sound/soc/soc-topology-test.ko",
-]
-
-kernel_build(
-    name = "kunit_aarch64",
-    outs = [],
-    arch = "arm64",
-    base_kernel = ":kernel_aarch64",
-    build_config = "build.config.kunit.aarch64",
-    defconfig_fragments = [
-        "tools/testing/kunit/configs/android/kunit_defconfig",
-    ],
-    kmi_symbol_list = "android/abi_gki_aarch64_kunit",
-    make_goals = ["modules"],
-    module_outs = _KUNIT_COMMON_MODULES,
-)
-
-copy_to_dist_dir(
-    name = "kunit_aarch64_dist",
-    data = [":kunit_aarch64"],
-    dist_dir = "out/kunit_aarch64/dist",
-    flat = True,
-    log = "info",
-)
-
-kernel_abi(
-    name = "kunit_aarch64_abi",
-    kernel_build = ":kunit_aarch64",
-    kmi_symbol_list_add_only = True,
-)
-
-kernel_build(
-    name = "kunit_x86_64",
-    outs = [],
-    arch = "x86_64",
-    base_kernel = ":kernel_x86_64",
-    build_config = "build.config.kunit.x86_64",
-    defconfig_fragments = [
-        "tools/testing/kunit/configs/android/kunit_defconfig",
-    ],
-    make_goals = ["modules"],
-    module_outs = _KUNIT_COMMON_MODULES,
-)
-
-copy_to_dist_dir(
-    name = "kunit_x86_64_dist",
-    data = [":kunit_x86_64"],
-    dist_dir = "out/kunit_x86_64/dist",
-    flat = True,
-    log = "info",
-)
-
 # DDK Headers
 # All headers. These are the public targets for DDK modules to use.
 alias(


@@ -261,9 +261,9 @@ DIRECT_KEY policies
 The Adiantum encryption mode (see `Encryption modes and usage`_) is
 suitable for both contents and filenames encryption, and it accepts
-long IVs --- long enough to hold both an 8-byte logical block number
-and a 16-byte per-file nonce.  Also, the overhead of each Adiantum key
-is greater than that of an AES-256-XTS key.
+long IVs --- long enough to hold both an 8-byte data unit index and a
+16-byte per-file nonce.  Also, the overhead of each Adiantum key is
+greater than that of an AES-256-XTS key.
 
 Therefore, to improve performance and save memory, for Adiantum a
 "direct key" configuration is supported.  When the user has enabled
@@ -300,8 +300,8 @@ IV_INO_LBLK_32 policies
 IV_INO_LBLK_32 policies work like IV_INO_LBLK_64, except that for
 IV_INO_LBLK_32, the inode number is hashed with SipHash-2-4 (where the
-SipHash key is derived from the master key) and added to the file
-logical block number mod 2^32 to produce a 32-bit IV.
+SipHash key is derived from the master key) and added to the file data
+unit index mod 2^32 to produce a 32-bit IV.
 
 This format is optimized for use with inline encryption hardware
 compliant with the eMMC v5.2 standard, which supports only 32 IV bits
@@ -384,31 +384,62 @@ with ciphertext expansion.
 Contents encryption
 -------------------
 
-For file contents, each filesystem block is encrypted independently.
-Starting from Linux kernel 5.5, encryption of filesystems with block
-size less than system's page size is supported.
+For contents encryption, each file's contents is divided into "data
+units".  Each data unit is encrypted independently.  The IV for each
+data unit incorporates the zero-based index of the data unit within
+the file.  This ensures that each data unit within a file is encrypted
+differently, which is essential to prevent leaking information.
 
-Each block's IV is set to the logical block number within the file as
-a little endian number, except that:
+Note: the encryption depending on the offset into the file means that
+operations like "collapse range" and "insert range" that rearrange the
+extent mapping of files are not supported on encrypted files.
 
-- With CBC mode encryption, ESSIV is also used.  Specifically, each IV
-  is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
-  of the file's data encryption key.
+There are two cases for the sizes of the data units:
 
-- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
-  Currently this is only allowed with the Adiantum encryption mode.
+* Fixed-size data units.  This is how all filesystems other than UBIFS
+  work.  A file's data units are all the same size; the last data unit
+  is zero-padded if needed.  By default, the data unit size is equal
+  to the filesystem block size.  On some filesystems, users can select
+  a sub-block data unit size via the ``log2_data_unit_size`` field of
+  the encryption policy; see `FS_IOC_SET_ENCRYPTION_POLICY`_.
 
-- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
-  to 32 bits and is placed in bits 0-31 of the IV.  The inode number
-  (which is also limited to 32 bits) is placed in bits 32-63.
+* Variable-size data units.  This is what UBIFS does.  Each "UBIFS
+  data node" is treated as a crypto data unit.  Each contains variable
+  length, possibly compressed data, zero-padded to the next 16-byte
+  boundary.  Users cannot select a sub-block data unit size on UBIFS.
 
-- With `IV_INO_LBLK_32 policies`_, the logical block number is limited
-  to 32 bits and is placed in bits 0-31 of the IV.  The inode number
-  is then hashed and added mod 2^32.
+In the case of compression + encryption, the compressed data is
+encrypted.  UBIFS compression works as described above.  f2fs
+compression works a bit differently; it compresses a number of
+filesystem blocks into a smaller number of filesystem blocks.
+Therefore a f2fs-compressed file still uses fixed-size data units, and
+it is encrypted in a similar way to a file containing holes.
 
-Note that because file logical block numbers are included in the IVs,
-filesystems must enforce that blocks are never shifted around within
-encrypted files, e.g. via "collapse range" or "insert range".
+As mentioned in `Key hierarchy`_, the default encryption setting uses
+per-file keys.  In this case, the IV for each data unit is simply the
+index of the data unit in the file.  However, users can select an
+encryption setting that does not use per-file keys.  For these, some
+kind of file identifier is incorporated into the IVs as follows:
+
+- With `DIRECT_KEY policies`_, the data unit index is placed in bits
+  0-63 of the IV, and the file's nonce is placed in bits 64-191.
+
+- With `IV_INO_LBLK_64 policies`_, the data unit index is placed in
+  bits 0-31 of the IV, and the file's inode number is placed in bits
+  32-63.  This setting is only allowed when data unit indices and
+  inode numbers fit in 32 bits.
+
+- With `IV_INO_LBLK_32 policies`_, the file's inode number is hashed
+  and added to the data unit index.  The resulting value is truncated
+  to 32 bits and placed in bits 0-31 of the IV.  This setting is only
+  allowed when data unit indices and inode numbers fit in 32 bits.
+
+The byte order of the IV is always little endian.
+
+If the user selects FSCRYPT_MODE_AES_128_CBC for the contents mode, an
+ESSIV layer is automatically included.  In this case, before the IV is
+passed to AES-128-CBC, it is encrypted with AES-256 where the AES-256
+key is the SHA-256 hash of the file's contents encryption key.
 
 Filenames encryption
 --------------------
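[Editor's note] The IV layouts in the added text above are the interoperability-critical part, e.g. for userspace tools that decrypt images offline. Below is a hedged, standalone sketch of the bit placement they describe; the kernel's real implementation is fscrypt_generate_iv() in fs/crypto/crypto.c, and the helper names and 32-byte buffer here are illustrative only:

```c
#include <stdint.h>
#include <string.h>

#define IV_SIZE         32	/* matches fscrypt's 32-byte IV buffer */
#define FILE_NONCE_SIZE 16

/* Default (per-file key) setting: the IV is just the data unit index,
 * little endian, zero-extended to the full IV buffer. */
void iv_default(uint8_t iv[IV_SIZE], uint64_t du_index)
{
	memset(iv, 0, IV_SIZE);
	for (int i = 0; i < 8; i++)
		iv[i] = du_index >> (8 * i);
}

/* DIRECT_KEY: index in bits 0-63, 16-byte file nonce in bits 64-191. */
void iv_direct_key(uint8_t iv[IV_SIZE], uint64_t du_index,
		   const uint8_t nonce[FILE_NONCE_SIZE])
{
	iv_default(iv, du_index);
	memcpy(&iv[8], nonce, FILE_NONCE_SIZE);
}

/* IV_INO_LBLK_64: 32-bit index in bits 0-31, inode number in 32-63. */
void iv_ino_lblk_64(uint8_t iv[IV_SIZE], uint32_t du_index, uint32_t ino)
{
	iv_default(iv, (uint64_t)ino << 32 | du_index);
}

/* IV_INO_LBLK_32: 32-bit IV = (hash(ino) + index) mod 2^32, where
 * hashed_ino is SipHash-2-4 of the inode number under a key derived
 * from the master key. */
void iv_ino_lblk_32(uint8_t iv[IV_SIZE], uint32_t du_index,
		    uint32_t hashed_ino)
{
	iv_default(iv, (uint32_t)(hashed_ino + du_index));
}
```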
@@ -477,7 +508,8 @@ follows::
         __u8 contents_encryption_mode;
         __u8 filenames_encryption_mode;
         __u8 flags;
-        __u8 __reserved[4];
+        __u8 log2_data_unit_size;
+        __u8 __reserved[3];
         __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
     };
@@ -512,6 +544,29 @@ This structure must be initialized as follows:
   The DIRECT_KEY, IV_INO_LBLK_64, and IV_INO_LBLK_32 flags are
   mutually exclusive.
 
+- ``log2_data_unit_size`` is the log2 of the data unit size in bytes,
+  or 0 to select the default data unit size.  The data unit size is
+  the granularity of file contents encryption.  For example, setting
+  ``log2_data_unit_size`` to 12 causes file contents be passed to the
+  underlying encryption algorithm (such as AES-256-XTS) in 4096-byte
+  data units, each with its own IV.
+
+  Not all filesystems support setting ``log2_data_unit_size``.  ext4
+  and f2fs support it since Linux v6.7.  On filesystems that support
+  it, the supported nonzero values are 9 through the log2 of the
+  filesystem block size, inclusively.  The default value of 0 selects
+  the filesystem block size.
+
+  The main use case for ``log2_data_unit_size`` is for selecting a
+  data unit size smaller than the filesystem block size for
+  compatibility with inline encryption hardware that only supports
+  smaller data unit sizes.  ``/sys/block/$disk/queue/crypto/`` may be
+  useful for checking which data unit sizes are supported by a
+  particular system's inline encryption hardware.
+
+  Leave this field zeroed unless you are certain you need it.  Using
+  an unnecessarily small data unit size reduces performance.
+
 - For v2 encryption policies, ``__reserved`` must be zeroed.
 
 - For v1 encryption policies, ``master_key_descriptor`` specifies how
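[Editor's note] As a concrete usage sketch of the new field: the snippet below applies a v2 policy with 2^9 = 512-byte data units to an empty directory. It assumes <linux/fscrypt.h> from a v6.7+ kernel and a master key already added via FS_IOC_ADD_ENCRYPTION_KEY; error handling is abbreviated:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

/* Apply a v2 policy with 512-byte data units to an empty directory,
 * e.g. for inline encryption hardware limited to small data units. */
int set_small_du_policy(const char *dir, const __u8 *key_id)
{
	struct fscrypt_policy_v2 pol = { 0 };
	int ret, fd = open(dir, O_RDONLY | O_DIRECTORY);

	if (fd < 0)
		return -1;
	pol.version = FSCRYPT_POLICY_V2;
	pol.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
	pol.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
	pol.flags = FSCRYPT_POLICY_FLAGS_PAD_16;
	pol.log2_data_unit_size = 9;	/* sub-block data units */
	memcpy(pol.master_key_identifier, key_id,
	       FSCRYPT_KEY_IDENTIFIER_SIZE);
	ret = ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &pol);
	close(fd);
	return ret;
}
```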

File diff suppressed because it is too large.


@@ -22,6 +22,8 @@
__arch_clear_user
__arch_copy_from_user
__arch_copy_to_user
argv_free
argv_split
arm64_const_caps_ready
arm64_use_ng_mappings
__arm_smccc_hvc
@@ -269,9 +271,11 @@
debugfs_create_bool
debugfs_create_dir
debugfs_create_file
debugfs_create_regset32
debugfs_create_u32
debugfs_create_u64
debugfs_lookup
debugfs_lookup_and_remove
debugfs_remove
debugfs_rename
dec_zone_page_state
@@ -308,6 +312,7 @@
device_for_each_child
device_get_child_node_count
device_get_match_data
device_get_next_child_node
device_get_phy_mode
device_initialize
device_init_wakeup
@@ -316,6 +321,7 @@
device_property_present
device_property_read_string
device_property_read_u32_array
device_property_read_u8_array
device_register
device_remove_file
device_rename
@@ -338,6 +344,7 @@
devm_extcon_dev_allocate
devm_extcon_dev_register
devm_free_irq
devm_fwnode_gpiod_get_index
devm_gpiod_get
devm_gpiod_get_index
devm_gpiod_get_optional
@@ -357,6 +364,7 @@
devm_kmemdup
devm_kstrdup
devm_kvasprintf
devm_led_classdev_register_ext
devm_mbox_controller_register
devm_nvmem_cell_get
devm_of_clk_add_hw_provider
@@ -368,6 +376,7 @@
devm_pinctrl_get
devm_pinctrl_put
devm_pinctrl_register
devm_platform_get_and_ioremap_resource
devm_platform_ioremap_resource
devm_platform_ioremap_resource_byname
devm_pwm_get
@@ -382,6 +391,7 @@
__devm_release_region
__devm_request_region
devm_request_threaded_irq
devm_reset_control_array_get
__devm_reset_control_get
devm_reset_controller_register
devm_rtc_allocate_device
@@ -480,6 +490,10 @@
dma_map_sg_attrs
dma_map_sgtable
dmam_free_coherent
dma_pool_alloc
dma_pool_create
dma_pool_destroy
dma_pool_free
dma_resv_add_excl_fence
dma_set_coherent_mask
dma_set_mask
@@ -510,7 +524,9 @@
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_new_bridge_state
drm_atomic_get_new_connector_for_encoder
drm_atomic_get_new_private_obj_state
drm_atomic_get_plane_state
drm_atomic_get_private_obj_state
drm_atomic_helper_async_commit
@@ -518,6 +534,10 @@
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit_cleanup_done
drm_atomic_helper_commit_duplicated_state
drm_atomic_helper_commit_hw_done
drm_atomic_helper_commit_modeset_disables
drm_atomic_helper_commit_modeset_enables
drm_atomic_helper_commit_planes
drm_atomic_helper_commit_tail
drm_atomic_helper_commit_tail_rpm
__drm_atomic_helper_connector_destroy_state
@@ -530,6 +550,7 @@
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_disable_plane
drm_atomic_helper_duplicate_state
drm_atomic_helper_fake_vblank
drm_atomic_helper_page_flip
__drm_atomic_helper_plane_destroy_state
__drm_atomic_helper_plane_duplicate_state
@@ -545,6 +566,7 @@
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_fences
drm_atomic_helper_wait_for_vblanks
drm_atomic_private_obj_init
drm_atomic_set_crtc_for_connector
drm_atomic_set_crtc_for_plane
@@ -591,6 +613,7 @@
drm_encoder_init
__drm_err
drm_format_info
drm_format_info_min_pitch
drm_framebuffer_cleanup
drm_framebuffer_init
drm_framebuffer_lookup
@@ -700,6 +723,12 @@
eth_validate_addr
event_triggers_call
extcon_dev_register
extcon_find_edev_by_node
extcon_get_edev_by_phandle
extcon_get_extcon_dev
extcon_get_state
extcon_register_notifier
extcon_unregister_notifier
extcon_set_state
extcon_set_state_sync
fasync_helper
@@ -786,6 +815,11 @@
genl_register_family
genl_unregister_family
genphy_aneg_done
__genphy_config_aneg
genphy_handle_interrupt_no_ack
genphy_read_abilities
genphy_read_mmd_unsupported
genphy_write_mmd_unsupported
genphy_read_status
genphy_restart_aneg
genphy_resume
@@ -832,6 +866,7 @@
gpiochip_generic_free
gpiochip_generic_request
gpiochip_get_data
gpiod_cansleep
gpiod_count
gpiod_direction_input
gpiod_direction_output
@@ -957,6 +992,7 @@
iommu_device_sysfs_add
iommu_device_sysfs_remove
iommu_device_unregister
iommu_get_domain_for_dev
__ioremap
ioremap_cache
iounmap
@@ -992,6 +1028,7 @@
irq_stat
irq_to_desc
is_bad_inode
is_dma_buf_file
is_vmalloc_addr
iter_file_splice_write
iwe_stream_add_event
@@ -1060,6 +1097,7 @@
kstrtoll
kstrtos8
kstrtou16
kstrtou16_from_user
kstrtou8
kstrtou8_from_user
kstrtouint
@@ -1067,6 +1105,7 @@
kstrtoull
kthread_bind
kthread_create_on_node
kthread_flush_worker
__kthread_init_worker
kthread_queue_work
kthread_should_stop
@@ -1087,6 +1126,7 @@
kvmalloc_node
led_classdev_register_ext
led_classdev_unregister
led_init_default_state_get
led_trigger_blink_oneshot
led_trigger_event
led_trigger_register
@@ -1122,8 +1162,10 @@
mdiobus_free
mdiobus_get_phy
mdiobus_read
__mdiobus_read
mdiobus_unregister
mdiobus_write
__mdiobus_write
mdio_device_create
mdio_device_free
media_create_pad_link
@@ -1135,6 +1177,7 @@
memcpy
__memcpy_fromio
__memcpy_toio
mem_dump_obj
memdup_user
memmove
memory_cgrp_subsys_enabled_key
@@ -1292,7 +1335,9 @@
of_get_phy_mode
of_get_property
of_get_regulator_init_data
of_graph_get_remote_node
of_graph_get_remote_port_parent
of_graph_is_present
of_iomap
of_irq_find_parent
of_irq_get
@@ -1327,6 +1372,8 @@
__of_reset_control_get
of_thermal_get_ntrips
of_thermal_is_trip_valid
of_usb_get_phy_mode
of_usb_host_tpl_support
oops_in_progress
overflowgid
overflowuid
@@ -1368,14 +1415,18 @@
param_set_copystring
param_set_hexint
param_set_int
pci_alloc_irq_vectors_affinity
pci_bus_type
pci_choose_state
pci_disable_device
pci_enable_device
pci_find_next_bus
pci_free_irq_vectors
pci_generic_config_read
pci_generic_config_write
pci_get_device
pci_host_probe
pci_irq_vector
pci_lock_rescan_remove
pci_msi_create_irq_domain
pci_msi_enabled
@@ -1396,6 +1447,7 @@
perf_trace_run_bpf_submit
pfn_is_map_memory
phy_attached_info
phy_basic_t1_features
phy_drivers_register
phy_drivers_unregister
phy_error
@@ -1426,10 +1478,22 @@
phylink_start
phylink_stop
phylink_suspend
phy_modify
__phy_modify
phy_modify_changed
phy_modify_paged
phy_modify_paged_changed
phy_pm_runtime_get_sync
phy_pm_runtime_put_sync
phy_power_off
phy_power_on
phy_print_status
phy_read_paged
phy_restore_page
phy_select_page
phy_set_mode_ext
phy_trigger_machine
phy_write_paged
pid_task
pinconf_generic_dt_free_map
pinconf_generic_dt_node_to_map
@@ -1467,6 +1531,7 @@
__pm_relax
pm_relax
pm_runtime_allow
pm_runtime_barrier
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
@@ -1485,6 +1550,9 @@
pm_wakeup_ws_event
pm_wq
posix_acl_chmod
power_supply_get_by_name
power_supply_put
power_supply_set_property
prandom_bytes
prandom_u32
preempt_schedule
@@ -1523,6 +1591,10 @@
pwm_set_chip_data
queue_delayed_work_on
queue_work_on
radix_tree_delete
radix_tree_insert
radix_tree_lookup
radix_tree_maybe_preload
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
@@ -1666,10 +1738,12 @@
sched_setscheduler
sched_setscheduler_nocheck
sched_show_task
sched_uclamp_used
schedule
schedule_timeout
schedule_timeout_interruptible
schedule_timeout_killable
schedule_timeout_uninterruptible
scnprintf
sdio_align_size
sdio_claim_host
@@ -1700,6 +1774,8 @@
sdio_writesb
sdio_writew
send_sig
seq_list_next
seq_list_start
seq_lseek
seq_open
seq_printf
@@ -1734,6 +1810,7 @@
sg_next
__sg_page_iter_next
__sg_page_iter_start
sg_pcopy_from_buffer
sg_pcopy_to_buffer
show_class_attr_string
show_regs
@@ -1749,6 +1826,7 @@
single_open
single_open_size
single_release
si_swapinfo
skb_add_rx_frag
skb_checksum_help
skb_clone
@@ -1857,6 +1935,7 @@
split_page
sprintf
sprint_symbol
sprint_symbol_no_offset
sscanf
__stack_chk_fail
stack_trace_print
@@ -1939,6 +2018,7 @@
tasklet_unlock_wait
tasklist_lock
task_may_not_preempt
task_sched_runtime
thermal_cooling_device_unregister
thermal_of_cooling_device_register
thermal_zone_device_unregister
@@ -1970,6 +2050,7 @@
__traceiter_android_rvh_select_task_rq_rt
__traceiter_android_rvh_tick_entry
__traceiter_android_vh_alloc_pages_entry
__traceiter_android_vh_calc_alloc_flags
__traceiter_android_vh_cma_alloc_bypass
__traceiter_android_vh_cma_drain_all_pages_bypass
__traceiter_android_vh_cpu_idle_enter
@@ -1979,6 +2060,7 @@
__traceiter_android_vh_ftrace_format_check
__traceiter_android_vh_iommu_iovad_free_iova
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_isolate_freepages
__traceiter_android_vh_kvmalloc_node_use_vmalloc
__traceiter_android_vh_mem_cgroup_alloc
__traceiter_android_vh_printk_caller
@@ -2018,6 +2100,7 @@
__tracepoint_android_rvh_select_task_rq_rt
__tracepoint_android_rvh_tick_entry
__tracepoint_android_vh_alloc_pages_entry
__tracepoint_android_vh_calc_alloc_flags
__tracepoint_android_vh_cma_alloc_bypass
__tracepoint_android_vh_cma_drain_all_pages_bypass
__tracepoint_android_vh_cpu_idle_enter
@@ -2027,6 +2110,7 @@
__tracepoint_android_vh_ftrace_format_check
__tracepoint_android_vh_iommu_iovad_free_iova
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_isolate_freepages
__tracepoint_android_vh_kvmalloc_node_use_vmalloc
__tracepoint_android_vh_mem_cgroup_alloc
__tracepoint_android_vh_printk_caller
@@ -2080,6 +2164,7 @@
uart_update_timeout
uart_write_wakeup
__ubsan_handle_cfi_check_fail_abort
uclamp_eff_value
__udelay
__uio_register_device
uio_unregister_device
@@ -2115,6 +2200,7 @@
update_rq_clock
up_read
up_write
usb_add_gadget
usb_add_gadget_udc
usb_add_hcd
usb_add_phy_dev
@@ -2124,32 +2210,72 @@
usb_autopm_put_interface
usb_control_msg
usb_create_hcd
__usb_create_hcd
usb_debug_root
usb_decode_ctrl
usb_del_gadget
usb_del_gadget_udc
usb_deregister
usb_deregister_dev
usb_disabled
usb_disable_autosuspend
usb_driver_claim_interface
usb_driver_release_interface
usb_ep_set_maxpacket_limit
usb_ep_type_string
usb_find_interface
usb_free_urb
usb_gadget_giveback_request
usb_gadget_map_request_by_dev
usb_gadget_probe_driver
usb_gadget_set_state
usb_gadget_udc_reset
usb_gadget_unmap_request_by_dev
usb_gadget_unregister_driver
usb_get_dev
usb_get_dr_mode
usb_get_from_anchor
usb_get_maximum_speed
usb_get_maximum_ssp_rate
usb_get_role_switch_default_mode
usb_hcd_check_unlink_urb
usb_hc_died
usb_hcd_end_port_resume
usb_hcd_giveback_urb
usb_hcd_irq
usb_hcd_is_primary_hcd
usb_hcd_link_urb_to_ep
usb_hcd_map_urb_for_dma
usb_hcd_platform_shutdown
usb_hcd_poll_rh_status
usb_hcd_resume_root_hub
usb_hcd_start_port_resume
usb_hcd_unlink_urb_from_ep
usb_hcd_unmap_urb_for_dma
usb_hub_clear_tt_buffer
usb_ifnum_to_if
usb_initialize_gadget
usb_interrupt_msg
usb_kill_anchored_urbs
usb_kill_urb
usb_lock_device_for_reset
usb_phy_set_charger_current
usb_put_dev
usb_put_hcd
usb_register_dev
usb_register_driver
usb_remove_hcd
usb_reset_device
usb_role_switch_get_drvdata
usb_role_switch_register
usb_role_switch_unregister
usb_root_hub_lost_power
usb_scuttle_anchored_urbs
usb_set_interface
usb_submit_urb
usb_unanchor_urb
usb_unlink_urb
usb_wakeup_notification
__usecs_to_jiffies
usleep_range_state
utf16s_to_utf8s


@@ -1,94 +0,0 @@
[abi_symbol_list]
# commonly used symbols
kfree
kmalloc_caches
kunit_binary_assert_format
kunit_do_assertion
kunit_fail_assert_format
kunit_kmalloc_array
kunit_log_append
kunit_ptr_not_err_assert_format
__kunit_test_suites_exit
__kunit_test_suites_init
kunit_try_catch_throw
kunit_unary_assert_format
memset
module_layout
_printk
__put_task_struct
_raw_spin_lock_irqsave
_raw_spin_unlock_irqrestore
scnprintf
__stack_chk_fail
strcmp
strscpy
__ubsan_handle_cfi_check_fail_abort
# required by fat_test.ko
fat_time_fat2unix
fat_time_unix2fat
# required by kfence_test.ko
for_each_kernel_tracepoint
jiffies
kasan_flag_enabled
__kfence_pool
__kmalloc
kmem_cache_alloc
kmem_cache_alloc_bulk
kmem_cache_create
kmem_cache_destroy
kmem_cache_free
kmem_cache_free_bulk
kmem_cache_shrink
krealloc
ksize
prandom_u32
rcu_barrier
__rcu_read_lock
__rcu_read_unlock
strchr
strnstr
strstr
synchronize_rcu
synchronize_srcu
tracepoint_probe_register
tracepoint_probe_unregister
tracepoint_srcu
# required by kunit-test.ko
arm64_const_caps_ready
__cfi_slowpath_diag
cpu_hwcap_keys
kmem_cache_alloc_trace
kunit_add_named_resource
kunit_add_resource
kunit_alloc_and_get_resource
kunit_binary_ptr_assert_format
kunit_binary_str_assert_format
kunit_cleanup
kunit_destroy_resource
kunit_init_test
kunit_try_catch_run
refcount_warn_saturate
# required by lib_test.ko
rtc_month_days
rtc_time64_to_tm
# required by soc-topology-test.ko
get_device
memcpy
put_device
__root_device_register
root_device_unregister
snd_soc_add_component
snd_soc_component_initialize
snd_soc_register_card
snd_soc_tplg_component_load
snd_soc_tplg_component_remove
snd_soc_unregister_card
snd_soc_unregister_component
# required by time_test.ko
time64_to_tm


@@ -372,6 +372,7 @@
driver_register
driver_unregister
drm_add_modes_noedid
drm_atomic_get_crtc_state
drm_atomic_helper_check
drm_atomic_helper_cleanup_planes
drm_atomic_helper_commit
@@ -394,8 +395,11 @@
drm_atomic_helper_shutdown
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_flip_done
drm_atomic_helper_wait_for_vblanks
drm_compat_ioctl
drm_connector_atomic_hdr_metadata_equal
drm_connector_attach_encoder
drm_connector_attach_hdr_output_metadata_property
drm_connector_cleanup
drm_connector_init
drm_connector_unregister
@@ -429,9 +433,11 @@
drm_gem_prime_mmap
drm_gem_vm_close
drm_gem_vm_open
drm_hdmi_infoframe_set_hdr_metadata
drm_helper_hpd_irq_event
drm_helper_probe_single_connector_modes
drm_ioctl
drm_kms_helper_hotplug_event
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drmm_mode_config_init
@@ -443,6 +449,7 @@
drm_mode_set_name
drm_object_attach_property
drm_of_component_match_add
drm_of_component_probe
drm_of_find_possible_crtcs
drm_open
drm_panel_add
@@ -453,6 +460,7 @@
drm_plane_cleanup
drm_plane_create_alpha_property
drm_plane_create_blend_mode_property
drm_plane_create_color_properties
drm_plane_create_rotation_property
drm_plane_create_zpos_immutable_property
drm_poll
@@ -545,6 +553,8 @@
handle_level_irq
handle_simple_irq
hashlen_string
hdmi_drm_infoframe_pack_only
hdmi_infoframe_pack
hrtimer_active
hrtimer_cancel
hrtimer_forward
@@ -718,6 +728,7 @@
mipi_dsi_packet_format_is_long
misc_deregister
misc_register
mmc_app_cmd
__mmdrop
mod_delayed_work_on
mod_timer
@@ -1007,6 +1018,7 @@
regulator_disable
regulator_enable
regulator_get_bypass_regmap
regulator_get_linear_step
regulator_get_optional
regulator_is_enabled
regulator_put
@@ -1042,6 +1054,7 @@
scnprintf
sdhci_calc_clk
sdhci_enable_clk
sdhci_execute_tuning
sdhci_pltfm_clk_get_max_clock
sdhci_pltfm_pmops
sdhci_pltfm_suspend
@@ -1096,6 +1109,7 @@
skb_queue_purge
skb_queue_tail
skb_trim
smp_call_function_any
smp_call_function_single
snd_card_ref
snd_card_register
@@ -1266,6 +1280,7 @@
__traceiter_android_rvh_flush_task
__traceiter_android_rvh_migrate_queued_task
__traceiter_android_rvh_new_task_stats
__traceiter_android_rvh_sched_balance_rt
__traceiter_android_rvh_sched_cpu_dying
__traceiter_android_rvh_sched_cpu_starting
__traceiter_android_rvh_sched_exec
@@ -1279,7 +1294,10 @@
__traceiter_android_rvh_update_cpus_allowed
__traceiter_android_rvh_wake_up_new_task
__traceiter_android_vh_binder_wakeup_ilocked
__traceiter_android_vh_cpufreq_online
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_update_topology_flags_workfn
__traceiter_android_vh_use_amu_fie
__traceiter_binder_transaction_received
__traceiter_cpu_frequency_limits
__tracepoint_android_rvh_account_irq
@@ -1292,6 +1310,7 @@
__tracepoint_android_rvh_flush_task
__tracepoint_android_rvh_migrate_queued_task
__tracepoint_android_rvh_new_task_stats
__tracepoint_android_rvh_sched_balance_rt
__tracepoint_android_rvh_sched_cpu_dying
__tracepoint_android_rvh_sched_cpu_starting
__tracepoint_android_rvh_sched_exec
@@ -1305,7 +1324,10 @@
__tracepoint_android_rvh_update_cpus_allowed
__tracepoint_android_rvh_wake_up_new_task
__tracepoint_android_vh_binder_wakeup_ilocked
__tracepoint_android_vh_cpufreq_online
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_update_topology_flags_workfn
__tracepoint_android_vh_use_amu_fie
__tracepoint_binder_transaction_received
__tracepoint_cpu_frequency_limits
try_module_get


@@ -44,9 +44,11 @@
bio_start_io_acct
__bitmap_clear
__bitmap_complement
bitmap_find_free_region
bitmap_find_next_zero_area_off
bitmap_from_arr32
bitmap_print_to_pagebuf
bitmap_release_region
__bitmap_set
bitmap_to_arr32
__bitmap_weight
@@ -187,6 +189,7 @@
contig_page_data
_copy_from_iter
copy_from_kernel_nofault
copy_page
_copy_to_iter
cpu_bit_bitmap
cpufreq_cpu_get_raw
@@ -395,8 +398,11 @@
__devm_mdiobus_register
devm_of_phy_get_by_index
__devm_of_phy_provider_register
devm_pci_alloc_host_bridge
devm_phy_create
devm_phy_optional_get
devm_pinctrl_get
devm_pinctrl_put
devm_pwm_get
__devm_regmap_init_i2c
devm_regulator_bulk_get
@@ -405,6 +411,7 @@
devm_request_any_context_irq
__devm_request_region
devm_request_threaded_irq
__devm_reset_control_get
devm_rtc_allocate_device
__devm_rtc_register_device
devm_snd_soc_register_card
@@ -469,6 +476,7 @@
dma_buf_unmap_attachment
dma_buf_vmap
dma_buf_vunmap
dma_contiguous_default_area
dma_fence_add_callback
dma_fence_array_create
dma_fence_array_ops
@@ -699,6 +707,7 @@
generic_file_open
generic_file_read_iter
generic_file_splice_read
generic_handle_domain_irq
generic_handle_irq
generic_mii_ioctl
generic_read_dir
@@ -761,6 +770,7 @@
gpio_to_desc
gre_add_protocol
gre_del_protocol
handle_edge_irq
handle_fasteoi_irq
handle_simple_irq
handle_sysrq
@@ -895,6 +905,7 @@
__ipv6_addr_type
ipv6_dev_find
ipv6_stub
irq_chip_ack_parent
irq_chip_eoi_parent
irqchip_fwnode_ops
irq_chip_mask_parent
@@ -902,15 +913,23 @@
irq_chip_set_type_parent
irq_chip_set_vcpu_affinity_parent
irq_chip_unmask_parent
irq_dispose_mapping
__irq_domain_add
irq_domain_alloc_irqs_parent
irq_domain_create_hierarchy
irq_domain_free_irqs_common
irq_domain_get_irq_data
irq_domain_remove
irq_domain_set_hwirq_and_chip
irq_domain_set_info
irq_find_matching_fwspec
irq_get_irq_data
irq_of_parse_and_map
__irq_resolve_mapping
irq_set_affinity_hint
irq_set_chained_handler_and_data
irq_set_chip_and_handler_name
irq_set_chip_data
__irq_set_handler
irq_set_irq_wake
irq_to_desc
@@ -1067,6 +1086,7 @@
__memcpy_toio
memdup_user
memmove
memory_read_from_buffer
memparse
memremap
memscan
@@ -1269,6 +1289,17 @@
param_ops_ulong
param_ops_ushort
path_put
pci_generic_config_read32
pci_generic_config_write32
pci_host_probe
pci_lock_rescan_remove
pci_msi_create_irq_domain
pci_msi_mask_irq
pci_msi_unmask_irq
pci_pio_to_address
pci_remove_root_bus
pci_stop_root_bus
pci_unlock_rescan_remove
PDE_DATA
__percpu_down_read
percpu_down_write
@@ -1345,6 +1376,7 @@
__platform_register_drivers
platform_unregister_drivers
__pm_relax
pm_relax
__pm_runtime_disable
pm_runtime_enable
pm_runtime_force_resume
@@ -1356,6 +1388,7 @@
__pm_runtime_suspend
__pm_runtime_use_autosuspend
__pm_stay_awake
pm_stay_awake
pm_wakeup_ws_event
pm_wq
power_supply_changed
@@ -1507,6 +1540,8 @@
__request_module
__request_region
request_threaded_irq
reset_control_assert
reset_control_deassert
rfkill_alloc
rfkill_blocked
rfkill_destroy
@@ -1543,10 +1578,13 @@
rproc_add_carveout
rproc_alloc
rproc_boot
rproc_da_to_va
rproc_del
rproc_free
rproc_get_by_child
rproc_mem_entry_init
rproc_of_resm_mem_entry_init
rproc_report_crash
rproc_shutdown
rproc_vq_interrupt
rtc_add_group
@@ -1800,6 +1838,7 @@
__spi_alloc_controller
spi_finalize_current_message
spi_new_device
spi_register_controller
__spi_register_driver
spi_setup
spi_sync


@@ -21,6 +21,7 @@
__alloc_percpu_gfp
__alloc_skb
alloc_workqueue
amba_bustype
amba_driver_register
amba_driver_unregister
android_debug_symbol
@@ -178,7 +179,12 @@
component_match_add_release
component_unbind_all
config_ep_by_speed
configfs_register_group
configfs_register_subsystem
configfs_unregister_subsystem
config_group_init
config_group_init_type_name
config_item_set_name
console_set_on_cmdline
console_suspend_enabled
console_trylock
@@ -239,7 +245,9 @@
cpu_number
__cpu_online_mask
cpu_pm_register_notifier
cpu_pm_unregister_notifier
__cpu_possible_mask
__cpu_present_mask
cpupri_find_fitness
cpu_scale
cpus_read_lock
@@ -346,6 +354,7 @@
device_init_wakeup
device_link_add
device_link_del
device_match_fwnode
device_property_present
device_property_read_string
device_property_read_u32_array
@@ -466,6 +475,7 @@
disk_end_io_acct
disk_start_io_acct
dma_alloc_attrs
dma_alloc_pages
dma_async_device_register
dma_async_device_unregister
dma_async_tx_descriptor_init
@@ -502,6 +512,7 @@
dma_fence_signal_locked
dma_fence_wait_timeout
dma_free_attrs
dma_free_pages
dma_get_slave_caps
dma_get_slave_channel
dma_heap_add
@@ -614,6 +625,7 @@
drm_bridge_remove
drm_compat_ioctl
drm_connector_atomic_hdr_metadata_equal
drm_connector_attach_content_protection_property
drm_connector_attach_encoder
drm_connector_attach_hdr_output_metadata_property
drm_connector_attach_max_bpc_property
@@ -682,6 +694,7 @@
drm_gem_vm_close
drm_gem_vm_open
drm_get_format_info
drm_hdcp_update_content_protection
drm_hdmi_infoframe_set_hdr_metadata
drm_helper_mode_fill_fb_struct
drm_helper_probe_single_connector_modes
@@ -837,6 +850,9 @@
full_name_hash
fwnode_get_name
fwnode_gpiod_get_index
fwnode_handle_get
fwnode_handle_put
fwnode_property_present
fwnode_property_read_string
fwnode_property_read_u32_array
gcd
@@ -932,6 +948,7 @@
handle_nested_irq
handle_simple_irq
handle_sysrq
hashlen_string
have_governor_per_policy
hdmi_drm_infoframe_pack_only
hex2bin
@@ -1126,6 +1143,9 @@
kernel_param_lock
kernel_param_unlock
kernel_restart
kernfs_find_and_get_ns
kernfs_notify
kernfs_put
kern_mount
kern_unmount
key_create_or_update
@@ -1228,6 +1248,7 @@
log_threaded_irq_wakeup_reason
loops_per_jiffy
mac_pton
max_load_balance_interval
mbox_chan_received_data
mbox_controller_register
mbox_controller_unregister
@@ -1380,8 +1401,14 @@
of_get_named_gpio_flags
of_get_next_available_child
of_get_next_child
of_get_next_parent
of_get_property
of_get_regulator_init_data
of_graph_get_next_endpoint
of_graph_get_port_parent
of_graph_get_remote_endpoint
of_graph_is_present
of_graph_parse_endpoint
of_iomap
of_irq_find_parent
of_irq_get
@@ -1475,6 +1502,10 @@
pci_write_config_word
PDE_DATA
__per_cpu_offset
perf_aux_output_begin
perf_aux_output_end
perf_aux_output_flag
perf_event_addr_filters_sync
perf_event_create_kernel_counter
perf_event_disable
perf_event_enable
@@ -1483,6 +1514,7 @@
perf_event_read_value
perf_event_release_kernel
perf_event_update_userpage
perf_get_aux
perf_pmu_migrate_context
perf_pmu_register
perf_pmu_unregister
@@ -1660,6 +1692,7 @@
register_inet6addr_notifier
register_inetaddr_notifier
register_kernel_break_hook
register_kretprobe
register_netdev
register_netdevice
register_netdevice_notifier
@@ -1961,6 +1994,7 @@
srcu_notifier_chain_unregister
sscanf
__stack_chk_fail
static_key_count
static_key_disable
static_key_enable
static_key_slow_dec
@@ -2009,7 +2043,9 @@
syscon_regmap_lookup_by_phandle
sysctl_sched_features
sysctl_sched_latency
sysctl_sched_min_granularity
sysfs_add_file_to_group
sysfs_add_link_to_group
sysfs_create_file_ns
sysfs_create_files
sysfs_create_group
@@ -2019,10 +2055,12 @@
sysfs_emit_at
__sysfs_match_string
sysfs_notify
sysfs_remove_file_from_group
sysfs_remove_file_ns
sysfs_remove_group
sysfs_remove_groups
sysfs_remove_link
sysfs_remove_link_from_group
sysfs_streq
sysfs_update_group
sysrq_mask
@@ -2129,6 +2167,7 @@
__traceiter_android_rvh_setscheduler
__traceiter_android_rvh_set_task_cpu
__traceiter_android_rvh_set_user_nice
__traceiter_android_rvh_set_user_nice_locked
__traceiter_android_rvh_show_max_freq
__traceiter_android_rvh_typec_tcpci_get_vbus
__traceiter_android_rvh_uclamp_eff_get
@@ -2136,10 +2175,12 @@
__traceiter_android_rvh_ufs_reprogram_all_keys
__traceiter_android_rvh_update_blocked_fair
__traceiter_android_rvh_update_load_avg
__traceiter_android_rvh_update_load_sum
__traceiter_android_rvh_update_misfit_status
__traceiter_android_rvh_update_rt_rq_load_avg
__traceiter_android_rvh_usb_dev_suspend
__traceiter_android_rvh_util_est_update
__traceiter_android_rvh_util_fits_cpu
__traceiter_android_vh_arch_set_freq_scale
__traceiter_android_vh_audio_usb_offload_connect
__traceiter_android_vh_binder_restore_priority
@@ -2161,6 +2202,7 @@
__traceiter_android_vh_pagecache_get_page
__traceiter_android_vh_prio_inheritance
__traceiter_android_vh_prio_restore
__traceiter_android_vh_ptep_clear_flush_young
__traceiter_android_vh_reclaim_pages_plug
__traceiter_android_vh_resume_end
__traceiter_android_vh_rmqueue
@@ -2265,6 +2307,7 @@
__tracepoint_android_rvh_setscheduler
__tracepoint_android_rvh_set_task_cpu
__tracepoint_android_rvh_set_user_nice
__tracepoint_android_rvh_set_user_nice_locked
__tracepoint_android_rvh_show_max_freq
__tracepoint_android_rvh_typec_tcpci_get_vbus
__tracepoint_android_rvh_uclamp_eff_get
@@ -2272,10 +2315,12 @@
__tracepoint_android_rvh_ufs_reprogram_all_keys
__tracepoint_android_rvh_update_blocked_fair
__tracepoint_android_rvh_update_load_avg
__tracepoint_android_rvh_update_load_sum
__tracepoint_android_rvh_update_misfit_status
__tracepoint_android_rvh_update_rt_rq_load_avg
__tracepoint_android_rvh_usb_dev_suspend
__tracepoint_android_rvh_util_est_update
__tracepoint_android_rvh_util_fits_cpu
__tracepoint_android_vh_arch_set_freq_scale
__tracepoint_android_vh_audio_usb_offload_connect
__tracepoint_android_vh_binder_restore_priority
@@ -2297,6 +2342,7 @@
__tracepoint_android_vh_pagecache_get_page
__tracepoint_android_vh_prio_inheritance
__tracepoint_android_vh_prio_restore
__tracepoint_android_vh_ptep_clear_flush_young
__tracepoint_android_vh_reclaim_pages_plug
__tracepoint_android_vh_resume_end
__tracepoint_android_vh_rmqueue
@@ -2415,6 +2461,7 @@
unregister_chrdev_region
unregister_inet6addr_notifier
unregister_inetaddr_notifier
unregister_kretprobe
unregister_netdev
unregister_netdevice_many
unregister_netdevice_notifier
@@ -2467,6 +2514,7 @@
usb_put_hcd
usb_register_notify
usb_remove_hcd
usb_role_string
usb_role_switch_get_drvdata
usb_role_switch_register
usb_role_switch_unregister


@@ -57,6 +57,7 @@
badblocks_set
badblocks_show
badblocks_store
__balance_callbacks
balance_push_callback
bdev_check_media_change
bdevname
@@ -824,6 +825,9 @@
get_user_ifreq
get_zeroed_page
gf128mul_lle
gic_v3_cpu_init
gic_v3_dist_init
gic_v3_dist_wait_for_rwp
gov_attr_set_init
gov_attr_set_put
governor_sysfs_ops
@@ -1311,6 +1315,7 @@
mod_delayed_work_on
mod_node_page_state
mod_timer
mod_timer_pending
__module_get
module_layout
module_put
@@ -2248,6 +2253,7 @@
__traceiter_android_rvh_update_cpu_capacity
__traceiter_android_rvh_update_cpus_allowed
__traceiter_android_rvh_update_misfit_status
__traceiter_android_rvh_update_thermal_stats
__traceiter_android_rvh_wake_up_new_task
__traceiter_android_vh_audio_usb_offload_connect
__traceiter_android_vh_binder_restore_priority
@@ -2268,6 +2274,7 @@
__traceiter_android_vh_ftrace_size_check
__traceiter_android_vh_gic_resume
__traceiter_android_vh_handle_tlb_conf
__traceiter_android_vh_gic_v3_suspend
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_jiffies_update
__traceiter_android_vh_kswapd_per_node
@@ -2356,6 +2363,7 @@
__tracepoint_android_rvh_update_cpu_capacity
__tracepoint_android_rvh_update_cpus_allowed
__tracepoint_android_rvh_update_misfit_status
__tracepoint_android_rvh_update_thermal_stats
__tracepoint_android_rvh_wake_up_new_task
__tracepoint_android_vh_audio_usb_offload_connect
__tracepoint_android_vh_binder_restore_priority
@@ -2376,6 +2384,7 @@
__tracepoint_android_vh_ftrace_size_check
__tracepoint_android_vh_gic_resume
__tracepoint_android_vh_handle_tlb_conf
__tracepoint_android_vh_gic_v3_suspend
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_jiffies_update
__tracepoint_android_vh_kswapd_per_node

File diff suppressed because it is too large.


@@ -498,6 +498,7 @@
hex_asc
hex_dump_to_buffer
hex_to_bin
hid_open_report
high_memory
hrtimer_cancel
hrtimer_init


@@ -0,0 +1,35 @@
[abi_symbol_list]
avenrun
kstat
kernel_cpustat
vm_event_states
# required by delayacct
set_delayacct_enabled
__traceiter_android_vh_delayacct_set_flag
__traceiter_android_vh_delayacct_clear_flag
__traceiter_android_rvh_delayacct_init
__traceiter_android_rvh_delayacct_tsk_init
__traceiter_android_rvh_delayacct_tsk_free
__traceiter_android_vh_delayacct_blkio_start
__traceiter_android_vh_delayacct_blkio_end
__traceiter_android_vh_delayacct_add_tsk
__traceiter_android_vh_delayacct_blkio_ticks
__traceiter_android_vh_delayacct_is_task_waiting_on_io
__traceiter_android_vh_delayacct_freepages_start
__traceiter_android_vh_delayacct_freepages_end
__traceiter_android_vh_delayacct_thrashing_start
__traceiter_android_vh_delayacct_thrashing_end
__tracepoint_android_vh_delayacct_set_flag
__tracepoint_android_vh_delayacct_clear_flag
__tracepoint_android_rvh_delayacct_init
__tracepoint_android_rvh_delayacct_tsk_init
__tracepoint_android_rvh_delayacct_tsk_free
__tracepoint_android_vh_delayacct_blkio_start
__tracepoint_android_vh_delayacct_blkio_end
__tracepoint_android_vh_delayacct_add_tsk
__tracepoint_android_vh_delayacct_blkio_ticks
__tracepoint_android_vh_delayacct_is_task_waiting_on_io
__tracepoint_android_vh_delayacct_freepages_start
__tracepoint_android_vh_delayacct_freepages_end
__tracepoint_android_vh_delayacct_thrashing_start
__tracepoint_android_vh_delayacct_thrashing_end
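[Editor's note] This new symbol list exports the delayacct vendor hooks so a vendor module can re-implement delay accounting when GKI leaves CONFIG_TASK_DELAY_ACCT disabled. A rough illustration of attaching to one of these hooks follows; the hook's argument list is an assumption, so check include/trace/hooks/delayacct.h in this tree for the real prototype:

```c
#include <linux/module.h>
#include <linux/sched.h>
#include <trace/hooks/delayacct.h>

/* Vendor-hook probes receive the registration cookie first; the
 * remaining arguments here are assumed, not taken from this tree. */
static void vendor_blkio_start(void *unused, struct task_struct *tsk)
{
	/* record a per-task block-I/O delay start timestamp */
}

static int __init vendor_delayacct_init(void)
{
	/* DECLARE_HOOK(android_vh_delayacct_blkio_start, ...) generates
	 * this registration helper. */
	return register_trace_android_vh_delayacct_blkio_start(
			vendor_blkio_start, NULL);
}
module_init(vendor_delayacct_init);
MODULE_LICENSE("GPL");
```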


@@ -628,7 +628,6 @@ config ARM64_WORKAROUND_REPEAT_TLBI
 config ARM64_ERRATUM_2441007
 	bool "Cortex-A55: Completion of affected memory accesses might not be guaranteed by completion of a TLBI"
 	default y
-	select ARM64_WORKAROUND_REPEAT_TLBI
 	help
 	  This option adds a workaround for ARM Cortex-A55 erratum #2441007.
 
@@ -713,9 +712,6 @@ config ARM64_ERRATUM_1508412
 
 	  If unsure, say Y.
 
-config ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
-	bool
-
 config ARM64_ERRATUM_2658417
 	bool "Cortex-A510: 2658417: remove BF16 support due to incorrect result"
 	default y
 
@@ -839,7 +835,6 @@ config ARM64_ERRATUM_2224489
 config ARM64_ERRATUM_2441009
 	bool "Cortex-A510: Completion of affected memory accesses might not be guaranteed by completion of a TLBI"
 	default y
-	select ARM64_WORKAROUND_REPEAT_TLBI
 	help
 	  This option adds a workaround for ARM Cortex-A510 erratum #2441009.
@@ -874,118 +869,6 @@ config ARM64_ERRATUM_2457168
 config ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
 	bool
 
-config ARM64_ERRATUM_2119858
-	bool "Cortex-A710: 2119858: workaround TRBE overwriting trace data in FILL mode"
-	default y
-	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
-	depends on CORESIGHT_TRBE
-	select ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
-	help
-	  This option adds the workaround for ARM Cortex-A710 erratum 2119858.
-
-	  Affected Cortex-A710 cores could overwrite up to 3 cache lines of trace
-	  data at the base of the buffer (pointed to by TRBASER_EL1) in FILL mode in
-	  the event of a WRAP event.
-
-	  Work around the issue by always making sure we move the TRBPTR_EL1 by
-	  256 bytes before enabling the buffer and filling the first 256 bytes of
-	  the buffer with ETM ignore packets upon disabling.
-
-	  If unsure, say Y.
-
-config ARM64_ERRATUM_2139208
-	bool "Neoverse-N2: 2139208: workaround TRBE overwriting trace data in FILL mode"
-	default y
-	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
-	depends on CORESIGHT_TRBE
-	select ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE
-	help
-	  This option adds the workaround for ARM Neoverse-N2 erratum 2139208.
-
-	  Affected Neoverse-N2 cores could overwrite up to 3 cache lines of trace
-	  data at the base of the buffer (pointed to by TRBASER_EL1) in FILL mode in
-	  the event of a WRAP event.
-
-	  Work around the issue by always making sure we move the TRBPTR_EL1 by
-	  256 bytes before enabling the buffer and filling the first 256 bytes of
-	  the buffer with ETM ignore packets upon disabling.
-
-	  If unsure, say Y.
-
-config ARM64_WORKAROUND_TSB_FLUSH_FAILURE
-	bool
-
-config ARM64_ERRATUM_2054223
-	bool "Cortex-A710: 2054223: workaround TSB instruction failing to flush trace"
-	default y
-	select ARM64_WORKAROUND_TSB_FLUSH_FAILURE
-	help
-	  Enable workaround for ARM Cortex-A710 erratum 2054223
-
-	  Affected cores may fail to flush the trace data on a TSB instruction, when
-	  the PE is in trace prohibited state. This will cause losing a few bytes
-	  of the trace cached.
-
-	  Workaround is to issue two TSB consecutively on affected cores.
-
-	  If unsure, say Y.
-
-config ARM64_ERRATUM_2067961
-	bool "Neoverse-N2: 2067961: workaround TSB instruction failing to flush trace"
-	default y
-	select ARM64_WORKAROUND_TSB_FLUSH_FAILURE
-	help
-	  Enable workaround for ARM Neoverse-N2 erratum 2067961
-
-	  Affected cores may fail to flush the trace data on a TSB instruction, when
-	  the PE is in trace prohibited state. This will cause losing a few bytes
-	  of the trace cached.
-
-	  Workaround is to issue two TSB consecutively on affected cores.
-
-	  If unsure, say Y.
-
-config ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
-	bool
-
-config ARM64_ERRATUM_2253138
-	bool "Neoverse-N2: 2253138: workaround TRBE writing to address out-of-range"
-	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
-	depends on CORESIGHT_TRBE
-	default y
-	select ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
-	help
-	  This option adds the workaround for ARM Neoverse-N2 erratum 2253138.
-
-	  Affected Neoverse-N2 cores might write to an out-of-range address, not reserved
-	  for TRBE. Under some conditions, the TRBE might generate a write to the next
-	  virtually addressed page following the last page of the TRBE address space
-	  (i.e., the TRBLIMITR_EL1.LIMIT), instead of wrapping around to the base.
-
-	  Work around this in the driver by always making sure that there is a
-	  page beyond the TRBLIMITR_EL1.LIMIT, within the space allowed for the TRBE.
-
-	  If unsure, say Y.
-
-config ARM64_ERRATUM_2224489
-	bool "Cortex-A710: 2224489: workaround TRBE writing to address out-of-range"
-	depends on COMPILE_TEST # Until the CoreSight TRBE driver changes are in
-	depends on CORESIGHT_TRBE
-	default y
-	select ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
-	help
-	  This option adds the workaround for ARM Cortex-A710 erratum 2224489.
-
-	  Affected Cortex-A710 cores might write to an out-of-range address, not reserved
-	  for TRBE. Under some conditions, the TRBE might generate a write to the next
-	  virtually addressed page following the last page of the TRBE address space
-	  (i.e., the TRBLIMITR_EL1.LIMIT), instead of wrapping around to the base.
-
-	  Work around this in the driver by always making sure that there is a
-	  page beyond the TRBLIMITR_EL1.LIMIT, within the space allowed for the TRBE.
-
-	  If unsure, say Y.
-
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y


@@ -439,6 +439,7 @@ CONFIG_THERMAL_WRITABLE_TRIPS=y
 CONFIG_THERMAL_GOV_USER_SPACE=y
 CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
 CONFIG_CPU_THERMAL=y
+CONFIG_CPU_IDLE_THERMAL=y
 CONFIG_DEVFREQ_THERMAL=y
 CONFIG_THERMAL_EMULATION=y
 CONFIG_WATCHDOG=y
@@ -452,7 +453,6 @@ CONFIG_LIRC=y
 CONFIG_BPF_LIRC_MODE2=y
 CONFIG_RC_DECODERS=y
 CONFIG_RC_DEVICES=y
-CONFIG_MEDIA_CEC_RC=y
 # CONFIG_MEDIA_ANALOG_TV_SUPPORT is not set
 # CONFIG_MEDIA_DIGITAL_TV_SUPPORT is not set
 # CONFIG_MEDIA_RADIO_SUPPORT is not set
@@ -586,6 +586,7 @@ CONFIG_IIO_TRIGGER=y
 CONFIG_PWM=y
 CONFIG_GENERIC_PHY=y
 CONFIG_POWERCAP=y
+CONFIG_IDLE_INJECT=y
 CONFIG_ANDROID=y
 CONFIG_ANDROID_BINDER_IPC=y
 CONFIG_ANDROID_BINDERFS=y
@@ -738,5 +739,4 @@ CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_TRACE_MMIO_ACCESS=y
 CONFIG_HIST_TRIGGERS=y
 CONFIG_PID_IN_CONTEXTIDR=y
-CONFIG_KUNIT=y
 # CONFIG_RUNTIME_TESTING_MENU is not set
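[Editor's note] For context: CONFIG_IDLE_INJECT enables the powercap idle-injection framework, and CONFIG_CPU_IDLE_THERMAL builds a thermal cooling device on top of it. A minimal sketch of the in-kernel API this unlocks, with signatures as in include/linux/idle_inject.h for v5.15 (illustrative, not part of this patch):

```c
#include <linux/cpumask.h>
#include <linux/idle_inject.h>
#include <linux/module.h>

static struct idle_inject_device *ii_dev;
static struct cpumask ii_cpus;

static int __init idle_throttle_init(void)
{
	cpumask_clear(&ii_cpus);
	cpumask_set_cpu(0, &ii_cpus);	/* throttle CPU0 only, as a demo */

	ii_dev = idle_inject_register(&ii_cpus);
	if (!ii_dev)
		return -ENODEV;

	/* Per 60 ms period: 50 ms normal run, 10 ms forced idle (~17%). */
	idle_inject_set_duration(ii_dev, 50000, 10000);
	return idle_inject_start(ii_dev);
}

static void __exit idle_throttle_exit(void)
{
	idle_inject_stop(ii_dev);
	idle_inject_unregister(ii_dev);
}
module_init(idle_throttle_init);
module_exit(idle_throttle_exit);
MODULE_LICENSE("GPL");
```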


@@ -19,4 +19,14 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 	return true;
 }
 
+#ifdef CONFIG_KFENCE
+extern bool kfence_early_init;
+static inline bool arm64_kfence_can_set_direct_map(void)
+{
+	return !kfence_early_init;
+}
+#else /* CONFIG_KFENCE */
+static inline bool arm64_kfence_can_set_direct_map(void) { return false; }
+#endif /* CONFIG_KFENCE */
+
 #endif /* __ASM_KFENCE_H */


@@ -53,7 +53,7 @@ HYP_EVENT(host_smc,
__entry->id = id;
__entry->forwarded = forwarded;
),
HE_PRINTK("id=%llu invalid=%u",
HE_PRINTK("id=%llu forwarded=%u",
__entry->id, __entry->forwarded)
);


@@ -15,10 +15,10 @@ struct hyp_entry_hdr {
/*
* Hyp events definitions common to the hyp and the host
*/
#define HYP_EVENT_FORMAT(__name, __struct) \
struct trace_hyp_format_##__name { \
struct hyp_entry_hdr hdr; \
__struct \
#define HYP_EVENT_FORMAT(__name, __struct) \
struct __packed trace_hyp_format_##__name { \
struct hyp_entry_hdr hdr; \
__struct \
}
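/*
 * The __packed variant above removes implicit padding, so (presumably)
 * the host and the hypervisor decode the same record layout from the
 * shared trace buffer regardless of per-field alignment.
 */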
#define HE_PROTO(args...) args


@@ -234,7 +234,7 @@ enum kvm_pgtable_prot {
#define KVM_HOST_S2_DEFAULT_MMIO_PTE \
(KVM_HOST_S2_DEFAULT_MEM_PTE | \
KVM_PTE_LEAF_ATTR_HI_S2_XN)
FIELD_PREP(KVM_PTE_LEAF_ATTR_HI_S2_XN, KVM_PTE_LEAF_ATTR_HI_S2_XN_XN))
#define PAGE_HYP KVM_PGTABLE_PROT_RW
#define PAGE_HYP_EXEC (KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_X)


@@ -16,6 +16,112 @@ enum pkvm_psci_notification {
};
#ifdef CONFIG_MODULES
/**
* struct pkvm_module_ops - pKVM modules callbacks
* @create_private_mapping: Map a memory region into the hypervisor private
* range. @haddr returns the virtual address where
* the mapping starts. It can't be unmapped. Host
* access permissions are unaffected.
* @alloc_module_va: Reserve a range of VA space in the hypervisor
* private range. This is handy for modules that
* need to map plugin code in a similar fashion to
* how pKVM maps module code. That space could also
* be used to map memory temporarily, when the
* fixmap granularity (PAGE_SIZE) is too small.
* @map_module_page: Used in conjunction with @alloc_module_va. When
* @is_protected is not set, the page is also
* unmapped from the host stage-2.
* @register_serial_driver: Register a driver for a serial interface. The
* framework only needs a single callback
* @hyp_putc_cb which is expected to print a single
* character.
* @puts: If a serial interface is registered, prints a
* string; otherwise does nothing.
* @putx64: If a serial interface is registered, prints a
* 64-bit number; otherwise does nothing.
* @fixmap_map: Map a page in the per-CPU hypervisor fixmap.
* This is intended to be used for temporary
* mappings in the hypervisor VA space.
* @fixmap_unmap must be called between each
* mapping to do cache maintenance and ensure the
* new mapping is visible.
* @fixmap_unmap: Unmap a page from the hypervisor fixmap. This
* call is required between each @fixmap_map().
* @linear_map_early: Map a large portion of memory into the
* hypervisor linear VA space. This is intended to
* be used only for module bootstrap and must be
* unmapped before the host is deprivileged.
* @linear_unmap_early: See @linear_map_early.
* @flush_dcache_to_poc: Clean the data cache to the point of coherency.
* Calling it is not a prerequisite for any of the
* other pkvm_module_ops callbacks.
* @update_hcr_el2: Modify the running value of HCR_EL2. pKVM will
* save/restore the new value across power
* management transitions.
* @update_hfgwtr_el2: Modify the running value of HFGWTR_EL2. pKVM
* will save/restore the new value across power
* management transitions.
* @register_host_perm_fault_handler:
* @cb is called whenever the host generates an
* abort with the fault status code Permission
* Fault. Returning -EPERM lets pKVM handle the
* abort. This is useful when a module changes the
* host stage-2 permissions for certain pages.
* @host_stage2_mod_prot: Apply @prot to the page @pfn. This requires a
* permission fault handler to be registered (see
* @register_host_perm_fault_handler), otherwise
* pKVM will be unable to handle this fault and the
* CPU will be stuck in an infinite loop.
* @host_stage2_mod_prot_range: Similar to @host_stage2_mod_prot, but takes a
* range as an argument (@nr_pages). This
* considerably speeds up the process for a
* contiguous memory region, compared to the
* per-page @host_stage2_mod_prot.
* @host_stage2_get_leaf: Query the host's stage2 page-table entry for
* the page @phys.
* @register_host_smc_handler: @cb is called whenever the host issues an SMC
* pKVM couldn't handle. If @cb returns false, the
* SMC will be forwarded to EL3.
* @register_default_trap_handler:
* @cb is called whenever EL2 traps EL1 and pKVM
* has not handled it. If @cb returns false, the
* hypervisor will panic. This trap handler must be
* registered whenever changes are made to HCR
* (@update_hcr_el2) or HFGWTR
* (@update_hfgwtr_el2).
* @register_illegal_abt_notifier:
* To notify the module of a pending illegal abort
* from the host. On @cb return, the abort will be
* injected back into the host.
* @register_psci_notifier: To notify the module of a pending PSCI event.
* @register_hyp_panic_notifier:
* To notify the module of a pending hypervisor
* panic. On return from @cb, the panic will occur.
* @host_donate_hyp: The page @pfn is unmapped from the host and
* full control is given to the hypervisor.
* @hyp_donate_host: The page @pfn, whose control has previously
* been given to the hypervisor
* (@host_donate_hyp), is given back to the host.
* @host_share_hyp: The page @pfn will be shared between the host
* and the hypervisor. Must be followed by
* @pin_shared_mem.
* @host_unshare_hyp: The page @pfn will be unshared and unmapped from
* the hypervisor. Must be called after
* @unpin_shared_mem.
* @pin_shared_mem: After @host_share_hyp, the newly shared page is
* still owned by the host. @pin_shared_mem will
* prevent the host from reclaiming that page until
* the hypervisor releases it (@unpin_shared_mem).
* @unpin_shared_mem: Enable the host to reclaim the shared memory
* (@host_unshare_hyp).
* @memcpy: Same as the kernel memcpy, but using hypervisor VAs.
* @memset: Same as the kernel memset, but using a hypervisor VA.
* @hyp_pa: Return the physical address for a hypervisor
* virtual address in the linear range.
* @hyp_va: Convert a physical address into a virtual one.
* @kern_hyp_va: Convert a kernel virtual address into a
* hypervisor virtual one.
*/
struct pkvm_module_ops {
int (*create_private_mapping)(phys_addr_t phys, size_t size,
enum kvm_pgtable_prot prot,
@@ -52,7 +158,10 @@ struct pkvm_module_ops {
void* (*hyp_va)(phys_addr_t phys);
unsigned long (*kern_hyp_va)(unsigned long x);
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_USE(1, int (*host_stage2_mod_prot_range)(
u64 pfn, enum kvm_pgtable_prot prot,
u64 nr_pages));
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
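
As a usage illustration of the ops table above, a hypothetical module might
share and pin a page as follows; this is a sketch under assumed callback
signatures (only create_private_mapping's signature is visible here):

/* Hypothetical module-side code, not part of this header. */
static const struct pkvm_module_ops *ops;

static int example_share_and_pin(u64 pfn)
{
	void *va = ops->hyp_va((phys_addr_t)pfn << PAGE_SHIFT);
	int ret;

	ret = ops->host_share_hyp(pfn);	/* page now shared host <-> hyp */
	if (ret)
		return ret;

	/* Keep the host from reclaiming the page until we release it. */
	ret = ops->pin_shared_mem(va, va + PAGE_SIZE);
	if (ret)
		ops->host_unshare_hyp(pfn);
	return ret;
}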


@@ -849,12 +849,14 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
}
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
extern bool should_flush_tlb_when_young(void);
static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep)
{
int young = ptep_test_and_clear_young(vma, address, ptep);
if (young) {
if (young && should_flush_tlb_when_young()) {
/*
* We can elide the trailing DSB here since the worst that can
* happen is that a CPU continues to use the young entry in its


@@ -636,14 +636,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_RANGE_LIST(tsb_flush_fail_cpus),
},
#endif
#ifdef CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
{
.desc = "ARM erratum 2253138 or 2224489",
.capability = ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_2457168
{
.desc = "ARM erratum 2457168",
@@ -671,13 +663,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.cpu_enable = cpu_clear_bf16_from_user_emulation,
},
#endif
#ifdef CONFIG_ARM64_WORKAROUND_TSB_FLUSH_FAILURE
{
.desc = "ARM erratum 2067961 or 2054223",
.capability = ARM64_WORKAROUND_TSB_FLUSH_FAILURE,
ERRATA_MIDR_RANGE_LIST(tsb_flush_fail_cpus),
},
#endif
#ifdef CONFIG_ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
{
.desc = "ARM erratum 2253138 or 2224489",


@@ -104,6 +104,7 @@ int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc);
int module_change_host_page_prot(u64 pfn, enum kvm_pgtable_prot prot);
int module_change_host_page_prot_range(u64 pfn, enum kvm_pgtable_prot prot, u64 nr_pages);
void destroy_hyp_vm_pgt(struct pkvm_hyp_vm *vm);
void drain_hyp_pool(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);


@@ -79,35 +79,10 @@ static void hyp_unlock_component(void)
hyp_spin_unlock(&pkvm_pgd_lock);
}
static void assert_host_can_alloc(void)
{
/* We can always get back to the host from guest context */
if (read_sysreg(vttbr_el2) != kvm_get_vttbr(&host_mmu.arch.mmu))
return;
/*
* An error code must be returned to EL1 to handle memory allocation
* failures cleanly. That's doable for explicit calls into higher
* ELs, but not so much for other EL2 entry reasons such as mem aborts.
* Thankfully we don't need memory allocation in these cases by
* construction, so let's enforce the invariant.
*/
switch (ESR_ELx_EC(read_sysreg(esr_el2))) {
case ESR_ELx_EC_HVC64:
case ESR_ELx_EC_SMC64:
break;
default:
WARN_ON(1);
}
}
static void *host_s2_zalloc_pages_exact(size_t size)
{
void *addr;
void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
assert_host_can_alloc();
addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
hyp_split_page(hyp_virt_to_page(addr));
/*
@@ -122,8 +97,6 @@ static void *host_s2_zalloc_pages_exact(size_t size)
static void *host_s2_zalloc_page(void *pool)
{
assert_host_can_alloc();
return hyp_alloc_pages(pool, 0);
}
@@ -176,22 +149,16 @@ static void prepare_host_vtcr(void)
static int prepopulate_host_stage2(void)
{
struct memblock_region *reg;
u64 addr = 0;
int i, ret;
int i, ret = 0;
for (i = 0; i < hyp_memblock_nr; i++) {
reg = &hyp_memory[i];
ret = host_stage2_idmap_locked(addr, reg->base - addr, PKVM_HOST_MMIO_PROT, false);
if (ret)
return ret;
ret = host_stage2_idmap_locked(reg->base, reg->size, PKVM_HOST_MEM_PROT, false);
if (ret)
return ret;
addr = reg->base + reg->size;
}
return host_stage2_idmap_locked(addr, BIT(host_mmu.pgt.ia_bits) - addr, PKVM_HOST_MMIO_PROT,
false);
return ret;
}
int kvm_host_prepare_stage2(void *pgt_pool_base)
@@ -908,7 +875,14 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
int ret = -EPERM;
esr = read_sysreg_el2(SYS_ESR);
BUG_ON(!__get_fault_info(esr, &fault));
if (!__get_fault_info(esr, &fault)) {
addr = (u64)-1;
/*
* We've presumably raced with a page-table change which caused
* AT to fail, try again.
*/
goto return_to_host;
}
fault.esr_el2 = esr;
addr = (fault.hpfar_el2 & HPFAR_MASK) << 8;
@@ -935,6 +909,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
else
BUG_ON(ret && ret != -EAGAIN);
return_to_host:
trace_host_mem_abort(esr, addr);
}
@@ -2035,77 +2010,80 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
return ret;
}
static int restrict_host_page_perms(u64 addr, kvm_pte_t pte, u32 level, enum kvm_pgtable_prot prot)
{
int ret = 0;
/* XXX: optimize ... */
if (kvm_pte_valid(pte) && (level == KVM_PGTABLE_MAX_LEVELS - 1))
ret = kvm_pgtable_stage2_unmap(&host_mmu.pgt, addr, PAGE_SIZE);
if (!ret)
ret = host_stage2_idmap_locked(addr, PAGE_SIZE, prot, false);
return ret;
}
#define MODULE_PROT_ALLOWLIST (KVM_PGTABLE_PROT_RWX | \
KVM_PGTABLE_PROT_DEVICE |\
KVM_PGTABLE_PROT_NC | \
KVM_PGTABLE_PROT_PXN | \
KVM_PGTABLE_PROT_UXN)
int module_change_host_page_prot(u64 pfn, enum kvm_pgtable_prot prot)
int module_change_host_page_prot_range(u64 pfn, enum kvm_pgtable_prot prot, u64 nr_pages)
{
u64 addr = hyp_pfn_to_phys(pfn);
u64 i, addr = hyp_pfn_to_phys(pfn);
u64 end = addr + nr_pages * PAGE_SIZE;
struct hyp_page *page = NULL;
kvm_pte_t pte;
u32 level;
struct kvm_mem_range range;
bool is_mmio;
int ret;
if ((prot & MODULE_PROT_ALLOWLIST) != prot)
return -EINVAL;
is_mmio = !find_mem_range(addr, &range);
if (end > range.end) {
/* Specified range not in a single mmio or memory block. */
return -EPERM;
}
host_lock_component();
ret = kvm_pgtable_get_leaf(&host_mmu.pgt, addr, &pte, &level);
if (ret)
goto unlock;
/*
* There is no hyp_vmemmap covering MMIO regions, which makes tracking
* of module-owned MMIO regions hard, so we trust the modules not to
* mess things up.
*/
if (!addr_is_memory(addr))
if (is_mmio)
goto update;
ret = -EPERM;
/* Range is memory: we can track module ownership. */
page = hyp_phys_to_page(addr);
/*
* Modules can only relax permissions of pages they own, and restrict
* permissions of pristine pages.
* Modules can only modify pages they already own, and pristine host
* pages. The entire range must be consistently one or the other.
*/
if (prot == KVM_PGTABLE_PROT_RWX) {
if (!(page->flags & MODULE_OWNED_PAGE))
if (page->flags & MODULE_OWNED_PAGE) {
/* The entire range must be module-owned. */
ret = -EPERM;
for (i = 1; i < nr_pages; i++) {
if (!(page[i].flags & MODULE_OWNED_PAGE))
goto unlock;
}
} else {
/* The entire range must be pristine. */
ret = __host_check_page_state_range(
addr, nr_pages << PAGE_SHIFT, PKVM_PAGE_OWNED);
if (ret)
goto unlock;
} else if (host_get_page_state(pte, addr) != PKVM_PAGE_OWNED) {
goto unlock;
}
update:
if (prot == default_host_prot(!!page))
ret = host_stage2_set_owner_locked(addr, PAGE_SIZE, PKVM_ID_HOST);
else if (!prot)
ret = host_stage2_set_owner_locked(addr, PAGE_SIZE, PKVM_ID_PROTECTED);
else
ret = restrict_host_page_perms(addr, pte, level, prot);
if (!prot) {
ret = host_stage2_set_owner_locked(
addr, nr_pages << PAGE_SHIFT, PKVM_ID_PROTECTED);
} else {
ret = host_stage2_idmap_locked(
addr, nr_pages << PAGE_SHIFT, prot, false);
}
if (ret || !page)
if (WARN_ON(ret) || !page)
goto unlock;
if (prot != KVM_PGTABLE_PROT_RWX)
hyp_phys_to_page(addr)->flags |= MODULE_OWNED_PAGE;
else
hyp_phys_to_page(addr)->flags &= ~MODULE_OWNED_PAGE;
for (i = 0; i < nr_pages; i++) {
if (prot != KVM_PGTABLE_PROT_RWX)
page[i].flags |= MODULE_OWNED_PAGE;
else
page[i].flags &= ~MODULE_OWNED_PAGE;
}
unlock:
host_unlock_component();
@@ -2113,6 +2091,11 @@ unlock:
return ret;
}
int module_change_host_page_prot(u64 pfn, enum kvm_pgtable_prot prot)
{
return module_change_host_page_prot_range(pfn, prot, 1);
}
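/*
 * Illustration (hypothetical module-side caller of the range API
 * above, exposed to modules as ops->host_stage2_mod_prot_range): make
 * a 16-page region read-only for the host. Per the pkvm_module_ops
 * documentation, a host permission-fault handler must already be
 * registered, otherwise the host would fault forever on the revoked
 * access. Names and the callback signature are assumptions.
 */
static int example_perm_fault_cb(struct kvm_cpu_context *ctxt, u64 esr, u64 addr)
{
	return -EPERM;	/* let pKVM handle/inject the abort */
}

static int example_protect(u64 pfn)
{
	return module_change_host_page_prot_range(pfn, KVM_PGTABLE_PROT_R, 16);
}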
int hyp_pin_shared_mem(void *from, void *to)
{
u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE);


@@ -115,6 +115,7 @@ const struct pkvm_module_ops module_ops = {
.hyp_pa = hyp_virt_to_phys,
.hyp_va = hyp_phys_to_virt,
.kern_hyp_va = __kern_hyp_va,
.host_stage2_mod_prot_range = module_change_host_page_prot_range,
};
int __pkvm_init_module(void *module_init)


@@ -643,8 +643,13 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte)
return prot;
}
static bool stage2_pte_needs_update(kvm_pte_t old, kvm_pte_t new)
static bool stage2_pte_needs_update(struct kvm_pgtable *pgt,
kvm_pte_t old, kvm_pte_t new)
{
/* Following filter logic applies only to guest stage-2 entries. */
if (pgt->flags & KVM_PGTABLE_S2_IDMAP)
return true;
if (!kvm_pte_valid(old) || !kvm_pte_valid(new))
return true;
@@ -713,12 +718,15 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
new = data->annotation;
/*
* Skip updating the PTE if we are trying to recreate the exact
* same mapping or only change the access permissions. Instead,
* the vCPU will exit one more time from guest if still needed
* and then go through the path of relaxing permissions.
* Skip updating a guest PTE if we are trying to recreate the exact
* same mapping or change only the access permissions. Instead,
* the vCPU will exit one more time from the guest if still needed
* and then go through the path of relaxing permissions. This applies
* only to guest PTEs; Host PTEs are unconditionally updated. The
* host cannot livelock because the abort handler has done prior
* checks before calling here.
*/
if (!stage2_pte_needs_update(old, new))
if (!stage2_pte_needs_update(pgt, old, new))
return -EAGAIN;
if (pte_ops->pte_is_counted_cb(old, level))
@@ -773,6 +781,30 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
return 0;
}
static void stage2_map_prefault_idmap(struct kvm_pgtable_pte_ops *pte_ops,
u64 addr, u64 end, u32 level,
kvm_pte_t *ptep, kvm_pte_t block_pte)
{
u64 pa, granule;
int i;
WARN_ON(pte_ops->pte_is_counted_cb(block_pte, level-1));
if (!kvm_pte_valid(block_pte))
return;
pa = ALIGN_DOWN(addr, kvm_granule_size(level-1));
granule = kvm_granule_size(level);
for (i = 0; i < PTRS_PER_PTE; ++i, ++ptep, pa += granule) {
kvm_pte_t pte = kvm_init_valid_leaf_pte(pa, block_pte, level);
/* Skip ptes in the range being modified by the caller. */
if ((pa < addr) || (pa >= end)) {
/* We can write non-atomically: ptep isn't yet live. */
*ptep = pte;
}
}
}
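/*
 * Worked example: with a 4KiB granule, replacing a level-2 (2MiB)
 * block with a level-3 table prefills the 512 new 4KiB PTEs, except
 * those covering [addr, end), which the caller is about to remap.
 */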
static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
struct stage2_map_data *data)
{
@@ -803,6 +835,11 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
if (!childp)
return -ENOMEM;
if (pgt->flags & KVM_PGTABLE_S2_IDMAP) {
stage2_map_prefault_idmap(pte_ops, addr, end, level + 1,
childp, pte);
}
/*
* If we've run into an existing block mapping then replace it with
* a table. Accesses beyond 'end' that fall within the new table


@@ -1388,7 +1388,7 @@ static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* prevent try_to_unmap() from succeeding.
*/
ret = -EIO;
goto dec_account;
goto unpin;
}
write_lock(&kvm->mmu_lock);
@@ -1397,7 +1397,7 @@ static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (ret) {
if (ret == -EAGAIN)
ret = 0;
goto unpin;
goto unlock;
}
ppage->page = page;
@@ -1407,8 +1407,9 @@ static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return 0;
unpin:
unlock:
write_unlock(&kvm->mmu_lock);
unpin:
unpin_user_pages(&page, 1);
dec_account:
account_locked_vm(mm, 1, false);


@@ -14,6 +14,7 @@
#include <linux/of_fdt.h>
#include <linux/of_reserved_mem.h>
#include <linux/sort.h>
#include <linux/stat.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
@@ -21,6 +22,9 @@
#include <asm/kvm_pkvm_module.h>
#include <asm/setup.h>
#include <uapi/linux/mount.h>
#include <linux/init_syscalls.h>
#include "hyp_constants.h"
DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
@@ -682,7 +686,11 @@ int __init pkvm_load_early_modules(void)
{
char *token, *buf = early_pkvm_modules;
char *module_path = CONFIG_PKVM_MODULE_PATH;
int err;
int err = init_mount("proc", "/proc", "proc",
MS_SILENT | MS_NOEXEC | MS_NOSUID, NULL);
if (err)
return err;
while (true) {
token = strsep(&buf, ",");


@@ -230,6 +230,8 @@ SYM_FUNC_END_PI(__dma_flush_area)
* - dir - DMA direction
*/
SYM_FUNC_START_PI(__dma_map_area)
cmp w2, #DMA_FROM_DEVICE
b.eq __dma_flush_area
add x1, x0, x1
b __dma_clean_area
SYM_FUNC_END_PI(__dma_map_area)
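/*
 * DMA_FROM_DEVICE now branches to __dma_flush_area (clean and
 * invalidate) instead of invalidating only, so dirty lines are written
 * back before the device DMAs into the buffer.
 */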


@@ -23,6 +23,7 @@
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include <linux/kfence.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -37,6 +38,9 @@
#include <asm/ptdump.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#include <asm/kfence.h>
#include <trace/hooks/mm.h>
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
@@ -508,12 +512,67 @@ static int __init enable_crash_mem_map(char *arg)
}
early_param("crashkernel", enable_crash_mem_map);
#ifdef CONFIG_KFENCE
bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
/* early_param() will be parsed before map_mem() below. */
static int __init parse_kfence_early_init(char *arg)
{
int val;
if (get_option(&arg, &val))
kfence_early_init = !!val;
return 0;
}
early_param("kfence.sample_interval", parse_kfence_early_init);
static phys_addr_t __init arm64_kfence_alloc_pool(void)
{
phys_addr_t kfence_pool;
if (!kfence_early_init)
return 0;
kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
if (!kfence_pool) {
pr_err("failed to allocate kfence pool\n");
kfence_early_init = false;
return 0;
}
/* Temporarily mark as NOMAP. */
memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
return kfence_pool;
}
static void __init arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp)
{
if (!kfence_pool)
return;
/* KFENCE pool needs page-level mapping. */
__map_memblock(pgdp, kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
pgprot_tagged(PAGE_KERNEL),
NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
__kfence_pool = phys_to_virt(kfence_pool);
}
#else /* CONFIG_KFENCE */
static inline phys_addr_t arm64_kfence_alloc_pool(void) { return 0; }
static inline void arm64_kfence_map_pool(phys_addr_t kfence_pool, pgd_t *pgdp) { }
#endif /* CONFIG_KFENCE */
static void __init map_mem(pgd_t *pgdp)
{
static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
phys_addr_t kernel_start = __pa_symbol(_stext);
phys_addr_t kernel_end = __pa_symbol(__init_begin);
phys_addr_t start, end;
phys_addr_t early_kfence_pool;
int flags = NO_EXEC_MAPPINGS;
u64 i;
@@ -526,7 +585,9 @@ static void __init map_mem(pgd_t *pgdp)
*/
BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
early_kfence_pool = arm64_kfence_alloc_pool();
if (can_set_direct_map())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -593,6 +654,8 @@ static void __init map_mem(pgd_t *pgdp)
}
}
#endif
arm64_kfence_map_pool(early_kfence_pool, pgdp);
}
void mark_rodata_ro(void)
@@ -1491,6 +1554,14 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
return 1;
}
bool should_flush_tlb_when_young(void)
{
bool skip = false;
trace_android_vh_ptep_clear_flush_young(&skip);
return !skip;
}
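/*
 * Illustration (vendor-module side, not part of this file): a probe
 * registered on android_vh_ptep_clear_flush_young sets *skip to elide
 * the TLB flush above. Names here are hypothetical.
 */
static void example_skip_flush(void *unused, bool *skip)
{
	*skip = true;	/* tolerate transiently stale young entries */
}
/* register_trace_android_vh_ptep_clear_flush_young(example_skip_flush, NULL); */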
#ifdef CONFIG_MEMORY_HOTPLUG
static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
{
@@ -1542,11 +1613,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
VM_BUG_ON(!mhp_range_allowed(start, size, true));
/*
* KFENCE requires linear map to be mapped at page granularity, so that
* it is possible to protect/unprotect single pages in the KFENCE pool.
*/
if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
if (can_set_direct_map())
flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),


@@ -11,6 +11,7 @@
#include <asm/cacheflush.h>
#include <asm/set_memory.h>
#include <asm/tlbflush.h>
#include <asm/kfence.h>
struct page_change_data {
pgprot_t set_mask;
@@ -21,7 +22,15 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
bool can_set_direct_map(void)
{
return rodata_full || debug_pagealloc_enabled();
/*
* rodata_full and DEBUG_PAGEALLOC require linear map to be
* mapped at page granularity, so that it is possible to
* protect/unprotect single pages.
*
* KFENCE pool requires page-granular mapping if initialized late.
*/
return rodata_full || debug_pagealloc_enabled() ||
arm64_kfence_can_set_direct_map();
}
static int change_page_range(pte_t *ptep, unsigned long addr, void *data)


@@ -395,6 +395,7 @@ CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=100
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_GOV_USER_SPACE=y
CONFIG_CPU_THERMAL=y
CONFIG_CPU_IDLE_THERMAL=y
CONFIG_DEVFREQ_THERMAL=y
CONFIG_THERMAL_EMULATION=y
# CONFIG_X86_PKG_TEMP_THERMAL is not set
@@ -409,7 +410,6 @@ CONFIG_LIRC=y
CONFIG_BPF_LIRC_MODE2=y
CONFIG_RC_DECODERS=y
CONFIG_RC_DEVICES=y
CONFIG_MEDIA_CEC_RC=y
# CONFIG_MEDIA_ANALOG_TV_SUPPORT is not set
# CONFIG_MEDIA_DIGITAL_TV_SUPPORT is not set
# CONFIG_MEDIA_RADIO_SUPPORT is not set
@@ -526,6 +526,7 @@ CONFIG_IIO=y
CONFIG_IIO_BUFFER=y
CONFIG_IIO_TRIGGER=y
CONFIG_POWERCAP=y
CONFIG_IDLE_INJECT=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDERFS=y
@@ -677,4 +678,3 @@ CONFIG_SCHEDSTATS=y
CONFIG_BUG_ON_DATA_CORRUPTION=y
CONFIG_HIST_TRIGGERS=y
CONFIG_UNWINDER_FRAME_POINTER=y
CONFIG_KUNIT=y


@@ -1,4 +0,0 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.common
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.aarch64
DEFCONFIG=gki_defconfig


@@ -1,4 +0,0 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.common
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.x86_64
DEFCONFIG=gki_defconfig


@@ -3434,7 +3434,7 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY), current->tgid);
!reply && (t->flags & TF_ONE_WAY));
if (IS_ERR(t->buffer)) {
/*
* -ESRCH indicates VMA cleared. The target is dying.
@@ -5021,6 +5021,7 @@ static void binder_release_work(struct binder_proc *proc,
"undelivered TRANSACTION_ERROR: %u\n",
e->cmd);
} break;
case BINDER_WORK_TRANSACTION_PENDING:
case BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT:
case BINDER_WORK_TRANSACTION_COMPLETE: {
binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
@@ -5115,6 +5116,7 @@ static struct binder_thread *binder_get_thread(struct binder_proc *proc)
static void binder_free_proc(struct binder_proc *proc)
{
struct binder_proc_wrap *proc_wrap;
struct binder_device *device;
BUG_ON(!list_empty(&proc->todo));
@@ -5132,7 +5134,8 @@ static void binder_free_proc(struct binder_proc *proc)
put_cred(proc->cred);
binder_stats_deleted(BINDER_STAT_PROC);
trace_android_vh_binder_free_proc(proc);
kfree(proc);
proc_wrap = binder_proc_wrap_entry(proc);
kfree(proc_wrap);
}
static void binder_free_thread(struct binder_thread *thread)
@@ -5244,7 +5247,7 @@ static __poll_t binder_poll(struct file *filp,
thread = binder_get_thread(proc);
if (!thread)
return POLLERR;
return EPOLLERR;
binder_inner_proc_lock(thread->proc);
thread->looper |= BINDER_LOOPER_STATE_POLL;
@@ -5817,6 +5820,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc_wrap *proc_wrap;
struct binder_proc *proc, *itr;
struct binder_device *binder_dev;
struct binderfs_info *info;
@@ -5826,9 +5830,11 @@ static int binder_open(struct inode *nodp, struct file *filp)
binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
current->group_leader->pid, current->pid);
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
proc_wrap = kzalloc(sizeof(*proc_wrap), GFP_KERNEL);
if (proc_wrap == NULL)
return -ENOMEM;
proc = &proc_wrap->proc;
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
@@ -6198,9 +6204,9 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
}
if (buffer->target_node)
seq_printf(m, " node %d", buffer->target_node->debug_id);
seq_printf(m, " size %zd:%zd data %pK\n",
seq_printf(m, " size %zd:%zd offset %lx\n",
buffer->data_size, buffer->offsets_size,
buffer->user_data);
proc->alloc.buffer - buffer->user_data);
}
static void print_binder_work_ilocked(struct seq_file *m,

File diff suppressed because it is too large.


@@ -15,7 +15,7 @@
#include <linux/list_lru.h>
#include <uapi/linux/android/binder.h>
extern struct list_lru binder_alloc_lru;
extern struct list_lru binder_freelist;
struct binder_transaction;
/**
@@ -49,21 +49,19 @@ struct binder_buffer {
unsigned async_transaction:1;
unsigned oneway_spam_suspect:1;
unsigned debug_id:27;
struct binder_transaction *transaction;
struct binder_node *target_node;
size_t data_size;
size_t offsets_size;
size_t extra_buffers_size;
void __user *user_data;
int pid;
int pid;
};
/**
* struct binder_lru_page - page object used for binder shrinker
* @page_ptr: pointer to physical page in mmap'd space
* @lru: entry in binder_alloc_lru
* @lru: entry in binder_freelist
* @alloc: binder_alloc for a proc
*/
struct binder_lru_page {
@@ -74,6 +72,7 @@ struct binder_lru_page {
/**
* struct binder_alloc - per-binder proc state for binder allocator
* @lock: protects binder_alloc fields
* @vma: vm_area_struct passed to mmap_handler
* (invariant after mmap)
* @tsk: tid for task that called init for this proc
@@ -123,47 +122,29 @@ static inline void binder_selftest_alloc(struct binder_alloc *alloc) {}
enum lru_status binder_alloc_free_page(struct list_head *item,
struct list_lru_one *lru,
spinlock_t *lock, void *cb_arg);
extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async,
int pid);
extern void binder_alloc_init(struct binder_alloc *alloc);
extern int binder_alloc_shrinker_init(void);
extern void binder_alloc_shrinker_exit(void);
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
extern struct binder_buffer *
struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async);
void binder_alloc_init(struct binder_alloc *alloc);
int binder_alloc_shrinker_init(void);
void binder_alloc_shrinker_exit(void);
void binder_alloc_vma_close(struct binder_alloc *alloc);
struct binder_buffer *
binder_alloc_prepare_to_free(struct binder_alloc *alloc,
uintptr_t user_ptr);
extern void binder_alloc_free_buf(struct binder_alloc *alloc,
struct binder_buffer *buffer);
extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
struct vm_area_struct *vma);
extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
extern void binder_alloc_print_allocated(struct seq_file *m,
struct binder_alloc *alloc);
unsigned long user_ptr);
void binder_alloc_free_buf(struct binder_alloc *alloc,
struct binder_buffer *buffer);
int binder_alloc_mmap_handler(struct binder_alloc *alloc,
struct vm_area_struct *vma);
void binder_alloc_deferred_release(struct binder_alloc *alloc);
int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
void binder_alloc_print_allocated(struct seq_file *m,
struct binder_alloc *alloc);
void binder_alloc_print_pages(struct seq_file *m,
struct binder_alloc *alloc);
/**
* binder_alloc_get_free_async_space() - get free space available for async
* @alloc: binder_alloc for this proc
*
* Return: the bytes remaining in the address-space for async transactions
*/
static inline size_t
binder_alloc_get_free_async_space(struct binder_alloc *alloc)
{
size_t free_async_space;
mutex_lock(&alloc->mutex);
free_async_space = alloc->free_async_space;
mutex_unlock(&alloc->mutex);
return free_async_space;
}
unsigned long
binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,


@@ -93,14 +93,14 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
struct binder_buffer *buffer,
size_t size)
{
void __user *page_addr;
void __user *end;
unsigned long page_addr;
unsigned long end;
int page_index;
end = (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
page_addr = buffer->user_data;
end = PAGE_ALIGN((uintptr_t)buffer->user_data + size);
page_addr = (uintptr_t)buffer->user_data;
for (; page_addr < end; page_addr += PAGE_SIZE) {
page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
page_index = (page_addr - (uintptr_t)alloc->buffer) / PAGE_SIZE;
if (!alloc->pages[page_index].page_ptr ||
!list_empty(&alloc->pages[page_index].lru)) {
pr_err("expect alloc but is %s at page index %d\n",
@@ -119,7 +119,7 @@ static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
int i;
for (i = 0; i < BUFFER_NUM; i++) {
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0, 0);
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
if (IS_ERR(buffers[i]) ||
!check_buffer_pages_allocated(alloc, buffers[i],
sizes[i])) {
@@ -158,8 +158,8 @@ static void binder_selftest_free_page(struct binder_alloc *alloc)
int i;
unsigned long count;
while ((count = list_lru_count(&binder_alloc_lru))) {
list_lru_walk(&binder_alloc_lru, binder_alloc_free_page,
while ((count = list_lru_count(&binder_freelist))) {
list_lru_walk(&binder_freelist, binder_alloc_free_page,
NULL, count);
}
@@ -183,7 +183,7 @@ static void binder_selftest_alloc_free(struct binder_alloc *alloc,
/* Allocate from lru. */
binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
if (list_lru_count(&binder_alloc_lru))
if (list_lru_count(&binder_freelist))
pr_err("lru list should be empty but is not\n");
binder_selftest_free_buf(alloc, buffers, sizes, seq, end);


@@ -467,6 +467,66 @@ struct binder_proc {
bool oneway_spam_detection_enabled;
};
struct binder_proc_wrap {
struct binder_proc proc;
spinlock_t lock;
};
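/*
 * The allocator spinlock lives in this wrapper, outside struct
 * binder_proc and struct binder_alloc, so their (ABI-frozen) layouts
 * stay unchanged; the container_of() helpers below recover the wrapper
 * from the embedded structures.
 */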
static inline struct binder_proc *
binder_proc_entry(struct binder_alloc *alloc)
{
return container_of(alloc, struct binder_proc, alloc);
}
static inline struct binder_proc_wrap *
binder_proc_wrap_entry(struct binder_proc *proc)
{
return container_of(proc, struct binder_proc_wrap, proc);
}
static inline struct binder_proc_wrap *
binder_alloc_to_proc_wrap(struct binder_alloc *alloc)
{
return binder_proc_wrap_entry(binder_proc_entry(alloc));
}
static inline void binder_alloc_lock_init(struct binder_alloc *alloc)
{
spin_lock_init(&binder_alloc_to_proc_wrap(alloc)->lock);
}
static inline void binder_alloc_lock(struct binder_alloc *alloc)
{
spin_lock(&binder_alloc_to_proc_wrap(alloc)->lock);
}
static inline void binder_alloc_unlock(struct binder_alloc *alloc)
{
spin_unlock(&binder_alloc_to_proc_wrap(alloc)->lock);
}
static inline int binder_alloc_trylock(struct binder_alloc *alloc)
{
return spin_trylock(&binder_alloc_to_proc_wrap(alloc)->lock);
}
/**
* binder_alloc_get_free_async_space() - get free space available for async
* @alloc: binder_alloc for this proc
*
* Return: the bytes remaining in the address-space for async transactions
*/
static inline size_t
binder_alloc_get_free_async_space(struct binder_alloc *alloc)
{
size_t free_async_space;
binder_alloc_lock(alloc);
free_async_space = alloc->free_async_space;
binder_alloc_unlock(alloc);
return free_async_space;
}
/**
* struct binder_thread - binder thread bookkeeping
* @proc: binder process for this thread


@@ -341,7 +341,7 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_update_buffer_release,
TRACE_EVENT(binder_update_page_range,
TP_PROTO(struct binder_alloc *alloc, bool allocate,
void __user *start, void __user *end),
unsigned long start, unsigned long end),
TP_ARGS(alloc, allocate, start, end),
TP_STRUCT__entry(
__field(int, proc)
@@ -352,7 +352,7 @@ TRACE_EVENT(binder_update_page_range,
TP_fast_assign(
__entry->proc = alloc->pid;
__entry->allocate = allocate;
__entry->offset = start - alloc->buffer;
__entry->offset = start - (uintptr_t)alloc->buffer;
__entry->size = end - start;
),
TP_printk("proc=%d allocate=%d offset=%zu size=%zu",


@@ -76,7 +76,7 @@
#include <trace/hooks/typec.h>
#include <trace/hooks/sound.h>
#include <trace/hooks/user.h>
#include <trace/hooks/delayacct.h>
/*
* Export tracepoints that act as a bare tracehook (ie: have no trace event
* associated with them) to allow external modules to probe them.
@@ -138,6 +138,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_freq_table_limits);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpufreq_resolve_freq);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpufreq_fast_switch);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpufreq_target);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpufreq_online);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_skip_swapcache_flags);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_gfp_zone_flags);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_readahead_gfp_mask);
@@ -269,6 +270,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_kswapd_per_node);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_audio_usb_offload_vendor_set);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_audio_usb_offload_ep_action);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_audio_usb_offload_synctype);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_gic_v3_suspend);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_audio_usb_offload_connect);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_audio_usb_offload_disconnect);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_atomic_remove_fb);
@@ -370,3 +372,20 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_transaction_received);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_oem_binder_struct);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_special_task);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_free_buf);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_isolate_freepages);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_update_thermal_stats);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ptep_clear_flush_young);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_set_flag);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_clear_flag);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_delayacct_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_delayacct_tsk_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_delayacct_tsk_free);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_blkio_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_blkio_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_add_tsk);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_blkio_ticks);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_is_task_waiting_on_io);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_freepages_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_freepages_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_thrashing_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_delayacct_thrashing_end);


@@ -173,8 +173,11 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
{
int cpu;
for_each_cpu(cpu, cpus)
for_each_cpu(cpu, cpus) {
WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
trace_android_rvh_update_thermal_stats(cpu);
}
}
EXPORT_SYMBOL_GPL(topology_set_thermal_pressure);


@@ -1445,6 +1445,8 @@ static int cpufreq_online(unsigned int cpu)
goto out_destroy_policy;
}
trace_android_vh_cpufreq_online(policy);
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_CREATE_POLICY, policy);
}


@@ -31,7 +31,7 @@ DEFINE_CORESIGHT_DEVLIST(etb_devs, "tmc_etb");
DEFINE_CORESIGHT_DEVLIST(etf_devs, "tmc_etf");
DEFINE_CORESIGHT_DEVLIST(etr_devs, "tmc_etr");
void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
int tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
{
struct coresight_device *csdev = drvdata->csdev;
struct csdev_access *csa = &csdev->access;
@@ -40,7 +40,9 @@ void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
if (coresight_timeout(csa, TMC_STS, TMC_STS_TMCREADY_BIT, 1)) {
dev_err(&csdev->dev,
"timeout while waiting for TMC to be Ready\n");
return -EBUSY;
}
return 0;
}
void tmc_flush_and_stop(struct tmc_drvdata *drvdata)


@@ -16,12 +16,20 @@
static int tmc_set_etf_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle);
static void __tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
static int __tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
{
int rc = 0;
CS_UNLOCK(drvdata->base);
/* Wait for TMCSReady bit to be set */
tmc_wait_for_tmcready(drvdata);
rc = tmc_wait_for_tmcready(drvdata);
if (rc) {
dev_err(&drvdata->csdev->dev,
"Failed to enable: TMC not ready\n");
CS_LOCK(drvdata->base);
return rc;
}
writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
@@ -33,6 +41,7 @@ static void __tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
tmc_enable_hw(drvdata);
CS_LOCK(drvdata->base);
return rc;
}
static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
@@ -42,8 +51,10 @@ static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
if (rc)
return rc;
__tmc_etb_enable_hw(drvdata);
return 0;
rc = __tmc_etb_enable_hw(drvdata);
if (rc)
coresight_disclaim_device(drvdata->csdev);
return rc;
}
static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
@@ -91,12 +102,20 @@ static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
coresight_disclaim_device(drvdata->csdev);
}
static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
static int __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
{
int rc = 0;
CS_UNLOCK(drvdata->base);
/* Wait for TMCSReady bit to be set */
tmc_wait_for_tmcready(drvdata);
rc = tmc_wait_for_tmcready(drvdata);
if (rc) {
dev_err(&drvdata->csdev->dev,
"Failed to enable : TMC is not ready\n");
CS_LOCK(drvdata->base);
return rc;
}
writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE);
writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI,
@@ -105,6 +124,7 @@ static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
tmc_enable_hw(drvdata);
CS_LOCK(drvdata->base);
return rc;
}
static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
@@ -114,8 +134,10 @@ static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
if (rc)
return rc;
__tmc_etf_enable_hw(drvdata);
return 0;
rc = __tmc_etf_enable_hw(drvdata);
if (rc)
coresight_disclaim_device(drvdata->csdev);
return rc;
}
static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
@@ -639,6 +661,7 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
char *buf = NULL;
enum tmc_mode mode;
unsigned long flags;
int rc = 0;
/* config types are set a boot time and never change */
if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
@@ -664,7 +687,11 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
* can't be NULL.
*/
memset(drvdata->buf, 0, drvdata->size);
__tmc_etb_enable_hw(drvdata);
rc = __tmc_etb_enable_hw(drvdata);
if (rc) {
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return rc;
}
} else {
/*
* The ETB/ETF is not tracing and the buffer was just read.


@@ -984,15 +984,22 @@ static void tmc_sync_etr_buf(struct tmc_drvdata *drvdata)
etr_buf->ops->sync(etr_buf, rrp, rwp);
}
static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
static int __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
{
u32 axictl, sts;
struct etr_buf *etr_buf = drvdata->etr_buf;
int rc = 0;
CS_UNLOCK(drvdata->base);
/* Wait for TMCSReady bit to be set */
tmc_wait_for_tmcready(drvdata);
rc = tmc_wait_for_tmcready(drvdata);
if (rc) {
dev_err(&drvdata->csdev->dev,
"Failed to enable : TMC not ready\n");
CS_LOCK(drvdata->base);
return rc;
}
writel_relaxed(etr_buf->size / 4, drvdata->base + TMC_RSZ);
writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
@@ -1033,6 +1040,7 @@ static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
tmc_enable_hw(drvdata);
CS_LOCK(drvdata->base);
return rc;
}
static int tmc_etr_enable_hw(struct tmc_drvdata *drvdata,
@@ -1061,7 +1069,12 @@ static int tmc_etr_enable_hw(struct tmc_drvdata *drvdata,
rc = coresight_claim_device(drvdata->csdev);
if (!rc) {
drvdata->etr_buf = etr_buf;
__tmc_etr_enable_hw(drvdata);
rc = __tmc_etr_enable_hw(drvdata);
if (rc) {
drvdata->etr_buf = NULL;
coresight_disclaim_device(drvdata->csdev);
tmc_etr_disable_catu(drvdata);
}
}
return rc;


@@ -255,7 +255,7 @@ struct tmc_sg_table {
};
/* Generic functions */
void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata);
int tmc_wait_for_tmcready(struct tmc_drvdata *drvdata);
void tmc_flush_and_stop(struct tmc_drvdata *drvdata);
void tmc_enable_hw(struct tmc_drvdata *drvdata);
void tmc_disable_hw(struct tmc_drvdata *drvdata);


@@ -33,6 +33,7 @@
#define UINPUT_NAME "uinput"
#define UINPUT_BUFFER_SIZE 16
#define UINPUT_NUM_REQUESTS 16
#define UINPUT_TIMESTAMP_ALLOWED_OFFSET_SECS 10
enum uinput_state { UIST_NEW_DEVICE, UIST_SETUP_COMPLETE, UIST_CREATED };
@@ -569,11 +570,40 @@ static int uinput_setup_device_legacy(struct uinput_device *udev,
return retval;
}
/*
* Returns true if the given timestamp is valid (i.e., if all the following
* conditions are satisfied), false otherwise.
* 1) given timestamp is positive
* 2) it's within the allowed offset before the current time
* 3) it's not in the future
*/
static bool is_valid_timestamp(const ktime_t timestamp)
{
ktime_t zero_time;
ktime_t current_time;
ktime_t min_time;
ktime_t offset;
zero_time = ktime_set(0, 0);
if (ktime_compare(zero_time, timestamp) >= 0)
return false;
current_time = ktime_get();
offset = ktime_set(UINPUT_TIMESTAMP_ALLOWED_OFFSET_SECS, 0);
min_time = ktime_sub(current_time, offset);
if (ktime_after(min_time, timestamp) || ktime_after(timestamp, current_time))
return false;
return true;
}
static ssize_t uinput_inject_events(struct uinput_device *udev,
const char __user *buffer, size_t count)
{
struct input_event ev;
size_t bytes = 0;
ktime_t timestamp;
if (count != 0 && count < input_event_size())
return -EINVAL;
@@ -588,6 +618,10 @@ static ssize_t uinput_inject_events(struct uinput_device *udev,
if (input_event_from_user(buffer + bytes, &ev))
return -EFAULT;
timestamp = ktime_set(ev.input_event_sec, ev.input_event_usec * NSEC_PER_USEC);
if (is_valid_timestamp(timestamp))
input_set_timestamp(udev->dev, timestamp);
input_event(udev->dev, ev.type, ev.code, ev.value);
bytes += input_event_size();
cond_resched();
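
With this validation in place, a userspace injector can supply its own
timestamps as long as they are positive, at most
UINPUT_TIMESTAMP_ALLOWED_OFFSET_SECS in the past, and not in the future. A
minimal userspace sketch (fd is assumed to be an already configured uinput
device):

#include <linux/uinput.h>
#include <sys/time.h>
#include <unistd.h>

static int inject_key(int fd, int code, int value)
{
	struct input_event ev = {0};
	struct timeval tv;

	gettimeofday(&tv, NULL);	/* must not be in the future */
	ev.input_event_sec = tv.tv_sec;
	ev.input_event_usec = tv.tv_usec;
	ev.type = EV_KEY;
	ev.code = code;
	ev.value = value;
	return write(fd, &ev, sizeof(ev)) == sizeof(ev) ? 0 : -1;
}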


@@ -215,10 +215,11 @@ static void gic_do_wait_for_rwp(void __iomem *base, u32 bit)
}
/* Wait for completion of a distributor change */
static void gic_dist_wait_for_rwp(void)
void gic_v3_dist_wait_for_rwp(void)
{
gic_do_wait_for_rwp(gic_data.dist_base, GICD_CTLR_RWP);
}
EXPORT_SYMBOL_GPL(gic_v3_dist_wait_for_rwp);
/* Wait for completion of a redistributor change */
static void gic_redist_wait_for_rwp(void)
@@ -357,7 +358,7 @@ static void gic_poke_irq(struct irq_data *d, u32 offset)
rwp_wait = gic_redist_wait_for_rwp;
} else {
base = gic_data.dist_base;
rwp_wait = gic_dist_wait_for_rwp;
rwp_wait = gic_v3_dist_wait_for_rwp;
}
writel_relaxed(mask, base + offset + (index / 32) * 4);
@@ -589,7 +590,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
rwp_wait = gic_redist_wait_for_rwp;
} else {
base = gic_data.dist_base;
rwp_wait = gic_dist_wait_for_rwp;
rwp_wait = gic_v3_dist_wait_for_rwp;
}
offset = convert_offset_index(d, GICD_ICFGR, &index);
@@ -792,7 +793,7 @@ static bool gic_has_group0(void)
return val != 0;
}
static void __init gic_dist_init(void)
void gic_v3_dist_init(void)
{
unsigned int i;
u64 affinity;
@@ -801,7 +802,7 @@ static void __init gic_dist_init(void)
/* Disable the distributor */
writel_relaxed(0, base + GICD_CTLR);
gic_dist_wait_for_rwp();
gic_v3_dist_wait_for_rwp();
/*
* Configure SPIs as non-secure Group-1. This will only matter
@@ -828,7 +829,7 @@ static void __init gic_dist_init(void)
writel_relaxed(GICD_INT_DEF_PRI_X4, base + GICD_IPRIORITYRnE + i);
/* Now do the common stuff, and wait for the distributor to drain */
gic_dist_config(base, GIC_LINE_NR, gic_dist_wait_for_rwp);
gic_dist_config(base, GIC_LINE_NR, gic_v3_dist_wait_for_rwp);
val = GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A | GICD_CTLR_ENABLE_G1;
if (gic_data.rdists.gicd_typer2 & GICD_TYPER2_nASSGIcap) {
@@ -854,6 +855,7 @@ static void __init gic_dist_init(void)
gic_write_irouter(affinity, base + GICD_IROUTERnE + i * 8);
}
}
EXPORT_SYMBOL_GPL(gic_v3_dist_init);
static int gic_iterate_rdists(int (*fn)(struct redist_region *, void __iomem *))
{
@@ -1135,7 +1137,7 @@ static int gic_dist_supports_lpis(void)
!gicv3_nolpi);
}
static void gic_cpu_init(void)
void gic_v3_cpu_init(void)
{
void __iomem *rbase;
int i;
@@ -1162,6 +1164,7 @@ static void gic_cpu_init(void)
/* initialise system registers */
gic_cpu_sys_reg_init();
}
EXPORT_SYMBOL_GPL(gic_v3_cpu_init);
#ifdef CONFIG_SMP
@@ -1170,7 +1173,7 @@ static void gic_cpu_init(void)
static int gic_starting_cpu(unsigned int cpu)
{
gic_cpu_init();
gic_v3_cpu_init();
if (gic_dist_supports_lpis())
its_cpu_init();
@@ -1312,7 +1315,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
if (enabled)
gic_unmask_irq(d);
else
gic_dist_wait_for_rwp();
gic_v3_dist_wait_for_rwp();
irq_data_update_effective_affinity(d, cpumask_of(cpu));
@@ -1364,8 +1367,15 @@ void gic_resume(void)
}
EXPORT_SYMBOL_GPL(gic_resume);
static int gic_v3_suspend(void)
{
trace_android_vh_gic_v3_suspend(&gic_data);
return 0;
}
static struct syscore_ops gic_syscore_ops = {
.resume = gic_resume,
.suspend = gic_v3_suspend,
};
static void gic_syscore_init(void)
@@ -1376,6 +1386,7 @@ static void gic_syscore_init(void)
#else
static inline void gic_syscore_init(void) { }
void gic_resume(void) { }
static int gic_v3_suspend(void) { return 0; }
#endif
@@ -1884,8 +1895,8 @@ static int __init gic_init_bases(void __iomem *dist_base,
gic_update_rdist_properties();
gic_dist_init();
gic_cpu_init();
gic_v3_dist_init();
gic_v3_cpu_init();
gic_smp_init();
gic_cpu_pm_init();
gic_syscore_init();


@@ -178,7 +178,6 @@
#define pr_fmt(fmt) "bcache: %s() " fmt, __func__
#include <linux/bcache.h>
#include <linux/bio.h>
#include <linux/kobject.h>
#include <linux/list.h>
@@ -190,6 +189,7 @@
#include <linux/workqueue.h>
#include <linux/kthread.h>
#include "bcache_ondisk.h"
#include "bset.h"
#include "util.h"
#include "closure.h"


@@ -2,10 +2,10 @@
#ifndef _BCACHE_BSET_H
#define _BCACHE_BSET_H
#include <linux/bcache.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include "bcache_ondisk.h"
#include "util.h" /* for time_stats */
/*


@@ -6,7 +6,7 @@
* Copyright 2020 Coly Li <colyli@suse.de>
*
*/
#include <linux/bcache.h>
#include "bcache_ondisk.h"
#include "bcache.h"
#include "features.h"


@@ -2,10 +2,11 @@
#ifndef _BCACHE_FEATURES_H
#define _BCACHE_FEATURES_H
#include <linux/bcache.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include "bcache_ondisk.h"
#define BCH_FEATURE_COMPAT 0
#define BCH_FEATURE_RO_COMPAT 1
#define BCH_FEATURE_INCOMPAT 2


@@ -468,13 +468,6 @@ config UID_SYS_STATS
Per UID based io statistics exported to /proc/uid_io
Per UID based procstat control in /proc/uid_procstat
config UID_SYS_STATS_DEBUG
bool "Per-TASK statistics"
depends on UID_SYS_STATS
default n
help
Per TASK based io statistics exported to /proc/uid_io
config HISI_HIKEY_USB
tristate "USB GPIO Hub on HiSilicon Hikey 960/970 Platform"
depends on (OF && GPIOLIB) || COMPILE_TEST


@@ -76,9 +76,6 @@ struct uid_entry {
int state;
struct io_stats io[UID_STATE_SIZE];
struct hlist_node hash;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
DECLARE_HASHTABLE(task_entries, UID_HASH_BITS);
#endif
};
static inline int trylock_uid(uid_t uid)
@@ -148,182 +145,6 @@ static void compute_io_bucket_stats(struct io_stats *io_bucket,
memset(io_dead, 0, sizeof(struct io_stats));
}
#ifdef CONFIG_UID_SYS_STATS_DEBUG
static void get_full_task_comm(struct task_entry *task_entry,
struct task_struct *task)
{
int i = 0, offset = 0, len = 0;
/* save one byte for terminating null character */
int unused_len = MAX_TASK_COMM_LEN - TASK_COMM_LEN - 1;
char buf[MAX_TASK_COMM_LEN - TASK_COMM_LEN - 1];
struct mm_struct *mm = task->mm;
/* fill the first TASK_COMM_LEN bytes with thread name */
__get_task_comm(task_entry->comm, TASK_COMM_LEN, task);
i = strlen(task_entry->comm);
while (i < TASK_COMM_LEN)
task_entry->comm[i++] = ' ';
/* next the executable file name */
if (mm) {
mmap_write_lock(mm);
if (mm->exe_file) {
char *pathname = d_path(&mm->exe_file->f_path, buf,
unused_len);
if (!IS_ERR(pathname)) {
len = strlcpy(task_entry->comm + i, pathname,
unused_len);
i += len;
task_entry->comm[i++] = ' ';
unused_len--;
}
}
mmap_write_unlock(mm);
}
unused_len -= len;
/* fill the rest with command line argument
* replace each null or new line character
* between args in argv with whitespace */
len = get_cmdline(task, buf, unused_len);
while (offset < len) {
if (buf[offset] != '\0' && buf[offset] != '\n')
task_entry->comm[i++] = buf[offset];
else
task_entry->comm[i++] = ' ';
offset++;
}
/* get rid of trailing whitespaces in case when arg is memset to
* zero before being reset in userspace
*/
while (task_entry->comm[i-1] == ' ')
i--;
task_entry->comm[i] = '\0';
}
static struct task_entry *find_task_entry(struct uid_entry *uid_entry,
struct task_struct *task)
{
struct task_entry *task_entry;
hash_for_each_possible(uid_entry->task_entries, task_entry, hash,
task->pid) {
if (task->pid == task_entry->pid) {
/* if thread name changed, update the entire command */
int len = strnchr(task_entry->comm, ' ', TASK_COMM_LEN)
- task_entry->comm;
if (strncmp(task_entry->comm, task->comm, len))
get_full_task_comm(task_entry, task);
return task_entry;
}
}
return NULL;
}
static struct task_entry *find_or_register_task(struct uid_entry *uid_entry,
struct task_struct *task)
{
struct task_entry *task_entry;
pid_t pid = task->pid;
task_entry = find_task_entry(uid_entry, task);
if (task_entry)
return task_entry;
task_entry = kzalloc(sizeof(struct task_entry), GFP_ATOMIC);
if (!task_entry)
return NULL;
get_full_task_comm(task_entry, task);
task_entry->pid = pid;
hash_add(uid_entry->task_entries, &task_entry->hash, (unsigned int)pid);
return task_entry;
}
static void remove_uid_tasks(struct uid_entry *uid_entry)
{
struct task_entry *task_entry;
unsigned long bkt_task;
struct hlist_node *tmp_task;
hash_for_each_safe(uid_entry->task_entries, bkt_task,
tmp_task, task_entry, hash) {
hash_del(&task_entry->hash);
kfree(task_entry);
}
}
static void set_io_uid_tasks_zero(struct uid_entry *uid_entry)
{
struct task_entry *task_entry;
unsigned long bkt_task;
hash_for_each(uid_entry->task_entries, bkt_task, task_entry, hash) {
memset(&task_entry->io[UID_STATE_TOTAL_CURR], 0,
sizeof(struct io_stats));
}
}
static void add_uid_tasks_io_stats(struct task_entry *task_entry,
struct task_io_accounting *ioac, int slot)
{
struct io_stats *task_io_slot = &task_entry->io[slot];
task_io_slot->read_bytes += ioac->read_bytes;
task_io_slot->write_bytes += compute_write_bytes(ioac);
task_io_slot->rchar += ioac->rchar;
task_io_slot->wchar += ioac->wchar;
task_io_slot->fsync += ioac->syscfs;
}
static void compute_io_uid_tasks(struct uid_entry *uid_entry)
{
struct task_entry *task_entry;
unsigned long bkt_task;
hash_for_each(uid_entry->task_entries, bkt_task, task_entry, hash) {
compute_io_bucket_stats(&task_entry->io[uid_entry->state],
&task_entry->io[UID_STATE_TOTAL_CURR],
&task_entry->io[UID_STATE_TOTAL_LAST],
&task_entry->io[UID_STATE_DEAD_TASKS]);
}
}
static void show_io_uid_tasks(struct seq_file *m, struct uid_entry *uid_entry)
{
struct task_entry *task_entry;
unsigned long bkt_task;
hash_for_each(uid_entry->task_entries, bkt_task, task_entry, hash) {
/* Separated by comma because space exists in task comm */
seq_printf(m, "task,%s,%lu,%llu,%llu,%llu,%llu,%llu,%llu,%llu,%llu,%llu,%llu\n",
task_entry->comm,
(unsigned long)task_entry->pid,
task_entry->io[UID_STATE_FOREGROUND].rchar,
task_entry->io[UID_STATE_FOREGROUND].wchar,
task_entry->io[UID_STATE_FOREGROUND].read_bytes,
task_entry->io[UID_STATE_FOREGROUND].write_bytes,
task_entry->io[UID_STATE_BACKGROUND].rchar,
task_entry->io[UID_STATE_BACKGROUND].wchar,
task_entry->io[UID_STATE_BACKGROUND].read_bytes,
task_entry->io[UID_STATE_BACKGROUND].write_bytes,
task_entry->io[UID_STATE_FOREGROUND].fsync,
task_entry->io[UID_STATE_BACKGROUND].fsync);
}
}
#else
static void remove_uid_tasks(struct uid_entry *uid_entry) {};
static void set_io_uid_tasks_zero(struct uid_entry *uid_entry) {};
static void compute_io_uid_tasks(struct uid_entry *uid_entry) {};
static void show_io_uid_tasks(struct seq_file *m,
struct uid_entry *uid_entry) {}
#endif
static struct uid_entry *find_uid_entry(uid_t uid)
{
struct uid_entry *uid_entry;
@@ -347,9 +168,6 @@ static struct uid_entry *find_or_register_uid(uid_t uid)
return NULL;
uid_entry->uid = uid;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
hash_init(uid_entry->task_entries);
#endif
hash_add(hash_table, &uid_entry->hash, uid);
return uid_entry;
@@ -465,7 +283,6 @@ static ssize_t uid_remove_write(struct file *file,
hash_for_each_possible_safe(hash_table, uid_entry, tmp,
hash, (uid_t)uid_start) {
if (uid_start == uid_entry->uid) {
remove_uid_tasks(uid_entry);
hash_del(&uid_entry->hash);
kfree(uid_entry);
}
@@ -503,10 +320,6 @@ static void add_uid_io_stats(struct uid_entry *uid_entry,
if (slot != UID_STATE_DEAD_TASKS && (task->flags & PF_EXITING))
return;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
task_entry = find_or_register_task(uid_entry, task);
add_uid_tasks_io_stats(task_entry, &task->ioac, slot);
#endif
__add_uid_io_stats(uid_entry, &task->ioac, slot);
}
@@ -524,7 +337,6 @@ static void update_io_stats_all(void)
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
memset(&uid_entry->io[UID_STATE_TOTAL_CURR], 0,
sizeof(struct io_stats));
set_io_uid_tasks_zero(uid_entry);
}
unlock_uid_by_bkt(bkt);
}
@@ -552,24 +364,18 @@ static void update_io_stats_all(void)
&uid_entry->io[UID_STATE_TOTAL_CURR],
&uid_entry->io[UID_STATE_TOTAL_LAST],
&uid_entry->io[UID_STATE_DEAD_TASKS]);
compute_io_uid_tasks(uid_entry);
}
unlock_uid_by_bkt(bkt);
}
}
#ifndef CONFIG_UID_SYS_STATS_DEBUG
static void update_io_stats_uid(struct uid_entry *uid_entry)
#else
static void update_io_stats_uid_locked(struct uid_entry *uid_entry)
#endif
{
struct task_struct *task, *temp;
struct user_namespace *user_ns = current_user_ns();
memset(&uid_entry->io[UID_STATE_TOTAL_CURR], 0,
sizeof(struct io_stats));
set_io_uid_tasks_zero(uid_entry);
rcu_read_lock();
do_each_thread(temp, task) {
@@ -583,7 +389,6 @@ static void update_io_stats_uid_locked(struct uid_entry *uid_entry)
&uid_entry->io[UID_STATE_TOTAL_CURR],
&uid_entry->io[UID_STATE_TOTAL_LAST],
&uid_entry->io[UID_STATE_DEAD_TASKS]);
compute_io_uid_tasks(uid_entry);
}
@@ -610,8 +415,6 @@ static int uid_io_show(struct seq_file *m, void *v)
uid_entry->io[UID_STATE_BACKGROUND].write_bytes,
uid_entry->io[UID_STATE_FOREGROUND].fsync,
uid_entry->io[UID_STATE_BACKGROUND].fsync);
show_io_uid_tasks(m, uid_entry);
}
unlock_uid_by_bkt(bkt);
}
@@ -643,9 +446,7 @@ static ssize_t uid_procstat_write(struct file *file,
uid_t uid;
int argc, state;
char input[128];
#ifndef CONFIG_UID_SYS_STATS_DEBUG
struct uid_entry uid_entry_tmp;
#endif
if (count >= sizeof(input))
return -EINVAL;
@@ -674,7 +475,6 @@ static ssize_t uid_procstat_write(struct file *file,
return count;
}
#ifndef CONFIG_UID_SYS_STATS_DEBUG
/*
* update_io_stats_uid_locked() would hold uid_lock for a long time,
* because it calls do_each_thread() to compute uid_entry->io, which would
@@ -684,9 +484,8 @@ static ssize_t uid_procstat_write(struct file *file,
* so that we can unlock_uid during update_io_stats_uid, in order
* to avoid the unnecessary lock-time of uid_lock.
*/
uid_entry_tmp.uid = uid_entry->uid;
memcpy(uid_entry_tmp.io, uid_entry->io,
sizeof(struct io_stats) * UID_STATE_SIZE);
uid_entry_tmp = *uid_entry;
unlock_uid(uid);
update_io_stats_uid(&uid_entry_tmp);
@@ -700,13 +499,6 @@ static ssize_t uid_procstat_write(struct file *file,
}
}
unlock_uid(uid);
#else
update_io_stats_uid_locked(uid_entry);
uid_entry->state = state;
unlock_uid(uid);
#endif
return count;
}
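The hunk above does two things: it inlines the non-debug path, and it fixes the snapshot, since copying only uid and the io[] array left the rest of uid_entry_tmp uninitialized; the struct assignment now copies the whole entry before the lock is dropped. A minimal userspace sketch of the copy-then-unlock pattern (pthreads, hypothetical types and names; not the driver code itself):

    #include <pthread.h>
    #include <string.h>

    struct entry { unsigned int uid; long long io[4]; };

    static pthread_mutex_t uid_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Expensive recomputation; runs on a private copy so uid_lock stays free. */
    static void update_io_stats(struct entry *snap)
    {
        /* ... walk all tasks and accumulate into snap->io ... */
    }

    static void procstat_update(struct entry *e)
    {
        struct entry tmp;

        pthread_mutex_lock(&uid_lock);
        tmp = *e;                      /* snapshot the whole entry, not just io[] */
        pthread_mutex_unlock(&uid_lock);

        update_io_stats(&tmp);         /* slow path, lock not held */

        pthread_mutex_lock(&uid_lock);
        memcpy(e->io, tmp.io, sizeof(e->io));  /* publish the results */
        pthread_mutex_unlock(&uid_lock);
    }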
@@ -719,9 +511,6 @@ static const struct proc_ops uid_procstat_fops = {
struct update_stats_work {
uid_t uid;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
struct task_struct *task;
#endif
struct task_io_accounting ioac;
u64 utime;
u64 stime;
@@ -747,19 +536,9 @@ static void update_stats_workfn(struct work_struct *work)
uid_entry->utime += usw->utime;
uid_entry->stime += usw->stime;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
task_entry = find_task_entry(uid_entry, usw->task);
if (!task_entry)
goto next;
add_uid_tasks_io_stats(task_entry, &usw->ioac,
UID_STATE_DEAD_TASKS);
#endif
__add_uid_io_stats(uid_entry, &usw->ioac, UID_STATE_DEAD_TASKS);
next:
unlock_uid(usw->uid);
#ifdef CONFIG_UID_SYS_STATS_DEBUG
put_task_struct(usw->task);
#endif
kfree(usw);
}
@@ -784,9 +563,6 @@ static int process_notifier(struct notifier_block *self,
usw = kmalloc(sizeof(struct update_stats_work), GFP_KERNEL);
if (usw) {
usw->uid = uid;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
usw->task = get_task_struct(task);
#endif
/*
* Copy task->ioac since task might be destroyed before
* the work is later performed.


@@ -119,13 +119,12 @@ void mmc_retune_enable(struct mmc_host *host)
/*
* Pause re-tuning for a small set of operations. The pause begins after the
* next command and after first doing re-tuning.
* next command.
*/
void mmc_retune_pause(struct mmc_host *host)
{
if (!host->retune_paused) {
host->retune_paused = 1;
mmc_retune_needed(host);
mmc_retune_hold(host);
}
}
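The one-line change swaps a forced re-tune for a plain hold: mmc_retune_needed() flags that a re-tune must happen, while mmc_retune_hold() merely defers any re-tuning until the hold is released. A toy model of the two flags (illustrative only; the field and function names are simplified stand-ins for the mmc core internals):

    #include <stdio.h>
    #include <stdbool.h>

    struct host { int hold_count; bool need_retune; bool retune_paused; };

    static void retune_hold(struct host *h)   { h->hold_count++; }       /* defer only */
    static void retune_needed(struct host *h) { h->need_retune = true; } /* force */

    static void retune_pause(struct host *h, bool old_behaviour)
    {
        if (!h->retune_paused) {
            h->retune_paused = true;
            if (old_behaviour)
                retune_needed(h);   /* old code: forced a re-tune first */
            retune_hold(h);
        }
    }

    int main(void)
    {
        struct host h = { 0 };

        retune_pause(&h, false);    /* new behaviour */
        printf("hold=%d need_retune=%d\n", h.hold_count, h.need_retune);
        return 0;                   /* prints hold=1 need_retune=0 */
    }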


@@ -813,7 +813,7 @@ static void dwc3_ep0_inspect_setup(struct dwc3 *dwc,
int ret = -EINVAL;
u32 len;
if (!dwc->gadget_driver)
if (!dwc->gadget_driver || !dwc->connected)
goto out;
trace_dwc3_ctrl_req(ctrl);


@@ -139,6 +139,24 @@ int dwc3_gadget_set_link_state(struct dwc3 *dwc, enum dwc3_link_state state)
return -ETIMEDOUT;
}
static void dwc3_ep0_reset_state(struct dwc3 *dwc)
{
unsigned int dir;
if (dwc->ep0state != EP0_SETUP_PHASE) {
dir = !!dwc->ep0_expect_in;
if (dwc->ep0state == EP0_DATA_PHASE)
dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
else
dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
dwc->eps[0]->trb_enqueue = 0;
dwc->eps[1]->trb_enqueue = 0;
dwc3_ep0_stall_and_restart(dwc);
}
}
/**
* dwc3_ep_inc_trb - increment a trb index.
* @index: Pointer to the TRB index to increment.
@@ -2021,7 +2039,17 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
list_for_each_entry(r, &dep->pending_list, list) {
if (r == req) {
dwc3_gadget_giveback(dep, req, -ECONNRESET);
/*
* Explicitly check for EP0/1, as dequeue for those
* EPs needs to be handled differently. The control EP
* only deals with one USB req, and giveback will
* occur during dwc3_ep0_stall_and_restart(). EP0
* requests are never added to started_list.
*/
if (dep->number > 1)
dwc3_gadget_giveback(dep, req, -ECONNRESET);
else
dwc3_ep0_reset_state(dwc);
goto out;
}
}
@@ -2469,10 +2497,18 @@ static int __dwc3_gadget_start(struct dwc3 *dwc);
static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&dwc->lock, flags);
dwc->connected = false;
/*
* Attempt to end the pending SETUP status phase, without waiting
* for the function driver to do so.
*/
if (dwc->delayed_status)
dwc3_ep0_send_delayed_status(dwc);
/*
* In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
* Section 4.1.8 Table 4-7, it states that for a device-initiated
@@ -2481,9 +2517,28 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
* bit.
*/
dwc3_stop_active_transfers(dwc);
__dwc3_gadget_stop(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
/*
* Per databook, when we want to stop the gadget, if a control transfer
* is still in process, complete it and get the core into setup phase.
* In case the host is unresponsive to a SETUP transaction, forcefully
* stall the transfer, and move back to the SETUP phase, so that any
* pending endxfers can be executed.
*/
if (dwc->ep0state != EP0_SETUP_PHASE) {
reinit_completion(&dwc->ep0_in_setup);
ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
if (ret == 0) {
dev_warn(dwc->dev, "wait for SETUP phase timed out\n");
spin_lock_irqsave(&dwc->lock, flags);
dwc3_ep0_reset_state(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
}
}
/*
* Note: if the GEVNTCOUNT indicates events in the event buffer, the
* driver needs to acknowledge them before the controller can halt.
@@ -2491,7 +2546,19 @@ static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
* remaining event generated by the controller while polling for
* DSTS.DEVCTLHLT.
*/
return dwc3_gadget_run_stop(dwc, false);
ret = dwc3_gadget_run_stop(dwc, false);
/*
* Stop the gadget after the controller is halted, so that, if needed, the
* events to update EP0 state can still occur while the run/stop
* routine polls for the halted state. DEVTEN is cleared as part of
* gadget stop.
*/
spin_lock_irqsave(&dwc->lock, flags);
__dwc3_gadget_stop(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
return ret;
}
static int dwc3_gadget_soft_connect(struct dwc3 *dwc)
@@ -2517,18 +2584,6 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
is_on = !!is_on;
dwc->softconnect = is_on;
/*
* Per databook, when we want to stop the gadget, if a control transfer
* is still in process, complete it and get the core into setup phase.
*/
if (!is_on && dwc->ep0state != EP0_SETUP_PHASE) {
reinit_completion(&dwc->ep0_in_setup);
ret = wait_for_completion_timeout(&dwc->ep0_in_setup,
msecs_to_jiffies(DWC3_PULL_UP_TIMEOUT));
if (ret == 0)
dev_warn(dwc->dev, "timed out waiting for SETUP phase\n");
}
/*
* Avoid issuing a runtime resume if the device is already in the
@@ -3720,13 +3775,15 @@ static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc)
reg &= ~DWC3_DCTL_INITU2ENA;
dwc3_gadget_dctl_write_safe(dwc, reg);
dwc->connected = false;
dwc3_disconnect_gadget(dwc);
dwc->gadget->speed = USB_SPEED_UNKNOWN;
dwc->setup_packet_pending = false;
usb_gadget_set_state(dwc->gadget, USB_STATE_NOTATTACHED);
dwc->connected = false;
dwc3_ep0_reset_state(dwc);
}
static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
@@ -3782,20 +3839,7 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc)
* phase. So ensure that EP0 is in setup phase by issuing a stall
* and restart if EP0 is not in setup phase.
*/
if (dwc->ep0state != EP0_SETUP_PHASE) {
unsigned int dir;
dir = !!dwc->ep0_expect_in;
if (dwc->ep0state == EP0_DATA_PHASE)
dwc3_ep0_end_control_data(dwc, dwc->eps[dir]);
else
dwc3_ep0_end_control_data(dwc, dwc->eps[!dir]);
dwc->eps[0]->trb_enqueue = 0;
dwc->eps[1]->trb_enqueue = 0;
dwc3_ep0_stall_and_restart(dwc);
}
dwc3_ep0_reset_state(dwc);
/*
* In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a


@@ -1023,40 +1023,30 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
if (!f->fs_descriptors)
goto fail_f_midi;
if (gadget_is_dualspeed(c->cdev->gadget)) {
bulk_in_desc.wMaxPacketSize = cpu_to_le16(512);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(512);
f->hs_descriptors = usb_copy_descriptors(midi_function);
if (!f->hs_descriptors)
goto fail_f_midi;
}
bulk_in_desc.wMaxPacketSize = cpu_to_le16(512);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(512);
f->hs_descriptors = usb_copy_descriptors(midi_function);
if (!f->hs_descriptors)
goto fail_f_midi;
if (gadget_is_superspeed(c->cdev->gadget)) {
bulk_in_desc.wMaxPacketSize = cpu_to_le16(1024);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(1024);
i = endpoint_descriptor_index;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_out_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_out_ss_comp_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&ms_out_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_in_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_in_ss_comp_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&ms_in_desc;
f->ss_descriptors = usb_copy_descriptors(midi_function);
if (!f->ss_descriptors)
goto fail_f_midi;
if (gadget_is_superspeed_plus(c->cdev->gadget)) {
f->ssp_descriptors = usb_copy_descriptors(midi_function);
if (!f->ssp_descriptors)
goto fail_f_midi;
}
}
bulk_in_desc.wMaxPacketSize = cpu_to_le16(1024);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(1024);
i = endpoint_descriptor_index;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_out_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_out_ss_comp_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&ms_out_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_in_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&bulk_in_ss_comp_desc;
midi_function[i++] = (struct usb_descriptor_header *)
&ms_in_desc;
f->ss_descriptors = usb_copy_descriptors(midi_function);
if (!f->ss_descriptors)
goto fail_f_midi;
kfree(midi_function);


@@ -263,10 +263,13 @@ uvc_function_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
return 0;
}
void uvc_function_setup_continue(struct uvc_device *uvc)
void uvc_function_setup_continue(struct uvc_device *uvc, int disable_ep)
{
struct usb_composite_dev *cdev = uvc->func.config->cdev;
if (disable_ep && uvc->video.ep)
usb_ep_disable(uvc->video.ep);
usb_composite_setup_continue(cdev);
}
@@ -334,15 +337,11 @@ uvc_function_set_alt(struct usb_function *f, unsigned interface, unsigned alt)
if (uvc->state != UVC_STATE_STREAMING)
return 0;
if (uvc->video.ep)
usb_ep_disable(uvc->video.ep);
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_STREAMOFF;
v4l2_event_queue(&uvc->vdev, &v4l2_event);
uvc->state = UVC_STATE_CONNECTED;
return 0;
return USB_GADGET_DELAYED_STATUS;
case 1:
if (uvc->state != UVC_STATE_CONNECTED)
@@ -492,6 +491,7 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
void *mem;
switch (speed) {
case USB_SPEED_SUPER_PLUS:
case USB_SPEED_SUPER:
uvc_control_desc = uvc->desc.ss_control;
uvc_streaming_cls = uvc->desc.ss_streaming;
@@ -536,7 +536,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
+ uvc_control_ep.bLength + uvc_control_cs_ep.bLength
+ uvc_streaming_intf_alt0.bLength;
if (speed == USB_SPEED_SUPER) {
if (speed == USB_SPEED_SUPER ||
speed == USB_SPEED_SUPER_PLUS) {
bytes += uvc_ss_control_comp.bLength;
n_desc = 6;
} else {
@@ -580,7 +581,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
uvc_control_header->baInterfaceNr[0] = uvc->streaming_intf;
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_control_ep);
if (speed == USB_SPEED_SUPER)
if (speed == USB_SPEED_SUPER
|| speed == USB_SPEED_SUPER_PLUS)
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_ss_control_comp);
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_control_cs_ep);
@@ -673,21 +675,13 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
}
uvc->control_ep = ep;
if (gadget_is_superspeed(c->cdev->gadget))
ep = usb_ep_autoconfig_ss(cdev->gadget, &uvc_ss_streaming_ep,
&uvc_ss_streaming_comp);
else if (gadget_is_dualspeed(cdev->gadget))
ep = usb_ep_autoconfig(cdev->gadget, &uvc_hs_streaming_ep);
else
ep = usb_ep_autoconfig(cdev->gadget, &uvc_fs_streaming_ep);
ep = usb_ep_autoconfig(cdev->gadget, &uvc_fs_streaming_ep);
if (!ep) {
uvcg_info(f, "Unable to allocate streaming EP\n");
goto error;
}
uvc->video.ep = ep;
uvc_fs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
uvc_hs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
uvc_ss_streaming_ep.bEndpointAddress = uvc->video.ep->address;
@@ -726,21 +720,26 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
f->fs_descriptors = NULL;
goto error;
}
if (gadget_is_dualspeed(cdev->gadget)) {
f->hs_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_HIGH);
if (IS_ERR(f->hs_descriptors)) {
ret = PTR_ERR(f->hs_descriptors);
f->hs_descriptors = NULL;
goto error;
}
f->hs_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_HIGH);
if (IS_ERR(f->hs_descriptors)) {
ret = PTR_ERR(f->hs_descriptors);
f->hs_descriptors = NULL;
goto error;
}
if (gadget_is_superspeed(c->cdev->gadget)) {
f->ss_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER);
if (IS_ERR(f->ss_descriptors)) {
ret = PTR_ERR(f->ss_descriptors);
f->ss_descriptors = NULL;
goto error;
}
f->ss_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER);
if (IS_ERR(f->ss_descriptors)) {
ret = PTR_ERR(f->ss_descriptors);
f->ss_descriptors = NULL;
goto error;
}
f->ssp_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER_PLUS);
if (IS_ERR(f->ssp_descriptors)) {
ret = PTR_ERR(f->ssp_descriptors);
f->ssp_descriptors = NULL;
goto error;
}
/* Preallocate control endpoint request. */


@@ -11,7 +11,7 @@
struct uvc_device;
void uvc_function_setup_continue(struct uvc_device *uvc);
void uvc_function_setup_continue(struct uvc_device *uvc, int disable_ep);
void uvc_function_connect(struct uvc_device *uvc);


@@ -20,7 +20,6 @@ struct f_phonet_opts {
struct net_device *gphonet_setup_default(void);
void gphonet_set_gadget(struct net_device *net, struct usb_gadget *g);
int gphonet_register_netdev(struct net_device *net);
int phonet_bind_config(struct usb_configuration *c, struct net_device *dev);
void gphonet_cleanup(struct net_device *dev);
#endif /* __U_PHONET_H */


@@ -71,8 +71,4 @@ void gserial_disconnect(struct gserial *);
void gserial_suspend(struct gserial *p);
void gserial_resume(struct gserial *p);
/* functions are bound to configurations by a config or gadget driver */
int gser_bind_config(struct usb_configuration *c, u8 port_num);
int obex_bind_config(struct usb_configuration *c, u8 port_num);
#endif /* __U_SERIAL_H */


@@ -81,6 +81,7 @@ struct uvc_request {
struct sg_table sgt;
u8 header[UVCG_REQUEST_HEADER_LEN];
struct uvc_buffer *last_buf;
struct list_head list;
};
struct uvc_video {
@@ -101,9 +102,18 @@ struct uvc_video {
unsigned int uvc_num_requests;
/* Requests */
bool is_enabled; /* tracks whether video stream is enabled */
unsigned int req_size;
struct uvc_request *ureq;
struct list_head ureqs; /* all uvc_requests allocated by uvc_video */
/* USB requests that the video pump thread can encode into */
struct list_head req_free;
/*
* USB requests the video pump thread has already encoded into. These are
* ready to be queued to the endpoint.
*/
struct list_head req_ready;
spinlock_t req_lock;
unsigned int req_int_count;
@@ -175,9 +185,7 @@ struct uvc_file_handle {
* Functions
*/
extern void uvc_function_setup_continue(struct uvc_device *uvc);
extern void uvc_endpoint_stream(struct uvc_device *dev);
extern void uvc_function_setup_continue(struct uvc_device *uvc, int disable_ep);
extern void uvc_function_connect(struct uvc_device *uvc);
extern void uvc_function_disconnect(struct uvc_device *uvc);


@@ -449,7 +449,7 @@ uvc_v4l2_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
return -EINVAL;
/* Enable UVC video. */
ret = uvcg_video_enable(video, 1);
ret = uvcg_video_enable(video);
if (ret < 0)
return ret;
@@ -457,7 +457,7 @@ uvc_v4l2_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
* Complete the alternate setting selection setup phase now that
* userspace is ready to provide video frames.
*/
uvc_function_setup_continue(uvc);
uvc_function_setup_continue(uvc, 0);
uvc->state = UVC_STATE_STREAMING;
return 0;
@@ -469,11 +469,18 @@ uvc_v4l2_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
struct video_device *vdev = video_devdata(file);
struct uvc_device *uvc = video_get_drvdata(vdev);
struct uvc_video *video = &uvc->video;
int ret = 0;
if (type != video->queue.queue.type)
return -EINVAL;
return uvcg_video_enable(video, 0);
ret = uvcg_video_disable(video);
if (ret < 0)
return ret;
uvc->state = UVC_STATE_CONNECTED;
uvc_function_setup_continue(uvc, 1);
return 0;
}
static int
@@ -506,7 +513,7 @@ uvc_v4l2_subscribe_event(struct v4l2_fh *fh,
static void uvc_v4l2_disable(struct uvc_device *uvc)
{
uvc_function_disconnect(uvc);
uvcg_video_enable(&uvc->video, 0);
uvcg_video_disable(&uvc->video);
uvcg_free_buffers(&uvc->video.queue);
uvc->func_connected = false;
wake_up_interruptible(&uvc->func_connected_queue);
@@ -653,4 +660,3 @@ const struct v4l2_file_operations uvc_v4l2_fops = {
.get_unmapped_area = uvcg_v4l2_get_unmapped_area,
#endif
};


@@ -227,6 +227,28 @@ uvc_video_encode_isoc(struct usb_request *req, struct uvc_video *video,
* Request handling
*/
/*
* Callers must take care to hold req_lock when this function may be called
* from multiple threads. For example, when frames are streaming to the host.
*/
static void
uvc_video_free_request(struct uvc_request *ureq, struct usb_ep *ep)
{
sg_free_table(&ureq->sgt);
if (ureq->req && ep) {
usb_ep_free_request(ep, ureq->req);
ureq->req = NULL;
}
kfree(ureq->req_buffer);
ureq->req_buffer = NULL;
if (!list_empty(&ureq->list))
list_del_init(&ureq->list);
kfree(ureq);
}
static int uvcg_video_ep_queue(struct uvc_video *video, struct usb_request *req)
{
int ret;
@@ -247,14 +269,127 @@ static int uvcg_video_ep_queue(struct uvc_video *video, struct usb_request *req)
return ret;
}
/* This function must be called with video->req_lock held. */
static int uvcg_video_usb_req_queue(struct uvc_video *video,
struct usb_request *req, bool queue_to_ep)
{
bool is_bulk = video->max_payload_size;
struct list_head *list = NULL;
if (!video->is_enabled)
return -ENODEV;
if (queue_to_ep) {
struct uvc_request *ureq = req->context;
/*
* With USB3 handling more requests at a higher speed, we can't
* afford to generate an interrupt for every request. Decide to
* interrupt:
*
* - When no more requests are available in the free queue, as
* this may be our last chance to refill the endpoint's
* request queue.
*
* - When this request is the last request for the video
* buffer, as we want to start sending the next video buffer
* ASAP in case it doesn't get started already in the next
* iteration of this loop.
*
* - Four times over the length of the requests queue (as
* indicated by video->uvc_num_requests), as a trade-off
* between latency and interrupt load.
*/
if (list_empty(&video->req_free) || ureq->last_buf ||
!(video->req_int_count %
DIV_ROUND_UP(video->uvc_num_requests, 4))) {
video->req_int_count = 0;
req->no_interrupt = 0;
} else {
req->no_interrupt = 1;
}
video->req_int_count++;
return uvcg_video_ep_queue(video, req);
}
/*
* If we're not queuing to the ep, for isoc we're queuing
* to the req_ready list, otherwise req_free.
*/
list = is_bulk ? &video->req_free : &video->req_ready;
list_add_tail(&req->list, list);
return 0;
}
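With the throttle above and, say, uvc_num_requests = 64 (an assumed queue depth), DIV_ROUND_UP(64, 4) = 16, so completion interrupts fire on roughly every 16th request, plus whenever req_free runs empty or a video buffer finishes. A standalone model of just the counter logic:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned int uvc_num_requests = 64;     /* assumed queue depth */
        unsigned int req_int_count = 0;

        for (int i = 0; i < 40; i++) {
            int no_interrupt;

            if (!(req_int_count % DIV_ROUND_UP(uvc_num_requests, 4))) {
                req_int_count = 0;      /* mirrors the reset in the driver */
                no_interrupt = 0;       /* interrupt on this request */
            } else {
                no_interrupt = 1;
            }
            req_int_count++;
            printf("req %2d -> no_interrupt=%d\n", i, no_interrupt);
        }
        return 0;       /* interrupts at requests 0, 16, 32, ... */
    }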
/*
* Must only be called from uvcg_video_enable - since after that we only want to
* queue requests to the endpoint from the uvc_video_complete() handler.
* This function is needed in order to 'kick start' the flow of requests from
* the gadget driver to the usb controller.
*/
static void uvc_video_ep_queue_initial_requests(struct uvc_video *video)
{
struct usb_request *req = NULL;
unsigned long flags = 0;
unsigned int count = 0;
int ret = 0;
/*
* We only queue half of the free list since we still want to have
* some free usb_requests in the free list for the video_pump async_wq
* thread to encode uvc buffers into. Otherwise we could get into a
* situation where the free list does not have any usb_requests to
* encode into, and we would always end up queueing 0 length requests
* to the endpoint.
*/
unsigned int half_list_size = video->uvc_num_requests / 2;
spin_lock_irqsave(&video->req_lock, flags);
/*
* Take these requests off the free list and queue them all to the
* endpoint. Since we queue 0 length requests with the req_lock held,
* there isn't any 'data' race involved here with the complete handler.
*/
while (count < half_list_size) {
req = list_first_entry(&video->req_free, struct usb_request,
list);
list_del(&req->list);
req->length = 0;
ret = uvcg_video_ep_queue(video, req);
if (ret < 0) {
uvcg_queue_cancel(&video->queue, 0);
break;
}
count++;
}
spin_unlock_irqrestore(&video->req_lock, flags);
}
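As a worked example of the half-split (assuming uvc_num_requests = 16): eight zero-length requests are primed to the endpoint here, and the other eight remain on req_free for the video_pump worker to encode into, so the pump cannot be starved at startup. A trivial check of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        unsigned int uvc_num_requests = 16;  /* assumed allocation count */
        unsigned int queued_to_ep = uvc_num_requests / 2;
        unsigned int left_on_req_free = uvc_num_requests - queued_to_ep;

        printf("primed to endpoint: %u, left for video_pump: %u\n",
               queued_to_ep, left_on_req_free);
        return 0;   /* primed to endpoint: 8, left for video_pump: 8 */
    }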
static void
uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
{
struct uvc_request *ureq = req->context;
struct uvc_video *video = ureq->video;
struct uvc_video_queue *queue = &video->queue;
struct uvc_device *uvc = video->uvc;
struct uvc_buffer *last_buf;
unsigned long flags;
bool is_bulk = video->max_payload_size;
int ret = 0;
spin_lock_irqsave(&video->req_lock, flags);
if (!video->is_enabled) {
/*
* When is_enabled is false, uvcg_video_disable() ensures
* that in-flight uvc_buffers are returned, so we can
* safely call free_request without worrying about
* last_buf.
*/
uvc_video_free_request(ureq, ep);
spin_unlock_irqrestore(&video->req_lock, flags);
return;
}
last_buf = ureq->last_buf;
ureq->last_buf = NULL;
spin_unlock_irqrestore(&video->req_lock, flags);
switch (req->status) {
case 0:
@@ -277,44 +412,85 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
uvcg_queue_cancel(queue, 0);
}
if (ureq->last_buf) {
uvcg_complete_buffer(&video->queue, ureq->last_buf);
ureq->last_buf = NULL;
if (last_buf) {
spin_lock_irqsave(&queue->irqlock, flags);
uvcg_complete_buffer(queue, last_buf);
spin_unlock_irqrestore(&queue->irqlock, flags);
}
spin_lock_irqsave(&video->req_lock, flags);
list_add_tail(&req->list, &video->req_free);
spin_unlock_irqrestore(&video->req_lock, flags);
/*
* Video stream might have been disabled while we were
* processing the current usb_request. So make sure
* we're still streaming before queueing the usb_request
* back to req_free
*/
if (video->is_enabled) {
/*
* Here we check whether any request is available in the ready
* list. If it is, queue it to the ep and add the current
* usb_request to the req_free list - for video_pump to fill in.
* Otherwise, just use the current usb_request to queue a 0
* length request to the ep. Since we always add to the req_free
* list if we dequeue from the ready list, there will never
* be a situation where the req_free list is completely out of
* requests and cannot recover.
*/
struct usb_request *to_queue = req;
if (uvc->state == UVC_STATE_STREAMING)
queue_work(video->async_wq, &video->pump);
to_queue->length = 0;
if (!list_empty(&video->req_ready)) {
to_queue = list_first_entry(&video->req_ready,
struct usb_request, list);
list_del(&to_queue->list);
list_add_tail(&req->list, &video->req_free);
/*
* Queue work to the wq as well since it is possible that a
* buffer may not have been completely encoded with the set of
* in-flight usb requests for which the complete callbacks are
* firing.
* In that case, if we do not queue work to the worker thread,
* the buffer will never be marked as complete - and therefore
* not be returned to userspace. As a result,
* dequeue -> queue -> dequeue flow of uvc buffers will not
* happen.
*/
queue_work(video->async_wq, &video->pump);
}
/*
* Queue to the endpoint. The actual queueing to ep will
* only happen on one thread - the async_wq for bulk endpoints
* and this thread for isoc endpoints.
*/
ret = uvcg_video_usb_req_queue(video, to_queue, !is_bulk);
if (ret < 0) {
/*
* Endpoint error, but the stream is still enabled.
* Put request back in req_free for it to be cleaned
* up later.
*/
list_add_tail(&to_queue->list, &video->req_free);
}
} else {
uvc_video_free_request(ureq, ep);
ret = 0;
}
spin_unlock_irqrestore(&video->req_lock, flags);
if (ret < 0)
uvcg_queue_cancel(queue, 0);
}
static int
uvc_video_free_requests(struct uvc_video *video)
{
unsigned int i;
struct uvc_request *ureq, *temp;
if (video->ureq) {
for (i = 0; i < video->uvc_num_requests; ++i) {
sg_free_table(&video->ureq[i].sgt);
if (video->ureq[i].req) {
usb_ep_free_request(video->ep, video->ureq[i].req);
video->ureq[i].req = NULL;
}
if (video->ureq[i].req_buffer) {
kfree(video->ureq[i].req_buffer);
video->ureq[i].req_buffer = NULL;
}
}
kfree(video->ureq);
video->ureq = NULL;
}
list_for_each_entry_safe(ureq, temp, &video->ureqs, list)
uvc_video_free_request(ureq, video->ep);
INIT_LIST_HEAD(&video->ureqs);
INIT_LIST_HEAD(&video->req_free);
INIT_LIST_HEAD(&video->req_ready);
video->req_size = 0;
return 0;
}
@@ -322,6 +498,7 @@ uvc_video_free_requests(struct uvc_video *video)
static int
uvc_video_alloc_requests(struct uvc_video *video)
{
struct uvc_request *ureq;
unsigned int req_size;
unsigned int i;
int ret = -ENOMEM;
@@ -332,29 +509,33 @@ uvc_video_alloc_requests(struct uvc_video *video)
* max_t(unsigned int, video->ep->maxburst, 1)
* (video->ep->mult);
video->ureq = kcalloc(video->uvc_num_requests, sizeof(struct uvc_request), GFP_KERNEL);
if (video->ureq == NULL)
return -ENOMEM;
for (i = 0; i < video->uvc_num_requests; ++i) {
video->ureq[i].req_buffer = kmalloc(req_size, GFP_KERNEL);
if (video->ureq[i].req_buffer == NULL)
for (i = 0; i < video->uvc_num_requests; i++) {
ureq = kzalloc(sizeof(struct uvc_request), GFP_KERNEL);
if (ureq == NULL)
goto error;
video->ureq[i].req = usb_ep_alloc_request(video->ep, GFP_KERNEL);
if (video->ureq[i].req == NULL)
INIT_LIST_HEAD(&ureq->list);
list_add_tail(&ureq->list, &video->ureqs);
ureq->req_buffer = kmalloc(req_size, GFP_KERNEL);
if (ureq->req_buffer == NULL)
goto error;
video->ureq[i].req->buf = video->ureq[i].req_buffer;
video->ureq[i].req->length = 0;
video->ureq[i].req->complete = uvc_video_complete;
video->ureq[i].req->context = &video->ureq[i];
video->ureq[i].video = video;
video->ureq[i].last_buf = NULL;
ureq->req = usb_ep_alloc_request(video->ep, GFP_KERNEL);
if (ureq->req == NULL)
goto error;
list_add_tail(&video->ureq[i].req->list, &video->req_free);
ureq->req->buf = ureq->req_buffer;
ureq->req->length = 0;
ureq->req->complete = uvc_video_complete;
ureq->req->context = ureq;
ureq->video = video;
ureq->last_buf = NULL;
list_add_tail(&ureq->req->list, &video->req_free);
/* req_size/PAGE_SIZE + 1 for overruns and + 1 for header */
sg_alloc_table(&video->ureq[i].sgt,
sg_alloc_table(&ureq->sgt,
DIV_ROUND_UP(req_size - UVCG_REQUEST_HEADER_LEN,
PAGE_SIZE) + 2, GFP_KERNEL);
}
@@ -382,21 +563,23 @@ static void uvcg_video_pump(struct work_struct *work)
{
struct uvc_video *video = container_of(work, struct uvc_video, pump);
struct uvc_video_queue *queue = &video->queue;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
struct usb_request *req = NULL;
struct uvc_buffer *buf;
unsigned long flags;
int ret;
bool buf_int;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
int ret = 0;
while (true) {
if (!video->ep->enabled)
return;
while (video->ep->enabled) {
/*
* Retrieve the first available USB request, protected by the
* request lock.
* Check is_enabled and retrieve the first available USB
* request, protected by the request lock.
*/
spin_lock_irqsave(&video->req_lock, flags);
if (list_empty(&video->req_free)) {
if (!video->is_enabled || list_empty(&video->req_free)) {
spin_unlock_irqrestore(&video->req_lock, flags);
return;
}
@@ -414,69 +597,133 @@ static void uvcg_video_pump(struct work_struct *work)
if (buf != NULL) {
video->encode(req, video, buf);
/* Always interrupt for the last request of a video buffer */
buf_int = buf->state == UVC_BUF_STATE_DONE;
} else if (!(queue->flags & UVC_QUEUE_DISCONNECTED) && !is_bulk) {
/*
* No video buffer available; the queue is still connected and
* we're transferring over ISOC. Queue a 0 length request to
* prevent missed ISOC transfers.
*/
req->length = 0;
buf_int = false;
} else {
/*
* Either queue has been disconnected or no video buffer
* available to bulk transfer. Either way, stop processing
* Either the queue has been disconnected or no video buffer
* available for bulk transfer. Either way, stop processing
* further.
*/
spin_unlock_irqrestore(&queue->irqlock, flags);
break;
}
/*
* With usb3 we have more requests. This will decrease the
* interrupt load to a quarter but also catches the corner
* cases, which needs to be handled.
*/
if (list_empty(&video->req_free) || buf_int ||
!(video->req_int_count %
DIV_ROUND_UP(video->uvc_num_requests, 4))) {
video->req_int_count = 0;
req->no_interrupt = 0;
} else {
req->no_interrupt = 1;
}
/* Queue the USB request */
ret = uvcg_video_ep_queue(video, req);
spin_unlock_irqrestore(&queue->irqlock, flags);
spin_lock_irqsave(&video->req_lock, flags);
/*
 * For bulk endpoints we queue from the worker thread, since we
 * would prefer not to wait on requests to become ready in the
 * uvcg_video_complete() handler.
 * For isoc endpoints we add the request to the ready list
 * and only queue it to the endpoint from the complete handler.
 */
ret = uvcg_video_usb_req_queue(video, req, is_bulk);
spin_unlock_irqrestore(&video->req_lock, flags);
if (ret < 0) {
uvcg_queue_cancel(queue, 0);
break;
}
/* Endpoint now owns the request */
/* The request is owned by the endpoint / ready list. */
req = NULL;
video->req_int_count++;
}
if (!req)
return;
spin_lock_irqsave(&video->req_lock, flags);
list_add_tail(&req->list, &video->req_free);
if (video->is_enabled)
list_add_tail(&req->list, &video->req_free);
else
uvc_video_free_request(req->context, video->ep);
spin_unlock_irqrestore(&video->req_lock, flags);
return;
}
/*
* Enable or disable the video stream.
* Disable the video stream
*/
int uvcg_video_enable(struct uvc_video *video, int enable)
int
uvcg_video_disable(struct uvc_video *video)
{
unsigned long flags;
struct list_head inflight_bufs;
struct usb_request *req, *temp;
struct uvc_buffer *buf, *btemp;
struct uvc_request *ureq, *utemp;
if (video->ep == NULL) {
uvcg_info(&video->uvc->func,
"Video disable failed, device is uninitialized.\n");
return -ENODEV;
}
INIT_LIST_HEAD(&inflight_bufs);
spin_lock_irqsave(&video->req_lock, flags);
video->is_enabled = false;
/*
* Remove any in-flight buffers from the uvc_requests
* because we want to return them before cancelling the
* queue. This ensures that we aren't stuck waiting for
* all complete callbacks to come through before disabling
* vb2 queue.
*/
list_for_each_entry(ureq, &video->ureqs, list) {
if (ureq->last_buf) {
list_add_tail(&ureq->last_buf->queue, &inflight_bufs);
ureq->last_buf = NULL;
}
}
spin_unlock_irqrestore(&video->req_lock, flags);
cancel_work_sync(&video->pump);
uvcg_queue_cancel(&video->queue, 0);
spin_lock_irqsave(&video->req_lock, flags);
/*
* Remove all uvc_requests from ureqs with list_del_init().
* This lets uvc_video_free_request correctly identify
* if the uvc_request is attached to a list or not when freeing
* memory.
*/
list_for_each_entry_safe(ureq, utemp, &video->ureqs, list)
list_del_init(&ureq->list);
list_for_each_entry_safe(req, temp, &video->req_free, list) {
list_del(&req->list);
uvc_video_free_request(req->context, video->ep);
}
list_for_each_entry_safe(req, temp, &video->req_ready, list) {
list_del(&req->list);
uvc_video_free_request(req->context, video->ep);
}
INIT_LIST_HEAD(&video->ureqs);
INIT_LIST_HEAD(&video->req_free);
INIT_LIST_HEAD(&video->req_ready);
video->req_size = 0;
spin_unlock_irqrestore(&video->req_lock, flags);
/*
* Return all the video buffers before disabling the queue.
*/
spin_lock_irqsave(&video->queue.irqlock, flags);
list_for_each_entry_safe(buf, btemp, &inflight_bufs, queue) {
list_del(&buf->queue);
uvcg_complete_buffer(&video->queue, buf);
}
spin_unlock_irqrestore(&video->queue.irqlock, flags);
uvcg_queue_enable(&video->queue, 0);
return 0;
}
/*
* Enable the video stream.
*/
int uvcg_video_enable(struct uvc_video *video)
{
unsigned int i;
int ret;
if (video->ep == NULL) {
@@ -485,18 +732,13 @@ int uvcg_video_enable(struct uvc_video *video, int enable)
return -ENODEV;
}
if (!enable) {
cancel_work_sync(&video->pump);
uvcg_queue_cancel(&video->queue, 0);
for (i = 0; i < video->uvc_num_requests; ++i)
if (video->ureq && video->ureq[i].req)
usb_ep_dequeue(video->ep, video->ureq[i].req);
uvc_video_free_requests(video);
uvcg_queue_enable(&video->queue, 0);
return 0;
}
/*
* Safe to access request related fields without req_lock because
* this is the only thread currently active, and no other
* request handling thread will become active until this function
* returns.
*/
video->is_enabled = true;
if ((ret = uvcg_queue_enable(&video->queue, 1)) < 0)
return ret;
@@ -513,7 +755,7 @@ int uvcg_video_enable(struct uvc_video *video, int enable)
video->req_int_count = 0;
queue_work(video->async_wq, &video->pump);
uvc_video_ep_queue_initial_requests(video);
return ret;
}
@@ -523,7 +765,10 @@ int uvcg_video_enable(struct uvc_video *video, int enable)
*/
int uvcg_video_init(struct uvc_video *video, struct uvc_device *uvc)
{
video->is_enabled = false;
INIT_LIST_HEAD(&video->ureqs);
INIT_LIST_HEAD(&video->req_free);
INIT_LIST_HEAD(&video->req_ready);
spin_lock_init(&video->req_lock);
INIT_WORK(&video->pump, uvcg_video_pump);


@@ -14,7 +14,8 @@
struct uvc_video;
int uvcg_video_enable(struct uvc_video *video, int enable);
int uvcg_video_enable(struct uvc_video *video);
int uvcg_video_disable(struct uvc_video *video);
int uvcg_video_init(struct uvc_video *video, struct uvc_device *uvc);


@@ -297,6 +297,10 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
case CMD_EXIT_MODE:
dp->data.status = 0;
dp->data.conf = 0;
if (dp->hpd) {
dp->hpd = false;
sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
}
break;
case DP_CMD_STATUS_UPDATE:
dp->data.status = *vdo;


@@ -114,10 +114,14 @@ out:
int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
sector_t pblk, unsigned int len)
{
const unsigned int blockbits = inode->i_blkbits;
const unsigned int blocksize = 1 << blockbits;
const unsigned int blocks_per_page_bits = PAGE_SHIFT - blockbits;
const unsigned int blocks_per_page = 1 << blocks_per_page_bits;
const struct fscrypt_info *ci = inode->i_crypt_info;
const unsigned int du_bits = ci->ci_data_unit_bits;
const unsigned int du_size = 1U << du_bits;
const unsigned int du_per_page_bits = PAGE_SHIFT - du_bits;
const unsigned int du_per_page = 1U << du_per_page_bits;
u64 du_index = (u64)lblk << (inode->i_blkbits - du_bits);
u64 du_remaining = (u64)len << (inode->i_blkbits - du_bits);
sector_t sector = pblk << (inode->i_blkbits - SECTOR_SHIFT);
struct page *pages[16]; /* write up to 16 pages at a time */
unsigned int nr_pages;
unsigned int i;
@@ -133,8 +137,8 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
len);
BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_VECS);
nr_pages = min_t(unsigned int, ARRAY_SIZE(pages),
(len + blocks_per_page - 1) >> blocks_per_page_bits);
nr_pages = min_t(u64, ARRAY_SIZE(pages),
(du_remaining + du_per_page - 1) >> du_per_page_bits);
/*
* We need at least one page for ciphertext. Allocate the first one
@@ -158,22 +162,23 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
do {
bio_set_dev(bio, inode->i_sb->s_bdev);
bio->bi_iter.bi_sector = pblk << (blockbits - 9);
bio->bi_iter.bi_sector = sector;
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
i = 0;
offset = 0;
do {
err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
ZERO_PAGE(0), pages[i],
blocksize, offset, GFP_NOFS);
err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, du_index,
ZERO_PAGE(0), pages[i],
du_size, offset,
GFP_NOFS);
if (err)
goto out;
lblk++;
pblk++;
len--;
offset += blocksize;
if (offset == PAGE_SIZE || len == 0) {
du_index++;
sector += 1U << (du_bits - SECTOR_SHIFT);
du_remaining--;
offset += du_size;
if (offset == PAGE_SIZE || du_remaining == 0) {
ret = bio_add_page(bio, pages[i++], offset, 0);
if (WARN_ON_ONCE(ret != offset)) {
err = -EIO;
@@ -181,13 +186,13 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
}
offset = 0;
}
} while (i != nr_pages && len != 0);
} while (i != nr_pages && du_remaining != 0);
err = submit_bio_wait(bio);
if (err)
goto out;
bio_reset(bio);
} while (len != 0);
} while (du_remaining != 0);
err = 0;
out:
bio_put(bio);
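All the unit conversions in this hunk are power-of-two shifts. A worked example, assuming 4 KiB filesystem blocks (i_blkbits = 12) and 512-byte data units (du_bits = 9), so there are 8 data units per block:

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SHIFT 9

    int main(void)
    {
        unsigned int i_blkbits = 12, du_bits = 9;       /* assumed sizes */
        uint64_t lblk = 10, len = 3, pblk = 1000;

        uint64_t du_index     = lblk << (i_blkbits - du_bits);      /* 10 * 8 = 80 */
        uint64_t du_remaining = len  << (i_blkbits - du_bits);      /*  3 * 8 = 24 */
        uint64_t sector       = pblk << (i_blkbits - SECTOR_SHIFT); /* 1000 * 8    */

        printf("du_index=%llu du_remaining=%llu sector=%llu\n",
               (unsigned long long)du_index,
               (unsigned long long)du_remaining,
               (unsigned long long)sector);
        return 0;
    }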


@@ -70,14 +70,14 @@ void fscrypt_free_bounce_page(struct page *bounce_page)
EXPORT_SYMBOL(fscrypt_free_bounce_page);
/*
* Generate the IV for the given logical block number within the given file.
* For filenames encryption, lblk_num == 0.
* Generate the IV for the given data unit index within the given file.
* For filenames encryption, index == 0.
*
* Keep this in sync with fscrypt_limit_io_blocks(). fscrypt_limit_io_blocks()
* needs to know about any IV generation methods where the low bits of IV don't
* simply contain the lblk_num (e.g., IV_INO_LBLK_32).
* simply contain the data unit index (e.g., IV_INO_LBLK_32).
*/
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 index,
const struct fscrypt_info *ci)
{
u8 flags = fscrypt_policy_flags(&ci->ci_policy);
@@ -85,29 +85,29 @@ void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
memset(iv, 0, ci->ci_mode->ivsize);
if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
WARN_ON_ONCE(lblk_num > U32_MAX);
WARN_ON_ONCE(index > U32_MAX);
WARN_ON_ONCE(ci->ci_inode->i_ino > U32_MAX);
lblk_num |= (u64)ci->ci_inode->i_ino << 32;
index |= (u64)ci->ci_inode->i_ino << 32;
} else if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) {
WARN_ON_ONCE(lblk_num > U32_MAX);
lblk_num = (u32)(ci->ci_hashed_ino + lblk_num);
WARN_ON_ONCE(index > U32_MAX);
index = (u32)(ci->ci_hashed_ino + index);
} else if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
memcpy(iv->nonce, ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE);
}
iv->lblk_num = cpu_to_le64(lblk_num);
iv->index = cpu_to_le64(index);
}
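For the IV_INO_LBLK_64 branch above, the 64-bit IV value ends up with the data unit index in the low 32 bits and the inode number in the high 32 bits. A compilable sketch of that packing (sample values are arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t index = 80;       /* data unit index; must fit in 32 bits */
        uint64_t ino   = 0x1234;   /* inode number; must fit in 32 bits    */

        /* IV_INO_LBLK_64: low half = data unit index, high half = inode. */
        uint64_t iv = index | (ino << 32);

        printf("iv = 0x%016llx\n", (unsigned long long)iv);
        return 0;       /* iv = 0x0000123400000050 */
    }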
/* Encrypt or decrypt a single filesystem block of file contents */
int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
u64 lblk_num, struct page *src_page,
struct page *dest_page, unsigned int len,
unsigned int offs, gfp_t gfp_flags)
/* Encrypt or decrypt a single "data unit" of file contents. */
int fscrypt_crypt_data_unit(const struct fscrypt_info *ci,
fscrypt_direction_t rw, u64 index,
struct page *src_page, struct page *dest_page,
unsigned int len, unsigned int offs,
gfp_t gfp_flags)
{
union fscrypt_iv iv;
struct skcipher_request *req = NULL;
DECLARE_CRYPTO_WAIT(wait);
struct scatterlist dst, src;
struct fscrypt_info *ci = inode->i_crypt_info;
struct crypto_skcipher *tfm = ci->ci_enc_key.tfm;
int res = 0;
@@ -116,7 +116,7 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
if (WARN_ON_ONCE(len % FSCRYPT_CONTENTS_ALIGNMENT != 0))
return -EINVAL;
fscrypt_generate_iv(&iv, lblk_num, ci);
fscrypt_generate_iv(&iv, index, ci);
req = skcipher_request_alloc(tfm, gfp_flags);
if (!req)
@@ -137,28 +137,29 @@ int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
skcipher_request_free(req);
if (res) {
fscrypt_err(inode, "%scryption failed for block %llu: %d",
(rw == FS_DECRYPT ? "De" : "En"), lblk_num, res);
fscrypt_err(ci->ci_inode,
"%scryption failed for data unit %llu: %d",
(rw == FS_DECRYPT ? "De" : "En"), index, res);
return res;
}
return 0;
}
/**
* fscrypt_encrypt_pagecache_blocks() - Encrypt filesystem blocks from a
* pagecache page
* @page: The locked pagecache page containing the block(s) to encrypt
* @len: Total size of the block(s) to encrypt. Must be a nonzero
* multiple of the filesystem's block size.
* @offs: Byte offset within @page of the first block to encrypt. Must be
* a multiple of the filesystem's block size.
* @gfp_flags: Memory allocation flags. See details below.
* fscrypt_encrypt_pagecache_blocks() - Encrypt data from a pagecache page
* @page: the locked pagecache page containing the data to encrypt
* @len: size of the data to encrypt, in bytes
* @offs: offset within @page of the data to encrypt, in bytes
* @gfp_flags: memory allocation flags; see details below
*
* A new bounce page is allocated, and the specified block(s) are encrypted into
* it. In the bounce page, the ciphertext block(s) will be located at the same
* offsets at which the plaintext block(s) were located in the source page; any
* other parts of the bounce page will be left uninitialized. However, normally
* blocksize == PAGE_SIZE and the whole page is encrypted at once.
* This allocates a new bounce page and encrypts the given data into it. The
* length and offset of the data must be aligned to the file's crypto data unit
* size. Alignment to the filesystem block size fulfills this requirement, as
* the filesystem block size is always a multiple of the data unit size.
*
* In the bounce page, the ciphertext data will be located at the same offset at
* which the plaintext data was located in the source page. Any other parts of
* the bounce page will be left uninitialized.
*
* This is for use by the filesystem's ->writepages() method.
*
@@ -176,28 +177,29 @@ struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
{
const struct inode *inode = page->mapping->host;
const unsigned int blockbits = inode->i_blkbits;
const unsigned int blocksize = 1 << blockbits;
const struct fscrypt_info *ci = inode->i_crypt_info;
const unsigned int du_bits = ci->ci_data_unit_bits;
const unsigned int du_size = 1U << du_bits;
struct page *ciphertext_page;
u64 lblk_num = ((u64)page->index << (PAGE_SHIFT - blockbits)) +
(offs >> blockbits);
u64 index = ((u64)page->index << (PAGE_SHIFT - du_bits)) +
(offs >> du_bits);
unsigned int i;
int err;
if (WARN_ON_ONCE(!PageLocked(page)))
return ERR_PTR(-EINVAL);
if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, blocksize)))
if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, du_size)))
return ERR_PTR(-EINVAL);
ciphertext_page = fscrypt_alloc_bounce_page(gfp_flags);
if (!ciphertext_page)
return ERR_PTR(-ENOMEM);
for (i = offs; i < offs + len; i += blocksize, lblk_num++) {
err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk_num,
page, ciphertext_page,
blocksize, i, gfp_flags);
for (i = offs; i < offs + len; i += du_size, index++) {
err = fscrypt_crypt_data_unit(ci, FS_ENCRYPT, index,
page, ciphertext_page,
du_size, i, gfp_flags);
if (err) {
fscrypt_free_bounce_page(ciphertext_page);
return ERR_PTR(err);
@@ -224,31 +226,34 @@ EXPORT_SYMBOL(fscrypt_encrypt_pagecache_blocks);
* arbitrary page, not necessarily in the original pagecache page. The @inode
* and @lblk_num must be specified, as they can't be determined from @page.
*
* This is not compatible with FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS.
*
* Return: 0 on success; -errno on failure
*/
int fscrypt_encrypt_block_inplace(const struct inode *inode, struct page *page,
unsigned int len, unsigned int offs,
u64 lblk_num, gfp_t gfp_flags)
{
return fscrypt_crypt_block(inode, FS_ENCRYPT, lblk_num, page, page,
len, offs, gfp_flags);
if (WARN_ON_ONCE(inode->i_sb->s_cop->flags &
FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS))
return -EOPNOTSUPP;
return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_ENCRYPT,
lblk_num, page, page, len, offs,
gfp_flags);
}
EXPORT_SYMBOL(fscrypt_encrypt_block_inplace);
/**
* fscrypt_decrypt_pagecache_blocks() - Decrypt filesystem blocks in a
* pagecache page
* @page: The locked pagecache page containing the block(s) to decrypt
* @len: Total size of the block(s) to decrypt. Must be a nonzero
* multiple of the filesystem's block size.
* @offs: Byte offset within @page of the first block to decrypt. Must be
* a multiple of the filesystem's block size.
* fscrypt_decrypt_pagecache_blocks() - Decrypt data from a pagecache page
* @page: the pagecache page containing the data to decrypt
* @len: size of the data to decrypt, in bytes
* @offs: offset within @page of the data to decrypt, in bytes
*
* The specified block(s) are decrypted in-place within the pagecache page,
* which must still be locked and not uptodate. Normally, blocksize ==
* PAGE_SIZE and the whole page is decrypted at once.
*
* This is for use by the filesystem's ->readpages() method.
* Decrypt data that has just been read from an encrypted file. The data must
* be located in a pagecache page that is still locked and not yet uptodate.
* The length and offset of the data must be aligned to the file's crypto data
* unit size. Alignment to the filesystem block size fulfills this requirement,
* as the filesystem block size is always a multiple of the data unit size.
*
* Return: 0 on success; -errno on failure
*/
@@ -256,22 +261,23 @@ int fscrypt_decrypt_pagecache_blocks(struct page *page, unsigned int len,
unsigned int offs)
{
const struct inode *inode = page->mapping->host;
const unsigned int blockbits = inode->i_blkbits;
const unsigned int blocksize = 1 << blockbits;
u64 lblk_num = ((u64)page->index << (PAGE_SHIFT - blockbits)) +
(offs >> blockbits);
const struct fscrypt_info *ci = inode->i_crypt_info;
const unsigned int du_bits = ci->ci_data_unit_bits;
const unsigned int du_size = 1U << du_bits;
u64 index = ((u64)page->index << (PAGE_SHIFT - du_bits)) +
(offs >> du_bits);
unsigned int i;
int err;
if (WARN_ON_ONCE(!PageLocked(page)))
return -EINVAL;
if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, blocksize)))
if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offs, du_size)))
return -EINVAL;
for (i = offs; i < offs + len; i += blocksize, lblk_num++) {
err = fscrypt_crypt_block(inode, FS_DECRYPT, lblk_num, page,
page, blocksize, i, GFP_NOFS);
for (i = offs; i < offs + len; i += du_size, index++) {
err = fscrypt_crypt_data_unit(ci, FS_DECRYPT, index, page,
page, du_size, i, GFP_NOFS);
if (err)
return err;
}
@@ -293,14 +299,20 @@ EXPORT_SYMBOL(fscrypt_decrypt_pagecache_blocks);
* arbitrary page, not necessarily in the original pagecache page. The @inode
* and @lblk_num must be specified, as they can't be determined from @page.
*
* This is not compatible with FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS.
*
* Return: 0 on success; -errno on failure
*/
int fscrypt_decrypt_block_inplace(const struct inode *inode, struct page *page,
unsigned int len, unsigned int offs,
u64 lblk_num)
{
return fscrypt_crypt_block(inode, FS_DECRYPT, lblk_num, page, page,
len, offs, GFP_NOFS);
if (WARN_ON_ONCE(inode->i_sb->s_cop->flags &
FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS))
return -EOPNOTSUPP;
return fscrypt_crypt_data_unit(inode->i_crypt_info, FS_DECRYPT,
lblk_num, page, page, len, offs,
GFP_NOFS);
}
EXPORT_SYMBOL(fscrypt_decrypt_block_inplace);


@@ -68,7 +68,8 @@ struct fscrypt_context_v2 {
u8 contents_encryption_mode;
u8 filenames_encryption_mode;
u8 flags;
u8 __reserved[4];
u8 log2_data_unit_size;
u8 __reserved[3];
u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
u8 nonce[FSCRYPT_FILE_NONCE_SIZE];
};
@@ -186,6 +187,26 @@ fscrypt_policy_flags(const union fscrypt_policy *policy)
BUG();
}
static inline int
fscrypt_policy_v2_du_bits(const struct fscrypt_policy_v2 *policy,
const struct inode *inode)
{
return policy->log2_data_unit_size ?: inode->i_blkbits;
}
static inline int
fscrypt_policy_du_bits(const union fscrypt_policy *policy,
const struct inode *inode)
{
switch (policy->version) {
case FSCRYPT_POLICY_V1:
return inode->i_blkbits;
case FSCRYPT_POLICY_V2:
return fscrypt_policy_v2_du_bits(&policy->v2, inode);
}
BUG();
}
/*
* For encrypted symlinks, the ciphertext length is stored at the beginning
* of the string in little-endian format.
@@ -232,6 +253,16 @@ struct fscrypt_info {
bool ci_inlinecrypt;
#endif
/*
* log2 of the data unit size (granularity of contents encryption) of
* this file. This is computable from ci_policy and ci_inode but is
* cached here for efficiency. Only used for regular files.
*/
u8 ci_data_unit_bits;
/* Cached value: log2 of number of data units per FS block */
u8 ci_data_units_per_block_bits;
/*
* Encryption mode used for this inode. It corresponds to either the
* contents or filenames encryption mode, depending on the inode type.
@@ -286,10 +317,11 @@ typedef enum {
/* crypto.c */
extern struct kmem_cache *fscrypt_info_cachep;
int fscrypt_initialize(struct super_block *sb);
int fscrypt_crypt_block(const struct inode *inode, fscrypt_direction_t rw,
u64 lblk_num, struct page *src_page,
struct page *dest_page, unsigned int len,
unsigned int offs, gfp_t gfp_flags);
int fscrypt_crypt_data_unit(const struct fscrypt_info *ci,
fscrypt_direction_t rw, u64 index,
struct page *src_page, struct page *dest_page,
unsigned int len, unsigned int offs,
gfp_t gfp_flags);
struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags);
void __printf(3, 4) __cold
@@ -304,8 +336,8 @@ fscrypt_msg(const struct inode *inode, const char *level, const char *fmt, ...);
union fscrypt_iv {
struct {
/* logical block number within the file */
__le64 lblk_num;
/* zero-based index of data unit within the file */
__le64 index;
/* per-file nonce; only set in DIRECT_KEY mode */
u8 nonce[FSCRYPT_FILE_NONCE_SIZE];
@@ -314,9 +346,19 @@ union fscrypt_iv {
__le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)];
};
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
void fscrypt_generate_iv(union fscrypt_iv *iv, u64 index,
const struct fscrypt_info *ci);
/*
* Return the number of bits used by the maximum file data unit index that is
* possible on the given filesystem, using the given log2 data unit size.
*/
static inline int
fscrypt_max_file_dun_bits(const struct super_block *sb, int du_bits)
{
return fls64(sb->s_maxbytes - 1) - du_bits;
}
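A quick sanity check of this helper: assuming s_maxbytes = 2^44 - 1 and 512-byte data units, fls64(s_maxbytes - 1) is 44, so the largest data unit index needs 44 - 9 = 35 bits. A userspace stand-in, with fls64() modeled by a compiler builtin:

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in for the kernel's fls64(): 1-based index of the highest set bit. */
    static int fls64_model(uint64_t x)
    {
        return x ? 64 - __builtin_clzll(x) : 0;
    }

    int main(void)
    {
        uint64_t s_maxbytes = (1ULL << 44) - 1;  /* assumed filesystem limit */
        int du_bits = 9;                         /* 512-byte data units */

        int dun_bits = fls64_model(s_maxbytes - 1) - du_bits;
        printf("max file DUN needs %d bits\n", dun_bits);  /* 35 */
        return 0;
    }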
/* fname.c */
bool __fscrypt_fname_encrypted_size(const union fscrypt_policy *policy,
u32 orig_len, u32 max_len,


@@ -43,7 +43,7 @@ static unsigned int fscrypt_get_dun_bytes(const struct fscrypt_info *ci)
{
struct super_block *sb = ci->ci_inode->i_sb;
unsigned int flags = fscrypt_policy_flags(&ci->ci_policy);
int ino_bits = 64, lblk_bits = 64;
int dun_bits;
if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
return offsetofend(union fscrypt_iv, nonce);
@@ -54,10 +54,9 @@ static unsigned int fscrypt_get_dun_bytes(const struct fscrypt_info *ci)
if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)
return sizeof(__le32);
/* Default case: IVs are just the file logical block number */
if (sb->s_cop->get_ino_and_lblk_bits)
sb->s_cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
return DIV_ROUND_UP(lblk_bits, 8);
/* Default case: IVs are just the file data unit index */
dun_bits = fscrypt_max_file_dun_bits(sb, ci->ci_data_unit_bits);
return DIV_ROUND_UP(dun_bits, 8);
}
/*
@@ -130,7 +129,7 @@ int fscrypt_select_encryption_impl(struct fscrypt_info *ci,
* crypto configuration that the file would use.
*/
crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode;
crypto_cfg.data_unit_size = sb->s_blocksize;
crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits;
crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci);
crypto_cfg.key_type =
is_hw_wrapped_key ? BLK_CRYPTO_KEY_TYPE_HW_WRAPPED :
@@ -176,7 +175,7 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
err = blk_crypto_init_key(blk_key, raw_key, raw_key_size, key_type,
crypto_mode, fscrypt_get_dun_bytes(ci),
sb->s_blocksize);
1U << ci->ci_data_unit_bits);
if (err) {
fscrypt_err(inode, "error %d initializing blk-crypto key", err);
goto fail;
@@ -271,10 +270,11 @@ EXPORT_SYMBOL_GPL(__fscrypt_inode_uses_inline_crypto);
static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
{
u64 index = lblk_num << ci->ci_data_units_per_block_bits;
union fscrypt_iv iv;
int i;
fscrypt_generate_iv(&iv, lblk_num, ci);
fscrypt_generate_iv(&iv, index, ci);
BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE);
memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);


@@ -627,6 +627,11 @@ fscrypt_setup_encryption_info(struct inode *inode,
WARN_ON_ONCE(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
crypt_info->ci_mode = mode;
crypt_info->ci_data_unit_bits =
fscrypt_policy_du_bits(&crypt_info->ci_policy, inode);
crypt_info->ci_data_units_per_block_bits =
inode->i_blkbits - crypt_info->ci_data_unit_bits;
res = setup_file_encryption_key(crypt_info, need_dirhash_key, &mk);
if (res)
goto out;


@@ -158,9 +158,15 @@ static bool supported_iv_ino_lblk_policy(const struct fscrypt_policy_v2 *policy,
type, sb->s_id);
return false;
}
if (lblk_bits > max_lblk_bits) {
/*
* IV_INO_LBLK_64 and IV_INO_LBLK_32 both require that file data unit
* indices fit in 32 bits.
*/
if (fscrypt_max_file_dun_bits(sb,
fscrypt_policy_v2_du_bits(policy, inode)) > 32) {
fscrypt_warn(inode,
"Can't use %s policy on filesystem '%s' because its block numbers are too long",
"Can't use %s policy on filesystem '%s' because its maximum file size is too large",
type, sb->s_id);
return false;
}
@@ -233,6 +239,32 @@ static bool fscrypt_supported_v2_policy(const struct fscrypt_policy_v2 *policy,
return false;
}
if (policy->log2_data_unit_size) {
if (!(inode->i_sb->s_cop->flags &
FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS)) {
fscrypt_warn(inode,
"Filesystem does not support configuring crypto data unit size");
return false;
}
if (policy->log2_data_unit_size > inode->i_blkbits ||
policy->log2_data_unit_size < SECTOR_SHIFT /* 9 */) {
fscrypt_warn(inode,
"Unsupported log2_data_unit_size in encryption policy: %d",
policy->log2_data_unit_size);
return false;
}
if (policy->log2_data_unit_size != inode->i_blkbits &&
(policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32)) {
/*
* Not safe to enable yet, as we need to ensure that DUN
* wraparound can only occur on a FS block boundary.
*/
fscrypt_warn(inode,
"Sub-block data units not yet supported with IV_INO_LBLK_32");
return false;
}
}
if ((policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) &&
!supported_direct_key_modes(inode, policy->contents_encryption_mode,
policy->filenames_encryption_mode))
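From userspace, the constraints enforced above apply to a v2 policy that sets log2_data_unit_size. A hedged sketch of requesting 512-byte data units (struct layout per the fscrypt UAPI this backport targets; key setup and error handling elided):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/fscrypt.h>

    /* Sketch only: the kernel rejects this unless the filesystem advertises
     * FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS, and (in this backport) also when
     * the policy uses IV_INO_LBLK_32. */
    static int set_subblock_policy(int dirfd,
                                   const __u8 key_id[FSCRYPT_KEY_IDENTIFIER_SIZE])
    {
        struct fscrypt_policy_v2 p;

        memset(&p, 0, sizeof(p));
        p.version = FSCRYPT_POLICY_V2;
        p.contents_encryption_mode  = FSCRYPT_MODE_AES_256_XTS;
        p.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
        p.log2_data_unit_size = 9;      /* 512-byte units; >= SECTOR_SHIFT */
        memcpy(p.master_key_identifier, key_id,
               sizeof(p.master_key_identifier));

        return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &p);
    }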
@@ -330,6 +362,7 @@ static int fscrypt_new_context(union fscrypt_context *ctx_u,
ctx->filenames_encryption_mode =
policy->filenames_encryption_mode;
ctx->flags = policy->flags;
ctx->log2_data_unit_size = policy->log2_data_unit_size;
memcpy(ctx->master_key_identifier,
policy->master_key_identifier,
sizeof(ctx->master_key_identifier));
@@ -390,6 +423,7 @@ int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
policy->filenames_encryption_mode =
ctx->filenames_encryption_mode;
policy->flags = ctx->flags;
policy->log2_data_unit_size = ctx->log2_data_unit_size;
memcpy(policy->__reserved, ctx->__reserved,
sizeof(policy->__reserved));
memcpy(policy->master_key_identifier,


@@ -10,6 +10,7 @@
#include <linux/writeback.h>
#include <linux/sysctl.h>
#include <linux/gfp.h>
#include <linux/swap.h>
#include "internal.h"
/* A global variable is a bit ugly, but it keeps the code simple */
@@ -59,6 +60,7 @@ int drop_caches_sysctl_handler(struct ctl_table *table, int write,
static int stfu;
if (sysctl_drop_caches & 1) {
lru_add_drain_all();
iterate_supers(drop_pagecache_sb, NULL);
count_vm_event(DROP_PAGECACHE);
}


@@ -1569,6 +1569,7 @@ static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
}
static const struct fscrypt_operations ext4_cryptops = {
.flags = FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS,
.key_prefix = "ext4:",
.get_context = ext4_get_context,
.set_context = ext4_set_context,


@@ -3190,6 +3190,7 @@ static struct block_device **f2fs_get_devices(struct super_block *sb,
}
static const struct fscrypt_operations f2fs_cryptops = {
.flags = FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS,
.key_prefix = "f2fs:",
.get_context = f2fs_get_context,
.set_context = f2fs_set_context,
@@ -3847,7 +3848,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
sbi->blocks_per_blkz = SECTOR_TO_BLOCK(zone_sectors);
FDEV(devi).nr_blkz = div_u64(SECTOR_TO_BLOCK(nr_sectors),
sbi->blocks_per_blkz);
if (nr_sectors & (zone_sectors - 1))
if (!bdev_is_zone_start(bdev, nr_sectors))
FDEV(devi).nr_blkz++;
FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,


@@ -300,7 +300,9 @@ int fuse_release_initialize(struct fuse_bpf_args *fa, struct fuse_release_in *fr
struct inode *inode, struct fuse_file *ff)
{
/* Always put backing file whatever bpf/userspace says */
fput(ff->backing_file);
if (ff->backing_file)
	fput(ff->backing_file);
*fri = (struct fuse_release_in) {
.fh = ff->fh,
@@ -399,23 +401,26 @@ int fuse_lseek_backing(struct fuse_bpf_args *fa, struct file *file, loff_t offse
struct file *backing_file = fuse_file->backing_file;
loff_t ret;
/* TODO: Handle changing of the file handle */
if (offset == 0) {
if (whence == SEEK_CUR) {
flo->offset = file->f_pos;
-return flo->offset;
+return 0;
}
if (whence == SEEK_SET) {
flo->offset = vfs_setpos(file, 0, 0);
-return flo->offset;
+return 0;
}
}
inode_lock(file->f_inode);
backing_file->f_pos = file->f_pos;
ret = vfs_llseek(backing_file, fli->offset, fli->whence);
-flo->offset = ret;
+if (!IS_ERR(ERR_PTR(ret))) {
+flo->offset = ret;
+ret = 0;
+}
inode_unlock(file->f_inode);
return ret;
}
@@ -1114,7 +1119,6 @@ int fuse_lookup_backing(struct fuse_bpf_args *fa, struct inode *dir,
struct kstat stat;
int err;
/* TODO this will not handle lookups over mount points */
inode_lock_nested(dir_backing_inode, I_MUTEX_PARENT);
backing_entry = lookup_one_len(entry->d_name.name, dir_backing_entry,
strlen(entry->d_name.name));
@@ -1133,16 +1137,22 @@ int fuse_lookup_backing(struct fuse_bpf_args *fa, struct inode *dir,
return 0;
}
err = follow_down(&fuse_entry->backing_path);
+if (err)
+goto err_out;
err = vfs_getattr(&fuse_entry->backing_path, &stat,
STATX_BASIC_STATS, 0);
-if (err) {
-path_put_init(&fuse_entry->backing_path);
-return err;
-}
+if (err)
+goto err_out;
fuse_stat_to_attr(get_fuse_conn(dir),
backing_entry->d_inode, &stat, &feo->attr);
return 0;
+err_out:
+path_put_init(&fuse_entry->backing_path);
+return err;
}
int fuse_handle_backing(struct fuse_entry_bpf *feb, struct inode **backing_inode,


@@ -1021,6 +1021,16 @@ static void fuse_readahead(struct readahead_control *rac)
struct fuse_conn *fc = get_fuse_conn(inode);
unsigned int i, max_pages, nr_pages = 0;
#ifdef CONFIG_FUSE_BPF
/*
* Currently no meaningful readahead is possible with fuse-bpf within
* the kernel, so unless the daemon is aware of this file, ignore this
* call.
*/
if (!get_fuse_inode(inode)->nodeid)
return;
#endif
if (fuse_is_bad(inode))
return;


@@ -213,7 +213,8 @@ int fuse_passthrough_open(struct fuse_dev *fud, u32 lower_fd)
}
if (!passthrough_filp->f_op->read_iter ||
-!passthrough_filp->f_op->write_iter) {
+!((passthrough_filp->f_path.mnt->mnt_flags | MNT_READONLY) ||
+passthrough_filp->f_op->write_iter)) {
pr_err("FUSE: passthrough file misses file operations.\n");
res = -EBADF;
goto err_free_file;


@@ -170,35 +170,109 @@ static inline void delayacct_thrashing_end(void)
}
#else
+extern void _trace_android_vh_delayacct_set_flag(struct task_struct *p, int flag);
+extern void _trace_android_vh_delayacct_clear_flag(struct task_struct *p, int flag);
+extern void _trace_android_rvh_delayacct_init(void);
+extern void _trace_android_rvh_delayacct_tsk_init(struct task_struct *tsk);
+extern void _trace_android_rvh_delayacct_tsk_free(struct task_struct *tsk);
+extern void _trace_android_vh_delayacct_blkio_start(void);
+extern void _trace_android_vh_delayacct_blkio_end(struct task_struct *p);
+extern void _trace_android_vh_delayacct_add_tsk(struct taskstats *d,
+struct task_struct *tsk,
+int *ret);
+extern void _trace_android_vh_delayacct_blkio_ticks(struct task_struct *tsk, __u64 *ret);
+extern void _trace_android_vh_delayacct_is_task_waiting_on_io(struct task_struct *p, int *ret);
+extern void _trace_android_vh_delayacct_freepages_start(void);
+extern void _trace_android_vh_delayacct_freepages_end(void);
+extern void _trace_android_vh_delayacct_thrashing_start(void);
+extern void _trace_android_vh_delayacct_thrashing_end(void);
+extern void set_delayacct_enabled(bool enabled);
+extern bool get_delayacct_enabled(void);
static inline void delayacct_set_flag(struct task_struct *p, int flag)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_set_flag(p, flag);
+}
static inline void delayacct_clear_flag(struct task_struct *p, int flag)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_clear_flag(p, flag);
+}
static inline void delayacct_init(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_rvh_delayacct_init();
+}
static inline void delayacct_tsk_init(struct task_struct *tsk)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_rvh_delayacct_tsk_init(tsk);
+}
static inline void delayacct_tsk_free(struct task_struct *tsk)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_rvh_delayacct_tsk_free(tsk);
+}
static inline void delayacct_blkio_start(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_blkio_start();
+}
static inline void delayacct_blkio_end(struct task_struct *p)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_blkio_end(p);
+}
static inline int delayacct_add_tsk(struct taskstats *d,
struct task_struct *tsk)
-{ return 0; }
+{
+int ret = 0;
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_add_tsk(d, tsk, &ret);
+return ret;
+}
static inline __u64 delayacct_blkio_ticks(struct task_struct *tsk)
-{ return 0; }
+{
+__u64 ret = 0;
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_blkio_ticks(tsk, &ret);
+return ret;
+}
static inline int delayacct_is_task_waiting_on_io(struct task_struct *p)
-{ return 0; }
+{
+int ret = 0;
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_is_task_waiting_on_io(p, &ret);
+return ret;
+}
static inline void delayacct_freepages_start(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_freepages_start();
+}
static inline void delayacct_freepages_end(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_freepages_end();
+}
static inline void delayacct_thrashing_start(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_thrashing_start();
+}
static inline void delayacct_thrashing_end(void)
-{}
+{
+if (get_delayacct_enabled())
+_trace_android_vh_delayacct_thrashing_end();
+}
#endif /* CONFIG_TASK_DELAY_ACCT */


@@ -66,6 +66,18 @@ struct fscrypt_name {
*/
#define FS_CFLG_OWN_PAGES (1U << 1)
/*
* If set, then fs/crypto/ will allow users to select a crypto data unit size
* that is less than the filesystem block size. This is done via the
* log2_data_unit_size field of the fscrypt policy. This flag is not compatible
* with filesystems that encrypt variable-length blocks (i.e. blocks that aren't
all equal to the filesystem's block size), for example as a result of
* compression. It's also not compatible with the
* fscrypt_encrypt_block_inplace() and fscrypt_decrypt_block_inplace()
* functions.
*/
#define FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS (1U << 2)
/* Crypto operations for filesystems */
struct fscrypt_operations {


@@ -667,6 +667,9 @@ static inline bool gic_enable_sre(void)
return !!(val & ICC_SRE_EL1_SRE);
}
void gic_v3_dist_init(void);
void gic_v3_cpu_init(void);
void gic_v3_dist_wait_for_rwp(void);
void gic_resume(void);


@@ -9,10 +9,25 @@
#define __SOC_CARD_H
enum snd_soc_card_subclass {
-SND_SOC_CARD_CLASS_INIT = 0,
+SND_SOC_CARD_CLASS_ROOT = 0,
SND_SOC_CARD_CLASS_RUNTIME = 1,
};
+static inline void snd_soc_card_mutex_lock_root(struct snd_soc_card *card)
+{
+mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_ROOT);
+}
+static inline void snd_soc_card_mutex_lock(struct snd_soc_card *card)
+{
+mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
+}
+static inline void snd_soc_card_mutex_unlock(struct snd_soc_card *card)
+{
+mutex_unlock(&card->mutex);
+}
struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card,
const char *name);
int snd_soc_card_jack_new(struct snd_soc_card *card, const char *id, int type,


@@ -559,11 +559,6 @@ enum snd_soc_dapm_type {
SND_SOC_DAPM_TYPE_COUNT
};
-enum snd_soc_dapm_subclass {
-SND_SOC_DAPM_CLASS_INIT = 0,
-SND_SOC_DAPM_CLASS_RUNTIME = 1,
-};
/*
* DAPM audio route definition.
*


@@ -1352,17 +1352,112 @@ extern struct dentry *snd_soc_debugfs_root;
extern const struct dev_pm_ops snd_soc_pm_ops;
-/* Helper functions */
-static inline void snd_soc_dapm_mutex_lock(struct snd_soc_dapm_context *dapm)
+/*
+ * DAPM helper functions
+ */
+enum snd_soc_dapm_subclass {
+SND_SOC_DAPM_CLASS_ROOT = 0,
+SND_SOC_DAPM_CLASS_RUNTIME = 1,
+};
+static inline void _snd_soc_dapm_mutex_lock_root_c(struct snd_soc_card *card)
{
-mutex_lock_nested(&dapm->card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
+mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_ROOT);
}
-static inline void snd_soc_dapm_mutex_unlock(struct snd_soc_dapm_context *dapm)
+static inline void _snd_soc_dapm_mutex_lock_c(struct snd_soc_card *card)
{
-mutex_unlock(&dapm->card->dapm_mutex);
+mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
}
+static inline void _snd_soc_dapm_mutex_unlock_c(struct snd_soc_card *card)
+{
+mutex_unlock(&card->dapm_mutex);
+}
+static inline void _snd_soc_dapm_mutex_assert_held_c(struct snd_soc_card *card)
+{
+lockdep_assert_held(&card->dapm_mutex);
+}
+static inline void _snd_soc_dapm_mutex_lock_root_d(struct snd_soc_dapm_context *dapm)
+{
+_snd_soc_dapm_mutex_lock_root_c(dapm->card);
+}
+static inline void _snd_soc_dapm_mutex_lock_d(struct snd_soc_dapm_context *dapm)
+{
+_snd_soc_dapm_mutex_lock_c(dapm->card);
+}
+static inline void _snd_soc_dapm_mutex_unlock_d(struct snd_soc_dapm_context *dapm)
+{
+_snd_soc_dapm_mutex_unlock_c(dapm->card);
+}
+static inline void _snd_soc_dapm_mutex_assert_held_d(struct snd_soc_dapm_context *dapm)
+{
+_snd_soc_dapm_mutex_assert_held_c(dapm->card);
+}
+#define snd_soc_dapm_mutex_lock_root(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dapm_mutex_lock_root_c, \
+struct snd_soc_dapm_context * : _snd_soc_dapm_mutex_lock_root_d)(x)
+#define snd_soc_dapm_mutex_lock(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dapm_mutex_lock_c, \
+struct snd_soc_dapm_context * : _snd_soc_dapm_mutex_lock_d)(x)
+#define snd_soc_dapm_mutex_unlock(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dapm_mutex_unlock_c, \
+struct snd_soc_dapm_context * : _snd_soc_dapm_mutex_unlock_d)(x)
+#define snd_soc_dapm_mutex_assert_held(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dapm_mutex_assert_held_c, \
+struct snd_soc_dapm_context * : _snd_soc_dapm_mutex_assert_held_d)(x)
+/*
+ * PCM helper functions
+ */
+static inline void _snd_soc_dpcm_mutex_lock_c(struct snd_soc_card *card)
+{
+mutex_lock_nested(&card->pcm_mutex, card->pcm_subclass);
+}
+static inline void _snd_soc_dpcm_mutex_unlock_c(struct snd_soc_card *card)
+{
+mutex_unlock(&card->pcm_mutex);
+}
+static inline void _snd_soc_dpcm_mutex_assert_held_c(struct snd_soc_card *card)
+{
+lockdep_assert_held(&card->pcm_mutex);
+}
+static inline void _snd_soc_dpcm_mutex_lock_r(struct snd_soc_pcm_runtime *rtd)
+{
+_snd_soc_dpcm_mutex_lock_c(rtd->card);
+}
+static inline void _snd_soc_dpcm_mutex_unlock_r(struct snd_soc_pcm_runtime *rtd)
+{
+_snd_soc_dpcm_mutex_unlock_c(rtd->card);
+}
+static inline void _snd_soc_dpcm_mutex_assert_held_r(struct snd_soc_pcm_runtime *rtd)
+{
+_snd_soc_dpcm_mutex_assert_held_c(rtd->card);
+}
+#define snd_soc_dpcm_mutex_lock(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dpcm_mutex_lock_c, \
+struct snd_soc_pcm_runtime * : _snd_soc_dpcm_mutex_lock_r)(x)
+#define snd_soc_dpcm_mutex_unlock(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dpcm_mutex_unlock_c, \
+struct snd_soc_pcm_runtime * : _snd_soc_dpcm_mutex_unlock_r)(x)
+#define snd_soc_dpcm_mutex_assert_held(x) _Generic((x), \
+struct snd_soc_card * : _snd_soc_dpcm_mutex_assert_held_c, \
+struct snd_soc_pcm_runtime * : _snd_soc_dpcm_mutex_assert_held_r)(x)
#include <sound/soc-component.h>
#include <sound/soc-card.h>
#include <sound/soc-jack.h>
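
The snd_soc_dapm_mutex_*() and snd_soc_dpcm_mutex_*() macros above use C11 _Generic selection to pick the card-level (_c) or context/runtime-level (_d/_r) helper from the static type of the argument, so a single macro name serves every caller. A minimal standalone sketch of the same dispatch pattern; the struct and function names here are illustrative, not part of the kernel API:

#include <stdio.h>

struct card { int id; };
struct runtime { struct card *card; };

static void lock_card(struct card *c)
{
	printf("locking card %d\n", c->id);
}

static void lock_runtime(struct runtime *rtd)
{
	/* forward to the card-level helper, as the _r/_d wrappers above do */
	lock_card(rtd->card);
}

/* dispatch on the static type of x, like snd_soc_dpcm_mutex_lock() */
#define lock(x) _Generic((x), \
	struct card *    : lock_card, \
	struct runtime * : lock_runtime)(x)

int main(void)
{
	struct card c = { .id = 0 };
	struct runtime rtd = { .card = &c };

	lock(&c);	/* resolves to lock_card() at compile time */
	lock(&rtd);	/* resolves to lock_runtime() */
	return 0;
}

Because the selection happens at compile time, passing any other pointer type fails the build rather than misbehaving at runtime, which is the point of routing all lock/unlock calls through these wrappers.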


@@ -72,19 +72,30 @@ TRACE_EVENT(reclaim_retry_zone,
);
TRACE_EVENT(mark_victim,
-TP_PROTO(int pid),
+TP_PROTO(struct task_struct *task, uid_t uid),
-TP_ARGS(pid),
+TP_ARGS(task, uid),
TP_STRUCT__entry(
__field(int, pid)
+__field(uid_t, uid)
+__string(comm, task->comm)
+__field(short, oom_score_adj)
),
TP_fast_assign(
-__entry->pid = pid;
+__entry->pid = task->pid;
+__entry->uid = uid;
+__assign_str(comm, task->comm);
+__entry->oom_score_adj = task->signal->oom_score_adj;
),
-TP_printk("pid=%d", __entry->pid)
+TP_printk("pid=%d uid=%u comm=%s oom_score_adj=%hd",
+__entry->pid,
+__entry->uid,
+__get_str(comm),
+__entry->oom_score_adj
+)
);
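
Given the TP_printk() format above, a mark_victim record in the trace buffer now reads, for example (field values illustrative): mark_victim: pid=4321 uid=10077 comm=example.app oom_score_adj=900 — where previously only the pid was emitted.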
TRACE_EVENT(wake_reaper,


@@ -39,6 +39,10 @@ DECLARE_HOOK(android_vh_cpufreq_target,
unsigned int old_target_freq),
TP_ARGS(policy, target_freq, old_target_freq));
DECLARE_HOOK(android_vh_cpufreq_online,
TP_PROTO(struct cpufreq_policy *policy),
TP_ARGS(policy));
#endif /* _TRACE_HOOK_CPUFREQ_H */
/* This part must be outside protection */
#include <trace/define_trace.h>


@@ -0,0 +1,67 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifdef PROTECT_TRACE_INCLUDE_PATH
#undef PROTECT_TRACE_INCLUDE_PATH
#else /* PROTECT_TRACE_INCLUDE_PATH */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM delayacct
#define TRACE_INCLUDE_PATH trace/hooks
#if !defined(_TRACE_HOOK_DELAYACCT_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_HOOK_DELAYACCT_H
#include <trace/hooks/vendor_hooks.h>
struct task_struct;
struct taskstats;
DECLARE_HOOK(android_vh_delayacct_set_flag,
TP_PROTO(struct task_struct *p, int flag),
TP_ARGS(p, flag));
DECLARE_HOOK(android_vh_delayacct_clear_flag,
TP_PROTO(struct task_struct *p, int flag),
TP_ARGS(p, flag));
DECLARE_RESTRICTED_HOOK(android_rvh_delayacct_init,
TP_PROTO(void *unused),
TP_ARGS(unused), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_delayacct_tsk_init,
TP_PROTO(struct task_struct *tsk),
TP_ARGS(tsk), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_delayacct_tsk_free,
TP_PROTO(struct task_struct *tsk),
TP_ARGS(tsk), 1);
DECLARE_HOOK(android_vh_delayacct_blkio_start,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_delayacct_blkio_end,
TP_PROTO(struct task_struct *p),
TP_ARGS(p));
DECLARE_HOOK(android_vh_delayacct_add_tsk,
TP_PROTO(struct taskstats *d, struct task_struct *tsk, int *ret),
TP_ARGS(d, tsk, ret));
DECLARE_HOOK(android_vh_delayacct_blkio_ticks,
TP_PROTO(struct task_struct *tsk, __u64 *ret),
TP_ARGS(tsk, ret));
DECLARE_HOOK(android_vh_delayacct_is_task_waiting_on_io,
TP_PROTO(struct task_struct *tsk, int *ret),
TP_ARGS(tsk, ret));
DECLARE_HOOK(android_vh_delayacct_freepages_start,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_delayacct_freepages_end,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_delayacct_thrashing_start,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_delayacct_thrashing_end,
TP_PROTO(void *unused),
TP_ARGS(unused));
#endif /* _TRACE_HOOK_DELAYACCT_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
#endif /* PROTECT_TRACE_INCLUDE_PATH */
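
These declarations follow the usual Android vendor-hook pattern: DECLARE_HOOK() and DECLARE_RESTRICTED_HOOK() are thin tracepoint wrappers, so a vendor module attaches probes through the generated register_trace_android_vh_<name>() helpers and, in the !CONFIG_TASK_DELAY_ACCT build wired up in kernel/delayacct.c below, flips the exported gate so the inline stubs start calling out. A minimal sketch under those assumptions; the module and probe names are hypothetical:

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/delayacct.h>
#include <trace/hooks/delayacct.h>

/* the first argument of a probe is the private data passed at registration */
static void probe_blkio_start(void *data, void *unused)
{
	/* vendor accounting would open a block-I/O delay sample here */
}

static void probe_blkio_end(void *data, struct task_struct *p)
{
	/* ...and close the sample against task p here */
}

static int __init vh_delayacct_example_init(void)
{
	int ret;

	ret = register_trace_android_vh_delayacct_blkio_start(probe_blkio_start, NULL);
	if (ret)
		return ret;
	ret = register_trace_android_vh_delayacct_blkio_end(probe_blkio_end, NULL);
	if (ret)
		return ret;

	/* flip the gate checked by the delayacct_*() inline stubs */
	set_delayacct_enabled(true);
	return 0;
}
module_init(vh_delayacct_example_init);
MODULE_LICENSE("GPL");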


@@ -11,6 +11,7 @@
*/
struct cpumask;
struct irq_data;
struct gic_chip_data;
DECLARE_HOOK(android_vh_gic_v3_affinity_init,
TP_PROTO(int irq, u32 offset, u64 *affinity),
@@ -21,6 +22,9 @@ DECLARE_RESTRICTED_HOOK(android_rvh_gic_v3_set_affinity,
void __iomem *rbase, u64 redist_stride),
TP_ARGS(d, mask_val, affinity, force, base, rbase, redist_stride),
1);
DECLARE_HOOK(android_vh_gic_v3_suspend,
TP_PROTO(struct gic_chip_data *gd),
TP_ARGS(gd));
#endif /* _TRACE_HOOK_GIC_V3_H */
/* This part must be outside protection */


@@ -168,6 +168,12 @@ DECLARE_HOOK(android_vh_alloc_pages_entry,
TP_PROTO(gfp_t *gfp, unsigned int order, int preferred_nid,
nodemask_t *nodemask),
TP_ARGS(gfp, order, preferred_nid, nodemask));
DECLARE_HOOK(android_vh_isolate_freepages,
TP_PROTO(struct compact_control *cc, struct page *page, bool *bypass),
TP_ARGS(cc, page, bypass));
DECLARE_HOOK(android_vh_ptep_clear_flush_young,
TP_PROTO(bool *skip),
TP_ARGS(skip));
#endif /* _TRACE_HOOK_MM_H */
/* This part must be outside protection */


@@ -85,6 +85,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_set_user_nice,
TP_PROTO(struct task_struct *p, long *nice, bool *allowed),
TP_ARGS(p, nice, allowed), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_set_user_nice_locked,
TP_PROTO(struct task_struct *p, long *nice),
TP_ARGS(p, nice), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_setscheduler,
TP_PROTO(struct task_struct *p),
TP_ARGS(p), 1);
@@ -434,6 +438,11 @@ DECLARE_RESTRICTED_HOOK(android_rvh_update_load_avg,
TP_PROTO(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se),
TP_ARGS(now, cfs_rq, se), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_update_load_sum,
TP_PROTO(struct sched_avg *sa, u64 *delta, unsigned int *sched_pelt_lshift),
TP_ARGS(sa, delta, sched_pelt_lshift), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_remove_entity_load_avg,
TP_PROTO(struct cfs_rq *cfs_rq, struct sched_entity *se),
TP_ARGS(cfs_rq, se), 1);


@@ -24,6 +24,10 @@ DECLARE_HOOK(android_vh_use_amu_fie,
TP_PROTO(bool *use_amu_fie),
TP_ARGS(use_amu_fie));
DECLARE_RESTRICTED_HOOK(android_rvh_update_thermal_stats,
TP_PROTO(int cpu),
TP_ARGS(cpu), 1);
#endif /* _TRACE_HOOK_TOPOLOGY_H */
/* This part must be outside protection */
#include <trace/define_trace.h>


@@ -71,7 +71,8 @@ struct fscrypt_policy_v2 {
__u8 contents_encryption_mode;
__u8 filenames_encryption_mode;
__u8 flags;
-__u8 __reserved[4];
+__u8 log2_data_unit_size;
+__u8 __reserved[3];
__u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
};
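
With the repurposed reserved byte, userspace selects a sub-block data unit size by filling log2_data_unit_size before issuing FS_IOC_SET_ENCRYPTION_POLICY; the kernel-side checks shown earlier then require the value to lie between SECTOR_SHIFT and the inode's block bits, and the filesystem to advertise FS_CFLG_SUPPORTS_SUBBLOCK_DATA_UNITS (as ext4 and f2fs do above). A hedged userspace sketch, assuming a uapi header that already carries the new field; the key identifier is a placeholder:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

/* request 2^12 = 4096-byte crypto data units on an already-provisioned key */
static int set_policy_4k_units(const char *dir,
			       const __u8 key_id[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
	struct fscrypt_policy_v2 policy = {
		.version = FSCRYPT_POLICY_V2,
		.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS,
		.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS,
		.flags = FSCRYPT_POLICY_FLAGS_PAD_16,
		.log2_data_unit_size = 12,	/* the new field */
	};
	int fd, ret;

	memcpy(policy.master_key_identifier, key_id,
	       FSCRYPT_KEY_IDENTIFIER_SIZE);

	fd = open(dir, O_RDONLY);
	if (fd < 0)
		return -1;
	ret = ioctl(fd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
	close(fd);
	return ret;
}

Leaving log2_data_unit_size at 0 keeps the historical behavior of one data unit per filesystem block, which is why the field could take over a reserved byte without breaking existing policies.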


@@ -10,7 +10,7 @@ obj-y = fork.o exec_domain.o panic.o \
extable.o params.o \
kthread.o sys_ni.o nsproxy.o \
notifier.o ksysfs.o cred.o reboot.o \
-async.o range.o smpboot.o ucount.o regset.o
+async.o range.o smpboot.o ucount.o regset.o delayacct.o
obj-$(CONFIG_USERMODE_DRIVER) += usermode_driver.o
obj-$(CONFIG_MODULES) += kmod.o
@@ -97,7 +97,6 @@ obj-$(CONFIG_HARDLOCKUP_DETECTOR_PERF) += watchdog_hld.o
obj-$(CONFIG_SECCOMP) += seccomp.o
obj-$(CONFIG_RELAY) += relay.o
obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
-obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o
obj-$(CONFIG_TRACEPOINTS) += tracepoint.o
obj-$(CONFIG_LATENCYTOP) += latencytop.o


@@ -14,6 +14,8 @@
#include <linux/delayacct.h>
#include <linux/module.h>
#ifdef CONFIG_TASK_DELAY_ACCT
DEFINE_STATIC_KEY_FALSE(delayacct_key);
int delayacct_on __read_mostly; /* Delay accounting turned on/off */
struct kmem_cache *delayacct_cache;
@@ -210,3 +212,91 @@ void __delayacct_thrashing_end(void)
&current->delays->thrashing_delay,
&current->delays->thrashing_count);
}
#else
#include <trace/hooks/delayacct.h>
int delayacct_enabled __read_mostly; /* Delay accounting turned on/off */
bool get_delayacct_enabled(void)
{
return delayacct_enabled;
}
void set_delayacct_enabled(bool enabled)
{
delayacct_enabled = enabled;
}
EXPORT_SYMBOL_GPL(set_delayacct_enabled);
void _trace_android_vh_delayacct_set_flag(struct task_struct *p, int flag)
{
trace_android_vh_delayacct_set_flag(p, flag);
}
void _trace_android_vh_delayacct_clear_flag(struct task_struct *p, int flag)
{
trace_android_vh_delayacct_clear_flag(p, flag);
}
void _trace_android_rvh_delayacct_init(void)
{
trace_android_rvh_delayacct_init(NULL);
}
void _trace_android_rvh_delayacct_tsk_init(struct task_struct *tsk)
{
trace_android_rvh_delayacct_tsk_init(tsk);
}
void _trace_android_rvh_delayacct_tsk_free(struct task_struct *tsk)
{
trace_android_rvh_delayacct_tsk_free(tsk);
}
void _trace_android_vh_delayacct_blkio_start(void)
{
trace_android_vh_delayacct_blkio_start(NULL);
}
void _trace_android_vh_delayacct_blkio_end(struct task_struct *p)
{
trace_android_vh_delayacct_blkio_end(p);
}
void _trace_android_vh_delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk, int *ret)
{
trace_android_vh_delayacct_add_tsk(d, tsk, ret);
}
void _trace_android_vh_delayacct_blkio_ticks(struct task_struct *tsk, __u64 *ret)
{
trace_android_vh_delayacct_blkio_ticks(tsk, ret);
}
void _trace_android_vh_delayacct_is_task_waiting_on_io(struct task_struct *p, int *ret)
{
trace_android_vh_delayacct_is_task_waiting_on_io(p, ret);
}
void _trace_android_vh_delayacct_freepages_start(void)
{
trace_android_vh_delayacct_freepages_start(NULL);
}
void _trace_android_vh_delayacct_freepages_end(void)
{
trace_android_vh_delayacct_freepages_end(NULL);
}
void _trace_android_vh_delayacct_thrashing_start(void)
{
trace_android_vh_delayacct_thrashing_start(NULL);
}
void _trace_android_vh_delayacct_thrashing_end(void)
{
trace_android_vh_delayacct_thrashing_end(NULL);
}
#endif


@@ -4855,10 +4855,11 @@ static inline struct callback_head *splice_balance_callbacks(struct rq *rq)
return __splice_balance_callbacks(rq, true);
}
-static void __balance_callbacks(struct rq *rq)
+void __balance_callbacks(struct rq *rq)
{
do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
}
+EXPORT_SYMBOL_GPL(__balance_callbacks);
static inline void balance_callbacks(struct rq *rq, struct callback_head *head)
{
@@ -7123,6 +7124,10 @@ void set_user_nice(struct task_struct *p, long nice)
rq = task_rq_lock(p, &rf);
update_rq_clock(rq);
trace_android_rvh_set_user_nice_locked(p, &nice);
if (task_nice(p) == nice)
goto out_unlock;
/*
* The RT priorities are set via sched_setscheduler(), but we still
* allow the 'normal' nice value to be set - but as expected


@@ -62,6 +62,7 @@ unsigned int sysctl_sched_tunable_scaling = SCHED_TUNABLESCALING_LOG;
* (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds)
*/
unsigned int sysctl_sched_min_granularity = 750000ULL;
EXPORT_SYMBOL_GPL(sysctl_sched_min_granularity);
static unsigned int normalized_sysctl_sched_min_granularity = 750000ULL;
/*
@@ -4665,7 +4666,13 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
s64 delta;
bool skip_preempt = false;
-ideal_runtime = sched_slice(cfs_rq, curr);
+/*
+ * When many tasks blow up the sched_period; it is possible that
+ * sched_slice() reports unusually large results (when many tasks are
+ * very light for example). Therefore impose a maximum.
+ */
+ideal_runtime = min_t(u64, sched_slice(cfs_rq, curr), sysctl_sched_latency);
delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
trace_android_rvh_check_preempt_tick(current, &ideal_runtime, &skip_preempt,
delta_exec, cfs_rq, curr, sysctl_sched_min_granularity);
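
For scale: sched_slice() gives each entity a weight-proportional share of the period, and the period itself stretches to nr_running * sysctl_sched_min_granularity once the runqueue is crowded. With one nice-0 task (weight 1024) alongside 1000 SCHED_IDLE tasks (weight 3 each), the period grows to roughly 1001 * 0.75 ms ≈ 750 ms and the nice-0 task's slice to about 750 ms * 1024 / 4024 ≈ 190 ms, far past any latency target, so the min_t() above caps ideal_runtime at sysctl_sched_latency. (Figures are illustrative and use the unscaled defaults quoted earlier in this file.)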


@@ -25,6 +25,7 @@
*/
#include <linux/sched.h>
#include <trace/hooks/sched.h>
#include "sched.h"
#include "pelt.h"
@@ -205,6 +206,8 @@ int ___update_load_sum(u64 now, struct sched_avg *sa,
sa->last_update_time += delta << 10;
trace_android_rvh_update_load_sum(sa, &delta, &sched_pelt_lshift);
/*
* running is a subset of runnable (weight) so running can't be set if
* runnable is clear. But there are some corner cases where the current


@@ -27,6 +27,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_rto_next_cpu);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_is_cpu_allowed);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_get_nohz_timer_target);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_user_nice);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_user_nice_locked);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_setscheduler);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_find_busiest_group);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_dump_throttled_rt_tasks);
@@ -104,6 +105,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_dup_task_struct);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_account_task_time);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_attach_entity_load_avg);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_detach_entity_load_avg);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_update_load_sum);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_update_load_avg);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_remove_entity_load_avg);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_update_blocked_fair);

Some files were not shown because too many files have changed in this diff.