ANDROID: reset android13-5.10-lts branch back to android13-5.10 state

The android13-5.10-lts branch was allowed to get out of sync with
regards to the ABI state while some LTS releases were merged into it.
In order to sort this out, and ensure that the ABI is stable, reset it
back to the current state of the android13-5.10 branch as of commit
46fc349c54 ("ANDROID: Update the ABI representation")

Bug: 161946584
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ia1c4798fb0b80e61de81b3f0ae89c89f8c6b1c55
Author: Greg Kroah-Hartman
Date: 2022-05-13 17:47:36 +02:00
parent 95c07d1955
commit 1f7d764785
1048 changed files with 22519 additions and 14023 deletions


@@ -0,0 +1,7 @@
What: /sys/fs/erofs/features/
Date: November 2021
Contact: "Huang Jianan" <huangjianan@oppo.com>
Description: Shows all enabled kernel features.
Supported features:
zero_padding, compr_cfgs, big_pcluster, chunked_file,
device_table, compr_head2, sb_chksum.


@@ -55,8 +55,9 @@ Description: Controls the in-place-update policy.
0x04 F2FS_IPU_UTIL
0x08 F2FS_IPU_SSR_UTIL
0x10 F2FS_IPU_FSYNC
0x20 F2FS_IPU_ASYNC,
0x20 F2FS_IPU_ASYNC
0x40 F2FS_IPU_NOCACHE
0x80 F2FS_IPU_HONOR_OPU_WRITE
==== =================
Refer segment.h for details.
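The in-place-update policy is a bitmask of the flags in the (partially shown) table above. A minimal sketch of decoding such a value, using only the flags visible in this hunk; the helper and its name are illustrative, not part of f2fs:

```python
# Flags copied from the visible part of the ipu_policy table above;
# the table is truncated in this hunk, so lower bits are omitted.
F2FS_IPU_FLAGS = {
    0x04: "F2FS_IPU_UTIL",
    0x08: "F2FS_IPU_SSR_UTIL",
    0x10: "F2FS_IPU_FSYNC",
    0x20: "F2FS_IPU_ASYNC",
    0x40: "F2FS_IPU_NOCACHE",
    0x80: "F2FS_IPU_HONOR_OPU_WRITE",
}

def decode_ipu_policy(value):
    """Return the names of all known IPU policy flags set in `value`."""
    return [name for bit, name in sorted(F2FS_IPU_FLAGS.items()) if value & bit]
```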
@@ -98,6 +99,33 @@ Description: Controls the issue rate of discard commands that consist of small
checkpoint is triggered, and issued during the checkpoint.
By default, it is disabled with 0.
What: /sys/fs/f2fs/<disk>/max_discard_request
Date: December 2021
Contact: "Konstantin Vyshetsky" <vkon@google.com>
Description: Controls the number of discards a thread will issue at a time.
Higher number will allow the discard thread to finish its work
faster, at the cost of higher latency for incoming I/O.
What: /sys/fs/f2fs/<disk>/min_discard_issue_time
Date: December 2021
Contact: "Konstantin Vyshetsky" <vkon@google.com>
Description: Controls the interval the discard thread will wait between
issuing discard requests when there are discards to be issued and
no I/O aware interruptions occur.
What: /sys/fs/f2fs/<disk>/mid_discard_issue_time
Date: December 2021
Contact: "Konstantin Vyshetsky" <vkon@google.com>
Description: Controls the interval the discard thread will wait between
issuing discard requests when there are discards to be issued and
an I/O aware interruption occurs.
What: /sys/fs/f2fs/<disk>/max_discard_issue_time
Date: December 2021
Contact: "Konstantin Vyshetsky" <vkon@google.com>
Description: Controls the interval the discard thread will wait when there are
no discard operations to be issued.
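Taken together, the three issue-time knobs select the discard thread's sleep interval based on whether discards are pending and whether an I/O-aware interruption occurred. A minimal model of that selection logic, as implied by the three descriptions above (function and parameter names are hypothetical):

```python
def discard_wait_time(pending, io_interrupted, min_t, mid_t, max_t):
    """Pick the discard thread's sleep interval per the three sysfs knobs."""
    if not pending:
        return max_t       # no discards to issue: back off the longest
    if io_interrupted:
        return mid_t       # I/O-aware interruption occurred: moderate wait
    return min_t           # discards pending, no interruption: shortest wait
```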
What: /sys/fs/f2fs/<disk>/discard_granularity
Date: July 2017
Contact: "Chao Yu" <yuchao0@huawei.com>
@@ -269,11 +297,16 @@ Description: Shows current reserved blocks in system, it may be temporarily
What: /sys/fs/f2fs/<disk>/gc_urgent
Date: August 2017
Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
Description: Do background GC agressively when set. When gc_urgent = 1,
background thread starts to do GC by given gc_urgent_sleep_time
interval. When gc_urgent = 2, F2FS will lower the bar of
checking idle in order to process outstanding discard commands
and GC a little bit aggressively. It is set to 0 by default.
Description: Do background GC aggressively when set. Set to 0 by default.
gc urgent high(1): does GC forcibly in a period of given
gc_urgent_sleep_time and ignores I/O idling check. uses greedy
GC approach and turns SSR mode on.
gc urgent low(2): lowers the bar of checking I/O idling in
order to process outstanding discard commands and GC a
little bit aggressively. uses cost benefit GC approach.
gc urgent mid(3): does GC forcibly in a period of given
gc_urgent_sleep_time and executes a mid level of I/O idling check.
uses cost benefit GC approach.
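Switching modes is a plain sysfs write of the numeric value. A small sketch mapping the mode names above to the value written to ``gc_urgent``; the helper only formats the path and value, it does not touch sysfs, and its name is invented for illustration:

```python
# Mode values taken from the description above: 0 normal, 1 urgent high,
# 2 urgent low, 3 urgent mid.
GC_URGENT_MODES = {"normal": 0, "high": 1, "low": 2, "mid": 3}

def gc_urgent_write(disk, mode):
    """Return the sysfs path and the string one would write to switch mode."""
    return ("/sys/fs/f2fs/%s/gc_urgent" % disk, str(GC_URGENT_MODES[mode]))
```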
What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
Date: August 2017
@@ -430,6 +463,7 @@ Description: Show status of f2fs superblock in real time.
0x800 SBI_QUOTA_SKIP_FLUSH skip flushing quota in current CP
0x1000 SBI_QUOTA_NEED_REPAIR quota file may be corrupted
0x2000 SBI_IS_RESIZEFS resizefs is in process
0x4000 SBI_IS_FREEZING freefs is in process
====== ===================== =================================
What: /sys/fs/f2fs/<disk>/ckpt_thread_ioprio
@@ -503,7 +537,7 @@ Date: July 2021
Contact: "Daeho Jeong" <daehojeong@google.com>
Description: Show how many segments have been reclaimed by GC during a specific
GC mode (0: GC normal, 1: GC idle CB, 2: GC idle greedy,
3: GC idle AT, 4: GC urgent high, 5: GC urgent low)
3: GC idle AT, 4: GC urgent high, 5: GC urgent low, 6: GC urgent mid)
You can re-initialize this value to "0".
What: /sys/fs/f2fs/<disk>/gc_segment_mode
@@ -540,3 +574,9 @@ Contact: "Daeho Jeong" <daehojeong@google.com>
Description: You can set the trial count limit for GC urgent high mode with this value.
If GC thread gets to the limit, the mode will turn back to GC normal mode.
By default, the value is zero, which means there is no limit like before.
What: /sys/fs/f2fs/<disk>/max_roll_forward_node_blocks
Date: January 2022
Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
Description: Controls max # of node block writes to be used for roll forward
recovery. This can limit the roll forward recovery time.


@@ -31,6 +31,7 @@ the Linux memory management.
idle_page_tracking
ksm
memory-hotplug
multigen_lru
nommu-mmap
numa_memory_policy
numaperf


@@ -0,0 +1,152 @@
.. SPDX-License-Identifier: GPL-2.0
=============
Multi-Gen LRU
=============
The multi-gen LRU is an alternative LRU implementation that optimizes
page reclaim and improves performance under memory pressure. Page
reclaim decides the kernel's caching policy and ability to overcommit
memory. It directly impacts the kswapd CPU usage and RAM efficiency.
Quick start
===========
Build the kernel with the following configurations.
* ``CONFIG_LRU_GEN=y``
* ``CONFIG_LRU_GEN_ENABLED=y``
All set!
Runtime options
===============
``/sys/kernel/mm/lru_gen/`` contains stable ABIs described in the
following subsections.
Kill switch
-----------
``enable`` accepts different values to enable or disable the following
components. Its default value depends on ``CONFIG_LRU_GEN_ENABLED``.
All the components should be enabled unless some of them have
unforeseen side effects. Writing to ``enable`` has no effect when a
component is not supported by the hardware, and valid values will be
accepted even when the main switch is off.
====== ===============================================================
Values Components
====== ===============================================================
0x0001 The main switch for the multi-gen LRU.
0x0002 Clearing the accessed bit in leaf page table entries in large
batches, when MMU sets it (e.g., on x86). This behavior can
theoretically worsen lock contention (mmap_lock). If it is
disabled, the multi-gen LRU will suffer a minor performance
degradation.
0x0004 Clearing the accessed bit in non-leaf page table entries as
well, when MMU sets it (e.g., on x86). This behavior was not
verified on x86 varieties other than Intel and AMD. If it is
disabled, the multi-gen LRU will suffer a negligible
performance degradation.
[yYnN] Apply to all the components above.
====== ===============================================================
E.g.,
::
echo y >/sys/kernel/mm/lru_gen/enabled
cat /sys/kernel/mm/lru_gen/enabled
0x0007
echo 5 >/sys/kernel/mm/lru_gen/enabled
cat /sys/kernel/mm/lru_gen/enabled
0x0005
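The value read back from ``enabled`` is a bitmask of the components in the table above, which is how ``echo 5`` yields ``0x0005``. A small sketch decoding it (component labels abbreviated here for illustration):

```python
# Bits per the table above: 0x0001 main switch, 0x0002 leaf-PTE batching,
# 0x0004 non-leaf PTE accessed-bit clearing.
LRU_GEN_COMPONENTS = {
    0x0001: "main switch",
    0x0002: "leaf PTE accessed-bit clearing in large batches",
    0x0004: "non-leaf PTE accessed-bit clearing",
}

def decode_enabled(value):
    """Return the names of all enabled multi-gen LRU components."""
    return [name for bit, name in sorted(LRU_GEN_COMPONENTS.items())
            if value & bit]
```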
Thrashing prevention
--------------------
Personal computers are more sensitive to thrashing because it can
cause janks (lags when rendering UI) and negatively impact user
experience. The multi-gen LRU offers thrashing prevention to the
majority of laptop and desktop users who do not have ``oomd``.
Users can write ``N`` to ``min_ttl_ms`` to prevent the working set of
``N`` milliseconds from getting evicted. The OOM killer is triggered
if this working set cannot be kept in memory. In other words, this
option works as an adjustable pressure relief valve, and when open, it
terminates applications that are hopefully not being used.
Based on the average human detectable lag (~100ms), ``N=1000`` usually
eliminates intolerable janks due to thrashing. Larger values like
``N=3000`` make janks less noticeable at the risk of premature OOM
kills.
The default value ``0`` means disabled.
Experimental features
=====================
``/sys/kernel/debug/lru_gen`` accepts commands described in the
following subsections. Multiple command lines are supported, as is
concatenation with delimiters ``,`` and ``;``.
``/sys/kernel/debug/lru_gen_full`` provides additional stats for
debugging. ``CONFIG_LRU_GEN_STATS=y`` keeps historical stats from
evicted generations in this file.
Working set estimation
----------------------
Working set estimation measures how much memory an application
requires in a given time interval, and it is usually done with little
impact on the performance of the application. E.g., data centers want
to optimize job scheduling (bin packing) to improve memory
utilizations. When a new job comes in, the job scheduler needs to find
out whether each server it manages can allocate a certain amount of
memory for this new job before it can pick a candidate. To do so, this
job scheduler needs to estimate the working sets of the existing jobs.
When it is read, ``lru_gen`` returns a histogram of numbers of pages
accessed over different time intervals for each memcg and node.
``MAX_NR_GENS`` decides the number of bins for each histogram.
::
memcg memcg_id memcg_path
node node_id
min_gen_nr age_in_ms nr_anon_pages nr_file_pages
...
max_gen_nr age_in_ms nr_anon_pages nr_file_pages
Each generation contains an estimated number of pages that have been
accessed within ``age_in_ms`` non-cumulatively. E.g., ``min_gen_nr``
contains the coldest pages and ``max_gen_nr`` contains the hottest
pages, since ``age_in_ms`` of the former is the largest and that of
the latter is the smallest.
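The histogram format above is line-oriented and easy to post-process. A hedged sketch of a parser following the template above; the function name and the returned layout are invented for illustration:

```python
def parse_lru_gen(text):
    """Parse lru_gen histogram output into
    {(memcg_id, node_id): {gen_nr: (age_in_ms, nr_anon_pages, nr_file_pages)}}."""
    out, memcg, node = {}, None, None
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "memcg":          # "memcg memcg_id memcg_path"
            memcg = int(fields[1])
        elif fields[0] == "node":         # "node node_id"
            node = int(fields[1])
        else:                             # "gen_nr age_in_ms nr_anon nr_file"
            gens = out.setdefault((memcg, node), {})
            gens[int(fields[0])] = tuple(int(x) for x in fields[1:4])
    return out
```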
Users can write ``+ memcg_id node_id max_gen_nr
[can_swap [full_scan]]`` to ``lru_gen`` to create a new generation
``max_gen_nr+1``. ``can_swap`` defaults to the swap setting and, if it
is set to ``1``, it forces the scan of anon pages when swap is off.
``full_scan`` defaults to ``1`` and, if it is set to ``0``, it reduces
the overhead as well as the coverage when scanning page tables.
A typical use case is that a job scheduler writes to ``lru_gen`` at a
certain time interval to create new generations, and it ranks the
servers it manages based on the sizes of their cold memory defined by
this time interval.
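A job scheduler driving the aging would format the ``+`` command above as a simple space-separated string. A minimal sketch (helper name and argument values are illustrative):

```python
def lru_gen_age_cmd(memcg_id, node_id, max_gen_nr,
                    can_swap=None, full_scan=None):
    """Format the '+' command that creates generation max_gen_nr+1.
    can_swap and full_scan are optional, per the interface described above."""
    parts = ["+", str(memcg_id), str(node_id), str(max_gen_nr)]
    if can_swap is not None:
        parts.append(str(can_swap))
        if full_scan is not None:
            parts.append(str(full_scan))
    return " ".join(parts)
```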
Proactive reclaim
-----------------
Proactive reclaim induces memory reclaim when there is no memory
pressure and usually targets cold memory only. E.g., when a new job
comes in, the job scheduler wants to proactively reclaim memory on the
server it has selected to improve the chance of successfully landing
this new job.
Users can write ``- memcg_id node_id min_gen_nr [swappiness
[nr_to_reclaim]]`` to ``lru_gen`` to evict generations less than or
equal to ``min_gen_nr``. Note that ``min_gen_nr`` should be less than
``max_gen_nr-1`` as ``max_gen_nr`` and ``max_gen_nr-1`` are not fully
aged and therefore cannot be evicted. ``swappiness`` overrides the
default value in ``/proc/sys/vm/swappiness``. ``nr_to_reclaim`` limits
the number of pages to evict.
A typical use case is that a job scheduler writes to ``lru_gen``
before it tries to land a new job on a server, and if it fails to
materialize the cold memory without impacting the existing jobs on
this server, it retries on the next server according to the ranking
result obtained from the working set estimation step described
earlier.
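Likewise, the ``-`` eviction command, including the ``min_gen_nr`` constraint noted above, can be sketched as follows (helper name and error message are illustrative):

```python
def lru_gen_evict_cmd(memcg_id, node_id, min_gen_nr, max_gen_nr,
                      swappiness=None, nr_to_reclaim=None):
    """Format the '-' command that evicts generations <= min_gen_nr."""
    # max_gen_nr and max_gen_nr-1 are not fully aged and cannot be evicted.
    if min_gen_nr >= max_gen_nr - 1:
        raise ValueError("min_gen_nr must be less than max_gen_nr-1")
    parts = ["-", str(memcg_id), str(node_id), str(min_gen_nr)]
    if swappiness is not None:
        parts.append(str(swappiness))
        if nr_to_reclaim is not None:
            parts.append(str(nr_to_reclaim))
    return " ".join(parts)
```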


@@ -787,7 +787,6 @@ bit 1 print system memory info
bit 2 print timer info
bit 3 print locks info if ``CONFIG_LOCKDEP`` is on
bit 4 print ftrace buffer
bit 5 print all printk messages in buffer
===== ============================================
So for example to print tasks and memory info on panic, user can::


@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
subsystem that the buffer is fully accessible at the elevated privilege
level (and ideally inaccessible or at least read-only at the
lesser-privileged levels).
DMA_ATTR_OVERWRITE
------------------
This is a hint to the DMA-mapping subsystem that the device is expected to
overwrite the entire mapped size, thus the caller does not require any of the
previous buffer contents to be preserved. This allows bounce-buffering
implementations to optimise DMA_FROM_DEVICE transfers.


@@ -44,7 +44,7 @@ patternProperties:
properties:
reg:
description:
Contains the chip-select IDs.
Contains the native Ready/Busy IDs.
nand-ecc-mode:
description:
@@ -174,6 +174,6 @@ examples:
nand-ecc-mode = "soft";
nand-ecc-algo = "bch";
/* NAND chip specific properties */
/* controller specific properties */
};
};


@@ -8,13 +8,11 @@ Required properties:
- reg: should contain 2 entries, one for the registers and one for the direct
mapping area
- reg-names: should contain "regs" and "dirmap"
- interrupts: interrupt line connected to the SPI controller
- clock-names: should contain "ps_clk", "send_clk" and "send_dly_clk"
- clocks: should contain 3 entries for the "ps_clk", "send_clk" and
"send_dly_clk" clocks
Optional properties:
- interrupts: interrupt line connected to the SPI controller
Example:
spi@43c30000 {


@@ -93,6 +93,14 @@ dax A legacy option which is an alias for ``dax=always``.
device=%s Specify a path to an extra device to be used together.
=================== =========================================================
Sysfs Entries
=============
Information about mounted erofs file systems can be found in /sys/fs/erofs.
Each mounted filesystem will have a directory in /sys/fs/erofs based on its
device name (e.g., /sys/fs/erofs/sda).
(see also Documentation/ABI/testing/sysfs-fs-erofs)
On-disk details
===============


@@ -1047,8 +1047,8 @@ astute users may notice some differences in behavior:
may be used to overwrite the source files but isn't guaranteed to be
effective on all filesystems and storage devices.
- Direct I/O is not supported on encrypted files. Attempts to use
direct I/O on such files will fall back to buffered I/O.
- Direct I/O is supported on encrypted files only under some
circumstances. For details, see `Direct I/O support`_.
- The fallocate operations FALLOC_FL_COLLAPSE_RANGE and
FALLOC_FL_INSERT_RANGE are not supported on encrypted files and will
@@ -1179,6 +1179,27 @@ Inline encryption doesn't affect the ciphertext or other aspects of
the on-disk format, so users may freely switch back and forth between
using "inlinecrypt" and not using "inlinecrypt".
Direct I/O support
==================
For direct I/O on an encrypted file to work, the following conditions
must be met (in addition to the conditions for direct I/O on an
unencrypted file):
* The file must be using inline encryption. Usually this means that
the filesystem must be mounted with ``-o inlinecrypt`` and inline
encryption hardware must be present. However, a software fallback
is also available. For details, see `Inline encryption support`_.
* The I/O request must be fully aligned to the filesystem block size.
This means that the file position the I/O is targeting, the lengths
of all I/O segments, and the memory addresses of all I/O buffers
must be multiples of this value. Note that the filesystem block
size may be greater than the logical block size of the block device.
If either of the above conditions is not met, then direct I/O on the
encrypted file will fall back to buffered I/O.
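The alignment condition can be checked mechanically: the file position, every segment length, and every buffer address must all be multiples of the filesystem block size (which may exceed the device's logical block size). A minimal sketch; the function name is hypothetical:

```python
def dio_aligned(pos, buf_addr, length, fs_block_size):
    """True if a direct-I/O segment meets the alignment condition above:
    position, buffer address, and length are all multiples of the
    filesystem block size."""
    return all(x % fs_block_size == 0 for x in (pos, buf_addr, length))
```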
Implementation details
======================


@@ -168,16 +168,7 @@ Trees
- The finalized and tagged releases of all stable kernels can be found
in separate branches per version at:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
- The release candidate of all stable kernel versions can be found at:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/
.. warning::
The -stable-rc tree is a snapshot in time of the stable-queue tree and
will change frequently, hence will be rebased often. It should only be
used for testing purposes (e.g. to be consumed by CI systems).
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Review committee


@@ -261,10 +261,6 @@ alc-sense-combo
huawei-mbx-stereo
Enable initialization verbs for Huawei MBX stereo speakers;
might be risky, try this at your own risk
alc298-samsung-headphone
Samsung laptops with ALC298
alc256-samsung-headphone
Samsung laptops with ALC256
ALC66x/67x/892
==============


@@ -41,6 +41,7 @@ descriptions of data structures and algorithms.
ksm
memory-model
mmu_notifier
multigen_lru
numa
overcommit-accounting
page_migration


@@ -0,0 +1,160 @@
.. SPDX-License-Identifier: GPL-2.0
=============
Multi-Gen LRU
=============
The multi-gen LRU is an alternative LRU implementation that optimizes
page reclaim and improves performance under memory pressure. Page
reclaim decides the kernel's caching policy and ability to overcommit
memory. It directly impacts the kswapd CPU usage and RAM efficiency.
Design overview
===============
Objectives
----------
The design objectives are:
* Good representation of access recency
* Try to profit from spatial locality
* Fast paths to make obvious choices
* Simple self-correcting heuristics
The representation of access recency is at the core of all LRU
implementations. In the multi-gen LRU, each generation represents a
group of pages with similar access recency. Generations establish a
common frame of reference and therefore help make better choices,
e.g., between different memcgs on a computer or different computers in
a data center (for job scheduling).
Exploiting spatial locality improves efficiency when gathering the
accessed bit. A rmap walk targets a single page and does not try to
profit from discovering a young PTE. A page table walk can sweep all
the young PTEs in an address space, but the address space can be too
large to make a profit. The key is to optimize both methods and use
them in combination.
Fast paths reduce code complexity and runtime overhead. Unmapped pages
do not require TLB flushes; clean pages do not require writeback.
These facts are only helpful when other conditions, e.g., access
recency, are similar. With generations as a common frame of reference,
additional factors stand out. But obvious choices might not be good
choices; thus self-correction is required.
The benefits of simple self-correcting heuristics are self-evident.
Again, with generations as a common frame of reference, this becomes
attainable. Specifically, pages in the same generation can be
categorized based on additional factors, and a feedback loop can
statistically compare the refault percentages across those categories
and infer which of them are better choices.
Assumptions
-----------
The protection of hot pages and the selection of cold pages are based
on page access channels and patterns. There are two access channels:
* Accesses through page tables
* Accesses through file descriptors
The protection of the former channel is by design stronger because:
1. The uncertainty in determining the access patterns of the former
channel is higher due to the approximation of the accessed bit.
2. The cost of evicting the former channel is higher due to the TLB
flushes required and the likelihood of encountering the dirty bit.
3. The penalty of underprotecting the former channel is higher because
applications usually do not prepare themselves for major page
faults like they do for blocked I/O. E.g., GUI applications
commonly use dedicated I/O threads to avoid blocking the rendering
threads.
There are also two access patterns:
* Accesses exhibiting temporal locality
* Accesses not exhibiting temporal locality
For the reasons listed above, the former channel is assumed to follow
the former pattern unless ``VM_SEQ_READ`` or ``VM_RAND_READ`` is
present, and the latter channel is assumed to follow the latter
pattern unless outlying refaults have been observed.
Workflow overview
=================
Evictable pages are divided into multiple generations for each
``lruvec``. The youngest generation number is stored in
``lrugen->max_seq`` for both anon and file types as they are aged on
an equal footing. The oldest generation numbers are stored in
``lrugen->min_seq[]`` separately for anon and file types as clean file
pages can be evicted regardless of swap constraints. These three
variables are monotonically increasing.
Generation numbers are truncated into ``order_base_2(MAX_NR_GENS+1)``
bits in order to fit into the gen counter in ``page->flags``. Each
truncated generation number is an index to ``lrugen->lists[]``. The
sliding window technique is used to track at least ``MIN_NR_GENS`` and
at most ``MAX_NR_GENS`` generations. The gen counter stores a value
within ``[1, MAX_NR_GENS]`` while a page is on one of
``lrugen->lists[]``; otherwise it stores zero.
Each generation is divided into multiple tiers. Tiers represent
different ranges of numbers of accesses through file descriptors. A
page accessed ``N`` times through file descriptors is in tier
``order_base_2(N)``. In contrast to moving across generations, which
requires the LRU lock, moving across tiers only requires operations on
``page->flags`` and therefore has a negligible cost. A feedback loop
modeled after the PID controller monitors refaults over all the tiers
from anon and file types and decides which tiers from which types to
evict or protect.
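The arithmetic above is easy to model: ``order_base_2(n)`` is ceil(log2(n)), a page accessed ``N`` times through file descriptors sits in tier ``order_base_2(N)``, and the gen counter stores ``(seq % MAX_NR_GENS) + 1`` while the page is on a list. An illustrative sketch (the ``MAX_NR_GENS`` value here is chosen for demonstration only, not taken from the kernel build):

```python
MAX_NR_GENS = 4  # illustrative; the real value is a kernel build-time constant

def order_base_2(n):
    """ceil(log2(n)) for n >= 1, e.g. 1 -> 0, 2 -> 1, 3 -> 2, 4 -> 2."""
    return (n - 1).bit_length()

def tier_of(n_accesses):
    """Tier of a page accessed n_accesses times through file descriptors."""
    return order_base_2(n_accesses)

def gen_counter(seq):
    """Gen counter value for sequence number seq while on lrugen->lists[]."""
    return seq % MAX_NR_GENS + 1
```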
There are two conceptually independent procedures: the aging and the
eviction. They form a closed-loop system, i.e., the page reclaim.
Aging
-----
The aging produces young generations. Given an ``lruvec``, it
increments ``max_seq`` when ``max_seq-min_seq+1`` approaches
``MIN_NR_GENS``. The aging promotes hot pages to the youngest
generation when it finds them accessed through page tables; the
demotion of cold pages happens consequently when it increments
``max_seq``. The aging uses page table walks and rmap walks to find
young PTEs. For the former, it iterates ``lruvec_memcg()->mm_list``
and calls ``walk_page_range()`` with each ``mm_struct`` on this list
to scan PTEs. On finding a young PTE, it clears the accessed bit and
updates the gen counter of the page mapped by this PTE to
``(max_seq%MAX_NR_GENS)+1``. After each iteration of this list, it
increments ``max_seq``. For the latter, when the eviction walks the
rmap and finds a young PTE, the aging scans the adjacent PTEs and
follows the same steps just described.
Eviction
--------
The eviction consumes old generations. Given an ``lruvec``, it
increments ``min_seq`` when ``lrugen->lists[]`` indexed by
``min_seq%MAX_NR_GENS`` becomes empty. To select a type and a tier to
evict from, it first compares ``min_seq[]`` to select the older type.
If both types are equally old, it selects the one whose first tier has
a lower refault percentage. The first tier contains single-use
unmapped clean pages, which are the best bet. The eviction sorts a
page according to the gen counter if the aging has found this page
accessed through page tables and updated the gen counter. It also
moves a page to the next generation, i.e., ``min_seq+1``, if this page
was accessed multiple times through file descriptors and the feedback
loop has detected outlying refaults from the tier this page is in. To
do this, the feedback loop uses the first tier as the baseline, for
the reason stated earlier.
Summary
-------
The multi-gen LRU can be disassembled into the following parts:
* Generations
* Page table walks
* Rmap walks
* Bloom filters
* The PID controller
The aging and the eviction form a producer-consumer model; specifically,
the latter drives the former by the sliding window over generations.
Within the aging, rmap walks drive page table walks by inserting hot
densely populated page tables to the Bloom filters. Within the
eviction, the PID controller uses refaults as the feedback to select
types to evict and tiers to protect.


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 110
SUBLEVEL = 107
EXTRAVERSION =
NAME = Dare mighty things

File diff suppressed because it is too large


@@ -240,6 +240,7 @@
config_group_init_type_name
config_item_init_type_name
config_item_put
console_set_on_cmdline
console_suspend_enabled
console_trylock
console_unlock
@@ -285,6 +286,8 @@
cpuhp_tasks_frozen
cpu_hwcap_keys
cpu_hwcaps
cpuidle_driver_state_disabled
cpuidle_get_driver
cpuidle_governor_latency_req
cpuidle_pause_and_lock
cpuidle_resume_and_unlock
@@ -388,6 +391,7 @@
devfreq_update_interval
dev_fwnode
dev_get_by_name
dev_get_stats
device_add
device_add_disk
device_add_groups
@@ -567,6 +571,7 @@
dma_fence_remove_callback
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_timeout
dma_free_attrs
dma_free_noncoherent
dma_get_sgtable_attrs
@@ -769,6 +774,7 @@
drm_poll
drm_prime_gem_destroy
drm_printf
__drm_printfn_debug
__drm_printfn_info
__drm_printfn_seq_file
drm_property_blob_get
@@ -840,6 +846,7 @@
find_last_bit
find_next_bit
find_next_zero_bit
find_pid_ns
find_task_by_vpid
find_vma
finish_wait
@@ -896,6 +903,22 @@
get_device
__get_free_pages
get_governor_parent_kobj
gether_cleanup
gether_connect
gether_disconnect
gether_get_dev_addr
gether_get_host_addr
gether_get_host_addr_u8
gether_get_ifname
gether_get_qmult
gether_register_netdev
gether_set_dev_addr
gether_set_gadget
gether_set_host_addr
gether_set_ifname
gether_set_qmult
gether_setup_name_default
get_pfnblock_flags_mask
get_pid_task
get_random_bytes
get_random_bytes_arch
@@ -990,6 +1013,7 @@
i2c_unregister_device
i2c_verify_client
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_alloc_cyclic
@@ -1028,6 +1052,7 @@
input_mt_destroy_slots
input_mt_init_slots
input_mt_report_slot_state
input_mt_sync_frame
input_open_device
input_register_device
input_register_handle
@@ -1038,6 +1063,10 @@
input_unregister_device
input_unregister_handle
input_unregister_handler
interval_tree_insert
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_sqrt
int_to_scsilun
iomem_resource
@@ -1088,7 +1117,9 @@
irq_create_mapping_affinity
irq_create_of_mapping
__irq_domain_add
irq_domain_get_irq_data
irq_domain_remove
irq_domain_set_info
irq_domain_xlate_twocell
irq_find_mapping
irq_get_irq_data
@@ -1229,6 +1260,7 @@
mbox_free_channel
mbox_request_channel
mbox_send_message
memchr
memcmp
memcpy
__memcpy_fromio
@@ -1328,6 +1360,7 @@
nsec_to_clock_t
ns_to_timespec64
__num_online_cpus
nvhe_hyp_panic_handler
nvmem_device_put
nvmem_device_read
nvmem_device_write
@@ -1422,7 +1455,7 @@
__page_frag_cache_drain
page_frag_free
page_mapping
__page_pinner_migration_failed
__page_pinner_put_page
panic
panic_notifier_list
param_array_ops
@@ -1456,6 +1489,8 @@
pci_irq_vector
pci_load_and_free_saved_state
pci_load_saved_state
pci_msi_mask_irq
pci_msi_unmask_irq
pci_read_config_dword
pci_read_config_word
__pci_register_driver
@@ -1474,6 +1509,7 @@
pci_write_config_dword
pci_write_config_word
PDE_DATA
pelt_load_avg_max
__per_cpu_offset
perf_aux_output_begin
perf_aux_output_end
@@ -1512,9 +1548,11 @@
pin_user_pages_fast
pin_user_pages_remote
pktgen_xfrm_outer_mode_output
pkvm_iommu_finalize
pkvm_iommu_resume
pkvm_iommu_s2mpu_register
pkvm_iommu_suspend
pkvm_iommu_sysmmu_sync_register
platform_bus_type
platform_device_add
platform_device_add_data
@@ -1532,6 +1570,7 @@
platform_find_device_by_driver
platform_get_irq
platform_get_irq_byname
platform_get_irq_optional
platform_get_resource
platform_get_resource_byname
platform_irq_count
@@ -1653,6 +1692,7 @@
__rcu_read_unlock
rdev_get_drvdata
rdev_get_id
reboot_mode
refcount_dec_not_one
refcount_warn_saturate
__refrigerator
@@ -2044,6 +2084,7 @@
submit_bio
submit_bio_wait
subsys_system_register
suspend_set_ops
__sw_hweight32
__sw_hweight64
sync_file_create
@@ -2121,6 +2162,7 @@
_totalram_pages
touch_softlockup_watchdog
__trace_bprintk
__trace_bputs
trace_event_buffer_commit
trace_event_buffer_reserve
trace_event_ignore_this_pid
@@ -2128,6 +2170,7 @@
trace_event_reg
trace_handle_return
__traceiter_android_rvh_arm64_serror_panic
__traceiter_android_rvh_attach_entity_load_avg
__traceiter_android_rvh_bad_mode
__traceiter_android_rvh_cgroup_force_kthread_migration
__traceiter_android_rvh_check_preempt_wakeup
@@ -2135,6 +2178,7 @@
__traceiter_android_rvh_cpu_overutilized
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_dequeue_task_fair
__traceiter_android_rvh_detach_entity_load_avg
__traceiter_android_rvh_die_kernel_fault
__traceiter_android_rvh_do_mem_abort
__traceiter_android_rvh_do_sea
@@ -2145,19 +2189,25 @@
__traceiter_android_rvh_find_energy_efficient_cpu
__traceiter_android_rvh_irqs_disable
__traceiter_android_rvh_irqs_enable
__traceiter_android_rvh_pci_d3_sleep
__traceiter_android_rvh_post_init_entity_util_avg
__traceiter_android_rvh_preempt_disable
__traceiter_android_rvh_preempt_enable
__traceiter_android_rvh_remove_entity_load_avg
__traceiter_android_rvh_sched_fork
__traceiter_android_rvh_select_task_rq_fair
__traceiter_android_rvh_select_task_rq_rt
__traceiter_android_rvh_set_iowait
__traceiter_android_rvh_set_task_cpu
__traceiter_android_rvh_typec_tcpci_chk_contaminant
__traceiter_android_rvh_typec_tcpci_get_vbus
__traceiter_android_rvh_uclamp_eff_get
__traceiter_android_rvh_uclamp_rq_util_with
__traceiter_android_rvh_ufs_complete_init
__traceiter_android_rvh_ufs_reprogram_all_keys
__traceiter_android_rvh_update_blocked_fair
__traceiter_android_rvh_update_load_avg
__traceiter_android_rvh_update_rt_rq_load_avg
__traceiter_android_rvh_util_est_update
__traceiter_android_vh_arch_set_freq_scale
__traceiter_android_vh_cma_alloc_finish
@@ -2167,17 +2217,23 @@
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_enable_thermal_genl_check
__traceiter_android_vh_ep_create_wakeup_source
__traceiter_android_vh_get_user_pages
__traceiter_android_vh___get_user_pages_remote
__traceiter_android_vh_internal_get_user_pages_fast
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_of_i2c_get_board_info
__traceiter_android_vh_pagecache_get_page
__traceiter_android_vh_pin_user_pages
__traceiter_android_vh_rmqueue
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_setscheduler_uclamp
__traceiter_android_vh_snd_compr_use_pause_in_drain
__traceiter_android_vh_sound_usb_support_cpu_suspend
__traceiter_android_vh_sysrq_crash
__traceiter_android_vh_thermal_pm_notify_suspend
__traceiter_android_vh_timerfd_create
__traceiter_android_vh_try_grab_compound_head
__traceiter_android_vh_typec_store_partner_src_caps
__traceiter_android_vh_typec_tcpci_override_toggling
__traceiter_android_vh_typec_tcpm_get_timer
@@ -2218,6 +2274,7 @@
__traceiter_suspend_resume
trace_output_call
__tracepoint_android_rvh_arm64_serror_panic
__tracepoint_android_rvh_attach_entity_load_avg
__tracepoint_android_rvh_bad_mode
__tracepoint_android_rvh_cgroup_force_kthread_migration
__tracepoint_android_rvh_check_preempt_wakeup
@@ -2225,6 +2282,7 @@
__tracepoint_android_rvh_cpu_overutilized
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_detach_entity_load_avg
__tracepoint_android_rvh_die_kernel_fault
__tracepoint_android_rvh_do_mem_abort
__tracepoint_android_rvh_do_sea
@@ -2235,19 +2293,25 @@
__tracepoint_android_rvh_find_energy_efficient_cpu
__tracepoint_android_rvh_irqs_disable
__tracepoint_android_rvh_irqs_enable
__tracepoint_android_rvh_pci_d3_sleep
__tracepoint_android_rvh_post_init_entity_util_avg
__tracepoint_android_rvh_preempt_disable
__tracepoint_android_rvh_preempt_enable
__tracepoint_android_rvh_remove_entity_load_avg
__tracepoint_android_rvh_sched_fork
__tracepoint_android_rvh_select_task_rq_fair
__tracepoint_android_rvh_select_task_rq_rt
__tracepoint_android_rvh_set_iowait
__tracepoint_android_rvh_set_task_cpu
__tracepoint_android_rvh_typec_tcpci_chk_contaminant
__tracepoint_android_rvh_typec_tcpci_get_vbus
__tracepoint_android_rvh_uclamp_eff_get
__tracepoint_android_rvh_uclamp_rq_util_with
__tracepoint_android_rvh_ufs_complete_init
__tracepoint_android_rvh_ufs_reprogram_all_keys
__tracepoint_android_rvh_update_blocked_fair
__tracepoint_android_rvh_update_load_avg
__tracepoint_android_rvh_update_rt_rq_load_avg
__tracepoint_android_rvh_util_est_update
__tracepoint_android_vh_arch_set_freq_scale
__tracepoint_android_vh_cma_alloc_finish
@@ -2257,17 +2321,23 @@
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_enable_thermal_genl_check
__tracepoint_android_vh_ep_create_wakeup_source
__tracepoint_android_vh_get_user_pages
__tracepoint_android_vh___get_user_pages_remote
__tracepoint_android_vh_internal_get_user_pages_fast
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_android_vh_of_i2c_get_board_info
__tracepoint_android_vh_pagecache_get_page
__tracepoint_android_vh_pin_user_pages
__tracepoint_android_vh_rmqueue
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_setscheduler_uclamp
__tracepoint_android_vh_snd_compr_use_pause_in_drain
__tracepoint_android_vh_sound_usb_support_cpu_suspend
__tracepoint_android_vh_sysrq_crash
__tracepoint_android_vh_thermal_pm_notify_suspend
__tracepoint_android_vh_timerfd_create
__tracepoint_android_vh_try_grab_compound_head
__tracepoint_android_vh_typec_store_partner_src_caps
__tracepoint_android_vh_typec_tcpci_override_toggling
__tracepoint_android_vh_typec_tcpm_get_timer
@@ -2377,26 +2447,37 @@
unregister_virtio_driver
up
update_devfreq
___update_load_avg
__update_load_avg_blocked_se
___update_load_sum
update_rq_clock
up_read
up_write
usb_add_function
usb_add_hcd
usb_assign_descriptors
usb_copy_descriptors
__usb_create_hcd
usb_disabled
usb_enable_autosuspend
usb_ep_alloc_request
usb_ep_autoconfig
usb_ep_disable
usb_ep_enable
usb_ep_free_request
usb_ep_queue
usb_free_all_descriptors
usb_function_register
usb_function_unregister
usb_gadget_activate
usb_gadget_deactivate
usb_gadget_set_state
usb_gstrings_attach
usb_hcd_is_primary_hcd
usb_hcd_platform_shutdown
usb_hub_find_child
usb_interface_id
usb_os_desc_prepare_interf_dir
usb_otg_state_string
usb_put_function_instance
usb_put_hcd

@@ -791,7 +791,6 @@
of_get_next_available_child
of_parse_phandle
of_property_read_u64
__page_pinner_migration_failed
__put_page
put_unused_fd
rb_erase

@@ -28,9 +28,6 @@
bt_warn
cancel_delayed_work_sync
cancel_work_sync
capable
cfg80211_inform_bss_data
cfg80211_put_bss
__cfi_slowpath
__check_object_size
__class_create
@@ -62,15 +59,12 @@
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
destroy_workqueue
dev_close
_dev_err
device_add_disk
device_create
device_initialize
device_init_wakeup
device_register
device_release_driver
device_unregister
_dev_info
@@ -82,7 +76,6 @@
devm_request_threaded_irq
_dev_notice
dev_queue_xmit
dev_set_name
_dev_warn
dma_alloc_attrs
dma_buf_export
@@ -96,7 +89,6 @@
dma_set_mask
dma_sync_sg_for_device
dma_unmap_sg_attrs
down_read
down_write
ether_setup
ethtool_op_get_link
@@ -123,14 +115,8 @@
hci_recv_frame
hci_register_dev
hci_unregister_dev
hwrng_register
hwrng_unregister
ida_alloc_range
ida_free
idr_alloc
idr_destroy
idr_remove
__init_rwsem
__init_swait_queue_head
init_timer_key
init_wait_entry
@@ -160,12 +146,10 @@
kmem_cache_free
kmemdup
kobject_uevent
krealloc
kstrdup
kstrndup
kstrtoint
kstrtouint
kstrtoull
ktime_get
ktime_get_mono_fast_ns
ktime_get_raw_ts64
@@ -184,13 +168,10 @@
memcpy
memmove
memparse
memremap
memset
memstart_addr
memunmap
misc_deregister
misc_register
mod_timer
module_layout
module_put
__msecs_to_jiffies
@@ -216,15 +197,11 @@
nf_conntrack_destroy
no_llseek
nonseekable_open
noop_llseek
nr_cpu_ids
__num_online_cpus
of_find_property
of_get_property
of_property_read_variable_u32_array
__page_pinner_migration_failed
__page_pinner_put_page
param_ops_bool
param_ops_charp
param_ops_int
param_ops_uint
passthru_features_check
@@ -250,11 +227,8 @@
platform_get_irq
platform_get_resource
pm_runtime_allow
__pm_runtime_disable
pm_runtime_enable
pm_runtime_force_resume
pm_runtime_force_suspend
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_suspend
@@ -305,9 +279,7 @@
schedule
schedule_timeout
scnprintf
seq_lseek
seq_printf
seq_read
serio_close
serio_interrupt
serio_open
@@ -330,7 +302,6 @@
snd_card_free
snd_card_new
snd_card_register
snd_ctl_enum_info
snd_ctl_sync_vmaster
snd_device_new
snd_jack_new
@@ -355,13 +326,11 @@
strncmp
strncpy
strscpy
strsep
sync_file_create
synchronize_rcu
sysfs_create_group
__sysfs_match_string
sysfs_remove_group
sysfs_remove_link
system_wq
trace_event_buffer_commit
trace_event_buffer_reserve
@@ -385,7 +354,6 @@
unregister_netdevice_queue
unregister_virtio_device
unregister_virtio_driver
up_read
up_write
usb_alloc_urb
usb_anchor_urb
@@ -396,7 +364,6 @@
usb_register_driver
usb_submit_urb
usb_unanchor_urb
__usecs_to_jiffies
usleep_range
vabits_actual
vfree
@@ -457,10 +424,12 @@
mmc_remove_host
mmc_request_done
mmc_send_tuning
of_get_property
pinctrl_lookup_state
pinctrl_pm_select_sleep_state
pinctrl_select_default_state
pinctrl_select_state
__pm_runtime_idle
regulator_disable
regulator_enable
reset_control_assert
@@ -525,26 +494,9 @@
netdev_master_upper_dev_link
rtnl_is_locked
# required by gnss-cmdline.ko
bus_find_device
device_find_child
device_match_name
platform_bus_type
strstr
# required by gnss-serial.ko
gnss_allocate_device
gnss_deregister_device
gnss_insert_raw
gnss_put_device
gnss_register_device
serdev_device_close
serdev_device_open
serdev_device_set_baudrate
serdev_device_set_flow_control
serdev_device_wait_until_sent
serdev_device_write
serdev_device_write_wakeup
# required by goldfish_address_space.ko
memremap
memunmap
# required by goldfish_battery.ko
power_supply_changed
@@ -584,12 +536,6 @@
skb_queue_head
skb_queue_purge
# required by ledtrig-audio.ko
led_set_brightness_nosleep
led_trigger_event
led_trigger_register
led_trigger_unregister
# required by lzo-rle.ko
lzorle1x_1_compress
@@ -699,11 +645,15 @@
# required by open-dice.ko
devm_memremap
devm_memunmap
of_reserved_mem_lookup
__platform_driver_probe
simple_read_from_buffer
vm_iomap_memory
# required by psmouse.ko
bus_register_notifier
bus_unregister_notifier
del_timer_sync
device_add_groups
device_create_file
device_remove_file
@@ -724,6 +674,7 @@
input_set_capability
kstrtobool
kstrtou8
mod_timer
ps2_begin_command
ps2_cmd_aborted
ps2_command
@@ -737,6 +688,7 @@
serio_rescan
serio_unregister_child_port
strcasecmp
strsep
# required by pulse8-cec.ko
cec_allocate_adapter
@@ -759,6 +711,7 @@
rtc_update_irq
# required by slcan.ko
capable
hex_asc_upper
hex_to_bin
msleep_interruptible
@@ -769,7 +722,6 @@
# required by snd-hda-codec-generic.ko
_ctype
devm_led_classdev_register_ext
snd_ctl_boolean_stereo_info
strlcat
__sw_hweight32
@@ -783,6 +735,8 @@
get_device_system_crosststamp
kvasprintf
ns_to_timespec64
__pm_runtime_disable
pm_runtime_enable
pm_runtime_forbid
__printk_ratelimit
regcache_mark_dirty
@@ -794,6 +748,7 @@
snd_ctl_add_vmaster_hook
snd_ctl_apply_vmaster_followers
snd_ctl_boolean_mono_info
snd_ctl_enum_info
snd_ctl_find_id
snd_ctl_make_virtual_master
snd_ctl_new1
@@ -817,11 +772,14 @@
bus_unregister
device_add
device_del
device_initialize
dev_set_name
kasprintf
kobject_add
kobject_create_and_add
kobject_init
kobject_put
krealloc
pm_runtime_get_if_active
__pm_runtime_set_status
prepare_to_wait
@@ -840,6 +798,7 @@
param_array_ops
param_get_int
param_ops_bint
param_ops_charp
param_set_int
pci_dev_put
pci_disable_msi
@@ -873,35 +832,6 @@
vmap
vunmap
# required by tpm.ko
alloc_chrdev_region
cdev_device_add
cdev_device_del
cdev_init
compat_only_sysfs_link_entry_to_kobj
devm_add_action
efi
efi_tpm_final_log_size
hash_digest_size
idr_get_next
idr_replace
jiffies_to_usecs
memchr_inv
of_property_match_string
pm_suspend_global_flags
securityfs_create_dir
securityfs_create_file
securityfs_remove
seq_open
seq_putc
seq_release
seq_write
unregister_chrdev_region
# required by tpm_vtpm_proxy.ko
anon_inode_getfile
compat_ptr_ioctl
# required by usbip-core.ko
iov_iter_kvec
param_ops_ulong
@@ -917,10 +847,12 @@
devres_free
of_device_is_compatible
of_find_compatible_node
of_find_property
of_get_next_parent
of_parse_phandle
of_platform_populate
of_root
__usecs_to_jiffies
# required by vexpress-sysreg.ko
bgpio_init
@@ -937,6 +869,7 @@
platform_bus
platform_device_add_data
sockfd_lookup
sysfs_remove_link
usb_add_hcd
usb_create_hcd
usb_create_shared_hcd
@@ -958,6 +891,8 @@
# required by virt_wifi.ko
cfg80211_connect_done
cfg80211_disconnected
cfg80211_inform_bss_data
cfg80211_put_bss
cfg80211_scan_done
__dev_get_by_index
dev_printk
@@ -970,9 +905,6 @@
wiphy_register
wiphy_unregister
# required by virt_wifi_sim.ko
ieee80211_get_channel_khz
# required by virtio-gpu.ko
__devm_request_region
dma_fence_match_context
@@ -1090,6 +1022,7 @@
is_vmalloc_addr
kmalloc_order_trace
memdup_user
noop_llseek
seq_puts
sync_file_get_fence
__traceiter_dma_fence_emit
@@ -1102,6 +1035,8 @@
ww_mutex_unlock
# required by virtio-rng.ko
hwrng_register
hwrng_unregister
wait_for_completion_killable
# required by virtio_blk.ko
@@ -1151,6 +1086,8 @@
pipe_unlock
__refrigerator
__register_chrdev
seq_lseek
seq_read
single_open
single_release
__splice_from_pipe
@@ -1159,6 +1096,7 @@
# required by virtio_mmio.ko
device_for_each_child
device_register
devm_platform_ioremap_resource
platform_device_register_full
@@ -1307,16 +1245,23 @@
crypto_has_alg
disk_end_io_acct
disk_start_io_acct
down_read
flush_dcache_page
free_percpu
fsync_bdev
idr_alloc
idr_destroy
idr_find
idr_for_each
idr_remove
__init_rwsem
kstrtou16
kstrtoull
memset64
mutex_is_locked
page_endio
sysfs_streq
up_read
vzalloc
# required by zsmalloc.ko
@@ -1339,3 +1284,4 @@
register_shrinker
__SetPageMovable
unregister_shrinker

@@ -1180,6 +1180,15 @@ config ARCH_SPLIT_ARG64
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
config ARCH_HAS_NONLEAF_PMD_YOUNG
bool
depends on PGTABLE_LEVELS > 2
help
Architectures that select this option are capable of setting the
accessed bit in non-leaf PMD entries when using them as part of linear
address translations. Page table walkers that clear the accessed bit
may use this capability to reduce their search space.
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"

@@ -43,7 +43,7 @@ SYSCALL_DEFINE0(arc_gettls)
return task_thread_info(current)->thr_ptr;
}
SYSCALL_DEFINE3(arc_usr_cmpxchg, int __user *, uaddr, int, expected, int, new)
SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
{
struct pt_regs *regs = current_pt_regs();
u32 uval;

@@ -433,26 +433,12 @@
#size-cells = <0>;
enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
/* Source for d/i-cache-line-size and d/i-cache-sets
* https://developer.arm.com/documentation/100095/0003
* /Level-1-Memory-System/About-the-L1-memory-system?lang=en
* Source for d/i-cache-size
* https://www.raspberrypi.com/documentation/computers
* /processors.html#bcm2711
*/
cpu0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a72";
reg = <0>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000d8>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
i-cache-size = <0xc000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
next-level-cache = <&l2>;
};
cpu1: cpu@1 {
@@ -461,13 +447,6 @@
reg = <1>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000e0>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
i-cache-size = <0xc000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
next-level-cache = <&l2>;
};
cpu2: cpu@2 {
@@ -476,13 +455,6 @@
reg = <2>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000e8>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
i-cache-size = <0xc000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
next-level-cache = <&l2>;
};
cpu3: cpu@3 {
@@ -491,28 +463,6 @@
reg = <3>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000f0>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
i-cache-size = <0xc000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
next-level-cache = <&l2>;
};
/* Source for d/i-cache-line-size and d/i-cache-sets
* https://developer.arm.com/documentation/100095/0003
* /Level-2-Memory-System/About-the-L2-memory-system?lang=en
* Source for d/i-cache-size
* https://www.raspberrypi.com/documentation/computers
* /processors.html#bcm2711
*/
l2: l2-cache0 {
compatible = "cache";
cache-size = <0x100000>;
cache-line-size = <64>;
cache-sets = <1024>; // 1MiB(size)/64(line-size)=16384ways/16-way set
cache-level = <2>;
};
};

@@ -40,26 +40,12 @@
#size-cells = <0>;
enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
/* Source for d/i-cache-line-size and d/i-cache-sets
* https://developer.arm.com/documentation/ddi0500/e/level-1-memory-system
* /about-the-l1-memory-system?lang=en
*
* Source for d/i-cache-size
* https://magpi.raspberrypi.com/articles/raspberry-pi-3-specs-benchmarks
*/
cpu0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000d8>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
i-cache-size = <0x8000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
next-level-cache = <&l2>;
};
cpu1: cpu@1 {
@@ -68,13 +54,6 @@
reg = <1>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000e0>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
i-cache-size = <0x8000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
next-level-cache = <&l2>;
};
cpu2: cpu@2 {
@@ -83,13 +62,6 @@
reg = <2>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000e8>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
i-cache-size = <0x8000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
next-level-cache = <&l2>;
};
cpu3: cpu@3 {
@@ -98,27 +70,6 @@
reg = <3>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0x000000f0>;
d-cache-size = <0x8000>;
d-cache-line-size = <64>;
d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
i-cache-size = <0x8000>;
i-cache-line-size = <64>;
i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
next-level-cache = <&l2>;
};
/* Source for cache-line-size + cache-sets
* https://developer.arm.com/documentation/ddi0500
* /e/level-2-memory-system/about-the-l2-memory-system?lang=en
* Source for cache-size
* https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf
*/
l2: l2-cache0 {
compatible = "cache";
cache-size = <0x80000>;
cache-line-size = <64>;
cache-sets = <512>; // 512KiB(size)/64(line-size)=8192ways/16-way set
cache-level = <2>;
};
};
};

@@ -3448,7 +3448,8 @@
ti,timer-pwm;
};
};
timer15_target: target-module@2c000 { /* 0x4882c000, ap 17 02.0 */
target-module@2c000 { /* 0x4882c000, ap 17 02.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x2c000 0x4>,
<0x2c010 0x4>;
@@ -3476,7 +3477,7 @@
};
};
timer16_target: target-module@2e000 { /* 0x4882e000, ap 19 14.0 */
target-module@2e000 { /* 0x4882e000, ap 19 14.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x2e000 0x4>,
<0x2e010 0x4>;

@@ -1093,20 +1093,20 @@
};
/* Local timers, see ARM architected timer wrap erratum i940 */
&timer15_target {
&timer3_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER15_CLKCTRL 24>;
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};
&timer16_target {
&timer4_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER16_CLKCTRL 24>;
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};

@@ -260,7 +260,7 @@
};
uart3_data: uart3-data {
samsung,pins = "gpa1-4", "gpa1-5";
samsung,pins = "gpa1-4", "gpa1-4";
samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;

@@ -118,9 +118,6 @@
status = "okay";
ddc = <&i2c_2>;
hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
vdd-supply = <&ldo8_reg>;
vdd_osc-supply = <&ldo10_reg>;
vdd_pll-supply = <&ldo8_reg>;
};
&i2c_0 {

@@ -124,9 +124,6 @@
hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default";
pinctrl-0 = <&hdmi_hpd_irq>;
vdd-supply = <&ldo6_reg>;
vdd_osc-supply = <&ldo7_reg>;
vdd_pll-supply = <&ldo6_reg>;
};
&hsi2c_4 {

@@ -53,31 +53,6 @@
};
};
lvds-decoder {
compatible = "ti,ds90cf364a", "lvds-decoder";
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
lvds_decoder_in: endpoint {
remote-endpoint = <&lvds0_out>;
};
};
port@1 {
reg = <1>;
lvds_decoder_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
};
};
panel {
compatible = "edt,etm0700g0dh6";
pinctrl-0 = <&pinctrl_display_gpio>;
@@ -86,7 +61,7 @@
port {
panel_in: endpoint {
remote-endpoint = <&lvds_decoder_out>;
remote-endpoint = <&lvds0_out>;
};
};
};
@@ -475,7 +450,7 @@
reg = <2>;
lvds0_out: endpoint {
remote-endpoint = <&lvds_decoder_in>;
remote-endpoint = <&panel_in>;
};
};
};

@@ -40,7 +40,7 @@
dailink_master: simple-audio-card,codec {
sound-dai = <&codec>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
};
};
};
@@ -293,7 +293,7 @@
compatible = "fsl,sgtl5000";
#sound-dai-cells = <0>;
reg = <0x0a>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sai1_mclk>;
VDDA-supply = <&reg_module_3v3_avdd>;

@@ -250,7 +250,7 @@
tlv320aic32x4: audio-codec@18 {
compatible = "ti,tlv320aic32x4";
reg = <0x18>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
clock-names = "mclk";
ldoin-supply = <&reg_audio_3v3>;
iov-supply = <&reg_audio_3v3>;

@@ -288,7 +288,7 @@
codec: wm8960@1a {
compatible = "wlf,wm8960";
reg = <0x1a>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
clock-names = "mclk";
wlf,shared-lrclk;
};

@@ -31,7 +31,7 @@
dailink_master: simple-audio-card,codec {
sound-dai = <&sgtl5000>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
};
};
};
@@ -41,7 +41,7 @@
#sound-dai-cells = <0>;
reg = <0x0a>;
compatible = "fsl,sgtl5000";
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_vref_1v8>;
};

@@ -31,7 +31,7 @@
dailink_master: simple-audio-card,codec {
sound-dai = <&sgtl5000>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
};
};
};
@@ -41,7 +41,7 @@
#sound-dai-cells = <0>;
reg = <0x0a>;
compatible = "fsl,sgtl5000";
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_vref_1v8>;
};

@@ -378,14 +378,14 @@
codec: wm8960@1a {
compatible = "wlf,wm8960";
reg = <0x1a>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
clock-names = "mclk";
wlf,shared-lrclk;
wlf,hp-cfg = <2 2 3>;
wlf,gpio-cfg = <1 3>;
assigned-clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_SRC>,
<&clks IMX7D_PLL_AUDIO_POST_DIV>,
<&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
<&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
assigned-clock-parents = <&clks IMX7D_PLL_AUDIO_POST_DIV>;
assigned-clock-rates = <0>, <884736000>, <12288000>;
};

@@ -75,7 +75,7 @@
dailink_master: simple-audio-card,codec {
sound-dai = <&codec>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
};
};
};
@@ -232,7 +232,7 @@
#sound-dai-cells = <0>;
reg = <0x0a>;
compatible = "fsl,sgtl5000";
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sai1_mclk>;
VDDA-supply = <&vgen4_reg>;

@@ -142,8 +142,7 @@
clocks {
sleep_clk: sleep_clk {
compatible = "fixed-clock";
clock-frequency = <32000>;
clock-output-names = "gcc_sleep_clk_src";
clock-frequency = <32768>;
#clock-cells = <0>;
};

@@ -146,9 +146,7 @@
reg = <0x108000 0x1000>;
qcom,ipc = <&l2cc 0x8 2>;
interrupts = <GIC_SPI 19 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 21 IRQ_TYPE_EDGE_RISING>,
<GIC_SPI 22 IRQ_TYPE_EDGE_RISING>;
interrupts = <0 19 0>, <0 21 0>, <0 22 0>;
interrupt-names = "ack", "err", "wakeup";
regulators {
@@ -194,7 +192,7 @@
compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
reg = <0x16440000 0x1000>,
<0x16400000 0x1000>;
interrupts = <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <0 154 0x0>;
clocks = <&gcc GSBI5_UART_CLK>, <&gcc GSBI5_H_CLK>;
clock-names = "core", "iface";
status = "disabled";
@@ -320,7 +318,7 @@
#address-cells = <1>;
#size-cells = <0>;
reg = <0x16080000 0x1000>;
interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
interrupts = <0 147 0>;
spi-max-frequency = <24000000>;
cs-gpios = <&msmgpio 8 0>;

@@ -413,7 +413,7 @@
pmecc: ecc-engine@f8014070 {
compatible = "atmel,sama5d2-pmecc";
reg = <0xf8014070 0x490>,
<0xf8014500 0x200>;
<0xf8014500 0x100>;
};
};

@@ -136,9 +136,9 @@
reg = <0xb4100000 0x1000>;
interrupts = <0 105 0x4>;
status = "disabled";
dmas = <&dwdma0 13 0 1>,
<&dwdma0 12 1 0>;
dma-names = "rx", "tx";
dmas = <&dwdma0 12 0 1>,
<&dwdma0 13 1 0>;
dma-names = "tx", "rx";
};
thermal@e07008c4 {

@@ -284,9 +284,9 @@
#size-cells = <0>;
interrupts = <0 31 0x4>;
status = "disabled";
dmas = <&dwdma0 5 0 0>,
<&dwdma0 4 0 0>;
dma-names = "rx", "tx";
dmas = <&dwdma0 4 0 0>,
<&dwdma0 5 0 0>;
dma-names = "tx", "rx";
};
rtc@e0580000 {

@@ -524,17 +524,6 @@
#size-cells = <0>;
};
gic: interrupt-controller@1c81000 {
compatible = "arm,gic-400";
reg = <0x01c81000 0x1000>,
<0x01c82000 0x2000>,
<0x01c84000 0x2000>,
<0x01c86000 0x2000>;
interrupt-controller;
#interrupt-cells = <3>;
interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
};
csi1: camera@1cb4000 {
compatible = "allwinner,sun8i-v3s-csi";
reg = <0x01cb4000 0x3000>;
@@ -546,5 +535,16 @@
resets = <&ccu RST_BUS_CSI>;
status = "disabled";
};
gic: interrupt-controller@1c81000 {
compatible = "arm,gic-400";
reg = <0x01c81000 0x1000>,
<0x01c82000 0x2000>,
<0x01c84000 0x2000>,
<0x01c86000 0x2000>;
interrupt-controller;
#interrupt-cells = <3>;
interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
};
};
};

@@ -183,8 +183,8 @@
};
conf_ata {
nvidia,pins = "ata", "atb", "atc", "atd", "ate",
"cdev1", "cdev2", "dap1", "dtb", "dtf",
"gma", "gmb", "gmc", "gmd", "gme", "gpu7",
"cdev1", "cdev2", "dap1", "dtb", "gma",
"gmb", "gmc", "gmd", "gme", "gpu7",
"gpv", "i2cp", "irrx", "irtx", "pta",
"rm", "slxa", "slxk", "spia", "spib",
"uac";
@@ -203,7 +203,7 @@
};
conf_crtp {
nvidia,pins = "crtp", "dap2", "dap3", "dap4",
"dtc", "dte", "gpu", "sdio1",
"dtc", "dte", "dtf", "gpu", "sdio1",
"slxc", "slxd", "spdi", "spdo", "spig",
"uda";
nvidia,pull = <TEGRA_PIN_PULL_NONE>;

@@ -187,7 +187,6 @@ CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_VIDEO_ASPEED=m
CONFIG_VIDEO_ATMEL_ISI=m

@@ -102,8 +102,6 @@ config CRYPTO_AES_ARM_BS
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_LIB_AES
select CRYPTO_AES
select CRYPTO_CBC
select CRYPTO_SIMD
help
Use a faster and more secure NEON based implementation of AES in CBC,

@@ -22,7 +22,10 @@
* mcount can be thought of as a function called in the middle of a subroutine
* call. As such, it needs to be transparent for both the caller and the
* callee: the original lr needs to be restored when leaving mcount, and no
* registers should be clobbered.
* registers should be clobbered. (In the __gnu_mcount_nc implementation, we
* clobber the ip register. This is OK because the ARM calling convention
* allows it to be clobbered in subroutines and doesn't use it to hold
* parameters.)
*
* When using dynamic ftrace, we patch out the mcount call by a "pop {lr}"
* instead of the __gnu_mcount_nc call (see arch/arm/kernel/ftrace.c).
@@ -67,25 +70,26 @@
.macro __ftrace_regs_caller
str lr, [sp, #-8]! @ store LR as PC and make space for CPSR/OLD_R0,
sub sp, sp, #8 @ space for PC and CPSR OLD_R0,
@ OLD_R0 will overwrite previous LR
ldr lr, [sp, #8] @ get previous LR
add ip, sp, #12 @ move in IP the value of SP as it was
@ before the push {lr} of the mcount mechanism
str lr, [sp, #0] @ store LR instead of PC
ldr lr, [sp, #8] @ get previous LR
str r0, [sp, #8] @ write r0 as OLD_R0 over previous LR
str lr, [sp, #-4]! @ store previous LR as LR
add lr, sp, #16 @ move in LR the value of SP as it was
@ before the push {lr} of the mcount mechanism
push {r0-r11, ip, lr}
stmdb sp!, {ip, lr}
stmdb sp!, {r0-r11, lr}
@ stack content at this point:
@ 0 4 48 52 56 60 64 68 72
@ R0 | R1 | ... | IP | SP + 4 | previous LR | LR | PSR | OLD_R0 |
@ R0 | R1 | ... | LR | SP + 4 | previous LR | LR | PSR | OLD_R0 |
mov r3, sp @ struct pt_regs*
mov r3, sp @ struct pt_regs*
ldr r2, =function_trace_op
ldr r2, [r2] @ pointer to the current
@@ -108,9 +112,11 @@ ftrace_graph_regs_call:
#endif
@ pop saved regs
pop {r0-r11, ip, lr} @ restore r0 through r12
ldr lr, [sp], #4 @ restore LR
ldr pc, [sp], #12
ldmia sp!, {r0-r12} @ restore r0 through r12
ldr ip, [sp, #8] @ restore PC
ldr lr, [sp, #4] @ restore LR
ldr sp, [sp, #0] @ restore SP
mov pc, ip @ return
.endm
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -126,9 +132,11 @@ ftrace_graph_regs_call:
bl prepare_ftrace_return
@ pop registers saved in ftrace_regs_caller
pop {r0-r11, ip, lr} @ restore r0 through r12
ldr lr, [sp], #4 @ restore LR
ldr pc, [sp], #12
ldmia sp!, {r0-r12} @ restore r0 through r12
ldr ip, [sp, #8] @ restore PC
ldr lr, [sp, #4] @ restore LR
ldr sp, [sp, #0] @ restore SP
mov pc, ip @ return
.endm
#endif
@@ -194,17 +202,16 @@ ftrace_graph_call\suffix:
.endm
.macro mcount_exit
ldmia sp!, {r0-r3}
ldr lr, [sp, #4]
ldr pc, [sp], #8
ldmia sp!, {r0-r3, ip, lr}
ret ip
.endm
ENTRY(__gnu_mcount_nc)
UNWIND(.fnstart)
#ifdef CONFIG_DYNAMIC_FTRACE
push {lr}
ldr lr, [sp, #4]
ldr pc, [sp], #8
mov ip, lr
ldmia sp!, {lr}
ret ip
#else
__mcount
#endif

@@ -195,7 +195,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
destreg, EXTRACT_REG_NUM(instr, RT2_OFFSET), data);
/* Check access in reasonable access range for both SWP and SWPB */
if (!access_ok((void __user *)(address & ~3), 4)) {
if (!access_ok((address & ~3), 4)) {
pr_debug("SWP{B} emulation: access to %p not allowed!\n",
(void *)address);
res = -EFAULT;

@@ -589,7 +589,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
if (end < start || flags)
return -EINVAL;
if (!access_ok((void __user *)start, end - start))
if (!access_ok(start, end - start))
return -EFAULT;
return __do_cache_op(start, end);

@@ -20,7 +20,7 @@
mrc p6, 0, \irqstat, c8, c0, 0 @ Read IINTSRC
cmp \irqstat, #0
clzne \irqnr, \irqstat
rsbne \irqnr, \irqnr, #32
rsbne \irqnr, \irqnr, #31
.endm
.macro arch_ret_to_user, tmp1, tmp2

@@ -9,6 +9,6 @@
#ifndef __IRQS_H
#define __IRQS_H
#define NR_IRQS 33
#define NR_IRQS 32
#endif

@@ -32,14 +32,14 @@ static void intstr_write(u32 val)
static void
iop32x_irq_mask(struct irq_data *d)
{
iop32x_mask &= ~(1 << (d->irq - 1));
iop32x_mask &= ~(1 << d->irq);
intctl_write(iop32x_mask);
}
static void
iop32x_irq_unmask(struct irq_data *d)
{
iop32x_mask |= 1 << (d->irq - 1);
iop32x_mask |= 1 << d->irq;
intctl_write(iop32x_mask);
}
@@ -65,7 +65,7 @@ void __init iop32x_init_irq(void)
machine_is_em7210())
*IOP3XX_PCIIRSR = 0x0f;
for (i = 1; i < NR_IRQS; i++) {
for (i = 0; i < NR_IRQS; i++) {
irq_set_chip_and_handler(i, &ext_chip, handle_level_irq);
irq_clear_status_flags(i, IRQ_NOREQUEST | IRQ_NOPROBE);
}

@@ -7,40 +7,36 @@
#ifndef __IOP32X_IRQS_H
#define __IOP32X_IRQS_H
/* Interrupts in Linux start at 1, hardware starts at 0 */
#define IOP_IRQ(x) ((x) + 1)
/*
* IOP80321 chipset interrupts
*/
#define IRQ_IOP32X_DMA0_EOT IOP_IRQ(0)
#define IRQ_IOP32X_DMA0_EOC IOP_IRQ(1)
#define IRQ_IOP32X_DMA1_EOT IOP_IRQ(2)
#define IRQ_IOP32X_DMA1_EOC IOP_IRQ(3)
#define IRQ_IOP32X_AA_EOT IOP_IRQ(6)
#define IRQ_IOP32X_AA_EOC IOP_IRQ(7)
#define IRQ_IOP32X_CORE_PMON IOP_IRQ(8)
#define IRQ_IOP32X_TIMER0 IOP_IRQ(9)
#define IRQ_IOP32X_TIMER1 IOP_IRQ(10)
#define IRQ_IOP32X_I2C_0 IOP_IRQ(11)
#define IRQ_IOP32X_I2C_1 IOP_IRQ(12)
#define IRQ_IOP32X_MESSAGING IOP_IRQ(13)
#define IRQ_IOP32X_ATU_BIST IOP_IRQ(14)
#define IRQ_IOP32X_PERFMON IOP_IRQ(15)
#define IRQ_IOP32X_CORE_PMU IOP_IRQ(16)
#define IRQ_IOP32X_BIU_ERR IOP_IRQ(17)
#define IRQ_IOP32X_ATU_ERR IOP_IRQ(18)
#define IRQ_IOP32X_MCU_ERR IOP_IRQ(19)
#define IRQ_IOP32X_DMA0_ERR IOP_IRQ(20)
#define IRQ_IOP32X_DMA1_ERR IOP_IRQ(21)
#define IRQ_IOP32X_AA_ERR IOP_IRQ(23)
#define IRQ_IOP32X_MSG_ERR IOP_IRQ(24)
#define IRQ_IOP32X_SSP IOP_IRQ(25)
#define IRQ_IOP32X_XINT0 IOP_IRQ(27)
#define IRQ_IOP32X_XINT1 IOP_IRQ(28)
#define IRQ_IOP32X_XINT2 IOP_IRQ(29)
#define IRQ_IOP32X_XINT3 IOP_IRQ(30)
#define IRQ_IOP32X_HPI IOP_IRQ(31)
#define IRQ_IOP32X_DMA0_EOT 0
#define IRQ_IOP32X_DMA0_EOC 1
#define IRQ_IOP32X_DMA1_EOT 2
#define IRQ_IOP32X_DMA1_EOC 3
#define IRQ_IOP32X_AA_EOT 6
#define IRQ_IOP32X_AA_EOC 7
#define IRQ_IOP32X_CORE_PMON 8
#define IRQ_IOP32X_TIMER0 9
#define IRQ_IOP32X_TIMER1 10
#define IRQ_IOP32X_I2C_0 11
#define IRQ_IOP32X_I2C_1 12
#define IRQ_IOP32X_MESSAGING 13
#define IRQ_IOP32X_ATU_BIST 14
#define IRQ_IOP32X_PERFMON 15
#define IRQ_IOP32X_CORE_PMU 16
#define IRQ_IOP32X_BIU_ERR 17
#define IRQ_IOP32X_ATU_ERR 18
#define IRQ_IOP32X_MCU_ERR 19
#define IRQ_IOP32X_DMA0_ERR 20
#define IRQ_IOP32X_DMA1_ERR 21
#define IRQ_IOP32X_AA_ERR 23
#define IRQ_IOP32X_MSG_ERR 24
#define IRQ_IOP32X_SSP 25
#define IRQ_IOP32X_XINT0 27
#define IRQ_IOP32X_XINT1 28
#define IRQ_IOP32X_XINT2 29
#define IRQ_IOP32X_XINT3 30
#define IRQ_IOP32X_HPI 31
#endif

@@ -72,8 +72,6 @@ static int sram_probe(struct platform_device *pdev)
if (!info)
return -ENOMEM;
platform_set_drvdata(pdev, info);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL) {
dev_err(&pdev->dev, "no memory resource defined\n");
@@ -109,6 +107,8 @@ static int sram_probe(struct platform_device *pdev)
list_add(&info->node, &sram_bank_list);
mutex_unlock(&sram_lock);
platform_set_drvdata(pdev, info);
dev_info(&pdev->dev, "initialized\n");
return 0;
@@ -127,19 +127,17 @@ static int sram_remove(struct platform_device *pdev)
struct sram_bank_info *info;
info = platform_get_drvdata(pdev);
if (info == NULL)
return -ENODEV;
if (info->sram_size) {
mutex_lock(&sram_lock);
list_del(&info->node);
mutex_unlock(&sram_lock);
gen_pool_destroy(info->gpool);
iounmap(info->sram_virt);
kfree(info->pool_name);
}
mutex_lock(&sram_lock);
list_del(&info->node);
mutex_unlock(&sram_lock);
gen_pool_destroy(info->gpool);
iounmap(info->sram_virt);
kfree(info->pool_name);
kfree(info);
return 0;
}

@@ -3,7 +3,6 @@ menuconfig ARCH_MSTARV7
depends on ARCH_MULTI_V7
select ARM_GIC
select ARM_HEAVY_MB
select HAVE_ARM_ARCH_TIMER
select MST_IRQ
help
Support for newer MStar/Sigmastar SoC families that are

@@ -236,11 +236,11 @@ static int __init jive_mtdset(char *options)
unsigned long set;
if (options == NULL || options[0] == '\0')
return 1;
return 0;
if (kstrtoul(options, 10, &set)) {
printk(KERN_ERR "failed to parse mtdset=%s\n", options);
return 1;
return 0;
}
switch (set) {
@@ -255,7 +255,7 @@ static int __init jive_mtdset(char *options)
"using default.", set);
}
return 1;
return 0;
}
/* parse the mtdset= option given to the kernel command line */


@@ -111,8 +111,8 @@
compatible = "silabs,si3226x";
reg = <0>;
spi-max-frequency = <5000000>;
spi-cpha;
spi-cpol;
spi-cpha = <1>;
spi-cpol = <1>;
pl022,hierarchy = <0>;
pl022,interface = <0>;
pl022,slave-tx-disable = <0>;
@@ -135,8 +135,8 @@
at25,byte-len = <0x8000>;
at25,addr-mode = <2>;
at25,page-size = <64>;
spi-cpha;
spi-cpol;
spi-cpha = <1>;
spi-cpol = <1>;
pl022,hierarchy = <0>;
pl022,interface = <0>;
pl022,slave-tx-disable = <0>;


@@ -687,7 +687,7 @@
};
};
sata: sata@663f2000 {
sata: ahci@663f2000 {
compatible = "brcm,iproc-ahci", "generic-ahci";
reg = <0x663f2000 0x1000>;
dma-coherent;


@@ -3406,10 +3406,10 @@
#clock-cells = <0>;
clock-frequency = <9600000>;
clock-output-names = "mclk";
qcom,micbias1-microvolt = <1800000>;
qcom,micbias2-microvolt = <1800000>;
qcom,micbias3-microvolt = <1800000>;
qcom,micbias4-microvolt = <1800000>;
qcom,micbias1-millivolt = <1800>;
qcom,micbias2-millivolt = <1800>;
qcom,micbias3-millivolt = <1800>;
qcom,micbias4-millivolt = <1800>;
#address-cells = <1>;
#size-cells = <1>;


@@ -1114,9 +1114,9 @@
qcom,tcs-offset = <0xd00>;
qcom,drv-id = <2>;
qcom,tcs-config = <ACTIVE_TCS 2>,
<SLEEP_TCS 3>,
<WAKE_TCS 3>,
<CONTROL_TCS 1>;
<SLEEP_TCS 1>,
<WAKE_TCS 1>,
<CONTROL_TCS 0>;
rpmhcc: clock-controller {
compatible = "qcom,sm8150-rpmh-clk";


@@ -665,8 +665,8 @@
sd-uhs-sdr104;
/* Power supply */
vqmmc-supply = <&vcc1v8_s3>; /* IO line */
vmmc-supply = <&vcc_sdio>; /* card's power */
vqmmc-supply = &vcc1v8_s3; /* IO line */
vmmc-supply = &vcc_sdio; /* card's power */
#address-cells = <1>;
#size-cells = <0>;


@@ -35,10 +35,7 @@
#interrupt-cells = <3>;
interrupt-controller;
reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
<0x00 0x01880000 0x00 0x90000>, /* GICR */
<0x00 0x6f000000 0x00 0x2000>, /* GICC */
<0x00 0x6f010000 0x00 0x1000>, /* GICH */
<0x00 0x6f020000 0x00 0x2000>; /* GICV */
<0x00 0x01880000 0x00 0x90000>; /* GICR */
/*
* vcpumntirq:
* virtual CPU interface maintenance interrupt


@@ -84,7 +84,6 @@
<0x00 0x46000000 0x00 0x46000000 0x00 0x00200000>,
<0x00 0x47000000 0x00 0x47000000 0x00 0x00068400>,
<0x00 0x50000000 0x00 0x50000000 0x00 0x8000000>,
<0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A53 PERIPHBASE */
<0x00 0x70000000 0x00 0x70000000 0x00 0x200000>,
<0x05 0x00000000 0x05 0x00000000 0x01 0x0000000>,
<0x07 0x00000000 0x07 0x00000000 0x01 0x0000000>;


@@ -47,10 +47,7 @@
#interrupt-cells = <3>;
interrupt-controller;
reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
<0x00 0x01900000 0x00 0x100000>, /* GICR */
<0x00 0x6f000000 0x00 0x2000>, /* GICC */
<0x00 0x6f010000 0x00 0x1000>, /* GICH */
<0x00 0x6f020000 0x00 0x2000>; /* GICV */
<0x00 0x01900000 0x00 0x100000>; /* GICR */
/* vcpumntirq: virtual CPU interface maintenance interrupt */
interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;


@@ -127,7 +127,6 @@
<0x00 0x00a40000 0x00 0x00a40000 0x00 0x00000800>, /* timesync router */
<0x00 0x01000000 0x00 0x01000000 0x00 0x0d000000>, /* Most peripherals */
<0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>, /* MAIN NAVSS */
<0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
<0x00 0x70000000 0x00 0x70000000 0x00 0x00800000>, /* MSMC RAM */
<0x00 0x18000000 0x00 0x18000000 0x00 0x08000000>, /* PCIe1 DAT0 */
<0x41 0x00000000 0x41 0x00000000 0x01 0x00000000>, /* PCIe1 DAT1 */


@@ -108,10 +108,7 @@
#interrupt-cells = <3>;
interrupt-controller;
reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
<0x00 0x01900000 0x00 0x100000>, /* GICR */
<0x00 0x6f000000 0x00 0x2000>, /* GICC */
<0x00 0x6f010000 0x00 0x1000>, /* GICH */
<0x00 0x6f020000 0x00 0x2000>; /* GICV */
<0x00 0x01900000 0x00 0x100000>; /* GICR */
/* vcpumntirq: virtual CPU interface maintenance interrupt */
interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;


@@ -136,7 +136,6 @@
<0x00 0x0e000000 0x00 0x0e000000 0x00 0x01800000>, /* PCIe Core*/
<0x00 0x10000000 0x00 0x10000000 0x00 0x10000000>, /* PCIe DAT */
<0x00 0x64800000 0x00 0x64800000 0x00 0x00800000>, /* C71 */
<0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
<0x44 0x00000000 0x44 0x00000000 0x00 0x08000000>, /* PCIe2 DAT */
<0x44 0x10000000 0x44 0x10000000 0x00 0x08000000>, /* PCIe3 DAT */
<0x4d 0x80800000 0x4d 0x80800000 0x00 0x00800000>, /* C66_0 */


@@ -1,7 +1,6 @@
CONFIG_QRTR=m
CONFIG_QRTR_TUN=m
CONFIG_SCSI_UFS_QCOM=m
CONFIG_USB_NET_AX88179_178A=m
CONFIG_INPUT_PM8941_PWRKEY=m
CONFIG_SERIAL_MSM=m
CONFIG_SERIAL_QCOM_GENI=m


@@ -840,7 +840,7 @@ CONFIG_DMADEVICES=y
CONFIG_DMA_BCM2835=y
CONFIG_DMA_SUN6I=m
CONFIG_FSL_EDMA=y
CONFIG_IMX_SDMA=m
CONFIG_IMX_SDMA=y
CONFIG_K3_DMA=y
CONFIG_MV_XOR=y
CONFIG_MV_XOR_V2=y


@@ -124,6 +124,10 @@ CONFIG_CMA_SYSFS=y
CONFIG_CMA_AREAS=16
CONFIG_READ_ONLY_THP_FOR_FS=y
CONFIG_ANON_VMA_NAME=y
CONFIG_LRU_GEN=y
CONFIG_DAMON=y
CONFIG_DAMON_PADDR=y
CONFIG_DAMON_RECLAIM=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -331,6 +335,7 @@ CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
CONFIG_WIREGUARD=y
CONFIG_IFB=y
CONFIG_MACSEC=y
CONFIG_TUN=y
CONFIG_VETH=y
CONFIG_PPP=y
@@ -342,7 +347,6 @@ CONFIG_PPPOL2TP=y
CONFIG_USB_RTL8150=y
CONFIG_USB_RTL8152=y
CONFIG_USB_USBNET=y
# CONFIG_USB_NET_AX88179_178A is not set
CONFIG_USB_NET_CDC_EEM=y
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
@@ -394,6 +398,7 @@ CONFIG_HVC_DCC=y
CONFIG_HVC_DCC_SERIALIZE_SMP=y
CONFIG_SERIAL_DEV_BUS=y
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_CAVIUM is not set
# CONFIG_DEVMEM is not set
# CONFIG_DEVPORT is not set
CONFIG_RANDOM_TRUST_CPU=y
@@ -574,6 +579,7 @@ CONFIG_EXT4_FS_SECURITY=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_F2FS_FS_COMPRESSION=y
CONFIG_F2FS_UNFAIR_RWSEM=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
CONFIG_FS_VERITY=y


@@ -149,6 +149,20 @@
msr spsr_el2, x0
.endm
.macro __init_el2_mpam
/* Memory Partitioning And Monitoring: disable EL2 traps */
mrs x1, id_aa64pfr0_el1
ubfx x0, x1, #ID_AA64PFR0_MPAM_SHIFT, #4
cbz x0, .Lskip_mpam_\@ // skip if no MPAM
msr_s SYS_MPAM0_EL1, xzr // use the default partition..
msr_s SYS_MPAM2_EL2, xzr // ..and disable lower traps
msr_s SYS_MPAM1_EL1, xzr
mrs_s x0, SYS_MPAMIDR_EL1
tbz x0, #17, .Lskip_mpam_\@ // skip if no MPAMHCR reg
msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
.Lskip_mpam_\@:
.endm
/**
* Initialize EL2 registers to sane values. This should be called early on all
* cores that were booted in EL2. Note that everything gets initialised as
@@ -165,6 +179,7 @@
__init_el2_stage2
__init_el2_gicv3
__init_el2_hstr
__init_el2_mpam
__init_el2_nvhe_idregs
__init_el2_nvhe_cptr
__init_el2_nvhe_sve


@@ -82,6 +82,7 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___pkvm_iommu_driver_init,
__KVM_HOST_SMCCC_FUNC___pkvm_iommu_register,
__KVM_HOST_SMCCC_FUNC___pkvm_iommu_pm_notify,
__KVM_HOST_SMCCC_FUNC___pkvm_iommu_finalize,
};
#define DECLARE_KVM_VHE_SYM(sym) extern char sym[]
@@ -112,7 +113,7 @@ enum __kvm_host_smccc_func {
#define per_cpu_ptr_nvhe_sym(sym, cpu) \
({ \
unsigned long base, off; \
base = kvm_arm_hyp_percpu_base[cpu]; \
base = kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]; \
off = (unsigned long)&CHOOSE_NVHE_SYM(sym) - \
(unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start); \
base ? (typeof(CHOOSE_NVHE_SYM(sym))*)(base + off) : NULL; \
@@ -200,7 +201,7 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
#define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init)
#define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector)
extern unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[];
DECLARE_KVM_NVHE_SYM(__per_cpu_start);
DECLARE_KVM_NVHE_SYM(__per_cpu_end);


@@ -41,6 +41,11 @@ void kvm_inject_vabt(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
enum exception_type type);
unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
unsigned long sctlr, unsigned long mode);
static inline int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
{
/*


@@ -212,8 +212,6 @@ struct kvm_arch {
u8 pfr0_csv3;
struct kvm_protected_vm pkvm;
u64 hypercall_exit_enabled;
};
struct kvm_protected_vcpu {
@@ -378,6 +376,7 @@ extern u64 kvm_nvhe_sym(hyp_cpu_logical_map)[NR_CPUS];
enum pkvm_iommu_driver_id {
PKVM_IOMMU_DRIVER_S2MPU,
PKVM_IOMMU_DRIVER_SYSMMU_SYNC,
PKVM_IOMMU_NR_DRIVERS,
};
@@ -388,11 +387,15 @@ enum pkvm_iommu_pm_event {
int pkvm_iommu_driver_init(enum pkvm_iommu_driver_id drv_id, void *data, size_t size);
int pkvm_iommu_register(struct device *dev, enum pkvm_iommu_driver_id drv_id,
phys_addr_t pa, size_t size);
phys_addr_t pa, size_t size, struct device *parent);
int pkvm_iommu_suspend(struct device *dev);
int pkvm_iommu_resume(struct device *dev);
int pkvm_iommu_s2mpu_register(struct device *dev, phys_addr_t pa);
int pkvm_iommu_sysmmu_sync_register(struct device *dev, phys_addr_t pa,
struct device *parent);
/* Reject future calls to pkvm_iommu_driver_init() and pkvm_iommu_register(). */
int pkvm_iommu_finalize(void);
struct vcpu_reset_state {
unsigned long pc;


@@ -118,6 +118,10 @@ alternative_cb_end
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_get__text(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_get__etext(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
void kvm_compute_layout(void);
void kvm_apply_hyp_relocations(void);


@@ -12,6 +12,10 @@
#include <asm/kvm_mmu.h>
#define S2MPU_MMIO_SIZE SZ_64K
#define SYSMMU_SYNC_MMIO_SIZE SZ_64K
#define SYSMMU_SYNC_S2_OFFSET SZ_32K
#define SYSMMU_SYNC_S2_MMIO_SIZE (SYSMMU_SYNC_MMIO_SIZE - \
SYSMMU_SYNC_S2_OFFSET)
#define NR_VIDS 8
#define NR_CTX_IDS 8
@@ -24,6 +28,7 @@
#define REG_NS_INTERRUPT_ENABLE_PER_VID_SET 0x20
#define REG_NS_INTERRUPT_CLEAR 0x2c
#define REG_NS_VERSION 0x60
#define REG_NS_INFO 0x64
#define REG_NS_STATUS 0x68
#define REG_NS_NUM_CONTEXT 0x100
#define REG_NS_CONTEXT_CFG_VALID_VID 0x104
@@ -35,6 +40,10 @@
#define REG_NS_FAULT_PA_LOW(vid) (0x2004 + ((vid) * 0x20))
#define REG_NS_FAULT_PA_HIGH(vid) (0x2008 + ((vid) * 0x20))
#define REG_NS_FAULT_INFO(vid) (0x2010 + ((vid) * 0x20))
#define REG_NS_READ_MPTC 0x3000
#define REG_NS_READ_MPTC_TAG_PPN 0x3004
#define REG_NS_READ_MPTC_TAG_OTHERS 0x3008
#define REG_NS_READ_MPTC_DATA 0x3010
#define REG_NS_L1ENTRY_L2TABLE_ADDR(vid, gb) (0x4000 + ((vid) * 0x200) + ((gb) * 0x8))
#define REG_NS_L1ENTRY_ATTR(vid, gb) (0x4004 + ((vid) * 0x200) + ((gb) * 0x8))
@@ -42,15 +51,30 @@
#define CTRL0_INTERRUPT_ENABLE BIT(1)
#define CTRL0_FAULT_RESP_TYPE_SLVERR BIT(2) /* for v8 */
#define CTRL0_FAULT_RESP_TYPE_DECERR BIT(2) /* for v9 */
#define CTRL0_MASK (CTRL0_ENABLE | \
CTRL0_INTERRUPT_ENABLE | \
CTRL0_FAULT_RESP_TYPE_SLVERR | \
CTRL0_FAULT_RESP_TYPE_DECERR)
#define CTRL1_DISABLE_CHK_S1L1PTW BIT(0)
#define CTRL1_DISABLE_CHK_S1L2PTW BIT(1)
#define CTRL1_ENABLE_PAGE_SIZE_AWARENESS BIT(2)
#define CTRL1_DISABLE_CHK_USER_MATCHED_REQ BIT(3)
#define CTRL1_MASK (CTRL1_DISABLE_CHK_S1L1PTW | \
CTRL1_DISABLE_CHK_S1L2PTW | \
CTRL1_ENABLE_PAGE_SIZE_AWARENESS | \
CTRL1_DISABLE_CHK_USER_MATCHED_REQ)
#define CFG_MPTW_CACHE_OVERRIDE BIT(0)
#define CFG_MPTW_CACHE_VALUE GENMASK(7, 4)
#define CFG_MPTW_QOS_OVERRIDE BIT(8)
#define CFG_MPTW_QOS_VALUE GENMASK(15, 12)
#define CFG_MPTW_SHAREABLE BIT(16)
#define CFG_MASK (CFG_MPTW_CACHE_OVERRIDE | \
CFG_MPTW_CACHE_VALUE | \
CFG_MPTW_QOS_OVERRIDE | \
CFG_MPTW_QOS_VALUE | \
CFG_MPTW_SHAREABLE)
/* For use with hi_lo_readq_relaxed(). */
#define REG_NS_FAULT_PA_HIGH_LOW(vid) REG_NS_FAULT_PA_LOW(vid)
@@ -68,6 +92,8 @@
VERSION_MINOR_ARCH_VER_MASK | \
VERSION_REV_ARCH_VER_MASK)
#define INFO_NUM_SET_MASK GENMASK(15, 0)
#define STATUS_BUSY BIT(0)
#define STATUS_ON_INVALIDATING BIT(1)
@@ -90,14 +116,31 @@
#define FAULT_INFO_LEN_MASK GENMASK(19, 16)
#define FAULT_INFO_ID_MASK GENMASK(15, 0)
#define L1ENTRY_L2TABLE_ADDR(pa) ((pa) >> 4)
#define L1ENTRY_L2TABLE_ADDR_SHIFT 4
#define L1ENTRY_L2TABLE_ADDR(pa) ((pa) >> L1ENTRY_L2TABLE_ADDR_SHIFT)
#define READ_MPTC_WAY_MASK GENMASK(18, 16)
#define READ_MPTC_SET_MASK GENMASK(15, 0)
#define READ_MPTC_MASK (READ_MPTC_WAY_MASK | READ_MPTC_SET_MASK)
#define READ_MPTC_WAY(way) FIELD_PREP(READ_MPTC_WAY_MASK, (way))
#define READ_MPTC_SET(set) FIELD_PREP(READ_MPTC_SET_MASK, (set))
#define READ_MPTC(set, way) (READ_MPTC_SET(set) | READ_MPTC_WAY(way))
#define READ_MPTC_TAG_PPN_MASK GENMASK(23, 0)
#define READ_MPTC_TAG_OTHERS_VID_MASK GENMASK(10, 8)
#define READ_MPTC_TAG_OTHERS_GRAN_MASK GENMASK(5, 4)
#define READ_MPTC_TAG_OTHERS_VALID_BIT BIT(0)
#define READ_MPTC_TAG_OTHERS_MASK (READ_MPTC_TAG_OTHERS_VID_MASK | \
READ_MPTC_TAG_OTHERS_GRAN_MASK | \
READ_MPTC_TAG_OTHERS_VALID_BIT)
#define L1ENTRY_ATTR_L2TABLE_EN BIT(0)
#define L1ENTRY_ATTR_GRAN_4K 0x0
#define L1ENTRY_ATTR_GRAN_64K 0x1
#define L1ENTRY_ATTR_GRAN_2M 0x2
#define L1ENTRY_ATTR_PROT(prot) FIELD_PREP(GENMASK(2, 1), prot)
#define L1ENTRY_ATTR_GRAN(gran) FIELD_PREP(GENMASK(5, 4), gran)
#define L1ENTRY_ATTR_PROT_MASK GENMASK(2, 1)
#define L1ENTRY_ATTR_GRAN_MASK GENMASK(5, 4)
#define L1ENTRY_ATTR_PROT(prot) FIELD_PREP(L1ENTRY_ATTR_PROT_MASK, prot)
#define L1ENTRY_ATTR_GRAN(gran) FIELD_PREP(L1ENTRY_ATTR_GRAN_MASK, gran)
#define L1ENTRY_ATTR_1G(prot) L1ENTRY_ATTR_PROT(prot)
#define L1ENTRY_ATTR_L2(gran) (L1ENTRY_ATTR_GRAN(gran) | \
L1ENTRY_ATTR_L2TABLE_EN)
@@ -128,6 +171,13 @@ static_assert(SMPT_GRAN <= PAGE_SIZE);
#define SMPT_NUM_PAGES (SMPT_SIZE / PAGE_SIZE)
#define SMPT_ORDER get_order(SMPT_SIZE)
/* SysMMU_SYNC registers, relative to SYSMMU_SYNC_S2_OFFSET. */
#define REG_NS_SYNC_CMD 0x0
#define REG_NS_SYNC_COMP 0x4
#define SYNC_CMD_SYNC BIT(0)
#define SYNC_COMP_COMPLETE BIT(0)
/*
* Iterate over S2MPU gigabyte regions. Skip those that cannot be modified
* (the MMIO registers are read only, with reset value MPT_PROT_NONE).


@@ -1011,23 +1011,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
* page after fork() + CoW for pfn mappings. We don't always have a
* hardware-managed access flag on arm64.
*/
static inline bool arch_faults_on_old_pte(void)
{
WARN_ON(preemptible());
return !cpu_has_hw_af();
}
#define arch_faults_on_old_pte arch_faults_on_old_pte
#define arch_has_hw_pte_young cpu_has_hw_af
/*
* Experimentally, it's cheap to set the access flag in hardware and we
* benefit from prefaulting mappings as 'old' to start with.
*/
static inline bool arch_wants_old_prefaulted_pte(void)
{
return !arch_faults_on_old_pte();
}
#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
#define arch_wants_old_prefaulted_pte cpu_has_hw_af
#endif /* !__ASSEMBLY__ */


@@ -394,7 +394,10 @@
#define SYS_LOREA_EL1 sys_reg(3, 0, 10, 4, 1)
#define SYS_LORN_EL1 sys_reg(3, 0, 10, 4, 2)
#define SYS_LORC_EL1 sys_reg(3, 0, 10, 4, 3)
#define SYS_MPAMIDR_EL1 sys_reg(3, 0, 10, 4, 4)
#define SYS_LORID_EL1 sys_reg(3, 0, 10, 4, 7)
#define SYS_MPAM1_EL1 sys_reg(3, 0, 10, 5, 0)
#define SYS_MPAM0_EL1 sys_reg(3, 0, 10, 5, 1)
#define SYS_VBAR_EL1 sys_reg(3, 0, 12, 0, 0)
#define SYS_DISR_EL1 sys_reg(3, 0, 12, 1, 1)
@@ -536,6 +539,10 @@
#define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0)
#define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0)
#define SYS_MPAMHCR_EL2 sys_reg(3, 4, 10, 4, 0)
#define SYS_MPAMVPMV_EL2 sys_reg(3, 4, 10, 4, 1)
#define SYS_MPAM2_EL2 sys_reg(3, 4, 10, 5, 0)
#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
#define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
#define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0)


@@ -56,14 +56,14 @@ enum arm64_bp_harden_el1_vectors {
DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
#ifndef CONFIG_UNMAP_KERNEL_AT_EL0
#define TRAMP_VALIAS 0ul
#define TRAMP_VALIAS 0
#endif
static inline const char *
arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
{
if (arm64_kernel_unmapped_at_el0())
return (char *)(TRAMP_VALIAS + SZ_2K * slot);
return (char *)TRAMP_VALIAS + SZ_2K * slot;
WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);


@@ -72,6 +72,12 @@ void __hyp_reset_vectors(void);
DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
static inline bool is_pkvm_initialized(void)
{
return IS_ENABLED(CONFIG_KVM) &&
static_branch_likely(&kvm_protected_mode_initialized);
}
/* Reports the availability of HYP mode */
static inline bool is_hyp_mode_available(void)
{
@@ -79,8 +85,7 @@ static inline bool is_hyp_mode_available(void)
* If KVM protected mode is initialized, all CPUs must have been booted
* in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
*/
if (IS_ENABLED(CONFIG_KVM) &&
static_branch_likely(&kvm_protected_mode_initialized))
if (is_pkvm_initialized())
return true;
return (__boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
@@ -94,8 +99,7 @@ static inline bool is_hyp_mode_mismatched(void)
* If KVM protected mode is initialized, all CPUs must have been booted
* in EL2. Avoid checking __boot_cpu_mode as CPUs now come up in EL1.
*/
if (IS_ENABLED(CONFIG_KVM) &&
static_branch_likely(&kvm_protected_mode_initialized))
if (is_pkvm_initialized())
return false;
return __boot_cpu_mode[0] != __boot_cpu_mode[1];


@@ -65,6 +65,8 @@ __efistub__ctype = _ctype;
KVM_NVHE_ALIAS(kvm_patch_vector_branch);
KVM_NVHE_ALIAS(kvm_update_va_mask);
KVM_NVHE_ALIAS(kvm_get_kimage_voffset);
KVM_NVHE_ALIAS(kvm_get__text);
KVM_NVHE_ALIAS(kvm_get__etext);
KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter);
KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable);
@@ -99,9 +101,6 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities);
KVM_NVHE_ALIAS(__start___kvm_ex_table);
KVM_NVHE_ALIAS(__stop___kvm_ex_table);
/* Array containing bases of nVHE per-CPU memory regions. */
KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
/* PMU available static key */
KVM_NVHE_ALIAS(kvm_arm_pmu_available);
@@ -116,12 +115,6 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
#endif
/* Kernel memory sections */
KVM_NVHE_ALIAS(__start_rodata);
KVM_NVHE_ALIAS(__end_rodata);
KVM_NVHE_ALIAS(__bss_start);
KVM_NVHE_ALIAS(__bss_stop);
/* Hyp memory sections */
KVM_NVHE_ALIAS(__hyp_idmap_text_start);
KVM_NVHE_ALIAS(__hyp_idmap_text_end);


@@ -47,7 +47,9 @@ early_param("no-steal-acc", parse_no_stealacc);
/* return stolen time in ns by asking the hypervisor */
static u64 pv_steal_clock(int cpu)
{
struct pvclock_vcpu_stolen_time *kaddr = NULL;
struct pv_time_stolen_time_region *reg;
u64 ret = 0;
reg = per_cpu_ptr(&stolen_time_region, cpu);
@@ -56,28 +58,38 @@ static u64 pv_steal_clock(int cpu)
* online notification callback runs. Until the callback
* has run we just return zero.
*/
if (!reg->kaddr)
rcu_read_lock();
kaddr = rcu_dereference(reg->kaddr);
if (!kaddr) {
rcu_read_unlock();
return 0;
}
return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
ret = le64_to_cpu(READ_ONCE(kaddr->stolen_time));
rcu_read_unlock();
return ret;
}
static int stolen_time_cpu_down_prepare(unsigned int cpu)
{
struct pvclock_vcpu_stolen_time *kaddr = NULL;
struct pv_time_stolen_time_region *reg;
reg = this_cpu_ptr(&stolen_time_region);
if (!reg->kaddr)
return 0;
memunmap(reg->kaddr);
memset(reg, 0, sizeof(*reg));
kaddr = reg->kaddr;
rcu_assign_pointer(reg->kaddr, NULL);
synchronize_rcu();
memunmap(kaddr);
return 0;
}
static int stolen_time_cpu_online(unsigned int cpu)
{
struct pvclock_vcpu_stolen_time *kaddr = NULL;
struct pv_time_stolen_time_region *reg;
struct arm_smccc_res res;
@@ -88,10 +100,12 @@ static int stolen_time_cpu_online(unsigned int cpu)
if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
return -EINVAL;
reg->kaddr = memremap(res.a0,
kaddr = memremap(res.a0,
sizeof(struct pvclock_vcpu_stolen_time),
MEMREMAP_WB);
rcu_assign_pointer(reg->kaddr, kaddr);
if (!reg->kaddr) {
pr_warn("Failed to map stolen time data structure\n");
return -ENOMEM;


@@ -572,12 +572,10 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
{
int err;
if (system_supports_fpsimd()) {
err = sigframe_alloc(user, &user->fpsimd_offset,
sizeof(struct fpsimd_context));
if (err)
return err;
}
err = sigframe_alloc(user, &user->fpsimd_offset,
sizeof(struct fpsimd_context));
if (err)
return err;
/* fault information, if valid */
if (add_all || current->thread.fault_code) {


@@ -50,7 +50,6 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
/* The VMID used in the VTTBR */
@@ -63,10 +62,6 @@ static bool vgic_present;
static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
/* KVM "vendor" hypercalls which may be forwarded to userspace on request. */
#define KVM_EXIT_HYPERCALL_VALID_MASK (BIT(ARM_SMCCC_KVM_FUNC_MEM_SHARE) | \
BIT(ARM_SMCCC_KVM_FUNC_MEM_UNSHARE))
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
@@ -117,16 +112,6 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
}
mutex_unlock(&kvm->lock);
break;
case KVM_CAP_EXIT_HYPERCALL:
if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK)
return -EINVAL;
if (cap->args[1] || cap->args[2] || cap->args[3])
return -EINVAL;
WRITE_ONCE(kvm->arch.hypercall_exit_enabled, cap->args[0]);
r = 0;
break;
default:
r = -EINVAL;
break;
@@ -314,9 +299,6 @@ static int kvm_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_PTRAUTH_GENERIC:
r = system_has_full_ptr_auth();
break;
case KVM_CAP_EXIT_HYPERCALL:
r = KVM_EXIT_HYPERCALL_VALID_MASK;
break;
default:
r = 0;
}
@@ -341,7 +323,6 @@ static int pkvm_check_extension(struct kvm *kvm, long ext, int kvm_cap)
case KVM_CAP_MAX_VCPU_ID:
case KVM_CAP_MSI_DEVID:
case KVM_CAP_ARM_VM_IPA_SIZE:
case KVM_CAP_EXIT_HYPERCALL:
r = kvm_cap;
break;
case KVM_CAP_GUEST_DEBUG_HW_BPS:
@@ -892,12 +873,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
ret = kvm_handle_mmio_return(vcpu);
if (ret)
return ret;
} else if (run->exit_reason == KVM_EXIT_HYPERCALL) {
smccc_set_retval(vcpu,
vcpu->run->hypercall.ret,
vcpu->run->hypercall.args[0],
vcpu->run->hypercall.args[1],
vcpu->run->hypercall.args[2]);
}
vcpu_load(vcpu);
@@ -1923,13 +1898,13 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu) {
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order());
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
}
}
static int do_pkvm_init(u32 hyp_va_bits)
{
void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base);
void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
int ret;
preempt_disable();
@@ -2030,7 +2005,7 @@ static int init_hyp_mode(void)
page_addr = page_address(page);
memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr;
kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu] = (unsigned long)page_addr;
}
/*
@@ -2098,7 +2073,7 @@ static int init_hyp_mode(void)
}
for_each_possible_cpu(cpu) {
char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu];
char *percpu_begin = (char *)kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu];
char *percpu_end = percpu_begin + nvhe_percpu_size();
/* Map Hyp percpu pages */


@@ -349,3 +349,4 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n",
spsr, elr_virt, esr, far, hpfar, par, vcpu);
}
EXPORT_SYMBOL_GPL(nvhe_hyp_panic_handler);

View File

@@ -60,12 +60,25 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
vcpu->arch.ctxt.spsr_und = val;
}
unsigned long get_except64_offset(unsigned long psr, unsigned long target_mode,
enum exception_type type)
{
u64 mode = psr & (PSR_MODE_MASK | PSR_MODE32_BIT);
u64 exc_offset;
if (mode == target_mode)
exc_offset = CURRENT_EL_SP_ELx_VECTOR;
else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
exc_offset = CURRENT_EL_SP_EL0_VECTOR;
else if (!(mode & PSR_MODE32_BIT))
exc_offset = LOWER_EL_AArch64_VECTOR;
else
exc_offset = LOWER_EL_AArch32_VECTOR;
return exc_offset + type;
}
/*
* This performs the exception entry at a given EL (@target_mode), stashing PC
* and PSTATE into ELR and SPSR respectively, and computing the new PC/PSTATE.
* The EL passed to this function *must* be a non-secure, privileged mode with
* bit 0 being set (PSTATE.SP == 1).
*
* When an exception is taken, most PSTATE fields are left unchanged in the
* handler. However, some are explicitly overridden (e.g. M[4:0]). Luckily all
* of the inherited bits have the same position in the AArch64/AArch32 SPSR_ELx
@@ -77,45 +90,17 @@ static void __vcpu_write_spsr_und(struct kvm_vcpu *vcpu, u64 val)
* Here we manipulate the fields in order of the AArch64 SPSR_ELx layout, from
* MSB to LSB.
*/
static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
enum exception_type type)
unsigned long get_except64_cpsr(unsigned long old, bool has_mte,
unsigned long sctlr, unsigned long target_mode)
{
unsigned long sctlr, vbar, old, new, mode;
u64 exc_offset;
mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
if (mode == target_mode)
exc_offset = CURRENT_EL_SP_ELx_VECTOR;
else if ((mode | PSR_MODE_THREAD_BIT) == target_mode)
exc_offset = CURRENT_EL_SP_EL0_VECTOR;
else if (!(mode & PSR_MODE32_BIT))
exc_offset = LOWER_EL_AArch64_VECTOR;
else
exc_offset = LOWER_EL_AArch32_VECTOR;
switch (target_mode) {
case PSR_MODE_EL1h:
vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
break;
default:
/* Don't do that */
BUG();
}
*vcpu_pc(vcpu) = vbar + exc_offset + type;
old = *vcpu_cpsr(vcpu);
new = 0;
u64 new = 0;
new |= (old & PSR_N_BIT);
new |= (old & PSR_Z_BIT);
new |= (old & PSR_C_BIT);
new |= (old & PSR_V_BIT);
if (kvm_has_mte(vcpu->kvm))
if (has_mte)
new |= PSR_TCO_BIT;
new |= (old & PSR_DIT_BIT);
@@ -151,6 +136,36 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
new |= target_mode;
return new;
}
/*
* This performs the exception entry at a given EL (@target_mode), stashing PC
* and PSTATE into ELR and SPSR respectively, and computing the new PC/PSTATE.
* The EL passed to this function *must* be a non-secure, privileged mode with
* bit 0 being set (PSTATE.SP == 1).
*/
static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
enum exception_type type)
{
u64 offset = get_except64_offset(*vcpu_cpsr(vcpu), target_mode, type);
unsigned long sctlr, vbar, old, new;
switch (target_mode) {
case PSR_MODE_EL1h:
vbar = __vcpu_read_sys_reg(vcpu, VBAR_EL1);
sctlr = __vcpu_read_sys_reg(vcpu, SCTLR_EL1);
__vcpu_write_sys_reg(vcpu, *vcpu_pc(vcpu), ELR_EL1);
break;
default:
/* Don't do that */
BUG();
}
*vcpu_pc(vcpu) = vbar + offset;
old = *vcpu_cpsr(vcpu);
new = get_except64_cpsr(old, kvm_has_mte(vcpu->kvm), sctlr, target_mode);
*vcpu_cpsr(vcpu) = new;
__vcpu_write_spsr(vcpu, old);
}


@@ -15,15 +15,24 @@ struct pkvm_iommu_ops {
* Driver-specific arguments are passed in a buffer shared by the host.
* The buffer memory has been pinned in EL2 but host retains R/W access.
* Extra care must be taken when reading from it to avoid TOCTOU bugs.
* If the driver maintains its own page tables, it is expected to
* initialize them to all memory owned by the host.
* Driver initialization lock held during callback.
*/
int (*init)(void *data, size_t size);
/*
* Driver-specific validation of device registration inputs.
* This should be stateless. No locks are held at entry.
* Driver-specific validation of a device that is being registered.
* All fields of the device struct have been populated.
* Called with the host lock held.
*/
int (*validate)(phys_addr_t base, size_t size);
int (*validate)(struct pkvm_iommu *dev);
/*
* Validation of a new child device that is being registered by
* the parent device the child selected. Called with the host lock held.
*/
int (*validate_child)(struct pkvm_iommu *dev, struct pkvm_iommu *child);
/*
* Callback to apply a host stage-2 mapping change at driver level.
@@ -56,7 +65,10 @@ struct pkvm_iommu_ops {
};
struct pkvm_iommu {
struct pkvm_iommu *parent;
struct list_head list;
struct list_head siblings;
struct list_head children;
unsigned long id;
const struct pkvm_iommu_ops *ops;
phys_addr_t pa;
@@ -70,9 +82,11 @@ int __pkvm_iommu_driver_init(enum pkvm_iommu_driver_id id, void *data, size_t si
int __pkvm_iommu_register(unsigned long dev_id,
enum pkvm_iommu_driver_id drv_id,
phys_addr_t dev_pa, size_t dev_size,
unsigned long parent_id,
void *kern_mem_va, size_t mem_size);
int __pkvm_iommu_pm_notify(unsigned long dev_id,
enum pkvm_iommu_pm_event event);
int __pkvm_iommu_finalize(void);
int pkvm_iommu_host_stage2_adjust_range(phys_addr_t addr, phys_addr_t *start,
phys_addr_t *end);
bool pkvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u32 esr,
@@ -81,5 +95,6 @@ void pkvm_iommu_host_stage2_idmap(phys_addr_t start, phys_addr_t end,
enum kvm_pgtable_prot prot);
extern const struct pkvm_iommu_ops pkvm_s2mpu_ops;
extern const struct pkvm_iommu_ops pkvm_sysmmu_sync_ops;
#endif /* __ARM64_KVM_NVHE_IOMMU_H__ */


@@ -91,6 +91,9 @@ int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc);
void reclaim_guest_pages(struct kvm_shadow_vm *vm, struct kvm_hyp_memcache *mc);
void psci_mem_protect_inc(void);
void psci_mem_protect_dec(void);
static __always_inline void __load_host_stage2(void)
{
if (static_branch_likely(&kvm_protected_mode_initialized))


@@ -176,6 +176,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_res *res,
DECLARE_REG(phys_addr_t, rx, ctxt, 2);
DECLARE_REG(u32, npages, ctxt, 3);
int ret = 0;
void *rx_virt, *tx_virt;
if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
ret = FFA_RET_INVALID_PARAMETERS;
@@ -209,8 +210,22 @@ static void do_ffa_rxtx_map(struct arm_smccc_res *res,
goto err_unshare_tx;
}
host_kvm.ffa.tx = hyp_phys_to_virt(tx);
host_kvm.ffa.rx = hyp_phys_to_virt(rx);
tx_virt = hyp_phys_to_virt(tx);
ret = hyp_pin_shared_mem(tx_virt, tx_virt + 1);
if (ret) {
ret = FFA_RET_INVALID_PARAMETERS;
goto err_unshare_rx;
}
rx_virt = hyp_phys_to_virt(rx);
ret = hyp_pin_shared_mem(rx_virt, rx_virt + 1);
if (ret) {
ret = FFA_RET_INVALID_PARAMETERS;
goto err_unpin_tx;
}
host_kvm.ffa.tx = tx_virt;
host_kvm.ffa.rx = rx_virt;
out_unlock:
hyp_spin_unlock(&host_kvm.ffa.lock);
@@ -218,6 +233,10 @@ out:
ffa_to_smccc_res(res, ret);
return;
err_unpin_tx:
hyp_unpin_shared_mem(tx_virt, tx_virt + 1);
err_unshare_rx:
__pkvm_host_unshare_hyp(hyp_phys_to_pfn(rx));
err_unshare_tx:
__pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
err_unmap:
@@ -242,9 +261,11 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_res *res,
goto out_unlock;
}
hyp_unpin_shared_mem(host_kvm.ffa.tx, host_kvm.ffa.tx + 1);
WARN_ON(__pkvm_host_unshare_hyp(hyp_virt_to_pfn(host_kvm.ffa.tx)));
host_kvm.ffa.tx = NULL;
hyp_unpin_shared_mem(host_kvm.ffa.rx, host_kvm.ffa.rx + 1);
WARN_ON(__pkvm_host_unshare_hyp(hyp_virt_to_pfn(host_kvm.ffa.rx)));
host_kvm.ffa.rx = NULL;
@@ -263,10 +284,13 @@ static u32 __ffa_host_share_ranges(struct ffa_mem_region_addr_range *ranges,
for (i = 0; i < nranges; ++i) {
struct ffa_mem_region_addr_range *range = &ranges[i];
u64 npages = (range->pg_cnt * FFA_PAGE_SIZE) / PAGE_SIZE;
u64 sz = (u64)range->pg_cnt * FFA_PAGE_SIZE;
u64 pfn = hyp_phys_to_pfn(range->address);
if (__pkvm_host_share_ffa(pfn, npages))
if (!PAGE_ALIGNED(sz))
break;
if (__pkvm_host_share_ffa(pfn, sz / PAGE_SIZE))
break;
}
@@ -280,10 +304,13 @@ static u32 __ffa_host_unshare_ranges(struct ffa_mem_region_addr_range *ranges,
for (i = 0; i < nranges; ++i) {
struct ffa_mem_region_addr_range *range = &ranges[i];
u64 npages = (range->pg_cnt * FFA_PAGE_SIZE) / PAGE_SIZE;
u64 sz = (u64)range->pg_cnt * FFA_PAGE_SIZE;
u64 pfn = hyp_phys_to_pfn(range->address);
if (__pkvm_host_unshare_ffa(pfn, npages))
if (!PAGE_ALIGNED(sz))
break;
if (__pkvm_host_unshare_ffa(pfn, sz / PAGE_SIZE))
break;
}
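The two hunks above replace a pre-computed `npages` with an explicit byte size that is checked for kernel-page alignment before (un)sharing, widening `pg_cnt` to 64 bits first so the multiplication cannot overflow. A minimal standalone sketch of that conversion (the 16 KiB kernel page size is illustrative; `FFA_PAGE_SIZE` is the 4 KiB FF-A granule):

```c
#include <assert.h>
#include <stdint.h>

#define FFA_PAGE_SIZE  4096ULL   /* FF-A translation granule (4 KiB) */
#define KERN_PAGE_SIZE 16384ULL  /* illustrative stand-in for PAGE_SIZE */

/* Widen pg_cnt before multiplying so a large 32-bit count cannot
 * overflow, then reject ranges that do not cover whole kernel pages.
 * Returns the kernel-page count, or -1 for an unaligned range. */
static int64_t ffa_range_to_kernel_pages(uint32_t pg_cnt)
{
	uint64_t sz = (uint64_t)pg_cnt * FFA_PAGE_SIZE;

	if (sz % KERN_PAGE_SIZE)	/* !PAGE_ALIGNED(sz) */
		return -1;
	return (int64_t)(sz / KERN_PAGE_SIZE);
}
```

With a 16 KiB kernel page, four FF-A pages map to exactly one kernel page; three do not, and the range is rejected rather than silently rounded.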


@@ -198,15 +198,15 @@ SYM_CODE_START(__kvm_hyp_host_vector)
invalid_host_el2_vect // FIQ EL2h
invalid_host_el2_vect // Error EL2h
host_el1_sync_vect // Synchronous 64-bit EL1
invalid_host_el1_vect // IRQ 64-bit EL1
invalid_host_el1_vect // FIQ 64-bit EL1
invalid_host_el1_vect // Error 64-bit EL1
host_el1_sync_vect // Synchronous 64-bit EL1/EL0
invalid_host_el1_vect // IRQ 64-bit EL1/EL0
invalid_host_el1_vect // FIQ 64-bit EL1/EL0
invalid_host_el1_vect // Error 64-bit EL1/EL0
invalid_host_el1_vect // Synchronous 32-bit EL1
invalid_host_el1_vect // IRQ 32-bit EL1
invalid_host_el1_vect // FIQ 32-bit EL1
invalid_host_el1_vect // Error 32-bit EL1
host_el1_sync_vect // Synchronous 32-bit EL1/EL0
invalid_host_el1_vect // IRQ 32-bit EL1/EL0
invalid_host_el1_vect // FIQ 32-bit EL1/EL0
invalid_host_el1_vect // Error 32-bit EL1/EL0
SYM_CODE_END(__kvm_hyp_host_vector)
/*


@@ -696,21 +696,63 @@ static void handle___pkvm_vcpu_sync_state(struct kvm_cpu_context *host_ctxt)
}
}
static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
static struct kvm_vcpu *__get_current_vcpu(struct kvm_vcpu *vcpu,
struct pkvm_loaded_state **state)
{
DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
int ret;
struct pkvm_loaded_state *sstate = NULL;
vcpu = kern_hyp_va(vcpu);
if (unlikely(is_protected_kvm_enabled())) {
struct pkvm_loaded_state *state = this_cpu_ptr(&loaded_state);
sstate = this_cpu_ptr(&loaded_state);
flush_shadow_state(state);
if (!sstate || vcpu != sstate->vcpu->arch.pkvm.host_vcpu) {
sstate = NULL;
vcpu = NULL;
}
}
ret = __kvm_vcpu_run(state->vcpu);
*state = sstate;
return vcpu;
}
sync_shadow_state(state, ret);
#define get_current_vcpu(ctxt, regnr, statepp) \
({ \
DECLARE_REG(struct kvm_vcpu *, __vcpu, ctxt, regnr); \
__get_current_vcpu(__vcpu, statepp); \
})
if (state->vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
#define get_current_vcpu_from_cpu_if(ctxt, regnr, statepp) \
({ \
DECLARE_REG(struct vgic_v3_cpu_if *, cif, ctxt, regnr); \
struct kvm_vcpu *__vcpu; \
__vcpu = container_of(cif, \
struct kvm_vcpu, \
arch.vgic_cpu.vgic_v3); \
\
__get_current_vcpu(__vcpu, statepp); \
})
static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
{
struct pkvm_loaded_state *shadow_state;
struct kvm_vcpu *vcpu;
int ret;
vcpu = get_current_vcpu(host_ctxt, 1, &shadow_state);
if (!vcpu) {
cpu_reg(host_ctxt, 1) = -EINVAL;
return;
}
if (unlikely(shadow_state)) {
flush_shadow_state(shadow_state);
ret = __kvm_vcpu_run(shadow_state->vcpu);
sync_shadow_state(shadow_state, ret);
if (shadow_state->vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
/*
* The guest has used the FP, trap all accesses
* from the host (both FP and SVE).
@@ -722,7 +764,7 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
sysreg_clear_set(cptr_el2, 0, reg);
}
} else {
ret = __kvm_vcpu_run(kern_hyp_va(vcpu));
ret = __kvm_vcpu_run(vcpu);
}
cpu_reg(host_ctxt, 1) = ret;
@@ -759,20 +801,19 @@ out:
static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
struct pkvm_loaded_state *shadow_state;
struct kvm_vcpu *vcpu;
vcpu = kern_hyp_va(vcpu);
vcpu = get_current_vcpu(host_ctxt, 1, &shadow_state);
if (!vcpu)
return;
if (unlikely(is_protected_kvm_enabled())) {
struct pkvm_loaded_state *state = this_cpu_ptr(&loaded_state);
/*
* A shadow vcpu can never be updated from EL1, and we
* must have a vcpu loaded when protected mode is
* enabled.
*/
if (!state->vcpu || state->is_protected)
if (shadow_state) {
/* This only applies to non-protected VMs */
if (shadow_state->is_protected)
return;
vcpu = shadow_state->vcpu;
}
__kvm_adjust_pc(vcpu);
@@ -835,56 +876,50 @@ static void handle___kvm_get_mdcr_el2(struct kvm_cpu_context *host_ctxt)
cpu_reg(host_ctxt, 1) = __kvm_get_mdcr_el2();
}
static struct vgic_v3_cpu_if *get_shadow_vgic_v3_cpu_if(struct vgic_v3_cpu_if *cpu_if)
{
if (unlikely(is_protected_kvm_enabled())) {
struct pkvm_loaded_state *state = this_cpu_ptr(&loaded_state);
struct kvm_vcpu *host_vcpu;
if (!state->vcpu)
return NULL;
host_vcpu = state->vcpu->arch.pkvm.host_vcpu;
if (&host_vcpu->arch.vgic_cpu.vgic_v3 != cpu_if)
return NULL;
}
return cpu_if;
}
static void handle___vgic_v3_save_vmcr_aprs(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
struct vgic_v3_cpu_if *shadow_cpu_if;
struct pkvm_loaded_state *shadow_state;
struct kvm_vcpu *vcpu;
cpu_if = kern_hyp_va(cpu_if);
shadow_cpu_if = get_shadow_vgic_v3_cpu_if(cpu_if);
vcpu = get_current_vcpu_from_cpu_if(host_ctxt, 1, &shadow_state);
if (!vcpu)
return;
__vgic_v3_save_vmcr_aprs(shadow_cpu_if);
if (cpu_if != shadow_cpu_if) {
if (shadow_state) {
struct vgic_v3_cpu_if *shadow_cpu_if, *cpu_if;
int i;
shadow_cpu_if = &shadow_state->vcpu->arch.vgic_cpu.vgic_v3;
__vgic_v3_save_vmcr_aprs(shadow_cpu_if);
cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
cpu_if->vgic_vmcr = shadow_cpu_if->vgic_vmcr;
for (i = 0; i < ARRAY_SIZE(cpu_if->vgic_ap0r); i++) {
cpu_if->vgic_ap0r[i] = shadow_cpu_if->vgic_ap0r[i];
cpu_if->vgic_ap1r[i] = shadow_cpu_if->vgic_ap1r[i];
}
} else {
__vgic_v3_save_vmcr_aprs(&vcpu->arch.vgic_cpu.vgic_v3);
}
}
static void handle___vgic_v3_restore_vmcr_aprs(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(struct vgic_v3_cpu_if *, cpu_if, host_ctxt, 1);
struct vgic_v3_cpu_if *shadow_cpu_if;
struct pkvm_loaded_state *shadow_state;
struct kvm_vcpu *vcpu;
cpu_if = kern_hyp_va(cpu_if);
shadow_cpu_if = get_shadow_vgic_v3_cpu_if(cpu_if);
vcpu = get_current_vcpu_from_cpu_if(host_ctxt, 1, &shadow_state);
if (!vcpu)
return;
if (cpu_if != shadow_cpu_if) {
if (shadow_state) {
struct vgic_v3_cpu_if *shadow_cpu_if, *cpu_if;
int i;
shadow_cpu_if = &shadow_state->vcpu->arch.vgic_cpu.vgic_v3;
cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
shadow_cpu_if->vgic_vmcr = cpu_if->vgic_vmcr;
/* Should be a one-off */
shadow_cpu_if->vgic_sre = (ICC_SRE_EL1_DIB |
@@ -894,9 +929,11 @@ static void handle___vgic_v3_restore_vmcr_aprs(struct kvm_cpu_context *host_ctxt
shadow_cpu_if->vgic_ap0r[i] = cpu_if->vgic_ap0r[i];
shadow_cpu_if->vgic_ap1r[i] = cpu_if->vgic_ap1r[i];
}
}
__vgic_v3_restore_vmcr_aprs(shadow_cpu_if);
__vgic_v3_restore_vmcr_aprs(shadow_cpu_if);
} else {
__vgic_v3_restore_vmcr_aprs(&vcpu->arch.vgic_cpu.vgic_v3);
}
}
static void handle___pkvm_init(struct kvm_cpu_context *host_ctxt)
@@ -991,11 +1028,13 @@ static void handle___pkvm_iommu_register(struct kvm_cpu_context *host_ctxt)
DECLARE_REG(enum pkvm_iommu_driver_id, drv_id, host_ctxt, 2);
DECLARE_REG(phys_addr_t, dev_pa, host_ctxt, 3);
DECLARE_REG(size_t, dev_size, host_ctxt, 4);
DECLARE_REG(void *, mem, host_ctxt, 5);
DECLARE_REG(size_t, mem_size, host_ctxt, 6);
DECLARE_REG(unsigned long, parent_id, host_ctxt, 5);
DECLARE_REG(void *, mem, host_ctxt, 6);
DECLARE_REG(size_t, mem_size, host_ctxt, 7);
cpu_reg(host_ctxt, 1) = __pkvm_iommu_register(dev_id, drv_id, dev_pa,
dev_size, mem, mem_size);
dev_size, parent_id,
mem, mem_size);
}
static void handle___pkvm_iommu_pm_notify(struct kvm_cpu_context *host_ctxt)
@@ -1006,6 +1045,11 @@ static void handle___pkvm_iommu_pm_notify(struct kvm_cpu_context *host_ctxt)
cpu_reg(host_ctxt, 1) = __pkvm_iommu_pm_notify(dev_id, event);
}
static void handle___pkvm_iommu_finalize(struct kvm_cpu_context *host_ctxt)
{
cpu_reg(host_ctxt, 1) = __pkvm_iommu_finalize();
}
typedef void (*hcall_t)(struct kvm_cpu_context *);
#define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -1042,14 +1086,48 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__pkvm_iommu_driver_init),
HANDLE_FUNC(__pkvm_iommu_register),
HANDLE_FUNC(__pkvm_iommu_pm_notify),
HANDLE_FUNC(__pkvm_iommu_finalize),
};
static inline u64 kernel__text_addr(void)
{
u64 val;
asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
"movk %0, #0, lsl #16\n"
"movk %0, #0, lsl #32\n"
"movk %0, #0, lsl #48\n",
kvm_get__text)
: "=r" (val));
return val;
}
static inline u64 kernel__etext_addr(void)
{
u64 val;
asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
"movk %0, #0, lsl #16\n"
"movk %0, #0, lsl #32\n"
"movk %0, #0, lsl #48\n",
kvm_get__etext)
: "=r" (val));
return val;
}
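The two accessors above read a kernel address out of a `movz`/`movk` immediate sequence that the `ALTERNATIVE_CB` callback patches at boot. The arithmetic those patched instructions perform is simply assembling a 64-bit value from four 16-bit halfwords; a plain-C equivalent:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the movz/movk sequence: each instruction contributes one
 * 16-bit halfword of the final 64-bit immediate. */
static uint64_t build_imm64(uint16_t w0, uint16_t w16,
			    uint16_t w32, uint16_t w48)
{
	uint64_t val = (uint64_t)w0;	/* movz %0, #w0           */
	val |= (uint64_t)w16 << 16;	/* movk %0, #w16, lsl #16 */
	val |= (uint64_t)w32 << 32;	/* movk %0, #w32, lsl #32 */
	val |= (uint64_t)w48 << 48;	/* movk %0, #w48, lsl #48 */
	return val;
}
```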
static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(unsigned long, id, host_ctxt, 0);
u64 elr = read_sysreg_el2(SYS_ELR) - 4;
unsigned long hcall_min = 0;
hcall_t hfn;
/* Check for the provenance of the HC */
if (unlikely(elr < kernel__text_addr() || elr >= kernel__etext_addr()))
goto inval;
/*
* If pKVM has been initialised then reject any calls to the
* early "privileged" hypercalls. Note that we cannot reject


@@ -23,6 +23,8 @@ u64 cpu_logical_map(unsigned int cpu)
return hyp_cpu_logical_map[cpu];
}
unsigned long __ro_after_init kvm_arm_hyp_percpu_base[NR_CPUS];
unsigned long __hyp_per_cpu_offset(unsigned int cpu)
{
unsigned long *cpu_base_array;


@@ -31,6 +31,12 @@ static struct pkvm_iommu_driver iommu_drivers[PKVM_IOMMU_NR_DRIVERS];
/* IOMMU device list. Must only be accessed with host_kvm.lock held. */
static LIST_HEAD(iommu_list);
static bool iommu_finalized;
static DEFINE_HYP_SPINLOCK(iommu_registration_lock);
static void *iommu_mem_pool;
static size_t iommu_mem_remaining;
static void assert_host_component_locked(void)
{
hyp_assert_lock_held(&host_kvm.lock);
@@ -65,6 +71,8 @@ static const struct pkvm_iommu_ops *get_driver_ops(enum pkvm_iommu_driver_id id)
switch (id) {
case PKVM_IOMMU_DRIVER_S2MPU:
return IS_ENABLED(CONFIG_KVM_S2MPU) ? &pkvm_s2mpu_ops : NULL;
case PKVM_IOMMU_DRIVER_SYSMMU_SYNC:
return IS_ENABLED(CONFIG_KVM_S2MPU) ? &pkvm_sysmmu_sync_ops : NULL;
default:
return NULL;
}
@@ -89,41 +97,56 @@ static inline bool is_driver_ready(struct pkvm_iommu_driver *drv)
return atomic_read(&drv->state) == IOMMU_DRIVER_READY;
}
/* Global memory pool for allocating IOMMU list entry structs. */
static inline struct pkvm_iommu *
alloc_iommu_list_entry(struct pkvm_iommu_driver *drv, void *mem, size_t mem_size)
static size_t __iommu_alloc_size(struct pkvm_iommu_driver *drv)
{
static void *pool;
static size_t remaining;
static DEFINE_HYP_SPINLOCK(lock);
size_t size = sizeof(struct pkvm_iommu) + drv->ops->data_size;
return ALIGN(sizeof(struct pkvm_iommu) + drv->ops->data_size,
sizeof(unsigned long));
}
/* Global memory pool for allocating IOMMU list entry structs. */
static inline struct pkvm_iommu *alloc_iommu(struct pkvm_iommu_driver *drv,
void *mem, size_t mem_size)
{
size_t size = __iommu_alloc_size(drv);
void *ptr;
size = ALIGN(size, sizeof(unsigned long));
hyp_spin_lock(&lock);
assert_host_component_locked();
/*
* If new memory is being provided, replace the existing pool with it.
* Any remaining memory in the pool is discarded.
*/
if (mem && mem_size) {
pool = mem;
remaining = mem_size;
iommu_mem_pool = mem;
iommu_mem_remaining = mem_size;
}
if (size <= remaining) {
ptr = pool;
pool += size;
remaining -= size;
} else {
ptr = NULL;
}
if (size > iommu_mem_remaining)
return NULL;
hyp_spin_unlock(&lock);
ptr = iommu_mem_pool;
iommu_mem_pool += size;
iommu_mem_remaining -= size;
return ptr;
}
static inline void free_iommu(struct pkvm_iommu_driver *drv, struct pkvm_iommu *ptr)
{
size_t size = __iommu_alloc_size(drv);
assert_host_component_locked();
if (!ptr)
return;
/* Only allow freeing the last allocated buffer. */
if ((void*)ptr + size != iommu_mem_pool)
return;
iommu_mem_pool -= size;
iommu_mem_remaining += size;
}
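The `alloc_iommu()`/`free_iommu()` pair above is a bump allocator over a host-donated pool, with the twist that only the most recent allocation may be freed — just enough to undo a failed device registration. A self-contained sketch of that pattern (pool size and alignment are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Bump allocator sketch: allocations carve sequentially from one pool;
 * only the last allocation can be returned. */
static char pool_mem[128];
static char *pool = pool_mem;
static size_t remaining = sizeof(pool_mem);

static size_t align_up(size_t size)
{
	return (size + sizeof(long) - 1) & ~(sizeof(long) - 1);
}

static void *bump_alloc(size_t size)
{
	void *ptr;

	size = align_up(size);
	if (size > remaining)
		return NULL;
	ptr = pool;
	pool += size;
	remaining -= size;
	return ptr;
}

static int bump_free_last(void *ptr, size_t size)
{
	size = align_up(size);
	if ((char *)ptr + size != pool)
		return -1;	/* not the last allocation: refused */
	pool -= size;
	remaining += size;
	return 0;
}

/* Freeing the first of two live allocations is refused; freeing the
 * most recent one succeeds. Returns 1 when both behaviors hold. */
static int bump_demo(void)
{
	void *a = bump_alloc(16);
	void *b = bump_alloc(16);

	if (!a || !b)
		return 0;
	if (bump_free_last(a, 16) != -1)
		return 0;
	if (bump_free_last(b, 16) != 0)
		return 0;
	return 1;
}
```

The restriction keeps the allocator trivial: no free list is needed because the only caller that frees is the error path of the allocation it just made.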
static bool is_overlap(phys_addr_t r1_start, size_t r1_size,
phys_addr_t r2_start, size_t r2_size)
{
@@ -151,22 +174,23 @@ static bool is_mmio_range(phys_addr_t base, size_t size)
return true;
}
static int __snapshot_host_stage2(u64 start, u64 end, u32 level,
static int __snapshot_host_stage2(u64 start, u64 pa_max, u32 level,
kvm_pte_t *ptep,
enum kvm_pgtable_walk_flags flags,
void * const arg)
{
struct pkvm_iommu_driver * const drv = arg;
enum kvm_pgtable_prot prot;
u64 end = start + kvm_granule_size(level);
kvm_pte_t pte = *ptep;
/*
* Valid stage-2 entries are created lazily, invalid ones eagerly.
* Note: In the future we may need to check if [start,end) is MMIO.
* Note: Drivers initialize their PTs to all memory owned by the host,
* so we only call the driver on regions where that is not the case.
*/
prot = (!pte || kvm_pte_valid(pte)) ? PKVM_HOST_MEM_PROT : 0;
drv->ops->host_stage2_idmap_prepare(start, end, prot);
if (pte && !kvm_pte_valid(pte))
drv->ops->host_stage2_idmap_prepare(start, end, /*prot*/ 0);
return 0;
}
@@ -231,13 +255,24 @@ int __pkvm_iommu_driver_init(enum pkvm_iommu_driver_id id, void *data, size_t si
data = kern_hyp_va(data);
/* New driver initialization not allowed after __pkvm_iommu_finalize(). */
hyp_spin_lock(&iommu_registration_lock);
if (iommu_finalized) {
ret = -EPERM;
goto out_unlock;
}
drv = get_driver(id);
ops = get_driver_ops(id);
if (!drv || !ops)
return -EINVAL;
if (!drv || !ops) {
ret = -EINVAL;
goto out_unlock;
}
if (!driver_acquire_init(drv))
return -EBUSY;
if (!driver_acquire_init(drv)) {
ret = -EBUSY;
goto out_unlock;
}
drv->ops = ops;
@@ -249,7 +284,7 @@ int __pkvm_iommu_driver_init(enum pkvm_iommu_driver_id id, void *data, size_t si
hyp_unpin_shared_mem(data, data + size);
}
if (ret)
goto out;
goto out_release;
}
/*
@@ -262,36 +297,47 @@ int __pkvm_iommu_driver_init(enum pkvm_iommu_driver_id id, void *data, size_t si
driver_release_init(drv, /*success=*/true);
host_unlock_component();
out:
out_release:
if (ret)
driver_release_init(drv, /*success=*/false);
out_unlock:
hyp_spin_unlock(&iommu_registration_lock);
return ret;
}
int __pkvm_iommu_register(unsigned long dev_id,
enum pkvm_iommu_driver_id drv_id,
phys_addr_t dev_pa, size_t dev_size,
unsigned long parent_id,
void *kern_mem_va, size_t mem_size)
{
struct pkvm_iommu *dev = NULL;
struct pkvm_iommu_driver *drv;
void *dev_va, *mem_va = NULL;
void *mem_va = NULL;
int ret = 0;
/* New device registration not allowed after __pkvm_iommu_finalize(). */
hyp_spin_lock(&iommu_registration_lock);
if (iommu_finalized) {
ret = -EPERM;
goto out_unlock;
}
drv = get_driver(drv_id);
if (!drv || !is_driver_ready(drv))
return -ENOENT;
if (!drv || !is_driver_ready(drv)) {
ret = -ENOENT;
goto out_unlock;
}
if (!PAGE_ALIGNED(dev_pa) || !PAGE_ALIGNED(dev_size))
return -EINVAL;
if (!PAGE_ALIGNED(dev_pa) || !PAGE_ALIGNED(dev_size)) {
ret = -EINVAL;
goto out_unlock;
}
if (!is_mmio_range(dev_pa, dev_size))
return -EINVAL;
if (drv->ops->validate) {
ret = drv->ops->validate(dev_pa, dev_size);
if (ret)
return ret;
if (!is_mmio_range(dev_pa, dev_size)) {
ret = -EINVAL;
goto out_unlock;
}
/*
@@ -301,52 +347,102 @@ int __pkvm_iommu_register(unsigned long dev_id,
if (kern_mem_va && mem_size) {
mem_va = kern_hyp_va(kern_mem_va);
if (!PAGE_ALIGNED(mem_va) || !PAGE_ALIGNED(mem_size))
return -EINVAL;
if (!PAGE_ALIGNED(mem_va) || !PAGE_ALIGNED(mem_size)) {
ret = -EINVAL;
goto out_unlock;
}
ret = __pkvm_host_donate_hyp(hyp_virt_to_pfn(mem_va),
mem_size >> PAGE_SHIFT);
if (ret)
return ret;
goto out_unlock;
}
/* Allocate memory for the new device entry. */
dev = alloc_iommu_list_entry(drv, mem_va, mem_size);
if (!dev)
return -ENOMEM;
host_lock_component();
/* Create EL2 mapping for the device. */
dev_va = (void *)__pkvm_create_private_mapping(dev_pa, dev_size,
PAGE_HYP_DEVICE);
if (IS_ERR(dev_va))
return PTR_ERR(dev_va);
/* Allocate memory for the new device entry. */
dev = alloc_iommu(drv, mem_va, mem_size);
if (!dev) {
ret = -ENOMEM;
goto out_free;
}
/* Populate the new device entry. */
*dev = (struct pkvm_iommu){
.children = LIST_HEAD_INIT(dev->children),
.id = dev_id,
.ops = drv->ops,
.pa = dev_pa,
.va = dev_va,
.size = dev_size,
};
/* Take the host_kvm lock to block host stage-2 changes. */
host_lock_component();
if (!validate_against_existing_iommus(dev)) {
ret = -EBUSY;
goto out;
goto out_free;
}
/* Unmap the device's MMIO range from host stage-2. */
if (parent_id) {
dev->parent = find_iommu_by_id(parent_id);
if (!dev->parent) {
ret = -EINVAL;
goto out_free;
}
if (dev->parent->ops->validate_child) {
ret = dev->parent->ops->validate_child(dev->parent, dev);
if (ret)
goto out_free;
}
}
if (dev->ops->validate) {
ret = dev->ops->validate(dev);
if (ret)
goto out_free;
}
/*
* Unmap the device's MMIO range from host stage-2. If registration
* is successful, future attempts to re-map will be blocked by
* pkvm_iommu_host_stage2_adjust_range.
*/
ret = host_stage2_unmap_dev_locked(dev_pa, dev_size);
if (ret)
goto out;
goto out_free;
/* Create EL2 mapping for the device. Do it last as it is irreversible. */
dev->va = (void *)__pkvm_create_private_mapping(dev_pa, dev_size,
PAGE_HYP_DEVICE);
if (IS_ERR(dev->va)) {
ret = PTR_ERR(dev->va);
goto out_free;
}
/* Register device and prevent host from mapping the MMIO range. */
list_add_tail(&dev->list, &iommu_list);
if (dev->parent)
list_add_tail(&dev->siblings, &dev->parent->children);
out:
out_free:
if (ret)
free_iommu(drv, dev);
host_unlock_component();
out_unlock:
hyp_spin_unlock(&iommu_registration_lock);
return ret;
}
int __pkvm_iommu_finalize(void)
{
int ret = 0;
hyp_spin_lock(&iommu_registration_lock);
if (!iommu_finalized)
iommu_finalized = true;
else
ret = -EPERM;
hyp_spin_unlock(&iommu_registration_lock);
return ret;
}
@@ -360,10 +456,12 @@ int __pkvm_iommu_pm_notify(unsigned long dev_id, enum pkvm_iommu_pm_event event)
if (dev) {
if (event == PKVM_IOMMU_PM_SUSPEND) {
ret = dev->ops->suspend ? dev->ops->suspend(dev) : 0;
dev->powered = !!ret;
if (!ret)
dev->powered = false;
} else if (event == PKVM_IOMMU_PM_RESUME) {
ret = dev->ops->resume ? dev->ops->resume(dev) : 0;
dev->powered = !ret;
if (!ret)
dev->powered = true;
} else {
ret = -EINVAL;
}
@@ -418,7 +516,8 @@ bool pkvm_iommu_host_dabt_handler(struct kvm_cpu_context *host_ctxt, u32 esr,
if (pa < dev->pa || pa >= dev->pa + dev->size)
continue;
if (!dev->powered || !dev->ops->host_dabt_handler ||
/* No 'powered' check - the host assumes it is powered. */
if (!dev->ops->host_dabt_handler ||
!dev->ops->host_dabt_handler(dev, host_ctxt, esr, pa - dev->pa))
return false;


@@ -28,6 +28,9 @@
(CONTEXT_CFG_VALID_VID_CTX_VID(ctxid, vid) \
| (((ctxid) < (nr_ctx)) ? CONTEXT_CFG_VALID_VID_CTX_VALID(ctxid) : 0))
#define for_each_child(child, dev) \
list_for_each_entry((child), &(dev)->children, siblings)
struct s2mpu_drv_data {
u32 version;
u32 context_cfg_valid_vid;
@@ -155,6 +158,13 @@ static void __set_control_regs(struct pkvm_iommu *dev)
writel_relaxed(ctrl0, dev->va + REG_NS_CTRL0);
}
/* Poll the given SFR until its value has all bits of a given mask set. */
static void __wait_until(void __iomem *addr, u32 mask)
{
while ((readl_relaxed(addr) & mask) != mask)
continue;
}
/* Poll the given SFR as long as its value has all bits of a given mask set. */
static void __wait_while(void __iomem *addr, u32 mask)
{
@@ -164,6 +174,17 @@ static void __wait_while(void __iomem *addr, u32 mask)
static void __wait_for_invalidation_complete(struct pkvm_iommu *dev)
{
struct pkvm_iommu *sync;
/*
* Wait for transactions to drain if SysMMU_SYNCs were registered.
* Assumes that they are in the same power domain as the S2MPU.
*/
for_each_child(sync, dev) {
writel_relaxed(SYNC_CMD_SYNC, sync->va + REG_NS_SYNC_CMD);
__wait_until(sync->va + REG_NS_SYNC_COMP, SYNC_COMP_COMPLETE);
}
/* Must not access SFRs while S2MPU is busy invalidating (v9 only). */
if (is_version(dev, S2MPU_VERSION_9)) {
__wait_while(dev->va + REG_NS_STATUS,
@@ -372,15 +393,44 @@ static u32 host_mmio_reg_access_mask(size_t off, bool is_write)
const u32 write_only = is_write ? read_write : no_access;
u32 masked_off;
/* IRQ handler can clear interrupts. */
if (off == REG_NS_INTERRUPT_CLEAR)
switch (off) {
/* Allow reading control registers for debugging. */
case REG_NS_CTRL0:
return read_only & CTRL0_MASK;
case REG_NS_CTRL1:
return read_only & CTRL1_MASK;
case REG_NS_CFG:
return read_only & CFG_MASK;
/* Allow EL1 IRQ handler to clear interrupts. */
case REG_NS_INTERRUPT_CLEAR:
return write_only & ALL_VIDS_BITMAP;
/* IRQ handler can read bitmap of pending interrupts. */
if (off == REG_NS_FAULT_STATUS)
/* Allow reading number of sets used by MPTC. */
case REG_NS_INFO:
return read_only & INFO_NUM_SET_MASK;
/* Allow EL1 IRQ handler to read bitmap of pending interrupts. */
case REG_NS_FAULT_STATUS:
return read_only & ALL_VIDS_BITMAP;
/*
* Allow reading MPTC entries for debugging. That involves:
* - writing (set,way) to READ_MPTC
* - reading READ_MPTC_*
*/
case REG_NS_READ_MPTC:
return write_only & READ_MPTC_MASK;
case REG_NS_READ_MPTC_TAG_PPN:
return read_only & READ_MPTC_TAG_PPN_MASK;
case REG_NS_READ_MPTC_TAG_OTHERS:
return read_only & READ_MPTC_TAG_OTHERS_MASK;
case REG_NS_READ_MPTC_DATA:
return read_only;
}
/* IRQ handler can read fault information. */
/* Allow reading L1ENTRY registers for debugging. */
if (off >= REG_NS_L1ENTRY_L2TABLE_ADDR(0, 0) &&
off < REG_NS_L1ENTRY_ATTR(NR_VIDS, 0))
return read_only;
/* Allow EL1 IRQ handler to read fault information. */
masked_off = off & ~REG_NS_FAULT_VID_MASK;
if ((masked_off == REG_NS_FAULT_PA_LOW(0)) ||
(masked_off == REG_NS_FAULT_PA_HIGH(0)) ||
@@ -445,7 +495,7 @@ static int s2mpu_init(void *data, size_t size)
host_mpt.fmpt[gb] = (struct fmpt){
.smpt = smpt,
.gran_1g = true,
.prot = MPT_PROT_NONE,
.prot = MPT_PROT_RW,
};
}
@@ -465,9 +515,28 @@ static int s2mpu_init(void *data, size_t size)
return ret;
}
static int s2mpu_validate(phys_addr_t pa, size_t size)
static int s2mpu_validate(struct pkvm_iommu *dev)
{
if (size != S2MPU_MMIO_SIZE)
if (dev->size != S2MPU_MMIO_SIZE)
return -EINVAL;
return 0;
}
static int s2mpu_validate_child(struct pkvm_iommu *dev, struct pkvm_iommu *child)
{
if (child->ops != &pkvm_sysmmu_sync_ops)
return -EINVAL;
return 0;
}
static int sysmmu_sync_validate(struct pkvm_iommu *dev)
{
if (dev->size != SYSMMU_SYNC_S2_MMIO_SIZE)
return -EINVAL;
if (!dev->parent || dev->parent->ops != &pkvm_s2mpu_ops)
return -EINVAL;
return 0;
@@ -476,6 +545,7 @@ static int s2mpu_validate(phys_addr_t pa, size_t size)
const struct pkvm_iommu_ops pkvm_s2mpu_ops = (struct pkvm_iommu_ops){
.init = s2mpu_init,
.validate = s2mpu_validate,
.validate_child = s2mpu_validate_child,
.resume = s2mpu_resume,
.suspend = s2mpu_suspend,
.host_stage2_idmap_prepare = s2mpu_host_stage2_idmap_prepare,
@@ -483,3 +553,7 @@ const struct pkvm_iommu_ops pkvm_s2mpu_ops = (struct pkvm_iommu_ops){
.host_dabt_handler = s2mpu_host_dabt_handler,
.data_size = sizeof(struct s2mpu_drv_data),
};
const struct pkvm_iommu_ops pkvm_sysmmu_sync_ops = (struct pkvm_iommu_ops){
.validate = sysmmu_sync_validate,
};


@@ -620,6 +620,50 @@ static bool is_dabt(u64 esr)
return ESR_ELx_EC(esr) == ESR_ELx_EC_DABT_LOW;
}
static void host_inject_abort(struct kvm_cpu_context *host_ctxt)
{
u64 spsr = read_sysreg_el2(SYS_SPSR);
u64 esr = read_sysreg_el2(SYS_ESR);
u64 ventry, ec;
/* Repaint the ESR to report a same-level fault if taken from EL1 */
if ((spsr & PSR_MODE_MASK) != PSR_MODE_EL0t) {
ec = ESR_ELx_EC(esr);
if (ec == ESR_ELx_EC_DABT_LOW)
ec = ESR_ELx_EC_DABT_CUR;
else if (ec == ESR_ELx_EC_IABT_LOW)
ec = ESR_ELx_EC_IABT_CUR;
else
WARN_ON(1);
esr &= ~ESR_ELx_EC_MASK;
esr |= ec << ESR_ELx_EC_SHIFT;
}
/*
* Since S1PTW should only ever be set for stage-2 faults, we're pretty
* much guaranteed that it won't be set in ESR_EL1 by the hardware. So,
* let's use that bit to allow the host abort handler to differentiate
* this abort from normal userspace faults.
*
* Note: although S1PTW is RES0 at EL1, it is guaranteed by the
* architecture to be backed by flops, so it should be safe to use.
*/
esr |= ESR_ELx_S1PTW;
write_sysreg_el1(esr, SYS_ESR);
write_sysreg_el1(spsr, SYS_SPSR);
write_sysreg_el1(read_sysreg_el2(SYS_ELR), SYS_ELR);
write_sysreg_el1(read_sysreg_el2(SYS_FAR), SYS_FAR);
ventry = read_sysreg_el1(SYS_VBAR);
ventry += get_except64_offset(spsr, PSR_MODE_EL1h, except_type_sync);
write_sysreg_el2(ventry, SYS_ELR);
spsr = get_except64_cpsr(spsr, system_supports_mte(),
read_sysreg_el1(SYS_SCTLR), PSR_MODE_EL1h);
write_sysreg_el2(spsr, SYS_SPSR);
}
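The EC-repainting step in `host_inject_abort()` above can be sketched in isolation: a lower-EL abort syndrome is rewritten as its same-level counterpart while every other syndrome bit is preserved (EC encodings per the Arm ARM):

```c
#include <assert.h>
#include <stdint.h>

#define ESR_ELx_EC_SHIFT	26
#define ESR_ELx_EC_MASK		(0x3fULL << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC_IABT_LOW	0x20	/* instruction abort, lower EL */
#define ESR_ELx_EC_IABT_CUR	0x21	/* instruction abort, same EL  */
#define ESR_ELx_EC_DABT_LOW	0x24	/* data abort, lower EL        */
#define ESR_ELx_EC_DABT_CUR	0x25	/* data abort, same EL         */

/* Rewrite a lower-EL abort EC as its same-level counterpart, leaving
 * the remaining syndrome bits untouched. */
static uint64_t repaint_esr_same_level(uint64_t esr)
{
	uint64_t ec = (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;

	if (ec == ESR_ELx_EC_DABT_LOW)
		ec = ESR_ELx_EC_DABT_CUR;
	else if (ec == ESR_ELx_EC_IABT_LOW)
		ec = ESR_ELx_EC_IABT_CUR;

	esr &= ~ESR_ELx_EC_MASK;
	return esr | (ec << ESR_ELx_EC_SHIFT);
}
```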
void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
{
struct kvm_vcpu_fault_info fault;
@@ -644,7 +688,11 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
ret = host_stage2_idmap(addr);
host_unlock_component();
BUG_ON(ret && ret != -EAGAIN);
if (ret == -EPERM)
host_inject_abort(host_ctxt);
else
BUG_ON(ret && ret != -EAGAIN);
}
/* This corresponds to locking order */
@@ -1073,10 +1121,14 @@ static int guest_complete_donation(u64 addr, const struct pkvm_mem_transition *t
u64 size = tx->nr_pages * PAGE_SIZE;
int err;
if (tx->initiator.id == PKVM_ID_HOST && ipa_in_pvmfw_region(vm, addr)) {
err = pkvm_load_pvmfw_pages(vm, addr, phys, size);
if (err)
return err;
if (tx->initiator.id == PKVM_ID_HOST) {
psci_mem_protect_inc();
if (ipa_in_pvmfw_region(vm, addr)) {
err = pkvm_load_pvmfw_pages(vm, addr, phys, size);
if (err)
return err;
}
}
return kvm_pgtable_stage2_map(&vm->pgt, addr, size, phys, prot,
@@ -1893,6 +1945,7 @@ int __pkvm_host_reclaim_page(u64 pfn)
if (ret)
goto unlock;
page->flags &= ~HOST_PAGE_NEED_POISONING;
psci_mem_protect_dec();
}
ret = host_stage2_set_owner_locked(addr, PAGE_SIZE, pkvm_host_id);


@@ -331,6 +331,12 @@ static void *admit_host_page(void *arg)
int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc)
{
return __topup_hyp_memcache(mc, min_pages, admit_host_page,
hyp_virt_to_phys, host_mc);
struct kvm_hyp_memcache tmp = *host_mc;
int ret;
ret = __topup_hyp_memcache(mc, min_pages, admit_host_page,
hyp_virt_to_phys, &tmp);
*host_mc = tmp;
return ret;
}
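The `refill_memcache()` change above snapshots the host-provided memcache into a hypervisor-local `tmp`, works on the copy, and publishes the result back — a classic double-fetch mitigation, so a racing host writer cannot change the descriptor between validation and use. A sketch of the pattern with a hypothetical descriptor:

```c
#include <assert.h>

/* Hypothetical shared descriptor the untrusted side can write to. */
struct memcache {
	unsigned long head;
	unsigned long nr_pages;
};

/* Fetch the shared state once, operate on the local copy, then
 * publish the updated copy back in a single store of the struct. */
static void consume(struct memcache *shared, unsigned long want,
		    unsigned long *got)
{
	struct memcache tmp = *shared;		/* single fetch */
	unsigned long n = tmp.nr_pages < want ? tmp.nr_pages : want;

	tmp.nr_pages -= n;
	*got = n;
	*shared = tmp;				/* publish back */
}

/* Take 3 of 8 pages: encodes (taken, left) as taken*100 + left. */
static unsigned long consume_demo(void)
{
	struct memcache mc = { .head = 0, .nr_pages = 8 };
	unsigned long got;

	consume(&mc, 3, &got);
	return got * 100 + mc.nr_pages;
}
```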


@@ -225,6 +225,11 @@ static int index_to_shadow_handle(int index)
extern unsigned long hyp_nr_cpus;
/*
* Track the vcpu most recently loaded on each physical CPU.
*/
static DEFINE_PER_CPU(struct kvm_vcpu *, last_loaded_vcpu);
/*
* Spinlock for protecting the shadow table related state.
* Protects writes to shadow_table, num_shadow_entries, and next_shadow_alloc,
@@ -267,6 +272,7 @@ struct kvm_vcpu *get_shadow_vcpu(int shadow_handle, unsigned int vcpu_idx)
{
struct kvm_vcpu *vcpu = NULL;
struct kvm_shadow_vm *vm;
bool flush_context = false;
hyp_spin_lock(&shadow_lock);
vm = find_shadow_by_handle(shadow_handle);
@@ -279,12 +285,28 @@ struct kvm_vcpu *get_shadow_vcpu(int shadow_handle, unsigned int vcpu_idx)
vcpu = NULL;
goto unlock;
}
/*
* Guarantee that both TLBs and I-cache are private to each vcpu.
* The check below is conservative and could lead to over-invalidation,
* because there is no need to nuke the contexts if the vcpu belongs to
* a different vm.
*/
if (vcpu != __this_cpu_read(last_loaded_vcpu)) {
flush_context = true;
__this_cpu_write(last_loaded_vcpu, vcpu);
}
vcpu->arch.pkvm.loaded_on_cpu = true;
hyp_page_ref_inc(hyp_virt_to_page(vm));
unlock:
hyp_spin_unlock(&shadow_lock);
/* No need for the lock while flushing the context. */
if (flush_context)
__kvm_flush_cpu_context(vcpu->arch.hw_mmu);
return vcpu;
}
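The `last_loaded_vcpu` logic added to `get_shadow_vcpu()` above flushes the CPU context whenever a different vcpu pointer is loaded on this physical CPU, keeping TLB and I-cache contents private to each vcpu. Its core decision reduces to a small sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Per-CPU "most recently loaded vcpu" tracker (one CPU shown). */
static const void *last_loaded;

/* Returns 1 when the caller must flush the CPU context (i.e. a
 * different vcpu than last time is being loaded), 0 otherwise. */
static int load_vcpu_needs_flush(const void *vcpu)
{
	if (vcpu != last_loaded) {
		last_loaded = vcpu;
		return 1;	/* caller runs __kvm_flush_cpu_context() */
	}
	return 0;
}
```

As the original comment notes, comparing raw pointers is conservative: reloading the same address always skips the flush, while any change — even to a vcpu of a different VM whose context could not collide — triggers one.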
@@ -354,8 +376,19 @@ static void unpin_host_vcpus(struct shadow_vcpu_state *shadow_vcpus, int nr_vcpu
for (i = 0; i < nr_vcpus; i++) {
struct kvm_vcpu *host_vcpu = shadow_vcpus[i].vcpu.arch.pkvm.host_vcpu;
struct kvm_vcpu *shadow_vcpu = &shadow_vcpus[i].vcpu;
size_t sve_state_size;
void *sve_state;
hyp_unpin_shared_mem(host_vcpu, host_vcpu + 1);
if (!test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features))
continue;
sve_state = shadow_vcpu->arch.sve_state;
sve_state = kern_hyp_va(sve_state);
sve_state_size = vcpu_sve_state_size(shadow_vcpu);
hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size);
}
}
@@ -405,6 +438,27 @@ static int init_shadow_structs(struct kvm *kvm, struct kvm_shadow_vm *vm,
if (ret)
return ret;
if (test_bit(KVM_ARM_VCPU_SVE, shadow_vcpu->arch.features)) {
size_t sve_state_size;
void *sve_state;
shadow_vcpu->arch.sve_state = READ_ONCE(host_vcpu->arch.sve_state);
shadow_vcpu->arch.sve_max_vl = READ_ONCE(host_vcpu->arch.sve_max_vl);
sve_state = kern_hyp_va(shadow_vcpu->arch.sve_state);
sve_state_size = vcpu_sve_state_size(shadow_vcpu);
if (!shadow_vcpu->arch.sve_state || !sve_state_size ||
hyp_pin_shared_mem(sve_state,
sve_state + sve_state_size)) {
clear_bit(KVM_ARM_VCPU_SVE,
shadow_vcpu->arch.features);
shadow_vcpu->arch.sve_state = NULL;
shadow_vcpu->arch.sve_max_vl = 0;
return -EINVAL;
}
}
if (vm->arch.pkvm.enabled)
pkvm_vcpu_init_traps(shadow_vcpu);
kvm_reset_pvm_sys_regs(shadow_vcpu);
@@ -664,6 +718,7 @@ int __pkvm_teardown_shadow(int shadow_handle)
u64 pfn;
u64 nr_pages;
void *addr;
int i;
/* Lookup then remove entry from the shadow table. */
hyp_spin_lock(&shadow_lock);
@@ -678,6 +733,21 @@ int __pkvm_teardown_shadow(int shadow_handle)
goto err_unlock;
}
/*
* Clear the tracking for last_loaded_vcpu for all cpus for this vm in
* case the same addresses for those vcpus are reused for future vms.
*/
for (i = 0; i < hyp_nr_cpus; i++) {
struct kvm_vcpu **last_loaded_vcpu_ptr =
per_cpu_ptr(&last_loaded_vcpu, i);
struct kvm_vcpu *vcpu = *last_loaded_vcpu_ptr;
if (vcpu && vcpu->arch.pkvm.shadow_handle == shadow_handle)
*last_loaded_vcpu_ptr = NULL;
}
/* Ensure the VMID is clean before it can be reallocated */
__kvm_tlb_flush_vmid(&vm->arch.mmu);
remove_shadow_table(shadow_handle);
hyp_spin_unlock(&shadow_lock);
@@ -799,6 +869,10 @@ void pkvm_reset_vcpu(struct kvm_vcpu *vcpu)
*vcpu_pc(vcpu) = entry;
vm->pvmfw_entry_vcpu = NULL;
/* Auto enroll MMIO guard */
set_bit(KVM_ARCH_FLAG_MMIO_GUARD,
&vcpu->arch.pkvm.shadow_vm->arch.flags);
} else {
*vcpu_pc(vcpu) = reset_state->pc;
vcpu_set_reg(vcpu, 0, reset_state->r0);


@@ -222,6 +222,44 @@ asmlinkage void __noreturn kvm_host_psci_cpu_entry(bool is_cpu_on)
__host_enter(host_ctxt);
}
static DEFINE_HYP_SPINLOCK(mem_protect_lock);
static u64 psci_mem_protect(s64 offset)
{
static u64 cnt;
u64 new = cnt + offset;
hyp_assert_lock_held(&mem_protect_lock);
if (!offset || kvm_host_psci_config.version < PSCI_VERSION(1, 1))
return cnt;
if (!cnt || !new)
psci_call(PSCI_1_1_FN64_MEM_PROTECT, offset < 0 ? 0 : 1, 0, 0);
cnt = new;
return cnt;
}
static bool psci_mem_protect_active(void)
{
return psci_mem_protect(0);
}
void psci_mem_protect_inc(void)
{
hyp_spin_lock(&mem_protect_lock);
psci_mem_protect(1);
hyp_spin_unlock(&mem_protect_lock);
}
void psci_mem_protect_dec(void)
{
hyp_spin_lock(&mem_protect_lock);
psci_mem_protect(-1);
hyp_spin_unlock(&mem_protect_lock);
}
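`psci_mem_protect()` above keeps a counter and only issues the firmware call on the 0→1 and 1→0 edges, so repeated donations of protected pages do not emit one SMC each. A runnable sketch of that edge-triggered refcount, with the PSCI call stubbed out as a counter (the stub stands in for `psci_call(PSCI_1_1_FN64_MEM_PROTECT, ...)`):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t cnt;
static unsigned int smc_calls;	/* how many times "firmware" was called */

static void psci_mem_protect_stub(int enable)
{
	(void)enable;
	smc_calls++;
}

/* Adjust the protect count; fire the firmware call only when the
 * count transitions to or from zero. Returns the new count. */
static uint64_t mem_protect(int64_t offset)
{
	uint64_t new = cnt + offset;

	if (!offset)
		return cnt;
	if (!cnt || !new)
		psci_mem_protect_stub(offset > 0);
	cnt = new;
	return cnt;
}

/* inc, inc, dec, dec: only the first inc and last dec reach firmware. */
static unsigned int mem_protect_demo(void)
{
	mem_protect(1);
	mem_protect(1);
	mem_protect(-1);
	mem_protect(-1);
	return smc_calls;
}
```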
static unsigned long psci_0_1_handler(u64 func_id, struct kvm_cpu_context *host_ctxt)
{
if (is_psci_0_1(cpu_off, func_id) || is_psci_0_1(migrate, func_id))
@@ -251,6 +289,8 @@ static unsigned long psci_0_2_handler(u64 func_id, struct kvm_cpu_context *host_
case PSCI_0_2_FN_SYSTEM_OFF:
case PSCI_0_2_FN_SYSTEM_RESET:
pkvm_clear_pvmfw_pages();
/* Avoid racing with a MEM_PROTECT call. */
hyp_spin_lock(&mem_protect_lock);
return psci_forward(host_ctxt);
case PSCI_0_2_FN64_CPU_SUSPEND:
return psci_cpu_suspend(func_id, host_ctxt);
@@ -266,6 +306,11 @@ static unsigned long psci_1_0_handler(u64 func_id, struct kvm_cpu_context *host_
switch (func_id) {
case PSCI_1_1_FN64_SYSTEM_RESET2:
pkvm_clear_pvmfw_pages();
hyp_spin_lock(&mem_protect_lock);
if (psci_mem_protect_active()) {
return psci_0_2_handler(PSCI_0_2_FN_SYSTEM_RESET,
host_ctxt);
}
fallthrough;
case PSCI_1_0_FN_PSCI_FEATURES:
case PSCI_1_0_FN_SET_SUSPEND_MODE:


@@ -133,20 +133,16 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
}
/*
* Map the host's .bss and .rodata sections RO in the hypervisor, but
* transfer the ownership from the host to the hypervisor itself to
* make sure it can't be donated or shared with another entity.
* Map the host sections RO in the hypervisor, but transfer the
* ownership from the host to the hypervisor itself to make sure they
* can't be donated or shared with another entity.
*
* The ownership transition requires matching changes in the host
* stage-2. This will be done later (see finalize_host_mappings()) once
* the hyp_vmemmap is addressable.
*/
prot = pkvm_mkstate(PAGE_HYP_RO, PKVM_PAGE_SHARED_OWNED);
ret = pkvm_create_mappings(__start_rodata, __end_rodata, prot);
if (ret)
return ret;
ret = pkvm_create_mappings(__hyp_bss_end, __bss_stop, prot);
ret = pkvm_create_mappings(&kvm_vgic_global_state, &kvm_vgic_global_state + 1, prot);
if (ret)
return ret;
@@ -232,15 +232,9 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
case SYS_ID_AA64MMFR2_EL1:
return get_pvm_id_aa64mmfr2(vcpu);
default:
/*
* Should never happen because all cases are covered in
* pvm_sys_reg_descs[].
*/
WARN_ON(1);
break;
/* Unhandled ID register, RAZ */
return 0;
}
return 0;
}
static u64 read_id_reg(const struct kvm_vcpu *vcpu,
@@ -321,6 +315,16 @@ static bool pvm_gic_read_sre(struct kvm_vcpu *vcpu,
/* Mark the specified system register as an AArch64 feature id register. */
#define AARCH64(REG) { SYS_DESC(REG), .access = pvm_access_id_aarch64 }
/*
* sys_reg_desc initialiser for architecturally unallocated cpufeature ID
* register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
* (1 <= crm < 8, 0 <= Op2 < 8).
*/
#define ID_UNALLOCATED(crm, op2) { \
Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2), \
.access = pvm_access_id_aarch64, \
}
/* Mark the specified system register as Read-As-Zero/Write-Ignored */
#define RAZ_WI(REG) { SYS_DESC(REG), .access = pvm_access_raz_wi }
@@ -375,24 +379,46 @@ static const struct sys_reg_desc pvm_sys_reg_descs[] = {
AARCH32(SYS_MVFR0_EL1),
AARCH32(SYS_MVFR1_EL1),
AARCH32(SYS_MVFR2_EL1),
ID_UNALLOCATED(3,3),
AARCH32(SYS_ID_PFR2_EL1),
AARCH32(SYS_ID_DFR1_EL1),
AARCH32(SYS_ID_MMFR5_EL1),
ID_UNALLOCATED(3,7),
/* AArch64 ID registers */
/* CRm=4 */
AARCH64(SYS_ID_AA64PFR0_EL1),
AARCH64(SYS_ID_AA64PFR1_EL1),
ID_UNALLOCATED(4,2),
ID_UNALLOCATED(4,3),
AARCH64(SYS_ID_AA64ZFR0_EL1),
ID_UNALLOCATED(4,5),
ID_UNALLOCATED(4,6),
ID_UNALLOCATED(4,7),
AARCH64(SYS_ID_AA64DFR0_EL1),
AARCH64(SYS_ID_AA64DFR1_EL1),
ID_UNALLOCATED(5,2),
ID_UNALLOCATED(5,3),
AARCH64(SYS_ID_AA64AFR0_EL1),
AARCH64(SYS_ID_AA64AFR1_EL1),
ID_UNALLOCATED(5,6),
ID_UNALLOCATED(5,7),
AARCH64(SYS_ID_AA64ISAR0_EL1),
AARCH64(SYS_ID_AA64ISAR1_EL1),
AARCH64(SYS_ID_AA64ISAR2_EL1),
ID_UNALLOCATED(6,3),
ID_UNALLOCATED(6,4),
ID_UNALLOCATED(6,5),
ID_UNALLOCATED(6,6),
ID_UNALLOCATED(6,7),
AARCH64(SYS_ID_AA64MMFR0_EL1),
AARCH64(SYS_ID_AA64MMFR1_EL1),
AARCH64(SYS_ID_AA64MMFR2_EL1),
ID_UNALLOCATED(7,3),
ID_UNALLOCATED(7,4),
ID_UNALLOCATED(7,5),
ID_UNALLOCATED(7,6),
ID_UNALLOCATED(7,7),
/* Scalable Vector Registers are restricted. */
@@ -58,24 +58,6 @@ static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
val[3] = lower_32_bits(cycles);
}
static int kvm_vcpu_exit_hcall(struct kvm_vcpu *vcpu, u32 nr, u32 nr_args)
{
u64 mask = vcpu->kvm->arch.hypercall_exit_enabled;
u32 i;
if (nr_args > 6 || !(mask & BIT(nr)))
return -EINVAL;
vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
vcpu->run->hypercall.nr = nr;
for (i = 0; i < nr_args; ++i)
vcpu->run->hypercall.args[i] = vcpu_get_reg(vcpu, i + 1);
vcpu->run->hypercall.longmode = !vcpu_mode_is_32bit(vcpu);
return 0;
}
int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
{
u32 func_id = smccc_get_function(vcpu);
@@ -163,14 +145,6 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
kvm_ptp_get_time(vcpu, val);
break;
case ARM_SMCCC_VENDOR_HYP_KVM_MEM_SHARE_FUNC_ID:
if (!kvm_vcpu_exit_hcall(vcpu, ARM_SMCCC_KVM_FUNC_MEM_SHARE, 3))
return 0;
break;
case ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID:
if (!kvm_vcpu_exit_hcall(vcpu, ARM_SMCCC_KVM_FUNC_MEM_UNSHARE, 3))
return 0;
break;
case ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_MAP_FUNC_ID:
if (kvm_vm_is_protected(vcpu->kvm) && !topup_hyp_memcache(vcpu))
val[0] = SMCCC_RET_SUCCESS;