mirror of
https://github.com/hardkernel/linux.git
synced 2026-03-25 12:00:22 +09:00
binder: back port changes from kernel 4.19 [1/3]
PD#SWPL-8572

Problem: On the Android platform, each process may allocate 1MB of vmalloc
address space for IPC, but most processes do not use the full range. This
wastes address space and can prevent the driver from working properly on
32-bit kernels.

Solution: Google fixed this in kernel 4.19, so back-port the following
changes.

Squashed commit of the following:

commit b12a56e5342e15e99b0fb07c67dfce0891ba2f6b
Author: Todd Kjos <tkjos@google.com>
Date:   Tue Mar 19 09:53:01 2019 -0700

    FROMGIT: binder: fix BUG_ON found by selinux-testsuite

    The selinux-testsuite found an issue resulting in a BUG_ON() where a
    conditional relied on a size_t going negative when checking the
    validity of a buffer offset.

    (cherry picked from commit 5997da8214 git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-linus)
    Bug: 67668716
    Change-Id: Ib3b408717141deadddcb6b95ad98c0b97d9d98ea
    Fixes: 7a67a39320 ("binder: add function to copy binder object from buffer")
    Reported-by: Paul Moore <paul@paul-moore.com>
    Tested-by: Paul Moore <paul@paul-moore.com>
    Signed-off-by: Todd Kjos <tkjos@google.com>

commit 5b28e504d93a5f1efc074dd7cdcadc07293bb783
Author: Todd Kjos <tkjos@android.com>
Date:   Thu Feb 14 15:22:57 2019 -0800

    UPSTREAM: binder: fix handling of misaligned binder object

    Fixes crash found by syzbot:
    kernel BUG at drivers/android/binder_alloc.c:LINE! (2)

    (cherry picked from commit 26528be672)
    Bug: 67668716
    Reported-and-tested-by: syzbot+55de1eb4975dec156d8f@syzkaller.appspotmail.com
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: Ib8597dd05a158f78503d4affe6c5f46ded16a811

commit e110c3b44e437bad09f76c2b42f23dcad898f57d
Author: Todd Kjos <tkjos@android.com>
Date:   Wed Feb 13 11:48:53 2019 -0800

    UPSTREAM: binder: fix sparse issue in binder_alloc_selftest.c

    Fixes sparse issues reported by the kbuild test robot running on
    https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
    char-misc-testing: bde4a19fc0 ("binder: use userspace pointer as base
    of buffer space")

    Error output (drivers/android/binder_alloc_selftest.c):
    sparse: warning: incorrect type in assignment (different address spaces)
    sparse: expected void *page_addr
    sparse: got void [noderef] <asn:1> *user_data
    sparse: error: subtraction of different types can't work

    Fixed by adding necessary "__user" tags.

    (cherry picked from commit 36f3093792)
    Bug: 67668716
    Reported-by: kbuild test robot <lkp@intel.com>
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: Ia0a16d163251381d4bc04f46a44dddbc18b10a85

commit 9f6fd7733286f1af04d153c9d3a050ca2615b3cc
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:20 2019 -0800

    BACKPORT: binder: use userspace pointer as base of buffer space

    Now that alloc->buffer points to the userspace vm_area, rename
    buffer->data to buffer->user_data and rename local pointers that hold
    user addresses. Also use the "__user" tag to annotate all user
    pointers so sparse can flag cases where user pointer values are
    copied to kernel pointers. Refactor code to use offsets instead of
    user pointers.

    (cherry picked from commit bde4a19fc0)
    Bug: 67668716
    Change-Id: I9d04b844c5994d1f6214da795799e6b373bc9816
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 194d8606b011657ce30bf0c240a5adcad0691201
Author: Todd Kjos <tkjos@android.com>
Date:   Wed Dec 5 15:19:25 2018 -0800

    UPSTREAM: binder: fix kerneldoc header for struct binder_buffer

    Fix the incomplete kerneldoc header for struct binder_buffer.

    (cherry picked from commit 7a2670a5bc)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: I6bb942e6a9466b02653349943524462f205af839

commit 55cb58623a60d48678d8eb74e1cabe7744ed62c2
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:19 2019 -0800

    BACKPORT: binder: remove user_buffer_offset

    Remove user_buffer_offset since there is no kernel buffer pointer
    anymore.

    (cherry picked from commit c41358a5f5)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: I399219867704dc5013453a7738193c742fc970ad

commit 3301f77efa9d99e742e5642243b891e014becf17
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:18 2019 -0800

    UPSTREAM: binder: remove kernel vm_area for buffer space

    Remove the kernel's vm_area and the code that maps buffer pages into
    it.

    (cherry picked from commit 880211667b)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: I2595bb8416c2bbfcf97ad3d7380ae94e29c209fb

commit 628c27a60665f15984364f6c0a1bda03473b3a78
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:17 2019 -0800

    UPSTREAM: binder: avoid kernel vm_area for buffer fixups

    Refactor the functions that validate and fix up struct binder_buffer
    pointer objects to avoid using vm_area pointers. Instead copy
    to/from kernel space using binder_alloc_copy_to_buffer() and
    binder_alloc_copy_from_buffer().

    The following functions were refactored:
        binder_validate_ptr()
        binder_validate_fixup()
        binder_fixup_parent()

    (cherry picked from commit db6b0b810b)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: Ic222af9b6c56bf48fd0b65debe981d19a7809e77

commit ed39057090cc4a95c318bafcd97f418da56e3867
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:16 2019 -0800

    BACKPORT: binder: add function to copy binder object from buffer

    When creating or tearing down a transaction, the binder driver
    examines objects in the buffer and takes appropriate action. To do
    this without needing to dereference pointers into the buffer, local
    copies of the objects are needed. This patch introduces a function
    to validate and copy binder objects from the buffer to a local
    structure.

    (cherry picked from commit 7a67a39320)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: I42dfe238a2d20bdeff479068ca87a80e4577e64a

commit 01f8f48c56b53faf1c795112f451a032a0d00b75
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:15 2019 -0800

    BACKPORT: binder: add functions to copy to/from binder buffers

    Avoid vm_area when copying to or from binder buffers. Instead, new
    copy functions are added that copy from kernel space to binder
    buffer space. These use kmap_atomic() and kunmap_atomic() to create
    temporary mappings and then memcpy() is used to copy within that
    page. Also, kmap_atomic() / kunmap_atomic() use the appropriate
    cache flushing to support VIVT cache architectures. Allow binder to
    build if CPU_CACHE_VIVT is defined.

    Several uses of the new functions are added here. More to follow in
    subsequent patches.

    (cherry picked from commit 8ced0c6231)
    Bug: 67668716
    Change-Id: I6a93d2396d0a80c352a1d563fc7fb523a753e38c
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit bfc28d4c046d2a1aea5db66508e7fbb65a31a4a9
Author: Todd Kjos <tkjos@android.com>
Date:   Fri Feb 8 10:35:14 2019 -0800

    UPSTREAM: binder: create userspace-to-binder-buffer copy function

    The binder driver uses a vm_area to map the per-process binder
    buffer space. For 32-bit android devices, this is now taking too
    much vmalloc space. This patch removes the use of vm_area when
    copying the transaction data from the sender to the buffer space.
    Instead of using copy_from_user() for multi-page copies, it now uses
    binder_alloc_copy_user_to_buffer() which uses kmap() and kunmap() to
    map each page, and uses copy_from_user() for copying to that page.

    (cherry picked from commit 1a7c3d9bb7)
    Bug: 67668716
    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Change-Id: I59ff83455984fce4626476e30601ed8b99858a92

commit 89a1a65d35200d8ca94c865f061f11af41a8ced7
Author: Todd Kjos <tkjos@android.com>
Date:   Mon Jan 14 09:10:21 2019 -0800

    FROMGIT: binder: create node flag to request sender's security context

    To allow servers to verify client identity, allow a node flag to be
    set that causes the sender's security context to be delivered with
    the transaction. The BR_TRANSACTION command is extended in
    BR_TRANSACTION_SEC_CTX to contain a pointer to the security context
    string.

    Signed-off-by: Todd Kjos <tkjos@google.com>
    Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    (cherry picked from commit ec74136ded https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master)
    Change-Id: I44496546e2d0dc0022f818a45cd52feb1c1a92cb
    Signed-off-by: Todd Kjos <tkjos@google.com>

commit 4afd6d2498ecd54e4211c6e47d8956a686a52ee3
Author: Todd Kjos <tkjos@android.com>
Date:   Wed Dec 5 15:19:26 2018 -0800

    UPSTREAM: binder: filter out nodes when showing binder procs

    When dumping out binder transactions via a debug node, the output is
    too verbose if a process has many nodes. Change the output for
    transaction dumps to only display nodes with pending async
    transactions.

    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    (cherry picked from commit ecd589d8f5)
    Bug: 112037142
    Change-Id: Iaa76ebdc844037ce1ee3bf2e590676790a959cef

commit 72e3c1d60a499bfa547d962a150082f47bfb16af
Author: Todd Kjos <tkjos@android.com>
Date:   Tue Nov 6 15:55:32 2018 -0800

    binder: fix race that allows malicious free of live buffer

    commit 7bada55ab5 upstream.

    Malicious code can attempt to free buffers using the BC_FREE_BUFFER
    ioctl to binder. There are protections against a user freeing a
    buffer while in use by the kernel, however there was a window where
    BC_FREE_BUFFER could be used to free a recently allocated buffer
    that was not completely initialized. This resulted in a
    use-after-free detected by KASAN with a malicious test program.

    This window is closed by setting the buffer's allow_user_free
    attribute to 0 when the buffer is allocated or when the user has
    previously freed it, instead of waiting for the caller to set it.
    The problem was that when the struct buffer was recycled,
    allow_user_free was stale and set to 1, allowing a free to go
    through.

    Signed-off-by: Todd Kjos <tkjos@google.com>
    Acked-by: Arve Hjønnevåg <arve@android.com>
    Cc: stable <stable@vger.kernel.org> # 4.14
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit c7940ee7e55f4caec80ab646b7f9d495ee2677c6
Author: Martijn Coenen <maco@android.com>
Date:   Sat Aug 25 13:50:56 2018 -0700

    UPSTREAM: binder: Add BINDER_GET_NODE_INFO_FOR_REF ioctl.

    This allows the context manager to retrieve information about nodes
    that it holds a reference to, such as the current number of
    references to those nodes. Such information can for example be used
    to determine whether the servicemanager is the only process holding
    a reference to a node. This information can then be passed on to the
    process holding the node, which can in turn decide whether it wants
    to shut down to reduce resource usage.

    Bug: 79983843
    Change-Id: I21e52ed1ca2137f7bfdc0300365fb1285b7e3d70
    Signed-off-by: Martijn Coenen <maco@android.com>

commit afd02b5ead68a94eb6bf1bf5234271687d7eb461
Author: Minchan Kim <minchan@kernel.org>
Date:   Thu Aug 23 14:29:56 2018 +0900

    android: binder: fix the race mmap and alloc_new_buf_locked

    There is a RaceFuzzer report like below because we have no lock to
    close the race between binder_mmap and binder_alloc_new_buf_locked.
    To close the race, let's use a memory barrier so that if someone
    sees a non-NULL alloc->vma, alloc->vma_vm_mm should never be NULL.

    (I didn't add a stable mark intentionally because standard android
    userspace libraries that interact with binder (libbinder &
    libhwbinder) prevent the mmap/ioctl race. - from Todd)

    "
    Thread interleaving:
    CPU0 (binder_alloc_mmap_handler)    CPU1 (binder_alloc_new_buf_locked)
    =====                               =====
    // drivers/android/binder_alloc.c
    // #L718 (v4.18-rc3)
    alloc->vma = vma;
                                        // drivers/android/binder_alloc.c
                                        // #L346 (v4.18-rc3)
                                        if (alloc->vma == NULL) {
                                        ...
                                        // alloc->vma is not NULL at this point
                                        return ERR_PTR(-ESRCH);
                                        }
                                        ...
                                        // #L438
                                        binder_update_page_range(alloc, 0,
                                            (void *)PAGE_ALIGN((uintptr_t)buffer->data),
                                            end_page_addr);
                                        // In binder_update_page_range() #L218
                                        // But still alloc->vma_vm_mm is NULL here
                                        if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
    alloc->vma_vm_mm = vma->vm_mm;

    Crash Log:
    ==================================================================
    BUG: KASAN: null-ptr-deref in __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
    BUG: KASAN: null-ptr-deref in atomic_add_unless include/linux/atomic.h:533 [inline]
    BUG: KASAN: null-ptr-deref in mmget_not_zero include/linux/sched/mm.h:75 [inline]
    BUG: KASAN: null-ptr-deref in binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
    Write of size 4 at addr 0000000000000058 by task syz-executor0/11184

    CPU: 1 PID: 11184 Comm: syz-executor0 Not tainted 4.18.0-rc3 #1
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x16e/0x22c lib/dump_stack.c:113
     kasan_report_error mm/kasan/report.c:352 [inline]
     kasan_report+0x163/0x380 mm/kasan/report.c:412
     check_memory_region_inline mm/kasan/kasan.c:260 [inline]
     check_memory_region+0x140/0x1a0 mm/kasan/kasan.c:267
     kasan_check_write+0x14/0x20 mm/kasan/kasan.c:278
     __atomic_add_unless include/asm-generic/atomic-instrumented.h:89 [inline]
     atomic_add_unless include/linux/atomic.h:533 [inline]
     mmget_not_zero include/linux/sched/mm.h:75 [inline]
     binder_update_page_range+0xece/0x18e0 drivers/android/binder_alloc.c:218
     binder_alloc_new_buf_locked drivers/android/binder_alloc.c:443 [inline]
     binder_alloc_new_buf+0x467/0xc30 drivers/android/binder_alloc.c:513
     binder_transaction+0x125b/0x4fb0 drivers/android/binder.c:2957
     binder_thread_write+0xc08/0x2770 drivers/android/binder.c:3528
     binder_ioctl_write_read.isra.39+0x24f/0x8e0 drivers/android/binder.c:4456
     binder_ioctl+0xa86/0xf34 drivers/android/binder.c:4596
     vfs_ioctl fs/ioctl.c:46 [inline]
     do_vfs_ioctl+0x154/0xd40 fs/ioctl.c:686
     ksys_ioctl+0x94/0xb0 fs/ioctl.c:701
     __do_sys_ioctl fs/ioctl.c:708 [inline]
     __se_sys_ioctl fs/ioctl.c:706 [inline]
     __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:706
     do_syscall_64+0x167/0x4b0 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
    "

    Signed-off-by: Todd Kjos <tkjos@google.com>
    Signed-off-by: Minchan Kim <minchan@kernel.org>
    Reviewed-by: Martijn Coenen <maco@android.com>
    Cc: stable <stable@vger.kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3ed5fd0f095e9d6fe5f33f909165a8cd596e8b46
Author: Sherry Yang <sherryy@android.com>
Date:   Tue Aug 7 12:57:13 2018 -0700

    android: binder: Rate-limit debug and userspace triggered err msgs

    Use rate-limited debug messages where userspace can trigger excessive
    log spams.

    Acked-by: Arve Hjønnevåg <arve@android.com>
    Signed-off-by: Sherry Yang <sherryy@android.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8129fb3ee7af23a888383aa23647c9d576ecdfef
Author: Sherry Yang <sherryy@android.com>
Date:   Thu Jul 26 17:17:17 2018 -0700

    android: binder: Show extra_buffers_size in trace

    Add extra_buffers_size to the binder_transaction_alloc_buf
    tracepoint.

    Acked-by: Arve Hjønnevåg <arve@android.com>
    Signed-off-by: Sherry Yang <sherryy@android.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 3b0bbcb65457ddec6fbee72bb26002e2bba16089
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Mon Jul 23 14:41:38 2018 -0700

    android: binder: Include asm/cacheflush.h after linux/ include files

    If asm/cacheflush.h is included first, the following build warnings
    are seen with sparc32 builds.

    In file included from arch/sparc/include/asm/cacheflush.h:11:0,
                     from drivers/android/binder.c:54:
    arch/sparc/include/asm/cacheflush_32.h:40:37: warning:
        'struct page' declared inside parameter list will not be visible
        outside of this definition or declaration

    Moving the asm/ include after linux/ includes solves the problem.

    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit e8a4948f49629c6ab122339f46908884d55ca7e9
Author: Guenter Roeck <linux@roeck-us.net>
Date:   Mon Jul 23 14:47:23 2018 -0700

    android: binder_alloc: Include asm/cacheflush.h after linux/ include files

    If asm/cacheflush.h is included first, the following build warnings
    are seen with sparc32 builds.

    In file included from ./arch/sparc/include/asm/cacheflush.h:11:0,
                     from drivers/android/binder_alloc.c:20:
    ./arch/sparc/include/asm/cacheflush_32.h:40:37: warning:
        'struct page' declared inside parameter list

    Moving the asm/ include after linux/ includes fixes the problem.

    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

commit 8cae6730ef318700ab3a0db3ef43ee6a5e5856c8
Author: Geert Uytterhoeven <geert@linux-m68k.org>
Date:   Wed Jun 6 14:40:56 2018 +0200

    android: binder: Drop dependency on !M68K

    As of commit 7124330dab ("m68k/uaccess: Revive 64-bit get_user()"),
    the 64-bit Android binder interface builds fine on m68k.

    Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

verify: p212
Change-Id: I1bac2c5345bcac64a3890f1688c1ecc4a3654a79
Signed-off-by: Tao Zeng <tao.zeng@amlogic.com>
@@ -9,7 +9,7 @@ if ANDROID

 config ANDROID_BINDER_IPC
 	bool "Android Binder IPC Driver"
-	depends on MMU && !M68K
+	depends on MMU
 	default n
 	---help---
 	  Binder is used in Android for both communication between processes,

File diff suppressed because it is too large.
@@ -17,7 +17,6 @@

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

-#include <asm/cacheflush.h>
 #include <linux/list.h>
 #include <linux/mm.h>
 #include <linux/module.h>
@@ -28,6 +27,10 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/list_lru.h>
+#include <linux/ratelimit.h>
+#include <asm/cacheflush.h>
+#include <linux/uaccess.h>
+#include <linux/highmem.h>
 #include "binder_alloc.h"
 #include "binder_trace.h"
@@ -36,11 +39,12 @@ struct list_lru binder_alloc_lru;
 static DEFINE_MUTEX(binder_alloc_mmap_lock);

 enum {
+	BINDER_DEBUG_USER_ERROR             = 1U << 0,
 	BINDER_DEBUG_OPEN_CLOSE             = 1U << 1,
 	BINDER_DEBUG_BUFFER_ALLOC           = 1U << 2,
 	BINDER_DEBUG_BUFFER_ALLOC_ASYNC     = 1U << 3,
 };
-static uint32_t binder_alloc_debug_mask;
+static uint32_t binder_alloc_debug_mask = BINDER_DEBUG_USER_ERROR;

 module_param_named(debug_mask, binder_alloc_debug_mask,
 		   uint, 0644);
@@ -48,7 +52,7 @@ module_param_named(debug_mask, binder_alloc_debug_mask,
 #define binder_alloc_debug(mask, x...) \
 	do { \
 		if (binder_alloc_debug_mask & mask) \
-			pr_info(x); \
+			pr_info_ratelimited(x); \
 	} while (0)

 static struct binder_buffer *binder_buffer_next(struct binder_buffer *buffer)
@@ -65,9 +69,8 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
 				       struct binder_buffer *buffer)
 {
 	if (list_is_last(&buffer->entry, &alloc->buffers))
-		return (u8 *)alloc->buffer +
-			alloc->buffer_size - (u8 *)buffer->data;
-	return (u8 *)binder_buffer_next(buffer)->data - (u8 *)buffer->data;
+		return alloc->buffer + alloc->buffer_size - buffer->user_data;
+	return binder_buffer_next(buffer)->user_data - buffer->user_data;
 }

 static void binder_insert_free_buffer(struct binder_alloc *alloc,
@@ -117,9 +120,9 @@ static void binder_insert_allocated_buffer_locked(
 		buffer = rb_entry(parent, struct binder_buffer, rb_node);
 		BUG_ON(buffer->free);

-		if (new_buffer->data < buffer->data)
+		if (new_buffer->user_data < buffer->user_data)
 			p = &parent->rb_left;
-		else if (new_buffer->data > buffer->data)
+		else if (new_buffer->user_data > buffer->user_data)
 			p = &parent->rb_right;
 		else
 			BUG();
@@ -134,29 +137,27 @@ static struct binder_buffer *binder_alloc_prepare_to_free_locked(
 {
 	struct rb_node *n = alloc->allocated_buffers.rb_node;
 	struct binder_buffer *buffer;
-	void *kern_ptr;
+	void __user *uptr;

-	kern_ptr = (void *)(user_ptr - alloc->user_buffer_offset);
+	uptr = (void __user *)user_ptr;

 	while (n) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);
 		BUG_ON(buffer->free);

-		if (kern_ptr < buffer->data)
+		if (uptr < buffer->user_data)
 			n = n->rb_left;
-		else if (kern_ptr > buffer->data)
+		else if (uptr > buffer->user_data)
 			n = n->rb_right;
 		else {
 			/*
 			 * Guard against user threads attempting to
-			 * free the buffer twice
+			 * free the buffer when in use by kernel or
+			 * after it's already been freed.
 			 */
-			if (buffer->free_in_progress) {
-				pr_err("%d:%d FREE_BUFFER u%016llx user freed buffer twice\n",
-				       alloc->pid, current->pid, (u64)user_ptr);
-				return NULL;
-			}
-			buffer->free_in_progress = 1;
+			if (!buffer->allow_user_free)
+				return ERR_PTR(-EPERM);
+			buffer->allow_user_free = 0;
 			return buffer;
 		}
 	}
@@ -186,9 +187,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 }

 static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
-				    void *start, void *end)
+				    void __user *start, void __user *end)
 {
-	void *page_addr;
+	void __user *page_addr;
 	unsigned long user_page_addr;
 	struct binder_lru_page *page;
 	struct vm_area_struct *vma = NULL;
@@ -224,8 +225,9 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	}

 	if (!vma && need_mm) {
-		pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
-		       alloc->pid);
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
+				   alloc->pid);
 		goto err_no_vma;
 	}

@@ -262,18 +264,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		page->alloc = alloc;
 		INIT_LIST_HEAD(&page->lru);

-		ret = map_kernel_range_noflush((unsigned long)page_addr,
-					       PAGE_SIZE, PAGE_KERNEL,
-					       &page->page_ptr);
-		flush_cache_vmap((unsigned long)page_addr,
-				(unsigned long)page_addr + PAGE_SIZE);
-		if (ret != 1) {
-			pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n",
-			       alloc->pid, page_addr);
-			goto err_map_kernel_failed;
-		}
-		user_page_addr =
-			(uintptr_t)page_addr + alloc->user_buffer_offset;
+		user_page_addr = (uintptr_t)page_addr;
 		ret = vm_insert_page(vma, user_page_addr, page[0].page_ptr);
 		if (ret) {
 			pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
@@ -311,8 +302,6 @@ free_range:
 		continue;

 err_vm_insert_page_failed:
-		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
-err_map_kernel_failed:
 		__free_page(page->page_ptr);
 		page->page_ptr = NULL;
 err_alloc_page_failed:
@@ -327,6 +316,35 @@ err_no_vma:
 	return vma ? -ENOMEM : -ESRCH;
 }

+static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
+		struct vm_area_struct *vma)
+{
+	if (vma)
+		alloc->vma_vm_mm = vma->vm_mm;
+	/*
+	 * If we see alloc->vma is not NULL, buffer data structures set up
+	 * completely. Look at smp_rmb side binder_alloc_get_vma.
+	 * We also want to guarantee new alloc->vma_vm_mm is always visible
+	 * if alloc->vma is set.
+	 */
+	smp_wmb();
+	alloc->vma = vma;
+}
+
+static inline struct vm_area_struct *binder_alloc_get_vma(
+		struct binder_alloc *alloc)
+{
+	struct vm_area_struct *vma = NULL;
+
+	if (alloc->vma) {
+		/* Look at description in binder_alloc_set_vma */
+		smp_rmb();
+		vma = alloc->vma;
+	}
+	return vma;
+}
+
 static struct binder_buffer *binder_alloc_new_buf_locked(
 		struct binder_alloc *alloc,
 		size_t data_size,
@@ -338,14 +356,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 	struct binder_buffer *buffer;
 	size_t buffer_size;
 	struct rb_node *best_fit = NULL;
-	void *has_page_addr;
-	void *end_page_addr;
+	void __user *has_page_addr;
+	void __user *end_page_addr;
 	size_t size, data_offsets_size;
 	int ret;

-	if (alloc->vma == NULL) {
-		pr_err("%d: binder_alloc_buf, no vma\n",
-		       alloc->pid);
+	if (!binder_alloc_get_vma(alloc)) {
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "%d: binder_alloc_buf, no vma\n",
+				   alloc->pid);
 		return ERR_PTR(-ESRCH);
 	}

@@ -417,11 +436,14 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 			if (buffer_size > largest_free_size)
 				largest_free_size = buffer_size;
 		}
-		pr_err("%d: binder_alloc_buf size %zd failed, no address space\n",
-		       alloc->pid, size);
-		pr_err("allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
-		       total_alloc_size, allocated_buffers, largest_alloc_size,
-		       total_free_size, free_buffers, largest_free_size);
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "%d: binder_alloc_buf size %zd failed, no address space\n",
+				   alloc->pid, size);
+		binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+				   "allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
+				   total_alloc_size, allocated_buffers,
+				   largest_alloc_size, total_free_size,
+				   free_buffers, largest_free_size);
 		return ERR_PTR(-ENOSPC);
 	}
 	if (n == NULL) {
@@ -433,15 +455,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 		      "%d: binder_alloc_buf size %zd got buffer %pK size %zd\n",
 		      alloc->pid, size, buffer, buffer_size);

-	has_page_addr =
-		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK);
+	has_page_addr = (void __user *)
+		(((uintptr_t)buffer->user_data + buffer_size) & PAGE_MASK);
 	WARN_ON(n && buffer_size != size);
 	end_page_addr =
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
+		(void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
 	if (end_page_addr > has_page_addr)
 		end_page_addr = has_page_addr;
-	ret = binder_update_page_range(alloc, 1,
-	    (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr);
+	ret = binder_update_page_range(alloc, 1, (void __user *)
+		PAGE_ALIGN((uintptr_t)buffer->user_data), end_page_addr);
 	if (ret)
 		return ERR_PTR(ret);
@@ -454,7 +476,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 			__func__, alloc->pid);
 		goto err_alloc_buf_struct_failed;
 	}
-	new_buffer->data = (u8 *)buffer->data + size;
+	new_buffer->user_data = (u8 __user *)buffer->user_data + size;
 	list_add(&new_buffer->entry, &buffer->entry);
 	new_buffer->free = 1;
 	binder_insert_free_buffer(alloc, new_buffer);
@@ -462,7 +484,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(

 	rb_erase(best_fit, &alloc->free_buffers);
 	buffer->free = 0;
-	buffer->free_in_progress = 0;
+	buffer->allow_user_free = 0;
 	binder_insert_allocated_buffer_locked(alloc, buffer);
 	binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 		     "%d: binder_alloc_buf size %zd got %pK\n",
@@ -480,8 +502,8 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
 	return buffer;

 err_alloc_buf_struct_failed:
-	binder_update_page_range(alloc, 0,
-				 (void *)PAGE_ALIGN((uintptr_t)buffer->data),
+	binder_update_page_range(alloc, 0, (void __user *)
+				 PAGE_ALIGN((uintptr_t)buffer->user_data),
 				 end_page_addr);
 	return ERR_PTR(-ENOMEM);
 }
@@ -516,14 +538,15 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
 	return buffer;
 }

-static void *buffer_start_page(struct binder_buffer *buffer)
+static void __user *buffer_start_page(struct binder_buffer *buffer)
 {
-	return (void *)((uintptr_t)buffer->data & PAGE_MASK);
+	return (void __user *)((uintptr_t)buffer->user_data & PAGE_MASK);
 }

-static void *prev_buffer_end_page(struct binder_buffer *buffer)
+static void __user *prev_buffer_end_page(struct binder_buffer *buffer)
 {
-	return (void *)(((uintptr_t)(buffer->data) - 1) & PAGE_MASK);
+	return (void __user *)
+		(((uintptr_t)(buffer->user_data) - 1) & PAGE_MASK);
 }

 static void binder_delete_free_buffer(struct binder_alloc *alloc,
@@ -538,7 +561,8 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc,
 		to_free = false;
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%d: merge free, buffer %pK share page with %pK\n",
-				   alloc->pid, buffer->data, prev->data);
+				   alloc->pid, buffer->user_data,
+				   prev->user_data);
 	}

 	if (!list_is_last(&buffer->entry, &alloc->buffers)) {
@@ -548,23 +572,24 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc,
 			binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 					   "%d: merge free, buffer %pK share page with %pK\n",
 					   alloc->pid,
-					   buffer->data,
-					   next->data);
+					   buffer->user_data,
+					   next->user_data);
 		}
 	}

-	if (PAGE_ALIGNED(buffer->data)) {
+	if (PAGE_ALIGNED(buffer->user_data)) {
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%d: merge free, buffer start %pK is page aligned\n",
-				   alloc->pid, buffer->data);
+				   alloc->pid, buffer->user_data);
 		to_free = false;
 	}

 	if (to_free) {
 		binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
 				   "%d: merge free, buffer %pK do not share page with %pK or %pK\n",
-				   alloc->pid, buffer->data,
-				   prev->data, next ? next->data : NULL);
+				   alloc->pid, buffer->user_data,
+				   prev->user_data,
+				   next ? next->user_data : NULL);
 		binder_update_page_range(alloc, 0, buffer_start_page(buffer),
 					 buffer_start_page(buffer) + PAGE_SIZE);
 	}
@@ -590,8 +615,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
 	BUG_ON(buffer->free);
 	BUG_ON(size > buffer_size);
 	BUG_ON(buffer->transaction != NULL);
-	BUG_ON(buffer->data < alloc->buffer);
-	BUG_ON(buffer->data > alloc->buffer + alloc->buffer_size);
+	BUG_ON(buffer->user_data < alloc->buffer);
+	BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);

 	if (buffer->async_transaction) {
 		alloc->free_async_space += size + sizeof(struct binder_buffer);
@@ -602,8 +627,9 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
 	}

 	binder_update_page_range(alloc, 0,
-		(void *)PAGE_ALIGN((uintptr_t)buffer->data),
-		(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK));
+		(void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data),
+		(void __user *)(((uintptr_t)
+				buffer->user_data + buffer_size) & PAGE_MASK));

 	rb_erase(&buffer->rb_node, &alloc->allocated_buffers);
 	buffer->free = 1;
@@ -659,7 +685,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 			      struct vm_area_struct *vma)
 {
 	int ret;
-	struct vm_struct *area;
 	const char *failure_string;
 	struct binder_buffer *buffer;

@@ -670,30 +695,11 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_already_mapped;
 	}

-	area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
-	if (area == NULL) {
-		ret = -ENOMEM;
-		failure_string = "get_vm_area";
-		goto err_get_vm_area_failed;
-	}
-	alloc->buffer = area->addr;
-	alloc->user_buffer_offset =
-		vma->vm_start - (uintptr_t)alloc->buffer;
+	alloc->buffer = (void __user *)vma->vm_start;
 	mutex_unlock(&binder_alloc_mmap_lock);

-#ifdef CONFIG_CPU_CACHE_VIPT
-	if (cache_is_vipt_aliasing()) {
-		while (CACHE_COLOUR(
-				(vma->vm_start ^ (uint32_t)alloc->buffer))) {
-			pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
-				__func__, alloc->pid, vma->vm_start,
-				vma->vm_end, alloc->buffer);
-			vma->vm_start += PAGE_SIZE;
-		}
-	}
-#endif
-	alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
-				   ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
+	alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE,
+			       sizeof(alloc->pages[0]),
 			       GFP_KERNEL);
 	if (alloc->pages == NULL) {
 		ret = -ENOMEM;
@@ -709,14 +715,12 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 		goto err_alloc_buf_struct_failed;
 	}

-	buffer->data = alloc->buffer;
+	buffer->user_data = alloc->buffer;
 	list_add(&buffer->entry, &alloc->buffers);
 	buffer->free = 1;
 	binder_insert_free_buffer(alloc, buffer);
 	alloc->free_async_space = alloc->buffer_size / 2;
-	barrier();
-	alloc->vma = vma;
-	alloc->vma_vm_mm = vma->vm_mm;
+	binder_alloc_set_vma(alloc, vma);
 	/* Same as mmgrab() in later kernel versions */
 	atomic_inc(&alloc->vma_vm_mm->mm_count);

@@ -727,13 +731,13 @@ err_alloc_buf_struct_failed:
 	alloc->pages = NULL;
 err_alloc_pages_failed:
 	mutex_lock(&binder_alloc_mmap_lock);
-	vfree(alloc->buffer);
 	alloc->buffer = NULL;
-err_get_vm_area_failed:
 err_already_mapped:
 	mutex_unlock(&binder_alloc_mmap_lock);
-	pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
-	       alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
+	binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
+			   "%s: %d %lx-%lx %s failed %d\n", __func__,
+			   alloc->pid, vma->vm_start, vma->vm_end,
+			   failure_string, ret);
 	return ret;
 }
@@ -744,10 +748,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 	int buffers, page_count;
 	struct binder_buffer *buffer;

-	BUG_ON(alloc->vma);
-
 	buffers = 0;
 	mutex_lock(&alloc->mutex);
+	BUG_ON(alloc->vma);
+
 	while ((n = rb_first(&alloc->allocated_buffers))) {
 		buffer = rb_entry(n, struct binder_buffer, rb_node);

@@ -773,7 +777,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 		int i;

 		for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-			void *page_addr;
+			void __user *page_addr;
 			bool on_lru;

 			if (!alloc->pages[i].page_ptr)
@@ -786,12 +790,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
 				     "%s: %d: page %d at %pK %s\n",
 				     __func__, alloc->pid, i, page_addr,
 				     on_lru ? "on lru" : "active");
-			unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
 			__free_page(alloc->pages[i].page_ptr);
 			page_count++;
 		}
 		kfree(alloc->pages);
-		vfree(alloc->buffer);
 	}
 	mutex_unlock(&alloc->mutex);
 	if (alloc->vma_vm_mm)
@@ -806,7 +808,7 @@ static void print_binder_buffer(struct seq_file *m, const char *prefix,
 				struct binder_buffer *buffer)
 {
 	seq_printf(m, "%s %d: %pK size %zd:%zd:%zd %s\n",
-		   prefix, buffer->debug_id, buffer->data,
+		   prefix, buffer->debug_id, buffer->user_data,
 		   buffer->data_size, buffer->offsets_size,
 		   buffer->extra_buffers_size,
 		   buffer->transaction ? "active" : "delivered");
@@ -890,7 +892,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
  */
 void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
-	WRITE_ONCE(alloc->vma, NULL);
+	binder_alloc_set_vma(alloc, NULL);
 }

 /**
@@ -925,7 +927,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,

 	index = page - alloc->pages;
 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
-	vma = alloc->vma;
+	vma = binder_alloc_get_vma(alloc);
 	if (vma) {
 		if (!mmget_not_zero(alloc->vma_vm_mm))
 			goto err_mmget;
@@ -940,10 +942,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	if (vma) {
 		trace_binder_unmap_user_start(alloc, index);

-		zap_page_range(vma,
-			       page_addr +
-			       alloc->user_buffer_offset,
-			       PAGE_SIZE, NULL);
+		zap_page_range(vma, page_addr, PAGE_SIZE, NULL);

 		trace_binder_unmap_user_end(alloc, index);

@@ -953,7 +952,6 @@ enum lru_status binder_alloc_free_page(struct list_head *item,

 	trace_binder_unmap_kernel_start(alloc, index);

-	unmap_kernel_range(page_addr, PAGE_SIZE);
 	__free_page(page->page_ptr);
 	page->page_ptr = NULL;
@@ -1020,3 +1018,173 @@ int binder_alloc_shrinker_init(void)
 	}
 	return ret;
 }
+
+/**
+ * check_buffer() - verify that buffer/offset is safe to access
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @offset: offset into @buffer data
+ * @bytes: bytes to access from offset
+ *
+ * Check that the @offset/@bytes are within the size of the given
+ * @buffer and that the buffer is currently active and not freeable.
+ * Offsets must also be multiples of sizeof(u32). The kernel is
+ * allowed to touch the buffer in two cases:
+ *
+ * 1) when the buffer is being created:
+ *     (buffer->free == 0 && buffer->allow_user_free == 0)
+ * 2) when the buffer is being torn down:
+ *     (buffer->free == 0 && buffer->transaction == NULL).
+ *
+ * Return: true if the buffer is safe to access
+ */
+static inline bool check_buffer(struct binder_alloc *alloc,
+				struct binder_buffer *buffer,
+				binder_size_t offset, size_t bytes)
+{
+	size_t buffer_size = binder_alloc_buffer_size(alloc, buffer);
+
+	return buffer_size >= bytes &&
+		offset <= buffer_size - bytes &&
+		IS_ALIGNED(offset, sizeof(u32)) &&
+		!buffer->free &&
+		(!buffer->allow_user_free || !buffer->transaction);
+}
+
+/**
+ * binder_alloc_get_page() - get kernel pointer for given buffer offset
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @buffer_offset: offset into @buffer data
+ * @pgoffp: address to copy final page offset to
+ *
+ * Lookup the struct page corresponding to the address
+ * at @buffer_offset into @buffer->user_data. If @pgoffp is not
+ * NULL, the byte-offset into the page is written there.
+ *
+ * The caller is responsible to ensure that the offset points
+ * to a valid address within the @buffer and that @buffer is
+ * not freeable by the user. Since it can't be freed, we are
+ * guaranteed that the corresponding elements of @alloc->pages[]
+ * cannot change.
+ *
+ * Return: struct page
+ */
+static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
+					  struct binder_buffer *buffer,
+					  binder_size_t buffer_offset,
+					  pgoff_t *pgoffp)
+{
+	binder_size_t buffer_space_offset = buffer_offset +
+		(buffer->user_data - alloc->buffer);
+	pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
+	size_t index = buffer_space_offset >> PAGE_SHIFT;
+	struct binder_lru_page *lru_page;
+
+	lru_page = &alloc->pages[index];
+	*pgoffp = pgoff;
+	return lru_page->page_ptr;
+}
+
+/**
+ * binder_alloc_copy_user_to_buffer() - copy src user to tgt user
+ * @alloc: binder_alloc for this proc
+ * @buffer: binder buffer to be accessed
+ * @buffer_offset: offset into @buffer data
+ * @from: userspace pointer to source buffer
+ * @bytes: bytes to copy
+ *
+ * Copy bytes from source userspace to target buffer.
+ *
+ * Return: bytes remaining to be copied
+ */
+unsigned long
+binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 const void __user *from,
+				 size_t bytes)
+{
+	if (!check_buffer(alloc, buffer, buffer_offset, bytes))
+		return bytes;
+
+	while (bytes) {
+		unsigned long size;
+		unsigned long ret;
+		struct page *page;
+		pgoff_t pgoff;
+		void *kptr;
+
+		page = binder_alloc_get_page(alloc, buffer,
+					     buffer_offset, &pgoff);
+		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
+		kptr = kmap(page) + pgoff;
+		ret = copy_from_user(kptr, from, size);
+		kunmap(page);
+		if (ret)
+			return bytes - size + ret;
+		bytes -= size;
+		from += size;
+		buffer_offset += size;
+	}
+	return 0;
+}
+
+static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
+					bool to_buffer,
+					struct binder_buffer *buffer,
+					binder_size_t buffer_offset,
+					void *ptr,
+					size_t bytes)
+{
+	/* All copies must be 32-bit aligned and 32-bit size */
+	BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
+
+	while (bytes) {
+		unsigned long size;
+		struct page *page;
+		pgoff_t pgoff;
+		void *tmpptr;
+		void *base_ptr;
+
+		page = binder_alloc_get_page(alloc, buffer,
+					     buffer_offset, &pgoff);
+		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
+		base_ptr = kmap_atomic(page);
+		tmpptr = base_ptr + pgoff;
+		if (to_buffer)
+			memcpy(tmpptr, ptr, size);
+		else
+			memcpy(ptr, tmpptr, size);
+		/*
+		 * kunmap_atomic() takes care of flushing the cache
+		 * if this device has VIVT cache arch
+		 */
+		kunmap_atomic(base_ptr);
+		bytes -= size;
+		pgoff = 0;
+		ptr = ptr + size;
+		buffer_offset += size;
+	}
+}
+
+void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 void *src,
+				 size_t bytes)
+{
+	binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
+				    src, bytes);
+}
+
+void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
+				   void *dest,
+				   struct binder_buffer *buffer,
+				   binder_size_t buffer_offset,
+				   size_t bytes)
+{
+	binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
+				    dest, bytes);
+}
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <linux/slab.h>
 #include <linux/list_lru.h>
+#include <uapi/linux/android/binder.h>

 extern struct list_lru binder_alloc_lru;
 struct binder_transaction;
@@ -30,16 +31,16 @@ struct binder_transaction;
 * struct binder_buffer - buffer used for binder transactions
 * @entry:              entry alloc->buffers
 * @rb_node:            node for allocated_buffers/free_buffers rb trees
- * @free:               true if buffer is free
- * @allow_user_free:    describe the second member of struct blah,
- * @async_transaction:  describe the second member of struct blah,
- * @debug_id:           describe the second member of struct blah,
- * @transaction:        describe the second member of struct blah,
- * @target_node:        describe the second member of struct blah,
- * @data_size:          describe the second member of struct blah,
- * @offsets_size:       describe the second member of struct blah,
- * @extra_buffers_size: describe the second member of struct blah,
- * @data:i              describe the second member of struct blah,
+ * @free:               %true if buffer is free
+ * @allow_user_free:    %true if user is allowed to free buffer
+ * @async_transaction:  %true if buffer is in use for an async txn
+ * @debug_id:           unique ID for debugging
+ * @transaction:        pointer to associated struct binder_transaction
+ * @target_node:        struct binder_node associated with this buffer
+ * @data_size:          size of @transaction data
+ * @offsets_size:       size of array of offsets
+ * @extra_buffers_size: size of space for other objects (like sg lists)
+ * @user_data:          user pointer to base of buffer space
 *
 * Bookkeeping structure for binder transaction buffers
 */
@@ -50,8 +51,7 @@ struct binder_buffer {
 	unsigned free:1;
 	unsigned allow_user_free:1;
 	unsigned async_transaction:1;
-	unsigned free_in_progress:1;
-	unsigned debug_id:28;
+	unsigned debug_id:29;

 	struct binder_transaction *transaction;

@@ -59,7 +59,7 @@ struct binder_buffer {
 	size_t data_size;
 	size_t offsets_size;
 	size_t extra_buffers_size;
-	void *data;
+	void __user *user_data;
 };

 /**
@@ -82,7 +82,6 @@ struct binder_lru_page {
 *                      (invariant after init)
 * @vma_vm_mm:          copy of vma->vm_mm (invarient after mmap)
 * @buffer:             base of per-proc address space mapped via mmap
- * @user_buffer_offset: offset between user and kernel VAs for buffer
 * @buffers:            list of all buffers for this proc
 * @free_buffers:       rb tree of buffers available for allocation
 *                      sorted by size
@@ -103,8 +102,7 @@ struct binder_alloc {
 	struct mutex mutex;
 	struct vm_area_struct *vma;
 	struct mm_struct *vma_vm_mm;
-	void *buffer;
-	ptrdiff_t user_buffer_offset;
+	void __user *buffer;
 	struct list_head buffers;
 	struct rb_root free_buffers;
 	struct rb_root allocated_buffers;
@@ -163,26 +161,24 @@ binder_alloc_get_free_async_space(struct binder_alloc *alloc)
 	return free_async_space;
 }

-/**
- * binder_alloc_get_user_buffer_offset() - get offset between kernel/user addrs
- * @alloc: binder_alloc for this proc
- *
- * Return: the offset between kernel and user-space addresses to use for
- * virtual address conversion
- */
-static inline ptrdiff_t
-binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
-{
-	/*
-	 * user_buffer_offset is constant if vma is set and
-	 * undefined if vma is not set. It is possible to
-	 * get here with !alloc->vma if the target process
-	 * is dying while a transaction is being initiated.
-	 * Returning the old value is ok in this case and
-	 * the transaction will fail.
-	 */
-	return alloc->user_buffer_offset;
-}
+unsigned long
+binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 const void __user *from,
+				 size_t bytes);
+
+void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
+				 struct binder_buffer *buffer,
+				 binder_size_t buffer_offset,
+				 void *src,
+				 size_t bytes);
+
+void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
+				   void *dest,
+				   struct binder_buffer *buffer,
+				   binder_size_t buffer_offset,
+				   size_t bytes);

 #endif /* _LINUX_BINDER_ALLOC_H */
diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c
@@ -102,11 +102,12 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
 					 struct binder_buffer *buffer,
 					 size_t size)
 {
-	void *page_addr, *end;
+	void __user *page_addr;
+	void __user *end;
 	int page_index;

-	end = (void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
-	page_addr = buffer->data;
+	end = (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
+	page_addr = buffer->user_data;
 	for (; page_addr < end; page_addr += PAGE_SIZE) {
 		page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
 		if (!alloc->pages[page_index].page_ptr ||
diff --git a/drivers/android/binder_trace.h b/drivers/android/binder_trace.h
@@ -272,14 +272,17 @@ DECLARE_EVENT_CLASS(binder_buffer_class,
 		__field(int, debug_id)
 		__field(size_t, data_size)
 		__field(size_t, offsets_size)
+		__field(size_t, extra_buffers_size)
 	),
 	TP_fast_assign(
 		__entry->debug_id = buf->debug_id;
 		__entry->data_size = buf->data_size;
 		__entry->offsets_size = buf->offsets_size;
+		__entry->extra_buffers_size = buf->extra_buffers_size;
 	),
-	TP_printk("transaction=%d data_size=%zd offsets_size=%zd",
-		  __entry->debug_id, __entry->data_size, __entry->offsets_size)
+	TP_printk("transaction=%d data_size=%zd offsets_size=%zd extra_buffers_size=%zd",
+		  __entry->debug_id, __entry->data_size, __entry->offsets_size,
+		  __entry->extra_buffers_size)
 );

 DEFINE_EVENT(binder_buffer_class, binder_transaction_alloc_buf,
@@ -296,7 +299,7 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release,

 TRACE_EVENT(binder_update_page_range,
 	TP_PROTO(struct binder_alloc *alloc, bool allocate,
-		 void *start, void *end),
+		 void __user *start, void __user *end),
 	TP_ARGS(alloc, allocate, start, end),
 	TP_STRUCT__entry(
 		__field(int, proc)
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
@@ -87,6 +87,14 @@ enum flat_binder_object_flags {
 	 * scheduling policy from the caller (for synchronous transactions).
 	 */
 	FLAT_BINDER_FLAG_INHERIT_RT = 0x800,
+
+	/**
+	 * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
+	 *
+	 * Only when set, causes senders to include their security
+	 * context
+	 */
+	FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
 };

 #ifdef BINDER_IPC_32BIT
@@ -246,6 +254,15 @@ struct binder_node_debug_info {
 	__u32            has_weak_ref;
 };

+struct binder_node_info_for_ref {
+	__u32            handle;
+	__u32            strong_count;
+	__u32            weak_count;
+	__u32            reserved1;
+	__u32            reserved2;
+	__u32            reserved3;
+};
+
 #define BINDER_WRITE_READ		_IOWR('b', 1, struct binder_write_read)
 #define BINDER_SET_IDLE_TIMEOUT		_IOW('b', 3, __s64)
 #define BINDER_SET_MAX_THREADS		_IOW('b', 5, __u32)
@@ -254,6 +271,8 @@ struct binder_node_debug_info {
 #define BINDER_THREAD_EXIT		_IOW('b', 8, __s32)
 #define BINDER_VERSION			_IOWR('b', 9, struct binder_version)
 #define BINDER_GET_NODE_DEBUG_INFO	_IOWR('b', 11, struct binder_node_debug_info)
+#define BINDER_GET_NODE_INFO_FOR_REF	_IOWR('b', 12, struct binder_node_info_for_ref)
+#define BINDER_SET_CONTEXT_MGR_EXT	_IOW('b', 13, struct flat_binder_object)

 /*
  * NOTE: Two special error codes you should check for when calling
@@ -312,6 +331,11 @@ struct binder_transaction_data {
 	} data;
 };

+struct binder_transaction_data_secctx {
+	struct binder_transaction_data transaction_data;
+	binder_uintptr_t secctx;
+};
+
 struct binder_transaction_data_sg {
 	struct binder_transaction_data transaction_data;
 	binder_size_t buffers_size;
@@ -347,6 +371,11 @@ enum binder_driver_return_protocol {
 	BR_OK = _IO('r', 1),
 	/* No parameters! */

+	BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
+				      struct binder_transaction_data_secctx),
+	/*
+	 * binder_transaction_data_secctx: the received command.
+	 */
 	BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
 	BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
 	/*