From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Andy Lutomirski, Arnd Bergmann, Catalin Marinas,
    Christoph Hellwig, Helge Deller, Lorenzo Stoakes, Luis Chamberlain,
    Mark Rutland, Masami Hiramatsu, Mathieu Desnoyers, Michael Ellerman,
    Mike Rapoport, Palmer Dabbelt, Peter Zijlstra, Russell King, Song Liu,
    Steven Rostedt, Thomas Gleixner, Uladzislau Rezki, Will Deacon,
    bpf@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-modules@vger.kernel.org, linux-parisc@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, x86@kernel.org
Subject: [RFC PATCH 6/7] execmem: add support for cache of large ROX pages
Date: Thu, 11 Apr 2024 19:05:25 +0300
Message-ID: <20240411160526.2093408-7-rppt@kernel.org>
In-Reply-To: <20240411160526.2093408-1-rppt@kernel.org>
References: <20240411160526.2093408-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

Using large pages to map text areas reduces iTLB pressure and improves
performance. Extend execmem_alloc() with the ability to use PMD_SIZE'ed
pages with ROX permissions as a cache for smaller allocations.

To populate the cache, a writable large page is allocated from vmalloc
with VM_ALLOW_HUGE_VMAP, filled with invalid instructions and then
remapped as ROX. Portions of that large page are handed out to
execmem_alloc() callers without any changes to the permissions. When the
memory is freed with execmem_free() it is invalidated again so that it
won't contain stale instructions.

The cache is enabled when an architecture sets the EXECMEM_ROX_CACHE
flag in the definition of an execmem_range.
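The free-area bookkeeping described above (and implemented in execmem_cache_add() in the patch below) merges a returned chunk with any adjacent free ranges, so that a fully freed PMD-sized page eventually reappears as one aligned range the cleanup worker can vfree(). A minimal userspace sketch of that coalescing, with a plain array standing in for the maple tree and arbitrary example addresses:

```c
#include <stddef.h>

/* Toy stand-in for the free_areas maple tree: an unordered array of ranges. */
struct range { unsigned long start; size_t size; };

#define MAX_RANGES 8
static struct range free_areas[MAX_RANGES];
static int nr_free;

/*
 * Return [addr, addr + size) to the cache, merging it with a free
 * neighbour that ends at addr and/or one that starts at addr + size,
 * analogous to the mas_walk()/mas_next() checks in execmem_cache_add().
 */
static void cache_add(unsigned long addr, size_t size)
{
	unsigned long lower = addr, upper = addr + size;	/* [lower, upper) */
	int i = 0;

	while (i < nr_free) {
		struct range *r = &free_areas[i];

		if (r->start + r->size == lower) {	/* neighbour below */
			lower = r->start;
			*r = free_areas[--nr_free];	/* drop, recheck slot i */
			continue;
		}
		if (r->start == upper) {		/* neighbour above */
			upper = r->start + r->size;
			*r = free_areas[--nr_free];
			continue;
		}
		i++;
	}
	free_areas[nr_free].start = lower;
	free_areas[nr_free].size = upper - lower;
	nr_free++;
}
```

Freeing three touching chunks in any order leaves a single merged range, which is exactly the property execmem_cache_clean() relies on to find PMD_SIZE-aligned areas to release.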
Signed-off-by: Mike Rapoport (IBM)
---
 include/linux/execmem.h |   2 +
 mm/execmem.c            | 267 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 262 insertions(+), 7 deletions(-)

diff --git a/include/linux/execmem.h b/include/linux/execmem.h
index 9d22999dbd7d..06f678e6fe55 100644
--- a/include/linux/execmem.h
+++ b/include/linux/execmem.h
@@ -77,12 +77,14 @@ struct execmem_range {
 
 /**
  * struct execmem_info - architecture parameters for code allocations
+ * @invalidate: set memory to contain invalid instructions
  * @ranges: array of parameter sets defining architecture specific
  * parameters for executable memory allocations. The ranges that are not
  * explicitly initialized by an architecture use parameters defined for
  * @EXECMEM_DEFAULT.
  */
 struct execmem_info {
+	void (*invalidate)(void *ptr, size_t size, bool writable);
 	struct execmem_range	ranges[EXECMEM_TYPE_MAX];
 };
 
diff --git a/mm/execmem.c b/mm/execmem.c
index c920d2b5a721..716fba68ab0e 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -1,30 +1,88 @@
 // SPDX-License-Identifier: GPL-2.0
 #include
+#include
 #include
 #include
+#include
 #include
 #include
+#include
+
+#include "internal.h"
+
 static struct execmem_info *execmem_info __ro_after_init;
 static struct execmem_info default_execmem_info __ro_after_init;
 
-static void *__execmem_alloc(struct execmem_range *range, size_t size)
+struct execmem_cache {
+	struct mutex mutex;
+	struct maple_tree busy_areas;
+	struct maple_tree free_areas;
+};
+
+static struct execmem_cache execmem_cache = {
+	.mutex = __MUTEX_INITIALIZER(execmem_cache.mutex),
+	.busy_areas = MTREE_INIT_EXT(busy_areas, MT_FLAGS_LOCK_EXTERN,
+				     execmem_cache.mutex),
+	.free_areas = MTREE_INIT_EXT(free_areas, MT_FLAGS_LOCK_EXTERN,
+				     execmem_cache.mutex),
+};
+
+static void execmem_cache_clean(struct work_struct *work)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	MA_STATE(mas, free_areas, 0, ULONG_MAX);
+	void *area;
+
+	mutex_lock(mutex);
+	mas_for_each(&mas, area, ULONG_MAX) {
+		size_t size;
+
+		if (!xa_is_value(area))
+			continue;
+
+		size = xa_to_value(area);
+
+		if (IS_ALIGNED(size, PMD_SIZE) && IS_ALIGNED(mas.index, PMD_SIZE)) {
+			void *ptr = (void *)mas.index;
+
+			mas_erase(&mas);
+			vfree(ptr);
+		}
+	}
+	mutex_unlock(mutex);
+}
+
+static DECLARE_WORK(execmem_cache_clean_work, execmem_cache_clean);
+
+static void execmem_invalidate(void *ptr, size_t size, bool writable)
+{
+	if (execmem_info->invalidate)
+		execmem_info->invalidate(ptr, size, writable);
+	else
+		memset(ptr, 0, size);
+}
+
+static void *execmem_vmalloc(struct execmem_range *range, size_t size,
+			     pgprot_t pgprot, unsigned long vm_flags)
 {
 	bool kasan = range->flags & EXECMEM_KASAN_SHADOW;
-	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
 	gfp_t gfp_flags = GFP_KERNEL | __GFP_NOWARN;
+	unsigned int align = range->alignment;
 	unsigned long start = range->start;
 	unsigned long end = range->end;
-	unsigned int align = range->alignment;
-	pgprot_t pgprot = range->pgprot;
 	void *p;
 
 	if (kasan)
 		vm_flags |= VM_DEFER_KMEMLEAK;
 
-	p = __vmalloc_node_range(size, align, start, end, gfp_flags,
-				 pgprot, vm_flags, NUMA_NO_NODE,
+	if (vm_flags & VM_ALLOW_HUGE_VMAP)
+		align = PMD_SIZE;
+
+	p = __vmalloc_node_range(size, align, start, end, gfp_flags, pgprot,
+				 vm_flags, NUMA_NO_NODE,
 				 __builtin_return_address(0));
 	if (!p && range->fallback_start) {
 		start = range->fallback_start;
@@ -44,6 +102,199 @@ static void *__execmem_alloc(struct execmem_range *range, size_t size)
 		return NULL;
 	}
 
+	return p;
+}
+
+static int execmem_cache_add(void *ptr, size_t size)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr = (unsigned long)ptr;
+	MA_STATE(mas, free_areas, addr - 1, addr + 1);
+	unsigned long lower, lower_size = 0;
+	unsigned long upper, upper_size = 0;
+	unsigned long area_size;
+	void *area = NULL;
+	int err;
+
+	lower = addr;
+	upper = addr + size - 1;
+
+	mutex_lock(mutex);
+	area = mas_walk(&mas);
+	if (area && xa_is_value(area) && mas.last == addr - 1) {
+		lower = mas.index;
+		lower_size = xa_to_value(area);
+	}
+
+	area = mas_next(&mas, ULONG_MAX);
+	if (area && xa_is_value(area) && mas.index == addr + size) {
+		upper = mas.last;
+		upper_size = xa_to_value(area);
+	}
+
+	mas_set_range(&mas, lower, upper);
+	area_size = lower_size + upper_size + size;
+	err = mas_store_gfp(&mas, xa_mk_value(area_size), GFP_KERNEL);
+	mutex_unlock(mutex);
+	if (err)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void *__execmem_cache_alloc(size_t size)
+{
+	struct maple_tree *free_areas = &execmem_cache.free_areas;
+	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
+	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
+	MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX);
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr, last, area_size = 0;
+	void *area, *ptr = NULL;
+	int err;
+
+	mutex_lock(mutex);
+	mas_for_each(&mas_free, area, ULONG_MAX) {
+		area_size = xa_to_value(area);
+		if (area_size >= size)
+			break;
+	}
+
+	if (area_size < size)
+		goto out_unlock;
+
+	addr = mas_free.index;
+	last = mas_free.last;
+
+	/* insert allocated size to busy_areas at range [addr, addr + size) */
+	mas_set_range(&mas_busy, addr, addr + size - 1);
+	err = mas_store_gfp(&mas_busy, xa_mk_value(size), GFP_KERNEL);
+	if (err)
+		goto out_unlock;
+
+	mas_erase(&mas_free);
+	if (area_size > size) {
+		/*
+		 * re-insert remaining free size to free_areas at range
+		 * [addr + size, last]
+		 */
+		mas_set_range(&mas_free, addr + size, last);
+		size = area_size - size;
+		err = mas_store_gfp(&mas_free, xa_mk_value(size), GFP_KERNEL);
+		if (err) {
+			mas_erase(&mas_busy);
+			goto out_unlock;
+		}
+	}
+	ptr = (void *)addr;
+
+out_unlock:
+	mutex_unlock(mutex);
+	return ptr;
+}
+
+static int execmem_cache_populate(struct execmem_range *range, size_t size)
+{
+	unsigned long vm_flags = VM_FLUSH_RESET_PERMS | VM_ALLOW_HUGE_VMAP;
+	unsigned long start, end;
+	struct vm_struct *vm;
+	size_t alloc_size;
+	int err = -ENOMEM;
+	void *p;
+
+	alloc_size = round_up(size, PMD_SIZE);
+	p = execmem_vmalloc(range, alloc_size, PAGE_KERNEL, vm_flags);
+	if (!p)
+		return err;
+
+	vm = find_vm_area(p);
+	if (!vm)
+		goto err_free_mem;
+
+	/* fill memory with invalid instructions */
+	execmem_invalidate(p, alloc_size, /* writable = */ true);
+
+	start = (unsigned long)p;
+	end = start + alloc_size;
+
+	vunmap_range_noflush(start, end);
+	flush_tlb_kernel_range(start, end);
+
+	/* FIXME: handle direct map alias */
+
+	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
+				       PMD_SHIFT);
+	if (err)
+		goto err_free_mem;
+
+	err = execmem_cache_add(p, alloc_size);
+	if (err)
+		goto err_free_mem;
+
+	return 0;
+
+err_free_mem:
+	vfree(p);
+	return err;
+}
+
+static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
+{
+	void *p;
+	int err;
+
+	p = __execmem_cache_alloc(size);
+	if (p)
+		return p;
+
+	err = execmem_cache_populate(range, size);
+	if (err)
+		return NULL;
+
+	return __execmem_cache_alloc(size);
+}
+
+static bool execmem_cache_free(void *ptr)
+{
+	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
+	struct mutex *mutex = &execmem_cache.mutex;
+	unsigned long addr = (unsigned long)ptr;
+	MA_STATE(mas, busy_areas, addr, addr);
+	size_t size;
+	void *area;
+
+	mutex_lock(mutex);
+	area = mas_walk(&mas);
+	if (!area) {
+		mutex_unlock(mutex);
+		return false;
+	}
+	size = xa_to_value(area);
+	mas_erase(&mas);
+	mutex_unlock(mutex);
+
+	execmem_invalidate(ptr, size, /* writable = */ false);
+
+	execmem_cache_add(ptr, size);
+
+	schedule_work(&execmem_cache_clean_work);
+
+	return true;
+}
+
+static void *__execmem_alloc(struct execmem_range *range, size_t size)
+{
+	bool use_cache = range->flags & EXECMEM_ROX_CACHE;
+	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
+	pgprot_t pgprot = range->pgprot;
+	void *p;
+
+	if (use_cache)
+		p = execmem_cache_alloc(range, size);
+	else
+		p = execmem_vmalloc(range, size, pgprot, vm_flags);
+
 	return kasan_reset_tag(p);
 }
 
@@ -61,7 +312,9 @@ void execmem_free(void *ptr)
 	 * supported by vmalloc.
 	 */
 	WARN_ON(in_interrupt());
-	vfree(ptr);
+
+	if (!execmem_cache_free(ptr))
+		vfree(ptr);
 }
 
 void *execmem_update_copy(void *dst, const void *src, size_t size)
-- 
2.43.0
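A note on the allocation side of the cache: __execmem_cache_alloc() in the patch does a first-fit scan of free_areas, moves the head of the chosen range to busy_areas, and re-inserts the tail as free. The same carve, modeled in plain C with an array instead of the maple tree and made-up addresses (busy-side tracking omitted for brevity):

```c
#include <stddef.h>

/* Toy stand-in for the free_areas maple tree. */
struct free_range { unsigned long start; size_t size; };

#define NR_MAX 8
static struct free_range freelist[NR_MAX];
static int freelist_len;

/*
 * First-fit carve, mirroring __execmem_cache_alloc(): hand out the head
 * of the first free range that is large enough and keep the tail free.
 * Returns 0 on a cache miss, where the kernel code would populate a new
 * PMD-sized ROX page via execmem_cache_populate() and retry.
 */
static unsigned long cache_carve(size_t size)
{
	int i;

	for (i = 0; i < freelist_len; i++) {
		struct free_range *r = &freelist[i];
		unsigned long addr;

		if (r->size < size)
			continue;

		addr = r->start;
		r->start += size;		/* keep the tail free */
		r->size -= size;
		if (r->size == 0)		/* exact fit: drop the entry */
			freelist[i] = freelist[--freelist_len];
		return addr;
	}
	return 0;				/* cache miss */
}
```

The real code additionally records [addr, addr + size) in busy_areas so that execmem_free() can recover the allocation size, and rolls the busy insertion back if re-inserting the remainder fails.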