From: Petr Tesarik
To: Dave Hansen
Cc: Petr Tesařík, Petr Tesarik, Jonathan Corbet, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen,
    "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", "H. Peter Anvin",
    Andy Lutomirski, Oleg Nesterov, Peter Zijlstra, Xin Li,
    Arnd Bergmann, Andrew Morton, Rick Edgecombe, Kees Cook,
    "Masami Hiramatsu (Google)", Pengfei Xu, Josh Poimboeuf, Ze Gao,
    "Kirill A. Shutemov", Kai Huang, David Woodhouse, Brian Gerst,
    Jason Gunthorpe, Joerg Roedel, "Mike Rapoport (IBM)", Tina Zhang,
    Jacob Pan, "open list:DOCUMENTATION", open list, Roberto Sassu,
    John Johansen, Paul Moore, James Morris, "Serge E. Hallyn",
    apparmor@lists.ubuntu.com, linux-security-module@vger.kernel.org,
    Petr Tesarik
Subject: [RFC 4/5] sbm: fix up calls to dynamic memory allocators
Date: Thu, 22 Feb 2024 14:12:29 +0100
Message-Id: <20240222131230.635-5-petrtesarik@huaweicloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240222131230.635-1-petrtesarik@huaweicloud.com>
References: <20240222131230.635-1-petrtesarik@huaweicloud.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Petr Tesarik

Add fixup functions to call kmalloc(), vmalloc() and friends on behalf
of the sandbox code.

Signed-off-by: Petr Tesarik
---
 arch/x86/kernel/sbm/core.c | 81 ++++++++++++++++++++++++++++++++++++++
 mm/slab_common.c           |  3 +-
 mm/slub.c                  | 17 ++++----
 mm/vmalloc.c               | 11 +++---
 4 files changed, 98 insertions(+), 14 deletions(-)

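A note on how the fixups below hang together: the proxy handlers read
the size argument of the proxied allocator straight out of the x86-64
SysV argument registers, so the number in the handler name is the
argument position. proxy_alloc1() takes the size from regs->di (1st
argument, e.g. __kmalloc(size, flags)), proxy_alloc2() from regs->si
(2nd argument, e.g. krealloc(p, new_size, flags)), and proxy_alloc3()
from regs->dx (3rd argument, e.g. kmalloc_trace(s, gfpflags, size)).

From the sandbox side the allocator API is unchanged; a hypothetical
sandboxed caller (not part of this series) would look like this:

	/* Runs in sandbox mode; the allocator calls are trapped and
	 * routed through the fixup table added by this patch.
	 */
	static int sandboxed_worker(void *arg)
	{
		char *buf;

		/* dispatched to proxy_alloc1(): size is in regs->di */
		buf = kmalloc(128, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* post_alloc() has mapped the object into the sandbox */
		buf[0] = '\0';

		/* dispatched to proxy_free() */
		kfree(buf);
		return 0;
	}
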
diff --git a/arch/x86/kernel/sbm/core.c b/arch/x86/kernel/sbm/core.c
index c8ac7ecb08cc..3cf3842292b9 100644
--- a/arch/x86/kernel/sbm/core.c
+++ b/arch/x86/kernel/sbm/core.c
@@ -20,6 +20,12 @@
 #include
 #include
 
+/*
+ * FIXME: Remove these includes when there is proper API for defining
+ * which functions can be called from sandbox mode.
+ */
+#include
+
 #define GFP_SBM_PGTABLE	(GFP_KERNEL | __GFP_ZERO)
 #define PGD_ORDER	get_order(sizeof(pgd_t) * PTRS_PER_PGD)
 
@@ -52,8 +58,83 @@ struct sbm_fixup {
 	sbm_proxy_call_fn proxy;
 };
 
+static int map_range(struct x86_sbm_state *state, unsigned long start,
+		     unsigned long end, pgprot_t prot);
+
+/* Map the newly allocated dynamic memory region. */
+static unsigned long post_alloc(struct x86_sbm_state *state,
+				unsigned long objp, size_t size)
+{
+	int err;
+
+	if (!objp)
+		return objp;
+
+	err = map_range(state, objp, objp + size, PAGE_SHARED);
+	if (err) {
+		kfree((void*)objp);
+		return 0UL;
+	}
+	return objp;
+}
+
+/* Allocation proxy handler if size is the 1st parameter. */
+static unsigned long proxy_alloc1(struct x86_sbm_state *state,
+				  unsigned long func, struct pt_regs *regs)
+{
+	unsigned long objp;
+
+	objp = x86_sbm_proxy_call(state, func, regs);
+	return post_alloc(state, objp, regs->di);
+}
+
+/* Allocation proxy handler if size is the 2nd parameter. */
+static unsigned long proxy_alloc2(struct x86_sbm_state *state,
+				  unsigned long func, struct pt_regs *regs)
+{
+	unsigned long objp;
+
+	objp = x86_sbm_proxy_call(state, func, regs);
+	return post_alloc(state, objp, regs->si);
+}
+
+/* Allocation proxy handler if size is the 3rd parameter. */
+static unsigned long proxy_alloc3(struct x86_sbm_state *state,
+				  unsigned long func, struct pt_regs *regs)
+{
+	unsigned long objp;
+
+	objp = x86_sbm_proxy_call(state, func, regs);
+	return post_alloc(state, objp, regs->dx);
+}
+
+/* Proxy handler to free previously allocated memory. */
+static unsigned long proxy_free(struct x86_sbm_state *state,
+				unsigned long func, struct pt_regs *regs)
+{
+	/* TODO: unmap allocated addresses from sandbox! */
+	return x86_sbm_proxy_call(state, func, regs);
+}
+
 static const struct sbm_fixup fixups[] = {
+	/* kmalloc() and friends */
+	{ kmalloc_trace, proxy_alloc3 },
+	{ __kmalloc, proxy_alloc1 },
+	{ __kmalloc_node, proxy_alloc1 },
+	{ __kmalloc_node_track_caller, proxy_alloc1 },
+	{ kmalloc_large, proxy_alloc1 },
+	{ kmalloc_large_node, proxy_alloc1 },
+	{ krealloc, proxy_alloc2 },
+	{ kfree, proxy_free },
+
+	/* vmalloc() and friends */
+	{ vmalloc, proxy_alloc1 },
+	{ __vmalloc, proxy_alloc1 },
+	{ __vmalloc_node, proxy_alloc1 },
+	{ vzalloc, proxy_alloc1 },
+	{ vfree, proxy_free },
+
 	{ }
 };
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..2b72118d9bfa 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
 #include "slab.h"
@@ -1208,7 +1209,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags)
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
+void * __nosbm krealloc(const void *p, size_t new_size, gfp_t flags)
 {
 	void *ret;
 
diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..5f2290fe4df0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -3913,7 +3914,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_node);
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
+static void * __nosbm __kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct folio *folio;
 	void *ptr = NULL;
@@ -3938,7 +3939,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return ptr;
 }
 
-void *kmalloc_large(size_t size, gfp_t flags)
+void * __nosbm kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE);
 
@@ -3983,26 +3984,26 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
 	return ret;
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+void * __nosbm __kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	return __do_kmalloc_node(size, flags, node, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc_node);
 
-void *__kmalloc(size_t size, gfp_t flags)
+void * __nosbm __kmalloc(size_t size, gfp_t flags)
 {
 	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
+void * __nosbm __kmalloc_node_track_caller(size_t size, gfp_t flags,
+					   int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 
-void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
+void * __nosbm kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, NUMA_NO_NODE,
 				    _RET_IP_, size);
@@ -4386,7 +4387,7 @@ static void free_large_kmalloc(struct folio *folio, void *object)
  *
  * If @object is NULL, no operation is performed.
  */
-void kfree(const void *object)
+void __nosbm kfree(const void *object)
 {
 	struct folio *folio;
 	struct slab *slab;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..d7a5b715ac03 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -2804,7 +2805,7 @@ void vfree_atomic(const void *addr)
  * if we have CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, but making the calling
  * conventions for vfree() arch-dependent would be a really bad idea).
  */
-void vfree(const void *addr)
+void __nosbm vfree(const void *addr)
 {
 	struct vm_struct *vm;
 	int i;
@@ -3379,7 +3380,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-void *__vmalloc_node(unsigned long size, unsigned long align,
+void * __nosbm __vmalloc_node(unsigned long size, unsigned long align,
 			gfp_t gfp_mask, int node, const void *caller)
 {
 	return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
@@ -3394,7 +3395,7 @@ void *__vmalloc_node(unsigned long size, unsigned long align,
 EXPORT_SYMBOL_GPL(__vmalloc_node);
 #endif
 
-void *__vmalloc(unsigned long size, gfp_t gfp_mask)
+void * __nosbm __vmalloc(unsigned long size, gfp_t gfp_mask)
 {
 	return __vmalloc_node(size, 1, gfp_mask, NUMA_NO_NODE,
 				__builtin_return_address(0));
@@ -3413,7 +3414,7 @@ EXPORT_SYMBOL(__vmalloc);
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-void *vmalloc(unsigned long size)
+void * __nosbm vmalloc(unsigned long size)
 {
 	return __vmalloc_node(size, 1, GFP_KERNEL, NUMA_NO_NODE,
 			__builtin_return_address(0));
@@ -3453,7 +3454,7 @@ EXPORT_SYMBOL_GPL(vmalloc_huge);
  *
  * Return: pointer to the allocated memory or %NULL on error
  */
-void *vzalloc(unsigned long size)
+void * __nosbm vzalloc(unsigned long size)
 {
 	return __vmalloc_node(size, 1, GFP_KERNEL | __GFP_ZERO, NUMA_NO_NODE,
 				__builtin_return_address(0));
-- 
2.34.1
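
The TODO in proxy_free() is worth spelling out: to revoke sandbox access
on free, the handler needs the size of the region that post_alloc()
mapped. One possible shape, assuming a hypothetical unmap_range() helper
symmetrical to map_range() and a hypothetical lookup of the size recorded
at allocation time (neither exists in this patch):

	static unsigned long proxy_free(struct x86_sbm_state *state,
					unsigned long func, struct pt_regs *regs)
	{
		/* The object pointer is the 1st argument of kfree()/vfree(). */
		unsigned long objp = regs->di;
		unsigned long ret;

		ret = x86_sbm_proxy_call(state, func, regs);
		if (objp)
			/* hypothetical helpers, not in this series */
			unmap_range(state, objp,
				    objp + sbm_mapped_size(state, objp));
		return ret;
	}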