Date: Thu, 26 Oct 2023 11:58:00 +0300
From: Mike Rapoport
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Björn Töpel,
	Catalin Marinas, Christophe Leroy, "David S. Miller", Dinh Nguyen,
	Heiko Carstens, Helge Deller, Huacai Chen, Kent Overstreet,
	Luis Chamberlain, Mark Rutland, Michael Ellerman, Nadav Amit,
	"Naveen N. Rao", Palmer Dabbelt, Puranjay Mohan, Rick Edgecombe,
	Russell King, Song Liu, Steven Rostedt, Thomas Bogendoerfer,
	Thomas Gleixner, bpf@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org,
	linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
	netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v3 04/13] mm/execmem, arch: convert remaining overrides of module_alloc to execmem
Message-ID: <20231026085800.GK2824@kernel.org>
References: <20230918072955.2507221-1-rppt@kernel.org>
	<20230918072955.2507221-5-rppt@kernel.org>
	<20231023171420.GA4041@willie-the-truck>
In-Reply-To: <20231023171420.GA4041@willie-the-truck>

Hi Will,

On
Mon, Oct 23, 2023 at 06:14:20PM +0100, Will Deacon wrote:
> Hi Mike,
> 
> On Mon, Sep 18, 2023 at 10:29:46AM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (IBM)"
> > 
> > Extend execmem parameters to accommodate more complex overrides of
> > module_alloc() by architectures.
> > 
> > This includes specification of a fallback range required by arm, arm64
> > and powerpc and support for allocation of KASAN shadow required by
> > arm64, s390 and x86.
> > 
> > The core implementation of execmem_alloc() takes care of suppressing
> > warnings when the initial allocation fails but there is a fallback range
> > defined.
> > 
> > Signed-off-by: Mike Rapoport (IBM)
> > ---
> >  arch/arm/kernel/module.c     | 38 ++++++++++++---------
> >  arch/arm64/kernel/module.c   | 57 ++++++++++++++------------------
> >  arch/powerpc/kernel/module.c | 52 ++++++++++++++---------------
> >  arch/s390/kernel/module.c    | 52 +++++++++++------------------
> >  arch/x86/kernel/module.c     | 64 +++++++++++-------------------------
> >  include/linux/execmem.h      | 14 ++++++++
> >  mm/execmem.c                 | 43 ++++++++++++++++++++++--
> >  7 files changed, 167 insertions(+), 153 deletions(-)
> 
> [...]
> 
> > diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> > index dd851297596e..cd6320de1c54 100644
> > --- a/arch/arm64/kernel/module.c
> > +++ b/arch/arm64/kernel/module.c
> > @@ -20,6 +20,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  
> >  #include
> >  #include
> > @@ -108,46 +109,38 @@ static int __init module_init_limits(void)
> >  
> >  	return 0;
> >  }
> > -subsys_initcall(module_init_limits);
> >  
> > -void *module_alloc(unsigned long size)
> > +static struct execmem_params execmem_params __ro_after_init = {
> > +	.ranges = {
> > +		[EXECMEM_DEFAULT] = {
> > +			.flags = EXECMEM_KASAN_SHADOW,
> > +			.alignment = MODULE_ALIGN,
> > +		},
> > +	},
> > +};
> > +
> > +struct execmem_params __init *execmem_arch_params(void)
> >  {
> > -	void *p = NULL;
> > +	struct execmem_range *r = &execmem_params.ranges[EXECMEM_DEFAULT];
> >  
> > -	/*
> > -	 * Where possible, prefer to allocate within direct branch range of the
> > -	 * kernel such that no PLTs are necessary.
> > -	 */
> 
> Why are you removing this comment? I think you could just move it next
> to the part where we set a 128MiB range.

Oops, my bad. Will add it back.

> > -	if (module_direct_base) {
> > -		p = __vmalloc_node_range(size, MODULE_ALIGN,
> > -					 module_direct_base,
> > -					 module_direct_base + SZ_128M,
> > -					 GFP_KERNEL | __GFP_NOWARN,
> > -					 PAGE_KERNEL, 0, NUMA_NO_NODE,
> > -					 __builtin_return_address(0));
> > -	}
> > +	module_init_limits();
> 
> Hmm, this used to be run from subsys_initcall(), but now you're running
> it _really_ early, before random_init(), so randomization of the module
> space is no longer going to be very random if we don't have early entropy
> from the firmware or the CPU, which is likely to be the case on most SoCs.

Well, it will be as random as KASLR. Won't that be enough?
> > diff --git a/mm/execmem.c b/mm/execmem.c
> > index f25a5e064886..a8c2f44d0133 100644
> > --- a/mm/execmem.c
> > +++ b/mm/execmem.c
> > @@ -11,12 +11,46 @@ static void *execmem_alloc(size_t size, struct execmem_range *range)
> >  {
> >  	unsigned long start = range->start;
> >  	unsigned long end = range->end;
> > +	unsigned long fallback_start = range->fallback_start;
> > +	unsigned long fallback_end = range->fallback_end;
> >  	unsigned int align = range->alignment;
> >  	pgprot_t pgprot = range->pgprot;
> > +	bool kasan = range->flags & EXECMEM_KASAN_SHADOW;
> > +	unsigned long vm_flags = VM_FLUSH_RESET_PERMS;
> > +	bool fallback = !!fallback_start;
> > +	gfp_t gfp_flags = GFP_KERNEL;
> > +	void *p;
> >  
> > -	return __vmalloc_node_range(size, align, start, end,
> > -				    GFP_KERNEL, pgprot, VM_FLUSH_RESET_PERMS,
> > -				    NUMA_NO_NODE, __builtin_return_address(0));
> > +	if (PAGE_ALIGN(size) > (end - start))
> > +		return NULL;
> > +
> > +	if (kasan)
> > +		vm_flags |= VM_DEFER_KMEMLEAK;
> 
> Hmm, I don't think we passed this before on arm64, should we have done?

It was there on arm64 before commit 8339f7d8e178 ("arm64: module: remove
old !KASAN_VMALLOC logic"). There's no need to pass VM_DEFER_KMEMLEAK when
KASAN_VMALLOC is enabled, and arm64 always selects KASAN_VMALLOC with
KASAN. And for the generic case, I should have made the condition check
for KASAN_VMALLOC as well.

> Will

-- 
Sincerely yours,
Mike.