Message-ID: <20240101030141.GA723@skinsburskii.>
Date: Sun, 31 Dec 2023 19:01:41 -0800
From: Stanislav Kinsburskii
To: Alexander Graf
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-mm@kvack.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kexec@lists.infradead.org,
	linux-doc@vger.kernel.org, x86@kernel.org, Eric Biederman,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Rob Herring,
	Steven Rostedt, Andrew Morton, Mark Rutland, Tom Lendacky,
	Ashish Kalra, James Gowans, arnd@arndb.de, pbonzini@redhat.com,
	madvenka@linux.microsoft.com, Anthony Yznaga, Usama Arif,
	David Woodhouse, Benjamin Herrenschmidt
Subject: Re: [PATCH v2 02/17] memblock: Declare scratch memory as CMA
References: <20231222193607.15474-1-graf@amazon.com>
 <20231222193607.15474-3-graf@amazon.com>
In-Reply-To: <20231222193607.15474-3-graf@amazon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Dec 22, 2023 at 07:35:52PM +0000, Alexander Graf wrote:
> When we finish populating our memory, we don't want to lose the scratch
> region as memory we can use for useful data. To do that, we mark it as
> CMA memory. That means that any allocation within it only happens with
> movable memory which we can then happily discard for the next kexec.
>
> That way we don't lose the scratch region's memory anymore for
> allocations after boot.
>
> Signed-off-by: Alexander Graf
>
> ---
>
> v1 -> v2:
>
>   - test bot warning fix
> ---
>  mm/memblock.c | 30 ++++++++++++++++++++++++++----
>  1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e89e6c8f9d75..3700c2c1a96d 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -16,6 +16,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -1100,10 +1101,6 @@ static bool should_skip_region(struct memblock_type *type,
>  	if ((flags & MEMBLOCK_SCRATCH) && !memblock_is_scratch(m))
>  		return true;
>
> -	/* Leave scratch memory alone after scratch-only phase */
> -	if (!(flags & MEMBLOCK_SCRATCH) && memblock_is_scratch(m))
> -		return true;
> -
>  	return false;
>  }
>
> @@ -2153,6 +2150,20 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
>  	}
>  }
>
> +#ifdef CONFIG_MEMBLOCK_SCRATCH
> +static void reserve_scratch_mem(phys_addr_t start, phys_addr_t end)

nit: the function name doesn't look reasonable, as nothing about it is
limited to either reservation or scratch memory. Perhaps something like
"set_mem_cma_type" would be a better fit.

> +{
> +	ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> +	ulong end_pfn = pageblock_align(PFN_UP(end));
> +	ulong pfn;
> +
> +	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
> +		/* Mark as CMA to prevent kernel allocations in it */

nit: the comment above looks irrelevant/redundant.

> +		set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);
> +	}
> +}
> +#endif
> +
>  static unsigned long __init __free_memory_core(phys_addr_t start,
>  					       phys_addr_t end)
>  {
> @@ -2214,6 +2225,17 @@ static unsigned long __init free_low_memory_core_early(void)
>
>  	memmap_init_reserved_pages();
>
> +#ifdef CONFIG_MEMBLOCK_SCRATCH
> +	/*
> +	 * Mark scratch mem as CMA before we return it. That way we ensure that
> +	 * no kernel allocations happen on it. That means we can reuse it as
> +	 * scratch memory again later.
> +	 */
> +	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> +			     MEMBLOCK_SCRATCH, &start, &end, NULL)
> +		reserve_scratch_mem(start, end);
> +#endif
> +
>  	/*
>  	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
>  	 * because in some case like Node0 doesn't have RAM installed
> --
> 2.40.1
>
>
> Amazon Development Center Germany GmbH
> Krausenstr. 38
> 10117 Berlin
> Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
> Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
> Sitz: Berlin
> Ust-ID: DE 289 237 879