Date: Sun, 20 Dec 2020 08:48:48 +0200
From: Mike Rapoport
To: Roman Gushchin
Cc: Andrew Morton, linux-mm@kvack.org, Joonsoo Kim, Rik van Riel,
	Michal Hocko, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH v2 1/2] mm: cma: allocate cma areas bottom-up
Message-ID: <20201220064848.GA392325@kernel.org>
References: <20201217201214.3414100-1-guro@fb.com>
In-Reply-To: <20201217201214.3414100-1-guro@fb.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Dec 17, 2020 at 12:12:13PM -0800, Roman Gushchin wrote:
> Currently cma areas without a fixed base are allocated close to the
> end of the node. This placement is sub-optimal because of compaction:
> it brings pages into the cma area. In particular, it can bring in hot
> executable pages, even if there is plenty of free memory on the
> machine. This results in cma allocation failures.
> 
> Instead let's place cma areas close to the beginning of a node.
> In this case the compaction will help to free cma areas, resulting
> in better cma allocation success rates.
> 
> If there is enough memory, let's try to allocate bottom-up starting
> with 4GB to exclude any possible interference with DMA32. On smaller
> machines, or in case of a failure, stick with the old behavior.
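
For context, the allocation pattern described above looks roughly like
this when written out in one place (a sketch only, not the patch itself:
memblock_bottom_up(), memblock_set_bottom_up(), memblock_end_of_DRAM()
and memblock_alloc_range_nid() are existing memblock interfaces, while
reserve_bottom_up() is a hypothetical wrapper used here for
illustration):

	#include <linux/memblock.h>
	#include <linux/sizes.h>

	static phys_addr_t __init reserve_bottom_up(phys_addr_t size,
						    phys_addr_t align,
						    phys_addr_t limit,
						    int nid)
	{
		phys_addr_t addr = 0;

		/* Only try bottom-up if memory extends well past 4GB. */
		if (!memblock_bottom_up() &&
		    memblock_end_of_DRAM() >= SZ_4G + size) {
			/* Search upward from 4GB, clear of DMA/DMA32. */
			memblock_set_bottom_up(true);
			addr = memblock_alloc_range_nid(size, align, SZ_4G,
							limit, nid, true);
			memblock_set_bottom_up(false);
		}

		/* Fall back to the default top-down placement. */
		if (!addr)
			addr = memblock_alloc_range_nid(size, align, 0,
							limit, nid, true);

		return addr;
	}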
> 
> 16GB vm, 2GB cma area:
> With this patch:
> [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> [ 0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> [ 0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
> [ 0.002931] hugetlb_cma: reserved 2048 MiB on node 0
> 
> Without this patch:
> [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> [ 0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> [ 0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
> [ 0.002934] hugetlb_cma: reserved 2048 MiB on node 0
> 
> v2:
> - switched to memblock_set_bottom_up(true), by Mike
> - start with 4GB, by Mike
> 
> Signed-off-by: Roman Gushchin

With one nit below

Reviewed-by: Mike Rapoport

> ---
>  mm/cma.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/mm/cma.c b/mm/cma.c
> index 7f415d7cda9f..21fd40c092f0 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -337,6 +337,22 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
>  		limit = highmem_start;
>  	}
>  
> +	/*
> +	 * If there is enough memory, try a bottom-up allocation first.
> +	 * It will place the new cma area close to the start of the node
> +	 * and guarantee that the compaction is moving pages out of the
> +	 * cma area and not into it.
> +	 * Avoid using first 4GB to not interfere with constrained zones
> +	 * like DMA/DMA32.
> +	 */
> +	if (!memblock_bottom_up() &&
> +	    memblock_end >= SZ_4G + size) {

This seems short enough to fit a single line:

	if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {

> +		memblock_set_bottom_up(true);
> +		addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
> +						limit, nid, true);
> +		memblock_set_bottom_up(false);
> +	}
> +
>  	if (!addr) {
>  		addr = memblock_alloc_range_nid(size, alignment, base,
>  						limit, nid, true);
> -- 
> 2.26.2
> 

-- 
Sincerely yours,
Mike.