Date: Fri, 5 Jan 2024 16:05:42 -0800
From: Roman Gushchin
To: Sukadev Bhattiprolu
Cc: Minchan Kim, Chris Goldsworthy, Andrew Morton, Rik van Riel, Roman Gushchin, Vlastimil Babka, Joonsoo Kim, Georgi Djakov, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm,page_alloc,cma: configurable CMA utilization

On Fri, Jan 05, 2024 at 03:46:55PM -0800, Sukadev Bhattiprolu wrote:
>
> On 2/1/2023 3:47 PM, Minchan Kim wrote:
> >
> > I like this patch for a different reason, but for the specific problem
> > you mentioned, how about making reclaim/compaction aware of the problem:
> >
> > IOW, when a GFP_KERNEL/DMA allocation happens but there is not enough
> > memory in those zones, let's migrate movable pages from those zones into
> > the CMA area/movable zone if it has plenty of free memory.
>
> Hi Minchan,
>
> Coming back to this thread after a while.
>
> If the CMA region is usually free, allocating pages first in the non-CMA
> region and then moving them into the CMA region would be extra work, since
> it would happen most of the time. In such cases, wouldn't it be better to
> allocate from the CMA region itself?

I'm not sure there is a "one size fits all" solution here. There are two
distinct cases:

1) A relatively small cma area used for a specific purpose. This is how
   cma was used until recently, and it was barely used by the kernel for
   non-cma allocations.

2) A relatively large cma area which is used to allocate gigantic
   hugepages and as an anti-fragmentation mechanism in general (basically
   as a movable zone). In this case it might be preferable to use cma for
   movable allocations, because the space for non-movable allocations
   might be limited.

I see two options here:

1) introduce per-cma-area flags which define the usage policy

2) redesign the page allocator to better take care of fragmentation at
   the 1GB scale

The latter is obviously not a small endeavour. The fundamentally missing
piece is a notion of anti-fragmentation cost, e.g. how much work it makes
sense to put into page migration before "polluting" a new large block of
memory with an unmovable folio.

Thanks!
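
A minimal, standalone sketch of option 1 above (a per-cma-area usage policy
flag), assuming hypothetical names throughout (cma_policy, cma_area,
should_use_cma_first, utilization_pct); this is not the kernel's actual CMA
or page-allocator API, just an illustration of how such a flag could decide
when movable allocations dip into a CMA area:

/*
 * Illustrative model only: each CMA area carries a usage-policy flag
 * that tells a (simplified) allocator whether movable allocations may
 * fall back into it eagerly. All identifiers are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

enum cma_policy {
	CMA_POLICY_RESERVED,	/* small, device-specific area: avoid for movable allocations */
	CMA_POLICY_SHARED,	/* large, movable-zone-like area: use it eagerly */
};

struct cma_area {
	const char *name;
	unsigned long free_pages;	/* free pages in this CMA area */
	enum cma_policy policy;
};

/*
 * Decide whether a movable allocation should be satisfied from the CMA
 * area before the regular free lists. A shared area is used whenever its
 * free pages exceed a configurable share of the zone's free memory (a
 * knob in the spirit of the configurable utilization in the Subject
 * line); a reserved area is left alone so its owner can still allocate
 * contiguously from it.
 */
static bool should_use_cma_first(const struct cma_area *cma,
				 unsigned long zone_free_pages,
				 unsigned int utilization_pct)
{
	if (cma->policy == CMA_POLICY_RESERVED)
		return false;

	return cma->free_pages * 100 > zone_free_pages * utilization_pct;
}

int main(void)
{
	struct cma_area shared = { "hugetlb_cma", 200000, CMA_POLICY_SHARED };
	struct cma_area reserved = { "camera_cma", 8000, CMA_POLICY_RESERVED };

	printf("shared area used first:   %d\n",
	       should_use_cma_first(&shared, 500000, 50));
	printf("reserved area used first: %d\n",
	       should_use_cma_first(&reserved, 500000, 50));
	return 0;
}

The point of the sketch is that the decision stays local to the per-area
descriptor, so a small device-reserved area and a large hugetlb-style area
can coexist under different policies without a single global heuristic.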