Date: Tue, 2 May 2023 15:01:27 -0700
From: Roman Gushchin
To: 黄朝阳 (Zhaoyang Huang)
Cc: Andrew Morton, Roman Gushchin, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Zhaoyang Huang, 王科 (Ke Wang)
Subject: Re: [PATCH] mm: optimization on page allocation when CMA enabled
References: <1682679641-13652-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 02, 2023 at 12:12:28PM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> > Hi Zhaoyang!
> >
> > On Fri, Apr 28, 2023 at 07:00:41PM +0800, zhaoyang.huang wrote:
> > > From: Zhaoyang Huang
> > >
> > > Please note the typical scenario introduced by commit 168676649:
> > > 12MB of free CMA pages 'help' GFP_MOVABLE allocations keep
> > > draining/fragmenting U&R (unmovable and reclaimable) page blocks
> > > until those shrink to 12MB, without ever entering the slowpath,
> > > which goes against the current reclaim policy. This commit changes
> > > the criterion from the hard-coded '1/2' to a watermark check, which
> > > keeps U&R free pages around WMARK_LOW when falling back.
> >
> > Can you, please, explain the problem you're solving in more detail?
> I am trying to solve an OOM problem caused by slab allocation failures
> when all free pages are MIGRATE_CMA. Applying commit 168676649 helped
> reduce the failure ratio from 12/20 to 2/20, but I noticed that it
> introduces the phenomenon I describe above.
>
> > If I understand your code correctly, you're effectively reducing the
> > use of CMA areas for movable allocations. Why is that good?
> Not exactly. In fact, this commit leads to CMA being used earlier than
> it is now, which helps protect U&R pages from being 'stolen' by
> GFP_MOVABLE. Imagine this scenario: 30MB of total free pages, composed
> of 10MB CMA and 20MB U&R, while the zone's low watermark is 25MB. A
> GFP_MOVABLE allocation can keep stealing U&R pages (the 1/2 criterion
> is not met) without entering the slowpath (zone_watermark_ok(WMARK_LOW)
> is still true) until U&R shrinks to 15MB. In my opinion, it makes more
> sense to have CMA take up its duty of helping movable allocations when
> U&R drops toward the zone's watermark, rather than when U&R becomes
> smaller than CMA.
>
> > Also, this is a hot path, please make sure you're not adding much
> > overhead.
> I would like to give it more thought.

Got it, thank you for the explanation!

How about the following approach (completely untested)?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6da423ec356f..4b50f497c09d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2279,12 +2279,13 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA when over half of the zone's easily
+		 * available free memory is in the CMA area.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
 		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    (zone_page_state(zone, NR_FREE_PAGES) -
+		     zone->_watermark[WMARK_LOW]) / 2) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;

Basically, the idea is to keep free space equally split between CMA and
non-CMA areas. Will it work for you?

Thanks!
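To make the difference between the two checks concrete, below is a
minimal userspace sketch (not part of the original thread; the helper
names and the MB-denominated counters are made up for illustration,
they are not the kernel's API) that plugs the numbers from Zhaoyang's
scenario above (10MB free CMA, 20MB free U&R, 25MB low watermark) into
both the current and the proposed condition:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the zone counters, in MB for readability. */
static const long free_cma   = 10;      /* NR_FREE_CMA_PAGES            */
static const long free_total = 10 + 20; /* NR_FREE_PAGES (CMA + U&R)    */
static const long wmark_low  = 25;      /* zone->_watermark[WMARK_LOW]  */

/* Current heuristic: fall back to CMA only when CMA holds over half
 * of the zone's total free memory. */
static bool use_cma_current(void)
{
	return free_cma > free_total / 2;
}

/* Proposed heuristic: compare CMA against half of the "easily
 * available" free memory, i.e. the headroom above the low watermark. */
static bool use_cma_proposed(void)
{
	return free_cma > (free_total - wmark_low) / 2;
}

int main(void)
{
	printf("current check:  use CMA? %s (10 > 30/2 is false)\n",
	       use_cma_current() ? "yes" : "no");
	printf("proposed check: use CMA? %s (10 > (30-25)/2 is true)\n",
	       use_cma_proposed() ? "yes" : "no");
	return 0;
}

With the current check, the movable allocation keeps falling back to
U&R pageblocks all the way down to the watermark; with the proposed
check, it is diverted to CMA as soon as CMA holds more than half of the
headroom above WMARK_LOW, which is the behavior Zhaoyang is asking for.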