From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Johannes Weiner, Roman Gushchin,
Zhaoyang Huang
Subject: [PATCHv6 1/1] mm: optimization on page allocation when CMA enabled
Date: Mon, 16 Oct 2023 15:12:45 +0800
Message-ID: <20231016071245.2865233-1-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhaoyang Huang

Under the current CMA utilization policy, an alloc_pages(GFP_USER) call can
"steal" UNMOVABLE & RECLAIMABLE page blocks with the help of CMA: it passes
zone_watermark_ok by counting CMA pages in, but then takes UNMOVABLE &
RECLAIMABLE pages in rmqueue. This can cause a subsequent
alloc_pages(GFP_KERNEL) to fail. Solve this by introducing a second watermark
check for GFP_MOVABLE allocations, which lets such an allocation use CMA when
appropriate.

-- Free_pages(30MB)
|
|
-- WMARK_LOW(25MB)
|
-- Free_CMA(12MB)
|
|
--

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v6: update comments
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 452459836b71..5a146aa7c0aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2078,6 +2078,43 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
+ * via CMA, which can make a later GFP_KERNEL allocation fail. Check
+ * zone_watermark_ok again without ALLOC_CMA to decide whether to use CMA first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* Check whether the previous zone_watermark_ok passed only with CMA's help. */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * The watermark failed, which means UNMOVABLE & RECLAIMABLE
+		 * pages are not sufficient now; use CMA first to keep them
+		 * above the corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
  */
@@ -2091,12 +2128,11 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on a second zone_watermark_ok check,
+		 * to see whether the first check passed only with CMA's help.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;
-- 
2.25.1