From: Zhaoyang Huang
Date: Fri, 5 May 2023 16:02:12 +0800
Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
To: "zhaoyang.huang", kernel-team@fb.com, Qian Cai, Vlastimil Babka, Mel Gorman, Anshuman Khandual
Cc: Andrew Morton, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ke.wang@unisoc.com
In-Reply-To: <1683194994-3070-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Adding more reviewers.

On Thu, May 4, 2023 at 6:11 PM zhaoyang.huang wrote:
>
> From: Zhaoyang Huang
>
> Let us look at the series of scenarios below, with WMARK_LOW=25MB and
> WMARK_MIN=5MB (1.9GB of managed pages). The current 'fixed 1/2 ratio'
> heuristic only starts using CMA at scenario C, by which point U&R has
> already been pushed below WMARK_LOW. This should be deemed a violation
> of the current memory policy: U&R should either stay around WMARK_LOW
> when there is no allocation pressure, or be reclaimed via the slowpath.
>
> free_cma/free_pages(MB)   A(12/30)   B(12/25)   C(12/20)
> fixed 1/2 ratio              N          N          Y
> this commit                  Y          Y          Y
>
> Suggested-by: Roman Gushchin
> Signed-off-by: Zhaoyang Huang
> ---
> v2: do the proportion check only when zone_watermark_ok() passes; update
>     the commit message
> ---
> ---
>  mm/page_alloc.c | 36 ++++++++++++++++++++++++++++++++----
>  1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..d0baeab 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,34 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>
>  }
>
> +#ifdef CONFIG_CMA
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +	unsigned long cma_proportion = 0;
> +	unsigned long cma_free_proportion = 0;
> +	unsigned long watermark = 0;
> +	long count = 0;
> +	bool cma_first = false;
> +
> +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +	/* Check whether a GFP_MOVABLE allocation passed the previous watermark check only with CMA's help. */
> +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
> +		/* WMARK_LOW fails without CMA, so use CMA first; this helps
> +		 * U&R stay around WMARK_LOW when drained by GFP_MOVABLE.
> +		 */
> +		cma_first = true;
> +	else {
> +		/* Check the proportion when zone_watermark_ok passes. */
> +		count = atomic_long_read(&zone->managed_pages);
> +		cma_proportion = zone->cma_pages * 100 / count;
> +		cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
> +					/ zone_page_state(zone, NR_FREE_PAGES);
> +		cma_first = (cma_free_proportion >= cma_proportion * 2
> +				|| cma_free_proportion >= 50);
> +	}
> +	return cma_first;
> +}
> +#endif
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3087,10 +3115,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  		 * allocating from CMA when over half of the zone's free memory
>  		 * is in the CMA area.
>  		 */
> -		if (alloc_flags & ALLOC_CMA &&
> -		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -			page = __rmqueue_cma_fallback(zone, order);
> +		if (migratetype == MIGRATE_MOVABLE) {
> +			bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
> +
> +			page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
>  			if (page)
>  				return page;
>  		}
> --
> 1.9.1
>