From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: akpm@linux-foundation.org, damon@lists.linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, xhao@linux.alibaba.com
Subject: [PATCH v2 2/2] mm/damon: use damon_sz_region() in appropriate place
Date: Tue, 27 Sep 2022 08:19:46 +0800
Message-Id: <20220927001946.85375-2-xhao@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20220927001946.85375-1-xhao@linux.alibaba.com>
References: <20220927001946.85375-1-xhao@linux.alibaba.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In many places, we can use damon_sz_region() instead of the open-coded
"r->ar.end - r->ar.start".

Suggested-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
---
Changes from v1
(https://lore.kernel.org/linux-mm/20220926071100.76379-1-xhao@linux.alibaba.com/)
 - Move sz_damon_region() to static inline in include/linux/damon.h
 - Rename sz_damon_region() to damon_sz_region()

 mm/damon/core.c  | 17 ++++++++---------
 mm/damon/vaddr.c |  4 ++--
 2 files changed, 10 insertions(+), 11 deletions(-)
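
Note for readers without patch 1/2 of this series at hand:
damon_sz_region() is the static inline helper that the first patch adds
to include/linux/damon.h. It is expected to be nothing more than the
trivial size accessor sketched below (a sketch, not the exact hunk from
patch 1/2), so every conversion in this patch is a mechanical,
behavior-preserving cleanup:

	/*
	 * Sketch of the helper used below: return the size of a DAMON
	 * region, i.e. the length of the [ar.start, ar.end) address range
	 * that the replaced "r->ar.end - r->ar.start" expressions
	 * computed by hand.
	 */
	static inline unsigned long damon_sz_region(struct damon_region *r)
	{
		return r->ar.end - r->ar.start;
	}
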
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 5b9e0d585aef..515ac4e52a11 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -490,7 +490,7 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 
 	damon_for_each_target(t, ctx) {
 		damon_for_each_region(r, t)
-			sz += r->ar.end - r->ar.start;
+			sz += damon_sz_region(r);
 	}
 
 	if (ctx->attrs.min_nr_regions)
@@ -673,7 +673,7 @@ static bool __damos_valid_target(struct damon_region *r, struct damos *s)
 {
 	unsigned long sz;
 
-	sz = r->ar.end - r->ar.start;
+	sz = damon_sz_region(r);
 	return s->pattern.min_sz_region <= sz &&
 		sz <= s->pattern.max_sz_region &&
 		s->pattern.min_nr_accesses <= r->nr_accesses &&
@@ -701,7 +701,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 
 	damon_for_each_scheme(s, c) {
 		struct damos_quota *quota = &s->quota;
-		unsigned long sz = r->ar.end - r->ar.start;
+		unsigned long sz = damon_sz_region(r);
 		struct timespec64 begin, end;
 		unsigned long sz_applied = 0;
 
@@ -730,14 +730,14 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 				sz = ALIGN_DOWN(quota->charge_addr_from -
 						r->ar.start, DAMON_MIN_REGION);
 				if (!sz) {
-					if (r->ar.end - r->ar.start <=
-							DAMON_MIN_REGION)
+					if (damon_sz_region(r) <=
+							DAMON_MIN_REGION)
 						continue;
 					sz = DAMON_MIN_REGION;
 				}
 				damon_split_region_at(t, r, sz);
 				r = damon_next_region(r);
-				sz = r->ar.end - r->ar.start;
+				sz = damon_sz_region(r);
 			}
 			quota->charge_target_from = NULL;
 			quota->charge_addr_from = 0;
@@ -842,8 +842,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 					continue;
 				score = c->ops.get_scheme_score(
 						c, t, r, s);
-				quota->histogram[score] +=
-					r->ar.end - r->ar.start;
+				quota->histogram[score] += damon_sz_region(r);
 				if (score > max_score)
 					max_score = score;
 			}
@@ -957,7 +956,7 @@ static void damon_split_regions_of(struct damon_target *t, int nr_subs)
 	int i;
 
 	damon_for_each_region_safe(r, next, t) {
-		sz_region = r->ar.end - r->ar.start;
+		sz_region = damon_sz_region(r);
 
 		for (i = 0; i < nr_subs - 1 &&
 				sz_region > 2 * DAMON_MIN_REGION; i++) {
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index ea94e0b2c311..15f03df66db6 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -72,7 +72,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
 		return -EINVAL;
 
 	orig_end = r->ar.end;
-	sz_orig = r->ar.end - r->ar.start;
+	sz_orig = damon_sz_region(r);
 	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION);
 
 	if (!sz_piece)
@@ -618,7 +618,7 @@ static unsigned long damos_madvise(struct damon_target *target,
 {
 	struct mm_struct *mm;
 	unsigned long start = PAGE_ALIGN(r->ar.start);
-	unsigned long len = PAGE_ALIGN(r->ar.end - r->ar.start);
+	unsigned long len = PAGE_ALIGN(damon_sz_region(r));
 	unsigned long applied;
 
 	mm = damon_get_mm(target);
-- 
2.31.0