Date: Thu, 18 Jun 2020 21:41:42 +0800
From: Baoquan He
To: Nitin Gupta
Cc: Andrew Morton, Nitin Gupta, Luis Chamberlain, Kees Cook, Iurii Zaikin,
    Vlastimil Babka, Joonsoo Kim, open list, "open list:PROC SYSCTL",
    "open list:MEMORY MANAGEMENT"
Subject: Re: [PATCH] mm: Use unsigned types for fragmentation score
Message-ID: <20200618134142.GD3346@MiWiFi-R3L-srv>
References: <20200618010319.13159-1-nigupta@nvidia.com>
In-Reply-To: <20200618010319.13159-1-nigupta@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/17/20 at 06:03pm, Nitin Gupta wrote:
> Proactive compaction uses per-node/zone "fragmentation score" which
> is always in range [0, 100], so use unsigned type of these scores
> as well as for related constants.
> 
> Signed-off-by: Nitin Gupta
> ---
>  include/linux/compaction.h |  4 ++--
>  kernel/sysctl.c            |  2 +-
>  mm/compaction.c            | 18 +++++++++---------
>  mm/vmstat.c                |  2 +-
>  4 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index 7a242d46454e..25a521d299c1 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -85,13 +85,13 @@ static inline unsigned long compact_gap(unsigned int order)
>  
>  #ifdef CONFIG_COMPACTION
>  extern int sysctl_compact_memory;
> -extern int sysctl_compaction_proactiveness;
> +extern unsigned int sysctl_compaction_proactiveness;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void *buffer, size_t *length, loff_t *ppos);
>  extern int sysctl_extfrag_threshold;
>  extern int sysctl_compact_unevictable_allowed;
>  
> -extern int extfrag_for_order(struct zone *zone, unsigned int order);
> +extern unsigned int extfrag_for_order(struct zone *zone, unsigned int order);
>  extern int fragmentation_index(struct zone *zone, unsigned int order);
>  extern enum compact_result try_to_compact_pages(gfp_t gfp_mask,
>  		unsigned int order, unsigned int alloc_flags,
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 58b0a59c9769..40180cdde486 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2833,7 +2833,7 @@ static struct ctl_table vm_table[] = {
>  	{
>  		.procname	= "compaction_proactiveness",
>  		.data		= &sysctl_compaction_proactiveness,
> -		.maxlen		= sizeof(int),
> +		.maxlen		= sizeof(sysctl_compaction_proactiveness),

Patch looks good to me. Just curious, why not use 'sizeof(unsigned int)' here?

>  		.mode		= 0644,
>  		.proc_handler	= proc_dointvec_minmax,
>  		.extra1		= SYSCTL_ZERO,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ac2030814edb..45fd24a0ea0b 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -53,7 +53,7 @@ static inline void count_compact_events(enum vm_event_item item, long delta)
>  /*
>   * Fragmentation score check interval for proactive compaction purposes.
>   */
> -static const int HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500;
> +static const unsigned int HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500;
>  
>  /*
>   * Page order with-respect-to which proactive compaction
> @@ -1890,7 +1890,7 @@ static bool kswapd_is_running(pg_data_t *pgdat)
>   * ZONE_DMA32. For smaller zones, the score value remains close to zero,
>   * and thus never exceeds the high threshold for proactive compaction.
>   */
> -static int fragmentation_score_zone(struct zone *zone)
> +static unsigned int fragmentation_score_zone(struct zone *zone)
>  {
>  	unsigned long score;
>  
> @@ -1906,9 +1906,9 @@ static int fragmentation_score_zone(struct zone *zone)
>   * the node's score falls below the low threshold, or one of the back-off
>   * conditions is met.
>   */
> -static int fragmentation_score_node(pg_data_t *pgdat)
> +static unsigned int fragmentation_score_node(pg_data_t *pgdat)
>  {
> -	unsigned long score = 0;
> +	unsigned int score = 0;
>  	int zoneid;
>  
>  	for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) {
> @@ -1921,17 +1921,17 @@ static int fragmentation_score_node(pg_data_t *pgdat)
>  	return score;
>  }
>  
> -static int fragmentation_score_wmark(pg_data_t *pgdat, bool low)
> +static unsigned int fragmentation_score_wmark(pg_data_t *pgdat, bool low)
>  {
> -	int wmark_low;
> +	unsigned int wmark_low;
>  
>  	/*
>  	 * Cap the low watermak to avoid excessive compaction
>  	 * activity in case a user sets the proactivess tunable
>  	 * close to 100 (maximum).
>  	 */
> -	wmark_low = max(100 - sysctl_compaction_proactiveness, 5);
> -	return low ? wmark_low : min(wmark_low + 10, 100);
> +	wmark_low = max(100U - sysctl_compaction_proactiveness, 5U);
> +	return low ? wmark_low : min(wmark_low + 10, 100U);
>  }
>  
>  static bool should_proactive_compact_node(pg_data_t *pgdat)
> @@ -2604,7 +2604,7 @@ int sysctl_compact_memory;
>   * aggressively the kernel should compact memory in the
>   * background. It takes values in the range [0, 100].
>   */
> -int __read_mostly sysctl_compaction_proactiveness = 20;
> +unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
>  
>  /*
>   * This is the entry point for compacting all nodes via
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 3e7ba8bce2ba..b1de695b826d 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1079,7 +1079,7 @@ static int __fragmentation_index(unsigned int order, struct contig_page_info *in
>   * It is defined as the percentage of pages found in blocks of size
>   * less than 1 << order. It returns values in range [0, 100].
>   */
> -int extfrag_for_order(struct zone *zone, unsigned int order)
> +unsigned int extfrag_for_order(struct zone *zone, unsigned int order)
>  {
>  	struct contig_page_info info;
>  
> -- 
> 2.27.0
> 
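
For readers following along, here is a minimal standalone sketch of the watermark
arithmetic the patch switches to unsigned. This is plain userspace C, not kernel
code; the names (proactiveness, score_wmark, min_u, max_u) are local to the
example and only mirror sysctl_compaction_proactiveness and
fragmentation_score_wmark(). Because the sysctl clamps proactiveness to [0, 100],
100U - proactiveness cannot wrap, and the low/high watermarks stay within
[5, 100]:

  #include <stdio.h>

  /* Mirrors sysctl_compaction_proactiveness; the sysctl clamps it to [0, 100]. */
  static unsigned int proactiveness = 20;

  static unsigned int min_u(unsigned int a, unsigned int b) { return a < b ? a : b; }
  static unsigned int max_u(unsigned int a, unsigned int b) { return a > b ? a : b; }

  /*
   * Same shape as fragmentation_score_wmark(): low watermark in [5, 100],
   * high watermark 10 points above it, capped at 100.
   */
  static unsigned int score_wmark(int low)
  {
          unsigned int wmark_low = max_u(100U - proactiveness, 5U);

          return low ? wmark_low : min_u(wmark_low + 10, 100U);
  }

  int main(void)
  {
          /* With proactiveness = 20 this prints low=80 high=90. */
          printf("low=%u high=%u\n", score_wmark(1), score_wmark(0));
          return 0;
  }

Built with a plain cc, this prints low=80 high=90, matching the thresholds
proactive compaction would use at the default proactiveness of 20.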