Date: Fri, 30 Jul 2021 22:31:52 +0300
From: Mike Rapoport
To: Charan Teja Reddy
Cc: akpm@linux-foundation.org, mcgrof@kernel.org, keescook@chromium.org,
    yzaikin@google.com, dave.hansen@linux.intel.com, vbabka@suse.cz,
    mgorman@techsingularity.net, nigupta@nvidia.com, corbet@lwn.net,
    khalid.aziz@oracle.com, rientjes@google.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, vinmenon@codeaurora.org
Subject: Re: [PATCH V5] mm: compaction: support triggering of proactive compaction by user
References: <1627653207-12317-1-git-send-email-charante@codeaurora.org>
In-Reply-To: <1627653207-12317-1-git-send-email-charante@codeaurora.org>

On Fri, Jul 30, 2021 at 07:23:27PM +0530, Charan Teja Reddy wrote:
> Proactive compaction[1] is triggered every 500 msec and runs
> compaction on the node for COMPACTION_HPAGE_ORDER (usually order-9)
> pages based on the value set in sysctl.compaction_proactiveness.
> Triggering compaction every 500 msec in search of
> COMPACTION_HPAGE_ORDER pages is not needed for all applications,
> especially for embedded system usecases which may have only a few
> MBs of RAM. Enabling proactive compaction in its default state on
> such systems will end up with it running almost always.
> 
> On the other hand, proactive compaction can still be very useful for
> getting a set of higher-order pages in a controllable manner
> (controlled via sysctl.compaction_proactiveness). So, on systems
> where always enabling proactive compaction may not be required, the
> same can be triggered from user space by a write to its sysctl
> interface. As an example, say an app launcher decides to launch a
> memory-heavy application which can be launched faster if it gets
> more higher-order pages; the launcher can then prepare the system in
> advance by triggering proactive compaction from userspace.
> 
> This triggering of proactive compaction is done on a write to
> sysctl.compaction_proactiveness by the user.
> 
> [1]https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit?id=facdaa917c4d5a376d09d25865f5a863f906234a
> 
> Signed-off-by: Charan Teja Reddy
> ---
> Changes in V5:
>   -- Avoid unnecessary wakeup of proactive compaction when it is disabled.
>   -- No changes in the logic of triggering the proactive compaction.
> 
> Changes in V4:
>   -- Changed the code as the 'proactive_defer' counter is removed.
>   -- No changes in the logic of triggering the proactive compaction.
>   -- https://lore.kernel.org/patchwork/patch/1448777/
> 
> Changes in V3:
>   -- Fixed review comments from Vlastimil and others.
>   -- https://lore.kernel.org/patchwork/patch/1438211/
> 
> Changes in V2:
>   -- remove /proc/../proactive_compact_memory interface trigger for proactive compaction
>   -- Intention is the same: add a way to trigger proactive compaction by user.
>   -- https://lore.kernel.org/patchwork/patch/1431283/
> 
> Changes in V1:
>   -- Created the new /proc/sys/vm/proactive_compact_memory interface
>      to trigger proactive compaction from user
>   -- https://lore.kernel.org/lkml/1619098678-8501-1-git-send-email-charante@codeaurora.org/
> 
>  Documentation/admin-guide/sysctl/vm.rst |  3 ++-
>  include/linux/compaction.h              |  2 ++
>  include/linux/mmzone.h                  |  1 +
>  kernel/sysctl.c                         |  2 +-
>  mm/compaction.c                         | 38 +++++++++++++++++++++++++++++++--
>  5 files changed, 42 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
> index 003d5cc..b526cf6 100644
> --- a/Documentation/admin-guide/sysctl/vm.rst
> +++ b/Documentation/admin-guide/sysctl/vm.rst
> @@ -118,7 +118,8 @@ compaction_proactiveness
>  
>  This tunable takes a value in the range [0, 100] with a default value of
>  20. This tunable determines how aggressively compaction is done in the
> -background. Setting it to 0 disables proactive compaction.
> +background. On write of non zero value to this tunable will immediately

Nit: I think "Write of non zero ..."
> +trigger the proactive compaction. Setting it to 0 disables proactive compaction.
>  
>  Note that compaction has a non-trivial system-wide impact as pages
>  belonging to different processes are moved around, which could also lead
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index c24098c..34bce35 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -84,6 +84,8 @@ static inline unsigned long compact_gap(unsigned int order)
>  extern unsigned int sysctl_compaction_proactiveness;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void *buffer, size_t *length, loff_t *ppos);
> +extern int compaction_proactiveness_sysctl_handler(struct ctl_table *table,
> +		int write, void *buffer, size_t *length, loff_t *ppos);
>  extern int sysctl_extfrag_threshold;
>  extern int sysctl_compact_unevictable_allowed;
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 4610750..6a1d79d 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -853,6 +853,7 @@ typedef struct pglist_data {
>  	enum zone_type kcompactd_highest_zoneidx;
>  	wait_queue_head_t kcompactd_wait;
>  	struct task_struct *kcompactd;
> +	bool proactive_compact_trigger;
>  #endif
>  	/*
>  	 * This is a per-node reserve of pages that are not available
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 82d6ff6..65bc6f7 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2871,7 +2871,7 @@ static struct ctl_table vm_table[] = {
>  		.data = &sysctl_compaction_proactiveness,
>  		.maxlen = sizeof(sysctl_compaction_proactiveness),
>  		.mode = 0644,
> -		.proc_handler = proc_dointvec_minmax,
> +		.proc_handler = compaction_proactiveness_sysctl_handler,
>  		.extra1 = SYSCTL_ZERO,
>  		.extra2 = &one_hundred,
>  	},
> diff --git a/mm/compaction.c b/mm/compaction.c
> index f984ad0..fbc60f9 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2700,6 +2700,30 @@ static void compact_nodes(void)
>   */
>  unsigned int __read_mostly sysctl_compaction_proactiveness = 20;
>  
> +int compaction_proactiveness_sysctl_handler(struct ctl_table *table, int write,
> +		void *buffer, size_t *length, loff_t *ppos)
> +{
> +	int rc, nid;
> +
> +	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
> +	if (rc)
> +		return rc;
> +
> +	if (write && sysctl_compaction_proactiveness) {
> +		for_each_online_node(nid) {
> +			pg_data_t *pgdat = NODE_DATA(nid);
> +
> +			if (pgdat->proactive_compact_trigger)
> +				continue;
> +
> +			pgdat->proactive_compact_trigger = true;
> +			wake_up_interruptible(&pgdat->kcompactd_wait);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  /*
>   * This is the entry point for compacting all nodes via
>   * /proc/sys/vm/compact_memory
> @@ -2744,7 +2768,8 @@ void compaction_unregister_node(struct node *node)
>  
>  static inline bool kcompactd_work_requested(pg_data_t *pgdat)
>  {
> -	return pgdat->kcompactd_max_order > 0 || kthread_should_stop();
> +	return pgdat->kcompactd_max_order > 0 || kthread_should_stop() ||
> +		pgdat->proactive_compact_trigger;
>  }
>  
>  static bool kcompactd_node_suitable(pg_data_t *pgdat)
> @@ -2895,9 +2920,16 @@ static int kcompactd(void *p)
>  	while (!kthread_should_stop()) {
>  		unsigned long pflags;
>  
> +		/*
> +		 * Avoid the unnecessary wakeup for proactive compaction
> +		 * when it is disabled.
> +		 */
> +		if (!sysctl_compaction_proactiveness)
> +			timeout = MAX_SCHEDULE_TIMEOUT;
>  		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
>  		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
> -			kcompactd_work_requested(pgdat), timeout)) {
> +			kcompactd_work_requested(pgdat), timeout) &&
> +			!pgdat->proactive_compact_trigger) {
>  
>  			psi_memstall_enter(&pflags);
>  			kcompactd_do_work(pgdat);
> @@ -2932,6 +2964,8 @@ static int kcompactd(void *p)
>  				timeout =
>  					default_timeout << COMPACT_MAX_DEFER_SHIFT;
>  		}
> +		if (unlikely(pgdat->proactive_compact_trigger))
> +			pgdat->proactive_compact_trigger = false;
>  	}
>  
>  	return 0;
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
> member of the Code Aurora Forum, hosted by The Linux Foundation

-- 
Sincerely yours,
Mike.
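
[Editor's note: for readers following the thread, the userspace side of the
interface under review amounts to a single sysctl write. A minimal sketch is
below; the helper name is hypothetical and only the sysctl path and semantics
(non-zero write triggers a proactive compaction round, zero disables it) come
from the patch. Requires root and a kernel with this patch applied.]

```shell
#!/bin/sh
# Sketch: trigger proactive compaction from userspace via the sysctl
# discussed in this thread. With the patch applied, writing a non-zero
# value immediately wakes kcompactd for a proactive run; writing 0
# disables proactive compaction. Helper name is ours, not the kernel's.
KNOB=/proc/sys/vm/compaction_proactiveness

trigger_proactive_compaction() {
    val="${1:-20}"   # proactiveness in [0, 100]; 20 is the kernel default
    if echo "$val" > "$KNOB" 2>/dev/null; then
        echo "triggered with proactiveness=$val"
    else
        echo "cannot write $KNOB (need root and a patched kernel)"
    fi
}

trigger_proactive_compaction 20
```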