Date: Fri, 23 Dec 2022 15:41:50 +0100
From: Frederic Weisbecker
To: Marcelo Tosatti
Cc: atomlin@atomlin.com, cl@linux.com, tglx@linutronix.de, mingo@kernel.org,
	peterz@infradead.org, pauld@redhat.com, neelx@redhat.com,
	oleksandr@natalenko.name, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v11 3/6] mm/vmstat: manage per-CPU stats from CPU context when NOHZ full
Message-ID: <20221223144150.GA79369@lothringen>
References: <20221221165801.362118576@redhat.com>
 <20221221170436.330627967@redhat.com>
In-Reply-To: <20221221170436.330627967@redhat.com>

On Wed, Dec 21, 2022 at 01:58:04PM -0300, Marcelo Tosatti wrote:
> @@ -194,21 +195,50 @@ void fold_vm_numa_events(void)
>  #endif
>  
>  #ifdef CONFIG_SMP
> -static DEFINE_PER_CPU_ALIGNED(bool, vmstat_dirty);
> +
> +struct vmstat_dirty {
> +	bool dirty;
> +	bool cpuhotplug;

Maybe call it "online" for clarity.

Also, should it depend on CONFIG_FLUSH_WORK_ON_RESUME_USER?
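Something along these lines, perhaps (just an untested sketch to illustrate,
not a requirement):

struct vmstat_dirty {
	bool dirty;
#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER
	/* Renamed from "cpuhotplug"; only needed when the option is set */
	bool online;
#endif
};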
> +};
> +
> +static DEFINE_PER_CPU_ALIGNED(struct vmstat_dirty, vmstat_dirty_pcpu);
> +static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
> +int sysctl_stat_interval __read_mostly = HZ;
>  
>  static inline void vmstat_mark_dirty(void)
>  {
> -	this_cpu_write(vmstat_dirty, true);
> +	struct vmstat_dirty *vms = this_cpu_ptr(&vmstat_dirty_pcpu);
> +
> +#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER

Please avoid ifdeffery in the middle of a function when possible. This block
could go in a separate function or use IS_ENABLED(), for example (rough
sketch at the end of this mail).

> +	int cpu = smp_processor_id();
> +
> +	if (tick_nohz_full_cpu(cpu) && !vms->dirty) {
> +		struct delayed_work *dw;
> +
> +		dw = this_cpu_ptr(&vmstat_work);
> +		if (!delayed_work_pending(dw) && !vms->cpuhotplug) {
> +			unsigned long delay;
> +
> +			delay = round_jiffies_relative(sysctl_stat_interval);
> +			queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay);
> +		}
> +	}
> +#endif
> +	vms->dirty = true;
>  }
>  
>  static inline void vmstat_clear_dirty(void)
>  {
> -	this_cpu_write(vmstat_dirty, false);
> +	struct vmstat_dirty *vms = this_cpu_ptr(&vmstat_dirty_pcpu);
> +
> +	vms->dirty = false;

You could keep this_cpu_write(vmstat_dirty.dirty, false) here.

>  }
>  
>  static inline bool is_vmstat_dirty(void)
>  {
> -	return this_cpu_read(vmstat_dirty);
> +	struct vmstat_dirty *vms = this_cpu_ptr(&vmstat_dirty_pcpu);
> +
> +	return vms->dirty;

Ditto with this_cpu_read()?

>  }
>  
>  int calculate_pressure_threshold(struct zone *zone)
> @@ -1981,13 +2008,18 @@ void quiet_vmstat(void)
>  	if (!is_vmstat_dirty())
>  		return;
>  
> +	refresh_cpu_vm_stats(false);
> +
> +#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER

This can use IS_ENABLED() as well.

> +	if (!user)
> +		return;
>  	/*
> -	 * Just refresh counters and do not care about the pending delayed
> -	 * vmstat_update. It doesn't fire that often to matter and canceling
> -	 * it would be too expensive from this path.
> -	 * vmstat_shepherd will take care about that for us.
> +	 * If the tick is stopped, cancel any delayed work to avoid
> +	 * interruptions to this CPU in the future.
>  	 */
> -	refresh_cpu_vm_stats(false);
> +	if (delayed_work_pending(this_cpu_ptr(&vmstat_work)))
> +		cancel_delayed_work(this_cpu_ptr(&vmstat_work));
> +#endif
>  }
>  
>  /*
> @@ -2008,8 +2040,15 @@ static void vmstat_shepherd(struct work_
>  	/* Check processors whose vmstat worker threads have been disabled */
>  	for_each_online_cpu(cpu) {
>  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
> +		struct vmstat_dirty *vms = per_cpu_ptr(&vmstat_dirty_pcpu, cpu);
>  
> -		if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
> +#ifdef CONFIG_FLUSH_WORK_ON_RESUME_USER

Same here.

> +		/* NOHZ full CPUs manage their own vmstat flushing */
> +		if (tick_nohz_full_cpu(cpu))
> +			continue;
> +#endif
> +
> +		if (!delayed_work_pending(dw) && vms->dirty)
>  			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>  
>  		cond_resched();
> @@ -2053,8 +2111,15 @@ static int vmstat_cpu_online(unsigned in
>  	return 0;
>  }
>  
> +/*
> + * ONLINE: The callbacks are invoked on the hotplugged CPU from the per CPU
> + * hotplug thread with interrupts and preemption enabled.

This is the OFFLINE path, and the reason for placing that comment here is
confusing.

> + */
>  static int vmstat_cpu_down_prep(unsigned int cpu)
>  {
> +	struct vmstat_dirty *vms = per_cpu_ptr(&vmstat_dirty_pcpu, cpu);
> +
> +	vms->cpuhotplug = true;

this_cpu_write()?
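To spell out the this_cpu_write()/this_cpu_read() suggestions, something like
this (untested sketch, reusing the vmstat_dirty_pcpu variable from the patch
and assuming, as the comment above states, that the callback runs on the CPU
being hotplugged):

static inline void vmstat_clear_dirty(void)
{
	this_cpu_write(vmstat_dirty_pcpu.dirty, false);
}

static inline bool is_vmstat_dirty(void)
{
	return this_cpu_read(vmstat_dirty_pcpu.dirty);
}

static int vmstat_cpu_down_prep(unsigned int cpu)
{
	/* Assumes this runs on the CPU being unplugged, per the comment above */
	this_cpu_write(vmstat_dirty_pcpu.cpuhotplug, true);
	cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
	return 0;
}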
>  	cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
>  	return 0;
>  }
> +config FLUSH_WORK_ON_RESUME_USER
> +	bool "Flush per-CPU vmstats on user return (for nohz full CPUs)"
> +	depends on NO_HZ_FULL
> +	default y
> +
> +	help
> +	  By default, nohz full CPUs flush per-CPU vm statistics on return
> +	  to userspace (to avoid additional interferences when executing
> +	  userspace code). This has a small but measurable impact on
> +	  system call performance. You can disable this to improve system call
> +	  performance, at the expense of potential interferences to userspace
> +	  execution.

Can you move that below config CPU_ISOLATION?

Thanks!

> +
>  # multi-gen LRU {
>  config LRU_GEN
>  	bool "Multi-Gen LRU"
> 
> 
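And to clarify the IS_ENABLED() / separate-function suggestion for
vmstat_mark_dirty() above, here is a rough untested sketch (the helper name
is only for illustration):

static void vmstat_mark_dirty_queue_work(struct vmstat_dirty *vms)
{
	int cpu = smp_processor_id();
	struct delayed_work *dw = this_cpu_ptr(&vmstat_work);
	unsigned long delay;

	/* Still compile-tested, but optimized away when the option is off */
	if (!IS_ENABLED(CONFIG_FLUSH_WORK_ON_RESUME_USER))
		return;

	if (!tick_nohz_full_cpu(cpu) || vms->dirty)
		return;

	if (delayed_work_pending(dw) || vms->cpuhotplug)
		return;

	delay = round_jiffies_relative(sysctl_stat_interval);
	queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay);
}

static inline void vmstat_mark_dirty(void)
{
	struct vmstat_dirty *vms = this_cpu_ptr(&vmstat_dirty_pcpu);

	vmstat_mark_dirty_queue_work(vms);
	vms->dirty = true;
}

That keeps the nohz_full deferral logic out of vmstat_mark_dirty() itself and
avoids the mid-function #ifdef.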