From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Thomas Gleixner, Ingo Molnar, Vlastimil Babka, Hugh Dickins,
    Linux-MM, Linux-RT-Users, LKML, Mel Gorman
Subject: [PATCH 2/2] mm/vmstat: Protect per cpu variables with preempt disable on RT
Date: Fri, 23 Jul 2021 11:00:34 +0100
Message-Id: <20210723100034.13353-3-mgorman@techsingularity.net>
In-Reply-To: <20210723100034.13353-1-mgorman@techsingularity.net>
References: <20210723100034.13353-1-mgorman@techsingularity.net>

From: Ingo Molnar

Disable preemption on -RT for the vmstat code. On vanilla the code runs
in IRQ-off regions, while on -RT it may not when the stats are updated
under a local_lock. "preempt_disable" ensures that the same resource is
not updated in parallel due to preemption.

This patch differs from the preempt-rt version where __count_vm_event and
__count_vm_events are also protected. The counters are explicitly "allowed
to be racy" so there is no need to protect them from preemption. Only the
accurate page stats that are updated by a read-modify-write need protection.
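[ Not part of the patch: a minimal sketch of how the preempt_disable_rt()
  and preempt_enable_rt() helpers used below could be defined, on the
  assumption that they should map to real preemption control only on
  PREEMPT_RT and compile down to a barrier otherwise. The definitions are
  assumed for illustration and are not taken from this series. ]

/*
 * Assumed sketch only, not the series' actual helpers: make the vmstat
 * per-cpu read-modify-write sections non-preemptible on PREEMPT_RT,
 * where the callers run preemptible under a local_lock, while adding
 * no overhead on !PREEMPT_RT, where the callers already run with IRQs
 * or preemption disabled.
 */
#ifdef CONFIG_PREEMPT_RT
# define preempt_disable_rt()	preempt_disable()
# define preempt_enable_rt()	preempt_enable()
#else
# define preempt_disable_rt()	barrier()
# define preempt_enable_rt()	barrier()
#endif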
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/vmstat.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index b0534e068166..d06332c221b1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -319,6 +319,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 	long x;
 	long t;
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@@ -328,6 +329,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_zone_page_state);
 
@@ -350,6 +352,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 		delta >>= PAGE_SHIFT;
 	}
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@@ -359,6 +362,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_node_page_state);
 
@@ -391,6 +395,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@@ -399,6 +404,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 		zone_page_state_add(v + overstep, zone, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@@ -409,6 +415,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@@ -417,6 +424,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 		node_page_state_add(v + overstep, pgdat, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
@@ -437,6 +445,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@@ -445,6 +454,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 		zone_page_state_add(v - overstep, zone, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@@ -455,6 +465,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@@ -463,6 +474,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 		node_page_state_add(v - overstep, pgdat, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
-- 
2.26.2