From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "Dave Chinner", "Tejun Heo"
Date: Sun, 09 Dec 2018 21:50:33 +0000
Subject: [PATCH 3.16 108/328] percpu_counter: batch size aware __percpu_counter_compare()
X-Mailing-List:
linux-kernel@vger.kernel.org

3.16.62-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dave Chinner

commit 80188b0d77d7426b494af739ac129e0e684acb84 upstream.

XFS uses non-standard batch sizes to avoid frequent global counter
updates on its allocated inode counters, as they increment or decrement
in batches of 64 inodes.  Hence the standard percpu counter batch of 32
means that the counter is effectively a global counter.  Currently XFS
uses a batch size of 128 so that it doesn't take the global lock on
every single modification.

However, XFS also needs to compare accurately against zero, which means
we need to use percpu_counter_compare(), and that has a hard-coded batch
size of 32.  Hence it will spuriously fail to detect when it is supposed
to use precise comparisons, and the accounting goes wrong.

Add __percpu_counter_compare() to take a custom batch size so we can use
it sanely in XFS, and factor percpu_counter_compare() to use it.

Signed-off-by: Dave Chinner
Acked-by: Tejun Heo
Signed-off-by: Dave Chinner
Signed-off-by: Ben Hutchings
---
 include/linux/percpu_counter.h | 13 ++++++++++++-
 lib/percpu_counter.c           |  6 +++---
 2 files changed, 15 insertions(+), 4 deletions(-)

--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -40,7 +40,12 @@ void percpu_counter_destroy(struct percp
 void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
 void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
-int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs);
+int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+
+static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
+{
+	return __percpu_counter_compare(fbc, rhs, percpu_counter_batch);
+}
 
 static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
@@ -114,6 +119,12 @@ static inline int percpu_counter_compare
 	return 0;
 }
 
+static inline int
+__percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
+{
+	return percpu_counter_compare(fbc, rhs);
+}
+
 static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 {
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -193,13 +193,13 @@ static int percpu_counter_hotcpu_callbac
  * Compare counter against given value.
  * Return 1 if greater, 0 if equal and -1 if less
  */
-int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
+int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 {
 	s64 count;
 
 	count = percpu_counter_read(fbc);
 	/* Check to see if rough count will be sufficient for comparison */
-	if (abs(count - rhs) > (percpu_counter_batch*num_online_cpus())) {
+	if (abs(count - rhs) > (batch * num_online_cpus())) {
 		if (count > rhs)
 			return 1;
 		else
@@ -214,7 +214,7 @@ int percpu_counter_compare(struct percpu
 	else
 		return 0;
 }
-EXPORT_SYMBOL(percpu_counter_compare);
+EXPORT_SYMBOL(__percpu_counter_compare);
 
 static int __init percpu_counter_startup(void)
 {