Date: Fri, 24 May 2019 18:19:39 +0100
From: Will Deacon
To: Waiman Long
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, Borislav Petkov,
Peter Anvin" , linux-kernel@vger.kernel.org, x86@kernel.org, Davidlohr Bueso , Linus Torvalds , Tim Chen , huang ying Subject: Re: [PATCH v2] locking/lock_events: Use this_cpu_add() when necessary Message-ID: <20190524171939.GA9120@fuggles.cambridge.arm.com> References: <20190524165346.26373-1-longman@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190524165346.26373-1-longman@redhat.com> User-Agent: Mutt/1.11.1+86 (6f28e57d73f2) () Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, May 24, 2019 at 12:53:46PM -0400, Waiman Long wrote: > The kernel test robot has reported that the use of __this_cpu_add() > causes bug messages like: > > BUG: using __this_cpu_add() in preemptible [00000000] code: ... > > This is only an issue on preempt kernel where preemption can happen in > the middle of a percpu operation. We are still using __this_cpu_*() for > !preempt kernel to avoid additional overhead in case CONFIG_PREEMPT_COUNT > is set. > > v2: Simplify the condition to just preempt or !preempt. > > Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting") > Signed-off-by: Waiman Long > --- > kernel/locking/lock_events.h | 23 +++++++++++++++++++++-- > 1 file changed, 21 insertions(+), 2 deletions(-) > > diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h > index feb1acc54611..05f34068ec06 100644 > --- a/kernel/locking/lock_events.h > +++ b/kernel/locking/lock_events.h > @@ -30,13 +30,32 @@ enum lock_events { > */ > DECLARE_PER_CPU(unsigned long, lockevents[lockevent_num]); > > +/* > + * The purpose of the lock event counting subsystem is to provide a low > + * overhead way to record the number of specific locking events by using > + * percpu counters. It is the percpu sum that matters, not specifically > + * how many of them happens in each cpu. > + * > + * In !preempt kernel, we can just use __this_cpu_*() as preemption > + * won't happen in the middle of the percpu operation. In preempt kernel, > + * preemption happens in the middle of the percpu operation may produce > + * incorrect result. > + */ > +#ifdef CONFIG_PREEMPT > +#define lockevent_percpu_inc(x) this_cpu_inc(x) > +#define lockevent_percpu_add(x, v) this_cpu_add(x, v) > +#else > +#define lockevent_percpu_inc(x) __this_cpu_inc(x) > +#define lockevent_percpu_add(x, v) __this_cpu_add(x, v) Are you sure this works wrt IRQs? For example, if I take an interrupt when trying to update the counter, and then the irq handler takes a qspinlock which in turn tries to update the counter. Would I lose an update in that scenario? Will