From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Thomas Gleixner,
    Borislav Petkov, "H. Peter Anvin"
Peter Anvin" Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Davidlohr Bueso , Linus Torvalds , Tim Chen , huang ying , Waiman Long Subject: [PATCH] locking/lock_events: Use this_cpu_add() when necessary Date: Wed, 22 May 2019 11:39:53 -0400 Message-Id: <20190522153953.30341-1-longman@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.27]); Wed, 22 May 2019 15:40:24 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The kernel test robot has reported that the use of __this_cpu_add() causes bug messages like: BUG: using __this_cpu_add() in preemptible [00000000] code: ... This is only an issue on preempt kernel where preemption can happen in the middle of the multi-instruction percpu operation. It is not an issue on x86 as the percpu operation is a single instruction. The lock events code is updated to use the slower this_cpu_add() for non-x86 preempt kernel or when CONFIG_DEBUG_PREEMPT is defined. Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting") Signed-off-by: Waiman Long --- kernel/locking/lock_events.h | 27 +++++++++++++++++++++++++-- 1 file changed, 25 insertions(+), 2 deletions(-) diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h index feb1acc54611..2b6c8b7588dc 100644 --- a/kernel/locking/lock_events.h +++ b/kernel/locking/lock_events.h @@ -30,13 +30,36 @@ enum lock_events { */ DECLARE_PER_CPU(unsigned long, lockevents[lockevent_num]); +/* + * The purpose of the lock event counting subsystem is to provide a low + * overhead way to record the number of specific locking events by using + * percpu counters. It is the percpu sum that matters, not specifically + * how many of them happens in each cpu. + * + * In !preempt kernel, we can just use __this_cpu_{inc|add}() as preemption + * won't happen in the middle of the percpu operation. In preempt kernel, + * it depends on whether the percpu operation is atomic (1 instruction) + * or not. We know x86 generates a single instruction to do percpu op, but + * we can't guarantee that for other architectures. We also need to use + * the slower this_cpu_{inc|add}() when CONFIG_DEBUG_PREEMPT is defined + * to make the checking code happy. + */ +#if defined(CONFIG_PREEMPT) && \ + (defined(CONFIG_DEBUG_PREEMPT) || !defined(CONFIG_X86)) +#define lockevent_percpu_inc(x) this_cpu_inc(x) +#define lockevent_percpu_add(x, v) this_cpu_add(x, v) +#else +#define lockevent_percpu_inc(x) __this_cpu_inc(x) +#define lockevent_percpu_add(x, v) __this_cpu_add(x, v) +#endif + /* * Increment the PV qspinlock statistical counters */ static inline void __lockevent_inc(enum lock_events event, bool cond) { if (cond) - __this_cpu_inc(lockevents[event]); + lockevent_percpu_inc(lockevents[event]); } #define lockevent_inc(ev) __lockevent_inc(LOCKEVENT_ ##ev, true) @@ -44,7 +67,7 @@ static inline void __lockevent_inc(enum lock_events event, bool cond) static inline void __lockevent_add(enum lock_events event, int inc) { - __this_cpu_add(lockevents[event], inc); + lockevent_percpu_add(lockevents[event], inc); } #define lockevent_add(ev, c) __lockevent_add(LOCKEVENT_ ##ev, c) -- 2.18.1