From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
	catalin.marinas@arm.com, dvyukov@google.com, mark.rutland@arm.com,
	mingo@kernel.org, peterz@infradead.org, will.deacon@arm.com
Subject: [PATCH 2/6] locking/atomic, asm-generic: instrument atomic*andnot*()
Date: Fri, 4 May 2018 18:39:33 +0100
Message-Id: <20180504173937.25300-3-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180504173937.25300-1-mark.rutland@arm.com>
References: <20180504173937.25300-1-mark.rutland@arm.com>
We don't currently define instrumentation wrappers for the various forms of
atomic*andnot*(), as these aren't implemented directly by x86. So that we can
instrument architectures which provide these, let's define wrappers for all
the variants of these atomics.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/atomic-instrumented.h | 112 ++++++++++++++++++++++++++++++
 1 file changed, 112 insertions(+)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 26f0e3098442..b1920f0f64ab 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -498,6 +498,62 @@ INSTR_ATOMIC64_AND(_release)
 #define atomic64_and_release atomic64_and_release
 #endif
 
+#define INSTR_ATOMIC_ANDNOT(order)				\
+static __always_inline void					\
+atomic_andnot##order(int i, atomic_t *v)			\
+{								\
+	kasan_check_write(v, sizeof(*v));			\
+	arch_atomic_andnot##order(i, v);			\
+}
+
+#ifdef arch_atomic_andnot
+INSTR_ATOMIC_ANDNOT()
+#define atomic_andnot atomic_andnot
+#endif
+
+#ifdef arch_atomic_andnot_relaxed
+INSTR_ATOMIC_ANDNOT(_relaxed)
+#define atomic_andnot_relaxed atomic_andnot_relaxed
+#endif
+
+#ifdef arch_atomic_andnot_acquire
+INSTR_ATOMIC_ANDNOT(_acquire)
+#define atomic_andnot_acquire atomic_andnot_acquire
+#endif
+
+#ifdef arch_atomic_andnot_release
+INSTR_ATOMIC_ANDNOT(_release)
+#define atomic_andnot_release atomic_andnot_release
+#endif
+
+#define INSTR_ATOMIC64_ANDNOT(order)				\
+static __always_inline void					\
+atomic64_andnot##order(s64 i, atomic64_t *v)			\
+{								\
+	kasan_check_write(v, sizeof(*v));			\
+	arch_atomic64_andnot##order(i, v);			\
+}
+
+#ifdef arch_atomic64_andnot
+INSTR_ATOMIC64_ANDNOT()
+#define atomic64_andnot atomic64_andnot
+#endif
+
+#ifdef arch_atomic64_andnot_relaxed
+INSTR_ATOMIC64_ANDNOT(_relaxed)
+#define atomic64_andnot_relaxed atomic64_andnot_relaxed
+#endif
+
+#ifdef arch_atomic64_andnot_acquire
+INSTR_ATOMIC64_ANDNOT(_acquire)
+#define atomic64_andnot_acquire atomic64_andnot_acquire
+#endif
+
+#ifdef arch_atomic64_andnot_release
+INSTR_ATOMIC64_ANDNOT(_release)
+#define atomic64_andnot_release atomic64_andnot_release
+#endif
+
 #define INSTR_ATOMIC_OR(order)					\
 static __always_inline void					\
 atomic_or##order(int i, atomic_t *v)				\
@@ -984,6 +1040,62 @@ INSTR_ATOMIC64_FETCH_AND(_release)
 #define atomic64_fetch_and_release atomic64_fetch_and_release
 #endif
 
+#define INSTR_ATOMIC_FETCH_ANDNOT(order)			\
+static __always_inline int					\
+atomic_fetch_andnot##order(int i, atomic_t *v)			\
+{								\
+	kasan_check_write(v, sizeof(*v));			\
+	return arch_atomic_fetch_andnot##order(i, v);		\
+}
+
+#ifdef arch_atomic_fetch_andnot
+INSTR_ATOMIC_FETCH_ANDNOT()
+#define atomic_fetch_andnot atomic_fetch_andnot
+#endif
+
+#ifdef arch_atomic_fetch_andnot_relaxed
+INSTR_ATOMIC_FETCH_ANDNOT(_relaxed)
+#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
+#endif
+
+#ifdef arch_atomic_fetch_andnot_acquire
+INSTR_ATOMIC_FETCH_ANDNOT(_acquire)
+#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
+#endif
+
+#ifdef arch_atomic_fetch_andnot_release
+INSTR_ATOMIC_FETCH_ANDNOT(_release)
+#define atomic_fetch_andnot_release atomic_fetch_andnot_release
+#endif
+
+#define INSTR_ATOMIC64_FETCH_ANDNOT(order)			\
+static __always_inline s64					\
+atomic64_fetch_andnot##order(s64 i, atomic64_t *v)		\
+{								\
+	kasan_check_write(v, sizeof(*v));			\
+	return arch_atomic64_fetch_andnot##order(i, v);		\
+}
+
+#ifdef arch_atomic64_fetch_andnot
+INSTR_ATOMIC64_FETCH_ANDNOT()
+#define atomic64_fetch_andnot atomic64_fetch_andnot
+#endif
+
+#ifdef arch_atomic64_fetch_andnot_relaxed
+INSTR_ATOMIC64_FETCH_ANDNOT(_relaxed)
+#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
+#endif
+
+#ifdef arch_atomic64_fetch_andnot_acquire
+INSTR_ATOMIC64_FETCH_ANDNOT(_acquire)
+#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
+#endif
+
+#ifdef arch_atomic64_fetch_andnot_release
+INSTR_ATOMIC64_FETCH_ANDNOT(_release)
+#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
+#endif
+
 #define INSTR_ATOMIC_FETCH_OR(order)				\
 static __always_inline int					\
 atomic_fetch_or##order(int i, atomic_t *v)			\
-- 
2.11.0
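[Editor's note: for readers who want to see the shape of code these macros generate, below is a
minimal, self-contained user-space sketch of the same pattern: a wrapper that checks the target
object before deferring to the underlying "arch" operation. The check_write() stub, the GCC
__atomic_and_fetch() builtin and main() are stand-ins chosen only for illustration; they are not
kasan_check_write(), not the kernel's arch_atomic_andnot(), and not part of this patch.]

/* Illustrative sketch only; not kernel code. */
#include <stdio.h>
#include <stddef.h>

typedef struct { int counter; } atomic_t;

/* Stand-in for kasan_check_write(): here it only logs the access. */
static void check_write(void *p, size_t size)
{
	printf("checked write of %zu bytes at %p\n", size, p);
}

/* Stand-in "arch" implementation: clear in *v the bits set in i. */
static inline void arch_atomic_andnot(int i, atomic_t *v)
{
	__atomic_and_fetch(&v->counter, ~i, __ATOMIC_RELAXED);
}
#define arch_atomic_andnot arch_atomic_andnot

/* Same shape as INSTR_ATOMIC_ANDNOT(): check first, then defer to the arch op. */
#define INSTR_ATOMIC_ANDNOT(order)				\
static inline void						\
atomic_andnot##order(int i, atomic_t *v)			\
{								\
	check_write(v, sizeof(*v));				\
	arch_atomic_andnot##order(i, v);			\
}

/* Only generate the wrapper for variants the "arch" actually provides. */
#ifdef arch_atomic_andnot
INSTR_ATOMIC_ANDNOT()
#define atomic_andnot atomic_andnot
#endif

int main(void)
{
	atomic_t v = { .counter = 0xff };

	atomic_andnot(0x0f, &v);		/* clears the low nibble */
	printf("counter = 0x%x\n", v.counter);	/* prints 0xf0 */
	return 0;
}

[Built with a recent GCC or Clang, this prints the checked access and leaves counter at 0xf0,
mirroring how the instrumented atomic_andnot() first lets KASAN validate the write and only then
performs v->counter &= ~i via the architecture implementation.]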