Date: Mon, 12 Feb 2024 10:59:46 +0000
In-Reply-To: <20240212-projector-dangle-7815fa2f7415@wendy>
References: <20240212-projector-dangle-7815fa2f7415@wendy>
Message-ID: <20240212105946.1241100-1-ericchancf@google.com>
Subject: [PATCH v2] riscv/fence: Consolidate fence definitions and define __{mb,rmb,wmb}
From: Eric Chan <ericchancf@google.com>
To: conor.dooley@microchip.com
Cc:
    aou@eecs.berkeley.edu, ericchancf@google.com, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org, palmer@dabbelt.com, paul.walmsley@sifive.com
Content-Type: text/plain; charset="UTF-8"

Disparate fence implementations are consolidated into fence.h. Introduce
__{mb,rmb,wmb}, and rely on the generic definitions for {mb,rmb,wmb}. A
first consequence is that __{mb,rmb,wmb} map to a compiler barrier on
!SMP (while their definition remains unchanged on SMP).

Introduce RISCV_FULL_BARRIER and use it in the arch_atomic* functions.
Like RISCV_ACQUIRE_BARRIER and RISCV_RELEASE_BARRIER, its fence
instruction can be eliminated when SMP is not enabled.

Also clean up the warnings reported by scripts/checkpatch.pl.

Signed-off-by: Eric Chan <ericchancf@google.com>
---
v1 -> v2: make compilation pass with allyesconfig instead of defconfig
only, and satisfy scripts/checkpatch.pl:
- (__asm__ __volatile__ (RISCV_FENCE_ASM(p, s) : : : "memory"))
+ ({ __asm__ __volatile__ (RISCV_FENCE_ASM(p, s) : : : "memory"); })
Thanks to Conor for reviewing it.
 arch/riscv/include/asm/atomic.h  | 24 ++++++++++--------------
 arch/riscv/include/asm/barrier.h | 21 ++++++++++-----------
 arch/riscv/include/asm/cmpxchg.h |  5 ++---
 arch/riscv/include/asm/fence.h   | 11 +++++++++--
 arch/riscv/include/asm/io.h      |  8 ++++----
 arch/riscv/include/asm/mmio.h    |  5 +++--
 arch/riscv/include/asm/mmiowb.h  |  2 +-
 7 files changed, 39 insertions(+), 37 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index f5dfef6c2153..19050d13b6c1 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -17,13 +17,9 @@
 #endif
 
 #include
-#include
 
-#define __atomic_acquire_fence() \
-	__asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
-
-#define __atomic_release_fence() \
-	__asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
+#define __atomic_acquire_fence()	RISCV_FENCE(r, rw)
+#define __atomic_release_fence()	RISCV_FENCE(rw, w)
 
 static __always_inline int arch_atomic_read(const atomic_t *v)
 {
@@ -207,7 +203,7 @@ static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int
 		"	add      %[rc], %[p], %[a]\n"
 		"	sc.w.rl  %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		: [a]"r" (a), [u]"r" (u)
@@ -228,7 +224,7 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
 		"	add      %[rc], %[p], %[a]\n"
 		"	sc.d.rl  %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		: [a]"r" (a), [u]"r" (u)
@@ -248,7 +244,7 @@ static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
 		"	addi      %[rc], %[p], 1\n"
 		"	sc.w.rl   %[rc], %[rc], %[c]\n"
 		"	bnez      %[rc], 0b\n"
-		"	fence     rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
@@ -268,7 +264,7 @@ static __always_inline bool arch_atomic_dec_unless_positive(atomic_t *v)
 		"	addi      %[rc], %[p], -1\n"
 		"	sc.w.rl   %[rc], %[rc], %[c]\n"
 		"	bnez      %[rc], 0b\n"
-		"	fence     rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
@@ -288,7 +284,7 @@ static __always_inline int arch_atomic_dec_if_positive(atomic_t *v)
 		"	bltz     %[rc], 1f\n"
 		"	sc.w.rl  %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
@@ -310,7 +306,7 @@ static __always_inline bool arch_atomic64_inc_unless_negative(atomic64_t *v)
 		"	addi      %[rc], %[p], 1\n"
 		"	sc.d.rl   %[rc], %[rc], %[c]\n"
 		"	bnez      %[rc], 0b\n"
-		"	fence     rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
@@ -331,7 +327,7 @@ static __always_inline bool arch_atomic64_dec_unless_positive(atomic64_t *v)
 		"	addi      %[rc], %[p], -1\n"
 		"	sc.d.rl   %[rc], %[rc], %[c]\n"
 		"	bnez      %[rc], 0b\n"
-		"	fence     rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
@@ -352,7 +348,7 @@ static __always_inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 		"	bltz     %[rc], 1f\n"
 		"	sc.d.rl  %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
+		RISCV_FULL_BARRIER
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		:
diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
index 110752594228..880b56d8480d 100644
--- a/arch/riscv/include/asm/barrier.h
+++ b/arch/riscv/include/asm/barrier.h
@@ -11,28 +11,27 @@
 #define _ASM_RISCV_BARRIER_H
 
 #ifndef __ASSEMBLY__
+#include
 
 #define nop()		__asm__ __volatile__ ("nop")
 #define __nops(n)	".rept	" #n "\nnop\n.endr\n"
 #define nops(n)		__asm__ __volatile__ (__nops(n))
 
-#define RISCV_FENCE(p, s) \
-	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")
-
 /* These barriers need to enforce ordering on both devices or memory.
  */
-#define mb()		RISCV_FENCE(iorw,iorw)
-#define rmb()		RISCV_FENCE(ir,ir)
-#define wmb()		RISCV_FENCE(ow,ow)
+#define __mb()		RISCV_FENCE(iorw, iorw)
+#define __rmb()		RISCV_FENCE(ir, ir)
+#define __wmb()		RISCV_FENCE(ow, ow)
 
 /* These barriers do not need to enforce ordering on devices, just memory. */
-#define __smp_mb()	RISCV_FENCE(rw,rw)
-#define __smp_rmb()	RISCV_FENCE(r,r)
-#define __smp_wmb()	RISCV_FENCE(w,w)
+#define __smp_mb()	RISCV_FENCE(rw, rw)
+#define __smp_rmb()	RISCV_FENCE(r, r)
+#define __smp_wmb()	RISCV_FENCE(w, w)
 
 #define __smp_store_release(p, v)					\
 do {									\
 	compiletime_assert_atomic_type(*p);				\
-	RISCV_FENCE(rw,w);						\
+	RISCV_FENCE(rw, w);						\
 	WRITE_ONCE(*p, v);						\
 } while (0)
 
@@ -40,7 +39,7 @@ do {									\
 ({									\
 	typeof(*p) ___p1 = READ_ONCE(*p);				\
 	compiletime_assert_atomic_type(*p);				\
-	RISCV_FENCE(r,rw);						\
+	RISCV_FENCE(r, rw);						\
 	___p1;								\
 })
 
@@ -69,7 +68,7 @@ do {									\
  * instances the scheduler pairs this with an mb(), so nothing is necessary on
  * the new hart.
  */
-#define smp_mb__after_spinlock()	RISCV_FENCE(iorw,iorw)
+#define smp_mb__after_spinlock()	RISCV_FENCE(iorw, iorw)
 
 #include
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 2f4726d3cfcc..2fee65cc8443 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -8,7 +8,6 @@
 
 #include
-#include
 #include
 
 #define __xchg_relaxed(ptr, new, size)					\
@@ -313,7 +312,7 @@
 	"	bne  %0, %z3, 1f\n"					\
 	"	sc.w.rl %1, %z4, %2\n"					\
 	"	bnez %1, 0b\n"						\
-	"	fence rw, rw\n"						\
+	RISCV_FULL_BARRIER						\
 	"1:\n"								\
 	: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)			\
 	: "rJ" ((long)__old), "rJ" (__new)				\
@@ -325,7 +324,7 @@
 	"	bne %0, %z3, 1f\n"					\
 	"	sc.d.rl %1, %z4, %2\n"					\
 	"	bnez %1, 0b\n"						\
-	"	fence rw, rw\n"						\
+	RISCV_FULL_BARRIER						\
 	"1:\n"								\
 	: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)			\
 	: "rJ" (__old), "rJ" (__new)					\
diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
index 2b443a3a487f..c0fd82d72a0e 100644
--- a/arch/riscv/include/asm/fence.h
+++ b/arch/riscv/include/asm/fence.h
@@ -1,12 +1,19 @@
 #ifndef _ASM_RISCV_FENCE_H
 #define _ASM_RISCV_FENCE_H
 
+#define RISCV_FENCE_ASM(p, s) \
+	"\tfence " #p "," #s "\n"
+#define RISCV_FENCE(p, s) \
+	({ __asm__ __volatile__ (RISCV_FENCE_ASM(p, s) : : : "memory"); })
+
 #ifdef CONFIG_SMP
-#define RISCV_ACQUIRE_BARRIER		"\tfence r , rw\n"
-#define RISCV_RELEASE_BARRIER		"\tfence rw,  w\n"
+#define RISCV_ACQUIRE_BARRIER		RISCV_FENCE_ASM(r, rw)
+#define RISCV_RELEASE_BARRIER		RISCV_FENCE_ASM(rw, w)
+#define RISCV_FULL_BARRIER		RISCV_FENCE_ASM(rw, rw)
 #else
 #define RISCV_ACQUIRE_BARRIER
 #define RISCV_RELEASE_BARRIER
+#define RISCV_FULL_BARRIER
 #endif
 
 #endif	/* _ASM_RISCV_FENCE_H */
diff --git a/arch/riscv/include/asm/io.h b/arch/riscv/include/asm/io.h
index 42497d487a17..1c5c641075d2 100644
--- a/arch/riscv/include/asm/io.h
+++ b/arch/riscv/include/asm/io.h
@@ -47,10 +47,10 @@
 * sufficient to ensure this works sanely on controllers that support I/O
 * writes.
 */
-#define __io_pbr()	__asm__ __volatile__ ("fence io,i"  : : : "memory");
-#define __io_par(v)	__asm__ __volatile__ ("fence i,ior" : : : "memory");
-#define __io_pbw()	__asm__ __volatile__ ("fence iow,o" : : : "memory");
-#define __io_paw()	__asm__ __volatile__ ("fence o,io"  : : : "memory");
+#define __io_pbr()	RISCV_FENCE(io, i)
+#define __io_par(v)	RISCV_FENCE(i, ior)
+#define __io_pbw()	RISCV_FENCE(iow, o)
+#define __io_paw()	RISCV_FENCE(o, io)
 
 /*
 * Accesses from a single hart to a single I/O address must be ordered. This
diff --git a/arch/riscv/include/asm/mmio.h b/arch/riscv/include/asm/mmio.h
index 4c58ee7f95ec..06cadfd7a237 100644
--- a/arch/riscv/include/asm/mmio.h
+++ b/arch/riscv/include/asm/mmio.h
@@ -12,6 +12,7 @@
 #define _ASM_RISCV_MMIO_H
 
 #include
+#include
 #include
 
 /* Generic IO read/write. These perform native-endian accesses. */
@@ -131,8 +132,8 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
 * doesn't define any ordering between the memory space and the I/O space.
 */
 #define __io_br()	do {} while (0)
-#define __io_ar(v)	({ __asm__ __volatile__ ("fence i,ir" : : : "memory"); })
-#define __io_bw()	({ __asm__ __volatile__ ("fence w,o" : : : "memory"); })
+#define __io_ar(v)	RISCV_FENCE(i, ir)
+#define __io_bw()	RISCV_FENCE(w, o)
 #define __io_aw()	mmiowb_set_pending()
 
 #define readb(c)	({ u8 __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
diff --git a/arch/riscv/include/asm/mmiowb.h b/arch/riscv/include/asm/mmiowb.h
index 0b2333e71fdc..52ce4a399d9b 100644
--- a/arch/riscv/include/asm/mmiowb.h
+++ b/arch/riscv/include/asm/mmiowb.h
@@ -7,7 +7,7 @@
 * "o,w" is sufficient to ensure that all writes to the device have completed
 * before the write to the spinlock is allowed to commit.
 */
-#define mmiowb()	__asm__ __volatile__ ("fence o,w" : : : "memory");
+#define mmiowb()	RISCV_FENCE(o, w)
 
 #include
 #include
-- 
2.43.0.687.g38aa6559b0-goog