Subject: [PATCH 10/45] C++: x86: Turn xchg(), xadd() & co. into inline template functions
From: David Howells <dhowells@redhat.com>
To: linux-kernel@vger.kernel.org
Date: Sun, 01 Apr 2018 21:41:15 +0100
Message-ID: <152261527543.30503.554397458877242700.stgit@warthog.procyon.org.uk>
In-Reply-To: <152261521484.30503.16131389653845029164.stgit@warthog.procyon.org.uk>
References: <152261521484.30503.16131389653845029164.stgit@warthog.procyon.org.uk>

Turn xchg(), xadd() and similar functions into inline C++ template
functions.  This produces more robust source as all the casting the C
macros require is then unnecessary.

Signed-off-by: David Howells <dhowells@redhat.com>
---

 arch/x86/include/asm/cmpxchg.h | 109 ++++++++++++++++++++++++----------------
 1 file changed, 65 insertions(+), 44 deletions(-)

diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index 56bd436ed01b..5e896c17476d 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0 -*- c++ -*- */
 #ifndef ASM_X86_CMPXCHG_H
 #define ASM_X86_CMPXCHG_H
 
@@ -39,43 +39,73 @@ extern void __add_wrong_size(void)
  * An exchange-type operation, which takes a value and a pointer, and
  * returns the old value.
  */
-#define __xchg_op(ptr, arg, op, lock)                                  \
-       ({                                                              \
-               __typeof__ (*(ptr)) __ret = (arg);                      \
-               switch (sizeof(*(ptr))) {                               \
-               case __X86_CASE_B:                                      \
-                       asm volatile (lock #op "b %b0, %1\n"            \
-                                     : "+q" (__ret), "+m" (*(ptr))     \
-                                     : : "memory", "cc");              \
-                       break;                                          \
-               case __X86_CASE_W:                                      \
-                       asm volatile (lock #op "w %w0, %1\n"            \
-                                     : "+r" (__ret), "+m" (*(ptr))     \
-                                     : : "memory", "cc");              \
-                       break;                                          \
-               case __X86_CASE_L:                                      \
-                       asm volatile (lock #op "l %0, %1\n"             \
-                                     : "+r" (__ret), "+m" (*(ptr))     \
-                                     : : "memory", "cc");              \
-                       break;                                          \
-               case __X86_CASE_Q:                                      \
-                       asm volatile (lock #op "q %q0, %1\n"            \
-                                     : "+r" (__ret), "+m" (*(ptr))     \
-                                     : : "memory", "cc");              \
-                       break;                                          \
-               default:                                                \
-                       __ ## op ## _wrong_size();                      \
-               }                                                       \
-               __ret;                                                  \
-       })
+template <typename P, typename N>
+static inline P xchg(P *ptr, N rep)
+{
+       P v = rep;
+
+       if (sizeof(P) > sizeof(unsigned long))
+               __xchg_wrong_size();
+
+       /* Note: no "lock" prefix even on SMP: xchg always implies lock anyway.
+        * Since this is generally used to protect other memory information, we
+        * use "asm volatile" and "memory" clobbers to prevent gcc from moving
+        * information around.
+        */
+       asm volatile("xchg %[v], %[ptr]"
+                    : [ptr] "+m" (*ptr),
+                      [v] "+a" (v)
+                    :
+                    : "memory");
+
+       return v;
+}
 
 /*
- * Note: no "lock" prefix even on SMP: xchg always implies lock anyway.
- * Since this is generally used to protect other memory information, we
- * use "asm volatile" and "memory" clobbers to prevent gcc from moving
- * information around.
+ * __xadd() adds "inc" to "*ptr" and atomically returns the previous
+ * value of "*ptr".
+ *
+ * __xadd() is always locked.
  */
-#define xchg(ptr, v)   __xchg_op((ptr), (v), xchg, "")
+template <typename P, typename N>
+static inline P __xadd(P *ptr, N inc)
+{
+       P v = inc;
+
+       if (sizeof(P) > sizeof(unsigned long))
+               __xadd_wrong_size();
+
+       asm volatile("lock; xadd %[v], %[ptr]"
+                    : [ptr] "+m" (*ptr),
+                      [v] "+a" (v)
+                    :
+                    : "memory");
+
+       return v;
+}
+
+/*
+ * xadd() adds "inc" to "*ptr" and atomically returns the previous
+ * value of "*ptr".
+ *
+ * xadd() is locked when multiple CPUs are online
+ */
+template <typename P, typename N>
+static inline P xadd(P *ptr, N inc)
+{
+       P v = inc;
+
+       if (sizeof(P) > sizeof(unsigned long))
+               __xadd_wrong_size();
+
+       asm volatile(LOCK_PREFIX "xadd %[v], %[ptr]"
+                    : [ptr] "+m" (*ptr),
+                      [v] "+a" (v)
+                    :
+                    : "memory");
+
+       return v;
+}
 
 /*
  * Atomic compare and exchange.  Compare OLD with MEM, if identical,
@@ -224,15 +254,6 @@ extern void __add_wrong_size(void)
 #define try_cmpxchg(ptr, pold, new)                                    \
        __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
 
-/*
- * xadd() adds "inc" to "*ptr" and atomically returns the previous
- * value of "*ptr".
- *
- * xadd() is locked when multiple CPUs are online
- */
-#define __xadd(ptr, inc, lock) __xchg_op((ptr), (inc), xadd, lock)
-#define xadd(ptr, inc)         __xadd((ptr), (inc), LOCK_PREFIX)
-
 #define __cmpxchg_double(pfx, p1, p2, o1, o2, n1, n2)                  \
 ({                                                                     \
        bool __ret;                                                     \
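
Not part of the patch: a minimal, self-contained sketch of why the template
form removes the caller-side casting.  P is deduced from the pointer, the new
value of type N is implicitly converted to P, so mixed-type calls need no
__typeof__ gymnastics.  xchg_sketch() below is a hypothetical stand-in for the
patch's templated xchg(), not kernel code; it assumes x86-64 with GCC/Clang
extended asm and uses a static_assert where the patch calls
__xchg_wrong_size().

/* Illustration only: build with  g++ -std=gnu++11 -O2 xchg_sketch.cpp  on x86-64. */
#include <cstdio>

template <typename P, typename N>
static inline P xchg_sketch(P *ptr, N rep)
{
        P v = rep;      /* N is converted to P here; no caller-side cast needed */

        /* Stand-in for the patch's runtime __xchg_wrong_size() check. */
        static_assert(sizeof(P) <= sizeof(unsigned long),
                      "operand too wide for a single xchg");

        /* xchg with a memory operand is implicitly locked, so no lock prefix. */
        asm volatile("xchg %[v], %[ptr]"
                     : [ptr] "+m" (*ptr), [v] "+r" (v)
                     :
                     : "memory");
        return v;
}

int main()
{
        unsigned long word = 5;
        int small = 3;

        /* Mixed integer types; the template converts, the caller casts nothing. */
        unsigned long old_word = xchg_sketch(&word, 9);         /* int -> unsigned long */
        int old_small = xchg_sketch(&small, 7L);                /* long -> int */

        printf("old_word=%lu word=%lu old_small=%d small=%d\n",
               old_word, word, old_small, small);
        return 0;
}

The sketch deliberately differs from the patch in two places: it uses "+r"
rather than pinning the value to the accumulator ("+a"), and it rejects
oversized operands at compile time instead of via the undefined
__xchg_wrong_size() reference, purely so it can stand alone outside the kernel.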