From: Waiman Long
To: Alexander Viro, Jan Kara, Jeff Layton, "J. Bruce Fields", Tejun Heo, Christoph Lameter
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Andi Kleen, Dave Chinner, Boqun Feng, Davidlohr Bueso, Waiman Long
Subject: [PATCH v9 6/6] prefetch: Remove spin_lock_prefetch()
Date: Mon, 17 Sep 2018 11:18:04 -0400
Message-Id: <1537197484-22154-1-git-send-email-longman@redhat.com>
In-Reply-To: <1536780532-4092-1-git-send-email-longman@redhat.com>
References: <1536780532-4092-1-git-send-email-longman@redhat.com>

The spin_lock_prefetch() call in new_inode() of fs/inode.c was the last
remaining user of this function. With the dlock list patch, that call
was removed, so the spin_lock_prefetch() function and its
architecture-specific implementations can now be removed as well.

Signed-off-by: Waiman Long
---
 arch/alpha/include/asm/processor.h                         | 12 ------------
 arch/arm64/include/asm/processor.h                         |  8 --------
 arch/ia64/include/asm/processor.h                          |  2 --
 .../include/asm/mach-cavium-octeon/cpu-feature-overrides.h |  1 -
 arch/powerpc/include/asm/processor.h                       |  2 --
 arch/sparc/include/asm/processor_64.h                      |  2 --
 arch/x86/include/asm/processor.h                           |  5 -----
 include/linux/prefetch.h                                   |  7 +------
 8 files changed, 1 insertion(+), 38 deletions(-)

diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h
index cb05d04..d17d395 100644
--- a/arch/alpha/include/asm/processor.h
+++ b/arch/alpha/include/asm/processor.h
@@ -61,11 +61,6 @@
 #define ARCH_HAS_PREFETCHW
 #define ARCH_HAS_SPINLOCK_PREFETCH
 
-#ifndef CONFIG_SMP
-/* Nothing to prefetch. */
-#define spin_lock_prefetch(lock)	do { } while (0)
-#endif
-
 extern inline void prefetch(const void *ptr)
 {
 	__builtin_prefetch(ptr, 0, 3);
@@ -76,11 +71,4 @@ extern inline void prefetchw(const void *ptr)
 	__builtin_prefetch(ptr, 1, 3);
 }
 
-#ifdef CONFIG_SMP
-extern inline void spin_lock_prefetch(const void *ptr)
-{
-	__builtin_prefetch(ptr, 1, 3);
-}
-#endif
-
 #endif /* __ASM_ALPHA_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 79657ad..56aca32 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -232,14 +232,6 @@ static inline void prefetchw(const void *ptr)
 	asm volatile("prfm pstl1keep, %a0\n" : : "p" (ptr));
 }
 
-#define ARCH_HAS_SPINLOCK_PREFETCH
-static inline void spin_lock_prefetch(const void *ptr)
-{
-	asm volatile(ARM64_LSE_ATOMIC_INSN(
-		     "prfm pstl1strm, %a0",
-		     "nop") : : "p" (ptr));
-}
-
 #define HAVE_ARCH_PICK_MMAP_LAYOUT
 
 #endif
diff --git a/arch/ia64/include/asm/processor.h b/arch/ia64/include/asm/processor.h
index 10061ccf..12770bf 100644
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -676,8 +676,6 @@ struct thread_struct {
 	ia64_lfetch_excl(ia64_lfhint_none, x);
 }
 
-#define spin_lock_prefetch(x)	prefetchw(x)
-
 extern unsigned long boot_option_idle_override;
 
 enum idle_boot_override {IDLE_NO_OVERRIDE=0, IDLE_HALT, IDLE_FORCE_MWAIT,
diff --git a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
index a4f7986..9981729 100644
--- a/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
+++ b/arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
@@ -62,7 +62,6 @@
 #define ARCH_HAS_IRQ_PER_CPU	1
 #define ARCH_HAS_SPINLOCK_PREFETCH 1
-#define spin_lock_prefetch(x) prefetch(x)
 #define PREFETCH_STRIDE 128
 
 #ifdef __OCTEON__
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 52fadded..3c4dcad 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -488,8 +488,6 @@ static inline void prefetchw(const void *x)
 	__asm__ __volatile__ ("dcbtst 0,%0" : : "r" (x));
 }
 
-#define spin_lock_prefetch(x)	prefetchw(x)
-
 #define HAVE_ARCH_PICK_MMAP_LAYOUT
 
 #ifdef CONFIG_PPC64
diff --git a/arch/sparc/include/asm/processor_64.h b/arch/sparc/include/asm/processor_64.h
index aac23d4..71a61dd 100644
--- a/arch/sparc/include/asm/processor_64.h
+++ b/arch/sparc/include/asm/processor_64.h
@@ -252,8 +252,6 @@ static inline void prefetchw(const void *x)
 			     : "r" (x));
 }
 
-#define spin_lock_prefetch(x)	prefetchw(x)
-
 #define HAVE_ARCH_PICK_MMAP_LAYOUT
 
 int do_mathemu(struct pt_regs *regs, struct fpustate *f, bool illegal_insn_trap);
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b922eed..44df38e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -830,11 +830,6 @@ static inline void prefetchw(const void *x)
 			  "m" (*(const char *)x));
 }
 
-static inline void spin_lock_prefetch(const void *x)
-{
-	prefetchw(x);
-}
-
 #define TOP_OF_INIT_STACK ((unsigned long)&init_stack + sizeof(init_stack) - \
 			   TOP_OF_KERNEL_STACK_PADDING)
 
diff --git a/include/linux/prefetch.h b/include/linux/prefetch.h
index 13eafeb..8f074fb 100644
--- a/include/linux/prefetch.h
+++ b/include/linux/prefetch.h
@@ -24,11 +24,10 @@
 	prefetch() should be defined by the architecture, if not, the
 	#define below provides a no-op define.
 
-	There are 3 prefetch() macros:
+	There are 2 prefetch() macros:
 
 	prefetch(x)  	- prefetches the cacheline at "x" for read
 	prefetchw(x) 	- prefetches the cacheline at "x" for write
-	spin_lock_prefetch(x) - prefetches the spinlock *x for taking
 
 	there is also PREFETCH_STRIDE which is the architecure-preferred
 	"lookahead" size for prefetching streamed operations.
@@ -43,10 +42,6 @@
 #define prefetchw(x) __builtin_prefetch(x,1)
 #endif
 
-#ifndef ARCH_HAS_SPINLOCK_PREFETCH
-#define spin_lock_prefetch(x) prefetchw(x)
-#endif
-
 #ifndef PREFETCH_STRIDE
 #define PREFETCH_STRIDE (4*L1_CACHE_BYTES)
 #endif
-- 
1.8.3.1