From: Michael Ellerman
To: Christophe Leroy, Alastair D'Silva, alastair@d-silva.org
Cc: Benjamin Herrenschmidt, Paul Mackerras, Greg Kroah-Hartman, Thomas Gleixner, Qian Cai, Nicholas Piggin, Allison Randal, Andrew Morton, David Hildenbrand, Mike Rapoport, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/6] powerpc: Convert flush_icache_range & friends to C
In-Reply-To: <44b8223d-52d9-e932-4bb7-b7590ea11a03@c-s.fr>
References: <20190903052407.16638-1-alastair@au1.ibm.com> <20190903052407.16638-4-alastair@au1.ibm.com> <44b8223d-52d9-e932-4bb7-b7590ea11a03@c-s.fr>
Date: Tue, 03 Sep 2019 21:25:39 +1000
Message-ID: <87muflr58s.fsf@mpe.ellerman.id.au>

Christophe Leroy writes:
> On 03/09/2019 at 07:23, Alastair D'Silva wrote:
>> From: Alastair D'Silva
>>
>> Similar to commit 22e9c88d486a
>> ("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
>> this patch converts the following ASM symbols to C:
>>     flush_icache_range()
>>     __flush_dcache_icache()
>>     __flush_dcache_icache_phys()
>>
>> This was done as we discovered a long-standing bug where the length of the
>> range was truncated due to using a 32 bit shift instead of a 64 bit one.
>>
>> By converting these functions to C, it becomes easier to maintain.
>>
>> flush_dcache_icache_phys() retains a critical assembler section as we must
>> ensure there are no memory accesses while the data MMU is disabled
>> (authored by Christophe Leroy). Since this has no external callers, it has
>> also been made static, allowing the compiler to inline it within
>> flush_dcache_icache_page().
>>
>> Signed-off-by: Alastair D'Silva
>> Signed-off-by: Christophe Leroy
>> ---
>>  arch/powerpc/include/asm/cache.h      |  26 ++---
>>  arch/powerpc/include/asm/cacheflush.h |  24 ++--
>>  arch/powerpc/kernel/misc_32.S         | 117 --------------------
>>  arch/powerpc/kernel/misc_64.S         | 102 -----------------
>>  arch/powerpc/mm/mem.c                 | 152 +++++++++++++++++++++++++-
>>  5 files changed, 173 insertions(+), 248 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h
>> index f852d5cd746c..91c808c6738b 100644
>> --- a/arch/powerpc/include/asm/cache.h
>> +++ b/arch/powerpc/include/asm/cache.h
>> @@ -98,20 +98,7 @@ static inline u32 l1_icache_bytes(void)
>>  #endif
>>  #endif /* ! __ASSEMBLY__ */
>>
>> -#if defined(__ASSEMBLY__)
>> -/*
>> - * For a snooping icache, we still need a dummy icbi to purge all the
>> - * prefetched instructions from the ifetch buffers. We also need a sync
>> - * before the icbi to order the the actual stores to memory that might
>> - * have modified instructions with the icbi.
>> - */
>> -#define PURGE_PREFETCHED_INS	\
>> -	sync;			\
>> -	icbi	0,r3;		\
>> -	sync;			\
>> -	isync
>> -
>> -#else
>> +#if !defined(__ASSEMBLY__)
>>  #define __read_mostly __attribute__((__section__(".data..read_mostly")))
>>
>>  #ifdef CONFIG_PPC_BOOK3S_32
>> @@ -145,6 +132,17 @@ static inline void dcbst(void *addr)
>>  {
>>  	__asm__ __volatile__ ("dcbst %y0" : : "Z"(*(u8 *)addr) : "memory");
>>  }
>> +
>> +static inline void icbi(void *addr)
>> +{
>> +	__asm__ __volatile__ ("icbi 0, %0" : : "r"(addr) : "memory");
>
> I think "__asm__ __volatile__" is deprecated. Use "asm volatile" instead.

Yes please.

>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>> index 9191a66b3bc5..cd540123874d 100644
>> --- a/arch/powerpc/mm/mem.c
>> +++ b/arch/powerpc/mm/mem.c
>> @@ -321,6 +321,105 @@ void free_initmem(void)
>>  	free_initmem_default(POISON_FREE_INITMEM);
>>  }
>>
>> +/*
>> + * Warning: This macro will perform an early return if the CPU has
>> + * a coherent icache. The intent is is call this early in function,
>> + * and handle the non-coherent icache variant afterwards.
>> + *
>> + * For a snooping icache, we still need a dummy icbi to purge all the
>> + * prefetched instructions from the ifetch buffers.
>> + * We also need a sync before the icbi to order the the actual stores
>> + * to memory that might have modified instructions with the icbi.
>> + */
>> +#define flush_coherent_icache_or_return(addr) {			\
>> +	if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {		\
>> +		mb(); /* sync */				\
>> +		icbi(addr);					\
>> +		mb(); /* sync */				\
>> +		isync();					\
>> +		return;						\
>> +	}							\
>> +}
>
> I hate this kind of awful macro which kills code readability.

Yes I agree.

> Please do something like
>
> static bool flush_coherent_icache_or_return(unsigned long addr)
> {
> 	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
> 		return false;
>
> 	mb(); /* sync */
> 	icbi(addr);
> 	mb(); /* sync */
> 	isync();
> 	return true;
> }
>
> then callers will do:
>
> if (flush_coherent_icache_or_return(addr))
> 	return;

I don't think it needs the "_or_return" in the name. eg, it can just be:

  if (flush_coherent_icache(addr))
          return;

Which reads fine I think, ie. flush the coherent icache, and if that
succeeds return, else continue.

cheers
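
[Editorial note: putting the two suggestions together, a rough sketch of the shape being
discussed might look like the following. It assumes the icbi()/mb()/isync() helpers and the
CPU_FTR_COHERENT_ICACHE feature test quoted from the patch above; it is illustrative only,
not the code that was ultimately merged.]

  /* icbi() helper written with "asm volatile" instead of "__asm__ __volatile__". */
  static inline void icbi(void *addr)
  {
  	asm volatile ("icbi 0, %0" : : "r"(addr) : "memory");
  }

  /*
   * Returns true if the icache is coherent and the dummy sync/icbi/sync/isync
   * sequence was enough, so the caller can return early; returns false if the
   * caller still needs to do the full non-coherent flush.
   */
  static bool flush_coherent_icache(unsigned long addr)
  {
  	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
  		return false;

  	mb();	/* sync */
  	icbi((void *)addr);
  	mb();	/* sync */
  	isync();
  	return true;
  }

  void flush_icache_range(unsigned long start, unsigned long stop)
  {
  	if (flush_coherent_icache(start))
  		return;

  	/* ... non-coherent dcbst/icbi loop over [start, stop) would go here ... */
  }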