From: Dan Williams
Date: Wed, 14 Jun 2017 09:49:29 -0700
Subject: Re: [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm
To: Jan Kara
Cc: "linux-nvdimm@lists.01.org", dm-devel@redhat.com, Matthew Wilcox,
 X86 ML, "linux-kernel@vger.kernel.org", Christoph Hellwig, Jeff Moyer,
 Ingo Molnar, "Oliver O'Halloran", Al Viro, "H. Peter Anvin",
 linux-fsdevel, Thomas Gleixner, Ross Zwisler
In-Reply-To: <20170614105425.GD21506@quack2.suse.cz>

On Wed, Jun 14, 2017 at 3:54 AM, Jan Kara wrote:
> On Fri 09-06-17 13:24:29, Dan Williams wrote:
>> With all calls to this routine redirected through the pmem driver, we
>> can kill the pmem API indirection. arch_wb_cache_pmem() is now
>> optionally supplied by the arch-specific asm/pmem.h. Same as before,
>> pmem flushing is only defined for x86_64, but it is straightforward to
>> add other archs in the future.
>>
>> Cc:
>> Cc: Jan Kara
>> Cc: Jeff Moyer
>> Cc: Ingo Molnar
>> Cc: Christoph Hellwig
>> Cc: "H. Peter Anvin"
>> Cc: Thomas Gleixner
>> Cc: Oliver O'Halloran
>> Cc: Matthew Wilcox
>> Cc: Ross Zwisler
>> Signed-off-by: Dan Williams
>
> Looks good to me. Just one question below...
>
>> -/**
>> - * arch_wb_cache_pmem - write back a cache range with CLWB
>> - * @vaddr: virtual start address
>> - * @size: number of bytes to write back
>> - *
>> - * Write back a cache range using the CLWB (cache line write back)
>> - * instruction. Note that @size is internally rounded up to be cache
>> - * line size aligned.
>> - */
>>  static inline void arch_wb_cache_pmem(void *addr, size_t size)
>>  {
>> -	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
>> -	unsigned long clflush_mask = x86_clflush_size - 1;
>> -	void *vend = addr + size;
>> -	void *p;
>> -
>> -	for (p = (void *)((unsigned long)addr & ~clflush_mask);
>> -	     p < vend; p += x86_clflush_size)
>> -		clwb(p);
>> +	clean_cache_range(addr, size);
>>  }
>
> So this will make compilation break on 32-bit x86 as it does not define
> clean_cache_range(). Do we enforce somewhere that we are on x86_64 when
> pmem is enabled?

Yes, this is enforced by:

    select ARCH_HAS_PMEM_API if X86_64

...in arch/x86/Kconfig. We fall back to a dummy arch_wb_cache_pmem()
implementation and emit this warning for !ARCH_HAS_PMEM_API archs:

    "nd_pmem namespace0.0: unable to guarantee persistence of writes"
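
For reference, clean_cache_range() is just the old header loop hoisted
into a shared helper so the same CLWB flush can serve other callers. A
sketch reconstructed from the removed lines in the diff above; only the
loop body is confirmed by this mail, the non-static linkage and any file
placement are assumptions:

    /*
     * Write back [addr, addr + size) with CLWB, rounding the range out
     * to cache line boundaries exactly as the removed inline did. Needs
     * <asm/special_insns.h> for clwb() and <asm/processor.h> for
     * boot_cpu_data.
     */
    void clean_cache_range(void *addr, size_t size)
    {
            u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
            unsigned long clflush_mask = x86_clflush_size - 1;
            void *vend = addr + size;
            void *p;

            for (p = (void *)((unsigned long)addr & ~clflush_mask);
                 p < vend; p += x86_clflush_size)
                    clwb(p);
    }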
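
And the fallback described above would look roughly like this; a sketch
assuming CONFIG_ARCH_HAS_PMEM_API gates a no-op stub and that the pmem
driver warns at attach time. The guard and the call site are assumptions;
only the warning text is quoted from the mail above:

    /* Stub for archs without ARCH_HAS_PMEM_API: flushing is a no-op. */
    #ifndef CONFIG_ARCH_HAS_PMEM_API
    static inline void arch_wb_cache_pmem(void *addr, size_t size)
    {
            /* no CLWB equivalent; persistence cannot be guaranteed */
    }
    #endif

    /* In the pmem driver attach path, make that limitation visible: */
    if (!IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
            dev_warn(dev, "unable to guarantee persistence of writes\n");

dev_warn() prefixes the driver and device names, which yields the
"nd_pmem namespace0.0: ..." form of the message quoted above.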