Date: Thu, 15 Jun 2017 10:11:24 +0200
From: Jan Kara
To: Dan Williams
Cc: Jan Kara, "linux-nvdimm@lists.01.org", dm-devel@redhat.com,
	Matthew Wilcox, X86 ML, "linux-kernel@vger.kernel.org",
	Christoph Hellwig, Jeff Moyer, Ingo Molnar, "Oliver O'Halloran",
	Al Viro, "H. Peter Anvin", linux-fsdevel, Thomas Gleixner,
	Ross Zwisler
Subject: Re: [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm
Message-ID: <20170615081124.GD1764@quack2.suse.cz>
References: <149703982465.20620.14881139332926778446.stgit@dwillia2-desk3.amr.corp.intel.com>
 <149703986971.20620.10303247412197996310.stgit@dwillia2-desk3.amr.corp.intel.com>
 <20170614105425.GD21506@quack2.suse.cz>
In-Reply-To:

On Wed 14-06-17 09:49:29, Dan Williams wrote:
> On Wed, Jun 14, 2017 at 3:54 AM, Jan Kara wrote:
> >> -/**
> >> - * arch_wb_cache_pmem - write back a cache range with CLWB
> >> - * @vaddr: virtual start address
> >> - * @size: number of bytes to write back
> >> - *
> >> - * Write back a cache range using the CLWB (cache line write back)
> >> - * instruction. Note that @size is internally rounded up to be cache
> >> - * line size aligned.
> >> - */
> >>  static inline void arch_wb_cache_pmem(void *addr, size_t size)
> >>  {
> >> -	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
> >> -	unsigned long clflush_mask = x86_clflush_size - 1;
> >> -	void *vend = addr + size;
> >> -	void *p;
> >> -
> >> -	for (p = (void *)((unsigned long)addr & ~clflush_mask);
> >> -	     p < vend; p += x86_clflush_size)
> >> -		clwb(p);
> >> +	clean_cache_range(addr, size);
> >>  }
> >
> > So this will break compilation on 32-bit x86, as it does not define
> > clean_cache_range(). Do we enforce somewhere that we are on x86_64
> > when pmem is enabled?
>
> Yes, this is enforced by:
>
>     select ARCH_HAS_PMEM_API if X86_64
>
> ...in arch/x86/Kconfig. We fall back to a dummy arch_wb_cache_pmem()
> implementation and emit this warning for !ARCH_HAS_PMEM_API archs:
>
>     "nd_pmem namespace0.0: unable to guarantee persistence of writes"

Aha, right. Feel free to add:

Reviewed-by: Jan Kara

								Honza
--
Jan Kara
SUSE Labs, CR
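
[Editor's note: for context, clean_cache_range() is essentially the clwb()
loop removed above, hoisted out of the inline header function into a shared
helper so that arch_wb_cache_pmem() and the other pmem copy paths can reuse
it. Below is a minimal sketch reconstructed from the removed lines quoted in
this mail; the file the helper lands in, its linkage/export, and the exact
signature in the final patch are assumptions, not taken from this thread.]

	/* Assumed headers: clwb() and boot_cpu_data live here on x86-64. */
	#include <asm/special_insns.h>
	#include <asm/processor.h>

	/*
	 * Sketch of the shared helper (assumed name and placement).
	 * Write back every cache line overlapping [addr, addr + size):
	 * round the start address down to a cache-line boundary, then
	 * issue CLWB per line until the end of the range is reached.
	 */
	void clean_cache_range(void *addr, size_t size)
	{
		u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
		unsigned long clflush_mask = x86_clflush_size - 1;
		void *vend = addr + size;
		void *p;

		for (p = (void *)((unsigned long)addr & ~clflush_mask);
		     p < vend; p += x86_clflush_size)
			clwb(p);
	}

	/* arch_wb_cache_pmem() then simply delegates, as in the diff above. */
	static inline void arch_wb_cache_pmem(void *addr, size_t size)
	{
		clean_cache_range(addr, size);
	}

[Since ARCH_HAS_PMEM_API is only selected for X86_64, 32-bit builds never
compile this call; per Dan's reply they take the dummy arch_wb_cache_pmem()
path and log "unable to guarantee persistence of writes" instead.]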