Date: Wed, 14 Jun 2017 12:54:25 +0200
From: Jan Kara
To: Dan Williams
Cc: linux-nvdimm@lists.01.org, Jan Kara, dm-devel@redhat.com,
	Matthew Wilcox, x86@kernel.org, linux-kernel@vger.kernel.org,
	hch@lst.de, Jeff Moyer, Ingo Molnar, "Oliver O'Halloran",
	viro@zeniv.linux.org.uk, "H. Peter Anvin",
	linux-fsdevel@vger.kernel.org, Thomas Gleixner, Ross Zwisler
Subject: Re: [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm
Message-ID: <20170614105425.GD21506@quack2.suse.cz>
References: <149703982465.20620.14881139332926778446.stgit@dwillia2-desk3.amr.corp.intel.com>
 <149703986971.20620.10303247412197996310.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <149703986971.20620.10303247412197996310.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.24 (2015-08-30)

On Fri 09-06-17 13:24:29, Dan Williams wrote:
> With all calls to this routine re-directed through the pmem driver, we
> can kill the pmem API indirection. arch_wb_cache_pmem() is now optionally
> supplied by the arch-specific asm/pmem.h. As before, pmem flushing is
> only defined for x86_64, but it is straightforward to add other archs in
> the future.
>
> Cc:
> Cc: Jan Kara
> Cc: Jeff Moyer
> Cc: Ingo Molnar
> Cc: Christoph Hellwig
> Cc: "H. Peter Anvin"
> Cc: Thomas Gleixner
> Cc: Oliver O'Halloran
> Cc: Matthew Wilcox
> Cc: Ross Zwisler
> Signed-off-by: Dan Williams

Looks good to me. Just one question below...

> -/**
> - * arch_wb_cache_pmem - write back a cache range with CLWB
> - * @vaddr: virtual start address
> - * @size: number of bytes to write back
> - *
> - * Write back a cache range using the CLWB (cache line write back)
> - * instruction. Note that @size is internally rounded up to be cache
> - * line size aligned.
> - */
>  static inline void arch_wb_cache_pmem(void *addr, size_t size)
>  {
> -	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
> -	unsigned long clflush_mask = x86_clflush_size - 1;
> -	void *vend = addr + size;
> -	void *p;
> -
> -	for (p = (void *)((unsigned long)addr & ~clflush_mask);
> -	     p < vend; p += x86_clflush_size)
> -		clwb(p);
> +	clean_cache_range(addr, size);
>  }

So this will make compilation break on 32-bit x86, as it does not define
clean_cache_range(). Do we enforce somewhere that we are on x86_64 when
pmem is enabled?

								Honza
--
Jan Kara
SUSE Labs, CR
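
clean_cache_range() itself is not visible in the quoted hunk. A minimal
sketch of such a helper, reconstructed from the removed arch_wb_cache_pmem()
body above and assuming the same x86-only clwb() and
boot_cpu_data.x86_clflush_size definitions (the actual helper introduced by
this series may differ), would be:

	/*
	 * Sketch only: write back a byte range at cache-line granularity.
	 * The start address is rounded down to its cache line, then clwb()
	 * is issued once per cache line up to addr + size.
	 */
	static void clean_cache_range(void *addr, size_t size)
	{
		u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
		unsigned long clflush_mask = x86_clflush_size - 1;
		void *vend = addr + size;
		void *p;

		for (p = (void *)((unsigned long)addr & ~clflush_mask);
		     p < vend; p += x86_clflush_size)
			clwb(p);
	}

If that helper is only built for x86_64, the 32-bit concern above comes down
to whether the pmem API is gated (e.g. by a Kconfig symbol selected only on
X86_64) so that 32-bit builds never reach arch_wb_cache_pmem().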