Message-ID: <55992CEB.60004@plexistor.com>
Date: Sun, 05 Jul 2015 16:11:07 +0300
From: Boaz Harrosh
To: Matthew Wilcox, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Alexander Viro
CC: Matthew Wilcox
Subject: Re: [PATCH v2 2/6] dax: Use copy_from_iter_nocache
References: <1435934443-17090-1-git-send-email-matthew.r.wilcox@intel.com>
	<1435934443-17090-3-git-send-email-matthew.r.wilcox@intel.com>
In-Reply-To: <1435934443-17090-3-git-send-email-matthew.r.wilcox@intel.com>

On 07/03/2015 05:40 PM, Matthew Wilcox wrote:
> From: Matthew Wilcox
> 
> When userspace does a write, there's no need for the written data to
> pollute the CPU cache. This matches the original XIP code.
> 
> Signed-off-by: Matthew Wilcox
> ---
>  fs/dax.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 99b5fbc..eaa9e06 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -155,7 +155,7 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter,
>  	}
>  
>  	if (iov_iter_rw(iter) == WRITE)
> -		len = copy_from_iter(addr, max - pos, iter);
> +		len = copy_from_iter_nocache(addr, max - pos, iter);
>  	else if (!hole)
>  		len = copy_to_iter(addr, max - pos, iter);
>  	else
> 

With the current ioremap_nocache mapping of pmem, none of this matters for
pmem itself. For brd it does matter: we have been running some measurements,
and a regular ext4 (no DAX) benchmark gains 6-16% in performance with the
change above, and DAX is almost 2x faster than no DAX. This is why the
network guys have been using non-temporal copies for a long time. So I'd
say this is a good default for any page-cache write. (Think about it, it
makes sense: 95% of the time we will flush these pages to real memory
before DMA anyway.)

For pmem with any sane cached mapping (we actually use page-struct pmem),
DAX needs the following for writes to actually work (persist):

static size_t copy_from_iter_nt(void *addr, size_t bytes, struct iov_iter *ii)
{
	size_t ret = copy_from_iter_nocache(addr, bytes, ii);

	if (unlikely((ii->type & ITER_BVEC) || (ii->type & ITER_KVEC))) {
		/* FIXME: copy_from_iter_nocache() does a regular (cached)
		 * copy for kernel buffers (BVEC or KVEC). Until that is
		 * fixed, do a cl_flush() here.
		 */
		cl_flush(addr, bytes, false);
	} else {
		/* copy_from_iter_nocache() only persists 8-byte-aligned
		 * words. Let's persist the remaining unaligned edges.
		 */
		if (unlikely((ulong)addr & 0x7))
			cl_flush(addr, 1, false);
		if (unlikely((ulong)(addr + bytes) & 0x7))
			cl_flush(addr + bytes, 1, false);
	}

	return ret;
}

This is based on a cl_flush() helper that is not in the kernel. The FIXME
in the first branch could be addressed by Dan's memcpy_persistent() patches.

Cheers
Boaz
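
For reference, here is a minimal sketch of what a cl_flush() helper such as
the one used above could look like on x86. This is only an assumption, not
the actual out-of-tree implementation: it relies on the kernel's
clflush_cache_range(), and the third argument is guessed to be an optional
"fence afterwards" hint (redundant here, since clflush_cache_range() already
brackets its flush loop with memory barriers).

/*
 * Hypothetical x86 sketch of the out-of-tree cl_flush() referenced above.
 * The signature cl_flush(addr, bytes, fence) is taken from the callers in
 * copy_from_iter_nt(); the real helper may differ.
 */
#include <linux/types.h>
#include <asm/cacheflush.h>	/* clflush_cache_range() */

static void cl_flush(void *addr, size_t bytes, bool fence)
{
	/*
	 * clflush_cache_range() writes back (and invalidates) every cache
	 * line covering [addr, addr + bytes) and already issues memory
	 * barriers around its flush loop, so the @fence hint is a no-op
	 * in this sketch.
	 */
	(void)fence;
	clflush_cache_range(addr, bytes);
}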