From: Dan Williams
Date: Fri, 7 Apr 2017 16:51:15 -0700
Subject: Re: [PATCH] x86, pmem: fix broken __copy_user_nocache cache-bypass assumptions
To: "Kani, Toshimitsu"
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, jmoyer@redhat.com, tglx@linutronix.de, hch@lst.de, stable@vger.kernel.org, viro@zeniv.linux.org.uk, x86@kernel.org, mawilcox@microsoft.com, hpa@zytor.com, mingo@redhat.com, ross.zwisler@linux.intel.com, jack@suse.cz
In-Reply-To: <1491586851.9118.33.camel@hpe.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 7, 2017 at 10:41 AM, Kani, Toshimitsu wrote:
> On Thu, 2017-04-06 at 13:59 -0700, Dan Williams wrote:
>> Before we rework the "pmem api" to stop abusing __copy_user_nocache()
>> for memcpy_to_pmem() we need to fix cases where we may strand dirty
>> data in the cpu cache. The problem occurs when copy_from_iter_pmem()
>> is used for arbitrary data transfers from userspace. There is no
>> guarantee that these transfers, performed by dax_iomap_actor(), will
>> have aligned destinations or aligned transfer lengths. Backstop the
>> usage of __copy_user_nocache() with explicit cache management in
>> these unaligned cases.
>>
>> Yes, copy_from_iter_pmem() is now too big for an inline, but
>> addressing that is saved for a later patch that moves the entirety of
>> the "pmem api" into the pmem driver directly.
>
> The change looks good to me. Should we also avoid cache flushing in
> the case of size=4B & dest aligned by 4B?

Yes, since you fixed the 4B aligned case we should skip cache flushing
in that case. I'll send a v2.