From: Dan Williams
Date: Wed, 16 Sep 2020 10:40:13 -0700
Subject: Re: [PATCH] pmem: export the symbols __copy_user_flushcache and __copy_from_user_flushcache
To: Mikulas Patocka
Cc: Linus Torvalds, Alexander Viro, Andrew Morton, Vishal Verma, Dave Jiang, Ira Weiny, Matthew Wilcox, Jan Kara, Eric Sandeen, Dave Chinner, "Kani, Toshi", "Norton, Scott J", "Tadakamadla, Rajesh (DCIG/CDI/HPS Perf)", Linux Kernel Mailing List, linux-fsdevel, linux-nvdimm
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 16, 2020 at 10:24 AM Mikulas Patocka wrote:
>
> On Wed, 16 Sep 2020, Dan Williams wrote:
>
> > On Wed, Sep 16, 2020 at 3:57 AM Mikulas Patocka wrote:
> > >
> > > I'm submitting this patch that adds the required exports (so that we
> > > could use __copy_from_user_flushcache on x86, arm64 and powerpc).
> > > Please queue it for the next merge window.
> >
> > Why? This should go with the first user, and it's not clear that it
> > needs to be relative to the current dax_operations export scheme.
>
> Before nvfs gets included in the kernel, I need to distribute it as a
> module, so exporting these symbols would make my maintenance easier.
> But if you don't want to export them now, no problem, I can just copy
> __copy_user_flushcache from the kernel to the module.

That sounds like a better plan than exporting symbols with no in-kernel
consumer.

> > My first question about nvfs is how it compares to a daxfs with
> > executables and other binaries configured to use page cache with the
> > new per-file dax facility?
>
> nvfs is faster than dax-based filesystems on metadata-heavy operations
> because it doesn't have the overhead of the buffer cache and bios. See
> this: http://people.redhat.com/~mpatocka/nvfs/BENCHMARKS

...and is that metadata problem intractable upstream? Christoph poked at
bypassing the block layer for xfs metadata operations [1]; I just have
not had time to carry that further.

[1]: "xfs: use dax_direct_access for log writes", although it seems he
has dropped that branch from his xfs.git.