From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ross Zwisler, Dan Williams
Subject: [PATCH 4.17 146/220] libnvdimm, pmem: Unconditionally deep flush on *sync
Date: Sun, 1 Jul 2018 18:22:50 +0200
Message-Id: <20180701160914.407149719@linuxfoundation.org>
In-Reply-To: <20180701160908.272447118@linuxfoundation.org>
References: <20180701160908.272447118@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ross Zwisler

commit ce7f11a230d5b7165480b96c0cc7a90358b5b5e2 upstream.
Prior to this commit we would only do a "deep flush" (have nvdimm_flush()
write to each of the flush hints for a region) in response to an
msync/fsync/sync call if nvdimm_has_cache() returned true at the time we
were setting up the request queue.  This happens due to the write cache
value passed in to blk_queue_write_cache(), which then causes the block
layer to send down BIOs with REQ_FUA and REQ_PREFLUSH set.

We do have a "write_cache" sysfs entry for namespaces, i.e.:

  /sys/bus/nd/devices/pfn0.1/block/pmem0/dax/write_cache

which can be used to control whether or not the kernel thinks a given
namespace has a write cache, but this didn't modify the deep flush
behavior that we set up when the driver was initialized.  Instead, it
only modified whether or not DAX would flush CPU caches via dax_flush()
in response to *sync calls.

Simplify this by making the *sync deep flush always happen, regardless of
the write cache setting of a namespace.  The DAX CPU cache flushing will
still be controlled by the write_cache setting of the namespace.
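The before/after behavior described above can be modeled in a few lines of Python (a sketch for illustration only; `deep_flush_on_sync` and `dax_cpu_cache_flush` are hypothetical names standing in for the kernel paths, not real APIs):

```python
def deep_flush_on_sync(write_cache: bool, patched: bool) -> bool:
    """Does *sync cause nvdimm_flush() to write the region's flush hints?"""
    if patched:
        return True        # after the patch: deep flush is unconditional
    return write_cache     # before: tied to nvdimm_has_cache() at queue setup

def dax_cpu_cache_flush(write_cache: bool) -> bool:
    """Does DAX flush CPU caches via dax_flush() on *sync?"""
    return write_cache     # unchanged by the patch: still follows write_cache

# With write_cache disabled, only the deep-flush path changes:
assert deep_flush_on_sync(write_cache=False, patched=True) is True
assert deep_flush_on_sync(write_cache=False, patched=False) is False
assert dax_cpu_cache_flush(write_cache=False) is False
```

The model makes the asymmetry explicit: after the patch, the write_cache knob controls only the DAX CPU-cache flush, while the deep flush on *sync always happens.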
Cc:
Fixes: 5fdf8e5ba566 ("libnvdimm: re-enable deep flush for pmem devices via fsync()")
Signed-off-by: Ross Zwisler
Signed-off-by: Dan Williams
Signed-off-by: Greg Kroah-Hartman

---
 drivers/nvdimm/pmem.c |    7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -299,7 +299,7 @@ static int pmem_attach_disk(struct devic
 {
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
 	struct nd_region *nd_region = to_nd_region(dev->parent);
-	int nid = dev_to_node(dev), fua, wbc;
+	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
 	struct resource bb_res;
 	struct nd_pfn *nd_pfn = NULL;
@@ -335,7 +335,6 @@ static int pmem_attach_disk(struct devic
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 		fua = 0;
 	}
-	wbc = nvdimm_has_cache(nd_region);

 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -382,7 +381,7 @@ static int pmem_attach_disk(struct devic
 		return PTR_ERR(addr);
 	pmem->virt_addr = addr;

-	blk_queue_write_cache(q, wbc, fua);
+	blk_queue_write_cache(q, true, fua);
 	blk_queue_make_request(q, pmem_make_request);
 	blk_queue_physical_block_size(q, PAGE_SIZE);
 	blk_queue_logical_block_size(q, pmem_sector_size(ndns));
@@ -413,7 +412,7 @@ static int pmem_attach_disk(struct devic
 		put_disk(disk);
 		return -ENOMEM;
 	}
-	dax_write_cache(dax_dev, wbc);
+	dax_write_cache(dax_dev, nvdimm_has_cache(nd_region));
 	pmem->dax_dev = dax_dev;

 	gendev = disk_to_dev(disk);