Subject: Re: [PATCH] virtio pmem: fix async flush ordering
From: Dan Williams
To: Pankaj Gupta
Cc: Jeff Moyer, linux-nvdimm, Linux Kernel Mailing List, Linux ACPI,
    Vishal L Verma, Dave Jiang, Ira Weiny, "Rafael J. Wysocki", Len Brown,
    Vivek Goyal, Keith Busch
Date: Fri, 22 Nov 2019 14:52:25 -0800
In-Reply-To: <838611538.35971353.1574401020319.JavaMail.zimbra@redhat.com>
References: <20191120092831.6198-1-pagupta@redhat.com>
 <1617854972.35808055.1574323227395.JavaMail.zimbra@redhat.com>
 <560894997.35969622.1574397521533.JavaMail.zimbra@redhat.com>
 <838611538.35971353.1574401020319.JavaMail.zimbra@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Nov 21, 2019 at 9:37 PM Pankaj Gupta wrote:
>
> > > > > > > I added that and was about to push this out, but what about the fact
> > > > > > > that now the guest will synchronously wait for flushing to occur. The
> > > > > > > goal of the child bio was to allow that to be an I/O wait with
> > > > > > > overlapping I/O, or at least not blocking the submission thread. Does
> > > > > > > the block layer synchronously wait for PREFLUSH requests?
> > > > > > > If not, I
> > > > > > > think a synchronous wait is going to be a significant performance
> > > > > > > regression. Are there any numbers to accompany this change?
> > > > > >
> > > > > > Why not just swap the parent/child relationship in the PREFLUSH case?
> > > > >
> > > > > If we are already inside the parent bio's "make_request" function and we
> > > > > create the child bio there, how exactly will we swap the parent/child
> > > > > relationship for the PREFLUSH case?
> > > > >
> > > > > The child bio is queued after the parent bio completes.
> > > >
> > > > Sorry, I didn't quite mean with bio_split, but issuing another request
> > > > in front of the real bio. See md_flush_request() for inspiration.
> > >
> > > o.k. Thank you. Will try to post a patch today to be considered for 5.4.
> >
> > I think it is too late for v5.4-final, but we can get it into the
> > -stable queue. Let's take the time to do it right and get some testing
> > on it.
>
> Sure.
>
> Just sharing a probable patch for early feedback; am I doing it
> correctly? I will test it thoroughly.
>
> Thanks,
> Pankaj
>
> ========
>
> diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
> index 10351d5b49fa..c683e0e2515c 100644
> --- a/drivers/nvdimm/nd_virtio.c
> +++ b/drivers/nvdimm/nd_virtio.c
> @@ -112,6 +112,12 @@ int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
>         bio_copy_dev(child, bio);
>         child->bi_opf = REQ_PREFLUSH;
>         child->bi_iter.bi_sector = -1;
> +
> +       if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
> +               struct request_queue *q = bio->bi_disk->queue;
> +               q->make_request_fn(q, child);
> +               return 0;
> +       }
>         bio_chain(child, bio);
>         submit_bio(child);

In the md case there is a lower level device to submit to.
In this case I expect you would:

- create a flush workqueue
- queue the bio to that workqueue and wait for any previous flush
  request to complete (md_flush_request() does this)
- run virtio_pmem_flush()
- complete the original bio

Is there a way to make virtio_pmem_flush() get an interrupt when the
flush is complete rather than synchronously waiting? That way, if you
get a storm of flush requests, you can coalesce them like
md_flush_request() does.
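[Editor's sketch] To make the coalescing idea above concrete, here is a minimal single-threaded userspace C model of md_flush_request()-style batching. The `flush_ctx` structure and all function names here are invented for illustration and are not the kernel API; the real implementation would hang this state off the pmem device and drive it from a workqueue and the virtio completion interrupt.

```c
/* Toy model of md_flush_request()-style coalescing: one in-flight
 * backend flush serves every request queued before it started; requests
 * arriving while it runs are batched for the next flush. All names are
 * illustrative, not kernel API. */
struct flush_ctx {
	int in_flight;        /* requests covered by the running flush */
	int pending;          /* requests waiting for the next flush */
	int backend_flushes;  /* times we actually hit the device */
	int completed;        /* bios we have "ended" */
};

/* Start a backend flush covering everything currently pending (stands
 * in for kicking virtio_pmem_flush() from a workqueue). */
static void start_batch(struct flush_ctx *c)
{
	c->in_flight = c->pending;
	c->pending = 0;
	/* the device flush would be issued here */
}

/* A REQ_PREFLUSH bio arrives from the block layer. */
static void submit_flush(struct flush_ctx *c)
{
	c->pending++;
	if (!c->in_flight)
		start_batch(c);
	/* else: coalesce behind the flush already running */
}

/* Completion path: the device finished one flush (in the real driver,
 * this would run from the virtio completion interrupt). */
static void flush_done(struct flush_ctx *c)
{
	c->backend_flushes++;
	c->completed += c->in_flight;  /* bio_endio() each batched waiter */
	c->in_flight = 0;
	if (c->pending)
		start_batch(c);        /* one more flush drains the backlog */
}
```

With this scheme, five flush requests arriving back to back cost two device flushes rather than five: the first request starts a flush, the other four queue behind it, and the second flush completes them all. Requests that arrive after a flush has started deliberately wait for the *next* one, since a flush already in flight cannot cover their writes.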