Date: Fri, 22 Nov 2019 00:37:00 -0500 (EST)
From: Pankaj Gupta
To: Dan Williams
Cc: Jeff Moyer, linux-nvdimm, Linux Kernel Mailing List, Linux ACPI,
    Vishal L Verma, Dave Jiang, Ira Weiny, "Rafael J. Wysocki",
    Len Brown, Vivek Goyal, Keith Busch
Message-ID: <838611538.35971353.1574401020319.JavaMail.zimbra@redhat.com>
References: <20191120092831.6198-1-pagupta@redhat.com>
    <1617854972.35808055.1574323227395.JavaMail.zimbra@redhat.com>
    <560894997.35969622.1574397521533.JavaMail.zimbra@redhat.com>
Subject: Re: [PATCH] virtio pmem: fix async flush ordering

> > > > > > I added that and was about to push this out, but what about the
> > > > > > fact that now the guest will synchronously wait for flushing to
> > > > > > occur. The goal of the child bio was to allow that to be an I/O
> > > > > > wait with overlapping I/O, or at least not blocking the
> > > > > > submission thread. Does the block layer synchronously wait for
> > > > > > PREFLUSH requests? If not, I think a synchronous wait is going
> > > > > > to be a significant performance regression. Are there any
> > > > > > numbers to accompany this change?
> > > > >
> > > > > Why not just swap the parent/child relationship in the PREFLUSH
> > > > > case?
> > > >
> > > > If we are already inside the parent bio's "make_request" function
> > > > and we create the child bio there, how exactly would we swap the
> > > > parent/child relationship for the PREFLUSH case?
> > > >
> > > > The child bio is queued after the parent bio completes.
> > >
> > > Sorry, I didn't quite mean bio_split, but issuing another request
> > > in front of the real bio. See md_flush_request() for inspiration.
> >
> > o.k. Thank you. Will try to post a patch today to be considered for 5.4.
>
> I think it is too late for v5.4-final, but we can get it in the
> -stable queue. Let's take the time to do it right and get some testing
> on it.

Sure. Just sharing a probable patch for early feedback, to check whether I
am doing it correctly. I will test it thoroughly.

Thanks,
Pankaj

========

diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
index 10351d5b49fa..c683e0e2515c 100644
--- a/drivers/nvdimm/nd_virtio.c
+++ b/drivers/nvdimm/nd_virtio.c
@@ -112,6 +112,12 @@ int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
 	bio_copy_dev(child, bio);
 	child->bi_opf = REQ_PREFLUSH;
 	child->bi_iter.bi_sector = -1;
+
+	if (unlikely(bio->bi_opf & REQ_PREFLUSH)) {
+		struct request_queue *q = bio->bi_disk->queue;
+		q->make_request_fn(q, child);
+		return 0;
+	}
 	bio_chain(child, bio);
 	submit_bio(child);
 	return 0;