Date: Thu, 27 Sep 2018 09:06:40 -0400 (EDT)
From: Pankaj Gupta
To: Dan Williams
Cc: Linux Kernel Mailing List, KVM list, Qemu Developers, linux-nvdimm,
    Jan Kara,
    Stefan Hajnoczi, Rik van Riel, Nitesh Narayan Lal, Kevin Wolf,
    Paolo Bonzini, Ross Zwisler, David Hildenbrand, Xiao Guangrong,
    Christoph Hellwig, "Michael S. Tsirkin", niteshnarayanlal@hotmail.com,
    lcapitulino@redhat.com, Igor Mammedov, Eric Blake
Message-ID: <435471901.16563045.1538053600799.JavaMail.zimbra@redhat.com>
In-Reply-To: <1204243972.15515798.1537782119951.JavaMail.zimbra@redhat.com>
References: <20180831133019.27579-1-pagupta@redhat.com>
 <20180831133019.27579-4-pagupta@redhat.com>
 <1204243972.15515798.1537782119951.JavaMail.zimbra@redhat.com>
Subject: Re: [PATCH 3/3] virtio-pmem: Add virtio pmem driver

Hello Dan,

> > > +/* The request submission function */
> > > +static int virtio_pmem_flush(struct nd_region *nd_region)
> > > +{
> > > +	int err;
[...]
> > > +	init_waitqueue_head(&req->host_acked);
> > > +	init_waitqueue_head(&req->wq_buf);
> > > +
> > > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > +	sg_init_one(&sg, req->name, strlen(req->name));
> > > +	sgs[0] = &sg;
> > > +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > > +	sgs[1] = &ret;
[...]
> > > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > +	/* When host has read the buffer, this completes via host_ack */
> > > +	wait_event(req->host_acked, req->done);
> >
> > Hmm, this seems awkward if this is called from pmem_make_request. If
> > we need to wait for completion that should be managed by the guest
> > block layer. I.e. make_request should just queue the request and then
> > trigger bio_endio() when the response comes back.
>
> We are plugging a VIRTIO-based flush callback into the virtio_pmem driver.
> If the pmem driver (pmem_make_request) has to queue requests, we would
> have to plug "blk_mq_ops" callbacks for the corresponding VIRTIO vqs.
> AFAICU there is no multiqueue code merged for the pmem driver yet, though
> I could see patches by Dave upstream.

I thought about this, and with the current infrastructure "make_request"
releases the spinlock and puts the current thread/task to sleep. All other
threads are free to call 'make_request'/flush and similarly wait after
releasing the lock. This effectively works as a queue of threads waiting
for notifications from the host.

The current pmem code does not have multiqueue support, and I am not sure
whether the core pmem code needs it. Adding multiqueue support just for
virtio-pmem and not for pmem in the same driver would be confusing or would
require a lot of tweaking.

Could you please give your suggestions on this?

Thanks,
Pankaj
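
P.S. To make the bio_endio() idea above concrete, here is a rough, untested
sketch of how an asynchronous completion path might look. This is only an
illustration of the suggestion, not code from the patch: the 'struct
virtio_pmem' fields (req_vq, pmem_lock), the request layout, and the names
virtio_pmem_flush_async()/virtio_pmem_host_ack() are all assumed for the
example.

#include <linux/bio.h>
#include <linux/blk_types.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>

/* Assumed device state; the field names are illustrative only. */
struct virtio_pmem {
	struct virtqueue *req_vq;
	spinlock_t	  pmem_lock;
};

/* Per-request state: carries the bio so the vq callback can complete it. */
struct virtio_pmem_request {
	struct bio *bio;	/* bio to complete when the host acks */
	int	    ret;	/* host's return status (in-buffer) */
	char	    name[16];	/* command string (out-buffer) */
};

/* Flush path: queue the request on the virtqueue and return immediately,
 * instead of sleeping in wait_event() until the host responds. */
static int virtio_pmem_flush_async(struct virtio_pmem *vpmem, struct bio *bio)
{
	struct scatterlist *sgs[2], sg, ret;
	struct virtio_pmem_request *req;
	unsigned long flags;
	int err;

	req = kmalloc(sizeof(*req), GFP_ATOMIC);
	if (!req)
		return -ENOMEM;

	req->bio = bio;
	strcpy(req->name, "FLUSH");

	sg_init_one(&sg, req->name, strlen(req->name));
	sgs[0] = &sg;
	sg_init_one(&ret, &req->ret, sizeof(req->ret));
	sgs[1] = &ret;

	spin_lock_irqsave(&vpmem->pmem_lock, flags);
	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
	if (err) {
		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
		kfree(req);
		return err;
	}
	virtqueue_kick(vpmem->req_vq);
	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);

	return 0;	/* the bio completes later, in the vq callback */
}

/* Virtqueue callback: complete the bio of every acked request. */
static void virtio_pmem_host_ack(struct virtqueue *vq)
{
	struct virtio_pmem *vpmem = vq->vdev->priv;
	struct virtio_pmem_request *req;
	unsigned long flags;
	unsigned int len;

	spin_lock_irqsave(&vpmem->pmem_lock, flags);
	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
		req->bio->bi_status = req->ret ? BLK_STS_IOERR : BLK_STS_OK;
		bio_endio(req->bio);
		kfree(req);
	}
	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
}

The trade-off versus the wait_event() version quoted above is that the
submitting thread never sleeps: per-request state replaces the implicit
queue of waiting threads, at the cost of one allocation per flush.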