From: Dan Williams
Date: Thu, 27 Sep 2018 08:55:56 -0700
Subject: Re: [PATCH 3/3] virtio-pmem: Add virtio pmem driver
To: Pankaj Gupta
Cc: Linux Kernel Mailing List, KVM list, Qemu Developers, linux-nvdimm,
    Jan Kara, Stefan Hajnoczi, Rik van Riel, Nitesh Narayan Lal,
    Kevin Wolf, Paolo Bonzini, "Zwisler, Ross", David Hildenbrand,
    Xiao Guangrong, Christoph Hellwig, "Michael S. Tsirkin",
    niteshnarayanlal@hotmail.com, lcapitulino@redhat.com,
    Igor Mammedov, Eric Blake
In-Reply-To: <435471901.16563045.1538053600799.JavaMail.zimbra@redhat.com>
References: <20180831133019.27579-1-pagupta@redhat.com>
    <20180831133019.27579-4-pagupta@redhat.com>
    <1204243972.15515798.1537782119951.JavaMail.zimbra@redhat.com>
    <435471901.16563045.1538053600799.JavaMail.zimbra@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Tsirkin" , niteshnarayanlal@hotmail.com, lcapitulino@redhat.com, Igor Mammedov , Eric Blake Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Sep 27, 2018 at 6:07 AM Pankaj Gupta wrote: [..] > > We are plugging VIRTIO based flush callback for virtio_pmem driver. If pmem > > driver (pmem_make_request) has to queue request we have to plug "blk_mq_ops" > > callbacks for corresponding VIRTIO vqs. AFAICU there is no existing > > multiqueue > > code merged for pmem driver yet, though i could see patches by Dave upstream. > > > > I thought about this and with current infrastructure "make_request" releases spinlock > and makes current thread/task. All Other threads are free to call 'make_request'/flush > and similarly wait by releasing the lock. Which lock are you referring? > This actually works like a queue of threads > waiting for notifications from host. > > Current pmem code do not have multiqueue support and I am not sure if core pmem code > needs it. Adding multiqueue support just for virtio-pmem and not for pmem in same driver > will be confusing or require alot of tweaking. Why does the pmem driver need to be converted to multiqueue support? > Could you please give your suggestions on this. I was expecting that flush requests that cannot be completed synchronously be placed on a queue and have bio_endio() called at a future time. I.e. use bio_chain() to manage the async portion of the flush request. This causes the guest block layer to just assume the bio was queued and will be completed at some point in the future.