Subject: Re: [Virtio-fs] [PATCH 08/18] virtiofs: Drain all pending requests during ->remove time
From: piaojun
To: Vivek Goyal
Date: Sun, 8 Sep 2019 18:11:13 +0800
In-Reply-To: <20190905194859.16219-9-vgoyal@redhat.com>
References: <20190905194859.16219-1-vgoyal@redhat.com> <20190905194859.16219-9-vgoyal@redhat.com>

On 2019/9/6 3:48, Vivek Goyal wrote:
> When device is going away, drain all pending requests.
>
> Signed-off-by: Vivek Goyal
> ---
>  fs/fuse/virtio_fs.c | 83 ++++++++++++++++++++++++++++-----------------
>  1 file changed, 51 insertions(+), 32 deletions(-)
>
> diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
> index 90e7b2f345e5..d5730a50b303 100644
> --- a/fs/fuse/virtio_fs.c
> +++ b/fs/fuse/virtio_fs.c
> @@ -63,6 +63,55 @@ static inline struct fuse_pqueue *vq_to_fpq(struct virtqueue *vq)
>  	return &vq_to_fsvq(vq)->fud->pq;
>  }
>
> +static void virtio_fs_drain_queue(struct virtio_fs_vq *fsvq)
> +{
> +	WARN_ON(fsvq->in_flight < 0);
> +
> +	/* Wait for in flight requests to finish.*/

A blank space is missing after *finish.*, i.e. the comment should read
/* Wait for in flight requests to finish. */.

> +	while (1) {
> +		spin_lock(&fsvq->lock);
> +		if (!fsvq->in_flight) {
> +			spin_unlock(&fsvq->lock);
> +			break;
> +		}
> +		spin_unlock(&fsvq->lock);
> +		usleep_range(1000, 2000);
> +	}
> +
> +	flush_work(&fsvq->done_work);
> +	flush_delayed_work(&fsvq->dispatch_work);
> +}
> +
> +static inline void drain_hiprio_queued_reqs(struct virtio_fs_vq *fsvq)

Should we add a *virtio_fs* prefix to this function? And I wonder if
there are only forget reqs to drain here. Maybe we should call it
*virtio_fs_drain_queued_forget_reqs*, or something containing
*forget_reqs*; a rough sketch of the rename is at the end of this mail.

Thanks,
Jun

> +{
> +	struct virtio_fs_forget *forget;
> +
> +	spin_lock(&fsvq->lock);
> +	while (1) {
> +		forget = list_first_entry_or_null(&fsvq->queued_reqs,
> +					struct virtio_fs_forget, list);
> +		if (!forget)
> +			break;
> +		list_del(&forget->list);
> +		kfree(forget);
> +	}
> +	spin_unlock(&fsvq->lock);
> +}
> +
> +static void virtio_fs_drain_all_queues(struct virtio_fs *fs)
> +{
> +	struct virtio_fs_vq *fsvq;
> +	int i;
> +
> +	for (i = 0; i < fs->nvqs; i++) {
> +		fsvq = &fs->vqs[i];
> +		if (i == VQ_HIPRIO)
> +			drain_hiprio_queued_reqs(fsvq);
> +
> +		virtio_fs_drain_queue(fsvq);
> +	}
> +}
> +
>  /* Add a new instance to the list or return -EEXIST if tag name exists*/
>  static int virtio_fs_add_instance(struct virtio_fs *fs)
>  {
> @@ -511,6 +560,7 @@ static void virtio_fs_remove(struct virtio_device *vdev)
>  	struct virtio_fs *fs = vdev->priv;
>
>  	virtio_fs_stop_all_queues(fs);
> +	virtio_fs_drain_all_queues(fs);
>  	vdev->config->reset(vdev);
>  	virtio_fs_cleanup_vqs(vdev, fs);
>
> @@ -865,37 +915,6 @@ __releases(fiq->waitq.lock)
>  	}
>  }
>
> -static void virtio_fs_flush_hiprio_queue(struct virtio_fs_vq *fsvq)
> -{
> -	struct virtio_fs_forget *forget;
> -
> -	WARN_ON(fsvq->in_flight < 0);
> -
> -	/* Go through pending forget requests and free them */
> -	spin_lock(&fsvq->lock);
> -	while (1) {
> -		forget = list_first_entry_or_null(&fsvq->queued_reqs,
> -					struct virtio_fs_forget, list);
> -		if (!forget)
> -			break;
> -		list_del(&forget->list);
> -		kfree(forget);
> -	}
> -
> -	spin_unlock(&fsvq->lock);
> -
> -	/* Wait for in flight requests to finish.*/
> -	while (1) {
> -		spin_lock(&fsvq->lock);
> -		if (!fsvq->in_flight) {
> -			spin_unlock(&fsvq->lock);
> -			break;
> -		}
> -		spin_unlock(&fsvq->lock);
> -		usleep_range(1000, 2000);
> -	}
> -}
> -
>  const static struct fuse_iqueue_ops virtio_fs_fiq_ops = {
>  	.wake_forget_and_unlock		= virtio_fs_wake_forget_and_unlock,
>  	.wake_interrupt_and_unlock	= virtio_fs_wake_interrupt_and_unlock,
> @@ -988,7 +1007,7 @@ static void virtio_kill_sb(struct super_block *sb)
>  		spin_lock(&fsvq->lock);
>  		fsvq->connected = false;
>  		spin_unlock(&fsvq->lock);
> -		virtio_fs_flush_hiprio_queue(fsvq);
> +		virtio_fs_drain_all_queues(vfs);
>
>  	fuse_kill_sb_anon(sb);
>  	virtio_fs_free_devs(vfs);
>
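As mentioned above, here is a rough sketch of the suggested rename. The
body is copied unchanged from the patch; only the name differs, and it
is untested:

static inline void virtio_fs_drain_queued_forget_reqs(struct virtio_fs_vq *fsvq)
{
	struct virtio_fs_forget *forget;

	/* Free every forget request still sitting on the queued_reqs list */
	spin_lock(&fsvq->lock);
	while (1) {
		forget = list_first_entry_or_null(&fsvq->queued_reqs,
						  struct virtio_fs_forget, list);
		if (!forget)
			break;
		list_del(&forget->list);
		kfree(forget);
	}
	spin_unlock(&fsvq->lock);
}

The caller in virtio_fs_drain_all_queues() would then use the new name
in place of drain_hiprio_queued_reqs(fsvq).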