Date: Tue, 17 Jul 2018 13:42:15 +0200
From: Dominique Martinet
To: jiangyiwen
Cc: Andrew Morton, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
	Linux Kernel Mailing List, v9fs-developer@lists.sourceforge.net
Subject: Re: [V9fs-developer] [PATCH v2] net/9p: Fix a deadlock case in the virtio transport
Message-ID: <20180717114215.GA14414@nautica>
References: <5B4DCD0A.8040600@huawei.com>
In-Reply-To: <5B4DCD0A.8040600@huawei.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

> Subject: net/9p: Fix a deadlock case in the virtio transport

I hadn't noticed in the v1, but how is that a deadlock fix?
The previous code doesn't look like it deadlocks to me; the commit
message should be made more accurate.

jiangyiwen wrote on Tue, Jul 17, 2018:
> When client has multiple threads that issue io requests
> all the time, and the server has a very good performance,
> it may cause cpu is running in the irq context for a long
> time because it can check virtqueue has buf in the *while*
> loop.
> 
> So we should keep chan->lock in the whole loop.
> 
> Signed-off-by: Yiwen Jiang
> ---
>  net/9p/trans_virtio.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
> index 05006cb..e5fea8b 100644
> --- a/net/9p/trans_virtio.c
> +++ b/net/9p/trans_virtio.c
> @@ -148,20 +148,15 @@ static void req_done(struct virtqueue *vq)
> 
>  	p9_debug(P9_DEBUG_TRANS, ": request done\n");
> 
> -	while (1) {
> -		spin_lock_irqsave(&chan->lock, flags);
> -		req = virtqueue_get_buf(chan->vq, &len);
> -		if (req == NULL) {
> -			spin_unlock_irqrestore(&chan->lock, flags);
> -			break;
> -		}
> -		chan->ring_bufs_avail = 1;
> -		spin_unlock_irqrestore(&chan->lock, flags);
> -		/* Wakeup if anyone waiting for VirtIO ring space. */
> -		wake_up(chan->vc_wq);
> +	spin_lock_irqsave(&chan->lock, flags);
> +	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
>  		if (len)
>  			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
>  	}
> +	chan->ring_bufs_avail = 1;

Do we have a guarantee that req_done is only called if there is at
least one buf to read?
For example, that there aren't two threads queueing the same callback,
where the first one reads everything and the second has nothing to
read?

If virtblk_done takes care of setting up a "req_done" bool to only
notify waiters if something has been done, I'd rather have a reason to
do things differently here, even if you can argue that nothing bad will
happen in case of a gratuitous wake_up (see the rough sketch at the end
of this mail for the kind of thing I mean).

> +	spin_unlock_irqrestore(&chan->lock, flags);
> +	/* Wakeup if anyone waiting for VirtIO ring space. */
> +	wake_up(chan->vc_wq);
>  }

Thanks,
-- 
Dominique
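
P.S.: purely for illustration, the kind of "did we actually reap
anything" flag I have in mind would look roughly like the untested
sketch of req_done() below -- the 'handled' name is made up here, and
this is not a claim about what virtio_blk does exactly:

static void req_done(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	unsigned int len;
	struct p9_req_t *req;
	unsigned long flags;
	bool handled = false;	/* set once we reap at least one buf */

	p9_debug(P9_DEBUG_TRANS, ": request done\n");

	spin_lock_irqsave(&chan->lock, flags);
	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
		handled = true;
		if (len)
			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
	}
	if (handled)
		chan->ring_bufs_avail = 1;
	spin_unlock_irqrestore(&chan->lock, flags);

	/* Only wake waiters for ring space if we actually freed some. */
	if (handled)
		wake_up(chan->vc_wq);
}

That way a spurious callback (one where virtqueue_get_buf() returns
NULL right away) does not wake anyone up.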