Subject: Re: [V9fs-developer] [PATCH v2] net/9p: Fix a deadlock case in the virtio transport
To: Dominique Martinet
References: <5B4DCD0A.8040600@huawei.com> <20180717114215.GA14414@nautica>
CC: Andrew Morton, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov, Linux Kernel Mailing List
From: jiangyiwen
Message-ID: <5B4DE09F.5000800@huawei.com>
Date: Tue, 17 Jul 2018 20:27:11 +0800
In-Reply-To: <20180717114215.GA14414@nautica>
On 2018/7/17 19:42, Dominique Martinet wrote:
>
>> Subject: net/9p: Fix a deadlock case in the virtio transport
>
> I hadn't noticed in the v1, but how is that a deadlock fix?
> The previous code doesn't look like it deadlocks to me, the commit
> message is more correct.
>

Hi Dominique,

If the CPU stays in irq context for a long time, the NMI watchdog will
detect a hard lockup on that CPU and then trigger a kernel panic. That
is why I used this subject line, to underline that scenario.

> jiangyiwen wrote on Tue, Jul 17, 2018:
>> When client has multiple threads that issue io requests
>> all the time, and the server has a very good performance,
>> it may cause cpu is running in the irq context for a long
>> time because it can check virtqueue has buf in the *while*
>> loop.
>>
>> So we should keep chan->lock in the whole loop.
>>
>> Signed-off-by: Yiwen Jiang
>> ---
>>  net/9p/trans_virtio.c | 17 ++++++-----------
>>  1 file changed, 6 insertions(+), 11 deletions(-)
>>
>> diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
>> index 05006cb..e5fea8b 100644
>> --- a/net/9p/trans_virtio.c
>> +++ b/net/9p/trans_virtio.c
>> @@ -148,20 +148,15 @@ static void req_done(struct virtqueue *vq)
>>
>>  	p9_debug(P9_DEBUG_TRANS, ": request done\n");
>>
>> -	while (1) {
>> -		spin_lock_irqsave(&chan->lock, flags);
>> -		req = virtqueue_get_buf(chan->vq, &len);
>> -		if (req == NULL) {
>> -			spin_unlock_irqrestore(&chan->lock, flags);
>> -			break;
>> -		}
>> -		chan->ring_bufs_avail = 1;
>> -		spin_unlock_irqrestore(&chan->lock, flags);
>> -		/* Wakeup if anyone waiting for VirtIO ring space. */
>> -		wake_up(chan->vc_wq);
>> +	spin_lock_irqsave(&chan->lock, flags);
>> +	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
>>  		if (len)
>>  			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
>>  	}
>> +	chan->ring_bufs_avail = 1;
>
> Do we have a guarantee that req_done is only called if there is at least
> one buf to read?
> For example, that there isn't two threads queueing the same callback but
> the first one reads everything and the second has nothing to read?
>
> If virtblk_done takes care of setting up a "req_done" bool to only
> notify waiters if something has been done I'd rather have a reason to do
> differently, even if you can argue that nothing bad will happen in case
> of a gratuitous wake_up
>

Sorry, I don't fully understand what you mean. I think that even if the
ring buffer has no data, the extra wakeup will not cause any other
problem, and the performance cost can be ignored.

Thanks.

>> +	spin_unlock_irqrestore(&chan->lock, flags);
>> +	/* Wakeup if anyone waiting for VirtIO ring space. */
>> +	wake_up(chan->vc_wq);
>>  }
>
> Thanks,
>
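
P.S. If it helps, here is a rough, untested sketch of the
virtblk_done-style variant you describe, i.e. only waking the ring-space
waiters when at least one buffer was actually reclaimed. The local
need_wakeup flag name is only for illustration:

static void req_done(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	unsigned int len;
	struct p9_req_t *req;
	bool need_wakeup = false;
	unsigned long flags;

	p9_debug(P9_DEBUG_TRANS, ": request done\n");

	spin_lock_irqsave(&chan->lock, flags);
	/* Drain every completed buffer under a single lock acquisition. */
	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
		chan->ring_bufs_avail = 1;
		need_wakeup = true;
		if (len)
			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
	}
	spin_unlock_irqrestore(&chan->lock, flags);

	/* Wake ring-space waiters only if something was reclaimed. */
	if (need_wakeup)
		wake_up(chan->vc_wq);
}

This would keep the single lock acquisition from the patch while
avoiding a gratuitous wake_up when the callback finds nothing to read.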