Subject: Re: [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport
From: jiangyiwen
To: Dominique Martinet
CC: Andrew Morton, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
 Linux Kernel Mailing List
Date: Sat, 14 Jul 2018 19:12:37 +0800
Message-ID: <5B49DAA5.3020600@huawei.com>
In-Reply-To: <20180714090502.GA16186@nautica>
References: <5B49B8CF.40709@huawei.com> <20180714090502.GA16186@nautica>

On 2018/7/14 17:05, Dominique Martinet wrote:
> jiangyiwen wrote on Sat, Jul 14, 2018:
>> When a client has multiple threads issuing I/O requests all the
>> time and the server is very fast, the cpu may end up running in
>> irq context for a long time, because the *while* loop keeps
>> finding buffers in the virtqueue.
>>
>> So we should keep holding chan->lock across the whole loop.
>
> Hmm, it is generally bad practice to hold a spin lock for long.
> In general, spin locks are meant to protect data, not code.
>
> I'd want some numbers to decide on this one, even if I think this
> particular case is safe (e.g. this cannot deadlock)
>

Actually, the loop will not hold the spin lock for long, because the
other threads will not issue new requests in this case. In addition,
virtio-blk and virtio-scsi use the same approach, so I guess they may
have run into this problem before as well.

>> Signed-off-by: Yiwen Jiang
>> ---
>>  net/9p/trans_virtio.c | 8 +++-----
>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
>> index 05006cb..9b0f5f2 100644
>> --- a/net/9p/trans_virtio.c
>> +++ b/net/9p/trans_virtio.c
>> @@ -148,20 +148,18 @@ static void req_done(struct virtqueue *vq)
>>
>>          p9_debug(P9_DEBUG_TRANS, ": request done\n");
>>
>> +        spin_lock_irqsave(&chan->lock, flags);
>>          while (1) {
>> -                spin_lock_irqsave(&chan->lock, flags);
>>                  req = virtqueue_get_buf(chan->vq, &len);
>> -                if (req == NULL) {
>> -                        spin_unlock_irqrestore(&chan->lock, flags);
>> +                if (req == NULL)
>>                          break;
>> -                }
>>                  chan->ring_bufs_avail = 1;
>> -                spin_unlock_irqrestore(&chan->lock, flags);
>>                  /* Wakeup if anyone waiting for VirtIO ring space. */
>>                  wake_up(chan->vc_wq);
>
> In particular, the wake up here echoes to wait events that will
> immediately try to grab the lock, and will needlessly spin on it until
> this thread is done.
> If we do go this way I'd want setting chan->ring_bufs_avail to be done
> just before unlocking and the wakeup to be done just after unlocking out
> of the loop iff we processed at least one iteration here.
>

I can move the wakeup operation to after the unlock, along the lines of
the sketch at the end of this mail. As I said above, I don't think this
loop will run for long.

Thanks,
Yiwen.

> That should also save you precious cpu cycles while under lock :)
>
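
For reference, a rough, untested sketch of what req_done() could look
like with your suggestion applied (this is not the v2 patch): the lock
is held across the whole drain loop, ring_bufs_avail is set just before
unlocking, and the wakeup happens after unlocking, only if at least one
request was completed. The need_wakeup flag name is mine, and I assume
the per-request completion call keeps its current form; everything else
follows the hunk quoted above.

static void req_done(struct virtqueue *vq)
{
        struct virtio_chan *chan = vq->vdev->priv;
        unsigned int len;
        struct p9_req_t *req;
        unsigned long flags;
        bool need_wakeup = false;

        p9_debug(P9_DEBUG_TRANS, ": request done\n");

        spin_lock_irqsave(&chan->lock, flags);
        while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
                /* Per-request completion stays inside the loop, as today. */
                p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
                need_wakeup = true;
        }
        /* Publish ring space just before dropping the lock. */
        if (need_wakeup)
                chan->ring_bufs_avail = 1;
        spin_unlock_irqrestore(&chan->lock, flags);

        /*
         * Wake anyone waiting for VirtIO ring space outside the lock,
         * and only if we actually completed something.
         */
        if (need_wakeup)
                wake_up(chan->vc_wq);
}

That way the waiters are only woken once the lock is free, so they no
longer spin on it while the interrupt handler is still draining the
ring.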