Subject: Re: [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport
To: Dominique Martinet
References: <5B49B8CF.40709@huawei.com> <20180714090502.GA16186@nautica> <5B49DAA5.3020600@huawei.com> <20180714124715.GA16134@nautica>
CC: Andrew Morton, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov, Linux Kernel Mailing List,
From: jiangyiwen
Message-ID: <5B4BFB29.3080507@huawei.com>
Date: Mon, 16 Jul 2018 09:55:53 +0800
In-Reply-To: <20180714124715.GA16134@nautica>
On 2018/7/14 20:47, Dominique Martinet wrote:
> jiangyiwen wrote on Sat, Jul 14, 2018:
>> On 2018/7/14 17:05, Dominique Martinet wrote:
>>> jiangyiwen wrote on Sat, Jul 14, 2018:
>>>> When the client has multiple threads issuing I/O requests all the
>>>> time and the server performs very well, the CPU may keep running
>>>> in IRQ context for a long time, because the *while* loop keeps
>>>> finding buffers in the virtqueue.
>>>>
>>>> So we should keep holding chan->lock across the whole loop.
>>>
>>> Hmm, it is generally bad practice to hold a spin lock for long.
>>> In general, spin locks are meant to protect data, not code.
>>>
>>> I'd want some numbers to decide on this one, even if I think this
>>> particular case is safe (e.g. it cannot deadlock).
>>>
>>
>> Actually, the loop will not hold the spin lock for long, because other
>> threads will not issue new requests in this case. In addition,
>> virtio-blk and virtio-scsi also use this approach; I guess they may
>> have run into this problem before.
>
> Fair enough. If you do have some numbers to give, though (throughput
> and/or IOPS before/after), I'd still be really curious.
>
>>>>  	chan->ring_bufs_avail = 1;
>>>> -	spin_unlock_irqrestore(&chan->lock, flags);
>>>>  	/* Wakeup if anyone waiting for VirtIO ring space. */
>>>>  	wake_up(chan->vc_wq);
>>>
>>> In particular, the wake-up here reaches wait events that will
>>> immediately try to grab the lock and will needlessly spin on it
>>> until this thread is done.
>>> If we do go this way, I'd want chan->ring_bufs_avail to be set just
>>> before unlocking, and the wake-up to be done just after unlocking,
>>> outside the loop, iff we processed at least one iteration here.
>>
>> I can move the wake-up operation after the unlocking. As I said
>> above, I think this loop will not run for long.
>
> Please do; you listed virtio_blk as doing this, and it has the same
> kind of pattern: a req_done bool, and restarting stopped queues only
> if something was processed.
>

You're right, the wake-up should be done after the unlocking; I will
resend the patch. In addition, should I resend it based on your 9p-next
branch?

Thanks,
Yiwen
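
For reference, a minimal sketch of the shape being discussed above: hold
chan->lock across the whole virtqueue_get_buf() loop, and defer the
wake_up() until after the unlock, doing it only if at least one buffer
was actually reclaimed (the same pattern virtio_blk follows with its
req_done flag). Field and helper names are assumed to match
net/9p/trans_virtio.c at the time of this thread; this illustrates the
idea, not the exact patch that was resent.

static void req_done(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	struct p9_req_t *req;
	bool need_wakeup = false;
	unsigned long flags;
	unsigned int len;

	spin_lock_irqsave(&chan->lock, flags);
	/* Drain every completed buffer while taking the lock only once. */
	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
		if (!chan->ring_bufs_avail) {
			chan->ring_bufs_avail = 1;
			need_wakeup = true;
		}
		if (len)
			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
	}
	spin_unlock_irqrestore(&chan->lock, flags);

	/*
	 * Wake waiters for ring space only after dropping the lock, and
	 * only if something was actually processed, so they do not wake
	 * up and immediately contend on chan->lock.
	 */
	if (need_wakeup)
		wake_up(chan->vc_wq);
}

Deferring the wake_up() this way addresses the concern raised above:
waiters are no longer woken while the interrupt handler still holds the
lock, so they do not spin on it while the ring is being drained.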