Date: Tue, 17 Jul 2018 15:07:20 +0200
From: Dominique Martinet
To: jiangyiwen
Cc: Andrew Morton, Eric Van Hensbergen, Ron Minnich,
	Latchesar Ionkov, Linux Kernel Mailing List,
	v9fs-developer@lists.sourceforge.net
Subject: Re: [V9fs-developer] [PATCH v2] net/9p: Fix a deadlock case in the virtio transport
Message-ID: <20180717130720.GA23759@nautica>
References: <5B4DCD0A.8040600@huawei.com> <20180717114215.GA14414@nautica>
	<5B4DE09F.5000800@huawei.com>
In-Reply-To: <5B4DE09F.5000800@huawei.com>

jiangyiwen wrote on Tue, Jul 17, 2018:
> On 2018/7/17 19:42, Dominique Martinet wrote:
> >
> >> Subject: net/9p: Fix a deadlock case in the virtio transport
> >
> > I hadn't noticed in the v1, but how is that a deadlock fix?
> > The previous code doesn't look like it deadlocks to me; the commit
> > message is more accurate.
>
> If the CPU runs in irq context for a long time, the NMI watchdog
> will detect a hard lockup on that CPU and then cause a kernel panic.
> So I used this subject to underline the scenario.

That's still not a deadlock - "fix a lockup" would be more appropriate?

> > Do we have a guarantee that req_done is only called if there is at
> > least one buf to read?
> > For example, that there aren't two threads queueing the same
> > callback, with the first one reading everything and the second
> > having nothing to read?
> >
> > If virtblk_done takes care of setting up a "req_done" bool to only
> > notify waiters if something has been done, I'd rather have a reason
> > to do differently, even if you can argue that nothing bad will
> > happen in case of a gratuitous wake_up.
>
> Sorry, I don't fully understand what you mean.
> I think even if the ring buffer doesn't have the data, the wakeup
> operation will not cause any other problem, and the loss of
> performance can be ignored.

I just mean "others do check, why not us?". It's almost free to check
whether we had something to read, but if there are many pending
reads/writes waiting for a buffer, they will all wake up and spin
uselessly.

I've checked the other callers of virtqueue_get_buf(): out of the nine
that loop around it in a callback and then wake another thread up, six
do check before waking up, two check that something happened just to
print a debug statement if not (virtio_test and virtgpu), and one
doesn't check at all (virtio_input). So we wouldn't be the first, just
not following the trend.
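To make the pattern concrete, here is a rough sketch of the kind of
check I mean, modeled on what virtblk_done() does - only
virtqueue_get_buf(), DECLARE_WAIT_QUEUE_HEAD() and wake_up() are real
kernel APIs here; the callback body and the vc_wq/need_wakeup names
are illustrative, not the actual net/9p code:

/*
 * Sketch of the "only wake waiters if we actually reclaimed a buffer"
 * pattern; NOT the real net/9p/trans_virtio.c code.
 */
#include <linux/virtio.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(vc_wq);	/* threads waiting for ring space */

static void req_done(struct virtqueue *vq)
{
	unsigned int len;
	void *req;
	bool need_wakeup = false;

	/* Drain every buffer the device has given back. */
	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
		/* ... complete the request attached to req ... */
		need_wakeup = true;
	}

	/*
	 * Only wake the threads waiting for a free ring buffer if this
	 * callback actually reclaimed some; a gratuitous wake_up() is
	 * harmless but makes every pending writer wake up and spin.
	 */
	if (need_wakeup)
		wake_up(&vc_wq);
}

The flag costs one store per completed buffer, while a spurious
wake_up() costs a scheduler round-trip for every waiter.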
But yes, nothing bad will happen either way, so let's agree to disagree
and I'll defer to others' opinions on this.

Thanks,
-- 
Dominique Martinet