Date: Sat, 14 Jul 2018 14:47:15 +0200
From: Dominique Martinet
To: jiangyiwen
Cc: Andrew Morton, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
    Linux Kernel Mailing List, v9fs-developer@lists.sourceforge.net
Subject: Re: [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport
Message-ID: <20180714124715.GA16134@nautica>
References: <5B49B8CF.40709@huawei.com> <20180714090502.GA16186@nautica>
 <5B49DAA5.3020600@huawei.com>
In-Reply-To: <5B49DAA5.3020600@huawei.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

jiangyiwen wrote on Sat, Jul 14, 2018:
> On 2018/7/14 17:05, Dominique Martinet wrote:
> > jiangyiwen wrote on Sat, Jul 14, 2018:
> >> When the client has multiple threads issuing I/O requests all the
> >> time and the server performs very well, the CPU may end up running
> >> in IRQ context for a long time, because the *while* loop keeps
> >> finding buffers in the virtqueue.
> >>
> >> So we should hold chan->lock across the whole loop.
> >
> > Hmm, it is generally bad practice to hold a spin lock for long.
> > In general, spin locks are meant to protect data, not code.
> >
> > I'd want some numbers to decide on this one, even if I think this
> > particular case is safe (e.g. it cannot deadlock).
>
> Actually, the loop will not hold the spin lock for long, because other
> threads will not issue new requests in this case. In addition,
> virtio-blk and virtio-scsi also use this solution, so I guess they may
> have encountered this problem before.

Fair enough. If you do have some numbers to share though (throughput
and/or IOPS before/after), I'd still be really curious.

> >>  		chan->ring_bufs_avail = 1;
> >> -		spin_unlock_irqrestore(&chan->lock, flags);
> >>  		/* Wakeup if anyone waiting for VirtIO ring space. */
> >>  		wake_up(chan->vc_wq);
> >
> > In particular, the wake-up here reaches waiters that will immediately
> > try to grab the lock, and they will needlessly spin on it until this
> > thread is done.
> > If we do go this way, I'd want chan->ring_bufs_avail to be set just
> > before unlocking, and the wakeup to be done just after unlocking,
> > outside the loop, iff we processed at least one iteration here.
>
> I can move the wakeup operation after the unlocking. As I said above,
> I think this loop will not execute for long.

Please do; you listed virtio_blk as doing this, and it has the same kind
of pattern: a req_done bool, and it only restarts stopped queues if it
processed something.

--
Dominique
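A minimal sketch of the pattern discussed above: drain the virtqueue in a
loop while holding chan->lock, track in a local flag whether anything
completed, set ring_bufs_avail just before unlocking, and call wake_up()
once after the lock has been dropped (the virtio_blk req_done approach).
The struct layout and helper names (virtio_chan, vc_wq, p9_client_cb) are
modeled loosely on net/9p/trans_virtio.c and simplified; treat this as an
illustration of the locking/wakeup ordering, not the actual upstream code
or the final version of the patch.

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>
#include <linux/wait.h>
#include <net/9p/9p.h>
#include <net/9p/client.h>

/* Simplified stand-in for the private channel struct in trans_virtio.c. */
struct virtio_chan {
	spinlock_t lock;
	struct virtqueue *vq;
	struct p9_client *client;
	int ring_bufs_avail;
	wait_queue_head_t *vc_wq;
};

/* Virtqueue callback: runs in IRQ context when the host marks buffers used. */
static void req_done_sketch(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	struct p9_req_t *req;
	unsigned long flags;
	unsigned int len;
	bool processed = false;		/* akin to virtio_blk's req_done flag */

	spin_lock_irqsave(&chan->lock, flags);
	while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
		/* Complete the request while still holding chan->lock. */
		p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
		processed = true;
	}
	if (processed)
		chan->ring_bufs_avail = 1;	/* set just before unlocking */
	spin_unlock_irqrestore(&chan->lock, flags);

	/*
	 * Wake waiters only once, outside the lock, and only if we actually
	 * freed ring space; waking under the lock would make them contend
	 * on chan->lock while this handler still holds it.
	 */
	if (processed)
		wake_up(chan->vc_wq);
}

The point of the deferred wake_up() is exactly the one raised in the reply:
a waiter woken while the IRQ handler still holds chan->lock can only spin
on the lock until the handler finishes, so the wake-up is cheaper once the
lock is released and is skipped entirely when nothing was completed.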