From: Jason Wang
Date: Mon, 8 May 2023 14:13:42 +0800
Subject: Re: [PATCH v4] virtio_net: suppress cpu stall when free_unused_bufs
To: "Michael S. Tsirkin"
Cc: Wenliang Wang, davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, zhengqi.arch@bytedance.com, willemdebruijn.kernel@gmail.com,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, xuanzhuo@linux.alibaba.com
In-Reply-To: <20230508020717-mutt-send-email-mst@kernel.org>
References: <1683167226-7012-1-git-send-email-wangwenliang.1995@bytedance.com>
 <20230507093328-mutt-send-email-mst@kernel.org>
 <2b5cf90a-efa8-52a7-9277-77722622c128@redhat.com>
 <20230508020717-mutt-send-email-mst@kernel.org>

On Mon, May 8, 2023 at 2:08 PM Michael S. Tsirkin wrote:
>
> On Mon, May 08, 2023 at 11:12:03AM +0800, Jason Wang wrote:
> >
> > On 2023/5/7 21:34, Michael S. Tsirkin wrote:
> > > On Fri, May 05, 2023 at 11:28:25AM +0800, Jason Wang wrote:
> > > > On Thu, May 4, 2023 at 10:27 AM Wenliang Wang wrote:
> > > > > For multi-queue and large ring-size use cases, the following error
> > > > > occurred when free_unused_bufs:
> > > > > rcu: INFO: rcu_sched self-detected stall on CPU.
> > > > >
> > > > > Fixes: 986a4f4d452d ("virtio_net: multiqueue support")
> > > > > Signed-off-by: Wenliang Wang
> > > > > ---
> > > > > v2:
> > > > > -add need_resched check.
> > > > > -apply same logic to sq.
> > > > > v3:
> > > > > -use cond_resched instead.
> > > > > v4:
> > > > > -add fixes tag
> > > > > ---
> > > > >  drivers/net/virtio_net.c | 2 ++
> > > > >  1 file changed, 2 insertions(+)
> > > > >
> > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > index 8d8038538fc4..a12ae26db0e2 100644
> > > > > --- a/drivers/net/virtio_net.c
> > > > > +++ b/drivers/net/virtio_net.c
> > > > > @@ -3560,12 +3560,14 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > > > >                 struct virtqueue *vq = vi->sq[i].vq;
> > > > >                 while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > >                         virtnet_sq_free_unused_buf(vq, buf);
> > > > > +               cond_resched();
> > > >
> > > > Does this really address the case when the virtqueue is very large?
> > > >
> > > > Thanks
> > >
> > > It does, in that a very large queue is still just 64k in size.
> > > We might, however, have 64k of these queues.
> >
> > Ok, but we have other similar loops, especially the refill; I think we may
> > need cond_resched() there as well.
> >
> > Thanks
>
> Refill is already per vq, isn't it?

Not for the refill_work().

Thanks

> > > >
> > > > >         }
> > > > >
> > > > >         for (i = 0; i < vi->max_queue_pairs; i++) {
> > > > >                 struct virtqueue *vq = vi->rq[i].vq;
> > > > >                 while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > >                         virtnet_rq_free_unused_buf(vq, buf);
> > > > > +               cond_resched();
> > > > >         }
> > > > > }
> > > > >
> > > > > --
> > > > > 2.20.1
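
For context, the refill loop referred to above is refill_work() in drivers/net/virtio_net.c, which, unlike the per-queue NAPI refill path, walks every receive queue in a single work item. The sketch below is only an illustration of the suggestion made in this thread, simplified from the upstream driver and not part of the patch under review; the cond_resched() placement is hypothetical.

static void refill_work(struct work_struct *work)
{
	struct virtnet_info *vi =
		container_of(work, struct virtnet_info, refill.work);
	bool still_empty;
	int i;

	/* One work item walks all receive queues; with many queue pairs this
	 * loop, like free_unused_bufs(), can run long enough on one CPU to
	 * trigger stall warnings unless it yields occasionally.
	 */
	for (i = 0; i < vi->curr_queue_pairs; i++) {
		struct receive_queue *rq = &vi->rq[i];

		napi_disable(&rq->napi);
		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
		virtnet_napi_enable(rq->vq, &rq->napi);

		/* Retry later if the device handed back no buffers at all. */
		if (still_empty)
			schedule_delayed_work(&vi->refill, HZ / 2);

		/* Illustrative only: yield between queues, mirroring the
		 * cond_resched() added to free_unused_bufs() by this patch.
		 */
		cond_resched();
	}
}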