From: Ilias Apalodimas
Date: Fri, 7 Oct 2022 11:08:23 +0300
Subject: Re: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support
To: Jesper Dangaard Brouer
Cc: Shenwei Wang, Andrew Lunn, brouer@redhat.com, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    imx@lists.linux.dev, Magnus Karlsson, Björn Töpel
References: <20220928152509.141490-1-shenwei.wang@nxp.com>
    <4f7cf74d-95ca-f93f-7328-e0386348a06e@redhat.com>
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , "netdev@vger.kernel.org" , "linux-kernel@vger.kernel.org" , "imx@lists.linux.dev" , Magnus Karlsson , =?UTF-8?B?QmrDtnJuIFTDtnBlbA==?= Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Hi Jesper, On Thu, 6 Oct 2022 at 11:37, Jesper Dangaard Brouer wr= ote: > > > > On 05/10/2022 14.40, Shenwei Wang wrote: > > Hi Jesper, > > > > Here is the summary of "xdp_rxq_info" testing. > > > > skb_mark_for_recycle page_pool_release_page > > > > Native SKB-Mode Native SKB-Mode > > XDP_DROP 460K 220K 460K 102K > > XDP_PASS 80K 113K 60K 62K > > > > It is very pleasing to see the *huge* performance benefit that page_pool > provide when recycling pages for SKBs (via skb_mark_for_recycle). > I did expect a performance boost, but not around a x2 performance boost. Indeed that's a pleasant surprise. Keep in mind that if we convert more driver we can also get rid of copy_break code sprinkled around in drivers. Thanks /Ilias > > I guess this platform have a larger overhead for DMA-mapping and > page-allocation. > > IMHO it would be valuable to include this result as part of the patch > description when you post the XDP patch again. > > Only strange result is XDP_PASS 'Native' is slower that 'SKB-mode'. I > cannot explain why, as XDP_PASS essentially does nothing and just follow > normal driver code to netstack. > > Thanks a lot for doing these tests. > --Jesper > > > The following are the testing log. 
> >
> > Thanks,
> > Shenwei
> >
> > ### skb_mark_for_recycle solution ###
> >
> > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       466,553     0
> > XDP-RX CPU      total   466,553
> >
> > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       226,272     0
> > XDP-RX CPU      total   226,272
> >
> > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       80,518      0
> > XDP-RX CPU      total   80,518
> >
> > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       113,681     0
> > XDP-RX CPU      total   113,681
> >
> >
> > ### page_pool_release_page solution ###
> >
> > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       463,145     0
> > XDP-RX CPU      total   463,145
> >
> > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       104,443     0
> > XDP-RX CPU      total   104,443
> >
> > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       60,539      0
> > XDP-RX CPU      total   60,539
> >
> > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> >
> > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > XDP stats       CPU     pps         issue-pps
> > XDP-RX CPU      0       62,566      0
> > XDP-RX CPU      total   62,566
> >
> >> -----Original Message-----
> >> From: Shenwei Wang
> >> Sent: Tuesday, October 4, 2022 8:34 AM
> >> To: Jesper Dangaard Brouer; Andrew Lunn
> >> Cc: brouer@redhat.com; David S. Miller; Eric Dumazet; Jakub Kicinski;
> >> Paolo Abeni; Alexei Starovoitov; Daniel Borkmann; Jesper Dangaard Brouer;
> >> John Fastabend; netdev@vger.kernel.org; linux-kernel@vger.kernel.org;
> >> imx@lists.linux.dev; Magnus Karlsson; Björn Töpel; Ilias Apalodimas
> >> Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support
> >>
> >>
> >>> -----Original Message-----
> >>> From: Shenwei Wang
> >>> Sent: Tuesday, October 4, 2022 8:13 AM
> >>> To: Jesper Dangaard Brouer; Andrew Lunn
> >> ...
> >>> I haven't tested xdp_rxq_info yet, and will have a try sometime later today.
> >>> However, for the XDP_DROP test, I did try the xdp2 test case, and the
> >>> testing result looks reasonable. The performance of Native mode is
> >>> much higher than skb-mode.
> >>>
> >>> # xdp2 eth0
> >>> proto 0:     475362 pkt/s
> >>>
> >>> # xdp2 -S eth0   (page_pool_release_page solution)
> >>> proto 17:     71999 pkt/s
> >>>
> >>> # xdp2 -S eth0   (skb_mark_for_recycle solution)
> >>> proto 17:     72228 pkt/s
> >>>
> >>
> >> Correction for xdp2 -S eth0 (skb_mark_for_recycle solution):
> >> proto 0:          0 pkt/s
> >> proto 17:    122473 pkt/s
> >>
> >> Thanks,
> >> Shenwei
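
For readers comparing the two columns in the table above, the difference
between the two solutions comes down to how the driver hands a page_pool
page to the network stack on XDP_PASS. The sketch below is a hypothetical
illustration of that choice, not code from the fec patch; the helper name,
the napi/pool/xdp parameters, and the one-page-per-frame buffer layout are
assumptions.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical helper showing the two hand-off styles benchmarked above.
 * Assumes one page per RX frame, with skb_shared_info tailroom at the end
 * of the page (the usual build_skb() layout for page_pool drivers).
 */
static void xdp_pass_to_stack(struct napi_struct *napi,
			      struct page_pool *pool,
			      struct page *page,
			      struct xdp_buff *xdp,
			      bool use_recycle)
{
	unsigned int len = xdp->data_end - xdp->data;
	struct sk_buff *skb;

	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
	if (unlikely(!skb)) {
		/* Could not build an skb: hand the page straight back. */
		page_pool_put_full_page(pool, page, true);
		return;
	}
	skb_reserve(skb, xdp->data - xdp->data_hard_start);
	skb_put(skb, len);

	if (use_recycle) {
		/* skb_mark_for_recycle(): the page stays owned by the
		 * page_pool and keeps its DMA mapping; freeing the skb
		 * returns the page to the pool for reuse.
		 */
		skb_mark_for_recycle(skb);
	} else {
		/* page_pool_release_page(): unmap the page and give up
		 * ownership; the pool has to allocate and map a fresh
		 * page for the next frame.
		 */
		page_pool_release_page(pool, page);
	}

	napi_gro_receive(napi, skb);
}

Under these assumptions, the skb_mark_for_recycle() path saves a DMA unmap
plus a page allocation and mapping per packet, which is consistent with the
roughly 2x gap between the SKB-mode columns in the table above.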