Date: Thu, 4 Jul 2019 12:53:06 +0300
From: Ivan Khoronzhuk
To: Ilias Apalodimas
Cc: Jesper Dangaard Brouer, grygorii.strashko@ti.com, hawk@kernel.org,
	davem@davemloft.net, ast@kernel.org, linux-kernel@vger.kernel.org,
	linux-omap@vger.kernel.org, xdp-newbies@vger.kernel.org,
	netdev@vger.kernel.org, daniel@iogearbox.net,
	jakub.kicinski@netronome.com, john.fastabend@gmail.com
Subject: Re: [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support
Message-ID: <20190704095305.GC19839@khorivan>
In-Reply-To: <20190704094938.GA27382@apalos>

On Thu, Jul 04, 2019 at 12:49:38PM +0300, Ilias Apalodimas wrote:
>On Thu, Jul 04, 2019 at 12:43:30PM +0300, Ivan Khoronzhuk wrote:
>> On Thu, Jul 04, 2019 at 12:39:02PM +0300, Ilias Apalodimas wrote:
>> >On Thu, Jul 04, 2019 at 11:19:39AM +0200, Jesper Dangaard Brouer wrote:
>> >>On Wed, 3 Jul 2019 13:19:03 +0300 Ivan Khoronzhuk wrote:
>> >>
>> >>> Add XDP support based on the rx page_pool allocator, one frame per
>> >>> page. The page pool allocator is used with the assumption that only
>> >>> one rx_handler runs at a time. DMA map/unmap is reused from the page
>> >>> pool even though there is no need to map the whole page.
>> >>>
>> >>> Due to the specifics of cpsw, the same TX/RX handler can be used by
>> >>> two network devices, so special fields are added to the buffer to
>> >>> identify the interface a frame is destined to. XDP therefore works
>> >>> for both interfaces, which makes it easy to test xdp redirect
>> >>> between them. Also, each rx queue has its own page pool, shared by
>> >>> both netdevs.
>> >>>
>> >>> The XDP prog is common for all channels until appropriate changes
>> >>> are added to the XDP infrastructure. Also, once page_pool recycling
>> >>> becomes part of the skb netstack, some simplifications can be made,
>> >>> like removing page_pool_release_page() before skb receive.
>> >>>
>> >>> In order to keep rx_dev stable across a redirect (which may be
>> >>> useful in the future), flush in the rx_handler; this keeps the rx
>> >>> dev the same during the redirect and addresses the rx_dev tracing
>> >>> concern Jesper pointed out.
>> >>
>> >>So you simply call xdp_do_flush_map() after each xdp_do_redirect().
>> >>It will kill RX bulking and performance, but I guess it will work.
>> >>
>> >>I guess we can optimize it later, e.g. the function calling
>> >>cpsw_run_xdp() could keep a variable that detects whether the
>> >>net_device (priv->ndev) changed and call xdp_do_flush_map() only
>> >>when needed.
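
A minimal sketch of the deferred-flush optimization Jesper describes,
for illustration only; struct cpsw_xdp_ctx and
cpsw_xdp_redirect_sketch() are hypothetical names invented here, while
xdp_do_redirect() and xdp_do_flush_map() are the real kernel APIs:

#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

struct cpsw_xdp_ctx {
	struct net_device *last_ndev;	/* ndev seen on the previous frame */
};

/* Redirect one frame, flushing only when the incoming device changes.
 * A final xdp_do_flush_map() would still be needed at the end of the
 * NAPI poll to drain the last batch.
 */
static int cpsw_xdp_redirect_sketch(struct cpsw_xdp_ctx *ctx,
				    struct net_device *ndev,
				    struct xdp_buff *xdp,
				    struct bpf_prog *prog)
{
	/* Both cpsw ports share one RX queue, so ndev can change from
	 * frame to frame; flush the pending bulk before redirecting
	 * on behalf of a different device.
	 */
	if (ctx->last_ndev && ctx->last_ndev != ndev)
		xdp_do_flush_map();
	ctx->last_ndev = ndev;

	return xdp_do_redirect(ndev, xdp, prog);
}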
>> >I tried something similar on the netsec driver during my initial
>> >development. On 1 Gbit NICs I saw no difference between flushing per
>> >packet and flushing at the end of the NAPI handler. The latter is
>> >obviously better, but since the performance impact is negligible on
>> >this particular NIC, I don't think it should be a blocker.
>> >Please add a clear comment on this and on why you do it in this
>> >driver, so people won't go ahead and copy/paste the approach.
>> Sorry, but I did this already; is it not enough?
>The flush *must* happen there to avoid messing up the following layers.
>The current comment says something like 'just to be sure'. It's not
>something that might break; it's something that *will* break the code,
>and I don't think that is clear from the current comment.
>
>So I'd prefer something like:
>'We must flush here, per packet, instead of doing it in bulk at the end
>of the NAPI handler. The RX devices on this particular hardware are
>sharing a common queue, so the incoming device might change per packet.'

Sounds good, I will replace it with that wording.

-- 
Regards,
Ivan Khoronzhuk
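
For comparison, a sketch of the per-packet flush the patch takes, with
the comment wording agreed above; the function name and error handling
are illustrative, not the actual driver code:

#include <linux/filter.h>
#include <linux/netdevice.h>
#include <net/xdp.h>

static int cpsw_xdp_redirect_per_packet(struct net_device *ndev,
					struct xdp_buff *xdp,
					struct bpf_prog *prog)
{
	int ret;

	ret = xdp_do_redirect(ndev, xdp, prog);
	if (ret)
		return ret;	/* caller drops/recycles the page */

	/* We must flush here, per packet, instead of doing it in bulk
	 * at the end of the NAPI handler. The RX devices on this
	 * particular hardware are sharing a common queue, so the
	 * incoming device might change per packet.
	 */
	xdp_do_flush_map();

	return 0;
}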