Date: Tue, 24 Dec 2019 11:52:29 +0200
From: Ilias Apalodimas
To: Matteo Croce
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Lorenzo Bianconi, Maxime Chevallier, Antoine Tenart, Luka Perkov,
    Tomislav Tomasic, Marcin Wojtas, Stefan Chulski,
    Jesper Dangaard Brouer, Nadav Haklai
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support
Message-ID: <20191224095229.GA24310@apalos.home>
In-Reply-To: <20191224010103.56407-1-mcroce@redhat.com>
References: <20191224010103.56407-1-mcroce@redhat.com>

On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> These patches change the memory allocator of mvpp2 from the frag
> allocator to the page_pool API. This change is needed to later add
> XDP support to mvpp2.
>
> The reason I'm sending this as an RFC is that with this changeset,
> mvpp2 performs much slower. This is the tc drop rate measured with a
> single flow:
>
> stock net-next with frag allocator:
> rx: 900.7 Mbps 1877 Kpps
>
> this patchset with page_pool:
> rx: 423.5 Mbps 882.3 Kpps
>
> This is the perf top when receiving traffic:
>
>  27.68% [kernel]  [k] __page_pool_clean_page

This seems extremely high on the list.

>   9.79% [kernel]  [k] get_page_from_freelist
>   7.18% [kernel]  [k] free_unref_page
>   4.64% [kernel]  [k] build_skb
>   4.63% [kernel]  [k] __netif_receive_skb_core
>   3.83% [mvpp2]   [k] mvpp2_poll
>   3.64% [kernel]  [k] eth_type_trans
>   3.61% [kernel]  [k] kmem_cache_free
>   3.03% [kernel]  [k] kmem_cache_alloc
>   2.76% [kernel]  [k] dev_gro_receive
>   2.69% [mvpp2]   [k] mvpp2_bm_pool_put
>   2.68% [kernel]  [k] page_frag_free
>   1.83% [kernel]  [k] inet_gro_receive
>   1.74% [kernel]  [k] page_pool_alloc_pages
>   1.70% [kernel]  [k] __build_skb
>   1.47% [kernel]  [k] __alloc_pages_nodemask
>   1.36% [mvpp2]   [k] mvpp2_buf_alloc.isra.0
>   1.29% [kernel]  [k] tcf_action_exec
>
> I tried Ilias' patches for page_pool recycling. I get an improvement
> to ~1100 Kpps, but I'm still far from the original allocator.

Can you post the perf top with the recycling patches applied, for
comparison?

>
> Any idea on why I get such bad numbers?

Nope, but it is indeed strange.

>
> Another reason to send this as an RFC is that I'm not fully convinced
> about how to use the page_pool given the HW limitations of the BM.

I'll have a look right after the holidays.

>
> The driver currently uses, for every CPU, a page_pool for short
> packets and another for long ones. The driver also has 4 RX queues
> per port, so RXQ #1 of every port will share the short and long page
> pools of CPU #1.

I am not sure I am following the hardware config here.

> This means that for every RX queue I call xdp_rxq_info_reg_mem_model()
> twice, on two different page_pools. Can this be a problem?

See the sketch at the end of this mail for what I suspect happens on
the second call.

>
> As usual, ideas are welcome.
>
> Matteo Croce (2):
>   mvpp2: use page_pool allocator
>   mvpp2: memory accounting
>
>  drivers/net/ethernet/marvell/Kconfig          |   1 +
>  drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
>  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
>  3 files changed, 125 insertions(+), 25 deletions(-)
>
> --
> 2.24.1
>

Cheers
/Ilias
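
P.S. On the double xdp_rxq_info_reg_mem_model() question: I haven't
looked at your patch yet, but the path I had in mind is roughly the
sketch below. Treat it as hand-written pseudo-driver code, untested;
the names (struct my_rxq, my_pool_create, pool_short/pool_long) are
made up for illustration and are not the actual mvpp2 fields. AFAICT
an xdp_rxq_info only tracks a single mem.id at a time, so the second
registration simply replaces the first -- worth double-checking
before relying on two pools per RXQ.

#include <linux/dma-direction.h>
#include <linux/topology.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Illustrative only; mvpp2 keeps this state elsewhere */
struct my_rxq {
	struct xdp_rxq_info xdp_rxq;
	struct page_pool *pool_short;
	struct page_pool *pool_long;
	int id;
};

static struct page_pool *my_pool_create(struct device *dev, int cpu,
					unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,		/* one page per buffer */
		.flags		= PP_FLAG_DMA_MAP, /* pool does the DMA mapping */
		.pool_size	= pool_size,
		.nid		= cpu_to_node(cpu),
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* Returns ERR_PTR() on failure */
	return page_pool_create(&pp_params);
}

static int my_rxq_register(struct my_rxq *rxq, struct net_device *ndev)
{
	int err;

	err = xdp_rxq_info_reg(&rxq->xdp_rxq, ndev, rxq->id);
	if (err)
		return err;

	/* First registration: short-packet pool */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
					 MEM_TYPE_PAGE_POOL,
					 rxq->pool_short);
	if (err)
		goto err_unreg;

	/* Second registration on the same xdp_rxq: AFAICT this
	 * allocates a new mem.id and overwrites xdp_rxq.mem, so XDP
	 * frames built from short-pool pages would carry the long
	 * pool's mem model.
	 */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
					 MEM_TYPE_PAGE_POOL,
					 rxq->pool_long);
	if (err)
		goto err_unreg;

	return 0;

err_unreg:
	xdp_rxq_info_unreg(&rxq->xdp_rxq);
	return err;
}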