From: Matteo Croce <mcroce@redhat.com>
To: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ilias Apalodimas, Lorenzo Bianconi,
    Maxime Chevallier, Antoine Tenart, Luka Perkov, Tomislav Tomasic,
    Marcin Wojtas, Stefan Chulski, Jesper Dangaard Brouer, Nadav Haklai
Subject: [RFC net-next 0/2] mvpp2: page_pool support
Date: Tue, 24 Dec 2019 02:01:01 +0100
Message-Id: <20191224010103.56407-1-mcroce@redhat.com>

This patchset changes the memory allocator of mvpp2 from the frag
allocator to the page_pool API. This change is needed in order to later
add XDP support to mvpp2. (A sketch of typical page_pool usage follows
the diffstat below.)

The reason I'm sending this as RFC is that with this changeset mvpp2
performs much slower. This is the tc drop rate measured with a single
flow:

stock net-next with frag allocator:
rx: 900.7 Mbps 1877 Kpps

this patchset with page_pool:
rx: 423.5 Mbps 882.3 Kpps

This is perf top while receiving traffic:

 27.68%  [kernel]   [k] __page_pool_clean_page
  9.79%  [kernel]   [k] get_page_from_freelist
  7.18%  [kernel]   [k] free_unref_page
  4.64%  [kernel]   [k] build_skb
  4.63%  [kernel]   [k] __netif_receive_skb_core
  3.83%  [mvpp2]    [k] mvpp2_poll
  3.64%  [kernel]   [k] eth_type_trans
  3.61%  [kernel]   [k] kmem_cache_free
  3.03%  [kernel]   [k] kmem_cache_alloc
  2.76%  [kernel]   [k] dev_gro_receive
  2.69%  [mvpp2]    [k] mvpp2_bm_pool_put
  2.68%  [kernel]   [k] page_frag_free
  1.83%  [kernel]   [k] inet_gro_receive
  1.74%  [kernel]   [k] page_pool_alloc_pages
  1.70%  [kernel]   [k] __build_skb
  1.47%  [kernel]   [k] __alloc_pages_nodemask
  1.36%  [mvpp2]    [k] mvpp2_buf_alloc.isra.0
  1.29%  [kernel]   [k] tcf_action_exec

I tried Ilias' patches for page_pool recycling; they improve the rate
to ~1100 Kpps, but that is still far from the original allocator.
Any idea why I get such bad numbers?

Another reason to send this as RFC is that I'm not fully convinced of
how to use the page_pool given the HW limitation of the BM. The driver
currently uses, for every CPU, one page_pool for short packets and
another for long ones. The driver also has 4 RX queues per port, so
RXQ #1 of every port shares the short and long page pools of CPU #1.
This means that for every RX queue I call xdp_rxq_info_reg_mem_model()
twice, on two different page_pools (see the second sketch below). Can
this be a problem?

As usual, ideas are welcome.

Matteo Croce (2):
  mvpp2: use page_pool allocator
  mvpp2: memory accounting

 drivers/net/ethernet/marvell/Kconfig          |   1 +
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
 3 files changed, 125 insertions(+), 25 deletions(-)

--
2.24.1
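
For reference, this is roughly how a driver creates a page_pool and
pulls RX buffers from it, which is what the patches do per CPU, once
for the short-packet pool and once for the long one. It is a minimal
sketch, not the exact code from the series; the helper name
mvpp2_create_pool() and the "size" parameter are illustrative only:

#include <net/page_pool.h>

/* One RX page_pool; the pool owns the DMA mapping of its pages
 * because of PP_FLAG_DMA_MAP, unlike the frag allocator path where
 * the driver maps each buffer itself. */
static struct page_pool *mvpp2_create_pool(struct device *dev, int size)
{
        struct page_pool_params pp_params = {
                .order          = 0,                    /* one page per buffer */
                .flags          = PP_FLAG_DMA_MAP,      /* pool maps pages for DMA */
                .pool_size      = size,                 /* matches the BM pool depth */
                .nid            = NUMA_NO_NODE,
                .dev            = dev,
                .dma_dir        = DMA_FROM_DEVICE,      /* RX only */
        };

        return page_pool_create(&pp_params);            /* ERR_PTR() on failure */
}

The refill and release paths then look more or less like this
(again schematic, assuming a "pool" pointer reachable from the RXQ):

        /* RX refill: allocate a page and hand its DMA address to the BM */
        page = page_pool_dev_alloc_pages(pool);
        if (!page)
                return -ENOMEM;
        dma_addr = page_pool_get_dma_addr(page);

        /* Buffer dropped before an SKB is built: return it to the pool */
        page_pool_recycle_direct(pool, page);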
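
And this is the double registration that the question above refers to:
each RXQ registers one xdp_rxq_info, then sets a page_pool memory model
twice, once per pool. The function and the xdp_rxq plumbing are a
hypothetical sketch of what the patches do, and whether the second
xdp_rxq_info_reg_mem_model() call on the same rxq_info is legitimate
is exactly the open question:

#include <linux/netdevice.h>
#include <net/xdp.h>
#include <net/page_pool.h>

static int mvpp2_rxq_reg_xdp(struct xdp_rxq_info *xdp_rxq,
                             struct net_device *dev, u32 rxq_id,
                             struct page_pool *pool_short,
                             struct page_pool *pool_long)
{
        int err;

        err = xdp_rxq_info_reg(xdp_rxq, dev, rxq_id);
        if (err)
                return err;

        /* First memory model: the short-packet pool */
        err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL,
                                         pool_short);
        if (err)
                goto err_unreg;

        /* Second call on the same rxq_info, with the long-packet pool.
         * This is the double registration asked about above. */
        err = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL,
                                         pool_long);
        if (err)
                goto err_unreg;

        return 0;

err_unreg:
        xdp_rxq_info_unreg(xdp_rxq);
        return err;
}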