From: Alexander Duyck
Date: Sun, 11 Nov 2018 15:05:01 -0800
Subject: Re: [PATCH 1/2] mm/page_alloc: free order-0 pages through PCP in page_frag_free()
To: Paweł Staszewski
Cc: aaron.lu@intel.com, linux-mm, LKML, Netdev, Andrew Morton,
    Jesper Dangaard Brouer, Eric Dumazet, Tariq Toukan,
    ilias.apalodimas@linaro.org, yoel@kviknet.dk, Mel Gorman,
    Saeed Mahameed, Michal Hocko, Vlastimil Babka,
    dave.hansen@linux.intel.com

On Sat, Nov 10, 2018 at 3:54 PM Paweł Staszewski wrote:
>
> On 05.11.2018 at 16:44, Alexander Duyck wrote:
> > On Mon, Nov 5, 2018 at 12:58 AM Aaron Lu wrote:
> >> page_frag_free() calls __free_pages_ok() to free the page back to
> >> Buddy.
> >> This is OK for high order pages, but for order-0 pages it misses
> >> the optimization opportunity of using Per-Cpu-Pages and can cause
> >> zone lock contention when called frequently.
> >>
> >> Paweł Staszewski recently shared his results on 'how the Linux
> >> kernel handles normal traffic'[1], and from the perf data Jesper
> >> Dangaard Brouer found that the lock contention comes from the page
> >> allocator:
> >>
> >>   mlx5e_poll_tx_cq
> >>   |
> >>    --16.34%--napi_consume_skb
> >>              |
> >>              |--12.65%--__free_pages_ok
> >>              |          |
> >>              |           --11.86%--free_one_page
> >>              |                     |
> >>              |                     |--10.10%--queued_spin_lock_slowpath
> >>              |                     |
> >>              |                      --0.65%--_raw_spin_lock
> >>              |
> >>              |--1.55%--page_frag_free
> >>              |
> >>               --1.44%--skb_release_data
> >>
> >> Jesper explained how it happened: the mlx5 driver's RX-page recycle
> >> mechanism is not effective in this workload, so pages have to go
> >> through the page allocator. The lock contention happens during the
> >> mlx5 DMA TX completion cycle, and the page allocator cannot keep up
> >> at these speeds.[2]
> >>
> >> I thought that __free_pages_ok() was mostly freeing high order
> >> pages and that this was lock contention on high order pages, but
> >> Jesper explained in detail that __free_pages_ok() here is actually
> >> freeing order-0 pages, because mlx5 uses order-0 pages to satisfy
> >> its page pool allocation requests.[3]
> >>
> >> The free path as pointed out by Jesper is:
> >>   skb_free_head()
> >>     -> skb_free_frag()
> >>       -> page_frag_free()
> >> and the pages being freed on this path are order-0 pages.
> >>
> >> Fix this by doing what __page_frag_cache_drain() does: send the
> >> page being freed to the PCP if it is an order-0 page, or directly
> >> to Buddy if it is a high order page.
> >>
> >> With this change, Paweł has not noticed lock contention in his
> >> workload, and Jesper measured a 7% performance improvement with a
> >> micro benchmark, with the lock contention gone.
> >>
> >> [1]: https://www.spinics.net/lists/netdev/msg531362.html
> >> [2]: https://www.spinics.net/lists/netdev/msg531421.html
> >> [3]: https://www.spinics.net/lists/netdev/msg531556.html
> >>
> >> Reported-by: Paweł Staszewski
> >> Analysed-by: Jesper Dangaard Brouer
> >> Signed-off-by: Aaron Lu
> >> ---
> >>  mm/page_alloc.c | 10 ++++++++--
> >>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index ae31839874b8..91a9a6af41a2 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -4555,8 +4555,14 @@ void page_frag_free(void *addr)
> >>  {
> >>         struct page *page = virt_to_head_page(addr);
> >>
> >> -       if (unlikely(put_page_testzero(page)))
> >> -               __free_pages_ok(page, compound_order(page));
> >> +       if (unlikely(put_page_testzero(page))) {
> >> +               unsigned int order = compound_order(page);
> >> +
> >> +               if (order == 0)
> >> +                       free_unref_page(page);
> >> +               else
> >> +                       __free_pages_ok(page, order);
> >> +       }
> >>  }
> >>  EXPORT_SYMBOL(page_frag_free);
> >>
> > One thing I would suggest for Pawel to try would be to reduce the Tx
> > qdisc size on his transmitting interfaces, reduce the Tx ring size,
> > and possibly increase the Tx interrupt rate. Ideally we shouldn't
> > have too many packets in flight, and I suspect that is the issue
> > Pawel is seeing that is leading to the page pool allocator freeing
> > up the memory.
> > I know we like to try to batch things, but the issue is that
> > processing too many Tx buffers in one batch leads to us eating up
> > too much memory and causing evictions from the cache. Ideally the
> > Rx and Tx rings and queues should be sized as small as possible
> > while still allowing us to process up to our NAPI budget. Usually I
> > run things with a 128 Rx / 128 Tx setup and then reduce the Tx
> > queue length so we don't have more buffers stored there than we can
> > place in the Tx ring. Then we can avoid the extra thrash of having
> > to pull/push memory into and out of the freelists. Essentially the
> > issue here ends up being another form of buffer bloat.
> Thanks Alexander - yes, it can be - but in my scenario setting the RX
> buffer below 4096 produces more interface rx drops and no_rx_buffer
> events on the network controller that is receiving the most packets.
> So I need to stick with 3000-4000 on RX - and yes, I was trying to
> lower the TX buffer on the ConnectX-4, but that changed nothing
> before Aaron's patch.
>
> After Aaron's patch, decreasing the TX buffer does influence the
> total bandwidth that the router/server can handle. I don't know why
> there was no difference before this patch: no matter what I set
> there, page_alloc/slowpath was always on top in perf.
>
> Currently testing RX 4096 / TX 256 - this gives roughly +10% more
> bandwidth with fewer interrupts...

The problem is that if you are going for fewer interrupts you are
setting yourself up for buffer bloat. Basically you are going to use
much more cache and much more memory than you actually need, and if
things are properly configured NAPI should take care of the interrupts
anyway, since under maximum load you shouldn't normally stop polling.
One issue I have seen is people delaying interrupts for as long as
possible, which isn't really a good thing, since most network
controllers will use NAPI, which disables the interrupts and leaves
them disabled whenever the system is under heavy stress. So you should
be able to get the maximum performance by configuring an adapter with
small ring sizes and a high interrupt rate.

It is easiest to think of it this way. Your total packet rate is equal
to your interrupt rate times the number of buffers you will store in
the ring. So if you have some fixed packet rate "X" and an interrupt
rate of "i", then your optimal ring size should be "X/i". If you lower
the interrupt rate you end up hurting the throughput unless you
increase the buffer size. However, at a certain point the buffer size
itself starts becoming an issue. For example, with UDP flows I often
see massive packet drops if you tune the interrupt rate too low and
then put the system under heavy stress.

- Alex
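
To make the "X/i" rule of thumb above concrete, here is a minimal
user-space sketch of that arithmetic. The packet and interrupt rates
are made-up round numbers chosen for illustration; they are not
measurements from Paweł's router or from the mlx5 driver.

  /* Worked example of the ring-sizing relationship discussed above:
   * packet_rate ~= interrupt_rate * ring_size, so the ring only needs
   * roughly packet_rate / interrupt_rate descriptors.  The values
   * below are illustrative assumptions, not measured data.
   */
  #include <stdio.h>

  int main(void)
  {
          double pps = 10e6;       /* assumed packet rate "X": 10 Mpps  */
          double irq_rate = 50e3;  /* assumed interrupt rate "i": 50k/s */

          /* Optimal ring size per the X/i rule of thumb: 200 buffers. */
          printf("ring size ~= %.0f buffers\n", pps / irq_rate);

          /* Halving the interrupt rate doubles the buffering needed,
           * which is the buffer-bloat trade-off described above. */
          printf("at %.0f irq/s: ~%.0f buffers\n",
                 irq_rate / 2, pps / (irq_rate / 2));

          return 0;
  }

With these assumed numbers the ring only needs a couple of hundred
entries; pushing the interrupt rate lower grows the required ring size,
and therefore the cache footprint, proportionally.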