Date: Thu, 8 Dec 2016 15:48:13 +0100
From: Jesper Dangaard Brouer
To: Mel Gorman
Cc: Eric Dumazet, Andrew Morton, Christoph Lameter, Michal Hocko,
	Vlastimil Babka, Johannes Weiner, Joonsoo Kim, Linux-MM,
	Linux-Kernel, brouer@redhat.com
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
Message-ID: <20161208154813.5dafae7b@redhat.com>
In-Reply-To: <20161208110656.bnkvqg73qnjkehbc@techsingularity.net>
References: <20161207101228.8128-1-mgorman@techsingularity.net>
	<1481137249.4930.59.camel@edumazet-glaptop3.roam.corp.google.com>
	<20161207194801.krhonj7yggbedpba@techsingularity.net>
	<1481141424.4930.71.camel@edumazet-glaptop3.roam.corp.google.com>
	<20161207211958.s3ymjva54wgakpkm@techsingularity.net>
	<20161207232531.fxqdgrweilej5gs6@techsingularity.net>
	<20161208092231.55c7eacf@redhat.com>
	<20161208091806.gzcxlerxprcjvt3l@techsingularity.net>
	<20161208114308.1c6a424f@redhat.com>
	<20161208110656.bnkvqg73qnjkehbc@techsingularity.net>

On Thu, 8 Dec 2016 11:06:56 +0000 Mel Gorman wrote:

> On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > That's expected. In the initial sniff-test, I saw negligible packet
> > > loss. I'm waiting to see what the full set of network tests look like
> > > before doing any further adjustments.
> >
> > For netperf I will not recommend adjusting the global default
> > /proc/sys/net/core/rmem_default, as netperf has means of adjusting this
> > value from the application (which were the options you set up too low
> > and just removed). I think you should keep this as the default for now
> > (unless Eric says something else), as this should cover most users.
>
> Ok, the current state is that buffer sizes are only set for netperf
> UDP_STREAM and only when running over a real network. The values selected
> were specific to the network I had available, so mileage may vary.
> localhost is left at the defaults.

Looks like you made a mistake when re-implementing the use of socket
buffer sizes for netperf. See the patch below my signature.

Besides, I think you misunderstood me: you can adjust

 sysctl net.core.rmem_max
 sysctl net.core.wmem_max

and you should, if you plan to use/set 851968 as the socket size for the
UDP remote tests. Otherwise you will be limited to the "max" values
(212992, well actually 425984, 2x the default value, for reasons I cannot
remember). See the rough sketch after the patch below.

https://github.com/gormanm/mmtests/commit/de9f8cdb7146021

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


[PATCH] mmtests: actually use variable SOCKETSIZE_OPT

From: Jesper Dangaard Brouer

commit 7f16226577b2 ("netperf: Set remote and local socket max buffer
sizes") removed netperf's setting of the socket buffer sizes and instead
used global /proc/sys settings.
commit de9f8cdb7146 ("netperf: Only adjust socket sizes for UDP_STREAM")
re-added explicit netperf setting of socket buffer sizes for remote-host
testing (saved in SOCKETSIZE_OPT). The only problem is that this variable
is never used after commit 7f16226577b2.

Simply use $SOCKETSIZE_OPT when invoking the netperf command.

Signed-off-by: Jesper Dangaard Brouer
---
 shellpack_src/src/netperf/netperf-bench |    2 +-
 shellpacks/shellpack-bench-netperf      |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/shellpack_src/src/netperf/netperf-bench b/shellpack_src/src/netperf/netperf-bench
index 8e7d02864c4a..b2820610936e 100755
--- a/shellpack_src/src/netperf/netperf-bench
+++ b/shellpack_src/src/netperf/netperf-bench
@@ -93,7 +93,7 @@ mmtests_server_ctl start --serverside-name $PROTOCOL-$SIZE
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE

diff --git a/shellpacks/shellpack-bench-netperf b/shellpacks/shellpack-bench-netperf
index 2ce26ba39f1b..7356082d5a78 100755
--- a/shellpacks/shellpack-bench-netperf
+++ b/shellpacks/shellpack-bench-netperf
@@ -190,7 +190,7 @@ for ITERATION in `seq 1 $ITERATIONS`; do
 		-t $PROTOCOL \
 		-i 3,3 -I 95,5 \
 		-H $SERVER_HOST \
-		-- $MSGSIZE_OPT $EXTRA \
+		-- $SOCKETSIZE_OPT $MSGSIZE_OPT $EXTRA \
 		2>&1 | tee $LOGDIR_RESULTS/$PROTOCOL-${SIZE}.$ITERATION \
 			|| die Failed to run netperf
 	monitor_post_hook $LOGDIR_RESULTS $SIZE
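
For completeness, here is a rough sketch of the sysctl adjustment I mean
above (the 851968 value and the netperf command line are only illustrative
assumptions, not the exact options mmtests generates):

  # Raise the max socket buffer limits so that setsockopt(SO_RCVBUF /
  # SO_SNDBUF) requests of 851968 bytes are not clamped to the 212992
  # default of net.core.rmem_max / net.core.wmem_max.
  sysctl -w net.core.rmem_max=851968
  sysctl -w net.core.wmem_max=851968

  # netperf can then request matching local (-s) and remote (-S) socket
  # buffers for the remote UDP_STREAM test:
  netperf -t UDP_STREAM -H $SERVER_HOST -- -s 851968 -S 851968 -m 1024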