Date: Thu, 8 Dec 2016 15:11:01 +0000
From: Mel Gorman
To: Jesper Dangaard Brouer
Cc: Eric Dumazet, Andrew Morton, Christoph Lameter, Michal Hocko,
    Vlastimil Babka, Johannes Weiner, Joonsoo Kim, Linux-MM, Linux-Kernel
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
Message-ID: <20161208151101.pigfrnqd5i4n45uv@techsingularity.net>
In-Reply-To: <20161208154813.5dafae7b@redhat.com>

On Thu, Dec 08, 2016 at 03:48:13PM +0100, Jesper Dangaard Brouer wrote:
> On Thu, 8 Dec 2016 11:06:56 +0000
> Mel Gorman wrote:
>
> > On Thu, Dec 08, 2016 at 11:43:08AM +0100, Jesper Dangaard Brouer wrote:
> > > > That's expected. In the initial sniff-test, I saw negligible packet
> > > > loss. I'm waiting to see what the full set of network tests look
> > > > like before doing any further adjustments.
> > >
> > > For netperf I will not recommend adjusting the global default
> > > /proc/sys/net/core/rmem_default as netperf have means of adjusting
> > > this value from the application (which were the options you setup
> > > too low and just removed). I think you should keep this as the
> > > default for now (unless Eric says something else), as this should
> > > cover most users.
> >
> > Ok, the current state is that buffer sizes are only set for netperf
> > UDP_STREAM and only when running over a real network. The values
> > selected were specific to the network I had available so milage may
> > vary. localhost is left at the defaults.
>
> Looks like you made a mistake when re-implementing using buffer sizes
> for netperf.

We appear to have a disconnect. This was reintroduced in response to your
comment "For netperf I will not recommend adjusting the global default
/proc/sys/net/core/rmem_default as netperf have means of adjusting this
value from the application". My understanding was that netperf's means of
doing that is the -s and -S switches for the send and receive buffers, so
I reintroduced them and avoided altering [r|w]mem_default. Leaving the
defaults resulted in some UDP packet loss on a 10GbE network, so some
upward adjustment was needed.

From my perspective, either adjusting [r|w]mem_default or specifying
-s -S works for the UDP_STREAM issue, but using the switches means that
only netperf is affected; other loads like sockperf and netpipe will need
to be evaluated separately, which I don't mind doing.
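For reference, the two approaches look roughly like this; the sizes and
the host below are placeholders rather than the exact values the scripts
end up using:

  # Alternative 1: raise the global defaults so every socket gets the
  # larger buffers (the knob you recommend leaving at the default)
  sysctl -w net.core.rmem_default=851968
  sysctl -w net.core.wmem_default=851968

  # Alternative 2: have netperf request the buffers itself via the
  # test-specific -s (local) and -S (remote) socket buffer options
  netperf -t UDP_STREAM -H 192.168.0.2 -l 60 -- -s 851968 -S 851968 -m 1400

The second form is what the scripts currently do, which is why only
netperf is affected and sockperf/netpipe are left alone.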
> See patch below signature.
>
> Besides I think you misunderstood me, you can adjust:
>  sysctl net.core.rmem_max
>  sysctl net.core.wmem_max
>
> And you should if you plan to use/set 851968 as socket size for UDP
> remote tests, else you will be limited to the "max" values (212992 well
> actually 425984 2x default value, for reasons I cannot remember)

The intent is to use the larger values to avoid packet loss on
UDP_STREAM. The sysctl adjustment I have in mind is sketched below the
signature.

-- 
Mel Gorman
SUSE Labs
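As an aside below the signature, a minimal sketch of that adjustment; the
851968 figure is simply the socket size you mention, not a recommendation:

  # Raise the per-socket caps so a setsockopt() request for ~832KB is not
  # silently clamped to the old maximum
  sysctl -w net.core.rmem_max=851968
  sysctl -w net.core.wmem_max=851968

If I remember correctly, the 2x factor (212992 vs 425984) comes from the
kernel doubling the value passed to setsockopt() to leave room for
bookkeeping overhead, as described in socket(7).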