References: <20230221110344.82818-1-kerneljasonxing@gmail.com> <48429c16fdaee59867df5ef487e73d4b1bf099af.camel@redhat.com>
From: Jason Xing
Date: Tue, 21 Feb 2023 23:46:49 +0800
Subject: Re: [PATCH net] udp: fix memory schedule error
To: Paolo Abeni
Cc: willemdebruijn.kernel@gmail.com, davem@davemloft.net, dsahern@kernel.org, edumazet@google.com, kuba@kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Jason Xing

On Tue, Feb 21, 2023 at 10:46 PM Paolo Abeni wrote:
>
> On Tue, 2023-02-21 at 21:39 +0800, Jason Xing wrote:
> > On Tue, Feb 21, 2023 at 8:27 PM Paolo Abeni wrote:
> > >
> > > On Tue, 2023-02-21 at 19:03 +0800, Jason Xing wrote:
> > > > From: Jason Xing
> > > >
> > > > Quoting from the commit 7c80b038d23e ("net: fix sk_wmem_schedule()
> > > > and sk_rmem_schedule() errors"):
> > > >
> > > > "If sk->sk_forward_alloc is 150000, and we need to schedule 150001 bytes,
> > > > we want to allocate 1 byte more (rounded up to one page),
> > > > instead of 150001"
> > >
> > > I'm wondering if this would cause measurable (even small) performance
> > > regression? Specifically under high packet rate, with BH and user-space
> > > processing happening on different CPUs.
> > >
> > > Could you please provide the relevant performance figures?
> >
> > Sure, I've done some basic tests on my machine as below.
> >
> > Environment: 16 cpus, 60G memory
> > Server: run "iperf3 -s -p [port]" command and start 500 processes.
> > Client: run "iperf3 -u -c 127.0.0.1 -p [port]" command and start 500 processes.
>
> Just for the records, with the above command each process will send
> pkts at 1mbs - not very relevant performance wise.
>
> Instead you could do:
>
> taskset 0x2 iperf -s &
> iperf -u -c 127.0.0.1 -b 0 -l 64

Thanks for your guidance. Here are some numbers measured as you
suggested; I ran the test several times.

          IFACE   rxpck/s    txpck/s    rxkB/s    txkB/s
Before:   lo      411073.41  411073.41  36932.38  36932.38
After:    lo      410308.73  410308.73  36863.81  36863.81

The above is one of many runs; it does not mean the original code
consistently outperforms the patched one. The output is not very
constant or stable across runs, I think. Please help me review these
numbers.

> > In theory, I have no clue about why it could cause some regression?
> > Maybe the memory allocation is not that enough compared to the
> > original code?
>
> As Eric noted, for UDP traffic, due to the expected average packet
> size, sk_forward_alloc is touched quite frequently, both with and
> without this patch, so there is little chance it will have any
> performance impact.

Well, I see.

Thanks,
Jason

>
> Cheers,
>
> Paolo
>