Date: Thu, 25 Mar 2021 13:25:56 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Matthew Wilcox
Cc: Andrew Morton, Uladzislau Rezki, Chuck Lever, Jesper Dangaard Brouer,
	Christoph Hellwig, Alexander Duyck, Vlastimil Babka, Ilias Apalodimas,
	LKML, Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 0/9 v6] Introduce a bulk order-0 page allocator with two in-tree users
Message-ID: <20210325132556.GS3697@techsingularity.net>
References: <20210325114228.27719-1-mgorman@techsingularity.net>
	<20210325125001.GW1719932@casper.infradead.org>
In-Reply-To: <20210325125001.GW1719932@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
User-Agent: Mutt/1.10.1 (2018-07-13)
List-ID: linux-nfs@vger.kernel.org

On Thu, Mar 25, 2021 at 12:50:01PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 11:42:19AM +0000, Mel Gorman wrote:
> > This series introduces a bulk order-0 page allocator with sunrpc and
> > the network page pool being the first users. The implementation is not
> > efficient as semantics needed to be ironed out first. If no other
> > semantic changes are needed, it can be made more efficient. Despite
> > that, this is a performance-related improvement for users that require
> > multiple pages for an operation without multiple round-trips to the
> > page allocator. Quoting the last patch for the high-speed networking
> > use-case:
> >
> > Kernel	XDP stats	CPU	pps		Delta
> > Baseline	XDP-RX CPU	total	3,771,046	n/a
> > List		XDP-RX CPU	total	3,940,242	+4.49%
> > Array		XDP-RX CPU	total	4,249,224	+12.68%
> >
> > From the SUNRPC traces of svc_alloc_arg():
> >
> > Single page: 25.007 us per call over 532,571 calls
> > Bulk list:    6.258 us per call over 517,034 calls
> > Bulk array:   4.590 us per call over 517,442 calls
> >
> > Both potential users in this series are corner cases (NFS and
> > high-speed networks) so it is unlikely that most users will see any
> > benefit in the short term. Other potential users are batch allocations
> > for page cache readahead, fault around and SLUB allocations when
> > high-order pages are unavailable. It's unknown how much benefit would
> > be seen by converting multiple page allocation calls to a single batch
> > or what difference it may make to headline performance.
>
> We have a third user, vmalloc(), with a 16% perf improvement. I know the
> email says 21% but that includes the 5% improvement from switching to
> kvmalloc() to allocate area->pages.
>
> https://lore.kernel.org/linux-mm/20210323133948.GA10046@pc638.lan/

That's fairly promising. Assuming the bulk allocator gets merged, it would
make sense to add vmalloc on top.
Thanks for bringing it to my attention because it's far more relevant than
my imaginary potential use cases.

> I don't know how many _frequent_ vmalloc users we have that will benefit
> from this, but it's probably more than will benefit from improvements
> to 200Gbit networking performance.

I think it was 100Gbit being looked at but your point is still valid and
there is no harm in incrementally improving over time.

-- 
Mel Gorman
SUSE Labs