Date: Fri, 12 Mar 2021 14:58:14 +0000
From: Matthew Wilcox
To: Jesper Dangaard Brouer
Cc: Mel Gorman, Andrew Morton, Chuck Lever, Christoph Hellwig, LKML,
	Linux-Net, Linux-MM, Linux-NFS
Subject: Re: [PATCH 2/5] mm/page_alloc: Add a bulk page allocator
Message-ID: <20210312145814.GA2577561@casper.infradead.org>
References: <20210310104618.22750-1-mgorman@techsingularity.net>
	<20210310104618.22750-3-mgorman@techsingularity.net>
	<20210310154650.ad9760cd7cb9ac4acccf77ee@linux-foundation.org>
	<20210311084200.GR3697@techsingularity.net>
	<20210312124609.33d4d4ba@carbon>
In-Reply-To: <20210312124609.33d4d4ba@carbon>

On Fri, Mar 12, 2021 at 12:46:09PM +0100, Jesper Dangaard Brouer wrote:
> In my page_pool patch I'm bulk allocating 64 pages. I wanted to ask if
> this is too much? (PP_ALLOC_CACHE_REFILL=64).
>
> The mlx5 driver have a while loop for allocation 64 pages, which it
> used in this case, that is why 64 is chosen. If we choose a lower
> bulk number, then the bulk-alloc will just be called more times.

The thing about batching is that smaller batches are often better.
Let's suppose you need to allocate 100 pages for something, and the
page allocator takes up 90% of your latency budget.  Batching just ten
pages at a time is going to reduce the overhead to 9%.  Going to 64
pages reduces the overhead from 9% to 2% -- maybe that's important, but
possibly not.

> The result of the API is to deliver pages as a double-linked list via
> LRU (page->lru member). If you are planning to use llist, then how to
> handle this API change later?
>
> Have you notice that the two users store the struct-page pointers in an
> array? We could have the caller provide the array to store struct-page
> pointers, like we do with kmem_cache_alloc_bulk API.

My preference would be for a pagevec.  That does limit you to 15 pages
per call [1], but I do think that might be enough.  And the overhead of
manipulating a linked list isn't free.

[1] Patches exist to increase this, because it turns out that 15 may
not be enough for all systems!  But even then you would be limited to
255 as an absolute hard cap.
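To put the call-count arithmetic above in code form: what matters is how
many round trips the caller makes into the allocator, since each trip
carries a roughly fixed entry/exit cost.  A rough sketch -- bulk_alloc()
below is just a stand-in name and signature, not what this patch actually
proposes:

#include <linux/gfp.h>
#include <linux/minmax.h>

/*
 * Stand-in for the proposed bulk allocator: fills 'pages' with up to
 * 'nr' pages and returns how many it actually got.  Hypothetical
 * signature, used here only to show the batching arithmetic.
 */
unsigned int bulk_alloc(gfp_t gfp, unsigned int nr, struct page **pages);

static unsigned int fill_cache(struct page **cache, unsigned int total,
			       unsigned int batch)
{
	unsigned int got = 0;

	/*
	 * For total = 100: a batch of 1 is 100 trips into the allocator,
	 * a batch of 10 is 10 trips, a batch of 64 is 2 trips.  The fixed
	 * per-call cost shrinks with the number of trips, which is why
	 * going from 10 to 64 only buys a few more percent.
	 */
	while (got < total) {
		unsigned int n = bulk_alloc(GFP_KERNEL,
					    min(batch, total - got),
					    cache + got);
		if (!n)
			break;	/* caller copes with a partial refill */
		got += n;
	}
	return got;
}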
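And for the record, a pagevec flavour of the interface might look
something like this from the caller's side.  alloc_pages_bulk_pvec() is
made up here; only struct pagevec, pagevec_init() and PAGEVEC_SIZE
(15 today) are the existing machinery from <linux/pagevec.h>:

#include <linux/gfp.h>
#include <linux/pagevec.h>

/*
 * Hypothetical pagevec variant of the bulk allocator: fills
 * pvec->pages[] and returns the number of pages obtained
 * (at most PAGEVEC_SIZE).
 */
unsigned int alloc_pages_bulk_pvec(gfp_t gfp, struct pagevec *pvec);

/*
 * Caller keeps its own array, as both current users do; the pagevec
 * is just the transport between allocator and caller.
 */
static unsigned int refill_array(struct page **array, gfp_t gfp)
{
	struct pagevec pvec;
	unsigned int i, nr;

	pagevec_init(&pvec);
	nr = alloc_pages_bulk_pvec(gfp, &pvec);

	for (i = 0; i < nr; i++)
		array[i] = pvec.pages[i];

	return nr;
}

No page->lru manipulation on either side, and the 15-page cap keeps the
pagevec small enough to sit on the caller's stack.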