Date: Thu, 7 Oct 2021 12:31:09 -0700
From: Andrew Morton
To: "Matthew Wilcox (Oracle)"
Cc: Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm: Optimise put_pages_list()
Message-Id: <20211007123109.6a49c7c625e414acf7546c89@linux-foundation.org>
In-Reply-To: <20211007192138.561673-1-willy@infradead.org>
References: <20211007192138.561673-1-willy@infradead.org>

On Thu, 7 Oct 2021 20:21:37 +0100 "Matthew Wilcox (Oracle)" wrote:

> Instead of calling put_page() one page at a time, pop pages off
> the list if their refcount was too high and pass the remainder to
> put_unref_page_list().  This should be a speed improvement, but I have
> no measurements to support that.  Current callers do not care about
> performance, but I hope to add some which do.

Don't you think it would actually be slower to take an additional pass
across the list, if the list is long enough to cause cache thrashing?
Maybe it's faster for small lists.
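For concreteness, the two-pass structure the changelog describes can be modelled in standalone C. This is a toy sketch, not the kernel implementation: the `struct page`, the list helpers, and the `put_unref_page_list()` stand-in here are all simplified inventions so the shape of the approach can be seen in isolation. Pass one drops one reference per page and unlinks any page that is still referenced elsewhere; whatever remains on the list is then handed to the batched free in a single call.

```c
#include <stdlib.h>

/* Toy stand-ins for illustration only; the real kernel types and
 * helpers are different. */
struct page {
    int refcount;
    struct page *prev, *next;   /* circular doubly-linked list links */
};

/* Unlink a page from whatever list it is on. */
static void list_del(struct page *p)
{
    p->prev->next = p->next;
    p->next->prev = p->prev;
}

/* Stand-in for the batched free: releases every page still on the
 * list in one sweep, then empties the list. */
static int batch_free_count;
static void put_unref_page_list(struct page *head)
{
    struct page *p = head->next;
    while (p != head) {
        struct page *next = p->next;
        free(p);
        batch_free_count++;
        p = next;
    }
    head->next = head->prev = head;
}

/* The optimisation from the patch, in miniature: first pass drops a
 * reference on each page and pops off pages whose refcount is still
 * positive (someone else still holds them); the remainder go to the
 * batched free in one call instead of one put_page() at a time. */
static void put_pages_list(struct page *head)
{
    struct page *p = head->next;
    while (p != head) {
        struct page *next = p->next;
        if (--p->refcount > 0)      /* still referenced elsewhere: */
            list_del(p);            /* keep it alive, off this list */
        p = next;
    }
    put_unref_page_list(head);      /* batch-free whatever is left */
}
```

Andrew's concern maps directly onto this sketch: the list is walked once by `put_pages_list()` and again by `put_unref_page_list()`, so on a list large enough to fall out of cache the second pass re-misses every node, whereas for short lists the batching should win.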