Date: Wed, 30 Jun 2021 07:58:55 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Jesper Dangaard Brouer
Cc: Chuck Lever, linux-nfs@vger.kernel.org, linux-mm@kvack.org, brouer@redhat.com
Subject: Re: [PATCH v3] mm/page_alloc: Further fix __alloc_pages_bulk() return value
Message-ID: <20210630065855.GH3840@techsingularity.net>
References: <162497449506.16614.7781489905877008435.stgit@klimt.1015granger.net>
On Tue, Jun 29, 2021 at 06:01:12PM +0200, Jesper Dangaard Brouer wrote:
> On 29/06/2021 15.48, Chuck Lever wrote:
> 
> > The call site in __page_pool_alloc_pages_slow() also seems to be
> > confused on this matter. It should be attended to by someone who
> > is familiar with that code.
> 
> I don't think we need a fix for __page_pool_alloc_pages_slow(), as
> the array is guaranteed to be empty.
> 
> But a fix would look like this:
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index c137ce308c27..1b04538a3da3 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -245,22 +245,23 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	if (unlikely(pp_order))
>  		return __page_pool_alloc_page_order(pool, gfp);
> 
>  	/* Unnecessary as alloc cache is empty, but guarantees zero count */
> -	if (unlikely(pool->alloc.count > 0))
> +	if (unlikely(pool->alloc.count > 0))	// ^^^^^^^^^^^^^^^^^^^^^^
>  		return pool->alloc.cache[--pool->alloc.count];
> 
>  	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>  	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
> 
> +	/* bulk API ret value also counts existing pages, but array is empty */
>  	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
>  	if (unlikely(!nr_pages))
>  		return NULL;
> 
>  	/* Pages have been filled into alloc.cache array, but count is zero and
>  	 * page element have not been (possibly) DMA mapped.
>  	 */
> -	for (i = 0; i < nr_pages; i++) {
> +	for (i = pool->alloc.count; i < nr_pages; i++) {

That last part would break, as the loop body itself updates
pool->alloc.count. Just setting pool->alloc.count = nr_pages instead
would break if PP_FLAG_DMA_MAP was set and page_pool_dma_map() failed.
Right?

-- 
Mel Gorman
SUSE Labs
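
[For readers without the kernel tree at hand, below is a minimal
user-space sketch of the loop shape Mel is describing. It is not the
page_pool code: struct fake_pool, dma_map_ok(), and the integer "page"
cookies are stand-ins for pool->alloc, page_pool_dma_map(), and struct
page pointers, and the loop body is a reconstruction of the pattern in
__page_pool_alloc_pages_slow(), not a copy of it. What it shows is that
the read index i must stay independent of pool.count, because the body
advances pool.count as it compacts each successfully mapped page to the
front of the cache; and that pages failing the map step are dropped, so
assigning pool.count = nr_pages up front would overcount.]

#include <stdio.h>
#include <stdbool.h>

#define BULK 8

struct fake_pool {
	int cache[BULK];	/* stands in for pool->alloc.cache[] */
	int count;		/* stands in for pool->alloc.count   */
};

/* Pretend every third "page" fails DMA mapping. */
static bool dma_map_ok(int page)
{
	return (page % 3) != 0;
}

int main(void)
{
	struct fake_pool pool = { .count = 0 };
	int nr_pages = BULK;	/* as if alloc_pages_bulk_array() filled it */
	int i;

	for (i = 0; i < nr_pages; i++)
		pool.cache[i] = i + 1;	/* fake page cookies 1..8 */

	/*
	 * The loop reads with i and writes with pool.count, which the
	 * body increments for every page it keeps. Seeding i from
	 * pool.count entangles the read cursor with the write cursor,
	 * and setting pool.count = nr_pages beforehand would count the
	 * pages dropped below when mapping fails.
	 */
	for (i = 0; i < nr_pages; i++) {
		int page = pool.cache[i];

		if (!dma_map_ok(page))
			continue;	/* kernel would put_page() here */

		pool.cache[pool.count++] = page;
	}

	printf("kept %d of %d pages\n", pool.count, nr_pages);
	return 0;
}

[With the fake predicate above, pages 3 and 6 fail mapping, so the
sketch prints "kept 6 of 8 pages" while the cache holds the six mapped
pages compacted at the front.]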