Date: Tue, 6 Nov 2018 16:47:46 +0800
From: Aaron Lu
To: Vlastimil Babka
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    Andrew Morton, Paweł Staszewski, Jesper Dangaard Brouer, Eric Dumazet,
    Tariq Toukan, Ilias Apalodimas, Yoel Caspersen, Mel Gorman,
    Saeed Mahameed, Michal Hocko, Dave Hansen, Alexander Duyck
Subject: Re: [PATCH v2 2/2] mm/page_alloc: use a single function to free page
Message-ID: <20181106084746.GA24198@intel.com>
References: <20181105085820.6341-1-aaron.lu@intel.com> <20181105085820.6341-2-aaron.lu@intel.com> <20181106053037.GD6203@intel.com>
On Tue, Nov 06, 2018 at 09:16:20AM +0100, Vlastimil Babka wrote:
> On 11/6/18 6:30 AM, Aaron Lu wrote:
> > We have multiple places that free a page; most of them do similar
> > things, so a common function can be used to reduce the code
> > duplication.
> >
> > It also avoids the situation where a bug is fixed in one function
> > but left unfixed in another.
> >
> > Signed-off-by: Aaron Lu
>
> Acked-by: Vlastimil Babka

Thanks.

> I assume there's no arch that would run page_ref_sub_and_test(1) slower
> than put_page_testzero(), for the critical __free_pages() case?

Good question.

I followed the non-arch-specific calls and found that
page_ref_sub_and_test() ends up calling atomic_sub_return(i, v), while
put_page_testzero() ends up calling atomic_sub_return(1, v). So they
should be the same for archs that do not have their own implementations.

Back to your question: I don't know either. If this is deemed unsafe,
we can keep the refcount-modifying part in the original functions and
move only the freeing part into a common function.

Regards,
Aaron

> > ---
> > v2: move comments close to code as suggested by Dave.
> >
> >  mm/page_alloc.c | 36 ++++++++++++++++--------------------
> >  1 file changed, 16 insertions(+), 20 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 91a9a6af41a2..4faf6b7bf225 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4425,9 +4425,17 @@ unsigned long get_zeroed_page(gfp_t gfp_mask)
> >  }
> >  EXPORT_SYMBOL(get_zeroed_page);
> >
> > -void __free_pages(struct page *page, unsigned int order)
> > +static inline void free_the_page(struct page *page, unsigned int order, int nr)
> >  {
> > -	if (put_page_testzero(page)) {
> > +	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
> > +
> > +	/*
> > +	 * Free a page by reducing its ref count by @nr.
> > +	 * If its refcount reaches 0, then according to its order:
> > +	 * order0: send to PCP;
> > +	 * high order: directly send to Buddy.
> > +	 */
> > +	if (page_ref_sub_and_test(page, nr)) {
> >  		if (order == 0)
> >  			free_unref_page(page);
> >  		else
> > @@ -4435,6 +4443,10 @@ void __free_pages(struct page *page, unsigned int order)
> >  	}
> >  }
> >
> > +void __free_pages(struct page *page, unsigned int order)
> > +{
> > +	free_the_page(page, order, 1);
> > +}
> >  EXPORT_SYMBOL(__free_pages);
> >
> >  void free_pages(unsigned long addr, unsigned int order)
> > @@ -4481,16 +4493,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> >
> >  void __page_frag_cache_drain(struct page *page, unsigned int count)
> >  {
> > -	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
> > -
> > -	if (page_ref_sub_and_test(page, count)) {
> > -		unsigned int order = compound_order(page);
> > -
> > -		if (order == 0)
> > -			free_unref_page(page);
> > -		else
> > -			__free_pages_ok(page, order);
> > -	}
> > +	free_the_page(page, compound_order(page), count);
> >  }
> >  EXPORT_SYMBOL(__page_frag_cache_drain);
> >
> > @@ -4555,14 +4558,7 @@ void page_frag_free(void *addr)
> >  {
> >  	struct page *page = virt_to_head_page(addr);
> >
> > -	if (unlikely(put_page_testzero(page))) {
> > -		unsigned int order = compound_order(page);
> > -
> > -		if (order == 0)
> > -			free_unref_page(page);
> > -		else
> > -			__free_pages_ok(page, order);
> > -	}
> > +	free_the_page(page, compound_order(page), 1);
> >  }
> >  EXPORT_SYMBOL(page_frag_free);