From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/36] mm: Allow hpages to be arbitrary order
Date: Fri, 15 May 2020 06:16:23 -0700
Message-Id: <20200515131656.12890-4-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200515131656.12890-1-willy@infradead.org>
References: <20200515131656.12890-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Remove the assumption in hpage_nr_pages() that compound pages are
necessarily PMD sized.  Move the relevant parts of mm.h to before the
include of huge_mm.h so we can use an inline function rather than a
macro.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/huge_mm.h |  5 +--
 include/linux/mm.h      | 96 ++++++++++++++++++++---------------------
 2 files changed, 50 insertions(+), 51 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cfbb0a87c5f0..6bec4b5b61e1 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -265,11 +265,10 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 	else
 		return NULL;
 }
+
 static inline int hpage_nr_pages(struct page *page)
 {
-	if (unlikely(PageTransHuge(page)))
-		return HPAGE_PMD_NR;
-	return 1;
+	return compound_nr(page);
 }
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 581e56275bc4..088acbda722d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -671,6 +671,54 @@ int vma_is_stack_for_current(struct vm_area_struct *vma);
 struct mmu_gather;
 struct inode;
 
+static inline unsigned int compound_order(struct page *page)
+{
+	if (!PageHead(page))
+		return 0;
+	return page[1].compound_order;
+}
+
+static inline bool hpage_pincount_available(struct page *page)
+{
+	/*
+	 * Can the page->hpage_pinned_refcount field be used? That field is in
+	 * the 3rd page of the compound page, so the smallest (2-page) compound
+	 * pages cannot support it.
+	 */
+	page = compound_head(page);
+	return PageCompound(page) && compound_order(page) > 1;
+}
+
+static inline int compound_pincount(struct page *page)
+{
+	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
+	page = compound_head(page);
+	return atomic_read(compound_pincount_ptr(page));
+}
+
+static inline void set_compound_order(struct page *page, unsigned int order)
+{
+	page[1].compound_order = order;
+}
+
+/* Returns the number of pages in this potentially compound page. */
+static inline unsigned long compound_nr(struct page *page)
+{
+	return 1UL << compound_order(page);
+}
+
+/* Returns the number of bytes in this potentially compound page. */
+static inline unsigned long page_size(struct page *page)
+{
+	return PAGE_SIZE << compound_order(page);
+}
+
+/* Returns the number of bits needed for the number of bytes in a page */
+static inline unsigned int page_shift(struct page *page)
+{
+	return PAGE_SHIFT + compound_order(page);
+}
+
 /*
  * FIXME: take this include out, include page-flags.h in
  * files which need it (119 of them)
@@ -875,54 +923,6 @@ static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
 	return compound_page_dtors[page[1].compound_dtor];
 }
 
-static inline unsigned int compound_order(struct page *page)
-{
-	if (!PageHead(page))
-		return 0;
-	return page[1].compound_order;
-}
-
-static inline bool hpage_pincount_available(struct page *page)
-{
-	/*
-	 * Can the page->hpage_pinned_refcount field be used? That field is in
-	 * the 3rd page of the compound page, so the smallest (2-page) compound
-	 * pages cannot support it.
-	 */
-	page = compound_head(page);
-	return PageCompound(page) && compound_order(page) > 1;
-}
-
-static inline int compound_pincount(struct page *page)
-{
-	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
-	page = compound_head(page);
-	return atomic_read(compound_pincount_ptr(page));
-}
-
-static inline void set_compound_order(struct page *page, unsigned int order)
-{
-	page[1].compound_order = order;
-}
-
-/* Returns the number of pages in this potentially compound page. */
-static inline unsigned long compound_nr(struct page *page)
-{
-	return 1UL << compound_order(page);
-}
-
-/* Returns the number of bytes in this potentially compound page. */
-static inline unsigned long page_size(struct page *page)
-{
-	return PAGE_SIZE << compound_order(page);
-}
-
-/* Returns the number of bits needed for the number of bytes in a page */
-static inline unsigned int page_shift(struct page *page)
-{
-	return PAGE_SHIFT + compound_order(page);
-}
-
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
-- 
2.26.2
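
For illustration only, not part of the patch above: a minimal sketch of how the
new hpage_nr_pages() behaves once it stops assuming PMD-sized compound pages.
It assumes a kernel-module context with CONFIG_TRANSPARENT_HUGEPAGE=y on x86-64
with 4KiB base pages; the function name example_order3_page() and its pr_info()
output are hypothetical, not taken from this series.

#include <linux/gfp.h>
#include <linux/huge_mm.h>
#include <linux/mm.h>
#include <linux/printk.h>

static void example_order3_page(void)
{
	/* Allocate an order-3 compound page: 8 base pages, 32KiB here. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 3);

	if (!page)
		return;

	/*
	 * Before this patch, hpage_nr_pages() returned HPAGE_PMD_NR (512 on
	 * x86-64) for any compound head page, i.e. it assumed the page was
	 * PMD sized.  With the patch it returns compound_nr(page), which is
	 * 8 for this page, while page_size() reports 32768 bytes and
	 * page_shift() reports 15.
	 */
	pr_info("pages=%d bytes=%lu shift=%u\n",
		hpage_nr_pages(page), page_size(page), page_shift(page));

	__free_pages(page, 3);
}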