Date: Sun, 28 Jun 2020 09:59:51 +0300
From: Mike Rapoport
To: Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, Abdul Haleem, Andrew Morton,
    Andy Lutomirski, Arnd Bergmann, Christophe Leroy, Joerg Roedel,
    Max Filippov, Mike Rapoport, Peter Zijlstra, Satheesh Rajendran,
    Stafford Horne, Stephen Rothwell, Steven Rostedt,
    linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-mm@kvack.org,
    linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
    linux-xtensa@linux-xtensa.org, linuxppc-dev@lists.ozlabs.org,
    openrisc@lists.librecores.org, sparclinux@vger.kernel.org
Subject: Re: [PATCH 9/8] mm: Account PMD tables like PTE tables
Message-ID: <20200628065951.GB576120@kernel.org>
References: <20200627143453.31835-1-rppt@kernel.org>
 <20200627184642.GF25039@casper.infradead.org>
In-Reply-To: <20200627184642.GF25039@casper.infradead.org>

On Sat, Jun 27, 2020 at 07:46:42PM +0100, Matthew Wilcox wrote:
> We account the PTE level of the page tables to the process in order to
> make smarter OOM decisions and help diagnose why memory is fragmented.
> For these same reasons, we should account pages allocated for PMDs.
> With larger process address spaces and ASLR, the number of PMDs in use
> is higher than it used to be, so the inaccuracy is starting to matter.
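As a point of reference for what this accounting affects: the NR_PAGETABLE
counter touched below is the one exported as the "PageTables" line of
/proc/meminfo, so after this change PMD pages show up there as well. A
minimal, illustrative parser for that line (not part of the patch):

```python
def pagetables_kb(meminfo_text):
    """Return the PageTables value in kB from /proc/meminfo-style text,
    or None if the line is absent."""
    for line in meminfo_text.splitlines():
        if line.startswith("PageTables:"):
            # Line format: "PageTables:     5120 kB"
            return int(line.split()[1])
    return None

# Demo on a sample snippet (on Linux, pass open("/proc/meminfo").read()):
sample = "MemTotal:       16384 kB\nPageTables:      5120 kB\n"
print(pagetables_kb(sample))  # -> 5120
```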
>
> Signed-off-by: Matthew Wilcox (Oracle)

Reviewed-by: Mike Rapoport

> ---
>  include/linux/mm.h | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index dc7b87310c10..b283e25fcffa 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2271,7 +2271,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
>  	return ptlock_ptr(pmd_to_page(pmd));
>  }
>
> -static inline bool pgtable_pmd_page_ctor(struct page *page)
> +static inline bool pmd_ptlock_init(struct page *page)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	page->pmd_huge_pte = NULL;
> @@ -2279,7 +2279,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
>  	return ptlock_init(page);
>  }
>
> -static inline void pgtable_pmd_page_dtor(struct page *page)
> +static inline void pmd_ptlock_free(struct page *page)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
> @@ -2296,8 +2296,8 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
>  	return &mm->page_table_lock;
>  }
>
> -static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
> -static inline void pgtable_pmd_page_dtor(struct page *page) {}
> +static inline bool pmd_ptlock_init(struct page *page) { return true; }
> +static inline void pmd_ptlock_free(struct page *page) {}
>
>  #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
>
> @@ -2310,6 +2310,22 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
>  	return ptl;
>  }
>
> +static inline bool pgtable_pmd_page_ctor(struct page *page)
> +{
> +	if (!pmd_ptlock_init(page))
> +		return false;
> +	__SetPageTable(page);
> +	inc_zone_page_state(page, NR_PAGETABLE);
> +	return true;
> +}
> +
> +static inline void pgtable_pmd_page_dtor(struct page *page)
> +{
> +	pmd_ptlock_free(page);
> +	__ClearPageTable(page);
> +	dec_zone_page_state(page, NR_PAGETABLE);
> +}
> +
>  /*
>   * No scalability reason to split PUD locks yet, but follow the same pattern
>   * as the PMD locks to make it easier if we decide to.  The VM should not be
> --
> 2.27.0
>

--
Sincerely yours,
Mike.