Date: Thu, 22 Oct 2020 12:45:11 -0400
From: Rik van Riel
To: Hugh Dickins
Cc: Yu Xu, Andrew Morton, Mel Gorman, Andrea Arcangeli, Matthew Wilcox,
    linux-mm@kvack.org, kernel-team@fb.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm,thp,shmem: limit shmem THP alloc gfp_mask
Message-ID: <20201022124511.72448a5f@imladris.surriel.com>

The allocation flags of anonymous transparent huge pages can be
controlled through /sys/kernel/mm/transparent_hugepage/defrag, which
can help keep the system from getting bogged down in the page reclaim
and compaction code when many THPs are being allocated simultaneously.
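The defrag policy gates whether a huge page allocation is allowed to enter direct reclaim and compaction. As a rough user-space sketch of that idea (the flag values, enum, and `huge_gfpmask` helper below are illustrative stand-ins, not the kernel's gfp.h definitions), the mapping from defrag mode to allocation behavior could be modeled as:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits -- NOT the kernel's gfp.h values. */
#define GFP_BASE           0x1u  /* plain attempt, fail fast */
#define GFP_DIRECT_RECLAIM 0x2u  /* caller may stall in reclaim/compaction */
#define GFP_KSWAPD_RECLAIM 0x4u  /* only wake background reclaim */
#define GFP_NORETRY        0x8u  /* give up quickly on failure */

/* The five modes selectable via transparent_hugepage/defrag. */
enum thp_defrag {
	DEFRAG_ALWAYS,
	DEFRAG_DEFER,
	DEFRAG_DEFER_MADVISE,
	DEFRAG_MADVISE,
	DEFRAG_NEVER,
};

/* Hypothetical model of how a defrag mode maps to allocation flags. */
unsigned int huge_gfpmask(enum thp_defrag mode, bool vma_madvised)
{
	switch (mode) {
	case DEFRAG_ALWAYS:
		return GFP_BASE | GFP_DIRECT_RECLAIM |
		       (vma_madvised ? 0 : GFP_NORETRY);
	case DEFRAG_DEFER:
		return GFP_BASE | GFP_KSWAPD_RECLAIM;
	case DEFRAG_DEFER_MADVISE:
		/* stall only for madvise(MADV_HUGEPAGE) regions */
		return GFP_BASE | (vma_madvised ? GFP_DIRECT_RECLAIM
						: GFP_KSWAPD_RECLAIM);
	case DEFRAG_MADVISE:
		return GFP_BASE | (vma_madvised ? GFP_DIRECT_RECLAIM : 0);
	case DEFRAG_NEVER:
	default:
		/* fail fast when no free 2MB page is available */
		return GFP_BASE;
	}
}
```

Under "never", no reclaim bit is set at all, so the allocation either finds a free 2MB page or fails immediately; that is the behavior this patch extends to shmem.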
However, the gfp_mask for shmem THP allocations was not limited by those
configuration settings, and some workloads ended up with all CPUs stuck
on the LRU lock in the page reclaim code, trying to allocate dozens of
THPs simultaneously.

This patch applies the same configured limitation to shmem hugepage
allocations, to prevent that from happening. This way a THP defrag
setting of "never" or "defer+madvise" will result in quick allocation
failures without direct reclaim when no 2MB free pages are available.

Signed-off-by: Rik van Riel
---
v2: move gfp calculation to shmem_getpage_gfp as suggested by Yu Xu

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c603237e006c..0a5b164a26d9 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -614,6 +614,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
 extern void pm_restrict_gfp_mask(void);
 extern void pm_restore_gfp_mask(void);
 
+extern gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma);
+
 #ifdef CONFIG_PM_SLEEP
 extern bool pm_suspended_storage(void);
 #else
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9474dbc150ed..9b08ce5cc387 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -649,7 +649,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
  *	    available
  * never: never stall for any thp allocation
  */
-static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
+gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
 {
 	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 537c137698f8..9710b9df91e9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1545,8 +1545,8 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		return NULL;
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
-	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
-			       HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true);
+	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(),
+			       true);
 	shmem_pseudo_vma_destroy(&pvma);
 	if (page)
 		prep_transhuge_page(page);
@@ -1802,6 +1802,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	struct page *page;
 	enum sgp_type sgp_huge = sgp;
 	pgoff_t hindex = index;
+	gfp_t huge_gfp;
 	int error;
 	int once = 0;
 	int alloced = 0;
@@ -1887,7 +1888,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	}
 
 alloc_huge:
-	page = shmem_alloc_and_acct_page(gfp, inode, index, true);
+	huge_gfp = alloc_hugepage_direct_gfpmask(vma);
+	page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
 	if (IS_ERR(page)) {
 alloc_nohuge:
 		page = shmem_alloc_and_acct_page(gfp, inode,