Date: Thu, 15 Feb 2018 12:14:05 -0800
From: Matthew Wilcox
To: Christopher Lameter
Cc: Michal Hocko, David Rientjes, Andrew Morton, Jonathan Corbet,
    Vlastimil Babka, Mel Gorman, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-doc@vger.kernel.org
Subject: Re: [patch 1/2] mm, page_alloc: extend kernelcore and movablecore for percent
Message-ID: <20180215201405.GA22948@bombadil.infradead.org>
References: <20180214095911.GB28460@dhcp22.suse.cz>
 <20180215144525.GG7275@dhcp22.suse.cz>
 <20180215151129.GB12360@bombadil.infradead.org>

On Thu, Feb 15, 2018 at 09:49:00AM -0600, Christopher Lameter wrote:
> On Thu, 15 Feb 2018, Matthew Wilcox wrote:
>
> > What if ... on startup, slab allocated a MAX_ORDER page for itself.
> > It would then satisfy its own page allocation requests from this giant
> > page.  If we start to run low on memory in the rest of the system, slab
> > can be induced to return some of it via its shrinker.  If slab runs low
> > on memory, it tries to allocate another MAX_ORDER page for itself.
>
> The mechanism for inducing memory to be released back is not there, but
> you can run SLUB with MAX_ORDER allocations by passing "slub_min_order=9"
> or so on bootup.

Maybe we should try this patch, to automatically scale the slub page size
with the amount of memory in the machine?
diff --git a/mm/internal.h b/mm/internal.h
index e6bd35182dae..7059a8389194 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -167,6 +167,7 @@
 extern void prep_compound_page(struct page *page, unsigned int order);
 extern void post_alloc_hook(struct page *page, unsigned int order,
 					gfp_t gfp_flags);
 extern int user_min_free_kbytes;
+extern unsigned long __meminitdata nr_kernel_pages;
 
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ef9c259db041..3c51bb22403f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -264,7 +264,7 @@
 int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
 int watermark_scale_factor = 10;
 
-static unsigned long __meminitdata nr_kernel_pages;
+unsigned long __meminitdata nr_kernel_pages;
 static unsigned long __meminitdata nr_all_pages;
 static unsigned long __meminitdata dma_reserve;
diff --git a/mm/slub.c b/mm/slub.c
index e381728a3751..abca4a6e9b6c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4194,6 +4194,23 @@ void __init kmem_cache_init(void)
 	if (debug_guardpage_minorder())
 		slub_max_order = 0;
 
+	if (slub_min_order == 0) {
+		unsigned long numentries = nr_kernel_pages;
+
+		/*
+		 * Above 4GB, we start to care more about fragmenting large
+		 * pages than about using the minimum amount of memory.
+		 * Scale the slub page size at half the rate that we scale
+		 * the memory size; at 4GB we double the page size to 8k,
+		 * 16GB to 16k, 64GB to 32k, 256GB to 64k.
+		 */
+		while (numentries > (4UL << 30)) {
+			if (slub_min_order >= slub_max_order)
+				break;
+			slub_min_order++;
+			numentries /= 4;
+		}
+	}
 	kmem_cache_node = &boot_kmem_cache_node;
 	kmem_cache = &boot_kmem_cache;
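
For illustration, the scaling loop above can be sketched in userspace to show
which order it picks for a given memory size. This is not part of the patch:
scale_min_order() and the size table are made-up names, max_order is pinned
at 3 (the usual slub_max_order default), and the input is treated as a byte
count, whereas the patch seeds numentries from nr_kernel_pages, which counts
pages rather than bytes.

/*
 * Userspace sketch of the patch's scaling loop -- illustration only.
 * Assumes a 64-bit unsigned long and a 4k base page size.
 */
#include <stdio.h>

static unsigned int scale_min_order(unsigned long bytes)
{
	unsigned int min_order = 0;	/* as if slub_min_order == 0 */
	unsigned int max_order = 3;	/* typical slub_max_order default */

	/* One extra order for every factor of four above 4GB. */
	while (bytes > (4UL << 30)) {
		if (min_order >= max_order)
			break;
		min_order++;
		bytes /= 4;
	}
	return min_order;
}

int main(void)
{
	unsigned long sizes[] = {
		2UL << 30, 4UL << 30, 16UL << 30, 64UL << 30, 256UL << 30,
	};
	unsigned int i, order;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		order = scale_min_order(sizes[i]);
		printf("%3lu GB -> order %u (%lu kB slub pages)\n",
		       sizes[i] >> 30, order, 4UL << order);
	}
	return 0;
}

Note that with slub_max_order left at its default of 3, the loop tops out at
32k pages; the 64k step mentioned in the comment is only reached if
slub_max_order is raised as well.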