Date: Thu, 1 Apr 2021 12:00:08 -0400
From: Johannes Weiner
To: Al Viro
Cc: Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com,
    linux-afs@lists.infradead.org
Subject: Re: [PATCH v5 00/27] Memory Folios
On Thu, Apr 01, 2021 at 05:05:37AM +0000, Al Viro wrote:
> On Tue, Mar 30, 2021 at 10:09:29PM +0100, Matthew Wilcox wrote:
>
> > That's a very Intel-centric way of looking at it. Other architectures
> > support a multitude of page sizes, from the insane ia64 (4k, 8k, 16k,
> > then every power of four up to 4GB) to more reasonable options like
> > (4k, 32k, 256k, 2M, 16M, 128M). But we (in software) shouldn't
> > constrain ourselves to thinking in terms of what the hardware
> > currently supports. Google have data showing that for their
> > workloads, 32kB is the goldilocks size. I'm sure for some workloads
> > it's much higher and for others it's lower. But for almost no
> > workload is 4kB the right choice any more, and probably hasn't been
> > since the late 90s.
>
> Out of curiosity I looked at the distribution of file sizes in the
> kernel tree:
>
> 71455 files total
>
>   0--4Kb        36702
>   4--8Kb        11820
>   8--16Kb       10066
>   16--32Kb       6984
>   32--64Kb       3804
>   64--128Kb      1498
>   128--256Kb      393
>   256--512Kb      108
>   512Kb--1Mb       35
>   1--2Mb           25
>   2--4Mb            5
>   4--6Mb            7
>   6--8Mb            4
>   12Mb              2
>   14Mb              1
>   16Mb              1
>
> ... incidentally, everything bigger than 1.2Mb lives^Wshambles under
> drivers/gpu/drm/amd/include/asic_reg/
>
>   Page size     Footprint
>   4Kb           1128Mb
>   8Kb           1324Mb
>   16Kb          1764Mb
>   32Kb          2739Mb
>   64Kb          4832Mb
>   128Kb         9191Mb
>   256Kb         18062Mb
>   512Kb         35883Mb
>   1Mb           71570Mb
>   2Mb           142958Mb
>
> So for kernel builds (as well as grep over the tree, etc.) uniform 2Mb
> pages would be... interesting.

Right, I don't see us getting rid of 4k cache entries anytime soon.
Even 32k pages would double the cache footprint here. (A small program
that reproduces footprint numbers like these is appended at the end of
this mail.)

The issue is just that at the other end of the spectrum we have IO
devices that do 10GB/s, which corresponds to 2.6 million 4k pages per
second. At such data rates we are currently CPU-limited because of the
pure transaction overhead in page reclaim. Workloads like this tend to
use much larger files and would benefit from a larger paging unit.

Likewise, most production workloads on cloud servers have enormous
anonymous regions and large executables that greatly benefit from
fewer page table levels and bigger TLB entries.

Today, fragmentation prevents the page allocator from producing 2MB
blocks at a satisfactory rate and allocation latency. It's not
feasible to allocate 2M inside page faults, for example; getting huge
page coverage for the page cache will be even more difficult.

I'm not saying we should get rid of 4k cache entries. Rather, I'm
wondering out loud whether longer term we'd want to change the default
page size to 2M and implement the 4k cache entries, which we clearly
continue to need, with a slab-style allocator on top. The idea being
that such an allocator would do a better job of grouping cache entries
of similar lifetimes together than the untyped page allocator does
naturally, and so make fragmentation a whole lot more manageable. (A
toy sketch of this idea is also appended below.)

(I'm using x86 page sizes as examples because they matter to me. But
there is an architecture-independent discrepancy between the smallest
cache entries we must continue to support, and the larger blocks /
huge pages that we increasingly rely on as first-class pages.)
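
A minimal sketch of how footprint numbers like the table above can be
reproduced, assuming a POSIX userspace; the 4k..2M size ladder mirrors
the table, and every regular file gets rounded up to a whole number of
pages. Run it against a clean checkout, since a .git directory would
skew the numbers:

/* footprint.c: per-page-size footprint of a directory tree */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <sys/stat.h>
#include <stdint.h>
#include <stdio.h>

#define NSIZES 10	/* 4kB, 8kB, ..., 2MB */

static uint64_t footprint[NSIZES];

static int visit(const char *path, const struct stat *sb,
		 int type, struct FTW *ftwbuf)
{
	if (type == FTW_F) {	/* regular files only */
		uint64_t psize = 4096;
		int i;

		for (i = 0; i < NSIZES; i++, psize <<= 1)
			footprint[i] += (sb->st_size + psize - 1) / psize * psize;
	}
	return 0;
}

int main(int argc, char **argv)
{
	uint64_t psize = 4096;
	int i;

	if (nftw(argc > 1 ? argv[1] : ".", visit, 64, FTW_PHYS)) {
		perror("nftw");
		return 1;
	}
	for (i = 0; i < NSIZES; i++, psize <<= 1)
		printf("%4lukB pages: %8luMb\n",
		       (unsigned long)(psize >> 10),
		       (unsigned long)(footprint[i] >> 20));
	return 0;
}

Build and run with something like "cc -O2 footprint.c && ./a.out
~/src/linux".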
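
And a toy userspace sketch of the slab-on-2M idea. All names and data
structures here are made up for illustration, not a proposed kernel
interface, and error handling is elided: 4k entries are carved out of
2M blocks, and a block can only go back to the base allocator whole,
once every entry in it is free, which is exactly why grouping entries
of similar lifetime matters:

#include <stdlib.h>

#define BLOCK_SHIFT	21			/* 2M block */
#define ENTRY_SHIFT	12			/* 4k entry */
#define BLOCK_SIZE	(1UL << BLOCK_SHIFT)
#define ENTRIES		(1UL << (BLOCK_SHIFT - ENTRY_SHIFT))

struct block {
	struct block *next;
	char *base;				/* the 2M allocation */
	unsigned long used;			/* live entries */
	unsigned char map[ENTRIES];		/* 1 = entry in use */
};

/* One pool per "lifetime class"; a single global one for the toy. */
static struct block *blocks;

static void *entry_alloc(void)
{
	struct block *b;
	unsigned long i;

	/* First fit in a partially used block; O(n), fine for a toy. */
	for (b = blocks; b; b = b->next) {
		if (b->used == ENTRIES)
			continue;
		for (i = 0; i < ENTRIES; i++) {
			if (!b->map[i]) {
				b->map[i] = 1;
				b->used++;
				return b->base + (i << ENTRY_SHIFT);
			}
		}
	}
	/* No free entry anywhere: take a fresh, aligned 2M block. */
	b = calloc(1, sizeof(*b));
	b->base = aligned_alloc(BLOCK_SIZE, BLOCK_SIZE);
	b->map[0] = 1;
	b->used = 1;
	b->next = blocks;
	blocks = b;
	return b->base;
}

static void entry_free(void *p)
{
	struct block **bp, *b;

	for (bp = &blocks; (b = *bp); bp = &b->next) {
		if ((char *)p < b->base || (char *)p >= b->base + BLOCK_SIZE)
			continue;
		b->map[((char *)p - b->base) >> ENTRY_SHIFT] = 0;
		if (!--b->used) {	/* whole 2M block is reclaimable */
			*bp = b->next;
			free(b->base);
			free(b);
		}
		return;
	}
}

The point of the sketch is that the reclaim decision becomes per-block
rather than per-page: one block holding 512 short-lived entries frees
a whole 2M unit at once, while the same entries scattered over 512
blocks pin all of them. That's the fragmentation argument in
miniature.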