From: David Hildenbrand
Organization: Red Hat
Date: Fri, 22 Oct 2021 09:59:05 +0200
Subject: Re: Folios for 5.15 request - Was: re: Folio discussion recap
To: Matthew Wilcox, Johannes Weiner
Cc: Kent Overstreet, "Kirill A. Shutemov", Linus Torvalds, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Andrew Morton,
    "Darrick J. Wong", Christoph Hellwig, David Howells, Hugh Dickins
Message-ID: <326b5796-6ef9-a08f-a671-4da4b04a2b4f@redhat.com>
References: <20211018231627.kqrnalsi74bgpoxu@box.shutemov.name>
X-Mailing-List: linux-kernel@vger.kernel.org
Shutemov" , Linus Torvalds , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Andrew Morton , "Darrick J. Wong" , Christoph Hellwig , David Howells , Hugh Dickins References: <20211018231627.kqrnalsi74bgpoxu@box.shutemov.name> From: David Hildenbrand Organization: Red Hat Subject: Re: Folios for 5.15 request - Was: re: Folio discussion recap - In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 22.10.21 03:52, Matthew Wilcox wrote: > On Thu, Oct 21, 2021 at 05:37:41PM -0400, Johannes Weiner wrote: >> Here is my summary of the discussion, and my conclusion: > > Thank you for this. It's the clearest, most useful post on this thread, > including my own. It really highlights the substantial points that > should be discussed. > >> The premise of the folio was initially to simply be a type that says: >> I'm the headpage for one or more pages. Never a tailpage. Cool. >> >> However, after we talked about what that actually means, we seem to >> have some consensus on the following: >> >> 1) If folio is to be a generic headpage, it'll be the new >> dumping ground for slab, network, drivers etc. Nobody is >> psyched about this, hence the idea to split the page into >> subtypes which already resulted in the struct slab patches. >> >> 2) If higher-order allocations are going to be the norm, it's >> wasteful to statically allocate full descriptors at a 4k >> granularity. Hence the push to eliminate overloading and do >> on-demand allocation of necessary descriptor space. >> >> I think that's accurate, but for the record: is there anybody who >> disagrees with this and insists that struct folio should continue to >> be the dumping ground for all kinds of memory types? > > I think there's a useful distinction to be drawn between "where we're > going with this patchset", "where we're going in the next six-twelve > months" and "where we're going eventually". I think we have minor > differences of opinion on the answers to those questions, and they can > be resolved as we go, instead of up-front. > > My answer to that question is that, while this full conversion is not > part of this patch, struct folio is logically: > > struct folio { > ... almost everything that's currently in struct page ... > }; > > struct page { > unsigned long flags; > unsigned long compound_head; > union { > struct { /* First tail page only */ > unsigned char compound_dtor; > unsigned char compound_order; > atomic_t compound_mapcount; > unsigned int compound_nr; > }; > struct { /* Second tail page only */ > atomic_t hpage_pinned_refcount; > struct list_head deferred_list; > }; > unsigned long padding1[4]; > }; > unsigned int padding2[2]; > #ifdef CONFIG_MEMCG > unsigned long padding3; > #endif > #ifdef WANT_PAGE_VIRTUAL > void *virtual; > #endif > #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS > int _last_cpupid; > #endif > }; > > (I'm open to being told I have some of that wrong, eg maybe _last_cpupid > is actually part of struct folio and isn't a per-page property at all) > > I'd like to get there in the next year. I think dynamically allocating > memory descriptors is more than a year out. > > Now, as far as struct folio being a dumping group, I would like to > split other things out from struct folio. Let me address that below. > >> Let's assume the answer is "no" for now and move on. 
>>
>> If folios are NOT the common headpage type, it begs two questions:
>>
>>  1) What subtype(s) of page SHOULD it represent?
>>
>>     This is somewhat unclear at this time. Some say file+anon.
>>     It's also been suggested everything userspace-mappable, but
>>     that would again bring back major type punning. Who knows?
>>
>>     Vocal proponents of the folio type have made conflicting
>>     statements on this, which certainly gives me pause.
>>
>>  2) What IS the common type used for attributes and code shared
>>     between subtypes?
>>
>>     For example: if a folio is anon+file, then the code that
>>     maps memory to userspace needs a generic type in order to
>>     map both folios and network pages. Same as the page table
>>     walkers, and things like GUP.
>>
>>     Will this common type be struct page? Something new? Are we
>>     going to duplicate the implementation for each subtype?
>>
>>     Another example: GUP can return tailpages. I don't see how
>>     it could return folio with even its most generic definition
>>     of "headpage".
>>
>> (But bottom line, it's not clear how folio can be the universal
>> headpage type and simultaneously avoid being the type dumping ground
>> that the page was. Maybe I'm not creative enough?)
>
> This whole section is predicated on "If it is NOT the headpage type",
> but I think this is a great list of why it _should_ be the generic
> headpage type.
>
> To answer a question in here: GUP should continue to return precise
> pages because that's what its callers expect. But we should have a
> better interface than GUP which returns a rather more compressed list
> (something like today's biovec).
>
>> Anyway. I can even be convinced that we can figure out the exact fault
>> lines along which we split the page down the road.
>>
>> My worry is more about 2). A shared type and generic code is likely to
>> emerge regardless of how we split it. Think about it, the only world
>> in which that isn't true would be one in which either
>>
>>  a) page subtypes are all the same, or
>>  b) the subtypes have nothing in common
>>
>> and both are clearly bogus.
>
> Amen!
>
> I'm convinced that pgtable, slab and zsmalloc uses of struct page can all
> be split out into their own types instead of being folios. They have
> little-to-nothing in common with anon+file; they can't be mapped into
> userspace and they can't be on the LRU. The only situation you can find
> them in is something like compaction which walks PFNs.
>
> I don't think we can split out ZONE_DEVICE and netpool into their own
> types. While they can't be on the LRU, they can be mapped to userspace,
> like random device drivers. So they can be found by GUP, and we want
> (need) to be able to go to folio from there in order to get, lock and
> set a folio as dirty. Also, they have a mapcount as well as a refcount.
>
> The real question, I think, is whether it's worth splitting anon & file
> pages out from generic pages. I can see arguments for it, but I can also
> see arguments against it (whether it's two types: lru_mem and folio,
> three types: anon_mem, file_mem and folio or even four types: ksm_mem,
> anon_mem and file_mem). I don't think a compelling argument has been
> made either way.
>
> Perhaps you could comment on how you'd see separate anon_mem and
> file_mem types working for the memcg code? Would you want to have
> separate lock_anon_memcg() and lock_file_memcg(), or would you want
> them to be cast to a common type like lock_folio_memcg()?
FWIW, something like this would roughly express what I've been mumbling
about:

 anon_mem   file_mem
     |          |
     -----|------
       lru_mem       slab
          |            |
          --------------
                 |
                page

I wouldn't include folios in this picture, because IMHO folios as of now
are actually what we want to be "lru_mem", just with a much clearer
name+description (again, IMHO).

Going from file_mem -> page is easy, just casting pointers. Going from
page -> file_mem requires going to the head page if it's a compound page.

But we expect most interfaces to pass around a proper type (e.g.,
lru_mem) instead of a page, which avoids having to look up the compound
head page. And each function can express which type it actually wants to
consume. The filemap API wants to consume file_mem, so it should use
that.

And IMHO, with something above in mind and not having a clue which
additional layers we'll really need, or which additional leaves we want
to have, we would start with the leaves (e.g., file_mem, anon_mem, slab)
and work our way towards the root. Just like we already started with
slab.

Maybe that makes sense.

--
Thanks,

David / dhildenb
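
[A minimal C sketch of the casting relationships described above. This is an
illustration only, not code from this thread or from the folio patchset; the
wrapper-struct layouts (lru_mem embedding a head struct page, file_mem
embedding lru_mem) and the helper names are hypothetical.]

#include <linux/mm.h>

/*
 * Hypothetical layout for illustration: each more specific type embeds
 * the next more generic one, and an lru_mem always describes a head
 * (non-tail) page.
 */
struct lru_mem {
        struct page page;       /* head page only */
};

struct file_mem {
        struct lru_mem lru;     /* file pages are a kind of lru_mem */
};

/* file_mem -> page: "just casting pointers", no lookup needed. */
static inline struct page *file_mem_page(struct file_mem *fm)
{
        return &fm->lru.page;
}

/*
 * page -> file_mem: if the page is a tail of a compound page, resolve
 * the head page first, then reinterpret it as the wrapper type.
 */
static inline struct file_mem *page_file_mem(struct page *page)
{
        return (struct file_mem *)compound_head(page);
}

[With a layout like this, the asymmetry noted above falls out naturally: the
upcast towards struct page is free, while going from an arbitrary struct page
back to file_mem pays for the compound_head() lookup, which is why interfaces
would rather pass the more specific type around.]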