Date: Sat, 18 Sep 2021 09:15:22 +1000
From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong"
Cc: Amir Goldstein, Jan Kara, xfs, linux-ext4, linux-btrfs,
	linux-fsdevel, Christian Brauner
Subject: Re: Shameless plug for the FS Track at LPC next week!
On Fri, Sep 17, 2021 at 09:12:17AM -0700, Darrick J. Wong wrote:
> On Fri, Sep 17, 2021 at 01:23:08PM +0300, Amir Goldstein wrote:
> > On Fri, Sep 17, 2021 at 12:38 PM Jan Kara wrote:
> > >
> > > On Fri 17-09-21 10:36:08, Jan Kara wrote:
> > > > Let me also post Amir's thoughts on this from a private thread:
> > >
> > > And now I'm actually replying to Amir :-p
> > >
> > > > On Fri 17-09-21 10:30:43, Jan Kara wrote:
> > > > > We did a small update to the schedule:
> > > > >
> > > > > > Christian Brauner will run the second session, discussing what
> > > > > > idmapped filesystem mounts are for and the current status of
> > > > > > supporting more filesystems.
> > > > >
> > > > > We have extended this session as we'd like to discuss and get some
> > > > > feedback from users about project quotas and project ids:
> > > > >
> > > > > Project quotas were originally mostly a collaborative feature and
> > > > > later got used by some container runtimes to limit the space used
> > > > > on a filesystem shared by multiple containers. As a result, the
> > > > > current semantics of project quotas are somewhat surprising and the
> > > > > handling of project ids is not consistent among filesystems. The
> > > > > two main contending points are:
> > > > >
> > > > > 1) Currently the inode owner can set the project id of the inode
> > > > > to any arbitrary number if they are in init_user_ns. They cannot
> > > > > change the project id at all in other user namespaces.
> > > > >
> > > > > 2) Should project IDs be mapped in user namespaces or not? The
> > > > > user namespace code does implement the mapping, and the VFS quota
> > > > > code maps project ids when using them. However, e.g. XFS does not
> > > > > map project IDs in its calls setting them in the inode. Among
> > > > > other things this results in some funny errors if you set the
> > > > > project ID to (unsigned)-1.
> > > > >
> > > > > In the session we'd like to get feedback on how project quotas /
> > > > > ids get used / could be used, so that we can define the common
> > > > > semantics and make the code consistently follow these rules.
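For anyone who hasn't poked at this interface: the projid being argued
over is set from userspace through the fsx_projid field of the
FS_IOC_FSSETXATTR ioctl. A minimal sketch of what that looks like
today - error reporting trimmed, and the helper name is mine:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>	/* struct fsxattr, FS_IOC_FS[GS]ETXATTR, FS_XFLAG_* */

/*
 * Tag @path with @projid. With PROJINHERIT set on a directory, new
 * children created under it inherit the projid.
 */
static int set_projid(const char *path, unsigned int projid)
{
	struct fsxattr fsx;
	int ret, fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	ret = ioctl(fd, FS_IOC_FSGETXATTR, &fsx);
	if (!ret) {
		fsx.fsx_projid = projid;
		fsx.fsx_xflags |= FS_XFLAG_PROJINHERIT;
		ret = ioctl(fd, FS_IOC_FSSETXATTR, &fsx);
	}
	close(fd);
	return ret;
}

The "(unsigned)-1" oddity above is this field set to ~0U: presumably
the id mapping code treats that as an invalid projid, while a
filesystem that skips the mapping will happily write it to disk.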
> > > > I think that legacy projid semantics might not be a perfect fit for
> > > > container isolation requirements. I added project quota support to
> > > > docker at the time because it was handy and it did the job of
> > > > limiting and querying the disk usage of containers with an overlayfs
> > > > storage driver.
> > > >
> > > > With the btrfs storage driver, subvolumes are used to create that
> > > > isolation. The TREE_ID proposal [1] got me thinking that it is not
> > > > so hard to implement "tree id" as an extension of, or in addition
> > > > to, project id.
> > > >
> > > > The semantics of "tree id" would be:
> > > > 1. tree id is a quota entity accounting inodes and blocks
> > > > 2. tree id can be changed only on an empty directory

Hmmm. So once it's created, it can't be changed without first deleting
all the data in the tree?

> > > > 3. tree id can be set to TID only if quota inode usage of TID is 0

What does this mean? Defining behaviour/semantics in terms of its
implementation is ambiguous and open to interpretation. I *think* the
intent here is that tree ids are unique and can only be applied to a
single tree, but...

And, of course, what happens if we have multiple filesystems? Tree IDs
are no longer globally unique across the system, right?

> > > > 4. tree id is always inherited from parent

What happens as we traverse mount points within a tree? If the quota
applies to directory trees, then there are going to be directory tree
constructs that don't obviously follow this behaviour, e.g. bind
mounts from one directory tree to another, both having different tree
IDs.

Which then makes me question: are inodes and inode flags the right
place to track and propagate these tree IDs? Isn't the tree ID as
defined here a property of the path structure rather than a property
of the inode? Should we actually be looking at a new directory tree ID
tracking behaviour at, say, the vfs-mount+dentry level rather than the
inode level?

> > > > 5. No rename() or link() across tree id (clone should be possible)

The current directory tree quotas disallow this because of
implementation difficulties (e.g. avoiding recursive chproj inside the
kernel as part of rename()) and so punt the operations that are too
difficult to do in the kernel back to userspace. They are not intended
to implement container boundaries in any way, shape or form. Container
boundaries need to use a proper access control mechanism, not rely on
side effects of difficult-to-implement low level accounting mechanisms
to provide access restriction.

Indeed, why do we want to place restrictions on moving things across
trees if the filesystem can actually do so correctly? Hence I think
this somewhat inappropriately commingles container access restrictions
with usage accounting....

I'm certain there will be filesystems that do disallow rename and link
to punt the problem back up to userspace, but that's an implementation
detail to ensure accounting for the data movement to a different tree
is correct, not a behavioural or access restriction...
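To make that concrete: the enforcement side of rule 5 is a cross-tree
check at link()/rename() time, something like the sketch below. Purely
illustrative - the tinode struct, field and helper names are invented
here, standing in for, say, a repurposed per-inode projid:

#include <errno.h>

/* Invented stand-in for the bits of an inode such a check would need. */
struct tinode {
	unsigned int tree_id;	/* e.g. a repurposed fsx_projid */
};

/* Rule 5: refuse link()/rename() that would cross a tree id boundary. */
static int may_cross_tree(const struct tinode *src,
			  const struct tinode *dst_dir)
{
	if (src->tree_id != dst_dir->tree_id)
		return -EXDEV;	/* punt the data movement to userspace */
	return 0;
}

Returning EXDEV here is the existing project quota trick: tools like
mv fall back to copy+unlink, so accounting for the moved data happens
in userspace instead of via a recursive re-tag inside the kernel.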
> > > > AFAIK btrfs subvol meets all the requirements of "tree id".
> > > >
> > > > Implementing tree id in ext4/xfs could be done by adding a new
> > > > field to the inode on-disk format and a new quota entity to the
> > > > quota on-disk format and quotatools.
> > > >
> > > > An alternative, simpler way is to repurpose project id and project
> > > > quota:
> > > > * Add a filesystem feature projid-is-treeid
> > > > * The feature can be enabled on fresh mkfs or after fsck verifies
> > > >   "tree id" rules are followed for all usage of projid
> > > > * Once the feature is enabled, the filesystem enforces the new
> > > >   semantics about setting projid and projid_inherit

> I'd probably just repurpose the project quota mechanism, which means
> that the xfs treeid is really just project quotas with somewhat
> different behavior rules that are tailored to modern adversarial usage
> models. ;)

Potentially, yes, though I'm not yet convinced a "tree quota" is
actually something we should track at an individual inode level...

> IIRC someone asked for some sort of change like this on the xfs list
> some years back. If memory serves, they wanted to prevent non-admin
> userspace from changing project ids, even in the regular user ns? It
> never got as far as a formal proposal though.
>
> I could definitely see a use case for letting admin processes in a
> container change project ids among only the projids that are idmapped
> into the namespace.

Yup, all we need is a solid definition of how it will work, but that's
always been the point where silence has fallen on the discussion.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com