Date: Thu, 23 Apr 2020 11:17:17 -0500
From: "Serge E. Hallyn"
To: Christian Brauner
Cc: "Serge E. Hallyn", Jens Axboe, Greg Kroah-Hartman,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-api@vger.kernel.org, Jonathan Corbet, "Rafael J. Wysocki",
 Tejun Heo, "David S. Miller", Saravana Kannan, Jan Kara,
 David Howells, Seth Forshee, David Rheinsberg, Tom Gundersen,
 Christian Kellner, Dmitry Vyukov, Stéphane Graber,
 linux-doc@vger.kernel.org, netdev@vger.kernel.org, Steve Barber,
 Dylan Reid, Filipe Brandenburger, Kees Cook, Benjamin Elder,
 Akihiro Suda
Subject: Re: [PATCH v2 2/7] loopfs: implement loopfs
Message-ID: <20200423161717.GB12201@mail.hallyn.com>
References: <20200422145437.176057-1-christian.brauner@ubuntu.com>
 <20200422145437.176057-3-christian.brauner@ubuntu.com>
 <20200422215213.GB31944@mail.hallyn.com>
 <20200423112401.ipzmsyicabwajpn2@wittgenstein>
In-Reply-To: <20200423112401.ipzmsyicabwajpn2@wittgenstein>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 23, 2020 at 01:24:01PM +0200, Christian Brauner wrote:
> On Wed, Apr 22, 2020 at 04:52:13PM -0500, Serge Hallyn wrote:
> > On Wed, Apr 22, 2020 at 04:54:32PM +0200, Christian Brauner wrote:
> > > This implements loopfs, a loop device filesystem. It takes
> > > inspiration from the binderfs filesystem I implemented about two
> > > years ago, with which we have had good experiences overall so far.
> > > Parts of it are also based on [3], but it's mostly a new and, imho,
> > > cleaner approach.
> > >
> > > Loopfs allows applications to create private loop device instances
> > > for various use-cases. It covers the use-case that was expressed
> > > on-list and in person: getting programmatic access to private loop
> > > devices for image building in sandboxes. An illustration of this is
> > > provided in [4].
> > >
> > > Loopfs is also intended to provide loop devices to privileged and
> > > unprivileged containers, which has been a frequent request from
> > > various major tools (Chromium, Kubernetes, LXD, Moby/Docker,
> > > systemd).
> > > I'm providing a non-exhaustive list of issues and requests
> > > (cf. [5]) around this feature, mainly to illustrate that I'm not
> > > making the use-cases up. Currently none of this can be done safely,
> > > since handing a loop device from the host into a container means
> > > that the container can see anything the host is doing with that
> > > loop device, and what other containers are doing with that device
> > > too. And (bind-)mounting devtmpfs inside containers is not secure
> > > at all, so that is not an option either (though it is sometimes
> > > done out of despair, apparently).
> > >
> > > The workloads people run in containers are supposed to be
> > > indiscernible from workloads run on the host, and the tools inside
> > > the container are not supposed to need to be aware that they are
> > > running inside a container, apart from containerization tools
> > > themselves. This is especially true when running older distros in
> > > containers that existed before containers were as ubiquitous as
> > > they are today. With loopfs, users can call mount -o loop and, in a
> > > correctly set up container, things work the same way they would on
> > > the host. The filesystem representation allows us to do this in a
> > > very simple way. At container setup, a container manager can mount
> > > a private instance of loopfs somewhere, e.g. at /dev/loopfs, then
> > > bind-mount or symlink /dev/loopfs/loop-control to /dev/loop-control,
> > > pre-allocate and symlink the standard number of devices into their
> > > standard locations, and have a service file or rules in place that
> > > symlink additionally allocated loop devices into place through
> > > losetup as well.
> > >
> > > With the new syscall interception logic this is also possible for
> > > unprivileged containers. In these cases, when a user calls
> > > mount -o loop, it will be possible to completely set up the loop
> > > device in the container.
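[Editor's sketch of the container setup described above. loopfs is a proposed, unmerged patch series, so the "loopfs" filesystem type, the /dev/loopfs layout, and the device-allocation behavior are assumptions taken from the cover letter, not a shipped kernel interface; the script probes /proc/filesystems and skips gracefully on kernels without the patch.]

```shell
# Sketch: container-manager setup of a private loopfs instance
# (hypothetical "loopfs" fstype from the proposed patch; needs root).

setup_private_loopfs() {
    dir=${1:-/dev/loopfs}
    mkdir -p "$dir"
    mount -t loopfs none "$dir" || return 1
    # Expose this instance's control node where losetup expects it.
    ln -sf "$dir/loop-control" /dev/loop-control
    # Pre-allocate the standard devices and symlink them into place.
    # (losetup -f issues LOOP_CTL_GET_FREE on /dev/loop-control.)
    for i in 0 1 2 3 4 5 6 7; do
        losetup -f >/dev/null 2>&1 || break
        ln -sf "$dir/loop$i" "/dev/loop$i"
    done
}

if grep -qw loopfs /proc/filesystems 2>/dev/null; then
    setup_private_loopfs /dev/loopfs
else
    echo "loopfs not supported by this kernel (patch not merged); skipping"
fi
```

A service file or udev-style rule would then symlink any additionally allocated devices the same way.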
> > > The final mount syscall is handled through syscall interception,
> > > which we already implemented and released in earlier kernels (see
> > > [1] and [2]) and which is actively used in production workloads.
> > > The mount is often rewritten to a fuse binary to provide safe
> > > access for unprivileged containers.
> > >
> > > Loopfs also allows the creation of hidden/detached dynamic loop
> > > devices and associated mounts, which has also been an often-issued
> > > request. With the old mount API this can be achieved by creating a
> > > temporary loopfs instance, stashing a file descriptor to the mount
> > > point and to the loop-control device, and immediately unmounting
> > > the loopfs instance. With the new mount API a detached mount can be
> > > created directly (i.e. a mount not visible anywhere in the
> > > filesystem). New loop devices can then be allocated and configured.
> > > They can be mounted through /proc/self/<fd> with the old mount API,
> > > or by using the fd directly with the new mount API. Combined with a
> > > mount namespace, this allows for fully auto-cleaned-up loop devices
> > > on program crash. This ties back to various use-cases and is
> > > illustrated in [4].
> > >
> > > The filesystem representation requires the standard boilerplate
> > > filesystem code we know from other tiny filesystems, and all of the
> > > loopfs code is hidden behind a config option that defaults to off.
> > > This specifically means that none of the code even exists when
> > > users have no use-case for loopfs. In addition, the loopfs code
> > > does not alter how loop devices behave at all, i.e. there are no
> > > changes to any existing workloads, and I've taken care to ifdef all
> > > loopfs-specific things out.
> > >
> > > Each loopfs mount is a separate instance. As such, loop devices
> > > created in one instance are independent of loop devices created in
> > > another instance.
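[Editor's sketch of the old-mount-API "hidden instance" trick described above, under the same assumption that the unmerged loopfs patch is applied. The use of a lazy unmount, the fd numbers, and the paths are illustrative guesses at the mechanism (open descriptors pinning the superblock after the mount disappears), not the patch author's exact recipe.]

```shell
# Sketch: detached loopfs instance via fd stashing (hypothetical
# "loopfs" fstype from the proposed patch; needs root).

make_detached_loopfs() {
    tmp=$(mktemp -d)
    mount -t loopfs none "$tmp" || { rmdir "$tmp"; return 1; }
    # Stash fds: open descriptors keep the superblock alive after the
    # lazy unmount, so the instance survives but is visible nowhere.
    exec 8<"$tmp"                  # fd to the (soon invisible) root
    exec 9<>"$tmp/loop-control"    # fd to this instance's loop-control
    umount --lazy "$tmp"
    rmdir "$tmp"
    # Devices allocated through fd 9 remain reachable via
    # /proc/self/fd/8/loopN; when the process exits, every fd closes
    # and the whole instance is torn down automatically.
}

if grep -qw loopfs /proc/filesystems 2>/dev/null; then
    make_detached_loopfs
else
    echo "loopfs not supported by this kernel; sketch only"
fi
```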
> > > This specifically entails that loop devices are only visible in
> > > the loopfs instance they belong to.
> > >
> > > The number of loop devices available in loopfs instances is
> > > hierarchically limited through /proc/sys/user/max_loop_devices via
> > > the ucount infrastructure (thanks to David Rheinsberg for pointing
> > > out that missing piece). An administrator could, e.g., set
> > > echo 3 > /proc/sys/user/max_loop_devices, at which point any loopfs
> > > instance mounted by uid x can only create 3 loop devices, no matter
> > > how many loopfs instances they mount. This limit applies
> > > hierarchically to all user namespaces.
> >
> > Hm, info->device_count is per loopfs mount, though, right? I don't
> > see where this gets incremented for all of a user's loopfs mounts
> > when one adds a loopdev?
> >
> > I'm sure I'm missing something obvious...
>
> Hm, I think you might be mixing up the two limits? device_count
> corresponds to the "max" mount option and is not involved in enforcing
> hierarchical limits. The global restriction is enforced through
> inc_ucount(), which tracks by the uid of the mounter of the
> superblock. If the same user mounts multiple loopfs instances in the
> same namespace, the ucount infra will enforce his quota across all
> loopfs instances.

Well, I'm trying to understand what the point of the max mount option
is :) I can just do N mounts to get N*max devices and work around it?
But meanwhile, if I have a daemon mounting isos over loopdevs to
extract some files (bc I never heard of bsdtar :), I risk more spurious
failures due to hitting max? If you think we need it, that's fine - it
just has the odor of something more trouble than it's worth.

Anyway, with or without it,

Reviewed-by: Serge Hallyn

thanks,
-serge