From: Aleksandr Mikhalitsyn
Date: Thu, 8 Jun 2023 09:20:56 +0200
Subject: Re: [PATCH v4 00/14] ceph: support idmapped mounts
References: <20230607180958.645115-1-aleksandr.mikhalitsyn@canonical.com> <8b22fc1e-595a-b729-dd21-2714f22a28a7@redhat.com>
In-Reply-To: <8b22fc1e-595a-b729-dd21-2714f22a28a7@redhat.com>
To: Xiubo Li
Cc: brauner@kernel.org, stgraber@ubuntu.com,
 linux-fsdevel@vger.kernel.org, Ilya Dryomov, Jeff Layton,
 ceph-devel@vger.kernel.org, linux-kernel@vger.kernel.org

On Thu, Jun 8, 2023 at 5:01 AM Xiubo Li wrote:
>
> Hi Alexander,

Dear Xiubo,

> As I mentioned in the V2 thread
> https://www.spinics.net/lists/kernel/msg4810994.html, we should use the
> 'idmap' for all the requests below, because the MDS will do
> 'check_access()' for all the requests using the caller uid/gid; please
> see https://github.com/ceph/ceph/blob/main/src/mds/Server.cc#L3294-L3310.
>
> Cscope tag: ceph_mdsc_do_request
>    #   line   filename / context / line
>    1    321   fs/ceph/addr.c    <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>    2    443   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>    3    838   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>    4    933   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, dir, req);
>    5   1045   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, dir, req);
>    6   1120   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, dir, req);
>    7   1180   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, dir, req);
>    8   1365   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, dir, req);
>    9   1431   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, old_dir, req);
>   10   1927   fs/ceph/dir.c     <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   11    154   fs/ceph/export.c  <<__lookup_inode>>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   12    262   fs/ceph/export.c  <<__snapfh_to_dentry>>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   13    347   fs/ceph/export.c  <<__get_parent>>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   14    490   fs/ceph/export.c  <<__get_snap_name>>
>               err = ceph_mdsc_do_request(fsc->mdsc, NULL, req);
>   15    561   fs/ceph/export.c  <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   16    339   fs/ceph/file.c    <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   17    434   fs/ceph/file.c    <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   18    855   fs/ceph/file.c    <>
>               err = ceph_mdsc_do_request(mdsc, (flags & O_CREAT) ? dir : NULL, req);
>   19   2715   fs/ceph/inode.c   <<__ceph_setattr>>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   20   2839   fs/ceph/inode.c   <<__ceph_do_getattr>>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   21   2883   fs/ceph/inode.c   <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   22    126   fs/ceph/ioctl.c   <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   23    171   fs/ceph/ioctl.c   <>
>               err = ceph_mdsc_do_request(mdsc, inode, req);
>   24    216   fs/ceph/locks.c   <>
>               err = ceph_mdsc_do_request(mdsc, inode, intr_req);
>   25   1091   fs/ceph/super.c   <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);
>   26   1151   fs/ceph/xattr.c   <>
>               err = ceph_mdsc_do_request(mdsc, NULL, req);

Sure, I remember this point, and as I mentioned earlier in
https://lore.kernel.org/all/20230519134420.2d04e5f70aad15679ab566fc@canonical.com/
it is a debatable point, because not all of the inode_operations allow us
to get a mount idmapping. For instance, the lookup, get_link,
get_inode_acl, readlink, link, unlink, rmdir, listxattr, fiemap,
update_time, and fileattr_get inode operations are not provided with an
idmapping. atomic_open also lacks a mnt_idmap argument, but there we have
a struct file, so we can get an idmapping through it.

As far as I can see from the code
https://raw.githubusercontent.com/ceph/ceph/main/src/mds/Server.cc
we have Server::check_access calls for all inode_operations, including
lookup.
It means that with the current VFS we are not able to support MDS
UID/GID-based path restriction together with idmapped mounts. But we can
return to this later if someone really wants it.

If I understand your idea correctly, you want me to set req->r_mnt_idmap
to the actual idmapping everywhere it is possible, and to ignore the
inode_operations where no idmapping is passed in? Christian's approach
was more conservative: his idea was to pass an idmapping only to the
operations that create nodes on the filesystem, and to pass
nop_mnt_idmap to everything else.

So, I'll try to set up MDS UID/GID-based path restriction in my local
environment and reproduce the issue with it, but as I mentioned earlier,
we can't support that combination right now anyway. Still, since most
existing filesystems already support idmappings, having this for cephfs
would be great (even with the limitation around MDS UID/GID-based path
restriction): there are real-world use cases for cephfs idmapped mounts,
and this particular patchset has been used by LXD/LXC for more than a
year. We can extend this later, if someone really wants to use this
combination and once we extend the VFS layer.

> And also could you squash the similar commits into one?

Sure, you mean the commits that do `req->r_mnt_idmap = mnt_idmap_get(idmap)`?
Will do.

Big thanks for the fast reaction/review on this series, Xiubo!

Kind regards,
Alex

> Thanks
>
> - Xiubo
>
> On 6/8/23 02:09, Alexander Mikhalitsyn wrote:
> > Dear friends,
> >
> > This patchset was originally developed by Christian Brauner, but I'll
> > continue to push it forward. Christian allowed me to do that :)
> >
> > This feature is already actively used/tested with the LXD/LXC project.
> >
> > Git tree (based on https://github.com/ceph/ceph-client.git master):
> > https://github.com/mihalicyn/linux/tree/fs.idmapped.ceph
> >
> > In version 3 I changed only two commits:
> > - fs: export mnt_idmap_get/mnt_idmap_put
> > - ceph: allow idmapped setattr inode op
> > and added a new one:
> > - ceph: pass idmap to __ceph_setattr
> >
> > In version 4 I reworked the ("ceph: stash idmapping in mdsc request")
> > commit. Now we take the idmap refcount right where req->r_mnt_idmap
> > is filled. It's a safer approach and prevents a possible refcount
> > underflow on error paths where __register_request wasn't called but
> > ceph_mdsc_release_request is.
> >
> > I can confirm that this version passes xfstests.
> >
> > Links to previous versions:
> > v1: https://lore.kernel.org/all/20220104140414.155198-1-brauner@kernel.org/
> > v2: https://lore.kernel.org/lkml/20230524153316.476973-1-aleksandr.mikhalitsyn@canonical.com/
> > v3: https://lore.kernel.org/lkml/20230607152038.469739-1-aleksandr.mikhalitsyn@canonical.com/#t
> >
> > Kind regards,
> > Alex
> >
> > Original description from Christian:
> > ======================================================================
> > This patch series enables cephfs to support idmapped mounts, i.e. the
> > ability to alter ownership information on a per-mount basis.
> >
> > Container managers such as LXD support sharing data via cephfs between
> > the host and unprivileged containers, and between unprivileged
> > containers. They may all use different idmappings. Idmapped mounts can
> > be used to create mounts with the idmapping used for the container (or
> > a different one specific to the use case).
> >
> > There are in fact more use cases, such as remapping ownership for
> > mountpoints on the host itself to grant or restrict access to
> > different users, or to make it possible to enforce that programs
> > running as root will write with a non-zero {g,u}id to disk.
> >
> > The patch series is simple overall and few changes are needed to
> > cephfs. There is one cephfs-specific issue that I would like to
> > discuss and solve, which I explain in detail in:
> >
> > [PATCH 02/12] ceph: handle idmapped mounts in create_request_message()
> >
> > It has to do with how to handle MDS servers which have id-based
> > access restrictions configured. I would ask you to please take a look
> > at the explanation in the aforementioned patch.
> >
> > The patch series passes the vfs and idmapped mount testsuite as part
> > of xfstests. To run it you will need a config like:
> >
> > [ceph]
> > export FSTYP=ceph
> > export TEST_DIR=/mnt/test
> > export TEST_DEV=10.103.182.10:6789:/
> > export TEST_FS_MOUNT_OPTS="-o name=admin,secret=$password"
> >
> > and then simply call
> >
> > sudo ./check -g idmapped
> >
> > ======================================================================
> >
> > Alexander Mikhalitsyn (2):
> >   fs: export mnt_idmap_get/mnt_idmap_put
> >   ceph: pass idmap to __ceph_setattr
> >
> > Christian Brauner (12):
> >   ceph: stash idmapping in mdsc request
> >   ceph: handle idmapped mounts in create_request_message()
> >   ceph: allow idmapped mknod inode op
> >   ceph: allow idmapped symlink inode op
> >   ceph: allow idmapped mkdir inode op
> >   ceph: allow idmapped rename inode op
> >   ceph: allow idmapped getattr inode op
> >   ceph: allow idmapped permission inode op
> >   ceph: allow idmapped setattr inode op
> >   ceph/acl: allow idmapped set_acl inode op
> >   ceph/file: allow idmapped atomic_open inode op
> >   ceph: allow idmapped mounts
> >
> >  fs/ceph/acl.c                 |  6 +++---
> >  fs/ceph/dir.c                 |  4 ++++
> >  fs/ceph/file.c                | 10 ++++++++--
> >  fs/ceph/inode.c               | 29 +++++++++++++++++------------
> >  fs/ceph/mds_client.c          | 27 +++++++++++++++++++++++----
> >  fs/ceph/mds_client.h          |  1 +
> >  fs/ceph/super.c               |  2 +-
> >  fs/ceph/super.h               |  3 ++-
> >  fs/mnt_idmapping.c            |  2 ++
> >  include/linux/mnt_idmapping.h |  3 +++
> >  10 files changed, 64 insertions(+), 23 deletions(-)
> >