Message-ID: <66ec6876-483a-4403-9baa-487ebad053f2@huaweicloud.com>
Date: Thu, 30 Nov 2023 09:30:34 +0100
Subject: Re: [PATCH v5 23/23] integrity: Switch from rbtree to LSM-managed blob for integrity_iint_cache
To: Casey Schaufler, Roberto Sassu, Paul Moore
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, chuck.lever@oracle.com,
 jlayton@kernel.org, neilb@suse.de, kolga@netapp.com, Dai.Ngo@oracle.com,
 tom@talpey.com, jmorris@namei.org, serge@hallyn.com, zohar@linux.ibm.com,
 dmitry.kasatkin@gmail.com, dhowells@redhat.com, jarkko@kernel.org,
 stephen.smalley.work@gmail.com, eparis@parisplace.org, mic@digikod.net,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-nfs@vger.kernel.org, linux-security-module@vger.kernel.org,
 linux-integrity@vger.kernel.org, keyrings@vger.kernel.org,
 selinux@vger.kernel.org, Roberto Sassu
References: <20231107134012.682009-24-roberto.sassu@huaweicloud.com>
 <17befa132379d37977fc854a8af25f6d.paul@paul-moore.com>
 <2084adba3c27a606cbc5ed7b3214f61427a829dd.camel@huaweicloud.com>
 <90eb8e9d-c63e-42d6-b951-f856f31590db@huaweicloud.com>
 <366a6e5f-d43d-4266-8421-a8a05938a8fd@schaufler-ca.com>
From: Petr Tesarik
In-Reply-To: <366a6e5f-d43d-4266-8421-a8a05938a8fd@schaufler-ca.com>
Hi all,

On 11/30/2023 1:41 AM, Casey Schaufler wrote:
> On 11/29/2023 10:46 AM, Roberto Sassu wrote:
>> On 11/29/2023 6:22 PM, Paul Moore wrote:
>>> On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu wrote:
>>>>
>>>> On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote:
>>>>> On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu wrote:
>>>>>> On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote:
>>>>>>> On Nov 7, 2023 Roberto Sassu wrote:
>>>>>>>>
>>>>>>>> Before the security field of kernel objects could be shared among
>>>>>>>> LSMs with the LSM stacking feature, IMA and EVM had to rely on an
>>>>>>>> alternative storage of inode metadata. The association between
>>>>>>>> inode metadata and inode is maintained through an rbtree.
>>>>>>>>
>>>>>>>> Because of this alternative storage mechanism, there was no need
>>>>>>>> to use disjoint inode metadata, so IMA and EVM today still share
>>>>>>>> them.
>>>>>>>>
>>>>>>>> With the reservation mechanism offered by the LSM infrastructure,
>>>>>>>> the rbtree is no longer necessary, as each LSM could reserve a
>>>>>>>> space in the security blob for each inode. However, since IMA and
>>>>>>>> EVM share the inode metadata, they cannot directly reserve the
>>>>>>>> space for them.
>>>>>>>>
>>>>>>>> Instead, request from the 'integrity' LSM a space in the security
>>>>>>>> blob for the pointer of inode metadata (integrity_iint_cache
>>>>>>>> structure). The other reason for keeping the 'integrity' LSM is
>>>>>>>> to preserve the original ordering of IMA and EVM functions as
>>>>>>>> when they were hardcoded.
>>>>>>>>
>>>>>>>> Prefer reserving space for a pointer to allocating the
>>>>>>>> integrity_iint_cache structure directly, as IMA would require it
>>>>>>>> only for a subset of inodes. Always allocating it would cause a
>>>>>>>> waste of memory.
>>>>>>>>
>>>>>>>> Introduce two primitives for getting and setting the pointer of
>>>>>>>> integrity_iint_cache in the security blob, respectively
>>>>>>>> integrity_inode_get_iint() and integrity_inode_set_iint(). This
>>>>>>>> would make the code more understandable, as they directly replace
>>>>>>>> rbtree operations.
>>>>>>>>
>>>>>>>> Locking is not needed, as access to inode metadata is not shared,
>>>>>>>> it is per inode.
>>>>>>>>
>>>>>>>> Signed-off-by: Roberto Sassu
>>>>>>>> Reviewed-by: Casey Schaufler
>>>>>>>> Reviewed-by: Mimi Zohar
>>>>>>>> ---
>>>>>>>>   security/integrity/iint.c      | 71 +++++----------------------
>>>>>>>>   security/integrity/integrity.h | 20 +++++++++-
>>>>>>>>   2 files changed, 29 insertions(+), 62 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/security/integrity/iint.c b/security/integrity/iint.c
>>>>>>>> index 882fde2a2607..a5edd3c70784 100644
>>>>>>>> --- a/security/integrity/iint.c
>>>>>>>> +++ b/security/integrity/iint.c
>>>>>>>> @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void)
>>>>>>>>      return 0;
>>>>>>>>   }
>>>>>>>>
>>>>>>>> +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = {
>>>>>>>> +   .lbs_inode = sizeof(struct integrity_iint_cache *),
>>>>>>>> +};
>>>>>>>
>>>>>>> I'll admit that I'm likely missing an important detail, but is
>>>>>>> there a reason why you couldn't stash the integrity_iint_cache
>>>>>>> struct directly in the inode's security blob instead of the
>>>>>>> pointer?  For example:
>>>>>>>
>>>>>>>    struct lsm_blob_sizes ... = {
>>>>>>>      .lbs_inode = sizeof(struct integrity_iint_cache),
>>>>>>>    };
>>>>>>>
>>>>>>>    struct integrity_iint_cache *integrity_inode_get(inode)
>>>>>>>    {
>>>>>>>      if (unlikely(!inode->i_security))
>>>>>>>        return NULL;
>>>>>>>      return inode->i_security + integrity_blob_sizes.lbs_inode;
>>>>>>>    }
>>>>>>
>>>>>> It would increase memory occupation. Sometimes the IMA policy
>>>>>> encompasses a small subset of the inodes. Allocating the full
>>>>>> integrity_iint_cache would be a waste of memory, I guess?
>>>>>
>>>>> Perhaps, but if it allows us to remove another layer of dynamic
>>>>> memory I would argue that it may be worth the cost.  It's also worth
>>>>> considering the size of integrity_iint_cache; while it isn't small,
>>>>> it isn't exactly huge either.
>>>>>
>>>>>> On the other hand...
>>>>>> (did not think fully about that) if we embed the full structure in
>>>>>> the security blob, we already have a mutex available to use, and we
>>>>>> don't need to take the inode lock (?).
>>>>>
>>>>> That would be excellent, getting rid of a layer of locking would be
>>>>> significant.
>>>>>
>>>>>> I'm fully convinced that we can improve the implementation
>>>>>> significantly. I just was really hoping to go step by step and not
>>>>>> accumulate improvements as a dependency for moving IMA and EVM to
>>>>>> the LSM infrastructure.
>>>>>
>>>>> I understand, and I agree that an iterative approach is a good idea;
>>>>> I just want to make sure we keep things tidy from a user
>>>>> perspective, i.e. not exposing the "integrity" LSM when it isn't
>>>>> required.
>>>>
>>>> Ok, I went back to it again.
>>>>
>>>> I think trying to separate integrity metadata is premature now, too
>>>> many things at the same time.
>>>
>>> I'm not bothered by the size of the patchset, it is more important
>>> that we do The Right Thing.  I would like to hear in more detail why
>>> you don't think this will work; I'm not interested in hearing about
>>> how difficult it may be, I'm interested in hearing about what
>>> challenges we need to solve to do this properly.
>>
>> The right thing in my opinion is to achieve the goal with the minimal
>> set of changes, in the most intuitive way.
>>
>> Until now, there was no solution that could achieve the primary goal
>> of this patch set (moving IMA and EVM to the LSM infrastructure) and,
>> at the same time, achieve the additional goal you set of removing the
>> 'integrity' LSM.
>>
>> If you look at the diff, the changes compared to v5 that was already
>> accepted by Mimi are very straightforward. If the assumption I made
>> holds that in the end the 'ima' LSM could take over the role of the
>> 'integrity' LSM, that for me is the preferable option.
>>
>> Given that the patch set is not doing any design change, but merely
>> moving calls and storing pointers elsewhere, that leaves us with the
>> option of thinking better about what to do next, including, as you
>> suggested, making IMA and EVM use disjoint metadata.
>>
>>>> I started to think, does EVM really need integrity metadata or can
>>>> it work without?
>>>>
>>>> The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we
>>>> have the same problem now. What if we make IMA the one that manages
>>>> integrity metadata, so that we can remove the 'integrity' LSM?
>>>
>>> I guess we should probably revisit the basic idea of whether it even
>>> makes sense to enable EVM without IMA?  Should we update the Kconfig
>>> to require IMA when EVM is enabled?
>>
>> That would be up to Mimi. Also, this does not seem to be the main
>> focus of the patch set.
>>
>>>> Regarding the LSM order, I would take Casey's suggestion of
>>>> introducing LSM_ORDER_REALLY_LAST, for EVM.
>>>
>>> Please understand that I really dislike that we have imposed ordering
>>> constraints at the LSM layer, but I do understand the necessity (the
>>> BPF LSM ordering upsets me the most).  I really don't want to see us
>>> make things worse by adding yet another ordering bucket; I would
>>> rather that we document it well and leave it alone ... basically
>>> treat it like the BPF LSM (grrrrrr).
>>
>> Uhm, that would not be possible right away (the BPF LSM is mutable).
>> Remember that we defined LSM_ORDER_LAST so that an LSM can always be
>> enabled and placed last (requested by Mimi)?
>
> It would be nice if the solution directly addressed the problem.
> EVM needs to be after the LSMs that use xattrs, not after all LSMs.
> I suggested LSM_ORDER_REALLY_LAST in part to identify the notion as
> unattractive.

Excuse me for chiming in, but do we really need the ordering in code?
FWIW, the linker guarantees that objects appear in the order they are
seen during the link (unless --sort-section overrides that default, but
this option is not used in the kernel). Since *.a archive files are
used in kbuild, I have also verified that their use does not break the
assumption; they are always created from scratch.

In short, to enforce an ordering, you can simply list the corresponding
object files in that order in the Makefile. Of course, add a big fat
warning comment, so people understand the order is not arbitrary.

Just my two eurocents,
Petr T