From: Mickaël Salaün <mic@digikod.net>
To: James Morris, "Serge E. Hallyn"
Cc: Mickaël Salaün, Al Viro, Jann Horn, John Johansen, Kees Cook,
    Konstantin Meskhidze, Paul Moore, Shuah Khan, Tetsuo Handa,
    linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-security-module@vger.kernel.org
Subject: [PATCH v3 04/12] landlock: Fix same-layer rule unions
Date: Fri, 6 May 2022 18:10:54 +0200
Message-Id: <20220506161102.525323-5-mic@digikod.net>
In-Reply-To: <20220506161102.525323-1-mic@digikod.net>
References: <20220506161102.525323-1-mic@digikod.net>

The original behavior was to check whether the full set of requested
accesses was allowed by at least one rule of every relevant layer. This
did not take into account requests for multiple accesses where several
same-layer rules, taken together, allow the union of these accesses.
As a result, multiple accesses requested on a file hierarchy matching
rules that, together, allowed these accesses, but without a single rule
allowing all of them, were illegitimately denied. This case should be
rare in practice and it can only be triggered by the path_rename or
file_open hook implementations.

For instance, if, for the same layer, a rule allows execution beneath
/a/b and another rule allows read beneath /a, requesting read and execute
access at the same time for /a/b should be allowed for this layer. This
was an inconsistency because the union of same-layer rule accesses was
already allowed if requested one at a time anyway.

This fix changes the way allowed accesses are gathered over a path walk.
To take into account all these rule accesses, we store in a matrix all
layers granting the set of requested accesses, according to the handled
accesses. To avoid heap allocation, we use an array on the stack which is
2*13 bytes. A following commit bringing the LANDLOCK_ACCESS_FS_REFER
access right will increase this size to reach 112 bytes (2*14*4) in case
of link or rename actions.

Add a new layout1.layer_rule_unions test to check that accesses from
different rules pertaining to the same layer are ORed in a file
hierarchy. Also test that this is not the case for rules from different
layers.

Reviewed-by: Paul Moore
Signed-off-by: Mickaël Salaün
Link: https://lore.kernel.org/r/20220506161102.525323-5-mic@digikod.net
---
Changes since v2:
* Fix commit message about stack size.
* Format with clang-format and rebase.

Changes since v1:
* Add Reviewed-by: Paul Moore.
* Small cosmetic comment cleanup.
---
 security/landlock/fs.c                     |  78 ++++++++++-----
 security/landlock/ruleset.h                |   2 +
 tools/testing/selftests/landlock/fs_test.c | 107 +++++++++++++++++++++
 3 files changed, 161 insertions(+), 26 deletions(-)
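To make the per-access layer-mask matrix described above easier to follow
outside of the diff context, here is a minimal userspace sketch of the
same idea. The toy_* names, fixed-size types, and hard-coded bit values
are illustrative stand-ins, not the kernel's actual structures or
helpers:

/*
 * Minimal userspace model of the per-access layer-mask matrix (toy code,
 * not the kernel implementation): one mask per requested access right,
 * where each bit records a layer that still has to grant that access.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_NUM_ACCESS 13 /* number of handled filesystem access rights */

struct toy_rule_layer {
	int level;       /* 1-based layer index */
	uint16_t access; /* accesses granted by this rule for this layer */
};

/*
 * Clears, for each requested access bit, the bit of every layer whose
 * rule grants that access.  Returns true once every requested access has
 * an empty mask, i.e. each access is granted by at least one rule per
 * layer, possibly by different rules of the same layer.
 */
static bool toy_unmask_layers(const struct toy_rule_layer *layers,
			      int num_layers, uint16_t access_request,
			      uint16_t layer_masks[TOY_NUM_ACCESS])
{
	for (int i = 0; i < num_layers; i++) {
		const uint16_t layer_bit = 1U << (layers[i].level - 1);
		bool is_empty = true;

		for (int bit = 0; bit < TOY_NUM_ACCESS; bit++) {
			if (!(access_request & (1U << bit)))
				continue;
			if (layers[i].access & (1U << bit))
				layer_masks[bit] &= ~layer_bit;
			is_empty = is_empty && !layer_masks[bit];
		}
		if (is_empty)
			return true;
	}
	return false;
}

int main(void)
{
	/* Request accesses 0 (read) and 1 (execute) at the same time. */
	const uint16_t request = 0x3;
	/* A single layer (level 1) handles both requested accesses. */
	uint16_t layer_masks[TOY_NUM_ACCESS] = { [0] = 0x1, [1] = 0x1 };
	/* Two same-layer rules, each granting only part of the request. */
	const struct toy_rule_layer rule_read = { .level = 1, .access = 0x1 };
	const struct toy_rule_layer rule_exec = { .level = 1, .access = 0x2 };
	bool allowed;

	/* Walking up the path, each matching rule clears what it grants. */
	allowed = toy_unmask_layers(&rule_read, 1, request, layer_masks);
	if (!allowed)
		allowed = toy_unmask_layers(&rule_exec, 1, request, layer_masks);
	printf("allowed: %d\n", allowed); /* prints 1: the union is enough */
	return 0;
}

With a single mask per layer (the previous behavior), neither rule alone
could clear the layer bit, so the combined request was denied even though
each access was individually allowed.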
diff --git a/security/landlock/fs.c b/security/landlock/fs.c
index 20953bff8fd5..c5749301b37d 100644
--- a/security/landlock/fs.c
+++ b/security/landlock/fs.c
@@ -207,45 +207,67 @@ find_rule(const struct landlock_ruleset *const domain,
 	return rule;
 }
 
-static inline layer_mask_t unmask_layers(const struct landlock_rule *const rule,
-					 const access_mask_t access_request,
-					 layer_mask_t layer_mask)
+/*
+ * @layer_masks is read and may be updated according to the access request and
+ * the matching rule.
+ *
+ * Returns true if the request is allowed (i.e. relevant layer masks for the
+ * request are empty).
+ */
+static inline bool
+unmask_layers(const struct landlock_rule *const rule,
+	      const access_mask_t access_request,
+	      layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
 {
 	size_t layer_level;
 
+	if (!access_request || !layer_masks)
+		return true;
 	if (!rule)
-		return layer_mask;
+		return false;
 
 	/*
 	 * An access is granted if, for each policy layer, at least one rule
-	 * encountered on the pathwalk grants the requested accesses,
-	 * regardless of their position in the layer stack. We must then check
+	 * encountered on the pathwalk grants the requested access,
+	 * regardless of its position in the layer stack. We must then check
 	 * the remaining layers for each inode, from the first added layer to
-	 * the last one.
+	 * the last one. When there is multiple requested accesses, for each
+	 * policy layer, the full set of requested accesses may not be granted
+	 * by only one rule, but by the union (binary OR) of multiple rules.
+	 * E.g. /a/b <execute> + /a <read> => /a/b <execute + read>
 	 */
 	for (layer_level = 0; layer_level < rule->num_layers; layer_level++) {
 		const struct landlock_layer *const layer =
 			&rule->layers[layer_level];
 		const layer_mask_t layer_bit = BIT_ULL(layer->level - 1);
+		const unsigned long access_req = access_request;
+		unsigned long access_bit;
+		bool is_empty;
 
-		/* Checks that the layer grants access to the full request. */
-		if ((layer->access & access_request) == access_request) {
-			layer_mask &= ~layer_bit;
-
-			if (layer_mask == 0)
-				return layer_mask;
+		/*
+		 * Records in @layer_masks which layer grants access to each
+		 * requested access.
+		 */
+		is_empty = true;
+		for_each_set_bit(access_bit, &access_req,
+				 ARRAY_SIZE(*layer_masks)) {
+			if (layer->access & BIT_ULL(access_bit))
+				(*layer_masks)[access_bit] &= ~layer_bit;
+			is_empty = is_empty && !(*layer_masks)[access_bit];
 		}
+		if (is_empty)
+			return true;
 	}
-	return layer_mask;
+	return false;
 }
 
 static int check_access_path(const struct landlock_ruleset *const domain,
 			     const struct path *const path,
 			     const access_mask_t access_request)
 {
-	bool allowed = false;
+	layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {};
+	bool allowed = false, has_access = false;
 	struct path walker_path;
-	layer_mask_t layer_mask;
 	size_t i;
 
 	if (!access_request)
@@ -265,13 +287,20 @@ static int check_access_path(const struct landlock_ruleset *const domain,
 		return -EACCES;
 
 	/* Saves all layers handling a subset of requested accesses. */
-	layer_mask = 0;
 	for (i = 0; i < domain->num_layers; i++) {
-		if (domain->fs_access_masks[i] & access_request)
-			layer_mask |= BIT_ULL(i);
+		const unsigned long access_req = access_request;
+		unsigned long access_bit;
+
+		for_each_set_bit(access_bit, &access_req,
+				 ARRAY_SIZE(layer_masks)) {
+			if (domain->fs_access_masks[i] & BIT_ULL(access_bit)) {
+				layer_masks[access_bit] |= BIT_ULL(i);
+				has_access = true;
+			}
+		}
 	}
 	/* An access request not handled by the domain is allowed. */
-	if (layer_mask == 0)
+	if (!has_access)
 		return 0;
 
 	walker_path = *path;
@@ -283,14 +312,11 @@ static int check_access_path(const struct landlock_ruleset *const domain,
 	while (true) {
 		struct dentry *parent_dentry;
 
-		layer_mask =
-			unmask_layers(find_rule(domain, walker_path.dentry),
-				      access_request, layer_mask);
-		if (layer_mask == 0) {
+		allowed = unmask_layers(find_rule(domain, walker_path.dentry),
+					access_request, &layer_masks);
+		if (allowed)
 			/* Stops when a rule from each layer grants access. */
-			allowed = true;
 			break;
-		}
 
 jump_up:
 		if (walker_path.dentry == walker_path.mnt->mnt_root) {
diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h
index 521af2848951..d43231b783e4 100644
--- a/security/landlock/ruleset.h
+++ b/security/landlock/ruleset.h
@@ -22,6 +22,8 @@ typedef u16 access_mask_t;
 /* Makes sure all filesystem access rights can be stored. */
 static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS);
+/* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */
+static_assert(sizeof(unsigned long) >= sizeof(access_mask_t));
 
 typedef u16 layer_mask_t;
 /* Makes sure all layers can be checked. */
diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
index e13f046a172a..a4fdcda62bde 100644
--- a/tools/testing/selftests/landlock/fs_test.c
+++ b/tools/testing/selftests/landlock/fs_test.c
@@ -758,6 +758,113 @@ TEST_F_FORK(layout1, ruleset_overlap)
 	ASSERT_EQ(0, test_open(dir_s1d3, O_RDONLY | O_DIRECTORY));
 }
 
+TEST_F_FORK(layout1, layer_rule_unions)
+{
+	const struct rule layer1[] = {
+		{
+			.path = dir_s1d2,
+			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+		},
+		/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
+		{
+			.path = dir_s1d3,
+			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+		},
+		{},
+	};
+	const struct rule layer2[] = {
+		/* Doesn't change anything from layer1. */
+		{
+			.path = dir_s1d2,
+			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+		},
+		{},
+	};
+	const struct rule layer3[] = {
+		/* Only allows write (but not read) to dir_s1d3. */
+		{
+			.path = dir_s1d2,
+			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+		},
+		{},
+	};
+	int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1);
+
+	ASSERT_LE(0, ruleset_fd);
+	enforce_ruleset(_metadata, ruleset_fd);
+	ASSERT_EQ(0, close(ruleset_fd));
+
+	/* Checks s1d1 hierarchy with layer1. */
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d2 hierarchy with layer1. */
+	ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d3 hierarchy with layer1. */
+	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
+	/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
+	ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Doesn't change anything from layer1. */
+	ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2);
+	ASSERT_LE(0, ruleset_fd);
+	enforce_ruleset(_metadata, ruleset_fd);
+	ASSERT_EQ(0, close(ruleset_fd));
+
+	/* Checks s1d1 hierarchy with layer2. */
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d2 hierarchy with layer2. */
+	ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d3 hierarchy with layer2. */
+	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
+	/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
+	ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Only allows write (but not read) to dir_s1d3. */
+	ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3);
+	ASSERT_LE(0, ruleset_fd);
+	enforce_ruleset(_metadata, ruleset_fd);
+	ASSERT_EQ(0, close(ruleset_fd));
+
+	/* Checks s1d1 hierarchy with layer3. */
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d2 hierarchy with layer3. */
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
+	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+
+	/* Checks s1d3 hierarchy with layer3. */
+	ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDONLY));
+	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
+	/* dir_s1d3 should now deny READ_FILE and WRITE_FILE (O_RDWR). */
+	ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDWR));
+	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
+}
+
 TEST_F_FORK(layout1, non_overlapping_accesses)
 {
 	const struct rule layer1[] = {
-- 
2.35.1
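For readers who want to observe the fixed behavior outside the selftest
harness, here is a small userspace sketch paralleling the new layer1
rules (two same-layer rules, one granting read and one granting write).
It is not part of the patch: the /a, /a/b, and /a/b/f paths are
placeholders, error handling is omitted, and it assumes a Landlock-enabled
kernel plus headers that define the SYS_landlock_* syscall numbers:

/*
 * Standalone sketch (not part of the patch) of the layer1 scenario from
 * the new test: two same-layer rules whose union grants O_RDWR.  Paths
 * are placeholders and error handling is omitted.
 */
#include <fcntl.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static void add_rule(const int ruleset_fd, const char *const path,
		     const __u64 access)
{
	struct landlock_path_beneath_attr path_beneath = {
		.allowed_access = access,
		.parent_fd = open(path, O_PATH | O_CLOEXEC),
	};

	syscall(SYS_landlock_add_rule, ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
		&path_beneath, 0);
	close(path_beneath.parent_fd);
}

int main(void)
{
	const struct landlock_ruleset_attr ruleset_attr = {
		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
				     LANDLOCK_ACCESS_FS_WRITE_FILE,
	};
	const int ruleset_fd = syscall(SYS_landlock_create_ruleset,
				       &ruleset_attr, sizeof(ruleset_attr), 0);

	/* Two rules in the same layer, each granting part of the request. */
	add_rule(ruleset_fd, "/a", LANDLOCK_ACCESS_FS_READ_FILE);
	add_rule(ruleset_fd, "/a/b", LANDLOCK_ACCESS_FS_WRITE_FILE);

	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
	close(ruleset_fd);

	/*
	 * With this fix, the O_RDWR request below is granted by the union of
	 * the two same-layer rules; before the fix it was denied because no
	 * single rule covered both READ_FILE and WRITE_FILE.
	 */
	printf("open(/a/b/f, O_RDWR): %d\n", open("/a/b/f", O_RDWR));
	return 0;
}

Before this fix, the same program would get EACCES on the O_RDWR open
even though separate O_RDONLY and O_WRONLY opens of the same file both
succeed.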