Date: Wed, 9 Aug 2023 11:30:21 -0400
From: Steven Rostedt
To: Marco Elver
Cc: Kees Cook, Andrew Morton, Guenter Roeck, Peter Zijlstra, Mark Rutland,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon, Nathan Chancellor,
 Nick Desaulniers, Tom Rix, Miguel Ojeda, Sami Tolvanen,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
 Alexander Potapenko, kasan-dev@googlegroups.com,
 linux-toolchains@vger.kernel.org
Subject: Re: [PATCH v3 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
Message-ID: <20230809113021.63e5ef66@gandalf.local.home>
References: <20230808102049.465864-1-elver@google.com>
 <20230808102049.465864-3-elver@google.com>
 <202308081424.1DC7AA4AE3@keescook>

On Wed, 9 Aug 2023 11:57:19 +0200 Marco Elver wrote:

> static __always_inline bool __list_add_valid(struct list_head *new,
> 					     struct list_head *prev,
> 					     struct list_head *next)
> {
> -	return __list_add_valid_or_report(new, prev, next);
> +	bool ret = true;
> +
> +	if (IS_ENABLED(CONFIG_HARDEN_LIST)) {
> +		/*
> +		 * With the hardening version, elide checking if next and prev
> +		 * are NULL, since the immediate dereference of them below would
> +		 * result in a fault if NULL.
> +		 *
> +		 * With the reduced set of checks, we can afford to inline the
> +		 * checks, which also gives the compiler a chance to elide some
> +		 * of them completely if they can be proven at compile-time. If
> +		 * one of the pre-conditions does not hold, the slow-path will
> +		 * show a report which pre-condition failed.
> +		 */
> +		if (likely(next->prev == prev && prev->next == next &&
> +			   new != prev && new != next))
> +			return true;
> +		ret = false;
> +	}
> +
> +	ret &= __list_add_valid_or_report(new, prev, next);
> +	return ret;
> }

I would actually prefer DEBUG_LIST to select HARDEN_LIST and not the other
way around. It logically doesn't make sense that HARDEN_LIST would select
DEBUG_LIST. That is, I could by default want HARDEN_LIST always on, but not
DEBUG_LIST (because who knows, it may add other features I don't want). But
then, I may have stumbled over something and want more info, and enable
DEBUG_LIST (while still having HARDEN_LIST enabled).

I think you are looking at this from an implementation perspective and not
the normal developer one.
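Roughly, as a Kconfig sketch of that dependency direction (the option names
follow this thread; the prompts and help text are only illustrative):

config HARDEN_LIST
	bool "Minimal integrity checking of list manipulation"
	help
	  Inline a reduced set of sanity checks into the list primitives
	  and only take the out-of-line report path when a check fails.

config DEBUG_LIST
	bool "Debug linked list manipulation"
	select HARDEN_LIST
	help
	  Always run the full out-of-line checks; selecting this also
	  enables HARDEN_LIST.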
This would mean the above function should get enabled by CONFIG_HARDEN_LIST
(and CONFIG_DEBUG_LIST would select CONFIG_HARDEN_LIST) and would look more
like:

static __always_inline bool __list_add_valid(struct list_head *new,
					     struct list_head *prev,
					     struct list_head *next)
{
	bool ret = true;

	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
		/*
		 * With the hardening version, elide checking if next and prev
		 * are NULL, since the immediate dereference of them below would
		 * result in a fault if NULL.
		 *
		 * With the reduced set of checks, we can afford to inline the
		 * checks, which also gives the compiler a chance to elide some
		 * of them completely if they can be proven at compile-time. If
		 * one of the pre-conditions does not hold, the slow-path will
		 * show a report which pre-condition failed.
		 */
		if (likely(next->prev == prev && prev->next == next &&
			   new != prev && new != next))
			return true;
		ret = false;
	}

	ret &= __list_add_valid_or_report(new, prev, next);
	return ret;
}

That is, if DEBUG_LIST is enabled, we always call
__list_add_valid_or_report(), but if only HARDEN_LIST is enabled, then we do
the shortcut.

-- Steve
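For reference, a sketch of how the inline check is consumed by the list
primitives, modeled on include/linux/list.h (simplified; not part of this
patch):

static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	/*
	 * Either the inlined hardening checks or the full out-of-line
	 * debug checks run here; on failure the entry is not linked in.
	 */
	if (!__list_add_valid(new, prev, next))
		return;

	next->prev = new;
	new->next = next;
	new->prev = prev;
	WRITE_ONCE(prev->next, new);
}

static inline void list_add(struct list_head *new, struct list_head *head)
{
	__list_add(new, head, head->next);
}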