Date: Tue, 14 Mar 2023 17:26:08 -0700
From: Josh Poimboeuf
To: Petr Mladek
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org, Seth Forshee, Peter Zijlstra, Song Liu, Mark Rutland, Joe Lawrence, Miroslav Benes, Jiri Kosina, Ingo Molnar, Rik van Riel
Subject: Re: [PATCH 0.5/3] livepatch: Convert stack entries array to percpu
Message-ID: <20230315002608.sulru3taycwrmvki@treble>
References: <4ae981466b7814ec221014fc2554b2f86f3fb70b.1677257135.git.jpoimboe@kernel.org> <20230228165608.kumgxziaietsjaz3@treble> <20230313233346.kayh4t2lpicjkpsv@treble>

On Tue, Mar 14, 2023 at 11:50:21AM +0100, Petr Mladek wrote:
> >  static int klp_check_stack(struct task_struct *task, const char **oldname)
> >  {
> > -	static unsigned long entries[MAX_STACK_ENTRIES];
> > +	unsigned long *entries = this_cpu_ptr(klp_stack_entries);
> >  	struct klp_object *obj;
> >  	struct klp_func *func;
> >  	int ret, nr_entries;
> >
> > -	ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
> > +	/* Protect 'klp_stack_entries' */
> > +	lockdep_assert_preemption_disabled();
>
> I am thinking about adding:
>
> 	/*
> 	 * Stay on the safe side even when cond_resched() is called from
> 	 * an IRQ context by mistake.
> 	 */
> 	if (!in_task())
> 		return -EINVAL;
>
> Or is this prevented another way, please?

An IRQ handler trying to schedule would be a pretty major bug, no?

I think there are already several different checks for that.  For
example, the __might_resched() call in cond_resched().

--
Josh
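
For readers following the thread, here is a minimal sketch of the pattern
being discussed: a DEFINE_PER_CPU scratch buffer reached via this_cpu_ptr(),
guarded by lockdep_assert_preemption_disabled(), with the in_task() check
Petr proposes added on top. The names (MY_MAX_ENTRIES, my_scratch_entries,
my_fill_entries) are hypothetical illustrations, not taken from the patch:

	#include <linux/lockdep.h>
	#include <linux/percpu.h>
	#include <linux/preempt.h>

	#define MY_MAX_ENTRIES 100

	/*
	 * One scratch buffer per CPU.  No lock is needed as long as
	 * each user runs in task context with preemption disabled, so
	 * it can neither migrate to another CPU nor be preempted by
	 * another user of the same buffer mid-use.
	 */
	static DEFINE_PER_CPU(unsigned long[MY_MAX_ENTRIES], my_scratch_entries);

	static int my_fill_entries(void)
	{
		unsigned long *entries;

		/*
		 * The guard Petr suggests: reject IRQ/softirq context
		 * outright rather than risk stomping on a task-context
		 * user of this CPU's buffer.
		 */
		if (!in_task())
			return -EINVAL;

		/* The caller must have disabled preemption; assert it. */
		lockdep_assert_preemption_disabled();

		entries = this_cpu_ptr(my_scratch_entries);

		/* ... fill and consume 'entries' without sleeping ... */

		return 0;
	}

As in the patch above, the lockdep assertion documents a caller-side
contract (preemption already off) instead of taking a lock; the per-CPU
copies trade a little memory for removing the single shared static buffer.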