From: Waiman Long
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Valentin Schneider, Will Deacon
Cc: Phil Auld, Linus Torvalds, linux-kernel@vger.kernel.org, Waiman Long
Subject: [PATCH v3] sched: Store restrict_cpus_allowed_ptr() call state
Date: Thu, 26 Jan 2023 20:55:27 -0500
Message-Id: <20230127015527.466367-1-longman@redhat.com>

The user_cpus_ptr field was originally added by commit b90ca8badbd1
("sched: Introduce task_struct::user_cpus_ptr to track requested
affinity"). It was used only by the arm64 arch because of its possible
asymmetric CPU setup. Since commit 8f9ea86fdf99 ("sched: Always
preserve the user requested cpumask"), task_struct::user_cpus_ptr has
been repurposed to store the user-requested CPU affinity specified in
sched_setaffinity().

This results in a slight performance regression on an arm64 system
booted with "allow_mismatched_32bit_el0" on the command line. The arch
code will (amongst other things) call force_compatible_cpus_allowed_ptr()
and relax_compatible_cpus_allowed_ptr() when exec()'ing a 32-bit or a
64-bit task respectively. Now a call to relax_compatible_cpus_allowed_ptr()
always results in a __sched_setaffinity() call, whether or not there was
a previous force_compatible_cpus_allowed_ptr() call.

To fix this regression, a new scheduler flag,
task_struct::cpus_allowed_restricted, is added to track whether
force_compatible_cpus_allowed_ptr() has been called. This patch also
updates the comments in force_compatible_cpus_allowed_ptr() and
relax_compatible_cpus_allowed_ptr() and handles their interaction with
sched_setaffinity().

This patch also removes the task_user_cpus() helper. In the case of
relax_compatible_cpus_allowed_ptr(), cpu_possible_mask is used as the
new mask, since user_cpus_ptr masking will be performed within
__sched_setaffinity() anyway.

Fixes: 8f9ea86fdf99 ("sched: Always preserve the user requested cpumask")
Reported-by: Will Deacon
Signed-off-by: Waiman Long
---
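Note (not part of the commit): the sketch below is a stand-alone
user-space model of the intended cpus_allowed_restricted life cycle, for
reviewers who want to see the state machine in isolation. The names
mirror the patch, but struct task_model, the *_model() helpers and
main() are illustrative only; locking, cpumask handling and error paths
are elided.

/* Build with: gcc -Wall -o restrict_model restrict_model.c */
#include <stdbool.h>
#include <stdio.h>

#define SCA_CLR_RESTRICT	0x10	/* mirrors the new flag in kernel/sched/sched.h */

struct task_model {
	unsigned int cpus_allowed_restricted:1;
};

/* Models __set_cpus_allowed_ptr(): clear the bit when SCA_CLR_RESTRICT is passed. */
static void set_cpus_allowed_model(struct task_model *p, unsigned int flags)
{
	if (flags & SCA_CLR_RESTRICT)
		p->cpus_allowed_restricted = 0;
	/* ... affinity update elided ... */
}

/* Models restrict_cpus_allowed_ptr(): mark the task as restricted. */
static void restrict_model(struct task_model *p)
{
	p->cpus_allowed_restricted = 1;
	/* ... mask narrowing elided ... */
}

/*
 * Models relax_compatible_cpus_allowed_ptr(): the early return is the
 * point of the patch - no __sched_setaffinity() work is done unless a
 * prior restriction actually happened.
 */
static bool relax_model(struct task_model *p)
{
	if (!p->cpus_allowed_restricted)
		return false;			/* nothing to undo */
	set_cpus_allowed_model(p, SCA_CLR_RESTRICT);
	return true;				/* mask was relaxed */
}

int main(void)
{
	struct task_model p = { 0 };

	printf("relax without restrict: %s\n",
	       relax_model(&p) ? "relaxed" : "skipped");	/* skipped */
	restrict_model(&p);
	printf("relax after restrict:   %s\n",
	       relax_model(&p) ? "relaxed" : "skipped");	/* relaxed */
	printf("second relax:           %s\n",
	       relax_model(&p) ? "relaxed" : "skipped");	/* skipped */
	return 0;
}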
 include/linux/sched.h |  3 +++
 kernel/sched/core.c   | 25 +++++++++++++++++--------
 kernel/sched/sched.h  |  8 +-------
 3 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 853d08f7562b..f93f62a1f858 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -886,6 +886,9 @@ struct task_struct {
 	unsigned			sched_contributes_to_load:1;
 	unsigned			sched_migrated:1;
 
+	/* restrict_cpus_allowed_ptr() bit, serialized by scheduler locks */
+	unsigned			cpus_allowed_restricted:1;
+
 	/* Force alignment to the next boundary: */
 	unsigned			:0;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bb1ee6d7bdde..d7bc809c109e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2999,6 +2999,10 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	struct rq *rq;
 
 	rq = task_rq_lock(p, &rf);
+
+	if (ctx->flags & SCA_CLR_RESTRICT)
+		p->cpus_allowed_restricted = 0;
+
 	/*
 	 * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
 	 * flags are set.
@@ -3025,8 +3029,8 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 /*
  * Change a given task's CPU affinity to the intersection of its current
  * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
- * If user_cpus_ptr is defined, use it as the basis for restricting CPU
- * affinity or use cpu_online_mask instead.
+ * The cpus_allowed_restricted bit is set to indicate to a later
+ * relax_compatible_cpus_allowed_ptr() call to relax the cpumask.
  *
  * If the resulting mask is empty, leave the affinity unchanged and return
  * -EINVAL.
@@ -3044,6 +3048,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 	int err;
 
 	rq = task_rq_lock(p, &rf);
+	p->cpus_allowed_restricted = 1;
 
 	/*
 	 * Forcefully restricting the affinity of a deadline task is
@@ -3055,7 +3060,8 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
+	if (p->user_cpus_ptr &&
+	    !cpumask_and(new_mask, p->user_cpus_ptr, subset_mask)) {
 		err = -EINVAL;
 		goto err_unlock;
 	}
@@ -3069,9 +3075,8 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 
 /*
  * Restrict the CPU affinity of task @p so that it is a subset of
- * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
- * old affinity mask. If the resulting mask is empty, we warn and walk
- * up the cpuset hierarchy until we find a suitable mask.
+ * task_cpu_possible_mask(). If the resulting mask is empty, we warn
+ * and walk up the cpuset hierarchy until we find a suitable mask.
  */
 void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
@@ -3125,11 +3130,15 @@ __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
 	struct affinity_context ac = {
-		.new_mask  = task_user_cpus(p),
-		.flags     = 0,
+		.new_mask  = cpu_possible_mask,
+		.flags     = SCA_CLR_RESTRICT,
 	};
 	int ret;
 
+	/* Return if no previous force_compatible_cpus_allowed_ptr() call */
+	if (!data_race(p->cpus_allowed_restricted))
+		return;
+
 	/*
 	 * Try to restore the old affinity mask with __sched_setaffinity().
 	 * Cpuset masking will be done there too.
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 771f8ddb7053..e32ba5d9bb54 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1881,13 +1881,6 @@ static inline void dirty_sched_domain_sysctl(int cpu)
 #endif
 
 extern int sched_update_scaling(void);
-
-static inline const struct cpumask *task_user_cpus(struct task_struct *p)
-{
-	if (!p->user_cpus_ptr)
-		return cpu_possible_mask; /* &init_task.cpus_mask */
-	return p->user_cpus_ptr;
-}
 #endif /* CONFIG_SMP */
 
 #include "stats.h"
@@ -2293,6 +2286,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
 #define SCA_USER		0x08
+#define SCA_CLR_RESTRICT	0x10
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1