Message-ID:
<3ea82554-bd7e-41d9-8bea-f12317b27f4f@arm.com>
Date: Fri, 22 Dec 2023 13:52:47 +0000
Subject: Re: [PATCH v7 22/23] sched: Refactor dl/rt find_lowest/latest_rq logic
To: John Stultz, LKML
Cc: Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
 Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
 "Paul E. McKenney", Xuewen Yan, K Prateek Nayak, Thomas Gleixner,
 kernel-team@android.com
References: <20231220001856.3710363-1-jstultz@google.com> <20231220001856.3710363-23-jstultz@google.com>
From: Metin Kaya
In-Reply-To: <20231220001856.3710363-23-jstultz@google.com>

On 20/12/2023 12:18 am, John Stultz wrote:
> This pulls re-validation logic done in find_lowest_rq
> and find_latest_rq after re-acquiring the rq locks out into its
> own function.
>
> This allows us to later use a more complicated validation
> check for chain-migration when using proxy-exectuion.

execution

> TODO: It seems likely we could consolidate this two functions
> further and leave the task_is_rt()/task_is_dl() checks externally?

Agreed.

> Cc: Joel Fernandes
> Cc: Qais Yousef
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Juri Lelli
> Cc: Vincent Guittot
> Cc: Dietmar Eggemann
> Cc: Valentin Schneider
> Cc: Steven Rostedt
> Cc: Ben Segall
> Cc: Zimuzo Ezeozue
> Cc: Youssef Esmat
> Cc: Mel Gorman
> Cc: Daniel Bristot de Oliveira
> Cc: Will Deacon
> Cc: Waiman Long
> Cc: Boqun Feng
> Cc: "Paul E. McKenney"
> Cc: Metin Kaya
> Cc: Xuewen Yan
> Cc: K Prateek Nayak
> Cc: Thomas Gleixner
> Cc: kernel-team@android.com
> Signed-off-by: John Stultz
> ---
>  kernel/sched/deadline.c | 31 ++++++++++++++++++++-----
>  kernel/sched/rt.c       | 50 ++++++++++++++++++++++++++++-------------
>  2 files changed, 59 insertions(+), 22 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 21e56ac58e32..8b5701727342 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2172,6 +2172,30 @@ static int find_later_rq(struct task_struct *sched_ctx, struct task_struct *exec
>  	return -1;
>  }
>
> +static inline bool dl_revalidate_rq_state(struct task_struct *task, struct rq *rq,
> +					  struct rq *later)
> +{
> +	if (task_rq(task) != rq)
> +		return false;
> +
> +	if (!cpumask_test_cpu(later->cpu, &task->cpus_mask))
> +		return false;
> +
> +	if (task_on_cpu(rq, task))
> +		return false;
> +
> +	if (!dl_task(task))
> +		return false;
> +
> +	if (is_migration_disabled(task))
> +		return false;
> +
> +	if (!task_on_rq_queued(task))
> +		return false;
> +
> +	return true;
> +}
> +
>  /* Locks the rq it finds */
>  static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
>  {
> @@ -2204,12 +2228,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
>
>  		/* Retry if something changed. */
>  		if (double_lock_balance(rq, later_rq)) {
> -			if (unlikely(task_rq(task) != rq ||
> -				     !cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
> -				     task_on_cpu(rq, task) ||
> -				     !dl_task(task) ||
> -				     is_migration_disabled(task) ||
> -				     !task_on_rq_queued(task))) {
> +			if (unlikely(!dl_revalidate_rq_state(task, rq, later_rq))) {
>  				double_unlock_balance(rq, later_rq);
>  				later_rq = NULL;
>  				break;
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index f8134d062fa3..fabb19891e95 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1935,6 +1935,39 @@ static int find_lowest_rq(struct task_struct *sched_ctx, struct task_struct *exe
>  	return -1;
>  }
>
> +static inline bool rt_revalidate_rq_state(struct task_struct *task, struct rq *rq,
> +					  struct rq *lowest)
> +{
> +	/*
> +	 * We had to unlock the run queue. In
> +	 * the mean time, task could have
> +	 * migrated already or had its affinity changed.
> +	 * Also make sure that it wasn't scheduled on its rq.
> +	 * It is possible the task was scheduled, set
> +	 * "migrate_disabled" and then got preempted, so we must
> +	 * check the task migration disable flag here too.
> +	 */
> +	if (task_rq(task) != rq)
> +		return false;
> +
> +	if (!cpumask_test_cpu(lowest->cpu, &task->cpus_mask))
> +		return false;
> +
> +	if (task_on_cpu(rq, task))
> +		return false;
> +
> +	if (!rt_task(task))
> +		return false;
> +
> +	if (is_migration_disabled(task))
> +		return false;
> +
> +	if (!task_on_rq_queued(task))
> +		return false;
> +
> +	return true;
> +}
> +
>  /* Will lock the rq it finds */
>  static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>  {
> @@ -1964,22 +1997,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>
>  		/* if the prio of this runqueue changed, try again */
>  		if (double_lock_balance(rq, lowest_rq)) {
> -			/*
> -			 * We had to unlock the run queue. In
> -			 * the mean time, task could have
> -			 * migrated already or had its affinity changed.
> -			 * Also make sure that it wasn't scheduled on its rq.
> -			 * It is possible the task was scheduled, set
> -			 * "migrate_disabled" and then got preempted, so we must
> -			 * check the task migration disable flag here too.
> -			 */
> -			if (unlikely(task_rq(task) != rq ||
> -				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
> -				     task_on_cpu(rq, task) ||
> -				     !rt_task(task) ||
> -				     is_migration_disabled(task) ||
> -				     !task_on_rq_queued(task))) {
> -
> +			if (unlikely(!rt_revalidate_rq_state(task, rq, lowest_rq))) {
>  				double_unlock_balance(rq, lowest_rq);
>  				lowest_rq = NULL;
>  				break;