From: Metin Kaya
To: linux-kernel@vger.kernel.org
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Ben Segall,
    Zimuzo Ezeozue, Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira,
    Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney", Xuewen Yan,
    K Prateek Nayak, Thomas Gleixner, kernel-team@android.com,
    linux-trace-kernel@vger.kernel.org
Subject: [PATCH] sched: Add trace events for Proxy Execution (PE)
Date: Fri, 2 Feb 2024 08:33:38 +0000
Message-Id: <20240202083338.1328060-1-metin.kaya@arm.com>
X-Mailer: git-send-email 2.34.1

Add sched_[start, finish]_task_selection trace events to measure the
task-selection latency introduced by the PE patches. Moreover,
introduce trace events for other interesting events in PE:

1. sched_pe_enqueue_sleeping_task: a task gets enqueued on the wait
   queue of a sleeping task (the mutex owner).
2. sched_pe_cross_remote_cpu: the dependency chain crosses to a remote
   CPU.
3. sched_pe_task_is_migrating: the mutex owner task migrates.

The new trace events can be tested via this command:

  $ perf trace \
        -e sched:sched_start_task_selection \
        -e sched:sched_finish_task_selection \
        -e sched:sched_pe_enqueue_sleeping_task \
        -e sched:sched_pe_cross_remote_cpu \
        -e sched:sched_pe_task_is_migrating

Notes:
1. These trace events are not intended to be merged upstream. They are
   only for making PE tests easier and will be converted to trace
   points once the PE patches hit upstream.
2. This patch is based on John's Proxy Execution v7 patch series (see
   the link below), which is also available at
   https://github.com/johnstultz-work/linux-dev/commits/proxy-exec-v7-6.7-rc6/.
Link: https://lore.kernel.org/linux-kernel/CANDhNCrHd+5twWVNqBAhVLfhMhkiO0KjxXBmwVgaCD4kAyFyWw@mail.gmail.com/
Signed-off-by: Metin Kaya
CC: John Stultz
CC: Joel Fernandes
CC: Qais Yousef
CC: Ingo Molnar
CC: Peter Zijlstra
CC: Juri Lelli
CC: Vincent Guittot
CC: Dietmar Eggemann
CC: Valentin Schneider
CC: Steven Rostedt
CC: Masami Hiramatsu
CC: Mathieu Desnoyers
CC: Ben Segall
CC: Zimuzo Ezeozue
CC: Youssef Esmat
CC: Mel Gorman
CC: Daniel Bristot de Oliveira
CC: Will Deacon
CC: Waiman Long
CC: Boqun Feng
CC: "Paul E. McKenney"
CC: Xuewen Yan
CC: K Prateek Nayak
CC: Thomas Gleixner
CC: kernel-team@android.com
CC: linux-trace-kernel@vger.kernel.org
---
 include/trace/events/sched.h | 138 +++++++++++++++++++++++++++++++++++
 kernel/sched/core.c          |  11 +++
 2 files changed, 149 insertions(+)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 6188ad0d9e0d..2b08509f3088 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -737,6 +737,144 @@ TRACE_EVENT(sched_wake_idle_without_ipi,
 	TP_printk("cpu=%d", __entry->cpu)
 );
 
+#ifdef CONFIG_SCHED_PROXY_EXEC
+/**
+ * sched_pe_enqueue_sleeping_task - called when a task is enqueued on wait
+ * queue of a sleeping task (mutex owner).
+ * @mutex_owner: pointer to struct task_struct
+ * @blocked: pointer to struct task_struct
+ */
+TRACE_EVENT(sched_pe_enqueue_sleeping_task,
+
+	TP_PROTO(struct task_struct *mutex_owner, struct task_struct *blocked),
+
+	TP_ARGS(mutex_owner, blocked),
+
+	TP_STRUCT__entry(
+		__array(char,	owner_comm,	TASK_COMM_LEN)
+		__field(pid_t,	owner_pid)
+		__field(int,	owner_prio)
+		__field(int,	owner_cpu)
+		__array(char,	blocked_comm,	TASK_COMM_LEN)
+		__field(pid_t,	blocked_pid)
+		__field(int,	blocked_prio)
+		__field(int,	blocked_cpu)
+	),
+
+	TP_fast_assign(
+		strscpy(__entry->owner_comm, mutex_owner->comm, TASK_COMM_LEN);
+		__entry->owner_pid = mutex_owner->pid;
+		__entry->owner_prio = mutex_owner->prio; /* XXX SCHED_DEADLINE */
+		__entry->owner_cpu = task_cpu(mutex_owner);
+
+		strscpy(__entry->blocked_comm, blocked->comm, TASK_COMM_LEN);
+		__entry->blocked_pid = blocked->pid;
+		__entry->blocked_prio = blocked->prio; /* XXX SCHED_DEADLINE */
+		__entry->blocked_cpu = task_cpu(blocked);
+	),
+
+	TP_printk("task=%s pid=%d prio=%d cpu=%d blocked_on owner_task=%s owner_pid=%d owner_prio=%d owner_cpu=%d",
+		  __entry->blocked_comm, __entry->blocked_pid,
+		  __entry->blocked_prio, __entry->blocked_cpu,
+		  __entry->owner_comm, __entry->owner_pid,
+		  __entry->owner_prio, __entry->owner_cpu)
+);
+
+/**
+ * sched_pe_cross_remote_cpu - called when dependency chain crosses remote CPU
+ * @p: pointer to struct task_struct
+ */
+TRACE_EVENT(sched_pe_cross_remote_cpu,
+
+	TP_PROTO(struct task_struct *p),
+
+	TP_ARGS(p),
+
+	TP_STRUCT__entry(
+		__array(char,	comm,	TASK_COMM_LEN)
+		__field(pid_t,	pid)
+		__field(int,	prio)
+		__field(int,	cpu)
+	),
+
+	TP_fast_assign(
+		strscpy(__entry->comm, p->comm, TASK_COMM_LEN);
+		__entry->pid = p->pid;
+		__entry->prio = p->prio; /* XXX SCHED_DEADLINE */
+		__entry->cpu = task_cpu(p);
+	),
+
+	TP_printk("comm=%s pid=%d prio=%d cpu=%d",
+		  __entry->comm, __entry->pid, __entry->prio, __entry->cpu)
+);
+
+/**
+ * sched_pe_task_is_migrating - called when mutex owner is in migrating state
+ * @p: pointer to struct task_struct
+ */
+TRACE_EVENT(sched_pe_task_is_migrating,
+
+	TP_PROTO(struct task_struct *p),
+
+	TP_ARGS(p),
+
+	TP_STRUCT__entry(
+		__array(char,	comm,	TASK_COMM_LEN)
+		__field(pid_t,	pid)
+		__field(int,	prio)
+	),
+
+	TP_fast_assign(
+		strscpy(__entry->comm, p->comm, TASK_COMM_LEN);
+		__entry->pid = p->pid;
+		__entry->prio = p->prio; /* XXX SCHED_DEADLINE */
+	),
+
+	TP_printk("comm=%s pid=%d prio=%d",
+		  __entry->comm, __entry->pid, __entry->prio)
+);
+#endif /* CONFIG_SCHED_PROXY_EXEC */
+
+DECLARE_EVENT_CLASS(sched_task_selection_template,
+
+	TP_PROTO(int cpu),
+
+	TP_ARGS(cpu),
+
+	TP_STRUCT__entry(
+		__field(int, cpu)
+	),
+
+	TP_fast_assign(
+		__entry->cpu = cpu;
+	),
+
+	TP_printk("cpu=%d",
+		  __entry->cpu)
+);
+
+/**
+ * sched_start_task_selection - called before selecting next task in
+ * __schedule()
+ * @cpu: The CPU which will run task selection operation.
+ */
+DEFINE_EVENT(sched_task_selection_template, sched_start_task_selection,
+
+	TP_PROTO(int cpu),
+
+	TP_ARGS(cpu));
+
+/**
+ * sched_finish_task_selection - called after selecting next task in
+ * __schedule()
+ * @cpu: The CPU which ran task selection operation.
+ */
+DEFINE_EVENT(sched_task_selection_template, sched_finish_task_selection,
+
+	TP_PROTO(int cpu),
+
+	TP_ARGS(cpu));
+
 /*
  * Following tracepoints are not exported in tracefs and provide hooking
  * mechanisms only for testing and debugging purposes.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30dfb6f14f2b..866809e52971 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7006,6 +7006,9 @@ static void proxy_enqueue_on_owner(struct rq *rq, struct task_struct *owner,
 	 */
 	if (!owner->on_rq) {
 		BUG_ON(!next->on_rq);
+
+		trace_sched_pe_enqueue_sleeping_task(owner, next);
+
 		deactivate_task(rq, next, DEQUEUE_SLEEP);
 		if (task_current_selected(rq, next)) {
 			put_prev_task(rq, next);
@@ -7100,6 +7103,9 @@ find_proxy_task(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
 	if (task_cpu(owner) != cur_cpu) {
 		target_cpu = task_cpu(owner);
+
+		trace_sched_pe_cross_remote_cpu(owner);
+
 		/*
 		 * @owner can disappear, simply migrate to @target_cpu and leave that CPU
 		 * to sort things out.
@@ -7113,6 +7119,8 @@
 	}
 
 	if (task_on_rq_migrating(owner)) {
+		trace_sched_pe_task_is_migrating(owner);
+
 		/*
 		 * One of the chain of mutex owners is currently migrating to this
 		 * CPU, but has not yet been enqueued because we are holding the
@@ -7335,6 +7343,8 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	}
 
 	prev_not_proxied = !prev->blocked_donor;
+
+	trace_sched_start_task_selection(cpu);
 pick_again:
 	next = pick_next_task(rq, rq_selected(rq), &rf);
 	rq_set_selected(rq, next);
@@ -7350,6 +7360,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 		if (next == rq->idle && prev == rq->idle)
 			preserve_need_resched = true;
 	}
+	trace_sched_finish_task_selection(cpu);
 
 	if (!preserve_need_resched)
 		clear_tsk_need_resched(prev);
-- 
2.34.1