Date: Tue, 6 Jul 2021 19:39:33 -0700
Message-Id: <20210707023933.1691149-1-surenb@google.com>
Subject: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
From: Suren Baghdasaryan <surenb@google.com>
To: peterz@infradead.org
Cc: hannes@cmpxchg.org, mingo@redhat.com, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
    bristot@redhat.com, matthias.bgg@gmail.com, minchan@google.com,
    timmurray@google.com, yt.chang@mediatek.com, wenju.xu@mediatek.com,
    jonathan.jmchen@mediatek.com, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org,
    linux-mediatek@lists.infradead.org, kernel-team@android.com,
    surenb@google.com, SH Chen

The psi polling mechanism tries to minimize the number of wakeups to
run psi_poll_work and currently relies on timer_pending() to detect
when this work is already scheduled. This leaves a window of
opportunity for psi_group_change to schedule an immediate
psi_poll_work after poll_timer_fn has been called but before
psi_poll_work can reschedule itself. Below is a depiction of this
window:

poll_timer_fn
  wake_up_interruptible(&group->poll_wait);

psi_poll_worker
  wait_event_interruptible(group->poll_wait, ...)
  psi_poll_work
    psi_schedule_poll_work
      if (timer_pending(&group->poll_timer)) return;
      ...
      mod_timer(&group->poll_timer, jiffies + delay);

Prior to 461daba06bdc we relied on the poll_scheduled atomic, which
was reset and set back inside psi_poll_work, so this race window was
much smaller. The larger window causes an increased number of wakeups,
and our partners report a visible power regression of ~10mA after
applying 461daba06bdc. Bring back the poll_scheduled atomic and make
this race window even narrower by resetting poll_scheduled only when
we reach the polling expiration time. This does not completely
eliminate the possibility of extra wakeups caused by a race with
psi_group_change, but it limits them to the worst-case scenario of one
extra wakeup per tracking window (0.5s in the worst case).

This patch also ensures correct ordering between clearing the
poll_scheduled flag and obtaining changed_states using a memory
barrier. Correct ordering between updating changed_states and setting
poll_scheduled is ensured by the atomic_xchg operation.
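To make the scheme concrete outside the kernel (this is not part of
the patch), below is a minimal single-threaded userspace sketch of the
same schedule-once guard, using C11 atomics in place of the kernel's
atomic_t; schedule_poll(), arm_timer() and the sequence in main() are
illustrative stand-ins, not kernel code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int poll_scheduled;	/* stands in for group->poll_scheduled */

/* Stand-in for mod_timer(): just report that the timer was armed. */
static void arm_timer(unsigned long delay)
{
	printf("timer armed, delay=%lu\n", delay);
}

/* Mirrors the patched psi_schedule_poll_work(): the exchange runs even
 * when !force so poll_scheduled is set on every attempt, and a seq_cst
 * exchange also orders earlier stores, which is what the patch relies
 * on atomic_xchg() to do. */
static void schedule_poll(unsigned long delay, bool force)
{
	if (atomic_exchange(&poll_scheduled, 1) && !force)
		return;		/* already scheduled: skip the extra wakeup */
	arm_timer(delay);
}

int main(void)
{
	schedule_poll(1, false);	/* arms the timer */
	schedule_poll(1, false);	/* blocked: poll_scheduled already set */
	schedule_poll(5, true);		/* forced: re-arms despite being set */

	/* Window expiration in psi_poll_work(): clear the flag, then fence
	 * before sampling state, mirroring atomic_set() + smp_mb(). */
	atomic_store(&poll_scheduled, 0);
	atomic_thread_fence(memory_order_seq_cst);

	schedule_poll(1, false);	/* arms the timer again */
	return 0;
}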
By tracing the number of immediate rescheduling attempts performed by
psi_group_change and the number of those attempts blocked because the
psi monitor was already active, we can assess the effect of this
change:

Before the patch:
                                        Run#1    Run#2    Run#3
Immediate reschedules attempted:        684365   1385156  1261240
Immediate reschedules blocked:          682846   1381654  1258682
Immediate reschedules (delta):          1519     3502     2558
Immediate reschedules (% of attempted): 0.22%    0.25%    0.20%

After the patch:
                                        Run#1    Run#2    Run#3
Immediate reschedules attempted:        882244   770298   426218
Immediate reschedules blocked:          881996   769796   426074
Immediate reschedules (delta):          248      502      144
Immediate reschedules (% of attempted): 0.03%    0.07%    0.03%

The number of non-blocked immediate reschedules dropped from
0.22-0.25% to 0.03-0.07%. The drop is attributed to the smaller race
window and to the fact that the race is now allowed only when psi
monitors reach the polling window expiration time.

Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
Reported-by: Kathleen Chang <yt.chang@mediatek.com>
Reported-by: Wenju Xu <wenju.xu@mediatek.com>
Reported-by: Jonathan Chen <jonathan.jmchen@mediatek.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: SH Chen
---
- Replaced atomic_cmpxchg() with atomic_xchg() to ensure correct
  ordering, per PeterZ
- Added a memory barrier between resetting poll_scheduled and
  obtaining changed_states, per PeterZ and Johannes
- Added a paragraph to the patch description about the ordering
  guarantees added in this patch

 include/linux/psi_types.h |  1 +
 kernel/sched/psi.c        | 46 +++++++++++++++++++++++++++++----------
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 0a23300d49af..ef8bd89d065e 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -158,6 +158,7 @@ struct psi_group {
 	struct timer_list poll_timer;
 	wait_queue_head_t poll_wait;
 	atomic_t poll_wakeup;
+	atomic_t poll_scheduled;
 
 	/* Protects data used by the monitor */
 	struct mutex trigger_lock;
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 1652f2bb54b7..544676b2c1dc 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -196,6 +196,7 @@ static void group_init(struct psi_group *group)
 	INIT_DELAYED_WORK(&group->avgs_work, psi_avgs_work);
 	mutex_init(&group->avgs_lock);
 	/* Init trigger-related members */
+	atomic_set(&group->poll_scheduled, 0);
 	mutex_init(&group->trigger_lock);
 	INIT_LIST_HEAD(&group->triggers);
 	memset(group->nr_triggers, 0, sizeof(group->nr_triggers));
@@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
 	return now + group->poll_min_period;
 }
 
-/* Schedule polling if it's not already scheduled. */
-static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
+/* Schedule polling if it's not already scheduled or forced. */
+static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
+				   bool force)
 {
 	struct task_struct *task;
 
-	/*
-	 * Do not reschedule if already scheduled.
-	 * Possible race with a timer scheduled after this check but before
-	 * mod_timer below can be tolerated because group->polling_next_update
-	 * will keep updates on schedule.
-	 */
-	if (timer_pending(&group->poll_timer))
+	/* xchg should be called even when !force to set poll_scheduled */
+	if (atomic_xchg(&group->poll_scheduled, 1) && !force)
 		return;
 
 	rcu_read_lock();
@@ -582,12 +579,15 @@ static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay
 	 */
 	if (likely(task))
 		mod_timer(&group->poll_timer, jiffies + delay);
+	else
+		atomic_set(&group->poll_scheduled, 0);
 
 	rcu_read_unlock();
 }
 
 static void psi_poll_work(struct psi_group *group)
 {
+	bool force_reschedule = false;
 	u32 changed_states;
 	u64 now;
 
@@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
 
 	now = sched_clock();
 
+	if (now > group->polling_until) {
+		/*
+		 * We are either about to start or might stop polling if no
+		 * state change was recorded. Resetting poll_scheduled leaves
+		 * a small window for psi_group_change to sneak in and schedule
+		 * an immediate poll_work before we get to rescheduling. One
+		 * potential extra wakeup at the end of the polling window
+		 * should be negligible and polling_next_update still keeps
+		 * updates correctly on schedule.
+		 */
+		atomic_set(&group->poll_scheduled, 0);
+		/*
+		 * Ensure that operations of clearing group->poll_scheduled and
+		 * obtaining changed_states are not reordered.
+		 */
+		smp_mb();
+	} else {
+		/* Polling window is not over, keep rescheduling */
+		force_reschedule = true;
+	}
+
 	collect_percpu_times(group, PSI_POLL, &changed_states);
 
 	if (changed_states & group->poll_states) {
@@ -620,7 +642,8 @@ static void psi_poll_work(struct psi_group *group)
 		group->polling_next_update = update_triggers(group, now);
 
 	psi_schedule_poll_work(group,
-		nsecs_to_jiffies(group->polling_next_update - now) + 1);
+		nsecs_to_jiffies(group->polling_next_update - now) + 1,
+		force_reschedule);
 
 out:
 	mutex_unlock(&group->trigger_lock);
@@ -744,7 +767,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
 	write_seqcount_end(&groupc->seq);
 
 	if (state_mask & group->poll_states)
-		psi_schedule_poll_work(group, 1);
+		psi_schedule_poll_work(group, 1, false);
 
 	if (wake_clock && !delayed_work_pending(&group->avgs_work))
 		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
@@ -1239,6 +1262,7 @@ static void psi_trigger_destroy(struct kref *ref)
 		 * can no longer be found through group->poll_task.
 		 */
 		kthread_stop(task_to_destroy);
+		atomic_set(&group->poll_scheduled, 0);
 	}
 	kfree(t);
 }
-- 
2.32.0.93.g670b81a890-goog
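P.S. To illustrate the barrier pairing that the psi_poll_work() hunk
relies on, here is a small two-thread userspace sketch (again not part
of the patch; all names are stand-ins, with C11 atomics in place of
atomic_set/atomic_xchg/smp_mb). Whatever the interleaving, the state
change cannot be missed by both sides: either the poller's post-fence
load observes it, or the writer's exchange sees the cleared flag and
reschedules.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int poll_scheduled;	/* the scheduling guard */
static atomic_int state_changed;	/* stands in for the sampled states */

/* Writer, mirroring psi_group_change(): publish the state change, then
 * try to schedule; the seq_cst exchange orders the store before the flag. */
static void *writer(void *arg)
{
	(void)arg;
	atomic_store(&state_changed, 1);
	if (atomic_exchange(&poll_scheduled, 1) == 0)
		puts("writer: rescheduled the poll");
	return NULL;
}

/* Poller, mirroring psi_poll_work() at window expiration: clear the
 * flag, fence, then sample state (atomic_set + smp_mb + collect). */
static void *poller(void *arg)
{
	(void)arg;
	atomic_store(&poll_scheduled, 0);
	atomic_thread_fence(memory_order_seq_cst);
	if (atomic_load(&state_changed))
		puts("poller: observed the state change");
	return NULL;
}

int main(void)
{
	pthread_t w, p;

	atomic_store(&poll_scheduled, 1);	/* a poll was pending */
	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&p, NULL, poller, NULL);
	pthread_join(w, NULL);
	pthread_join(p, NULL);
	/* At least one line always prints: the change cannot be missed by
	 * both sides, which is the guarantee the patch's poll_scheduled +
	 * smp_mb pairing provides. */
	return 0;
}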