From: Huacai Chen
To: Paul E. McKenney, Frederic Weisbecker, Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng, Thomas Gleixner, Ingo Molnar, John Stultz, Stephen Boyd
Cc: Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Sergey Senozhatsky, chenhuacai@kernel.org, rcu@vger.kernel.org, linux-kernel@vger.kernel.org, Huacai Chen
Subject: [PATCH V3 1/2] tick: Rename tick_do_update_jiffies64() and allow external usage
Date: Thu, 10 Aug 2023 20:24:55 +0800
Message-Id: <20230810122456.991421-1-chenhuacai@loongson.cn>
X-Mailer: git-send-email 2.39.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rename tick_do_update_jiffies64() to do_update_jiffies_64() and move it
to jiffies.c. This keeps the naming style in jiffies.c consistent and
allows the function to be used by external components. This patch is a
preparation for the next one, which attempts to avoid unnecessary RCU
stall warnings.

Signed-off-by: Huacai Chen
---
V2: Fix build.
V3: Fix build again.

 include/linux/jiffies.h   |   2 +
 kernel/time/jiffies.c     | 113 ++++++++++++++++++++++++++++++++++++-
 kernel/time/tick-sched.c  | 115 ++------------------------------------
 kernel/time/timekeeping.h |   1 +
 4 files changed, 118 insertions(+), 113 deletions(-)

diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
index 5e13f801c902..48866314c68b 100644
--- a/include/linux/jiffies.h
+++ b/include/linux/jiffies.h
@@ -88,6 +88,8 @@ static inline u64 get_jiffies_64(void)
 }
 #endif
 
+void do_update_jiffies_64(s64 now); /* typedef s64 ktime_t */
+
 /*
  * These inlines deal with timer wrapping correctly. You are
  * strongly encouraged to use them
diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c
index bc4db9e5ab70..507a1e7e619e 100644
--- a/kernel/time/jiffies.c
+++ b/kernel/time/jiffies.c
@@ -5,14 +5,14 @@
 * Copyright (C) 2004, 2005 IBM, John Stultz (johnstul@us.ibm.com)
 */
 #include
+#include
 #include
 #include
-#include
+#include
 
 #include "timekeeping.h"
 #include "tick-internal.h"
-
 static u64 jiffies_read(struct clocksource *cs)
 {
 	return (u64) jiffies;
@@ -61,6 +61,115 @@ EXPORT_SYMBOL(get_jiffies_64);
 
 EXPORT_SYMBOL(jiffies);
 
+/*
+ * The time, when the last jiffy update happened. Write access must hold
+ * jiffies_lock and jiffies_seq. Because tick_nohz_next_event() needs to
+ * get a consistent view of jiffies and last_jiffies_update.
+ */
+ktime_t last_jiffies_update;
+
+/*
+ * Must be called with interrupts disabled !
+ */
+void do_update_jiffies_64(ktime_t now)
+{
+#if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS)
+	unsigned long ticks = 1;
+	ktime_t delta, nextp;
+
+	/*
+	 * 64bit can do a quick check without holding jiffies lock and
+	 * without looking at the sequence count. The smp_load_acquire()
+	 * pairs with the update done later in this function.
+	 *
+	 * 32bit cannot do that because the store of tick_next_period
+	 * consists of two 32bit stores and the first store could move it
+	 * to a random point in the future.
+	 */
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
+			return;
+	} else {
+		unsigned int seq;
+
+		/*
+		 * Avoid contention on jiffies_lock and protect the quick
+		 * check with the sequence count.
+		 */
+		do {
+			seq = read_seqcount_begin(&jiffies_seq);
+			nextp = tick_next_period;
+		} while (read_seqcount_retry(&jiffies_seq, seq));
+
+		if (ktime_before(now, nextp))
+			return;
+	}
+
+	/* Quick check failed, i.e. update is required. */
+	raw_spin_lock(&jiffies_lock);
+	/*
+	 * Reevaluate with the lock held. Another CPU might have done the
+	 * update already.
+	 */
+	if (ktime_before(now, tick_next_period)) {
+		raw_spin_unlock(&jiffies_lock);
+		return;
+	}
+
+	write_seqcount_begin(&jiffies_seq);
+
+	delta = ktime_sub(now, tick_next_period);
+	if (unlikely(delta >= TICK_NSEC)) {
+		/* Slow path for long idle sleep times */
+		s64 incr = TICK_NSEC;
+
+		ticks += ktime_divns(delta, incr);
+
+		last_jiffies_update = ktime_add_ns(last_jiffies_update,
+						   incr * ticks);
+	} else {
+		last_jiffies_update = ktime_add_ns(last_jiffies_update,
+						   TICK_NSEC);
+	}
+
+	/* Advance jiffies to complete the jiffies_seq protected job */
+	jiffies_64 += ticks;
+
+	/*
+	 * Keep the tick_next_period variable up to date.
+	 */
+	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
+
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		/*
+		 * Pairs with smp_load_acquire() in the lockless quick
+		 * check above and ensures that the update to jiffies_64 is
+		 * not reordered vs. the store to tick_next_period, neither
+		 * by the compiler nor by the CPU.
+		 */
+		smp_store_release(&tick_next_period, nextp);
+	} else {
+		/*
+		 * A plain store is good enough on 32bit as the quick check
+		 * above is protected by the sequence count.
+		 */
+		tick_next_period = nextp;
+	}
+
+	/*
+	 * Release the sequence count. calc_global_load() below is not
+	 * protected by it, but jiffies_lock needs to be held to prevent
+	 * concurrent invocations.
+	 */
+	write_seqcount_end(&jiffies_seq);
+
+	calc_global_load();
+
+	raw_spin_unlock(&jiffies_lock);
+	update_wall_time();
+#endif
+}
+
 static int __init init_jiffies_clocksource(void)
 {
 	return __clocksource_register(&clocksource_jiffies);
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 4df14db4da49..c993c7dfe79d 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -44,113 +44,6 @@ struct tick_sched *tick_get_tick_sched(int cpu)
 }
 
 #if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS)
-/*
- * The time, when the last jiffy update happened. Write access must hold
- * jiffies_lock and jiffies_seq.
- * tick_nohz_next_event() needs to get a
- * consistent view of jiffies and last_jiffies_update.
- */
-static ktime_t last_jiffies_update;
-
-/*
- * Must be called with interrupts disabled !
- */
-static void tick_do_update_jiffies64(ktime_t now)
-{
-	unsigned long ticks = 1;
-	ktime_t delta, nextp;
-
-	/*
-	 * 64bit can do a quick check without holding jiffies lock and
-	 * without looking at the sequence count. The smp_load_acquire()
-	 * pairs with the update done later in this function.
-	 *
-	 * 32bit cannot do that because the store of tick_next_period
-	 * consists of two 32bit stores and the first store could move it
-	 * to a random point in the future.
-	 */
-	if (IS_ENABLED(CONFIG_64BIT)) {
-		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
-			return;
-	} else {
-		unsigned int seq;
-
-		/*
-		 * Avoid contention on jiffies_lock and protect the quick
-		 * check with the sequence count.
-		 */
-		do {
-			seq = read_seqcount_begin(&jiffies_seq);
-			nextp = tick_next_period;
-		} while (read_seqcount_retry(&jiffies_seq, seq));
-
-		if (ktime_before(now, nextp))
-			return;
-	}
-
-	/* Quick check failed, i.e. update is required. */
-	raw_spin_lock(&jiffies_lock);
-	/*
-	 * Reevaluate with the lock held. Another CPU might have done the
-	 * update already.
-	 */
-	if (ktime_before(now, tick_next_period)) {
-		raw_spin_unlock(&jiffies_lock);
-		return;
-	}
-
-	write_seqcount_begin(&jiffies_seq);
-
-	delta = ktime_sub(now, tick_next_period);
-	if (unlikely(delta >= TICK_NSEC)) {
-		/* Slow path for long idle sleep times */
-		s64 incr = TICK_NSEC;
-
-		ticks += ktime_divns(delta, incr);
-
-		last_jiffies_update = ktime_add_ns(last_jiffies_update,
-						   incr * ticks);
-	} else {
-		last_jiffies_update = ktime_add_ns(last_jiffies_update,
-						   TICK_NSEC);
-	}
-
-	/* Advance jiffies to complete the jiffies_seq protected job */
-	jiffies_64 += ticks;
-
-	/*
-	 * Keep the tick_next_period variable up to date.
-	 */
-	nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC);
-
-	if (IS_ENABLED(CONFIG_64BIT)) {
-		/*
-		 * Pairs with smp_load_acquire() in the lockless quick
-		 * check above and ensures that the update to jiffies_64 is
-		 * not reordered vs. the store to tick_next_period, neither
-		 * by the compiler nor by the CPU.
-		 */
-		smp_store_release(&tick_next_period, nextp);
-	} else {
-		/*
-		 * A plain store is good enough on 32bit as the quick check
-		 * above is protected by the sequence count.
-		 */
-		tick_next_period = nextp;
-	}
-
-	/*
-	 * Release the sequence count. calc_global_load() below is not
-	 * protected by it, but jiffies_lock needs to be held to prevent
-	 * concurrent invocations.
-	 */
-	write_seqcount_end(&jiffies_seq);
-
-	calc_global_load();
-
-	raw_spin_unlock(&jiffies_lock);
-	update_wall_time();
-}
-
 /*
  * Initialize and return retrieve the jiffies update.
  */
@@ -207,7 +100,7 @@ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
 
 	/* Check, if the jiffies need an update */
 	if (tick_do_timer_cpu == cpu)
-		tick_do_update_jiffies64(now);
+		do_update_jiffies_64(now);
 
 	/*
 	 * If jiffies update stalled for too long (timekeeper in stop_machine()
@@ -218,7 +111,7 @@ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
 		ts->last_tick_jiffies = READ_ONCE(jiffies);
 	} else {
 		if (++ts->stalled_jiffies == MAX_STALLED_JIFFIES) {
-			tick_do_update_jiffies64(now);
+			do_update_jiffies_64(now);
 			ts->stalled_jiffies = 0;
 			ts->last_tick_jiffies = READ_ONCE(jiffies);
 		}
@@ -652,7 +545,7 @@ static void tick_nohz_update_jiffies(ktime_t now)
 	__this_cpu_write(tick_cpu_sched.idle_waketime, now);
 
 	local_irq_save(flags);
-	tick_do_update_jiffies64(now);
+	do_update_jiffies_64(now);
 	local_irq_restore(flags);
 
 	touch_softlockup_watchdog_sched();
@@ -975,7 +868,7 @@ static void tick_nohz_stop_sched_tick(struct tick_sched *ts, int cpu)
 static void tick_nohz_restart_sched_tick(struct tick_sched *ts, ktime_t now)
 {
 	/* Update jiffies first */
-	tick_do_update_jiffies64(now);
+	do_update_jiffies_64(now);
 
 	/*
 	 * Clear the timer idle flag, so we avoid IPIs on remote queueing and
diff --git a/kernel/time/timekeeping.h b/kernel/time/timekeeping.h
index 543beba096c7..21670f6c7421 100644
--- a/kernel/time/timekeeping.h
+++ b/kernel/time/timekeeping.h
@@ -28,6 +28,7 @@ extern void update_wall_time(void);
 
 extern raw_spinlock_t jiffies_lock;
 extern seqcount_raw_spinlock_t jiffies_seq;
+extern ktime_t last_jiffies_update;
 
 #define CS_NAME_LEN	32
-- 
2.39.3