Date: Wed, 8 Jun 2022 19:55:15 -0700
Message-Id: <20220609025515.2086253-1-joshdon@google.com>
X-Mailer: git-send-email 2.36.1.476.g0c4daa206d-goog
Subject: [PATCH] sched: allow newidle balancing to bail out of load_balance
From: Josh Don <joshdon@google.com>
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider,
	linux-kernel@vger.kernel.org, Josh Don
List-ID: <linux-kernel.vger.kernel.org>

While doing newidle load balancing, it is possible for new tasks to
arrive, such as with pending wakeups. newidle_balance() already accounts
for this by exiting the sched_domain load_balance() iteration if it
detects these cases. This is very important for minimizing wakeup
latency.

However, if we are already in load_balance(), we may stay there for a
while before returning to newidle_balance(). This is most exacerbated
if we enter a 'goto redo' loop in the LBF_ALL_PINNED case. A
straightforward workaround is to adjust should_we_balance() to bail
out if we're doing a CPU_NEWLY_IDLE balance and new tasks are detected.

This was tested with the following reproduction (a rough userspace
sketch is appended after the patch):
- two threads that take turns sleeping and waking each other up are
  affined to two cores
- a large number of threads with 100% utilization are pinned to all
  other cores

Without this patch, wakeup latency was ~120us for the pair of threads,
almost entirely spent in load_balance(). With this patch, wakeup
latency is ~6us.

Signed-off-by: Josh Don <joshdon@google.com>
---
 kernel/sched/fair.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8c5b74f66bd3..5abf30117824 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9834,9 +9834,15 @@ static int should_we_balance(struct lb_env *env)
 	/*
 	 * In the newly idle case, we will allow all the CPUs
 	 * to do the newly idle load balance.
+	 *
+	 * However, we bail out if we already have tasks or a wakeup pending,
+	 * to optimize wakeup latency.
 	 */
-	if (env->idle == CPU_NEWLY_IDLE)
+	if (env->idle == CPU_NEWLY_IDLE) {
+		if (env->dst_rq->nr_running > 0 || env->dst_rq->ttwu_pending)
+			return 0;
 		return 1;
+	}
 
 	/* Try to find first idle CPU */
 	for_each_cpu_and(cpu, group_balance_mask(sg), env->cpus) {
-- 
2.36.1.476.g0c4daa206d-goog
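
A minimal sketch of such a reproduction, assuming a pipe-based
sleep/wake pair pinned to CPUs 0-1 and a few busy spinners per remaining
CPU (this is not the actual test harness behind the numbers above;
thread counts, CPU numbering, and the pipe mechanism are assumptions,
and wakeup latency would be measured externally, e.g. with a tracer).
Build with: gcc -O2 -pthread repro.c -o repro

/*
 * Hypothetical reproduction sketch: two "ping-pong" threads wake each
 * other over a pair of pipes and are affined to CPUs 0 and 1, while
 * spinner threads are pinned to all remaining CPUs to keep their
 * runqueues long during newidle balancing.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static int ping[2], pong[2];	/* pipes used by the pair to sleep/wake */

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set))
		perror("pthread_setaffinity_np");
}

/* One half of the pair: block reading one pipe, wake the peer via the other. */
static void *pingpong(void *arg)
{
	int cpu = (long)arg;
	int *rd = cpu ? pong : ping;
	int *wr = cpu ? ping : pong;
	char c = 0;

	pin_to_cpu(cpu);	/* CPUs 0 and 1 host the latency-sensitive pair */
	for (;;) {
		if (cpu == 0)
			write(wr[1], &c, 1);
		read(rd[0], &c, 1);	/* sleep until the peer wakes us */
		if (cpu != 0)
			write(wr[1], &c, 1);
	}
	return NULL;
}

/* 100%-utilization thread pinned to one of the remaining CPUs. */
static void *spinner(void *arg)
{
	pin_to_cpu((long)arg);
	for (;;)
		;
	return NULL;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t t;
	long cpu;
	int i;

	if (pipe(ping) || pipe(pong))
		return 1;

	pthread_create(&t, NULL, pingpong, (void *)0L);
	pthread_create(&t, NULL, pingpong, (void *)1L);

	/* Several spinners per remaining CPU so the runqueues stay busy. */
	for (cpu = 2; cpu < ncpus; cpu++)
		for (i = 0; i < 4; i++)
			pthread_create(&t, NULL, spinner, (void *)cpu);

	pause();
	return 0;
}

With the spinners running, the wakeup latency of the ping-pong pair can
then be compared with and without the patch applied.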