From: Qais Yousef
To: "Peter Zijlstra (Intel)", Ingo Molnar
Cc: Vincent Guittot, Dietmar Eggemann, Patrick Bellasi, Tejun Heo,
    Quentin Perret, Wei Wang, Yun Hsiang, linux-kernel@vger.kernel.org,
    Qais Yousef
Subject: [PATCH RESEND 2/2] sched/uclamp: Fix locking around cpu_util_update_eff()
Date: Mon, 10 May 2021 15:50:32 +0100
Message-Id: <20210510145032.1934078-3-qais.yousef@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210510145032.1934078-1-qais.yousef@arm.com>
References: <20210510145032.1934078-1-qais.yousef@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

cpu_cgroup_css_online() calls cpu_util_update_eff() without holding the
uclamp_mutex or rcu_read_lock(), unlike the other call sites. This is a
mistake: the uclamp_mutex is required to protect against concurrent
reads and writes that could update the cgroup hierarchy, and the
rcu_read_lock() is required to traverse the cgroup data structures in
cpu_util_update_eff().

Surround the caller with the required locks and add asserts to better
document the dependency in cpu_util_update_eff().

Fixes: 7226017ad37a ("sched/uclamp: Fix a bug in propagating uclamp value in new cgroups")
Reported-by: Quentin Perret
Signed-off-by: Qais Yousef
---

There was no real failure observed because of this; Quentin just
noticed the oddity and reported it.

 kernel/sched/core.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b61329299379..efa15287d09e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8690,7 +8690,11 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 	/* Propagate the effective uclamp value for the new group */
+	mutex_lock(&uclamp_mutex);
+	rcu_read_lock();
 	cpu_util_update_eff(css);
+	rcu_read_unlock();
+	mutex_unlock(&uclamp_mutex);
 #endif
 
 	return 0;
@@ -8780,6 +8784,9 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
 	enum uclamp_id clamp_id;
 	unsigned int clamps;
 
+	lockdep_assert_held(&uclamp_mutex);
+	SCHED_WARN_ON(!rcu_read_lock_held());
+
 	css_for_each_descendant_pre(css, top_css) {
 		uc_parent = css_tg(css)->parent ?
 			css_tg(css)->parent->uclamp : NULL;
-- 
2.25.1
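
For readers less familiar with the pattern, below is a minimal
userspace sketch of the discipline the patch enforces: callers take the
lock, while the shared traversal helper merely asserts that it is held.
This is an illustration only, not kernel code: pthreads stand in for
uclamp_mutex, a tracked owner flag stands in for lockdep_assert_held(),
the RCU read side is omitted (plain userspace C has no direct
equivalent), and all names (cfg_mutex, update_eff(), node_online()) are
hypothetical.

/*
 * Minimal sketch of the "callers take the lock, helpers assert it"
 * discipline (hypothetical names; illustration only, not kernel code).
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cfg_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_t cfg_owner;
static bool cfg_held;

static void cfg_lock(void)
{
	pthread_mutex_lock(&cfg_mutex);
	cfg_owner = pthread_self();
	cfg_held = true;
}

static void cfg_unlock(void)
{
	cfg_held = false;
	pthread_mutex_unlock(&cfg_mutex);
}

/* Poor man's lockdep_assert_held(): abort if the caller forgot the lock. */
static void assert_cfg_held(void)
{
	assert(cfg_held && pthread_equal(cfg_owner, pthread_self()));
}

struct node {
	int eff;
	struct node *parent;
};

/* Analogue of cpu_util_update_eff(): asserts, rather than takes, the lock. */
static void update_eff(struct node *n)
{
	assert_cfg_held();
	n->eff = n->parent ? n->parent->eff : 0;
}

/* Analogue of the fixed cpu_cgroup_css_online(): the caller locks. */
static void node_online(struct node *n)
{
	cfg_lock();
	update_eff(n);
	cfg_unlock();
}

int main(void)
{
	struct node root = { .eff = 0, .parent = NULL };
	struct node child = { .eff = -1, .parent = &root };

	node_online(&child);
	printf("child.eff = %d\n", child.eff);
	return 0;
}

In the kernel, the same split is expressed with
lockdep_assert_held(&uclamp_mutex) and
SCHED_WARN_ON(!rcu_read_lock_held()), which cost little or nothing when
the corresponding debug options are off.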