To: Vincent Guittot
Cc: Benjamin Segall, Ingo Molnar, Peter Zijlstra, Juri Lelli, Michal Koutný, Dietmar Eggemann, Steven Rostedt, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel@vger.kernel.org, Odin Ugedal, Kevin Tanguy, Brad Spengler
References: <20211103190613.3595047-1-minipli@grsecurity.net>
From: Mathias Krause
Subject: Re: [PATCH] sched/fair: Prevent dead task groups from regaining cfs_rq's
Date: Thu, 4 Nov 2021 18:37:50 +0100

On 04.11.21 at 17:49, Vincent Guittot wrote:
> [snip]
>
> Ok so we must have 2 GPs:
>
> list_del_rcu(&tg->siblings);
>
>   GP to wait for the end of an ongoing walk_tg_tree_from():
>   the synchronize_rcu() in your patch
>
> list_del_leaf_cfs_rq(tg->cfs_rq[cpu]); if on_list
> remove_entity_load_avg(tg->se[cpu]);
>
>   GP to wait for the end of an ongoing for_each_leaf_cfs_rq_safe()
>   (print_cfs_stats)
>
> kfree everything

Basically yes. But with my patch we already have these two GPs, as there's at least one RCU GP after sched_offline_group() finishes and before sched_free_group() / cpu_cgroup_css_free() starts.

So we either use my patch as-is, or we move unregister_fair_sched_group() to free_fair_sched_group() and use kfree_rcu() instead of kfree(). Both approaches have pros and cons.

Pro for my version is the early unlinking of cfs_rq's for dead task groups, so no surprises later on. Con is the explicit synchronize_rcu().

Pro for the kfree_rcu() approach is the lack of an explicit synchronize_rcu() call, so no explicit blocking operation. Con is that we get cfs_rq's re-added to dead task groups, which feels wrong, and that we need to find a suitable member to overlap with the rcu_head in each involved data type.

Which one do you prefer?

Thanks,
Mathias