From: bsegall@google.com
To: Phil Auld
Cc: mingo@redhat.com, peterz@infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC] sched/fair: hard lockup in sched_cfs_period_timer
Date: Tue, 12 Mar 2019 10:29:37 -0700
References: <20190301145209.GA9304@pauld.bos.csb>
 <20190304190510.GB5366@lorien.usersys.redhat.com>
 <20190305200554.GA8786@pauld.bos.csb>
 <20190306162313.GB8786@pauld.bos.csb>
 <20190309203320.GA24464@lorien.usersys.redhat.com>
 <20190311202536.GK25201@pauld.bos.csb>
In-Reply-To: <20190311202536.GK25201@pauld.bos.csb> (Phil Auld's message of "Mon, 11 Mar 2019 16:25:37 -0400")
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain

Phil Auld writes:

> On Mon, Mar 11, 2019 at 10:44:25AM -0700 bsegall@google.com wrote:
>> Phil Auld writes:
>>
>> > On Wed, Mar 06, 2019 at 11:25:02AM -0800 bsegall@google.com wrote:
>> >> Phil Auld writes:
>> >>
>> >> > On Tue, Mar 05, 2019 at 12:45:34PM -0800 bsegall@google.com wrote:
>> >> >> Phil Auld writes:
>> >> >>
>> >> >> > Interestingly, if I limit the number of child cgroups to the number of
>> >> >> > them I'm actually putting processes into (16 down from 2500) the problem
>> >> >> > does not reproduce.
>> >> >>
>> >> >> That is indeed interesting, and definitely not something we'd want to
>> >> >> matter. (Particularly if it's not root->a->b->c...->throttled_cgroup or
>> >> >> root->throttled->a->...->thread vs root->throttled_cgroup, which is what
>> >> >> I was originally thinking of)
>> >> >>
>> >> >
>> >> > The locking may be a red herring.
>> >> >
>> >> > The setup is root->throttled->a where a is 1-2500. There are 4 threads in
>> >> > each of the first 16 a groups. The parent, throttled, is where the
>> >> > cfs_period/quota_us are set.
>> >> >
>> >> > I wonder if the problem is the walk_tg_tree_from() call in
>> >> > unthrottle_cfs_rq().
>> >> >
>> >> > distribute_cfs_runtime() looks to be O(n * m), where n is the number of
>> >> > throttled cfs_rqs and m is the number of child cgroups. But I'm not
>> >> > completely clear on how the hierarchical cgroups play together here.
>> >> >
>> >> > I'll pull on this thread some.
>> >> >
>> >> > Thanks for your input.
>> >> >
>> >> > Cheers,
>> >> > Phil
>> >>
>> >> Yeah, that isn't under the cfs_b lock, but it is still part of distribute
>> >> (and under the rq lock, which might also matter). I was thinking too much
>> >> about just the cfs_b regions. I'm not sure there's any good general
>> >> optimization there.
>> >
>> > It's really an edge case, but the watchdog NMI is pretty painful.
>> >
>> >> I suppose cfs_rqs (tgs/cfs_bs?) could have a "nearest
>> >> ancestor with a quota" pointer, and ones with quota could have a
>> >> "descendants with quota" list, parallel to the children/parent lists of
>> >> tgs. Then throttle/unthrottle would only have to visit these lists, and
>> >> child cgroups/cfs_rqs without their own quotas would just check
>> >> cfs_rq->nearest_quota_cfs_rq->throttle_count. throttled_clock_task_time
>> >> can also probably be tracked there.
>> >
>> > That seems like it would add a lot of complexity for this edge case. Maybe
>> > it would be acceptable to use a safety valve like my first example, or
>> > something like the below, which keeps tuning the period up until it no
>> > longer overruns. The downside of this one is that it changes the user's
>> > settings, but that could be preferable to an NMI crash.
>>
>> Yeah, I'm not sure what solution is best here, but one of the solutions
>> should be done.
>>
>> > Cheers,
>> > Phil
>> >
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 310d0637fe4b..78f9e28adc7b 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -4859,16 +4859,42 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
>> >  	return HRTIMER_NORESTART;
>> >  }
>> >
>> > +extern const u64 max_cfs_quota_period;
>> > +s64 cfs_quota_period_autotune_thresh = 100 * NSEC_PER_MSEC;
>> > +int cfs_quota_period_autotune_shift = 4;   /* 100 / 16 = 6.25% */
>>
>> Letting it spin for 100ms and then only increasing by 6% seems extremely
>> generous.
>> If we went this route I'd probably say "after looping N times, set the
>> period to time taken / N + X%", where N is like 8 or something. I think
>> I'd probably prefer something like this to the previous "just abort and
>> let it happen again next interrupt" one.
>
> Okay. I'll try to spin something up that does this. It may be a little
> trickier to keep the quota proportional to the new period. I think that's
> important since we'll be changing the user's setting.
>
> Do you mean to have it break when it hits N and recalculate the period, or
> reset the counter and keep going?

In theory you should be fine doing it once more, I think? And yeah, keeping
the quota correct is a bit more annoying given that you have to use
fixed-point math.
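
Roughly the shape I'm picturing, if it helps. Completely untested, the N of
8 and the ~15% bump are just placeholder numbers, and it cheats by scaling
the existing period up by a fixed percentage instead of literally computing
time taken / N; the surrounding function is the current
sched_cfs_period_timer():

static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
{
	struct cfs_bandwidth *cfs_b =
		container_of(timer, struct cfs_bandwidth, period_timer);
	int overrun;
	int idle = 0;
	int count = 0;

	raw_spin_lock(&cfs_b->lock);
	for (;;) {
		overrun = hrtimer_forward_now(timer, cfs_b->period);
		if (!overrun)
			break;

		if (++count > 8) {
			u64 new, old = ktime_to_ns(cfs_b->period);

			/*
			 * Still overrunning after several loops: grow the
			 * period (capped at the existing max) and scale the
			 * quota by the same ratio so the user's effective
			 * bandwidth fraction doesn't change.
			 */
			new = (old * 147) / 128;	/* ~ +15% */
			new = min(new, max_cfs_quota_period);

			/* both values are < 1s in ns, so the product fits in u64 */
			cfs_b->quota = div64_u64(cfs_b->quota * new, old);
			cfs_b->period = ns_to_ktime(new);

			pr_warn_ratelimited("cfs_period_timer: period too short, scaling up (new period %lld us, new quota %lld us)\n",
					    div_u64(new, NSEC_PER_USEC),
					    div_u64(cfs_b->quota, NSEC_PER_USEC));

			/* reset the counter and keep going rather than bailing */
			count = 0;
		}

		idle = do_sched_cfs_period_timer(cfs_b, overrun);
	}
	if (idle)
		cfs_b->period_active = 0;
	raw_spin_unlock(&cfs_b->lock);

	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
}

The div64_u64() is the fixed-point part: quota ends up as
quota * new_period / old_period, so the quota/period ratio the user asked for
is preserved even though both absolute values change.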