Subject: Re: [PATCH v4 1/4] sched/fair: Introduce primitives for CFS bandwidth burst
From: changhuaixin <changhuaixin@linux.alibaba.com>
Date: Fri, 19 Mar 2021 20:51:59 +0800
To: Phil Auld
Cc: changhuaixin, Peter Zijlstra, Benjamin Segall, dietmar.eggemann@arm.com,
    juri.lelli@redhat.com, khlebnikov@yandex-team.ru, open list, mgorman@suse.de,
    mingo@redhat.com, Odin Ugedal, Odin Ugedal, Paul Turner, rostedt@goodmis.org,
    Shanpei Chen, Tejun Heo, Vincent Guittot, xiyou.wangcong@gmail.com
References: <20210316044931.39733-1-changhuaixin@linux.alibaba.com>
            <20210316044931.39733-2-changhuaixin@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

> On Mar 18, 2021, at 8:59 PM, Phil Auld wrote:
> 
> On Thu, Mar 18, 2021 at 09:26:58AM +0800 changhuaixin wrote:
>> 
>> 
>>> On Mar 17, 2021, at 4:06 PM, Peter Zijlstra wrote:
>>> 
>>> On Wed, Mar 17, 2021 at 03:16:18PM +0800, changhuaixin wrote:
>>> 
>>>>> Why do you allow such a large burst? I would expect something like:
>>>>> 
>>>>> 	if (burst > quota)
>>>>> 		return -EINVAL;
>>>>> 
>>>>> That limits the variance in the system. Allowing super long bursts seems
>>>>> to defeat the entire purpose of bandwidth control.
>>>> 
>>>> I understand your concern. A large burst value might indeed allow super
>>>> long bursts, effectively disabling bandwidth control for a long time.
>>>> 
>>>> However, I am afraid it is hard to decide what the maximum burst
>>>> should be from the bandwidth control mechanism itself. Allowing a
>>>> burst up to the quota is helpful, but not enough. There are cases
>>>> where workloads are so bursty that they need many times more than the
>>>> quota in a single period. In such cases, limiting burst to the quota
>>>> fails to meet their needs.
>>>> 
>>>> Thus, I wonder whether it is acceptable to leave the maximum burst to
>>>> users. If the desired behavior is to allow some burst, configure burst
>>>> accordingly. If that causes variance, use shares or other fairness
>>>> mechanisms. And if the fairness mechanisms still fail to coordinate,
>>>> maybe do not use burst at all.
>>> 
>>> It's not fairness, bandwidth control is about isolation, and burst
>>> introduces interference.
>>> 
>>>> In this way, cfs_b->buffer can be removed while cfs_b->max_overrun is
>>>> perhaps still needed.
>>> 
>>> So what are the typical avg, stddev, max and mode for the workloads where
>>> you find you need this?
>>> 
>>> I would really like to put a limit on the burst. IMO a workload that has
>>> a burst many times longer than the quota is plain broken.
>> 
>> I see. Then the problem comes down to how large the limit on burst shall be.
>> 
>> I have sampled the CPU usage of a bursty container in 100ms periods. The statistics are:
>> 	average	: 42.2%
>> 	stddev	: 81.5%
>> 	max	: 844.5%
>> 	P95	: 183.3%
>> 	P99	: 437.0%
>> 
>> If quota is 100000us, the burst buffer needs to be about 8 times the quota in order
>> for this workload not to be throttled. I can't say this is typical, but such workloads
>> exist. On a machine running Kubernetes containers, where there is often room for such
>> bursts and the interference is hard to notice, users would prefer allowing the bursts
>> over being throttled occasionally.
>> 
> 
> I admit to not having followed all the history of this patch set. That said, when I see
> the above I just think your quota is too low for your workload.
> 

Yeah, more quota would help this workload. But raising the quota usually prevents us from
improving total CPU utilization by packing more work onto a single machine.
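To put a number on that: the "8 times" figure above is just (peak usage - quota) over one
enforcement period. Below is a throwaway user-space sketch of that arithmetic using the
sampled percentiles; the helper name and the choice of quota are mine, purely for
illustration, not code from the series.

#include <stdio.h>

/* usage_pct is percent of one CPU measured over one enforcement period */
static long long burst_needed_us(double usage_pct, long long quota_us,
				 long long period_us)
{
	long long usage_us = (long long)(usage_pct / 100.0 * period_us);

	return usage_us > quota_us ? usage_us - quota_us : 0;
}

int main(void)
{
	const long long period_us = 100000;	/* 100ms period, as sampled */
	const long long quota_us  = 100000;	/* one CPU's worth of quota */

	printf("P95 (183.3%%): burst >= %lld us\n",
	       burst_needed_us(183.3, quota_us, period_us));
	printf("P99 (437.0%%): burst >= %lld us\n",
	       burst_needed_us(437.0, quota_us, period_us));
	printf("max (844.5%%): burst >= %lld us\n",
	       burst_needed_us(844.5, quota_us, period_us));
	return 0;
}

Covering the max needs roughly 745ms of burst against a 100ms quota, which is where the
"8 times" figure comes from; covering only P99 would still need a bit over 3 times the quota.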
> The burst (mis?)feature seems to be a way to bypass the quota. And it sort of assumes
> cooperative containers that will only burst when they need it and then go back to normal.
> 
>> In this sense, I suggest limiting the burst buffer to around 16 times the quota. That
>> should be enough for users to improve the tail latency caused by throttling. And users
>> might choose a smaller value, or even none, if the interference is unacceptable. What
>> do you think?
>> 
> 
> Having quotas that can regularly be exceeded by 16 times seems to make the concept of a
> quota meaningless. I'd have thought a burst would be some small percentage.
> 
> What if several such containers burst at the same time? Can't that lead to overcommit
> that can affect other well-behaved containers?
> 

I see. Maybe there should be some calculation of the probability of that, as Peter has
replied; a rough sketch of what I mean is below the quoted text.

> 
> Cheers,
> Phil
> 
> --
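To make that "calculation of the probability" a bit more concrete, here is the kind of
estimate I mean. The model and all the numbers below (container count, per-period burst
probability, independence between containers) are made up for illustration only.

#include <stdio.h>
#include <math.h>

/* P(more than k of n independent containers burst in the same period) */
static double prob_more_than(int n, int k, double p)
{
	double sum = 0.0;
	int i;

	for (i = k + 1; i <= n; i++)
		sum += exp(lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
			   + i * log(p) + (n - i) * log(1.0 - p));
	return sum;
}

int main(void)
{
	int n = 20;		/* containers sharing the machine */
	double p = 0.05;	/* chance a container bursts in any given period */
	int k;

	for (k = 1; k <= 5; k++)
		printf("P(more than %d burst in the same period) = %.6f\n",
		       k, prob_more_than(n, k, p));
	return 0;
}

If the machine only has headroom for a few simultaneous bursts, that tail is roughly how
often the interference you are worried about would show up. With real traces one could plug
in the measured per-container distributions instead of a fixed probability.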