From: Vincent Guittot
Date: Tue, 27 Apr 2021 14:44:50 +0200
Subject: Re: [PATCH 0/1] sched/fair: Fix unfairness caused by missing load decay
To: Odin Ugedal
Cc: Odin Ugedal, Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    "open list:CONTROL GROUP (CGROUP)", linux-kernel
References: <20210425080902.11854-1-odin@uged.al>

On Tue, 27 Apr 2021 at 13:24, Odin Ugedal <odin@uged.al> wrote:
>
> Hi,
>
> > I wanted to try one v5.12-rcX version to make sure this is still a
> > valid problem on the latest version.
>
> Ahh, I see. No problem. :) Thank you so much for taking the time to
> look at this!
>
> > I confirm that I can see a ratio of 4ms vs 204ms running time with the
> > patch below.
>
> (I assume you are talking about the bash code for reproducing, not the
> actual sched patch.)

Yes, sorry.

> > But when I look more deeply in my trace (I have instrumented the code),
> > it seems that the 2 stress-ng processes don't belong to the same cgroup
> > but remain in cg-1 and cg-2, which explains such a difference in
> > running time.
>
> (Mail reply number two to your previous mail might also help surface it.)
>
> Not sure if I have stated it correctly, or if we are talking about the
> same thing. It _is_ the intention that the two procs should not be in the
> same cgroup. In the same way as people create "containers", each proc runs
> in a separate cgroup in the example. The issue is not the balancing between
> the procs themselves, but rather between the cgroups/sched_entities inside
> the cgroup hierarchy (due to the fact that the vruntime of those
> sched_entities ends up being calculated with more load than they are
> supposed to carry).
>
> If you have any thoughts about the phrasing of the patch itself to make it
> easier to understand, feel free to suggest.
>
> Given the last cgroup v1 script, I get this:
>
> - cat /proc/<pid>/cgroup | grep cpu
> 11:cpu,cpuacct:/slice/cg-1/sub
> 3:cpuset:/slice
>
> - cat /proc/<pid>/cgroup | grep cpu
> 11:cpu,cpuacct:/slice/cg-2/sub
> 3:cpuset:/slice
>
> The cgroup hierarchy will then roughly be like this (using cgroup v2 terms,
> because I find them easier to reason about):
>
> slice/
>   cg-1/
>     cpu.weight: 100
>     sub/
>       cpu.weight: 1
>       cpuset.cpus: 1
>       cgroup.procs - stress process 1 here
>   cg-2/
>     cpu.weight: 100
>     sub/
>       cpu.weight: 10000
>       cpuset.cpus: 1
>       cgroup.procs - stress process 2 here
>
> This should result in a 50/50 split, due to the fact that cg-1 and cg-2
> both have a weight of 100 and "live" inside the /slice cgroup. The inner
> weights should not matter, since there is only one cgroup at that level.
>
> > So your script doesn't reproduce the bug you want to highlight. That
> > being said, I can also see a diff between the contrib of cpu0 in the
> > tg_load. I'm going to look further.
>
> There can definitely be some other issues involved, and I am pretty sure
> you have way more knowledge about the scheduler than me... :) However,
> I am pretty sure that it is in fact showing the issue I am talking about,
> and applying the patch does indeed make it impossible to reproduce
> on my systems.

Your script is correct. I was wrongly interpreting my trace. I have been
able to reproduce your problem, and your analysis is correct. Let me
continue on the patch itself.

>
> Odin
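---

For readers who want to recreate the layout Odin describes on a cgroup v2
(unified hierarchy) machine, a minimal sketch follows. This is an
illustration, not the thread's cgroup v1 reproduction script; the mount
point, the delegation steps, and the stress-ng invocation are assumptions.

#!/bin/bash
# Hypothetical cgroup v2 rendering of the hierarchy above. Assumes the
# unified hierarchy is mounted at /sys/fs/cgroup, the cpu and cpuset
# controllers are free to delegate, and stress-ng is installed; run as
# root on an otherwise idle machine.
CG=/sys/fs/cgroup

echo "+cpu +cpuset" > "$CG/cgroup.subtree_control"
mkdir "$CG/slice"
echo "+cpu +cpuset" > "$CG/slice/cgroup.subtree_control"

for i in 1 2; do
        mkdir -p "$CG/slice/cg-$i/sub"
        echo "+cpu +cpuset" > "$CG/slice/cg-$i/cgroup.subtree_control"
        # Equal weights for cg-1 and cg-2: the two workloads should
        # split the CPU 50/50, whatever the weights below them are.
        echo 100 > "$CG/slice/cg-$i/cpu.weight"
        # Pin both leaves to the same CPU so they actually compete.
        echo 1 > "$CG/slice/cg-$i/sub/cpuset.cpus"
done

# Wildly different inner weights; since each "sub" is the only child
# at its level, these should not affect the cg-1 vs cg-2 split.
echo 1     > "$CG/slice/cg-1/sub/cpu.weight"
echo 10000 > "$CG/slice/cg-2/sub/cpu.weight"

# Start one CPU hog in each leaf. $BASHPID (not $$) is the subshell's
# own pid; the exec'd stress-ng inherits the cgroup membership.
for i in 1 2; do
        ( echo $BASHPID > "$CG/slice/cg-$i/sub/cgroup.procs"
          exec stress-ng --cpu 1 --timeout 60 ) &
done
wait

# With the bug, the two usage_usec values diverge instead of being
# roughly equal.
grep usage_usec "$CG/slice/cg-1/sub/cpu.stat" "$CG/slice/cg-2/sub/cpu.stat"

The subshell-and-exec pattern moves each worker into its leaf cgroup
before stress-ng starts, so its forked workers cannot be left behind in
the original cgroup.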