Date: Wed, 1 May 2024 11:39:11 -0400
From: Phil Auld
To: Yury Norov
Cc: Ankit Jain, linux@rasmusvillemoes.dk, akpm@linux-foundation.org,
	linux-kernel@vger.kernel.org, juri.lelli@redhat.com,
	ajay.kaher@broadcom.com, alexey.makhalov@broadcom.com,
	vasavi.sirnapalli@broadcom.com, Paul Turner, Ingo Molnar,
	Peter Zijlstra, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Valentin Schneider
Subject: Re: [PATCH] lib/cpumask: Boot option to disable tasks distribution within cpumask
Message-ID: <20240501153911.GD39737@lorien.usersys.redhat.com>
References: <20240430090431.1619622-1-ankit-aj.jain@broadcom.com>
 <20240501133608.GB39737@lorien.usersys.redhat.com>

On Wed, May 01, 2024 at 08:27:58AM -0700, Yury Norov wrote:
> On Wed, May 01, 2024 at 09:36:08AM -0400, Phil Auld wrote:
> >
> > Hi Yury,
>
> [...]
>
> > > Not that I'm familiar with your setup, but this sounds like a userspace
> > > configuration problem. Can you try to move your non-RT tasks into a
> > > cgroup attached to non-RT CPUs, or something like that?
> > >
> >
> > It's not, really. In a container environment, just logging in to the
> > container could end up with the exec'd task landing on one of the
> > polling or latency-sensitive cores.
> >
> > In a telco deployment the applications run in containers with
> > isolated (pinned) cpus and load balancing disabled. These containers
> > typically use one of those cpus for their "housekeeping", with the
> > remainder used for the latency-sensitive workloads.
> >
> > Also, this is a change in kernel behavior which is breaking
> > userspace.
>
> Alright, that's a different story.

It's a specific edge case. I'd prefer to push for a forward solution
rather than a revert.

> > We are also hitting this and are interested in a way to get the
> > old behavior back for some workloads.
> >
> > > > With the introduction of the kernel cmdline param 'sched_pick_firstcpu',
> > > > there is an option for such usecases to disable the distribution of
> > > > tasks within the cpumask logic and use the previous 'pick first cpu'
> > > > approach for initial placement of tasks, because many telco vendors
> > > > configure the system in such a way that the first cpu within a pod's
> > > > cpuset doesn't run any SCHED_FIFO or high-priority tasks.
> > > >
> > > > Co-developed-by: Alexey Makhalov
> > > > Signed-off-by: Alexey Makhalov
> > > > Signed-off-by: Ankit Jain
> > > > ---
> > > >  lib/cpumask.c | 24 ++++++++++++++++++++++++
> > > >  1 file changed, 24 insertions(+)
> > > >
> > > > diff --git a/lib/cpumask.c b/lib/cpumask.c
> > > > index e77ee9d46f71..3dea87d5ec1f 100644
> > > > --- a/lib/cpumask.c
> > > > +++ b/lib/cpumask.c
> > > > @@ -154,6 +154,23 @@ unsigned int cpumask_local_spread(unsigned int i, int node)
> > > >  }
> > > >  EXPORT_SYMBOL(cpumask_local_spread);
> > > >
> > > > +/*
> > > > + * Task distribution within the cpumask feature disabled?
> > > > + */
> > > > +static bool cpumask_pick_firstcpu __read_mostly;
> > > > +
> > > > +/*
> > > > + * Disable tasks distribution within the cpumask feature
> > > > + */
> > > > +static int __init cpumask_pick_firstcpu_setup(char *str)
> > > > +{
> > > > +	cpumask_pick_firstcpu = 1;
> > > > +	pr_info("cpumask: Tasks distribution within cpumask is disabled.");
> > > > +	return 1;
> > > > +}
> > > > +
> > > > +__setup("sched_pick_firstcpu", cpumask_pick_firstcpu_setup);
> > > > +
> > > >  static DEFINE_PER_CPU(int, distribute_cpu_mask_prev);
> > > >
> > > >  /**
> > > > @@ -171,6 +188,13 @@ unsigned int cpumask_any_and_distribute(const struct cpumask *src1p,
> > > >  {
> > > >  	unsigned int next, prev;
> > > >
> > > > +	/*
> > > > +	 * Don't distribute, if tasks distribution
> > > > +	 * within cpumask feature is disabled
> > > > +	 */
> > > > +	if (cpumask_pick_firstcpu)
> > > > +		return cpumask_any_and(src1p, src2p);
> > >
> > > No, this is the wrong way.
> > >
> > > To begin with, this parameter shouldn't control a single random
> > > function. At the very least, the other cpumask_*_distribute() helpers
> > > should be consistent with the policy.
> > >
> > > But in general... I don't think we should do things like that at all.
> > > The cpumask API is a simple and plain wrapper around bitmaps. If you
> > > want to modify the behavior of the scheduler, you should do that at
> > > the scheduler level, not in a random helper function.
> > >
> > > Consider two cases:
> > >  - Someone unrelated to the scheduler could use the same helper and
> > >    be affected by this parameter inadvertently.
> > >  - The scheduler could switch to another function to distribute CPUs,
> > >    and your setups would suddenly get broken again. This time deeply
> > >    in production.
> > >
> >
> > Yeah, I think I agree with this part. Doing it at the scheduler level,
> > where this is called, makes more sense.
> >
> > Note, this is "deeply in production" now...
>
> So, if we all agree that touching cpumasks is a bad idea, let's drop
> this patch and try to figure out a better solution.
>
> Now that you're saying the scheduler patches break userspace, I think
> it would be legitimate to revert them, unless there's a simple fix for
> that.

As I said above, let's try to go forward if we can. I'd argue that
relying on the old first-cpu selection is not really an API, nor is it
documented, so I don't think a revert is needed.

I think a static key at the one or two places _distribute() is used in
the scheduler (and workqueue?) code would have the same effect as this
patch and be a better fit (rough sketch below).

Cheers,
Phil

>
> Let's see what the folks will say. Please keep me in CC.
>
> Thanks,
> Yury
> --
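
P.S. To make the static key idea concrete, here is a rough, untested
sketch, not an actual patch. The call site, the variable names
(dest_cpu, cpu_valid_mask, new_mask) and where exactly the check lands
are illustrative assumptions only; the real change would go wherever the
scheduler (and possibly workqueue) code calls the _distribute() helpers
today.

/* kernel/sched/core.c (sketch only) */
static DEFINE_STATIC_KEY_FALSE(sched_pick_firstcpu);

static int __init sched_pick_firstcpu_setup(char *str)
{
	static_branch_enable(&sched_pick_firstcpu);
	return 1;
}
__setup("sched_pick_firstcpu", sched_pick_firstcpu_setup);

/*
 * Then, at a scheduler call site that currently uses
 * cpumask_any_and_distribute(), fall back to the old "pick the first
 * matching cpu" behavior when the key is enabled:
 */
	if (static_branch_unlikely(&sched_pick_firstcpu))
		dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
	else
		dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);

That way the check is a no-op on the fast path when the option isn't
set, and the cpumask helpers themselves stay policy-free, which was the
main objection to the original patch.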