Date: Wed, 18 Mar 2020 11:34:56 +0000
From: Qais Yousef
To: Josh Don
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot, Li Zefan,
    Tejun Heo, Johannes Weiner, Dietmar Eggemann, Steven Rostedt, Ben Segall,
    Mel Gorman, linux-kernel, cgroups@vger.kernel.org, Paul Turner
Subject: Re: [PATCH v2] sched/cpuset: distribute tasks within affinity masks
Message-ID: <20200318113456.3h64jpyb6xiczhcj@e107158-lin.cambridge.arm.com>
References: <20200311010113.136465-1-joshdon@google.com>
 <20200311140533.pclgecwhbpqzyrks@e107158-lin.cambridge.arm.com>
 <20200317192401.GE20713@hirez.programming.kicks-ass.net>

On 03/17/20 14:35, Josh Don wrote:
> On Wed, Mar 11, 2020 at 7:05 AM Qais Yousef wrote:
> >
> > This actually helps me fix a similar problem I faced in RT [1]. If multiple
> > RT tasks wake up at the same time, we get a 'thundering herd' issue where
> > they all end up going to the same CPU, just to be pushed out again.
> >
> > Besides, this will help fix another problem with RT task fitness, which is
> > a manifestation of the problem above. If two tasks wake up at the same time
> > and they happen to run on a little CPU (but request to run on a big one),
> > one of them will end up being migrated, because find_lowest_rq() will
> > return the first CPU in the mask for both tasks.
> >
> > I tested the API (not the change in sched/core.c) and it looks good to me.
>
> Nice, glad that the API already has another use case. Thanks for taking
> a look.
>
> > nit: cpumask_first_and() is better here?
>
> Yeah, I would also prefer to use it, but the definition of
> cpumask_first_and() follows this section, as it itself uses
> cpumask_next_and().
>
> > It might be a good idea to split the API from the user too.
>
> Not sure what you mean by this, could you clarify?

I meant it'd be a good idea to split the cpumask API into its own patch and
have a separate patch for the user in sched/core.c. But that was a small nit.
If the user (in sched/core.c) somehow introduces a regression, reverting it
separately should be trivial.

Thanks

--
Qais Yousef

> > On Tue, Mar 17, 2020 at 12:24 PM Peter Zijlstra wrote:
> > >
> > > Anyway, for the API.
> > >
> > > Reviewed-by: Qais Yousef
> > > Tested-by: Qais Yousef
> >
> > Thanks guys!
>
> Thanks Peter, any other comments, or are you happy with merging this patch
> as-is?
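P.S. For anyone following along, the distribution idea under discussion is
roughly: instead of always returning the first set bit of the AND of two
cpumasks (which funnels simultaneous wakeups onto the same CPU), remember the
previous pick and continue the search from there, wrapping around. Below is a
minimal, self-contained userspace C sketch of that round-robin selection; the
names pick_distributed_cpu and prev_cpu, the plain unsigned bitmasks, and the
single static state variable are illustrative stand-ins, not the kernel's
struct cpumask API or its per-CPU state:

    #include <stdio.h>

    #define NR_CPUS 8

    /*
     * Sketch of round-robin selection from the intersection of two CPU
     * masks: repeated calls spread picks across the mask rather than
     * always returning the first set bit (the "thundering herd"
     * behaviour described above). Masks here are plain bitmasks and the
     * previous pick is a static variable, purely for illustration.
     */
    static int prev_cpu = -1;

    static int pick_distributed_cpu(unsigned int mask1, unsigned int mask2)
    {
            unsigned int both = mask1 & mask2;
            int cpu;

            if (!both)
                    return -1;      /* no CPU present in both masks */

            /* Search upward from the previous pick, then wrap to bit 0. */
            for (cpu = prev_cpu + 1; cpu < NR_CPUS; cpu++)
                    if (both & (1u << cpu))
                            goto found;
            for (cpu = 0; cpu <= prev_cpu; cpu++)
                    if (both & (1u << cpu))
                            goto found;
            return -1;
    found:
            prev_cpu = cpu;
            return cpu;
    }

    int main(void)
    {
            /*
             * Affinity mask {2,3,5} (0x2c) against an all-CPUs mask:
             * successive calls pick CPU 2 then CPU 3, instead of
             * returning CPU 2 both times.
             */
            printf("%d\n", pick_distributed_cpu(0x2c, 0xff));
            printf("%d\n", pick_distributed_cpu(0x2c, 0xff));
            return 0;
    }

The patch itself operates on struct cpumask via cpumask_next_and(), as the
quoted nit about cpumask_first_and() implies; the sketch only illustrates the
wrap-around selection, not the actual kernel implementation.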