Date: Wed, 23 May 2018 18:34:53 +0100
From: Patrick Bellasi
To: Waiman Long
Cc: Tejun Heo, Li Zefan, Johannes Weiner, Peter Zijlstra, Ingo Molnar,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-doc@vger.kernel.org, kernel-team@fb.com, pjt@google.com,
    luto@amacapital.net, Mike Galbraith, torvalds@linux-foundation.org,
    Roman Gushchin, Juri Lelli
Subject: Re: [PATCH v8 4/6] cpuset: Make generate_sched_domains() recognize isolated_cpus
Message-ID: <20180523173453.GY30654@e110439-lin>
References: <1526590545-3350-1-git-send-email-longman@redhat.com>
            <1526590545-3350-5-git-send-email-longman@redhat.com>
In-Reply-To: <1526590545-3350-5-git-send-email-longman@redhat.com>

Hi Waiman,

On 17-May 16:55, Waiman Long wrote:

[...]

> @@ -672,13 +672,14 @@ static int generate_sched_domains(cpumask_var_t **domains,
>  	int ndoms = 0;		/* number of sched domains in result */
>  	int nslot;		/* next empty doms[] struct cpumask slot */
>  	struct cgroup_subsys_state *pos_css;
> +	bool root_load_balance = is_sched_load_balance(&top_cpuset);
>
>  	doms = NULL;
>  	dattr = NULL;
>  	csa = NULL;
>
>  	/* Special case for the 99% of systems with one, full, sched domain */
> -	if (is_sched_load_balance(&top_cpuset)) {
> +	if (root_load_balance && !top_cpuset.isolation_count) {

Perhaps I'm missing something, but it seems to me that when the two
conditions above are true we are going to destroy and rebuild the
exact same scheduling domains.

IOW, on the 99% of systems where:

   is_sched_load_balance(&top_cpuset)
   top_cpuset.isolation_count == 0

since boot time and forever, every time we update a value for
cpuset.cpus we keep rebuilding the same SDs.

This is not strictly related to this patch; the same already happens
in mainline based just on the first condition. But since you are
extending that optimization, perhaps you can tell me where I'm
possibly wrong or which cases I'm not considering.

I'm interested mainly because on Android systems those conditions are
always true, and we see SD rebuilds every time we write something to
cpuset.cpus, which ultimately accounts for almost all of the 6-7[ms]
required for the write to return, depending on the CPU frequency.

Cheers Patrick

--
#include <best/regards.h>

Patrick Bellasi
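
[Illustrative note: in the single-full-domain case the resulting sched
domain spans all non-isolated online CPUs regardless of the cpuset.cpus
value, so in principle a write could be short-circuited whenever the
previous rebuild already ran under the same conditions. Below is a
minimal userspace sketch of that idea; all names here
(had_single_full_domain, on_cpuset_cpus_write, the model rebuild) are
hypothetical illustrations, not kernel code.]

#include <stdbool.h>
#include <stdio.h>

/* Tracks whether the previous rebuild already produced the single,
 * full sched domain (root load balancing on, no isolated CPUs). */
static bool had_single_full_domain;

/* Stand-in for the expensive destroy-and-rebuild path. */
static void rebuild_sched_domains_model(void)
{
	printf("rebuilding sched domains\n");
}

/*
 * Model of a write to cpuset.cpus.  When the single-full-domain case
 * applied both before and after the write, the rebuild would produce
 * the exact same domain, so it can be skipped entirely.
 */
static void on_cpuset_cpus_write(bool root_load_balance, int isolation_count)
{
	bool single_full_domain = root_load_balance && isolation_count == 0;

	if (single_full_domain && had_single_full_domain) {
		printf("skipping redundant rebuild\n");
		return;
	}

	had_single_full_domain = single_full_domain;
	rebuild_sched_domains_model();
}

int main(void)
{
	on_cpuset_cpus_write(true, 0);	/* first write: builds the domain   */
	on_cpuset_cpus_write(true, 0);	/* same conditions: rebuild skipped */
	on_cpuset_cpus_write(true, 2);	/* isolated CPUs appear: rebuild    */
	return 0;
}

[A real short-circuit would also have to account for CPU hotplug and
for changes to the sched domain attributes, which is presumably part of
why the kernel path simply rebuilds unconditionally.]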