Date: Thu, 24 May 2018 12:39:38 +0200
From: Juri Lelli
To: Patrick Bellasi
Cc: Waiman Long, Tejun Heo, Li Zefan, Johannes Weiner, Peter Zijlstra,
 Ingo Molnar, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, kernel-team@fb.com, pjt@google.com,
 luto@amacapital.net, Mike Galbraith, torvalds@linux-foundation.org,
 Roman Gushchin
Subject: Re: [PATCH v8 4/6] cpuset: Make generate_sched_domains() recognize isolated_cpus
Message-ID: <20180524103938.GB3948@localhost.localdomain>
References: <1526590545-3350-1-git-send-email-longman@redhat.com>
 <1526590545-3350-5-git-send-email-longman@redhat.com>
 <20180523173453.GY30654@e110439-lin>
 <20180524090430.GZ30654@e110439-lin>
In-Reply-To: <20180524090430.GZ30654@e110439-lin>
User-Agent: Mutt/1.9.2 (2017-12-15)

On 24/05/18 10:04, Patrick Bellasi wrote:

[...]

> From 84bb8137ce79f74849d97e30871cf67d06d8d682 Mon Sep 17 00:00:00 2001
> From: Patrick Bellasi
> Date: Wed, 23 May 2018 16:33:06 +0100
> Subject: [PATCH 1/1] cgroup/cpuset: disable sched domain rebuild when not
>  required
>
> The generate_sched_domains() already addresses the "special case for 99%
> of systems" which require a single full sched domain at the root,
> spanning all the CPUs. However, the current support is based on an
> expensive sequence of operations which destroy and recreate the exact
> same scheduling domain configuration.
>
> If we notice that:
>
> 1) CPUs in "cpuset.isolcpus" are excluded from load balancing by the
>    isolcpus= kernel boot option, and will never be load balanced
>    regardless of the value of "cpuset.sched_load_balance" in any
>    cpuset.
>
> 2) the root cpuset has load_balance enabled by default at boot and
>    it's the only parameter which userspace can change at run-time.
>
> we know that, by default, every system comes up with a complete and
> properly configured set of scheduling domains covering all the CPUs.
>
> Thus, on every system, unless the user explicitly disables load balance
> for the top_cpuset, the scheduling domains already configured at boot
> time by the scheduler/topology code and updated in consequence of
> hotplug events, are already properly configured for cpuset too.
>
> This configuration is the default one for 99% of the systems,
> and it's also the one used by most of the Android devices which never
> disable load balance from the top_cpuset.
>
> Thus, while load balance is enabled for the top_cpuset,
> destroying/rebuilding the scheduling domains at every cpuset.cpus
> reconfiguration is a useless operation which will always produce the
> same result.
>
> Let's anticipate the "special" optimization within:
>
>    rebuild_sched_domains_locked()
>
> thus completely skipping the expensive:
>
>    generate_sched_domains()
>    partition_sched_domains()
>
> for all the cases we know that the scheduling domains already defined
> will not be affected by whatsoever value of cpuset.cpus.

[...]

> +	/* Special case for the 99% of systems with one, full, sched domain */
> +	if (!top_cpuset.isolation_count &&
> +	    is_sched_load_balance(&top_cpuset))
> +		goto out;
> +

Mmm, looks like we still need to destroy and recreate if there is a
new_topology (see arch_update_cpu_topology() in partition_sched_domains()).

Maybe we could move the check you are proposing into update_cpumasks_hier()?