Message-ID: <1498842140.15161.66.camel@gmail.com>
Subject: Re: wake_wide mechanism clarification
From: Mike Galbraith
To: Josef Bacik, Joel Fernandes
Cc: Peter Zijlstra, LKML, Juri Lelli, Dietmar Eggemann, Patrick Bellasi,
    Brendan Jackman, Chris Redpath
Date: Fri, 30 Jun 2017 19:02:20 +0200
In-Reply-To: <20170630142815.GA9743@destiny>
References: <20170630004912.GA2457@destiny> <20170630142815.GA9743@destiny>

On Fri, 2017-06-30 at 10:28 -0400, Josef Bacik wrote:
> On Thu, Jun 29, 2017 at 08:04:59PM -0700, Joel Fernandes wrote:
> >
> > That makes sense that we multiply the slave's flips by a factor because
> > it's low, but I still didn't get why the factor is chosen to be
> > llc_size instead of something else for the multiplication with slave
> > (slave * factor).
>
> Yeah I don't know why llc_size was chosen...

static void update_top_cache_domain(int cpu)
{
        struct sched_domain_shared *sds = NULL;
        struct sched_domain *sd;
        int id = cpu;
        int size = 1;

        sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
        if (sd) {
                id = cpumask_first(sched_domain_span(sd));
                size = cpumask_weight(sched_domain_span(sd));
                sds = sd->shared;
        }

        rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
        per_cpu(sd_llc_size, cpu) = size;

The goal of wake_wide() was to approximate when pulling would be a futile
consolidation effort and counterproductive to scaling.  'course with
ever-increasing socket sizes, any 1:N waker is ever more likely to run out
of CPU for its one and only self (slamming into the scaling wall) before it
needs to turn its minions loose to conquer the world.

Something else to consider: a network interrupt waking multiple workers at
high frequency.  If the waking CPU is idle, do you really want to place a
worker directly in front of a tattoo artist, or is it better off nearly
anywhere but there?

If the box is virtual, with no topology exposed (or real but ancient) to
let select_idle_sibling() come to the rescue, two workers can even get
tattooed simultaneously (see sync wakeup).

	-Mike
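
For context on the flip accounting being discussed, here is a simplified
sketch of record_wakee() and wake_wide(), roughly as they appear in
kernel/sched/fair.c of this era (the exact code may differ slightly between
trees).  It shows where sd_llc_size enters as the factor Joel asked about:
the waker's and wakee's flip counts are both compared against it, and only
a sufficiently wide 1:N waker defeats the affine-wakeup path.

/*
 * Sketch of the flip accounting and the wake_wide() test, roughly as in
 * kernel/sched/fair.c around v4.12; see the actual tree for the
 * authoritative version.
 */
static void record_wakee(struct task_struct *p)
{
        /*
         * Only decay a single time; tasks that have less than 1 wakeup per
         * jiffy will not have built up many flips.
         */
        if (time_after(jiffies, current->wakee_flip_decay_ts + HZ)) {
                current->wakee_flips >>= 1;
                current->wakee_flip_decay_ts = jiffies;
        }

        /* A "flip" is the waker switching to a different wakee. */
        if (current->last_wakee != p) {
                current->last_wakee = p;
                current->wakee_flips++;
        }
}

static int wake_wide(struct task_struct *p)
{
        unsigned int master = current->wakee_flips;
        unsigned int slave = p->wakee_flips;
        int factor = this_cpu_read(sd_llc_size);        /* LLC span size */

        if (master < slave)
                swap(master, slave);
        /*
         * Go wide (return 1) only when the lower flip count reaches
         * llc_size and the higher one exceeds it by at least a factor of
         * llc_size; otherwise keep the wakeup affine.
         */
        if (slave < factor || master < slave * factor)
                return 0;
        return 1;
}

With this test, a waker that fans out across at least llc_size distinct
wakees, far faster than its wakees flip themselves, is treated as "1:N
enough" that pulling everything onto its LLC would be the futile
consolidation Mike describes.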