Date: Fri, 2 Feb 2018 20:58:39 +0100
From: Peter Zijlstra
To: Steven Sistare
Cc: subhra mazumdar, linux-kernel@vger.kernel.org, mingo@redhat.com, dhaval.giani@oracle.com
Subject: Re: [RESEND RFC PATCH V3] sched: Improve scalability of select_idle_sibling using SMT balance
Message-ID: <20180202195839.GQ2269@hirez.programming.kicks-ass.net>
References: <20180129233102.19018-1-subhra.mazumdar@oracle.com>
 <20180201123335.GV2249@hirez.programming.kicks-ass.net>
 <911d42cf-54c7-4776-c13e-7c11f8ebfd31@oracle.com>
 <20180202171708.GN2269@hirez.programming.kicks-ass.net>
 <0b3ee72d-0316-e11d-dee4-0d35375eed1d@oracle.com>
In-Reply-To: <0b3ee72d-0316-e11d-dee4-0d35375eed1d@oracle.com>

On Fri, Feb 02, 2018 at 12:36:47PM -0500, Steven Sistare wrote:
> On 2/2/2018 12:17 PM, Peter Zijlstra wrote:
> > On Fri, Feb 02, 2018 at 11:53:40AM -0500, Steven Sistare wrote:
> >>>> +static int select_idle_smt(struct task_struct *p, struct sched_group *sg)
> >>>>  {
> >>>> +	int i, rand_index, rand_cpu;
> >>>> +	int this_cpu = smp_processor_id();
> >>>>
> >>>> +	rand_index = CPU_PSEUDO_RANDOM(this_cpu) % sg->group_weight;
> >>>> +	rand_cpu = sg->cp_array[rand_index];
> >>>
> >>> Right, so yuck.. I know why you need that, but that extra array and
> >>> dereference is the reason I never went there.
> >>>
> >>> How much difference does it really make vs the 'normal' wrapping search
> >>> from last CPU ?
> >>>
> >>> This really should be a separate patch with separate performance numbers
> >>> on.
> >>
> >> For the benefit of other readers, if we always search and choose starting from
> >> the first CPU in a core, then later searches will often need to traverse the first
> >> N busy CPU's to find the first idle CPU. Choosing a random starting point avoids
> >> such bias. It is probably a win for processors with 4 to 8 CPUs per core, and
> >> a slight but hopefully negligible loss for 2 CPUs per core, and I agree we need
> >> to see performance data for this as a separate patch to decide. We have SPARC
> >> systems with 8 CPUs per core.
> >
> > Which is why the current code already doesn't start from the first cpu
> > in the mask. We start at whatever CPU the task ran last on, which is
> > effectively 'random' if the system is busy.
> >
> > So how is a per-cpu rotor better than that?
>
> The current code is:
>
> 	for_each_cpu(cpu, cpu_smt_mask(target)) {
>
> For an 8-cpu/core processor, 8 values of target map to the same cpu_smt_mask.
> 8 different tasks will traverse the mask in the same order.

Ooh, the SMT loop.. yes that can be improved. But look at the other ones,
they do:

	for_each_cpu_wrap(cpu, sched_domain_span(), target)

so we look for an idle cpu in the LLC domain, and start iteration at
@target, which will (on average) be different for different CPUs, and
thus hopefully find different idle CPUs.

You could simply change the SMT loop to something like:

	for_each_cpu_wrap(cpu, cpu_smt_mask(target), target)

and see what that does.
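
For readers not looking at kernel/sched/fair.c, a minimal sketch of that
suggestion might look like the following; the surrounding function body is
paraphrased from the mainline select_idle_smt() of that era (which takes a
sched_domain and a target CPU, not the patch's struct sched_group), so it
is an illustration of the idea rather than the patch itself:

	/*
	 * Sketch: keep the SMT sibling scan, but make it a wrapping scan
	 * that starts at @target instead of always starting at the first
	 * CPU of the core's SMT mask.
	 */
	static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
	{
		int cpu;

		if (!static_branch_likely(&sched_smt_present))
			return -1;

		/* was: for_each_cpu(cpu, cpu_smt_mask(target)) */
		for_each_cpu_wrap(cpu, cpu_smt_mask(target), target) {
			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
				continue;
			if (idle_cpu(cpu))
				return cpu;
		}

		return -1;
	}

Starting the scan at @target means tasks whose previous CPUs are different
siblings of the same core probe the siblings in different orders, which
gives much of the de-biasing the per-cpu rotor aims for without the extra
cp_array dereference.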