Date: Thu, 7 May 2020 16:54:22 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Jirka Hladky
Cc: Phil Auld, Peter Zijlstra, Ingo Molnar, Vincent Guittot, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Valentin Schneider, Hillf Danton, LKML, Douglas Shakshober, Waiman Long, Joe Mario, Bill Gray
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
Message-ID: <20200507155422.GD3758@techsingularity.net>
References: <20200312155640.GX3818@techsingularity.net> <20200312214736.GA3818@techsingularity.net> <20200320152251.GC3818@techsingularity.net> <20200320163843.GD3818@techsingularity.net>
On Thu, May 07, 2020 at 05:24:17PM +0200, Jirka Hladky wrote:
> Hi Mel,
>
> > > Yes, it's indeed OMP. With low threads count, I mean up to 2x number of
> > > NUMA nodes (8 threads on 4 NUMA node servers, 16 threads on 8 NUMA node
> > > servers).
> >
> > Ok, so we know it's within the imbalance threshold where a NUMA node can
> > be left idle.
>
> we have discussed today with my colleagues the performance drop for
> some workloads for low threads counts (roughly up to 2x number of NUMA
> nodes). We are worried that it can be a severe issue for some use
> cases, which require a full memory bandwidth even when only part of
> CPUs is used.
>
> We understand that scheduler cannot distinguish this type of workload
> from others automatically. However, there was an idea for a * new
> kernel tunable to control the imbalance threshold *. Based on the
> purpose of the server, users could set this tunable. See the tuned
> project, which allows creating performance profiles [1].
>

I'm not completely opposed to it, but given that the setting is global, I
imagine it could have unwanted consequences if two applications run at
different times have different requirements.

Given that it's OMP, I would have imagined that an application that
really cared about this would specify what it needs using OMP_PLACES.
Why would someone prefer kernel tuning or a tuned profile over
OMP_PLACES? After all, it requires specific knowledge of the application
even to know that a particular tuned profile is needed.

-- 
Mel Gorman
SUSE Labs
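[Editorial note for readers unfamiliar with the suggestion above: OMP_PLACES and OMP_PROC_BIND are standard OpenMP environment variables for thread placement. A hypothetical invocation that spreads a low thread count across NUMA nodes without any kernel-side tuning might look like the following; the application name is illustrative, not from the thread.]

```shell
# One place per socket/NUMA node, threads spread evenly across places.
# Example: 8 threads on a 4-node server, one pair of threads per node,
# so each node's memory bandwidth stays in use.
export OMP_PLACES=sockets
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=8
# ./numa_sensitive_app   # hypothetical OpenMP binary, run under these bindings
```

This keeps the placement decision per-application rather than global, which is the trade-off being weighed against a system-wide tunable.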