Date: Fri, 21 Feb 2020 14:20:10 +0000
From: Mel Gorman
To: 王贇
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Luis Chamberlain,
    Kees Cook, Iurii Zaikin, Michal Koutný, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    "Paul E. McKenney", Randy Dunlap, Jonathan Corbet
Subject: Re: [PATCH RESEND v8 1/2] sched/numa: introduce per-cgroup NUMA locality info
Message-ID: <20200221142010.GT3420@suse.de>
In-Reply-To: <114519ab-4e9e-996a-67b8-4f5fcecba72a@linux.alibaba.com>

On Tue, Feb 18, 2020 at 09:39:35AM +0800, 王贇 wrote:
> On 2020/2/17 10:16 PM, Mel Gorman wrote:
> > On Mon, Feb 17, 2020 at 09:23:52PM +0800, 王贇 wrote:
> [snip]
> >>
> >> IMHO the scan period changing should not be a problem now, since the
> >> maximum period is defined by the user, so monitoring the accumulated
> >> page access counters at the maximum period is always meaningful, correct?
> >>
> >
> > It has meaning, but the scan rate drives the fault rate, which is the
> > basis for the stats you accumulate. If the scan rate is high while
> > accesses are local, the stats can be skewed, making the task appear
> > much more local than it really is at a later point in time. The scan
> > rate affects the accuracy of the information. The counters have
> > meaning, but they need careful interpretation.
>
> Yeah, condensing so much information from NUMA Balancing into a few
> statistics is a challenge in itself; the locality is still not that easy
> for a NUMA newbie to understand :-P
>

Indeed, and if they do not take historical skew into account, they still
might not understand.

> >
> >> FYI, by monitoring locality, we found that the kvm vcpu thread is not
> >> covered by NUMA Balancing: no matter how many maximum periods passed,
> >> the counters were not increasing, or only very slowly, although inside
> >> the guest we were copying memory.
> >>
> >> Later we found that such a task rarely exits to user space to trigger
> >> the task work callbacks that the NUMA Balancing scan depends on, which
> >> helped us realize the importance of enabling NUMA Balancing inside the
> >> guest, with the correct NUMA topology. A big performance risk, I'll
> >> say :-P
> >>
> >
> > Which is a very interesting corner case in itself but also one that
> > could potentially have been inferred from monitoring /proc/vmstat
> > numa_pte_updates, or on a per-task basis by monitoring /proc/PID/sched
> > and watching numa_scan_seq and total_numa_faults. Accumulating the
> > information on a per-cgroup basis would require a bit more legwork.
>
> That's not workable for daily monitoring...
>

Indeed, although /proc/vmstat at least is cheap to monitor, and it could
be used to track whether the number of NUMA faults is abnormally low or
whether the ratio of remote to local hints is problematic (see the sketch
below).

> Besides, compared with locality, this requires a much deeper
> understanding of the implementation; it could be tough even for NUMA
> developers to assemble all these statistics together.
>

My point is that even with the patch, the definition of locality is
subtle. At a single point in time, the locality might appear to be low,
but that can be due to an event that happened far in the past.
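To make the /proc/vmstat suggestion concrete, here is a minimal, untested
sketch of that kind of monitor. It is not part of the patch under
discussion; it only assumes the numa_* counters that /proc/vmstat exports
when CONFIG_NUMA_BALANCING is enabled, and the 10-second interval is an
arbitrary choice for illustration. Note that sampling per-interval deltas
rather than the cumulative counters also sidesteps the historical-skew
problem above.

    #!/usr/bin/env python3
    # Sketch: sample NUMA hinting-fault counters from /proc/vmstat and
    # report per-interval locality, so old history does not skew the
    # current picture. Requires CONFIG_NUMA_BALANCING; the numa_* fields
    # are absent otherwise.
    import time

    FIELDS = ("numa_pte_updates", "numa_hint_faults",
              "numa_hint_faults_local")

    def read_vmstat():
        stats = {}
        with open("/proc/vmstat") as f:
            for line in f:
                key, _, value = line.partition(" ")
                if key in FIELDS:
                    stats[key] = int(value)
        return stats

    prev = read_vmstat()
    while True:
        time.sleep(10)  # arbitrary sample interval
        cur = read_vmstat()
        delta = {k: cur[k] - prev[k] for k in FIELDS}
        faults = delta["numa_hint_faults"]
        local = delta["numa_hint_faults_local"]
        # An abnormally low fault count (e.g. a vcpu thread that never
        # runs the task-work scan) or a low local ratio both stand out.
        locality = 100.0 * local / faults if faults else float("nan")
        print("pte_updates=%d hint_faults=%d locality=%.1f%%"
              % (delta["numa_pte_updates"], faults, locality))
        prev = cur

The same delta-based approach would work per task against
total_numa_faults in /proc/PID/sched, although as noted, aggregating that
per cgroup needs more legwork.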
> >
> >> Maybe not a good example, but we just want to highlight that NUMA
> >> Balancing could have issues in some cases, and we want them to be
> >> exposed somehow, maybe through the locality metric.
> >>
> >
> > Again, I'm somewhat neutral on the patch, simply because I would not
> > use the information for debugging problems with NUMA balancing. I
> > would try using tracepoints, and if the tracepoints were not good
> > enough, I'd add or fix them -- similar to what I had to do with
> > sched_stick_numa recently. The caveat is that I mostly look at this
> > sort of problem as a developer. Sysadmins have very different
> > requirements, especially simplicity, even if the simplicity in this
> > case is an illusion.
>
> Fair enough, but I guess PeterZ still wants your Ack, so neutral means
> refusal in this case :-(
>

I think the patch is functionally harmless and can be disabled, but I
would also be wary of dealing with a bug report based on the numbers
provided by the locality metric. The bulk of the work related to such a
bug would likely be spent on trying to explain the metric, and I've dealt
with quite a few bugs that were essentially "We don't like this number
and think something is wrong because of it -- fix it". Even then, I would
want the workload isolated and vmstat recorded over time to determine
whether or not it's a persistent problem. That's the reason why I'm
reluctant to ack it.

I fully acknowledge that this may have value for sysadmins and may be a
good enough reason to merge it for environments that typically build and
configure their own kernels. I doubt that general distributions would
enable it, but that's a guess.

> BTW, what do you think about the documentation in the second patch?
>

I think the documentation is great; it's clear and explains itself well.

> Do you think it's necessary to have a doc to explain NUMA related
> statistics?
>

It would be nice, but AFAIK the stats in vmstat are not documented. They
are there because recording them over time can be very useful when
dealing with user bug reports.

-- 
Mel Gorman
SUSE Labs