Date: Fri, 3 Jan 2020 16:14:49 +0100
From: Michal Koutný
To: 王贇
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Luis Chamberlain, Kees Cook, Iurii Zaikin,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, "Paul E. McKenney", Randy Dunlap,
	Jonathan Corbet
Subject: Re: [PATCH v6 1/2] sched/numa: introduce per-cgroup NUMA locality info
Message-ID: <20200103151449.GA25747@blackbody.suse.cz>
In-Reply-To: <275a98ed-35b8-b65f-3600-64ab722dd836@linux.alibaba.com>

Hi.

On Fri, Dec 13, 2019 at 09:47:36AM +0800, 王贇 wrote:
> By monitoring the increments, we will be able to locate the per-cgroup
> workload which NUMA Balancing can't help with (usually caused by wrong
> CPU and memory node bindings), then we get a chance to fix that in time.

I just wonder whether the data based on increments match those you
obtained previously?

> +static inline void
> +update_task_locality(struct task_struct *p, int pnid, int cnid, int pages)
> +{
> +	if (!static_branch_unlikely(&sched_numa_locality))
> +		return;
> +
> +	/*
> +	 * pnid != cnid --> remote idx 0
> +	 * pnid == cnid --> local idx 1
> +	 */
> +	p->numa_page_access[!!(pnid == cnid)] += pages;

If the per-task information isn't used anywhere, why not accumulate
directly into the task's cfs_rq->{local,remote}_page_access?
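Just to illustrate the idea, an untested sketch (it assumes the cfs_rq
counters this patch adds and the existing task_cfs_rq() helper; locking
and cgroup-migration details are ignored):

	static inline void
	update_task_locality(struct task_struct *p, int pnid, int cnid, int pages)
	{
		struct cfs_rq *cfs_rq;

		if (!static_branch_unlikely(&sched_numa_locality))
			return;

		/* no per-task buffer, charge the task's current cfs_rq */
		cfs_rq = task_cfs_rq(p);

		/* pnid == cnid means the fault was satisfied on the local node */
		if (pnid == cnid)
			cfs_rq->local_page_access += pages;
		else
			cfs_rq->remote_page_access += pages;
	}

That would save the per-task array and the later folding step, at the
cost of touching the cfs_rq directly from the fault path.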
> @@ -4298,6 +4359,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
>  	 */
>  	update_load_avg(cfs_rq, curr, UPDATE_TG);
>  	update_cfs_group(curr);
> +	update_group_locality(cfs_rq);

With the per-NUMA-node time tracked separately, isn't it unnecessary to
do the group updates inside entity_tick?

Regards,
Michal