Date: Wed, 19 Aug 2015 08:42:14 +0900
From: Byungchul Park <byungchul.park@lge.com>
To: "T. Zhou"
Cc: mingo@kernel.org, peterz@infradead.org, linux-kernel@vger.kernel.org,
	yuyang.du@intel.com
Subject: Re: [PATCH v2 1/3] sched: sync a se with its cfs_rq when attaching
 and dettaching
Message-ID: <20150818234214.GC24261@byungchulpark-X58A-UD3R>
References: <1439797552-18202-1-git-send-email-byungchul.park@lge.com>
 <1439797552-18202-2-git-send-email-byungchul.park@lge.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 19, 2015 at 12:32:43AM +0800, T. Zhou wrote:
> Hi,
>
> On Mon, Aug 17, 2015 at 04:45:50PM +0900, byungchul.park@lge.com wrote:
> > From: Byungchul Park
> >
> > current code is wrong with cfs_rq's avg loads when changing a task's
> > cfs_rq to another. i tested with "echo pid > cgroup" and found that
> > e.g. cfs_rq->avg.load_avg became larger and larger whenever i changed
> > a cgroup to another again and again. we have to sync se's avg loads
> > with both *prev* cfs_rq and next cfs_rq when changing its group.
>
> my simple thought about the above; it may be nothing or wrong, so just
> ignore it in that case.
>
> if a load balance migration happened just before the cgroup change, prev
> cfs_rq and next cfs_rq will be on different cpus. migrate_task_rq_fair()

hello,

the two operations, migration and cgroup change, are protected by lock.
therefore it would never happen. :)

thanks,
byungchul

> and update_cfs_rq_load_avg() will sync and remove se's load avg from
> prev cfs_rq, whether or not it is queued. well done. dequeue_task()
> decays se and prev cfs_rq before calling task_move_group_fair(). after
> setting cfs_rq in task_move_group_fair(), if queued, se's load avg is
> not added to next cfs_rq (try setting last_update_time to 0, like
> migration does, to add it); if !queued, we also need to add se's load
> avg to next cfs_rq.
>
> if no load balance migration happened when changing the cgroup, prev
> cfs_rq and next cfs_rq may be on the same cpu (not sure). in this case,
> we need to remove se's load avg ourselves, and also need to add se's
> load avg to next cfs_rq.
>
> thanks,
> --
> Tao
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/