Date: Fri, 22 Mar 2019 14:34:48 +0100
From: Peter Zijlstra
To: Julien Desfossez
Cc: mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
	tim.c.chen@linux.intel.com, torvalds@linux-foundation.org,
	linux-kernel@vger.kernel.org, subhra.mazumdar@oracle.com,
	fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
	Vineeth Pillai, Nishanth Aravamudan
Subject: Re: [RFC][PATCH 03/16] sched: Wrap rq::lock access
Message-ID: <20190322133448.GT6058@hirez.programming.kicks-ass.net>
References: <15f3f7e6-5dce-6bbf-30af-7cffbd7bb0c3@oracle.com>
	<1553203217-11444-1-git-send-email-jdesfossez@digitalocean.com>
In-Reply-To: <1553203217-11444-1-git-send-email-jdesfossez@digitalocean.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, Mar 21, 2019 at 05:20:17PM -0400, Julien Desfossez wrote:
> On further investigation, we could see that the contention is mostly in
> the way rq locks are taken. With this patchset, we lock the whole core
> if cpu.tag is set for at least one cgroup. Due to this, __schedule() is
> more or less serialized for the core, and that accounts for the
> performance loss we are seeing. We also saw that newidle_balance()
> takes considerable time in load_balance() due to the rq spinlock
> contention. Do you think it would help if the core-wide locking was
> only performed when absolutely needed?

Something like that could be done, but then you end up with two locks,
something I was hoping to avoid.

Basically you keep rq->lock as it exists today, but add something like
rq->core->core_lock; you then have to take that second lock (nested
under rq->lock) for every scheduling action involving a tagged task.

It makes things complicated though, because now my head hurts thinking
about pick_next_task().

(This can obviously do away with the whole rq->lock wrappery.)
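To make the nesting concrete, here is a minimal userspace sketch of the
scheme. This is illustration only, not kernel code: toy_rq, toy_core and
toy_schedule are made-up names, and pthread mutexes stand in for the
kernel's raw spinlocks.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the two-lock scheme sketched above; not kernel code. */

/* Shared per-core state; the analogue of rq->core. */
struct toy_core {
	pthread_mutex_t core_lock;	/* inner lock, shared by SMT siblings */
	unsigned long core_cookie;	/* set while a tagged task is selected */
};

/* Per-CPU runqueue; the analogue of struct rq. */
struct toy_rq {
	pthread_mutex_t lock;		/* outer lock, one per CPU */
	struct toy_core *core;		/* points at the shared core state */
	bool task_tagged;		/* does the picked task carry a cookie? */
};

/*
 * Lock order is always rq->lock first, core->core_lock second; both
 * siblings follow that order, so the nesting cannot deadlock. Untagged
 * work never touches the shared lock, which is the point of keeping
 * two locks.
 */
static void toy_schedule(struct toy_rq *rq)
{
	pthread_mutex_lock(&rq->lock);

	if (rq->task_tagged) {
		pthread_mutex_lock(&rq->core->core_lock);
		rq->core->core_cookie = 1;	/* core-wide pick happens here */
		pthread_mutex_unlock(&rq->core->core_lock);
	}

	pthread_mutex_unlock(&rq->lock);
}

int main(void)
{
	struct toy_core core = { .core_lock = PTHREAD_MUTEX_INITIALIZER };
	struct toy_rq rq0 = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.core = &core,
		.task_tagged = false,	/* fast path: per-CPU lock only */
	};
	struct toy_rq rq1 = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.core = &core,
		.task_tagged = true,	/* slow path: also takes core_lock */
	};

	toy_schedule(&rq0);
	toy_schedule(&rq1);
	printf("core_cookie = %lu\n", core.core_cookie);
	return 0;
}

(Build with "cc -pthread"; only the tagged runqueue ever serializes on
the shared lock.)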
Also, completely untested..

---
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -146,6 +146,8 @@ void sched_core_enqueue(struct rq *rq, s
 	if (!p->core_cookie)
 		return;
 
+	raw_spin_lock(&rq->core->core_lock);
+
 	node = &rq->core_tree.rb_node;
 	parent = *node;
 
@@ -161,6 +163,8 @@ void sched_core_enqueue(struct rq *rq, s
 
 	rb_link_node(&p->core_node, parent, node);
 	rb_insert_color(&p->core_node, &rq->core_tree);
+
+	raw_spin_unlock(&rq->core->core_lock);
 }
 
 void sched_core_dequeue(struct rq *rq, struct task_struct *p)
@@ -170,7 +174,9 @@ void sched_core_dequeue(struct rq *rq, s
 	if (!p->core_cookie)
 		return;
 
+	raw_spin_lock(&rq->core->core_lock);
 	rb_erase(&p->core_node, &rq->core_tree);
+	raw_spin_unlock(&rq->core->core_lock);
 }
 
 /*
@@ -181,6 +187,8 @@ struct task_struct *sched_core_find(stru
 	struct rb_node *node = rq->core_tree.rb_node;
 	struct task_struct *node_task, *match;
 
+	lockdep_assert_held(&rq->core->core_lock);
+
 	/*
 	 * The idle task always matches any cookie!
 	 */
@@ -206,6 +214,8 @@ struct task_struct *sched_core_next(stru
 {
 	struct rb_node *node = &p->core_node;
 
+	lockdep_assert_held(&rq->core->core_lock);
+
 	node = rb_next(node);
 	if (!node)
 		return NULL;
@@ -3685,6 +3695,8 @@ pick_next_task(struct rq *rq, struct tas
 	 * If there were no {en,de}queues since we picked (IOW, the task
 	 * pointers are all still valid), and we haven't scheduled the last
 	 * pick yet, do so now.
+	 *
+	 * XXX probably OK without ->core_lock
 	 */
 	if (rq->core->core_pick_seq == rq->core->core_task_seq &&
 	    rq->core->core_pick_seq != rq->core_sched_seq) {
@@ -3710,6 +3722,20 @@ pick_next_task(struct rq *rq, struct tas
 	if (!rq->nr_running)
 		newidle_balance(rq, rf);
 
+	if (!rq->core->core_cookie) {
+		for_each_class(class) {
+			next = pick_task(rq, class, NULL);
+			if (next)
+				break;
+		}
+
+		if (!next->core_cookie) {
+			set_next_task(rq, next);
+			return next;
+		}
+	}
+
+	raw_spin_lock(&rq->core->core_lock);
 	cpu = cpu_of(rq);
 	smt_mask = cpu_smt_mask(cpu);
 
@@ -3849,6 +3875,7 @@ next_class:;
 	trace_printk("picked: %s/%d %lx\n", next->comm, next->pid, next->core_cookie);
 
 done:
+	raw_spin_unlock(&rq->core->core_lock);
 	set_next_task(rq, next);
 	return next;
 }
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -966,6 +966,7 @@ struct rq {
 	struct rb_root		core_tree;
 
 	/* shared state */
+	raw_spinlock_t		core_lock;
 	unsigned int		core_task_seq;
 	unsigned int		core_pick_seq;
 	unsigned long		core_cookie;
@@ -1007,9 +1008,6 @@ static inline bool sched_core_enabled(st
 
 static inline raw_spinlock_t *rq_lockp(struct rq *rq)
 {
-	if (sched_core_enabled(rq))
-		return &rq->core->__lock;
-
 	return &rq->__lock;
 }