Date: Fri, 6 Feb 2015 16:09:34 -0500
From: Steven Rostedt <rostedt@goodmis.org>
To: Xunlei Pang
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Juri Lelli, Xunlei Pang
Subject: Re: [PATCH v2 1/2] sched/rt: Check to push the task when changing its affinity
Message-ID: <20150206160934.0663505e@gandalf.local.home>
In-Reply-To: <1423151974-22557-1-git-send-email-xlpang@126.com>

On Thu, 5 Feb 2015 23:59:33 +0800
Xunlei Pang wrote:

> 	return p;
> @@ -1886,28 +1892,73 @@ static void set_cpus_allowed_rt(struct task_struct *p,
>  				const struct cpumask *new_mask)
>  {
>  	struct rq
> *rq;
> -	int weight;
> +	int old_weight, new_weight;
> +	int preempt_push = 0, direct_push = 0;
>
>  	BUG_ON(!rt_task(p));
>
>  	if (!task_on_rq_queued(p))
>  		return;
>
> -	weight = cpumask_weight(new_mask);
> +	old_weight = p->nr_cpus_allowed;
> +	new_weight = cpumask_weight(new_mask);
> +
> +	rq = task_rq(p);
> +
> +	if (new_weight > 1 &&
> +	    rt_task(rq->curr) &&
> +	    !test_tsk_need_resched(rq->curr)) {
> +		/*
> +		 * Set new mask information which is already valid
> +		 * to prepare pushing.
> +		 *
> +		 * We own p->pi_lock and rq->lock. rq->lock might
> +		 * get released when doing direct pushing, however
> +		 * p->pi_lock is always held, so it's safe to assign
> +		 * the new_mask and new_weight to p.
> +		 */
> +		cpumask_copy(&p->cpus_allowed, new_mask);
> +		p->nr_cpus_allowed = new_weight;
> +
> +		if (task_running(rq, p) &&
> +		    cpumask_test_cpu(task_cpu(p), new_mask) &&

Why the check for task_cpu being in new_mask?

> +		    cpupri_find(&rq->rd->cpupri, p, NULL)) {
> +			/*
> +			 * At this point, current task gets migratable most
> +			 * likely due to the change of its affinity, let's
> +			 * figure out if we can migrate it.
> +			 *
> +			 * Is there any task with the same priority as that
> +			 * of current task? If found one, we should resched.
> +			 * NOTE: The target may be unpushable.
> +			 */
> +			if (p->prio == rq->rt.highest_prio.next) {
> +				/* One target just in pushable_tasks list. */
> +				requeue_task_rt(rq, p, 0);
> +				preempt_push = 1;
> +			} else if (rq->rt.rt_nr_total > 1) {
> +				struct task_struct *next;
> +
> +				requeue_task_rt(rq, p, 0);
> +				next = peek_next_task_rt(rq);
> +				if (next != p && next->prio == p->prio)
> +					preempt_push = 1;
> +			}
> +		} else if (!task_running(rq, p))
> +			direct_push = 1;

We could avoid the second check (!task_running()) by splitting up the
first if:

	if (task_running(rq, p)) {
		if (cpumask_test_cpu() && cpupri_find()) {
		}
	} else {
		direct_push = 1;
	}

Also, is the copy of cpus_allowed only done so that cpupri_find() is
called?
If so, maybe move it in there too:

	if (task_running(rq, p)) {
		if (!cpumask_test_cpu())
			goto update;

		cpumask_copy(&p->cpus_allowed, new_mask);
		p->nr_cpus_allowed = new_weight;

		if (!cpupri_find())
			goto update;
		[...]

This way we avoid the double copy of the cpumask unless we truly need
to do it.

> +	}
>
>  	/*
>  	 * Only update if the process changes its state from whether it
>  	 * can migrate or not.
>  	 */
> -	if ((p->nr_cpus_allowed > 1) == (weight > 1))
> -		return;
> -
> -	rq = task_rq(p);
> +	if ((old_weight > 1) == (new_weight > 1))
> +		goto out;
>
>  	/*
>  	 * The process used to be able to migrate OR it can now migrate
>  	 */
> -	if (weight <= 1) {
> +	if (new_weight <= 1) {
>  		if (!task_current(rq, p))
>  			dequeue_pushable_task(rq, p);
>  		BUG_ON(!rq->rt.rt_nr_migratory);
> @@ -1919,6 +1970,15 @@ static void set_cpus_allowed_rt(struct task_struct *p,
>  	}
>
>  	update_rt_migration(&rq->rt);
> +
> +out:
> +	BUG_ON(direct_push == 1 && preempt_push == 1);

Do we really need this BUG_ON()?

> +
> +	if (direct_push)
> +		push_rt_tasks(rq);
> +
> +	if (preempt_push)

We could make that an "else if" if they really are mutually exclusive.

> +		resched_curr(rq);
> }
>
> /* Assumes rq->lock is held */

-- Steve