Date: Thu, 6 Sep 2018 14:48:46 +0100
From: Patrick Bellasi
To: Juri Lelli
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar,
 Peter Zijlstra, Tejun Heo, Rafael J. Wysocki, Viresh Kumar,
 Vincent Guittot, Paul Turner, Quentin Perret, Dietmar Eggemann,
 Morten Rasmussen, Todd Kjos, Joel Fernandes, Steve Muckle,
 Suren Baghdasaryan
Subject: Re: [PATCH v4 02/16] sched/core: uclamp: map TASK's clamp values into CPU's clamp groups
Message-ID: <20180906134846.GB25636@e110439-lin>
References: <20180828135324.21976-1-patrick.bellasi@arm.com>
 <20180828135324.21976-3-patrick.bellasi@arm.com>
 <20180905104545.GB20267@localhost.localdomain>
In-Reply-To: <20180905104545.GB20267@localhost.localdomain>

Hi Juri!

On 05-Sep 12:45, Juri Lelli wrote:
> Hi,
>
> On 28/08/18 14:53, Patrick Bellasi wrote:
>
> [...]
>
> >  static inline int __setscheduler_uclamp(struct task_struct *p,
> >  					const struct sched_attr *attr)
> >  {
> > -	if (attr->sched_util_min > attr->sched_util_max)
> > -		return -EINVAL;
> > -	if (attr->sched_util_max > SCHED_CAPACITY_SCALE)
> > -		return -EINVAL;
> > +	int group_id[UCLAMP_CNT] = { UCLAMP_NOT_VALID };
> > +	int lower_bound, upper_bound;
> > +	struct uclamp_se *uc_se;
> > +	int result = 0;
> >
> > -	p->uclamp[UCLAMP_MIN] = attr->sched_util_min;
> > -	p->uclamp[UCLAMP_MAX] = attr->sched_util_max;
> > +	mutex_lock(&uclamp_mutex);
>
> This is going to get called from an rcu_read_lock() section, which is a
> no-go for using mutexes:
>
>   sys_sched_setattr ->
>     rcu_read_lock()
>     ...
>     sched_setattr() ->
>       __sched_setscheduler() ->
>         ...
>         __setscheduler_uclamp() ->
>           ...
>           mutex_lock()

Right, great catch, thanks!

> Guess you could fix the issue by getting the task struct after find_
> process_by_pid() in sys_sched_setattr() and then calling sched_setattr()
> after rcu_read_unlock() (putting the task struct at the end). Peter
> actually suggested this mod to solve a different issue.
I guess you mean something like this?

---8<---
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5792,10 +5792,15 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
 	rcu_read_lock();
 	retval = -ESRCH;
 	p = find_process_by_pid(pid);
-	if (p != NULL)
-		retval = sched_setattr(p, &attr);
+	if (likely(p))
+		get_task_struct(p);
 	rcu_read_unlock();
 
+	if (likely(p)) {
+		retval = sched_setattr(p, &attr);
+		put_task_struct(p);
+	}
+
 	return retval;
 }
---8<---

Cheers, Patrick

-- 
#include <best/regards.h>

Patrick Bellasi