Date: Wed, 15 Mar 2017 09:10:48 -0700
From: "Paul E. McKenney"
To: Joel Fernandes
Cc: Patrick Bellasi, "Joel Fernandes (Google)", Linux Kernel Mailing List, linux-pm@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Tejun Heo
Subject: Re: [RFC v3 1/5] sched/core: add capacity constraints to CPU controller
Reply-To: paulmck@linux.vnet.ibm.com
References: <1488292722-19410-1-git-send-email-patrick.bellasi@arm.com> <1488292722-19410-2-git-send-email-patrick.bellasi@arm.com> <20170315112020.GA18557@e110439-lin>
Message-Id: <20170315161048.GJ3637@linux.vnet.ibm.com>
On Wed, Mar 15, 2017 at 06:20:28AM -0700, Joel Fernandes wrote:
> On Wed, Mar 15, 2017 at 4:20 AM, Patrick Bellasi wrote:
> > On 13-Mar 03:46, Joel Fernandes (Google) wrote:
> >> On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi wrote:
> >> > The CPU CGroup controller allows to assign a specified (maximum)
> >> > bandwidth to tasks within a group, however it does not enforce any
> >> > constraint on how such bandwidth can be consumed.
> >> > With the integration of schedutil, the scheduler has now the proper
> >> > information about a task to select the most suitable frequency to
> >> > satisfy tasks needs.
> >> [..]
> >>
> >> > +static u64 cpu_capacity_min_read_u64(struct cgroup_subsys_state *css,
> >> > +                                     struct cftype *cft)
> >> > +{
> >> > +       struct task_group *tg;
> >> > +       u64 min_capacity;
> >> > +
> >> > +       rcu_read_lock();
> >> > +       tg = css_tg(css);
> >> > +       min_capacity = tg->cap_clamp[CAP_CLAMP_MIN];
> >>
> >> Shouldn't the cap_clamp be accessed with READ_ONCE (and WRITE_ONCE in
> >> the write path) to avoid load-tearing?
> >
> > tg->cap_clamp is an "unsigned int" and thus I would expect a single
> > memory access to write/read it, isn't it? I mean: I do not expect the
> > compiler "to mess" with these accesses.
>
> This depends on the compiler and the architecture. I'm not sure whether
> it is an issue in practice these days, but see the section on 'load
> tearing' in Documentation/memory-barriers.txt. If the compiler decided
> to break the access down into multiple accesses for some reason, then
> it might be a problem.

The compiler might also be able to inline cpu_capacity_min_read_u64()
and fuse the load from tg->cap_clamp[CAP_CLAMP_MIN] with other accesses.
If min_capacity is used several times in the ensuing code, the compiler
could reload multiple times from tg->cap_clamp[CAP_CLAMP_MIN], which at
best might be a bit confusing.
> Adding Paul for his expert opinion on the matter ;)

My personal approach is to use READ_ONCE() and WRITE_ONCE() unless I can
absolutely prove that the compiler cannot do any destructive
optimizations. And I not-infrequently find unsuspected opportunities for
destructive optimization in my own code. Your mileage may vary. ;-)

							Thanx, Paul