Date: Wed, 24 Jan 2018 22:24:57 -0800
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: lianglihao@huawei.com
Cc: guohanjun@huawei.com, heng.z@huawei.com, hb.chen@huawei.com, lihao.liang@gmail.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 09/16] prcu: Implement prcu_barrier() API
References: <1516694381-20333-1-git-send-email-lianglihao@huawei.com> <1516694381-20333-10-git-send-email-lianglihao@huawei.com>
In-Reply-To: <1516694381-20333-10-git-send-email-lianglihao@huawei.com>
Message-Id: <20180125062457.GX3741@linux.vnet.ibm.com>
On Tue, Jan 23, 2018 at 03:59:34PM +0800, lianglihao@huawei.com wrote:
> From: Lihao Liang
>
> This is PRCU's counterpart of RCU's rcu_barrier() API.
>
> Reviewed-by: Heng Zhang
> Signed-off-by: Lihao Liang
> ---
>  include/linux/prcu.h |  7 ++++++
>  kernel/rcu/prcu.c    | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 70 insertions(+)
>
> diff --git a/include/linux/prcu.h b/include/linux/prcu.h
> index 4e7d5d65..cce967fd 100644
> --- a/include/linux/prcu.h
> +++ b/include/linux/prcu.h
> @@ -5,6 +5,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #define CONFIG_PRCU
>
> @@ -32,6 +33,7 @@ struct prcu_local_struct {
>  	unsigned int online;
>  	unsigned long long version;
>  	unsigned long long cb_version;
> +	struct rcu_head barrier_head;
>  	struct prcu_cblist cblist;
>  };
>
> @@ -39,8 +41,11 @@ struct prcu_struct {
>  	atomic64_t global_version;
>  	atomic64_t cb_version;
>  	atomic_t active_ctr;
> +	atomic_t barrier_cpu_count;
>  	struct mutex mtx;
> +	struct mutex barrier_mtx;
>  	wait_queue_head_t wait_q;
> +	struct completion barrier_completion;
>  };
>
>  #ifdef CONFIG_PRCU
> @@ -48,6 +53,7 @@ void prcu_read_lock(void);
>  void prcu_read_unlock(void);
>  void synchronize_prcu(void);
>  void call_prcu(struct rcu_head *head, rcu_callback_t func);
> +void prcu_barrier(void);
>  void prcu_init(void);
>  void prcu_note_context_switch(void);
>  int prcu_pending(void);
> @@ -60,6 +66,7 @@ void prcu_check_callbacks(void);
>  #define prcu_read_unlock() do {} while (0)
>  #define synchronize_prcu() do {} while (0)
>  #define call_prcu() do {} while (0)
> +#define prcu_barrier() do {} while (0)
>  #define prcu_init() do {} while (0)
>  #define prcu_note_context_switch() do {} while (0)
>  #define prcu_pending() 0
> diff --git a/kernel/rcu/prcu.c b/kernel/rcu/prcu.c
> index 373039c5..2664d091 100644
> --- a/kernel/rcu/prcu.c
> +++ b/kernel/rcu/prcu.c
> @@ -15,6 +15,7 @@ struct prcu_struct global_prcu = {
>  	.cb_version = ATOMIC64_INIT(0),
>  	.active_ctr = ATOMIC_INIT(0),
>  	.mtx = __MUTEX_INITIALIZER(global_prcu.mtx),
> +	.barrier_mtx = __MUTEX_INITIALIZER(global_prcu.barrier_mtx),
>  	.wait_q = __WAIT_QUEUE_HEAD_INITIALIZER(global_prcu.wait_q)
>  };
>  struct prcu_struct *prcu = &global_prcu;
> @@ -250,6 +251,68 @@ static __latent_entropy void prcu_process_callbacks(struct softirq_action *unuse
>  	local_irq_restore(flags);
>  }
>
> +/*
> + * PRCU callback function for prcu_barrier().
> + * If we are last, wake up the task executing prcu_barrier().
> + */
> +static void prcu_barrier_callback(struct rcu_head *rhp)
> +{
> +	if (atomic_dec_and_test(&prcu->barrier_cpu_count))
> +		complete(&prcu->barrier_completion);
> +}
> +
> +/*
> + * Called with preemption disabled, and from cross-cpu IRQ context.
> + */
> +static void prcu_barrier_func(void *info)
> +{
> +	struct prcu_local_struct *local = this_cpu_ptr(&prcu_local);
> +
> +	atomic_inc(&prcu->barrier_cpu_count);
> +	call_prcu(&local->barrier_head, prcu_barrier_callback);
> +}
> +
> +/* Waiting for all PRCU callbacks to complete. */
> +void prcu_barrier(void)
> +{
> +	int cpu;
> +
> +	/* Take mutex to serialize concurrent prcu_barrier() requests. */
> +	mutex_lock(&prcu->barrier_mtx);
> +
> +	/*
> +	 * Initialize the count to one rather than to zero in order to
> +	 * avoid a too-soon return to zero in case of a short grace period
> +	 * (or preemption of this task).
> +	 */
> +	init_completion(&prcu->barrier_completion);
> +	atomic_set(&prcu->barrier_cpu_count, 1);
> +
> +	/*
> +	 * Register a new callback on each CPU using IPI to prevent races
> +	 * with call_prcu(). When that callback is invoked, we will know
> +	 * that all of the corresponding CPU's preceding callbacks have
> +	 * been invoked.
> +	 */
> +	for_each_possible_cpu(cpu)
> +		smp_call_function_single(cpu, prcu_barrier_func, NULL, 1);

This code seems to be assuming CONFIG_HOTPLUG_CPU=n.  This might explain
your rcutorture failure.

> +	/* Decrement the count as we initialize it to one. */
> +	if (atomic_dec_and_test(&prcu->barrier_cpu_count))
> +		complete(&prcu->barrier_completion);
> +
> +	/*
> +	 * Now that we have an prcu_barrier_callback() callback on each
> +	 * CPU, and thus each counted, remove the initial count.
> +	 * Wait for all prcu_barrier_callback() callbacks to be invoked.
> +	 */
> +	wait_for_completion(&prcu->barrier_completion);
> +
> +	/* Other rcu_barrier() invocations can now safely proceed. */
> +	mutex_unlock(&prcu->barrier_mtx);
> +}
> +EXPORT_SYMBOL(prcu_barrier);
> +
>  void prcu_init_local_struct(int cpu)
>  {
>  	struct prcu_local_struct *local;
> --
> 2.14.1.729.g59c0ea183
>