Subject: [PATCH RFC 09/16] prcu: Implement prcu_barrier() API
From: Lihao Liang
Date: Tue, 23 Jan 2018 15:59:34 +0800
Message-ID: <1516694381-20333-10-git-send-email-lianglihao@huawei.com>
In-Reply-To: <1516694381-20333-1-git-send-email-lianglihao@huawei.com>
References: <1516694381-20333-1-git-send-email-lianglihao@huawei.com>
X-Mailer: git-send-email 1.7.12.4
X-Mailing-List: linux-kernel@vger.kernel.org

This is PRCU's counterpart of RCU's rcu_barrier() API: it blocks until
all PRCU callbacks queued by earlier call_prcu() invocations have been
invoked.
Reviewed-by: Heng Zhang
Signed-off-by: Lihao Liang
---
 include/linux/prcu.h |  7 ++++++
 kernel/rcu/prcu.c    | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/include/linux/prcu.h b/include/linux/prcu.h
index 4e7d5d65..cce967fd 100644
--- a/include/linux/prcu.h
+++ b/include/linux/prcu.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include <linux/completion.h>
 
 #define CONFIG_PRCU
 
@@ -32,6 +33,7 @@ struct prcu_local_struct {
 	unsigned int online;
 	unsigned long long version;
 	unsigned long long cb_version;
+	struct rcu_head barrier_head;
 	struct prcu_cblist cblist;
 };
 
@@ -39,8 +41,11 @@ struct prcu_struct {
 	atomic64_t global_version;
 	atomic64_t cb_version;
 	atomic_t active_ctr;
+	atomic_t barrier_cpu_count;
 	struct mutex mtx;
+	struct mutex barrier_mtx;
 	wait_queue_head_t wait_q;
+	struct completion barrier_completion;
 };
 
 #ifdef CONFIG_PRCU
@@ -48,6 +53,7 @@ void prcu_read_lock(void);
 void prcu_read_unlock(void);
 void synchronize_prcu(void);
 void call_prcu(struct rcu_head *head, rcu_callback_t func);
+void prcu_barrier(void);
 void prcu_init(void);
 void prcu_note_context_switch(void);
 int prcu_pending(void);
@@ -60,6 +66,7 @@ void prcu_check_callbacks(void);
 #define prcu_read_unlock() do {} while (0)
 #define synchronize_prcu() do {} while (0)
 #define call_prcu() do {} while (0)
+#define prcu_barrier() do {} while (0)
 #define prcu_init() do {} while (0)
 #define prcu_note_context_switch() do {} while (0)
 #define prcu_pending() 0
diff --git a/kernel/rcu/prcu.c b/kernel/rcu/prcu.c
index 373039c5..2664d091 100644
--- a/kernel/rcu/prcu.c
+++ b/kernel/rcu/prcu.c
@@ -15,6 +15,7 @@ struct prcu_struct global_prcu = {
 	.cb_version = ATOMIC64_INIT(0),
 	.active_ctr = ATOMIC_INIT(0),
 	.mtx = __MUTEX_INITIALIZER(global_prcu.mtx),
+	.barrier_mtx = __MUTEX_INITIALIZER(global_prcu.barrier_mtx),
 	.wait_q = __WAIT_QUEUE_HEAD_INITIALIZER(global_prcu.wait_q)
 };
 struct prcu_struct *prcu = &global_prcu;
@@ -250,6 +251,68 @@ static __latent_entropy void prcu_process_callbacks(struct softirq_action *unused)
 	local_irq_restore(flags);
 }
 
+/*
+ * PRCU callback function for prcu_barrier().
+ * If we are last, wake up the task executing prcu_barrier().
+ */
+static void prcu_barrier_callback(struct rcu_head *rhp)
+{
+	if (atomic_dec_and_test(&prcu->barrier_cpu_count))
+		complete(&prcu->barrier_completion);
+}
+
+/*
+ * Called with preemption disabled, and from cross-cpu IRQ context.
+ */
+static void prcu_barrier_func(void *info)
+{
+	struct prcu_local_struct *local = this_cpu_ptr(&prcu_local);
+
+	atomic_inc(&prcu->barrier_cpu_count);
+	call_prcu(&local->barrier_head, prcu_barrier_callback);
+}
+
+/* Wait for all previously queued PRCU callbacks to complete. */
+void prcu_barrier(void)
+{
+	int cpu;
+
+	/* Take mutex to serialize concurrent prcu_barrier() requests. */
+	mutex_lock(&prcu->barrier_mtx);
+
+	/*
+	 * Initialize the count to one rather than to zero in order to
+	 * avoid a too-soon return to zero in case of a short grace period
+	 * (or preemption of this task).
+	 */
+	init_completion(&prcu->barrier_completion);
+	atomic_set(&prcu->barrier_cpu_count, 1);
+
+	/*
+	 * Register a new callback on each CPU using IPI to prevent races
+	 * with call_prcu(). When that callback is invoked, we will know
+	 * that all of the corresponding CPU's preceding callbacks have
+	 * been invoked.
+	 */
+	for_each_possible_cpu(cpu)
+		smp_call_function_single(cpu, prcu_barrier_func, NULL, 1);
+
+	/*
+	 * Now that we have a prcu_barrier_callback() callback on each
+	 * CPU, and thus each CPU counted, remove the initial count: only
+	 * now can the count reach zero.
+	 */
+	if (atomic_dec_and_test(&prcu->barrier_cpu_count))
+		complete(&prcu->barrier_completion);
+
+	/* Wait for all prcu_barrier_callback() callbacks to be invoked. */
+	wait_for_completion(&prcu->barrier_completion);
+
+	/* Other prcu_barrier() invocations can now safely proceed. */
+	mutex_unlock(&prcu->barrier_mtx);
+}
+EXPORT_SYMBOL(prcu_barrier);
+
 void prcu_init_local_struct(int cpu)
 {
 	struct prcu_local_struct *local;
-- 
2.14.1.729.g59c0ea183