From: rao.shoaib@oracle.com
To: linux-kernel@vger.kernel.org
Cc: paulmck@linux.vnet.ibm.com, joe@perches.com, willy@infradead.org,
    brouer@redhat.com, linux-mm@kvack.org, Rao Shoaib
Subject: [PATCH 1/2] Move kfree_call_rcu() to slab_common.c
Date: Sun, 1 Apr 2018 22:31:03 -0700
Message-Id: <1522647064-27167-2-git-send-email-rao.shoaib@oracle.com>
In-Reply-To: <1522647064-27167-1-git-send-email-rao.shoaib@oracle.com>
References: <1522647064-27167-1-git-send-email-rao.shoaib@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Rao Shoaib

kfree_call_rcu() does not belong in linux/rcupdate.h and should be moved
to slab_common.c.

Signed-off-by: Rao Shoaib
---
 include/linux/rcupdate.h | 43 +++----------------------------------------
 include/linux/rcutree.h  |  2 --
 include/linux/slab.h     | 42 ++++++++++++++++++++++++++++++++++++++++++
 kernel/rcu/tree.c        | 24 ++++++++++--------------
 mm/slab_common.c         | 10 ++++++++++
 5 files changed, 65 insertions(+), 56 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 043d047..6338fb6 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -55,6 +55,9 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
 #define call_rcu	call_rcu_sched
 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
 
+/* only for use by kfree_call_rcu() */
+void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func);
+
 void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
 void synchronize_sched(void);
@@ -837,45 +840,6 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define __is_kfree_rcu_offset(offset) ((offset) < 4096)
 
 /*
- * Helper macro for kfree_rcu() to prevent argument-expansion eyestrain.
- */
-#define __kfree_rcu(head, offset) \
-	do { \
-		BUILD_BUG_ON(!__is_kfree_rcu_offset(offset)); \
-		kfree_call_rcu(head, (rcu_callback_t)(unsigned long)(offset)); \
-	} while (0)
-
-/**
- * kfree_rcu() - kfree an object after a grace period.
- * @ptr: pointer to kfree
- * @rcu_head: the name of the struct rcu_head within the type of @ptr.
- *
- * Many rcu callbacks functions just call kfree() on the base structure.
- * These functions are trivial, but their size adds up, and furthermore
- * when they are used in a kernel module, that module must invoke the
- * high-latency rcu_barrier() function at module-unload time.
- *
- * The kfree_rcu() function handles this issue. Rather than encoding a
- * function address in the embedded rcu_head structure, kfree_rcu() instead
- * encodes the offset of the rcu_head structure within the base structure.
- * Because the functions are not allowed in the low-order 4096 bytes of
- * kernel virtual memory, offsets up to 4095 bytes can be accommodated.
- * If the offset is larger than 4095 bytes, a compile-time error will
- * be generated in __kfree_rcu(). If this error is triggered, you can
- * either fall back to use of call_rcu() or rearrange the structure to
- * position the rcu_head structure into the first 4096 bytes.
- *
- * Note that the allowable offset might decrease in the future, for example,
- * to allow something like kmem_cache_free_rcu().
- *
- * The BUILD_BUG_ON check must not involve any function calls, hence the
- * checks are done in macros here.
- */
-#define kfree_rcu(ptr, rcu_head) \
-	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
-
-
-/*
  * Place this after a lock-acquisition primitive to guarantee that
  * an UNLOCK+LOCK pair acts as a full barrier. This guarantee applies
  * if the UNLOCK and LOCK are executed by the same CPU or if the
@@ -887,5 +851,4 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define smp_mb__after_unlock_lock() do { } while (0)
 #endif /* #else #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
 
-
 #endif /* __LINUX_RCUPDATE_H */
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index fd996cd..567ef58 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -48,8 +48,6 @@ void synchronize_rcu_bh(void);
 void synchronize_sched_expedited(void);
 void synchronize_rcu_expedited(void);
 
-void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
-
 /**
  * synchronize_rcu_bh_expedited - Brute-force RCU-bh grace period
  *
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 231abc8..116e870 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -355,6 +355,48 @@ void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
 void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment __malloc;
 void kmem_cache_free(struct kmem_cache *, void *);
+void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
+
+/* Helper macro for kfree_rcu() to prevent argument-expansion eyestrain. */
+#define __kfree_rcu(head, offset) \
+	do { \
+		unsigned long __of = (unsigned long)(offset); \
+		BUILD_BUG_ON(!__is_kfree_rcu_offset(__of)); \
+		kfree_call_rcu(head, (rcu_callback_t)(__of)); \
+	} while (0)
+
+/**
+ * kfree_rcu() - kfree an object after a grace period.
+ * @ptr: pointer to kfree
+ * @rcu_name: the name of the struct rcu_head within the type of @ptr.
+ *
+ * Many rcu callback functions just call kfree() on the base structure.
+ * These functions are trivial, but their size adds up, and furthermore
+ * when they are used in a kernel module, that module must invoke the
+ * high-latency rcu_barrier() function at module-unload time.
+ *
+ * The kfree_rcu() function handles this issue. Rather than encoding a
+ * function address in the embedded rcu_head structure, kfree_rcu() instead
+ * encodes the offset of the rcu_head structure within the base structure.
+ * Because the functions are not allowed in the low-order 4096 bytes of
+ * kernel virtual memory, offsets up to 4095 bytes can be accommodated.
+ * If the offset is larger than 4095 bytes, a compile-time error will
+ * be generated in __kfree_rcu(). If this error is triggered, you can
+ * either fall back to use of call_rcu() or rearrange the structure to
+ * position the rcu_head structure into the first 4096 bytes.
+ *
+ * Note that the allowable offset might decrease in the future, for example,
+ * to allow something like kmem_cache_free_rcu().
+ *
+ * The BUILD_BUG_ON check must not involve any function calls, hence the
+ * checks are done in macros here.
+ */
+#define kfree_rcu(ptr, rcu_name) \
+	do { \
+		unsigned long __off = offsetof(typeof(*(ptr)), rcu_name); \
+		struct rcu_head *__rptr = (void *)(ptr) + __off; \
+		__kfree_rcu(__rptr, __off); \
+	} while (0)
 
 /*
  * Bulk allocation and freeing operations. These are accelerated in an
  * allocator specific way to avoid taking locks repeatedly or building
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 491bdf3..e40f014 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3101,6 +3101,16 @@ void call_rcu_sched(struct rcu_head *head, rcu_callback_t func)
 }
 EXPORT_SYMBOL_GPL(call_rcu_sched);
 
+/* Queue an RCU callback for lazy invocation after a grace period.
+ * Currently there is no way of tagging the lazy RCU callbacks in the
+ * list of pending callbacks. Until then, this function may only be
+ * called from kfree_call_rcu().
+ */
+void call_rcu_lazy(struct rcu_head *head, rcu_callback_t func)
+{
+	__call_rcu(head, func, rcu_state_p, -1, 1);
+}
+
 /**
  * call_rcu_bh() - Queue an RCU for invocation after a quicker grace period.
  * @head: structure to be used for queueing the RCU updates.
@@ -3130,20 +3140,6 @@ void call_rcu_bh(struct rcu_head *head, rcu_callback_t func)
 EXPORT_SYMBOL_GPL(call_rcu_bh);
 
 /*
- * Queue an RCU callback for lazy invocation after a grace period.
- * This will likely be later named something like "call_rcu_lazy()",
- * but this change will require some way of tagging the lazy RCU
- * callbacks in the list of pending callbacks. Until then, this
- * function may only be called from __kfree_rcu().
- */
-void kfree_call_rcu(struct rcu_head *head,
-		    rcu_callback_t func)
-{
-	__call_rcu(head, func, rcu_state_p, -1, 1);
-}
-EXPORT_SYMBOL_GPL(kfree_call_rcu);
-
-/*
  * Because a context switch is a grace period for RCU-sched and RCU-bh,
  * any blocking grace-period wait automatically implies a grace period
  * if there is only one CPU online at any point time during execution
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 10f127b..2ea9866 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1525,6 +1525,16 @@ void kzfree(const void *p)
 }
 EXPORT_SYMBOL(kzfree);
 
+/*
+ * Queue memory to be freed by RCU after a grace period.
+ */
+void kfree_call_rcu(struct rcu_head *head,
+		    rcu_callback_t func)
+{
+	call_rcu_lazy(head, func);
+}
+EXPORT_SYMBOL_GPL(kfree_call_rcu);
+
 /* Tracepoints definitions. */
 EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
-- 
2.7.4