From: Roman Gushchin <guro@fb.com>
To: Andrew Morton
CC: Johannes Weiner, Shakeel Butt, Vladimir Davydov, Waiman Long,
    Roman Gushchin
Subject: [PATCH v6 03/10] mm: rename slab delayed deactivation functions and fields
Date: Tue, 4 Jun 2019 19:44:47 -0700
Message-ID: <20190605024454.1393507-4-guro@fb.com>
In-Reply-To: <20190605024454.1393507-1-guro@fb.com>
References: <20190605024454.1393507-1-guro@fb.com>

The delayed work/rcu deactivation infrastructure of non-root kmem_caches
can also be used for asynchronous release of these objects.
Let's get rid of the word "deactivation" in the corresponding names to make
the code look better after generalization.

It's easier to make the renaming first, so that the generalized code will
look consistent from scratch.

Let's rename struct memcg_cache_params fields:
  deact_fn -> work_fn
  deact_rcu_head -> rcu_head
  deact_work -> work

And the RCU/delayed work callbacks in slab common code:
  kmemcg_deactivate_rcufn -> kmemcg_rcufn
  kmemcg_deactivate_workfn -> kmemcg_workfn

This patch contains no functional changes, only renamings.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/slab.h |  6 +++---
 mm/slab.h            |  2 +-
 mm/slab_common.c     | 30 +++++++++++++++---------------
 3 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 9449b19c5f10..47923c173f30 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -642,10 +642,10 @@ struct memcg_cache_params {
 			struct list_head children_node;
 			struct list_head kmem_caches_node;
 
-			void (*deact_fn)(struct kmem_cache *);
+			void (*work_fn)(struct kmem_cache *);
 			union {
-				struct rcu_head deact_rcu_head;
-				struct work_struct deact_work;
+				struct rcu_head rcu_head;
+				struct work_struct work;
 			};
 		};
 	};
diff --git a/mm/slab.h b/mm/slab.h
index c16e5af0fb59..8ff90f42548a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -292,7 +292,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 extern void slab_init_memcg_params(struct kmem_cache *);
 extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
 extern void slab_deactivate_memcg_cache_rcu_sched(struct kmem_cache *s,
-				void (*deact_fn)(struct kmem_cache *));
+				void (*work_fn)(struct kmem_cache *));
 
 #else /* CONFIG_MEMCG_KMEM */
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 77df6029de8e..d019ee66bdc4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -692,17 +692,17 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
 	put_online_cpus();
 }
 
-static void kmemcg_deactivate_workfn(struct work_struct *work)
+static void kmemcg_workfn(struct work_struct *work)
 {
 	struct kmem_cache *s = container_of(work, struct kmem_cache,
-					    memcg_params.deact_work);
+					    memcg_params.work);
 
 	get_online_cpus();
 	get_online_mems();
 
 	mutex_lock(&slab_mutex);
 
-	s->memcg_params.deact_fn(s);
+	s->memcg_params.work_fn(s);
 
 	mutex_unlock(&slab_mutex);
 
@@ -713,36 +713,36 @@ static void kmemcg_deactivate_workfn(struct work_struct *work)
 	css_put(&s->memcg_params.memcg->css);
 }
 
-static void kmemcg_deactivate_rcufn(struct rcu_head *head)
+static void kmemcg_rcufn(struct rcu_head *head)
 {
 	struct kmem_cache *s = container_of(head, struct kmem_cache,
-					    memcg_params.deact_rcu_head);
+					    memcg_params.rcu_head);
 
 	/*
-	 * We need to grab blocking locks. Bounce to ->deact_work. The
+	 * We need to grab blocking locks. Bounce to ->work. The
 	 * work item shares the space with the RCU head and can't be
 	 * initialized eariler.
 	 */
-	INIT_WORK(&s->memcg_params.deact_work, kmemcg_deactivate_workfn);
-	queue_work(memcg_kmem_cache_wq, &s->memcg_params.deact_work);
+	INIT_WORK(&s->memcg_params.work, kmemcg_workfn);
+	queue_work(memcg_kmem_cache_wq, &s->memcg_params.work);
 }
 
 /**
  * slab_deactivate_memcg_cache_rcu_sched - schedule deactivation after a
  * sched RCU grace period
  * @s: target kmem_cache
- * @deact_fn: deactivation function to call
+ * @work_fn: deactivation function to call
  *
- * Schedule @deact_fn to be invoked with online cpus, mems and slab_mutex
+ * Schedule @work_fn to be invoked with online cpus, mems and slab_mutex
  * held after a sched RCU grace period.  The slab is guaranteed to stay
- * alive until @deact_fn is finished.  This is to be used from
+ * alive until @work_fn is finished.  This is to be used from
  * __kmemcg_cache_deactivate().
  */
 void slab_deactivate_memcg_cache_rcu_sched(struct kmem_cache *s,
-					   void (*deact_fn)(struct kmem_cache *))
+					   void (*work_fn)(struct kmem_cache *))
 {
 	if (WARN_ON_ONCE(is_root_cache(s)) ||
-	    WARN_ON_ONCE(s->memcg_params.deact_fn))
+	    WARN_ON_ONCE(s->memcg_params.work_fn))
 		return;
 
 	if (s->memcg_params.root_cache->memcg_params.dying)
@@ -751,8 +751,8 @@ void slab_deactivate_memcg_cache_rcu_sched(struct kmem_cache *s,
 	/* pin memcg so that @s doesn't get destroyed in the middle */
 	css_get(&s->memcg_params.memcg->css);
 
-	s->memcg_params.deact_fn = deact_fn;
-	call_rcu(&s->memcg_params.deact_rcu_head, kmemcg_deactivate_rcufn);
+	s->memcg_params.work_fn = work_fn;
+	call_rcu(&s->memcg_params.rcu_head, kmemcg_rcufn);
 }
 
 void memcg_deactivate_kmem_caches(struct mem_cgroup *memcg)
-- 
2.20.1