From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
	"Uladzislau Rezki (Sony)", "Paul E . McKenney"
Subject: [PATCH rcu 3/8] rcu/kvfree: Move bulk/list reclaim to separate functions
Date: Wed, 4 Jan 2023 16:24:43 -0800
Message-Id: <20230105002448.1768892-3-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

The kvfree_rcu() code maintains lists of pages of pointers, but also a
singly linked list, with the latter being used when memory allocation
fails.  Traversal of these two types of lists is currently open coded.
This commit simplifies the code by providing kvfree_rcu_bulk() and
kvfree_rcu_list() functions, respectively, to traverse these two types
of lists.

This patch does not introduce any functional change.

Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 114 ++++++++++++++++++++++++++--------------------
 1 file changed, 65 insertions(+), 49 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4088b34ce9610..839e617f6c370 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3029,6 +3029,65 @@ drain_page_cache(struct kfree_rcu_cpu *krcp)
 	return freed;
 }
 
+static void
+kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp,
+	struct kvfree_rcu_bulk_data *bnode, int idx)
+{
+	unsigned long flags;
+	int i;
+
+	debug_rcu_bhead_unqueue(bnode);
+
+	rcu_lock_acquire(&rcu_callback_map);
+	if (idx == 0) { // kmalloc() / kfree().
+		trace_rcu_invoke_kfree_bulk_callback(
+			rcu_state.name, bnode->nr_records,
+			bnode->records);
+
+		kfree_bulk(bnode->nr_records, bnode->records);
+	} else { // vmalloc() / vfree().
+		for (i = 0; i < bnode->nr_records; i++) {
+			trace_rcu_invoke_kvfree_callback(
+				rcu_state.name, bnode->records[i], 0);
+
+			vfree(bnode->records[i]);
+		}
+	}
+	rcu_lock_release(&rcu_callback_map);
+
+	raw_spin_lock_irqsave(&krcp->lock, flags);
+	if (put_cached_bnode(krcp, bnode))
+		bnode = NULL;
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
+	if (bnode)
+		free_page((unsigned long) bnode);
+
+	cond_resched_tasks_rcu_qs();
+}
+
+static void
+kvfree_rcu_list(struct rcu_head *head)
+{
+	struct rcu_head *next;
+
+	for (; head; head = next) {
+		void *ptr = (void *) head->func;
+		unsigned long offset = (void *) head - ptr;
+
+		next = head->next;
+		debug_rcu_head_unqueue((struct rcu_head *)ptr);
+		rcu_lock_acquire(&rcu_callback_map);
+		trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
+
+		if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
+			kvfree(ptr);
+
+		rcu_lock_release(&rcu_callback_map);
+		cond_resched_tasks_rcu_qs();
+	}
+}
+
 /*
  * This function is invoked in workqueue context after a grace period.
  * It frees all the objects queued on ->bulk_head_free or ->head_free.
@@ -3038,10 +3097,10 @@ static void kfree_rcu_work(struct work_struct *work)
 	unsigned long flags;
 	struct kvfree_rcu_bulk_data *bnode, *n;
 	struct list_head bulk_head[FREE_N_CHANNELS];
-	struct rcu_head *head, *next;
+	struct rcu_head *head;
 	struct kfree_rcu_cpu *krcp;
 	struct kfree_rcu_cpu_work *krwp;
-	int i, j;
+	int i;
 
 	krwp = container_of(to_rcu_work(work),
 			    struct kfree_rcu_cpu_work, rcu_work);
@@ -3058,38 +3117,9 @@ static void kfree_rcu_work(struct work_struct *work)
 	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 
 	// Handle the first two channels.
-	for (i = 0; i < FREE_N_CHANNELS; i++) {
-		list_for_each_entry_safe(bnode, n, &bulk_head[i], list) {
-			debug_rcu_bhead_unqueue(bnode);
-
-			rcu_lock_acquire(&rcu_callback_map);
-			if (i == 0) { // kmalloc() / kfree().
-				trace_rcu_invoke_kfree_bulk_callback(
-					rcu_state.name, bnode->nr_records,
-					bnode->records);
-
-				kfree_bulk(bnode->nr_records, bnode->records);
-			} else { // vmalloc() / vfree().
-				for (j = 0; j < bnode->nr_records; j++) {
-					trace_rcu_invoke_kvfree_callback(
-						rcu_state.name, bnode->records[j], 0);
-
-					vfree(bnode->records[j]);
-				}
-			}
-			rcu_lock_release(&rcu_callback_map);
-
-			raw_spin_lock_irqsave(&krcp->lock, flags);
-			if (put_cached_bnode(krcp, bnode))
-				bnode = NULL;
-			raw_spin_unlock_irqrestore(&krcp->lock, flags);
-
-			if (bnode)
-				free_page((unsigned long) bnode);
-
-			cond_resched_tasks_rcu_qs();
-		}
-	}
+	for (i = 0; i < FREE_N_CHANNELS; i++)
+		list_for_each_entry_safe(bnode, n, &bulk_head[i], list)
+			kvfree_rcu_bulk(krcp, bnode, i);
 
 	/*
 	 * This is used when the "bulk" path can not be used for the
@@ -3098,21 +3128,7 @@ static void kfree_rcu_work(struct work_struct *work)
 	 * queued on a linked list through their rcu_head structures.
 	 * This list is named "Channel 3".
 	 */
-	for (; head; head = next) {
-		void *ptr = (void *) head->func;
-		unsigned long offset = (void *) head - ptr;
-
-		next = head->next;
-		debug_rcu_head_unqueue((struct rcu_head *)ptr);
-		rcu_lock_acquire(&rcu_callback_map);
-		trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
-
-		if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
-			kvfree(ptr);
-
-		rcu_lock_release(&rcu_callback_map);
-		cond_resched_tasks_rcu_qs();
-	}
+	kvfree_rcu_list(head);
 }
 
 static bool
-- 
2.31.1.189.g2e36527f23