From: "Uladzislau Rezki (Sony)"
To: LKML, RCU, "Paul E. McKenney"
Cc: Andrew Morton, Peter Zijlstra, Michal Hocko, Thomas Gleixner,
	"Theodore Y. Ts'o", Joel Fernandes, Sebastian Andrzej Siewior,
	Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 16/16] rcu/tree: Use delayed work instead of hrtimer to refill the cache
Date: Thu, 29 Oct 2020 17:50:19 +0100
Message-Id: <20201029165019.14218-16-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201029165019.14218-1-urezki@gmail.com>
References: <20201029165019.14218-1-urezki@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

Since CONFIG_PREEMPT_COUNT is now unconditionally enabled, a page can be
obtained directly from the kvfree_rcu() path. The preemptible() macro is
used to decide when it is safe to enter the allocator. As a result,
refilling the cache is no longer timing-critical, so switch from an
hrtimer to delayed work: the actual refill work is queued from the timer
interrupt with a one-jiffy delay. Queuing the work immediately on the
current CPU could lead to an rq->lock double lock, which is why the
delayed approach is used.

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 26 +++++---------------------
 1 file changed, 5 insertions(+), 21 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3f9b016a44dc..9ea55f800b4b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3086,7 +3086,6 @@ struct kfree_rcu_cpu_work {
  *	per-cpu lock.
  * @page_cache_work: A work to refill the cache when it is empty
  * @work_in_progress: Indicates that page_cache_work is running
- * @hrtimer: A hrtimer for scheduling a page_cache_work
  * @nr_bkv_objs: number of allocated objects at @bkvcache.
  *
  * This is a per-CPU structure. The reason that it is not included in
@@ -3104,9 +3103,8 @@ struct kfree_rcu_cpu {
 	bool initialized;
 	int count;
 
-	struct work_struct page_cache_work;
+	struct delayed_work page_cache_work;
 	atomic_t work_in_progress;
-	struct hrtimer hrtimer;
 
 	struct llist_head bkvcache;
 	int nr_bkv_objs;
@@ -3355,22 +3353,12 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	raw_spin_unlock_irqrestore(&krcp->lock, flags);
 }
 
-static enum hrtimer_restart
-schedule_page_work_fn(struct hrtimer *t)
-{
-	struct kfree_rcu_cpu *krcp =
-		container_of(t, struct kfree_rcu_cpu, hrtimer);
-
-	queue_work(system_highpri_wq, &krcp->page_cache_work);
-	return HRTIMER_NORESTART;
-}
-
 static void fill_page_cache_func(struct work_struct *work)
 {
 	struct kvfree_rcu_bulk_data *bnode;
 	struct kfree_rcu_cpu *krcp =
 		container_of(work, struct kfree_rcu_cpu,
-			page_cache_work);
+			page_cache_work.work);
 	unsigned long flags;
 	bool pushed;
 	int i;
@@ -3398,12 +3386,8 @@ static void
 run_page_cache_worker(struct kfree_rcu_cpu *krcp)
 {
 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
-			!atomic_xchg(&krcp->work_in_progress, 1)) {
-		hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC,
-			HRTIMER_MODE_REL);
-		krcp->hrtimer.function = schedule_page_work_fn;
-		hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
-	}
+			!atomic_xchg(&krcp->work_in_progress, 1))
+		schedule_delayed_work(&krcp->page_cache_work, 1);
 }
 
 // Record ptr in a page managed by krcp, with the pre-krc_this_cpu_lock()
@@ -4503,7 +4487,7 @@ static void __init kfree_rcu_batch_init(void)
 		}
 
 		INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
-		INIT_WORK(&krcp->page_cache_work, fill_page_cache_func);
+		INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func);
 		krcp->initialized = true;
 	}
 	if (register_shrinker(&kfree_rcu_shrinker))
-- 
2.20.1