From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: Sebastian Andrzej Siewior, Andy Lutomirski, Boqun Feng, Will Deacon,
	Peter Zijlstra, Ingo Molnar, Waiman Long, Sultan Alsawaf,
	"Theodore Ts'o", Jonathan Neuschäfer, LKML, Thomas Gleixner
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Subject: [PATCH v3] random: remove batched entropy locking
Date: Fri, 4 Feb 2022 16:51:42 +0100
Message-Id: <20220204155142.56419-1-Jason@zx2c4.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Rather than use spinlocks to protect batched entropy, we can instead
disable interrupts locally, since we're dealing with per-cpu data, and
manage resets with a basic generation counter. At the same time, we
can't quite do this on PREEMPT_RT, where we still want
spinlocks-as-mutexes semantics. So we use a local_lock_t, which
provides the right behavior for each. Because this is a per-cpu lock,
the generation counter is still doing the necessary CPU-to-CPU
communication.

This should improve performance a bit. It will also fix the linked
splat that Jonathan received with PROVE_RAW_LOCK_NESTING=y.

Note that Sebastian has pointed out a few other areas where using
spinlock_t in an IRQ context is potentially problematic for
PREEMPT_RT. This patch handles one of those cases, and we'll hopefully
have additional patches for the other cases.

Suggested-by: Andy Lutomirski
Reported-by: Jonathan Neuschäfer
Tested-by: Jonathan Neuschäfer
Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/
Cc: Sebastian Andrzej Siewior
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 drivers/char/random.c | 55 ++++++++++++++++++++++---------------------
 1 file changed, 28 insertions(+), 27 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 455615ac169a..3e54b90a3ff8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1732,13 +1732,16 @@ struct ctl_table random_table[] = {
 };
 #endif 	/* CONFIG_SYSCTL */
 
+static atomic_t batch_generation = ATOMIC_INIT(0);
+
 struct batched_entropy {
 	union {
 		u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
 		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
 	};
+	local_lock_t lock;
 	unsigned int position;
-	spinlock_t batch_lock;
+	int generation;
 };
 
 /*
@@ -1750,7 +1753,7 @@ struct batched_entropy {
  * point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
-	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+	.lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock)
 };
 
 u64 get_random_u64(void)
@@ -1759,41 +1762,54 @@ u64 get_random_u64(void)
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
+	int next_gen;
 
 	warn_unseeded_randomness(&previous);
 
-	batch = raw_cpu_ptr(&batched_entropy_u64);
-	spin_lock_irqsave(&batch->batch_lock, flags);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+	batch = this_cpu_ptr(&batched_entropy_u64);
+	local_lock_irqsave(&batch->lock, flags);
+
+	next_gen = atomic_read(&batch_generation);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 ||
+	    next_gen != batch->generation) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
+		batch->generation = next_gen;
 	}
+
 	ret = batch->entropy_u64[batch->position++];
-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+	local_unlock_irqrestore(&batch->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u64);
 
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
-	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
+	.lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock)
 };
+
 u32 get_random_u32(void)
 {
 	u32 ret;
 	unsigned long flags;
 	struct batched_entropy *batch;
 	static void *previous;
+	int next_gen;
 
 	warn_unseeded_randomness(&previous);
 
-	batch = raw_cpu_ptr(&batched_entropy_u32);
-	spin_lock_irqsave(&batch->batch_lock, flags);
-	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+	batch = this_cpu_ptr(&batched_entropy_u32);
+	local_lock_irqsave(&batch->lock, flags);
+
+	next_gen = atomic_read(&batch_generation);
+	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 ||
+	    next_gen != batch->generation) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
+		batch->generation = next_gen;
 	}
+
 	ret = batch->entropy_u32[batch->position++];
-	spin_unlock_irqrestore(&batch->batch_lock, flags);
+	local_unlock_irqrestore(&batch->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
@@ -1804,22 +1820,7 @@ EXPORT_SYMBOL(get_random_u32);
  * next usage.
  */
 static void invalidate_batched_entropy(void)
 {
-	int cpu;
-	unsigned long flags;
-
-	for_each_possible_cpu(cpu) {
-		struct batched_entropy *batched_entropy;
-
-		batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
-		spin_lock_irqsave(&batched_entropy->batch_lock, flags);
-		batched_entropy->position = 0;
-		spin_unlock(&batched_entropy->batch_lock);
-
-		batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
-		spin_lock(&batched_entropy->batch_lock);
-		batched_entropy->position = 0;
-		spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
-	}
+	atomic_inc(&batch_generation);
 }
 
 /**
-- 
2.35.0
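[Editor's note] For readers unfamiliar with the pattern, the generation-counter invalidation used by this patch can be sketched as a plain userspace analogue. This is a minimal illustration only, not kernel code: all names (`refill`, `get_batched`, `invalidate`, `BATCH_WORDS`) are hypothetical, a C11 `atomic_int` stands in for the kernel's `atomic_t`, and the `local_lock_irqsave()` critical section is reduced to a comment.

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace sketch of the patch's scheme: a batch of words is consumed
 * one at a time, and one global generation counter invalidates every
 * batch lazily, with no walk over per-CPU data and no cross-CPU lock. */

#define BATCH_WORDS 4

static atomic_int batch_generation;        /* analogue of the kernel atomic_t */
static unsigned long long refill_count;    /* counts refills, for observation  */

struct batched_entropy {
	unsigned long long entropy[BATCH_WORDS];
	unsigned int position;
	int generation;
};

/* Stand-in for extract_crng(): fill the batch with fresh (fake) data. */
static void refill(struct batched_entropy *batch)
{
	for (unsigned int i = 0; i < BATCH_WORDS; ++i)
		batch->entropy[i] = refill_count * 1000 + i;
	refill_count++;
}

static unsigned long long get_batched(struct batched_entropy *batch)
{
	/* In the kernel, this section runs under local_lock_irqsave(),
	 * so the check-and-refill below cannot be interleaved locally. */
	int next_gen = atomic_load(&batch_generation);

	if (batch->position % BATCH_WORDS == 0 ||
	    next_gen != batch->generation) {
		refill(batch);
		batch->position = 0;
		batch->generation = next_gen;
	}
	return batch->entropy[batch->position++];
}

/* Stand-in for the new invalidate_batched_entropy(): a single atomic
 * increment marks every outstanding batch stale at once. */
static void invalidate(void)
{
	atomic_fetch_add(&batch_generation, 1);
}
```

The point of the design is visible here: invalidation is O(1) and lock-free, while each consumer pays for the reset lazily, on its next draw, by noticing the generation mismatch.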