From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner,
    Peter Zijlstra, Theodore Tso, Jonathan Neuschäfer,
    Sebastian Andrzej Siewior, Sultan Alsawaf, Dominik Brodowski,
    "Jason A. Donenfeld"
Subject: [PATCH 4.9 141/264] random: defer fast pool mixing to worker
Date: Thu, 23 Jun 2022 18:42:14 +0200
Message-Id: <20220623164348.055607901@linuxfoundation.org>
In-Reply-To: <20220623164344.053938039@linuxfoundation.org>
References: <20220623164344.053938039@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream.

On PREEMPT_RT, it's problematic to take spinlocks from hard irq
handlers. We can fix this by deferring to a workqueue the dumping of
the fast pool into the input pool.

We accomplish this with some careful rules on fast_pool->count:

  - When it's incremented to >= 64, we schedule the work.
  - If the top bit is set, we never schedule the work, even if >= 64.
  - The worker is responsible for setting it back to 0 when it's done.
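The rules above amount to a small state machine driven by a single
atomic counter. As an illustration only (not part of this patch), here
is a minimal userspace C sketch of the same scheme; schedule_mix() is a
hypothetical stand-in for queue_work_on(), and the HZ-based timeout
path is omitted:

/*
 * Illustrative sketch of the fast_pool->count rules; not patch code.
 * The top bit marks "worker in flight"; the worker resets it to 0.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MIX_INFLIGHT (1U << 31)

static atomic_uint count;

static void schedule_mix(void)		/* stand-in for queue_work_on() */
{
	printf("mix worker scheduled\n");
}

/* The "hard irq handler" side: count events, schedule the worker once. */
static bool on_interrupt(void)
{
	unsigned int new_count = atomic_fetch_add(&count, 1) + 1;

	if (new_count & MIX_INFLIGHT)	/* worker already queued: never reschedule */
		return false;
	if (new_count < 64)		/* not enough events accumulated yet */
		return false;
	atomic_fetch_or(&count, MIX_INFLIGHT);
	schedule_mix();
	return true;
}

/* The "worker" side: after draining the pool, reset the counter to 0. */
static void mix_done(void)
{
	atomic_store(&count, 0);	/* clears MIX_INFLIGHT as well */
}

int main(void)
{
	for (int i = 0; i < 200; i++)
		on_interrupt();		/* prints exactly once, at the 64th event */
	mix_done();
	return 0;
}

Running it prints "mix worker scheduled" exactly once: the 64th event
queues the work, and the top bit then suppresses rescheduling until
mix_done() resets the counter.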
There are two small issues around using workqueues for this purpose that
we work around.

The first issue is that mix_interrupt_randomness() might be migrated to
another CPU during CPU hotplug. This issue is rectified by checking that
it hasn't been migrated (after disabling irqs). If it has been migrated,
then we set the count to zero, so that when the CPU comes online again,
it can requeue the work. As part of this, we switch to using an
atomic_t, so that the increment in the irq handler doesn't wipe out the
zeroing if the CPU comes back online while this worker is running.

The second issue is that, though relatively minor in effect, we probably
want to make sure we get a consistent view of the pool onto the stack,
in case it's interrupted by an irq while reading. To do this, we don't
reenable irqs until after the copy. There are only 18 instructions
between the cli and sti, so this is a pretty tiny window.

Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Theodore Ts'o
Cc: Jonathan Neuschäfer
Acked-by: Sebastian Andrzej Siewior
Reviewed-by: Sultan Alsawaf
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
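A note on the memory ordering (an editorial sketch, not part of the
upstream patch): the atomic_set_release() in the worker pairs with the
atomic_inc_return_acquire() in the irq handler, so a handler that
observes the reset counter also observes the worker's earlier stores.
Roughly, in C11 terms, with pool_drained as a hypothetical stand-in for
those stores:

/*
 * Rough C11 analogue of the pairing between atomic_set_release() in
 * mix_interrupt_randomness() and atomic_inc_return_acquire() in
 * add_interrupt_randomness(); illustrative only.
 */
#include <stdatomic.h>

static atomic_uint count;
static int pool_drained;	/* stand-in for the worker's prior writes */

/* Worker: publish its writes, then reset the counter with release. */
static void worker_done(void)
{
	pool_drained = 1;
	atomic_store_explicit(&count, 0, memory_order_release);
}

/*
 * Irq handler: the acquire increment guarantees that if it reads the
 * reset counter value, it also sees pool_drained == 1 (the worker's
 * earlier stores), not a stale view of the pool.
 */
static unsigned int irq_event(void)
{
	return atomic_fetch_add_explicit(&count, 1,
					 memory_order_acquire) + 1;
}

int main(void)
{
	worker_done();
	return (int)irq_event();	/* 1: the increment saw the reset */
}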
 drivers/char/random.c | 63 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 14 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1174,9 +1174,10 @@ struct fast_pool {
 		u32 pool32[4];
 		u64 pool64[2];
 	};
+	struct work_struct mix;
 	unsigned long last;
+	atomic_t count;
 	u16 reg_idx;
-	u8 count;
 };
 
 /*
@@ -1226,12 +1227,49 @@ static u32 get_reg(struct fast_pool *f,
 	return *ptr;
 }
 
+static void mix_interrupt_randomness(struct work_struct *work)
+{
+	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
+	u32 pool[4];
+
+	/* Check to see if we're running on the wrong CPU due to hotplug. */
+	local_irq_disable();
+	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
+		local_irq_enable();
+		/*
+		 * If we are unlucky enough to have been moved to another CPU,
+		 * during CPU hotplug while the CPU was shutdown then we set
+		 * our count to zero atomically so that when the CPU comes
+		 * back online, it can enqueue work again. The _release here
+		 * pairs with the atomic_inc_return_acquire in
+		 * add_interrupt_randomness().
+		 */
+		atomic_set_release(&fast_pool->count, 0);
+		return;
+	}
+
+	/*
+	 * Copy the pool to the stack so that the mixer always has a
+	 * consistent view, before we reenable irqs again.
+	 */
+	memcpy(pool, fast_pool->pool32, sizeof(pool));
+	atomic_set(&fast_pool->count, 0);
+	fast_pool->last = jiffies;
+	local_irq_enable();
+
+	mix_pool_bytes(pool, sizeof(pool));
+	credit_entropy_bits(1);
+	memzero_explicit(pool, sizeof(pool));
+}
+
 void add_interrupt_randomness(int irq)
 {
+	enum { MIX_INFLIGHT = 1U << 31 };
 	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
 	struct pt_regs *regs = get_irq_regs();
 	unsigned long now = jiffies;
 	cycles_t cycles = random_get_entropy();
+	unsigned int new_count;
 
 	if (cycles == 0)
 		cycles = get_reg(fast_pool, regs);
@@ -1251,12 +1289,13 @@ void add_interrupt_randomness(int irq)
 	}
 
 	fast_mix(fast_pool->pool32);
-	++fast_pool->count;
+	/* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). */
+	new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count);
 
 	if (unlikely(crng_init == 0)) {
-		if (fast_pool->count >= 64 &&
+		if (new_count >= 64 &&
 		    crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) {
-			fast_pool->count = 0;
+			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 			if (spin_trylock(&input_pool.lock)) {
 				_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
@@ -1266,20 +1305,16 @@ void add_interrupt_randomness(int irq)
 		return;
 	}
 
-	if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
+	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (!spin_trylock(&input_pool.lock))
+	if (new_count < 64 && !time_after(now, fast_pool->last + HZ))
 		return;
 
-	fast_pool->last = now;
-	_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
-	spin_unlock(&input_pool.lock);
-
-	fast_pool->count = 0;
-
-	/* Award one bit for the contents of the fast pool. */
-	credit_entropy_bits(1);
+	if (unlikely(!fast_pool->mix.func))
+		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+	atomic_or(MIX_INFLIGHT, &fast_pool->count);
+	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);