From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner,
 Peter Zijlstra, Theodore Ts'o, Jonathan Neuschäfer,
 Sebastian Andrzej Siewior, Sultan Alsawaf, Dominik Brodowski,
 "Jason A. Donenfeld"
Subject: [PATCH 4.19 105/234] random: defer fast pool mixing to worker
Date: Thu, 23 Jun 2022 18:42:52 +0200
Message-Id: <20220623164346.029788371@linuxfoundation.org>
In-Reply-To: <20220623164343.042598055@linuxfoundation.org>
References: <20220623164343.042598055@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream.

On PREEMPT_RT, it's problematic to take spinlocks from hard irq
handlers. We can fix this by deferring to a workqueue the dumping of
the fast pool into the input pool.

We accomplish this with some careful rules on fast_pool->count:

  - When it's incremented to >= 64, we schedule the work.
  - If the top bit is set, we never schedule the work, even if >= 64.
  - The worker is responsible for setting it back to 0 when it's done.
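These three rules are small enough to model outside the kernel. The
following is a minimal, self-contained userspace sketch of the counting
scheme, using C11 atomics in place of the kernel's atomic_t. Note that
fake_schedule_work() and on_worker_done() are invented stand-ins for
queue_work_on() and the worker's completion, not kernel APIs, and the
crng_init == 0 early path is left out.

#include <stdatomic.h>
#include <stdio.h>

#define MIX_INFLIGHT (1U << 31)	/* top bit: worker already scheduled */

static atomic_uint count;

/* Stand-in for queue_work_on(); just reports that work would be queued. */
static void fake_schedule_work(void)
{
	printf("work scheduled\n");
}

/* Models the irq-handler side: one call per interrupt. */
static void on_interrupt(void)
{
	/* Like atomic_inc_return_acquire(): pairs with the release store
	 * in on_worker_done(). */
	unsigned int new_count = atomic_fetch_add_explicit(&count, 1,
					memory_order_acquire) + 1;

	if (new_count & MIX_INFLIGHT)
		return;	/* rule 2: worker pending, never reschedule */
	if (new_count < 64)
		return;	/* rule 1: not enough samples accumulated yet */

	atomic_fetch_or_explicit(&count, MIX_INFLIGHT, memory_order_relaxed);
	fake_schedule_work();
}

/* Models the worker side: rule 3, reset the count once the mix is done. */
static void on_worker_done(void)
{
	atomic_store_explicit(&count, 0, memory_order_release);
}

int main(void)
{
	for (int i = 0; i < 200; i++)
		on_interrupt();	/* schedules exactly once, on the 64th call */
	on_worker_done();
	for (int i = 0; i < 64; i++)
		on_interrupt();	/* schedules once more after the reset */
	return 0;
}

Folding the in-flight flag into the top bit of the counter, rather than
keeping a separate flag word, is what lets the worker later clear both
the count and the flag with a single atomic store.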
There are two small issues around using workqueues for this purpose that
we work around.

The first issue is that mix_interrupt_randomness() might be migrated to
another CPU during CPU hotplug. This issue is rectified by checking that
it hasn't been migrated (after disabling irqs). If it has been migrated,
then we set the count to zero, so that when the CPU comes online again,
it can requeue the work. As part of this, we switch to using an
atomic_t, so that the increment in the irq handler doesn't wipe out the
zeroing if the CPU comes back online while this worker is running.

The second issue is that, though relatively minor in effect, we probably
want to make sure we get a consistent view of the pool onto the stack,
in case it's interrupted by an irq while reading. To do this, we don't
reenable irqs until after the copy. There are only 18 instructions
between the cli and sti, so this is a pretty tiny window.

Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Theodore Ts'o
Cc: Jonathan Neuschäfer
Acked-by: Sebastian Andrzej Siewior
Reviewed-by: Sultan Alsawaf
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c | 63 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 14 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1173,9 +1173,10 @@ struct fast_pool {
 		u32 pool32[4];
 		u64 pool64[2];
 	};
+	struct work_struct mix;
 	unsigned long last;
+	atomic_t count;
 	u16 reg_idx;
-	u8 count;
 };
 
 /*
@@ -1225,12 +1226,49 @@ static u32 get_reg(struct fast_pool *f,
 	return *ptr;
 }
 
+static void mix_interrupt_randomness(struct work_struct *work)
+{
+	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
+	u32 pool[4];
+
+	/* Check to see if we're running on the wrong CPU due to hotplug. */
+	local_irq_disable();
+	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
+		local_irq_enable();
+		/*
+		 * If we are unlucky enough to have been moved to another CPU,
+		 * during CPU hotplug while the CPU was shutdown then we set
+		 * our count to zero atomically so that when the CPU comes
+		 * back online, it can enqueue work again. The _release here
+		 * pairs with the atomic_inc_return_acquire in
+		 * add_interrupt_randomness().
+		 */
+		atomic_set_release(&fast_pool->count, 0);
+		return;
+	}
+
+	/*
+	 * Copy the pool to the stack so that the mixer always has a
+	 * consistent view, before we reenable irqs again.
+	 */
+	memcpy(pool, fast_pool->pool32, sizeof(pool));
+	atomic_set(&fast_pool->count, 0);
+	fast_pool->last = jiffies;
+	local_irq_enable();
+
+	mix_pool_bytes(pool, sizeof(pool));
+	credit_entropy_bits(1);
+	memzero_explicit(pool, sizeof(pool));
+}
+
 void add_interrupt_randomness(int irq)
 {
+	enum { MIX_INFLIGHT = 1U << 31 };
 	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
 	struct pt_regs *regs = get_irq_regs();
 	unsigned long now = jiffies;
 	cycles_t cycles = random_get_entropy();
+	unsigned int new_count;
 
 	if (cycles == 0)
 		cycles = get_reg(fast_pool, regs);
@@ -1250,12 +1288,13 @@ void add_interrupt_randomness(int irq)
 	}
 
 	fast_mix(fast_pool->pool32);
-	++fast_pool->count;
+	/* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). */
+	new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count);
 
 	if (unlikely(crng_init == 0)) {
-		if (fast_pool->count >= 64 &&
+		if (new_count >= 64 &&
 		    crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) {
-			fast_pool->count = 0;
+			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 			if (spin_trylock(&input_pool.lock)) {
 				_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
@@ -1265,20 +1304,16 @@ void add_interrupt_randomness(int irq)
 		return;
 	}
 
-	if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
+	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (!spin_trylock(&input_pool.lock))
+	if (new_count < 64 && !time_after(now, fast_pool->last + HZ))
 		return;
 
-	fast_pool->last = now;
-	_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
-	spin_unlock(&input_pool.lock);
-
-	fast_pool->count = 0;
-
-	/* Award one bit for the contents of the fast pool. */
-	credit_entropy_bits(1);
+	if (unlikely(!fast_pool->mix.func))
+		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+	atomic_or(MIX_INFLIGHT, &fast_pool->count);
+	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
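The two workarounds described above can also be seen in miniature. The
sketch below is a rough userspace analogue of mix_interrupt_randomness(),
again with C11 atomics; struct model_pool, model_worker(), and
model_mix() are invented names, and an explicit owner_cpu field stands
in for the kernel's this_cpu_ptr() identity check, which the real code
performs with irqs disabled.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MIX_INFLIGHT (1U << 31)

/* Hypothetical stand-in for the per-CPU fast pool; field names loosely
 * mirror the patch, but this is a userspace model, not kernel code. */
struct model_pool {
	uint32_t pool32[4];
	atomic_uint count;
	int owner_cpu;	/* stands in for this_cpu_ptr() identity */
};

/* Placeholder for mix_pool_bytes() + credit_entropy_bits(). */
static void model_mix(const uint32_t *data, size_t len)
{
	printf("mixed %zu bytes into the input pool\n", len);
}

/* Models the worker: bail out, releasing the count, if migrated. */
static void model_worker(struct model_pool *p, int running_cpu)
{
	uint32_t snapshot[4];

	if (p->owner_cpu != running_cpu) {
		/* Migrated by hotplug: release the count to zero so the
		 * owning CPU can requeue the work when it comes back.
		 * Pairs with the acquire increment on the irq side. */
		atomic_store_explicit(&p->count, 0, memory_order_release);
		return;
	}

	/* Snapshot the pool before mixing; the kernel does this copy with
	 * irqs still disabled, so the view is always consistent. */
	memcpy(snapshot, p->pool32, sizeof(snapshot));
	atomic_store_explicit(&p->count, 0, memory_order_relaxed);

	model_mix(snapshot, sizeof(snapshot));
}

int main(void)
{
	struct model_pool p = { .pool32 = { 1, 2, 3, 4 }, .owner_cpu = 0 };

	atomic_store(&p.count, 64U | MIX_INFLIGHT);
	model_worker(&p, 1);	/* wrong CPU: count released, nothing mixed */
	printf("count after migrated run: %u\n", atomic_load(&p.count));

	atomic_store(&p.count, 64U | MIX_INFLIGHT);
	model_worker(&p, 0);	/* owning CPU: consistent snapshot is mixed */
	return 0;
}

Zeroing the count on the migrated path, rather than leaving MIX_INFLIGHT
set, is what lets the original CPU schedule the work again once it comes
back online.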