From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner,
    Peter Zijlstra, Theodore Ts'o, Jonathan Neuschäfer,
    Sebastian Andrzej Siewior, Sultan Alsawaf, Dominik Brodowski,
    "Jason A. Donenfeld"
Subject: [PATCH 5.15 073/145] random: defer fast pool mixing to worker
Date: Fri, 27 May 2022 10:49:34 +0200
Message-Id: <20220527084859.383947660@linuxfoundation.org>
In-Reply-To: <20220527084850.364560116@linuxfoundation.org>
References: <20220527084850.364560116@linuxfoundation.org>

From: "Jason A. Donenfeld"

commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream.

On PREEMPT_RT, it's problematic to take spinlocks from hard irq
handlers.
We can fix this by deferring the dumping of the fast pool into the
input pool to a workqueue. We accomplish this with some careful rules
on fast_pool->count:

  - When it's incremented to >= 64, we schedule the work.
  - If the top bit is set, we never schedule the work, even if >= 64.
  - The worker is responsible for setting it back to 0 when it's done.

There are two small issues around using workqueues for this purpose
that we work around; a stand-alone sketch of the resulting counting
protocol follows the diffstat below.

The first issue is that mix_interrupt_randomness() might be migrated to
another CPU during CPU hotplug. This issue is rectified by checking
that it hasn't been migrated (after disabling irqs). If it has been
migrated, then we set the count to zero, so that when the CPU comes
online again, it can requeue the work. As part of this, we switch to
using an atomic_t, so that the increment in the irq handler doesn't
wipe out the zeroing of the count if the CPU comes back online while
this worker is running.

The second issue is that, though relatively minor in effect, we
probably want to make sure we get a consistent view of the pool onto
the stack, in case it's interrupted by an irq while reading. To do
this, we don't reenable irqs until after the copy. There are only 18
instructions between the cli and sti, so this is a pretty tiny window.

Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Theodore Ts'o
Cc: Jonathan Neuschäfer
Acked-by: Sebastian Andrzej Siewior
Reviewed-by: Sultan Alsawaf
Reviewed-by: Dominik Brodowski
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c | 63 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 14 deletions(-)
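[ Not part of the patch: a minimal stand-alone model of the counting
  protocol described above, using C11 atomics in place of the kernel's
  atomic_t helpers. The names MIX_THRESHOLD, on_interrupt() and
  worker_done() are illustrative only, not kernel symbols. ]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MIX_INFLIGHT	(1U << 31)	/* top bit: worker already queued */
#define MIX_THRESHOLD	64U		/* queue the worker at >= 64 events */

static atomic_uint count;

/* irq side: returns true when the caller should queue the worker */
static bool on_interrupt(void)
{
	/* the acquire pairs with the release in worker_done() */
	unsigned int new_count =
		atomic_fetch_add_explicit(&count, 1, memory_order_acquire) + 1;

	if (new_count & MIX_INFLIGHT)	/* a worker is already in flight */
		return false;
	if (new_count < MIX_THRESHOLD)	/* not enough events yet */
		return false;
	atomic_fetch_or(&count, MIX_INFLIGHT);
	return true;
}

/* worker side: done mixing; allow the next batch to be scheduled */
static void worker_done(void)
{
	atomic_store_explicit(&count, 0, memory_order_release);
}

int main(void)
{
	int queued = 0;

	for (int i = 0; i < 200; i++)
		queued += on_interrupt();
	printf("workers queued: %d\n", queued);	/* prints 1, not 137 */
	worker_done();
	return 0;
}

Because the top bit both suppresses further scheduling and is only
cleared by the worker's release store, at most one worker can be in
flight per pool, and its reset is guaranteed to be visible before any
subsequent increment observes a count below the threshold.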
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1178,9 +1178,10 @@ struct fast_pool {
 		u32 pool32[4];
 		u64 pool64[2];
 	};
+	struct work_struct mix;
 	unsigned long last;
+	atomic_t count;
 	u16 reg_idx;
-	u8 count;
 };
 
 /*
@@ -1230,12 +1231,49 @@ static u32 get_reg(struct fast_pool *f,
 	return *ptr;
 }
 
+static void mix_interrupt_randomness(struct work_struct *work)
+{
+	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
+	u32 pool[4];
+
+	/* Check to see if we're running on the wrong CPU due to hotplug. */
+	local_irq_disable();
+	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
+		local_irq_enable();
+		/*
+		 * If we are unlucky enough to have been moved to another CPU,
+		 * during CPU hotplug while the CPU was shutdown then we set
+		 * our count to zero atomically so that when the CPU comes
+		 * back online, it can enqueue work again. The _release here
+		 * pairs with the atomic_inc_return_acquire in
+		 * add_interrupt_randomness().
+		 */
+		atomic_set_release(&fast_pool->count, 0);
+		return;
+	}
+
+	/*
+	 * Copy the pool to the stack so that the mixer always has a
+	 * consistent view, before we reenable irqs again.
+	 */
+	memcpy(pool, fast_pool->pool32, sizeof(pool));
+	atomic_set(&fast_pool->count, 0);
+	fast_pool->last = jiffies;
+	local_irq_enable();
+
+	mix_pool_bytes(pool, sizeof(pool));
+	credit_entropy_bits(1);
+	memzero_explicit(pool, sizeof(pool));
+}
+
 void add_interrupt_randomness(int irq)
 {
+	enum { MIX_INFLIGHT = 1U << 31 };
 	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
 	struct pt_regs *regs = get_irq_regs();
 	unsigned long now = jiffies;
 	cycles_t cycles = random_get_entropy();
+	unsigned int new_count;
 
 	if (cycles == 0)
 		cycles = get_reg(fast_pool, regs);
@@ -1255,12 +1293,13 @@ void add_interrupt_randomness(int irq)
 	}
 
 	fast_mix(fast_pool->pool32);
-	++fast_pool->count;
+	/* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). */
+	new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count);
 
 	if (unlikely(crng_init == 0)) {
-		if (fast_pool->count >= 64 &&
+		if (new_count >= 64 &&
 		    crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) {
-			fast_pool->count = 0;
+			atomic_set(&fast_pool->count, 0);
 			fast_pool->last = now;
 			if (spin_trylock(&input_pool.lock)) {
 				_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
@@ -1270,20 +1309,16 @@ void add_interrupt_randomness(int irq)
 		return;
 	}
 
-	if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
+	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (!spin_trylock(&input_pool.lock))
+	if (new_count < 64 && !time_after(now, fast_pool->last + HZ))
 		return;
 
-	fast_pool->last = now;
-	_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
-	spin_unlock(&input_pool.lock);
-
-	fast_pool->count = 0;
-
-	/* Award one bit for the contents of the fast pool. */
-	credit_entropy_bits(1);
+	if (unlikely(!fast_pool->mix.func))
+		INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+	atomic_or(MIX_INFLIGHT, &fast_pool->count);
+	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
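[ Not part of the patch: the tail of add_interrupt_randomness() above
  also shows a reusable idiom: a per-CPU work item initialized lazily
  on first use and always queued on the local CPU, so the worker drains
  the pool it was queued for. A condensed kernel-style sketch of just
  that idiom; my_pool, my_pools, drain_worker and schedule_drain are
  illustrative names, not kernel symbols. ]

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

struct my_pool {
	struct work_struct mix;
	atomic_t count;
};

static DEFINE_PER_CPU(struct my_pool, my_pools);

static void drain_worker(struct work_struct *work)
{
	struct my_pool *p = container_of(work, struct my_pool, mix);

	/* ... drain p here ..., then allow rescheduling: */
	atomic_set_release(&p->count, 0);
}

/* hard irq context: no sleeping locks allowed, so defer to a worker;
 * callers pass this_cpu_ptr(&my_pools) */
static void schedule_drain(struct my_pool *p)
{
	if (unlikely(!p->mix.func))	/* first use on this CPU */
		INIT_WORK(&p->mix, drain_worker);
	atomic_or(1U << 31, &p->count);	/* mark a worker in flight */
	queue_work_on(raw_smp_processor_id(), system_highpri_wq, &p->mix);
}

Queuing on the local CPU rather than using schedule_work() keeps the
data CPU-local in the common case; the re-check in the worker (as in
mix_interrupt_randomness() above) covers the rare case where hotplug
moves the work elsewhere anyway.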