From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner,
 Filipe Manana, Peter Zijlstra, Borislav Petkov, Theodore Ts'o,
 "Jason A. Donenfeld"
Subject: [PATCH 5.17 089/111] random: do not use input pool from hard IRQs
Date: Fri, 27 May 2022 10:50:01 +0200
Message-Id: <20220527084832.063873145@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220527084819.133490171@linuxfoundation.org>
References: <20220527084819.133490171@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Jason A. Donenfeld"

commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream.

Years ago, a separate fast pool was added for interrupts, so that the
cost associated with taking the input pool spinlocks and mixing into it
would be avoided in places where latency is critical. However, one
oversight was that add_input_randomness() and add_disk_randomness()
still sometimes are called directly from the interrupt handler, rather
than being deferred to a thread. This means that some unlucky interrupts
will be caught doing a blake2s_compress() call and potentially spinning
on input_pool.lock, which can also be taken by unprivileged users by
writing into /dev/urandom.

In order to fix this, add_timer_randomness() now checks whether it is
being called from a hard IRQ and if so, just mixes into the per-cpu IRQ
fast pool using fast_mix(), which is much faster and can be done
lock-free. A nice consequence of this, as well, is that it means hard
IRQ context FPU support is likely no longer useful.

The entropy estimation algorithm used by add_timer_randomness() is also
somewhat different than the one used for add_interrupt_randomness(). The
former looks at deltas of deltas of deltas, while the latter just waits
for 64 interrupts for one bit or for one second since the last bit. In
order to bridge these, and since add_interrupt_randomness() runs after
an add_timer_randomness() that's called from hard IRQ, we add to the
fast pool credit the related amount, and then subtract one to account
for add_interrupt_randomness()'s contribution.

A downside of this, however, is that the num argument is potentially
attacker controlled, which puts a bit more pressure on the fast_mix()
sponge to do more than it's really intended to do. As a mitigating
factor, the first 96 bits of input aren't attacker controlled (a cycle
counter followed by zeros), which means it's essentially two rounds of
siphash rather than one, which is somewhat better. It's also not that
much different from add_interrupt_randomness()'s use of the irq stack
instruction pointer register.

Cc: Thomas Gleixner
Cc: Filipe Manana
Cc: Peter Zijlstra
Cc: Borislav Petkov
Cc: Theodore Ts'o
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c | 51 +++++++++++++++++++++++++++++++++++---------------
 1 file changed, 36 insertions(+), 15 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1084,6 +1084,7 @@ static void mix_interrupt_randomness(str
 	 * we don't wind up "losing" some.
 	 */
 	unsigned long pool[2];
+	unsigned int count;
 
 	/* Check to see if we're running on the wrong CPU due to hotplug. */
 	local_irq_disable();
@@ -1097,12 +1098,13 @@ static void mix_interrupt_randomness(str
 	 * consistent view, before we reenable irqs again.
 	 */
 	memcpy(pool, fast_pool->pool, sizeof(pool));
+	count = fast_pool->count;
 	fast_pool->count = 0;
 	fast_pool->last = jiffies;
 	local_irq_enable();
 
 	mix_pool_bytes(pool, sizeof(pool));
-	credit_init_bits(1);
+	credit_init_bits(max(1u, (count & U16_MAX) / 64));
 
 	memzero_explicit(pool, sizeof(pool));
 }
@@ -1142,22 +1144,30 @@ struct timer_rand_state {
 
 /*
  * This function adds entropy to the entropy "pool" by using timing
- * delays. It uses the timer_rand_state structure to make an estimate
- * of how many bits of entropy this call has added to the pool.
- *
- * The number "num" is also added to the pool - it should somehow describe
- * the type of event which just happened. This is currently 0-255 for
- * keyboard scan codes, and 256 upwards for interrupts.
+ * delays. It uses the timer_rand_state structure to make an estimate
+ * of how many bits of entropy this call has added to the pool. The
+ * value "num" is also added to the pool; it should somehow describe
+ * the type of event that just happened.
  */
 static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
 {
 	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
 	long delta, delta2, delta3;
+	unsigned int bits;
 
-	spin_lock_irqsave(&input_pool.lock, flags);
-	_mix_pool_bytes(&entropy, sizeof(entropy));
-	_mix_pool_bytes(&num, sizeof(num));
-	spin_unlock_irqrestore(&input_pool.lock, flags);
+	/*
+	 * If we're in a hard IRQ, add_interrupt_randomness() will be called
+	 * sometime after, so mix into the fast pool.
+	 */
+	if (in_hardirq()) {
+		fast_mix(this_cpu_ptr(&irq_randomness)->pool,
+			 (unsigned long[2]){ entropy, num });
+	} else {
+		spin_lock_irqsave(&input_pool.lock, flags);
+		_mix_pool_bytes(&entropy, sizeof(entropy));
+		_mix_pool_bytes(&num, sizeof(num));
+		spin_unlock_irqrestore(&input_pool.lock, flags);
+	}
 
 	if (crng_ready())
 		return;
@@ -1188,11 +1198,22 @@ static void add_timer_randomness(struct
 	delta = delta3;
 
 	/*
-	 * delta is now minimum absolute delta.
-	 * Round down by 1 bit on general principles,
-	 * and limit entropy estimate to 12 bits.
+	 * delta is now minimum absolute delta. Round down by 1 bit
+	 * on general principles, and limit entropy estimate to 11 bits.
+	 */
+	bits = min(fls(delta >> 1), 11);
+
+	/*
+	 * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness()
+	 * will run after this, which uses a different crediting scheme of 1 bit
+	 * per every 64 interrupts. In order to let that function do accounting
+	 * close to the one in this function, we credit a full 64/64 bit per bit,
+	 * and then subtract one to account for the extra one added.
 	 */
-	credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11));
+	if (in_hardirq())
+		this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
+	else
+		credit_init_bits(bits);
 }
 
 void add_input_randomness(unsigned int type, unsigned int code,
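
A note for readers following along: the "deltas of deltas of deltas" estimate
that the comment and commit message refer to is computed just above the
context shown in the last hunk. Below is a rough standalone sketch of the idea
in plain userspace C, not the kernel's exact code; struct timer_state,
min_abs() and estimate_bits() are invented names used only for illustration.

#include <stdio.h>

/* Simplified stand-in for struct timer_rand_state. */
struct timer_state {
	long last_time;
	long last_delta;
	long last_delta2;
};

static long min_abs(long a, long b)
{
	if (a < 0)
		a = -a;
	if (b < 0)
		b = -b;
	return a < b ? a : b;
}

/*
 * Estimate entropy bits for one timer event: take the first, second and
 * third order deltas of the timestamp, keep the smallest absolute value,
 * round down by one bit and cap at 11 bits (as in the patch).
 */
static int estimate_bits(struct timer_state *st, long now)
{
	long delta = now - st->last_time;
	long delta2 = delta - st->last_delta;
	long delta3 = delta2 - st->last_delta2;
	long m;
	int bits = 0;

	st->last_time = now;
	st->last_delta = delta;
	st->last_delta2 = delta2;

	m = min_abs(min_abs(delta, delta2), delta3);

	/* fls(m >> 1): position of the highest set bit after halving. */
	for (unsigned long v = (unsigned long)(m >> 1); v; v >>= 1)
		bits++;

	return bits > 11 ? 11 : bits;
}

int main(void)
{
	struct timer_state st = { 0 };
	long samples[] = { 1000, 2130, 3190, 4321, 5400 };

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("event %u: ~%d bits\n", i, estimate_bits(&st, samples[i]));
	return 0;
}

With the roughly evenly spaced sample timestamps above, the estimate falls
from 9 bits for the first event to 5 bits by the fifth: regular events earn
less credit than erratic ones, which is the behaviour the heuristic is after.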
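
And a tiny worked example of the crediting arithmetic described in the last
comment of the patch: in hard IRQ context the timer path adds
max(1, bits * 64) - 1 to the per-cpu fast pool count, the interrupt that
follows is counted once more by add_interrupt_randomness(), and
mix_interrupt_randomness() later converts the accumulated count back into
max(1, count / 64) credited bits. Again plain userspace C under those
assumptions, just to make the numbers concrete (the real code also masks
count with U16_MAX and only does the conversion after 64 interrupts or one
second):

#include <stdio.h>

int main(void)
{
	/* Entropy estimate for one timer event, as computed above. */
	unsigned int bits = 5;

	/* Timer path in hard IRQ: bump the fast pool count instead of
	 * crediting the input pool directly: max(1u, bits * 64) - 1. */
	unsigned int count = (bits * 64 > 1 ? bits * 64 : 1) - 1;

	/* The surrounding interrupt is counted once by add_interrupt_randomness(). */
	count += 1;

	/* Later, mix_interrupt_randomness() credits max(1u, count / 64) bits. */
	unsigned int credited = count / 64 > 1 ? count / 64 : 1;

	printf("bits=%u -> count=%u -> credited=%u bits\n", bits, count, credited);
	/* Prints: bits=5 -> count=320 -> credited=5 bits */
	return 0;
}

So the interrupt's own +1 is pre-compensated by the subtraction, and the net
credit matches the timer path's estimate.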