From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Jason A. Donenfeld", Theodore Tso
Subject: [PATCH 4.9 024/264] random: always use batched entropy for get_random_u{32,64}
Date: Thu, 23 Jun 2022 18:40:17 +0200
Message-Id: <20220623164344.752706899@linuxfoundation.org>
In-Reply-To: <20220623164344.053938039@linuxfoundation.org>
References: <20220623164344.053938039@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Jason A. Donenfeld"

commit 69efea712f5b0489e67d07565aad5c94e09a3e52 upstream.

It turns out that RDRAND is pretty slow. Comparing these two
constructions:

  for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
          arch_get_random_long(&ret);

and

  long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
  extract_crng((u8 *)buf);

it amortizes out to 352 cycles per long for the top one and 107 cycles
per long for the bottom one, on Coffee Lake Refresh, Intel Core
i9-9880H.

And importantly, the top one has the drawback of not benefiting from
the real rng, whereas the bottom one has all the nice benefits of using
our own chacha rng. As get_random_u{32,64} gets used in more places
(perhaps beyond what it was originally intended for when it was
introduced as get_random_{int,long} back in the md5 monstrosity era),
it seems like it might be a good thing to strengthen its posture a tiny
bit. Doing this should only be stronger and not any weaker because that
pool is already initialized with a bunch of rdrand data (when
available).
This way, we get the benefits of the hardware rng as well as our own
rng.

Another benefit of this is that we no longer hit pitfalls of the recent
stream of AMD bugs in RDRAND. One often used code pattern for various
things is:

  do {
          val = get_random_u32();
  } while (hash_table_contains_key(val));

That recent AMD bug rendered that pattern useless, whereas we're really
very certain that chacha20 output will give pretty distributed numbers,
no matter what.

So, this simplification seems better both from a security perspective
and from a performance perspective.

Signed-off-by: Jason A. Donenfeld
Reviewed-by: Greg Kroah-Hartman
Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
Signed-off-by: Theodore Ts'o
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2233,11 +2233,11 @@ struct batched_entropy {
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
- * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy. In order to ensure
+ * number is good as /dev/urandom, but there is no backtrack protection, with
+ * the goal of being quite fast and not depleting entropy. In order to ensure
  * that the randomness provided by this function is okay, the function
- * wait_for_random_bytes() should be called and return 0 at least once
- * at any point prior.
+ * wait_for_random_bytes() should be called and return 0 at least once at any
+ * point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
 	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
@@ -2250,15 +2250,6 @@ u64 get_random_u64(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-#if BITS_PER_LONG == 64
-	if (arch_get_random_long((unsigned long *)&ret))
-		return ret;
-#else
-	if (arch_get_random_long((unsigned long *)&ret) &&
-	    arch_get_random_long((unsigned long *)&ret + 1))
-		return ret;
-#endif
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u64);
@@ -2283,9 +2274,6 @@ u32 get_random_u32(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-	if (arch_get_random_int(&ret))
-		return ret;
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u32);