From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Jason A. Donenfeld", Theodore Ts'o
Subject: [PATCH 5.6 17/38] random: always use batched entropy for get_random_u{32,64}
Date: Sat, 11 Apr 2020 14:09:54 +0200
Message-Id: <20200411115501.052381006@linuxfoundation.org>
In-Reply-To: <20200411115459.324496182@linuxfoundation.org>
References: <20200411115459.324496182@linuxfoundation.org>

From: Jason A. Donenfeld

commit 69efea712f5b0489e67d07565aad5c94e09a3e52 upstream.

It turns out that RDRAND is pretty slow. Comparing these two
constructions:

	for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
		arch_get_random_long(&ret);

and

	long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
	extract_crng((u8 *)buf);

it amortizes out to 352 cycles per long for the top one and 107 cycles
per long for the bottom one, on Coffee Lake Refresh, Intel Core
i9-9880H.

And importantly, the top one has the drawback of not benefiting from
the real rng, whereas the bottom one has all the nice benefits of using
our own chacha rng. As get_random_u{32,64} gets used in more places
(perhaps beyond what it was originally intended for when it was
introduced as get_random_{int,long} back in the md5 monstrosity era),
it seems like it might be a good thing to strengthen its posture a tiny
bit. Doing this should only be stronger and not any weaker because that
pool is already initialized with a bunch of rdrand data (when
available). This way, we get the benefits of the hardware rng as well
as our own rng.

Another benefit of this is that we no longer hit pitfalls of the recent
stream of AMD bugs in RDRAND. One often used code pattern for various
things is:

	do {
		val = get_random_u32();
	} while (hash_table_contains_key(val));

That recent AMD bug rendered that pattern useless, whereas we're really
very certain that chacha20 output will give pretty distributed numbers,
no matter what.

So, this simplification seems better both from a security perspective
and from a performance perspective.

Signed-off-by: Jason A. Donenfeld
Reviewed-by: Greg Kroah-Hartman
Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
Signed-off-by: Theodore Ts'o
Signed-off-by: Greg Kroah-Hartman
---
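The hunks below drop the per-call arch_get_random_*() fast path so that
both helpers always read from the per-CPU batch. As an illustration of
why batching wins, here is a minimal userspace model of the idea; it is
only a sketch, with fill_block() as a hypothetical stand-in for the
kernel's extract_crng() (and rand() standing in for a real CSPRNG):

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	#define BLOCK_SIZE 64	/* mirrors CHACHA_BLOCK_SIZE */

	/* Hypothetical stand-in for extract_crng(); NOT a real CSPRNG. */
	static void fill_block(uint8_t *buf, size_t len)
	{
		for (size_t i = 0; i < len; i++)
			buf[i] = (uint8_t)rand();
	}

	/* Hand out 8-byte words from a block, refilling only when it runs dry. */
	static uint64_t batched_random_u64(void)
	{
		static uint64_t batch[BLOCK_SIZE / sizeof(uint64_t)];
		static size_t position;

		if (position % (BLOCK_SIZE / sizeof(uint64_t)) == 0) {
			fill_block((uint8_t *)batch, sizeof(batch));
			position = 0;
		}
		return batch[position++];
	}

	int main(void)
	{
		srand((unsigned)time(NULL));
		for (int i = 0; i < 4; i++)
			printf("%016llx\n",
			       (unsigned long long)batched_random_u64());
		return 0;
	}

In the kernel the batch is per-CPU and guarded by batch->batch_lock, so
one bulk chacha extraction is amortized over many small requests; that
amortization is the 107 vs. 352 cycles-per-long difference measured
above.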
 drivers/char/random.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2149,11 +2149,11 @@ struct batched_entropy {
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
- * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy. In order to ensure
+ * number is good as /dev/urandom, but there is no backtrack protection, with
+ * the goal of being quite fast and not depleting entropy. In order to ensure
  * that the randomness provided by this function is okay, the function
- * wait_for_random_bytes() should be called and return 0 at least once
- * at any point prior.
+ * wait_for_random_bytes() should be called and return 0 at least once at any
+ * point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
 	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
@@ -2166,15 +2166,6 @@ u64 get_random_u64(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-#if BITS_PER_LONG == 64
-	if (arch_get_random_long((unsigned long *)&ret))
-		return ret;
-#else
-	if (arch_get_random_long((unsigned long *)&ret) &&
-	    arch_get_random_long((unsigned long *)&ret + 1))
-		return ret;
-#endif
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u64);
@@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-	if (arch_get_random_int(&ret))
-		return ret;
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u32);
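For context, after this patch the body of get_random_u64() reduces to
just the batched path. Reconstructed roughly from the hunk context
above and the surrounding 5.6 source (the locking and refill lines sit
outside the diff, so treat this as a sketch, not a verbatim quote):

	u64 get_random_u64(void)
	{
		u64 ret;
		unsigned long flags;
		struct batched_entropy *batch;
		static void *previous;

		warn_unseeded_randomness(&previous);

		batch = raw_cpu_ptr(&batched_entropy_u64);
		spin_lock_irqsave(&batch->batch_lock, flags);
		if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
			/* Batch exhausted: refill it from the chacha rng. */
			extract_crng((u8 *)batch->entropy_u64);
			batch->position = 0;
		}
		ret = batch->entropy_u64[batch->position++];
		spin_unlock_irqrestore(&batch->batch_lock, flags);
		return ret;
	}

get_random_u32() follows the same shape against batched_entropy_u32.
The arch RNG is not discarded by this change: the pool feeding the
chacha state is still seeded with rdrand data when available, which is
why the commit message can argue the result is stronger rather than
weaker.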