From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: tytso@mit.edu, linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org
Cc: "Jason A.
Donenfeld" <Jason@zx2c4.com>
Subject: [PATCH v2] random: always use batched entropy for get_random_u{32,64}
Date: Fri, 21 Feb 2020 21:10:37 +0100
Message-Id: <20200221201037.30231-1-Jason@zx2c4.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It turns out that RDRAND is pretty slow. Comparing these two constructions:

  for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
          arch_get_random_long(&ret);

and

  long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
  extract_crng((u8 *)buf);

it amortizes out to 352 cycles per long for the top one and 107 cycles per
long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H.

And importantly, the top one has the drawback of not benefiting from the
real rng, whereas the bottom one has all the nice benefits of using our own
chacha rng. As get_random_u{32,64} gets used in more places (perhaps beyond
what it was originally intended for when it was introduced as
get_random_{int,long} back in the md5 monstrosity era), it seems like it
might be a good thing to strengthen its posture a tiny bit. Doing this
should only be stronger and not any weaker, because that pool is already
initialized with a bunch of rdrand data (when available). This way, we get
the benefits of the hardware rng as well as our own rng.

Another benefit of this is that we no longer hit the pitfalls of the recent
stream of AMD bugs in RDRAND. One often-used code pattern for various
things is:

  do {
          val = get_random_u32();
  } while (hash_table_contains_key(val));

That recent AMD bug rendered that pattern useless, whereas we're really
very certain that chacha20 output will give pretty distributed numbers, no
matter what.

So, this simplification seems better both from a security perspective and
from a performance perspective.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
Changes v1->v2:
 - Tony Luck suggested I also update the comment that referenced the
   no-longer relevant RDRAND.

 drivers/char/random.c | 20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c7f9584de2c8..a6b77a850ddd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2149,11 +2149,11 @@ struct batched_entropy {
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
- * number is either as good as RDRAND or as good as /dev/urandom, with the
- * goal of being quite fast and not depleting entropy. In order to ensure
+ * number is good as /dev/urandom, but there is no backtrack protection, with
+ * the goal of being quite fast and not depleting entropy. In order to ensure
  * that the randomness provided by this function is okay, the function
- * wait_for_random_bytes() should be called and return 0 at least once
- * at any point prior.
+ * wait_for_random_bytes() should be called and return 0 at least once at any
+ * point prior.
  */
 static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
 	.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
@@ -2166,15 +2166,6 @@ u64 get_random_u64(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-#if BITS_PER_LONG == 64
-	if (arch_get_random_long((unsigned long *)&ret))
-		return ret;
-#else
-	if (arch_get_random_long((unsigned long *)&ret) &&
-	    arch_get_random_long((unsigned long *)&ret + 1))
-		return ret;
-#endif
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u64);
@@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-	if (arch_get_random_int(&ret))
-		return ret;
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u32);
-- 
2.25.0