From: "Jason A. Donenfeld" <Jason@zx2c4.com>
To: tytso@mit.edu, linux-kernel@vger.kernel.org
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>, Greg Kroah-Hartman
Subject: [PATCH] random: always use batched entropy for get_random_u{32,64}
Date: Sun, 16 Feb 2020 17:18:36 +0100
Message-Id: <20200216161836.1976-1-Jason@zx2c4.com>

It turns out that RDRAND is pretty slow. Comparing these two
constructions:

  for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
    arch_get_random_long(&ret);

and

  long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
  extract_crng((u8 *)buf);

it amortizes out to 352 cycles per long for the top one and 107 cycles
per long for the bottom one, on Coffee Lake Refresh, Intel Core
i9-9880H. And importantly, the top one has the drawback of not
benefiting from the real rng, whereas the bottom one has all the nice
benefits of using our own chacha rng.

As get_random_u{32,64} gets used in more places (perhaps beyond what it
was originally intended for when it was introduced as
get_random_{int,long} back in the md5 monstrosity era), it seems like
it might be a good thing to strengthen its posture a tiny bit. Doing
this should only make it stronger, never weaker, because that pool is
already initialized with a bunch of rdrand data (when available). This
way, we get the benefits of the hardware rng as well as our own rng.
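(As an aside on the numbers above: the exact harness used is not part
of this patch. A rough sketch of the kind of in-kernel measurement that
could produce per-long cycle counts like these, where bench_rng() is a
hypothetical helper that would have to live in drivers/char/random.c
since extract_crng() is static there, might look like:

  #include <asm/timex.h>      /* get_cycles(), cycles_t */
  #include <crypto/chacha.h>  /* CHACHA_BLOCK_SIZE */

  /* Hypothetical sketch; not the harness actually used. */
  static void bench_rng(void)
  {
    long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
    unsigned long ret, nlongs = CHACHA_BLOCK_SIZE / sizeof(long);
    cycles_t t0, t1;
    int i;

    /* Pull one long at a time out of the hardware rng. */
    t0 = get_cycles();
    for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
      arch_get_random_long(&ret);
    t1 = get_cycles();
    pr_info("rdrand: %llu cycles/long\n",
            (unsigned long long)(t1 - t0) / nlongs);

    /* Pull a whole chacha block out of our own rng at once. */
    t0 = get_cycles();
    extract_crng((u8 *)buf);
    t1 = get_cycles();
    pr_info("chacha: %llu cycles/long\n",
            (unsigned long long)(t1 - t0) / nlongs);
  }

In practice this would be repeated many times and averaged, which is
where the amortized figures come from.)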
Another benefit of this is that we no longer hit the pitfalls of the
recent stream of AMD bugs in RDRAND. One often-used code pattern for
various things is:

  do {
    val = get_random_u32();
  } while (hash_table_contains_key(val));

That recent AMD bug rendered this pattern useless: a stuck RDRAND that
keeps returning the same value can spin that loop forever once the
value is already in the table. By contrast, we're really very certain
that chacha20 output will give well-distributed numbers, no matter
what. So, this simplification seems better both from a security
perspective and from a performance perspective.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman
---
 drivers/char/random.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index c7f9584de2c8..037fdb182b4d 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2166,15 +2166,6 @@ u64 get_random_u64(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-#if BITS_PER_LONG == 64
-	if (arch_get_random_long((unsigned long *)&ret))
-		return ret;
-#else
-	if (arch_get_random_long((unsigned long *)&ret) &&
-	    arch_get_random_long((unsigned long *)&ret + 1))
-		return ret;
-#endif
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u64);
@@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
 	struct batched_entropy *batch;
 	static void *previous;
 
-	if (arch_get_random_int(&ret))
-		return ret;
-
 	warn_unseeded_randomness(&previous);
 
 	batch = raw_cpu_ptr(&batched_entropy_u32);
-- 
2.25.0
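(For reference, here is roughly what get_random_u64() is left looking
like once the arch_get_random_long() fast path is gone. This is a
simplified reconstruction of the surrounding code in
drivers/char/random.c for illustration, not part of the patch:

  u64 get_random_u64(void)
  {
    u64 ret;
    unsigned long flags;
    struct batched_entropy *batch;
    static void *previous;

    warn_unseeded_randomness(&previous);

    batch = raw_cpu_ptr(&batched_entropy_u64);
    spin_lock_irqsave(&batch->batch_lock, flags);
    if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
      /* Batch is empty; refill it from the chacha rng. */
      extract_crng((u8 *)batch->entropy_u64);
      batch->position = 0;
    }
    ret = batch->entropy_u64[batch->position++];
    spin_unlock_irqrestore(&batch->batch_lock, flags);
    return ret;
  }

Every call is then served from a per-cpu batch of chacha20 output, with
extract_crng() running only once every eight calls to refill it.)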