From: Nicolai Stange
To: "Jason A. Donenfeld"
Cc: tytso@mit.edu, linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org
Subject: Re: [PATCH v2] random: always use batched entropy for get_random_u{32,64}
References: <20200221201037.30231-1-Jason@zx2c4.com>
Date: Wed, 01 Apr 2020 15:08:22 +0200
In-Reply-To: <20200221201037.30231-1-Jason@zx2c4.com> (Jason A. Donenfeld's message of "Fri, 21 Feb 2020 21:10:37 +0100")
Message-ID: <87d08rbbg9.fsf@suse.de>

Hi,
Donenfeld" writes: > diff --git a/drivers/char/random.c b/drivers/char/random.c > index c7f9584de2c8..a6b77a850ddd 100644 > --- a/drivers/char/random.c > +++ b/drivers/char/random.c > @@ -2149,11 +2149,11 @@ struct batched_entropy { > > /* > * Get a random word for internal kernel use only. The quality of the random > - * number is either as good as RDRAND or as good as /dev/urandom, with the > - * goal of being quite fast and not depleting entropy. In order to ensure > + * number is good as /dev/urandom, but there is no backtrack protection, with > + * the goal of being quite fast and not depleting entropy. In order to ensure > * that the randomness provided by this function is okay, the function > - * wait_for_random_bytes() should be called and return 0 at least once > - * at any point prior. > + * wait_for_random_bytes() should be called and return 0 at least once at any > + * point prior. > */ > static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { > .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock), > @@ -2166,15 +2166,6 @@ u64 get_random_u64(void) > struct batched_entropy *batch; > static void *previous; > > -#if BITS_PER_LONG == 64 > - if (arch_get_random_long((unsigned long *)&ret)) > - return ret; > -#else > - if (arch_get_random_long((unsigned long *)&ret) && > - arch_get_random_long((unsigned long *)&ret + 1)) > - return ret; > -#endif > - > warn_unseeded_randomness(&previous); I don't know if this has been reported elsewhere already, but this warning triggers on x86_64 with CONFIG_SLAB_FREELIST_HARDENED and CONFIG_SLAB_FREELIST_RANDOM during early boot now: [ 0.079775] random: random: get_random_u64 called from __kmem_cache_create+0x40/0x550 with crng_init=0 Strictly speaking, this isn't really a problem with this patch -- other archs without arch_get_random_long() support (like e.g. ppc64le) have showed this behaviour before. FWIW, I added a dump_stack() to warn_unseeded_randomness() in order to capture the resp. call paths of the offending get_random_u64/u32() callers, i.e. those invoked before rand_initialize(). Those are - __kmem_cache_create() and init_cache_random_seq()->cache_random_seq_create(), called indirectly quite a number of times from - kmem_cache_init() - kmem_cache_init()->create_kmalloc_caches() - vmalloc_init()->kmem_cache_create() - sched_init()->kmem_cache_create() - radix_tree_init()->kmem_cache_create() - workqueue_init_early()->kmem_cache_create() - trace_event_init()->kmem_cache_create() - __kmalloc()/kmem_cache_alloc()/kmem_cache_alloc_node()/kmem_cache_alloc_trace() ->__slab_alloc->___slab_alloc()->new_slab() called indirectly through - vmalloc_init() - workqueue_init_early() - trace_event_init() - early_irq_init()->alloc_desc() Two possible ways to work around this came to my mind: 1.) Either reintroduce arch_get_random_long() to get_random_u64/u32(), but only for the case that crng_init <= 1. 2.) Or introduce something like arch_seed_primary_crng_early() to be called early from start_kernel(). For x86_64 this could be implemented by filling the crng state with RDRAND data whereas other archs would fall back to get_cycles(), something jitterentropish-like or whatever. What do you think? 
What do you think?

Thanks,

Nicolai

>
>  	batch = raw_cpu_ptr(&batched_entropy_u64);
> @@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
>  	struct batched_entropy *batch;
>  	static void *previous;
>
> -	if (arch_get_random_int(&ret))
> -		return ret;
> -
>  	warn_unseeded_randomness(&previous);
>
>  	batch = raw_cpu_ptr(&batched_entropy_u32);

--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg), GF: Felix Imendörffer