From: "Jason A. Donenfeld"
Date: Sat, 17 Oct 2020 15:24:08 +0200
Subject: Re: [PATCH] drivers/virt: vmgenid: add vm generation id driver
To: Jann Horn
Cc: Willy Tarreau, Colm MacCarthaigh, "Catangiu, Adrian Costin", Andy Lutomirski, "Theodore Y. Ts'o", Eric Biggers, "open list:DOCUMENTATION", kernel list, "open list:VIRTIO GPU DRIVER", "Graf (AWS), Alexander", "Woodhouse, David", bonzini@gnu.org, "Singh, Balbir", "Weiss, Radu", oridgar@gmail.com, ghammer@redhat.com, Jonathan Corbet, Greg Kroah-Hartman, "Michael S. Tsirkin", Qemu Developers, KVM list, Michal Hocko, "Rafael J. Wysocki", Pavel Machek, Linux API
References: <788878CE-2578-4991-A5A6-669DCABAC2F2@amazon.com> <20201017033606.GA14014@1wt.eu> <6CC3DB03-27BA-4F5E-8ADA-BE605D83A85C@amazon.com> <20201017053712.GA14105@1wt.eu> <20201017064442.GA14117@1wt.eu>
X-Mailing-List: linux-kernel@vger.kernel.org

After discussing this offline with Jann a bit, I have a few general
comments on the design of this.

First, the UUID communicated by the hypervisor should be consumed by
the kernel -- added as another input to the RNG -- and then userspace
should be notified that it should reseed any userspace RNGs it may
have, without that UUID actually being communicated to userspace. IOW,
I agree with Jann there.

Then, it's the functioning of this notification mechanism to userspace
that is interesting to me. There are a few design goals for notifying
userspace: it should be fast, because people who use userspace RNGs
usually do so in the first place to completely avoid syscall overhead
in whatever high performance application they have -- e.g. I recall
conversations with Colm about his TLS implementation needing to
generate random IVs _really_ fast. It should also happen as early as
possible, with no race window, or as small a race window as possible,
so that userspace doesn't keep using old randomness and only switch
over after the damage has already been done.

I'm also not wedded to using Microsoft's proprietary hypervisor design
for this. If we come up with a better interface, I don't think it's
asking too much to implement that and reasonably expect Microsoft to
catch up. Maybe someone here will find that controversial, but
whatever -- discussing ideal designs does not seem out of place or
inappropriate for how we usually approach things in the kernel, and a
closed source hypervisor coming along shouldn't disrupt that.

So, anyway, here are a few options, with some pros and cons, for the
kernel notifying userspace that its RNG should reseed.

1. SIGRND - a new signal. Lol.

2. Userspace opens a file descriptor that it can epoll on. Pros are
that many notification mechanisms already work this way. Cons are that
this requires a syscall and might be racier than we want. Another con
is that it's a new thing for userspace programs to do.

3. We stick an atomic counter in the vDSO, Jann's suggestion. Pros are
that this is extremely fast, and also simple to use and implement.
There are enough sequence points in typical crypto programs that
checking whether this counter has changed before doing an operation
seems easy enough.
Cons are that we've typically been conservative about adding things to
the vDSO, and this is also a new thing for userspace programs to do.
(A purely illustrative sketch of the userspace side of this is
appended at the end of this mail.)

4. We already have a mechanism for this kind of thing, because the
same issue comes up when fork()ing. The solution there was
MADV_WIPEONFORK, where userspace marks a page to be zeroed when
forking, for the purpose of the RNG being notified when its world gets
split in two. This is basically the same thing as what we're
discussing here with guest snapshots, except it's on the system level
rather than the process level, and a system has many processes. But
the problem space is still almost the same, and we could simply reuse
that same mechanism. (A short demo of the existing fork-time behavior
is also appended at the end of this mail.) There are a few
implementation strategies for that:

4a. We mess with the PTEs of all processes' pages that are
MADV_WIPEONFORK, like fork() does now, when the hypervisor notifies us
to do so. Then we wind up reusing the already existing logic for
userspace RNGs. The con might be that this usually requires taking
semaphores, and we're in irq context, so we'd have to hoist to a
workqueue, which means either more wake-up latency or a larger race
window.

4b. We just memzero all processes' pages that are MADV_WIPEONFORK when
the hypervisor notifies us to do so. Then we wind up reusing the
already existing logic for userspace RNGs.

4c. The guest kernel maintains an array of physical addresses that are
MADV_WIPEONFORK. The hypervisor knows about this array and its
location through whatever protocol, and before resuming a
moved/snapshotted/duplicated VM, it takes responsibility for
memzeroing that memory. The huge pro here is that this eliminates all
races and reduces complexity quite a bit, because the hypervisor can
perfectly synchronize its bringup (and SMP bringup) with this, and it
can even optimize things like on-disk memory snapshots to simply not
write those pages out to disk.

A 4c-like approach seems like a lot of bang for the buck -- we reuse
the existing mechanism (MADV_WIPEONFORK), so there's no new userspace
API to deal with, and it'd be race-free and eliminate a lot of kernel
complexity. But 4b and 3 don't seem too bad either.

Any thoughts on 4c? Is that utterly insane, or does it actually get us
somewhere close to what we want?

Jason
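
To make option 3 concrete, here is a rough userspace-side sketch. It is
purely hypothetical: the __vdso_rng_generation symbol and the helper
names are invented for illustration, since no such vDSO export exists
today. The point is just the access pattern -- a single memory load on
the fast path, and a reseed whenever the counter has moved.

/* Hypothetical sketch of option 3. The vDSO symbol below is invented. */
#include <stdint.h>
#include <stdlib.h>
#include <sys/random.h>

/* Imagined vDSO export: the kernel would bump this on fork/snapshot/resume. */
extern const volatile uint64_t __vdso_rng_generation;

struct user_rng {
	uint64_t seen_generation;	/* counter value we last reseeded at */
	unsigned char key[32];		/* seed material for a userspace DRBG */
};

/* Called at every sequence point before the DRBG produces output. */
static void user_rng_check(struct user_rng *rng)
{
	/* One cheap memory load on the fast path, no syscall. */
	uint64_t gen = __vdso_rng_generation;

	if (gen == rng->seen_generation)
		return;

	/* The world may have been duplicated: throw the old state away. */
	if (getrandom(rng->key, sizeof(rng->key), 0) != sizeof(rng->key))
		abort();	/* real code would retry or report the error */

	/*
	 * Record the generation read before reseeding, so a bump that races
	 * with the reseed just triggers another reseed on the next call.
	 */
	rng->seen_generation = gen;
}

The check would sit at whatever sequence points the userspace RNG
already has, so the common case costs one load and one compare.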
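
And for reference, below is a small self-contained demo of the existing
MADV_WIPEONFORK behavior that 4a-4c would piggyback on. It only
exercises today's fork() semantics (the marked page reads back as zeros
in the child); none of the proposed snapshot extensions are implemented
here, and the "nonzero seeded byte" convention is just one way a
userspace RNG can detect the wipe. Requires Linux >= 4.14.

/* Demo of the existing MADV_WIPEONFORK mechanism that 4a-4c would reuse. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/random.h>
#include <sys/wait.h>
#include <unistd.h>

struct rng_state {
	unsigned char seeded;		/* nonzero once key[] is valid */
	unsigned char key[32];
};

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Keep the RNG state in its own anonymous page... */
	struct rng_state *rng = mmap(NULL, page, PROT_READ | PROT_WRITE,
				     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (rng == MAP_FAILED)
		return 1;

	/* ...and ask the kernel to zero that page in any fork()ed child. */
	if (madvise(rng, page, MADV_WIPEONFORK))
		return 1;

	if (getrandom(rng->key, sizeof(rng->key), 0) != sizeof(rng->key))
		return 1;
	rng->seeded = 1;

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: the page was wiped, so seeded reads back as 0. */
		printf("child:  seeded=%d -> must reseed\n", rng->seeded);
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	printf("parent: seeded=%d -> state still valid\n", rng->seeded);
	return 0;
}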