From: Andy Lutomirski
Date: Mon, 17 Jun 2019 21:24:20 -0700
Subject: Re: [PATCH, RFC 45/62] mm: Add the encrypt_mprotect() system call for MKTME
To: Kai Huang
Cc: Andy Lutomirski, Dave Hansen, "Kirill A. Shutemov", Andrew Morton, X86 ML, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Borislav Petkov, Peter Zijlstra, David Howells, Kees Cook, Jacob Pan, Alison Schofield, Linux-MM, kvm list, keyrings@vger.kernel.org, LKML, Tom Lendacky
In-Reply-To: <1560823899.5187.92.camel@linux.intel.com>
References: <20190508144422.13171-1-kirill.shutemov@linux.intel.com> <20190508144422.13171-46-kirill.shutemov@linux.intel.com> <3c658cce-7b7e-7d45-59a0-e17dae986713@intel.com> <5cbfa2da-ba2e-ed91-d0e8-add67753fc12@intel.com> <1560818931.5187.70.camel@linux.intel.com> <1560823899.5187.92.camel@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jun 17, 2019 at 7:11 PM Kai Huang wrote:
>
> On Mon, 2019-06-17 at 18:50 -0700, Andy Lutomirski wrote:
> > On Mon, Jun 17, 2019 at 5:48 PM Kai Huang wrote:
> > >
> > > > > And another silly argument: if we had /dev/mktme, then we could
> > > > > possibly get away with avoiding all the keyring stuff entirely.
> > > > > Instead, you open /dev/mktme and you get your own key under the hood.
> > > > > If you want two keys, you open /dev/mktme twice. If you want some
> > > > > other program to be able to see your memory, you pass it the fd.
> > > >
> > > > We still like the keyring because it's one-stop-shopping as the place
> > > > that *owns* the hardware KeyID slots. Those are global resources and
> > > > scream for a single global place to allocate and manage them. The
> > > > hardware slots also need to be shared between any anonymous and
> > > > file-based users, no matter what the APIs for the anonymous side.
> > > >
> > > MKTME driver (which creates /dev/mktme) can also be the one-stop shop. I think whether to
> > > choose the keyring to manage MKTME keys should be based on whether we need/should take
> > > advantage of existing key retention service functionality. For example, with the key
> > > retention service we can revoke/invalidate/set expiry for a key (not sure whether MKTME
> > > needs those, though), and we have several keyrings -- thread-specific, process-specific,
> > > user-specific, etc. -- so we can control who can and cannot find the key. I think managing
> > > MKTME keys in the MKTME driver doesn't have those advantages.
> >
> > Trying to evaluate this with the current proposed code is a bit odd, I
> > think. Suppose you create a thread-specific key and then fork(). The
> > child can presumably still use the key regardless of whether the child
> > can nominally access the key in the keyring, because the PTEs are still
> > there.
>
> Right. This is a little bit odd, although virtualization (QEMU, which is the main use case of
> MKTME, at least so far) doesn't use fork().
>
> > More fundamentally, in some sense, the current code has no semantics.
> > Associating a key with memory and "encrypting" it doesn't actually do
> > anything unless you are attacking the memory bus but haven't
> > compromised the kernel. There's no protection against a guest that
> > can corrupt its EPT tables, there's no protection against kernel bugs
> > (*especially* if the duplicate direct map design stays), and there
> > isn't even any fd or other object around by which you can only access
> > the data if you can see the key.
>
> I am not saying managing MKTME keys/KeyIDs in the key retention service is definitely better,
> but it seems all the things you mention are unrelated to whether we choose the key retention
> service to manage MKTME keys/KeyIDs?
> Or are you saying it doesn't matter whether we manage keys/KeyIDs in the key retention
> service or in the MKTME driver, since MKTME barely has any security benefits (besides
> resisting physical attack)?

Mostly the latter. I think it's very hard to evaluate whether a given key
allocation model makes sense given that MKTME provides such weak security
benefits. TME has obvious security benefits, as does encryption of
persistent memory, but this giant patch set isn't needed for plain TME and
it doesn't help with persistent memory.

> > I'm also wondering whether the kernel will always be able to be a
> > one-stop shop for key allocation -- if the MKTME hardware gains
> > interesting new uses down the road, who knows how key allocation will
> > work?
>
> So far I don't have any use case that requires managing keys/KeyIDs for its own purposes
> rather than letting the kernel manage KeyID allocation. Please enlighten us if you see any
> potential ones.

Other than compliance, I can't think of much reason that using multiple
keys is useful, regardless of how they're allocated. The only thing I've
thought of is that, with multiple keys, you can use PCONFIG to remove one
and flush caches, and the data is most definitely gone. On the other hand,
you can just zero the memory and the data is just as gone, even without
any encryption.