Date: Thu, 29 Jul 2021 22:17:46 +0000
From: Sean Christopherson
To: Peter Gonda
Cc: kvm@vger.kernel.org, Lars Bull, Brijesh Singh, Marc Orr, Paolo Bonzini,
	David Rientjes, "Dr. David Alan Gilbert", Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", linux-kernel@vger.kernel.org
Peter Anvin" , linux-kernel@vger.kernel.org Subject: Re: [PATCH 2/3 V3] KVM, SEV: Add support for SEV intra host migration Message-ID: References: <20210726195015.2106033-1-pgonda@google.com> <20210726195015.2106033-3-pgonda@google.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210726195015.2106033-3-pgonda@google.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Jul 26, 2021, Peter Gonda wrote: > To avoid exposing this internal state to userspace and prevent other > processes from importing state they shouldn't have access to, the send > returns a token to userspace that is handed off to the target VM. The > target passes in this token to receive the sent state. The token is only > valid for one-time use. Functionality on the source becomes limited > after send has been performed. If the source is destroyed before the > target has received, the token becomes invalid. ... > +11. KVM_SEV_INTRA_HOST_RECEIVE > +------------------------------------- > + > +The KVM_SEV_INTRA_HOST_RECEIVE command is used to transfer staged SEV > +info to a target VM from some source VM. SEV on the target VM should be active > +when receive is performed, but not yet launched and without any pinned memory. > +The launch commands should be skipped after receive because they should have > +already been performed on the source. > + > +Parameters (in/out): struct kvm_sev_intra_host_receive > + > +Returns: 0 on success, -negative on error > + > +:: > + > + struct kvm_sev_intra_host_receive { > + __u64 info_token; /* token referencing the staged info */ Sorry to belatedly throw a wrench in things, but why use a token approach? This is only intended for migrating between two userspace VMMs using the same KVM module, which can access both the source and target KVM instances (VMs/guests). Rather than indirectly communicate through a token, why not communidate directly? Same idea as svm_vm_copy_asid_from(). The locking needs special consideration, e.g. attempting to take kvm->lock on both the source and dest could deadlock if userspace is malicious and double-migrates, but I think a flag and global spinlock to state that migration is in-progress would suffice. Locking aside, this would reduce the ABI to a single ioctl(), should avoid most if not all temporary memory allocations, and would obviate the need for patch 1 since there's no limbo state, i.e. the encrypted regions are either owned by the source or the dest. I think the following would work? Another thought would be to make the helpers and "lock for multi-lock" flag arch-agnostic, e.g. the logic below works iff this is the only path that takes two kvm->locks simultaneous. static int svm_sev_lock_for_migration(struct kvm *kvm) { struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; int ret = 0; /* * Bail if this VM is already involved in a migration to avoid deadlock * between two VMs trying to migrate to/from each other. 
int svm_sev_migrate_from(struct kvm *kvm, unsigned int source_fd)
{
	struct file *source_kvm_file;
	struct kvm *source_kvm;
	int ret;

	ret = svm_sev_lock_for_migration(kvm);
	if (ret)
		return ret;

	if (!sev_guest(kvm)) {
		ret = -EINVAL;
		goto out_unlock;
	}

	source_kvm_file = fget(source_fd);
	if (!file_is_kvm(source_kvm_file)) {
		ret = -EBADF;
		goto out_fput;
	}

	source_kvm = source_kvm_file->private_data;
	ret = svm_sev_lock_for_migration(source_kvm);
	if (ret)
		goto out_fput;

	if (!sev_guest(source_kvm)) {
		ret = -EINVAL;
		goto out_source;
	}

	/* The actual transfer of the SEV context would go here. */

out_source:
	svm_unlock_after_migration(source_kvm);
out_fput:
	if (source_kvm_file)
		fput(source_kvm_file);
out_unlock:
	svm_unlock_after_migration(kvm);
	return ret;
}
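From the userspace side this collapses to a single VM ioctl() on the
destination, handing it the source VM's fd, same as the existing
svm_vm_copy_asid_from() / KVM_CAP_VM_COPY_ENC_CONTEXT_FROM flow.  Roughly (the
capability name and number below are made up purely for illustration):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical capability, mirroring KVM_CAP_VM_COPY_ENC_CONTEXT_FROM. */
#define KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM	999

static int sev_intra_host_migrate(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_ENC_CONTEXT_FROM,
		.args[0] = (__u64)src_vm_fd,
	};

	/* One ioctl() on the destination VM; no token, no separate send/receive. */
	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}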