From: Peter Gonda
Date: Thu, 19 Aug 2021 15:00:25 -0600
Subject: Re: [PATCH 1/2 V4] KVM, SEV: Add support for SEV intra host migration
To: Marc Orr
Cc: kvm list, Sean Christopherson, Paolo Bonzini, David Rientjes,
    "Dr. David Alan Gilbert", Brijesh Singh, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin", linux-kernel@vger.kernel.org
References: <20210819154910.1064090-1-pgonda@google.com>
    <20210819154910.1064090-2-pgonda@google.com>

> > +static int svm_sev_lock_for_migration(struct kvm *kvm)
> > +{
> > +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > +        int ret;
> > +
> > +        /*
> > +         * Bail if this VM is already involved in a migration to avoid deadlock
> > +         * between two VMs trying to migrate to/from each other.
> > +         */
> > +        spin_lock(&sev->migration_lock);
> > +        if (sev->migration_in_progress)
> > +                ret = -EBUSY;
> > +        else {
> > +                /*
> > +                 * Otherwise indicate VM is migrating and take the KVM lock.
> > +                 */
> > +                sev->migration_in_progress = true;
> > +                mutex_lock(&kvm->lock);
> > +                ret = 0;
> > +        }
> > +        spin_unlock(&sev->migration_lock);
> > +
> > +        return ret;
> > +}
> > +
> > +static void svm_unlock_after_migration(struct kvm *kvm)
> > +{
> > +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > +
> > +        mutex_unlock(&kvm->lock);
> > +        WRITE_ONCE(sev->migration_in_progress, false);
> > +}
> > +
>
> This entire locking scheme seems over-complicated to me. Can we simply
> rely on `migration_lock` and get rid of `migration_in_progress`? I was
> chatting about these patches with Peter, while he worked on this new
> version. But he mentioned that this locking scheme had been suggested
> by Sean in a previous review. Sean: what do you think? My rationale
> was that this is called via a VM-level ioctl. So serializing the
> entire code path on `migration_lock` seems fine. But maybe I'm missing
> something?

Marc, I think that only having the spin lock could result in deadlocking.
If userspace double-migrated two VMs, A and B for discussion, A could grab
VM_A.spin_lock then VM_A.kvm_mutex. Meanwhile B could grab VM_B.spin_lock
and VM_B.kvm_mutex. Then A attempts to grab VM_B.spin_lock while B attempts
VM_A.spin_lock, and we have a deadlock. If the same happens with the
proposed scheme, when A attempts to lock B, VM_B.spin_lock will be free,
but the bool will mark the VM as under migration, so A will unlock and bail.
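To make that concrete, here is a rough sketch of how the migration ioctl
path could take both VMs' locks with these helpers. The caller name and the
exact bail-out wiring below are illustrative, not code from the patch:

static int sev_lock_two_vms_for_migration(struct kvm *dst, struct kvm *src)
{
        int ret;

        /* Flag dst as migrating and take dst's kvm->lock. */
        ret = svm_sev_lock_for_migration(dst);
        if (ret)
                return ret;

        /*
         * If src is already mid-migration (say the mirror-image ioctl
         * won the race), this returns -EBUSY and we back out of dst
         * instead of blocking on src's locks. No migration task ever
         * waits on a second VM's locks while holding a first VM's, so
         * the A/B circular wait described above cannot form between
         * two migrations.
         */
        ret = svm_sev_lock_for_migration(src);
        if (ret)
                svm_unlock_after_migration(dst);
        return ret;
}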
Sean originally proposed a global spin lock, but I thought a lock per
kvm_sev_info struct would also be safe.
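For comparison, a minimal sketch of the global-lock variant, assuming that
is roughly what Sean meant. I've used a mutex rather than a spin lock here
because taking kvm->lock (a mutex) can sleep, which isn't allowed under a
spin lock; the lock and function names are mine, not from any posted patch:

/*
 * One system-wide lock serializing every intra-host migration, so two
 * tasks can never end up taking different VMs' locks in opposite orders.
 */
static DEFINE_MUTEX(sev_migration_mutex);

static int sev_lock_two_vms_global(struct kvm *dst, struct kvm *src)
{
        mutex_lock(&sev_migration_mutex);
        mutex_lock(&dst->lock);
        /* The nested annotation tells lockdep the two kvm locks nest. */
        mutex_lock_nested(&src->lock, SINGLE_DEPTH_NESTING);
        return 0;
}

The trade-off is that migrations of unrelated VM pairs serialize behind the
one global lock, while the per-kvm_sev_info flag lets them run in parallel
and only rejects the genuinely conflicting double migration with -EBUSY.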