Date: Thu, 19 Aug 2021 22:58:23 +0000
From: Sean Christopherson
To: Peter Gonda
Cc: Marc Orr, kvm list, Paolo Bonzini, David Rientjes,
    "Dr. David Alan Gilbert", Brijesh Singh, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin", linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2 V4] KVM, SEV: Add support for SEV intra host migration
References: <20210819154910.1064090-1-pgonda@google.com>
 <20210819154910.1064090-2-pgonda@google.com>

On Thu, Aug 19, 2021, Peter Gonda wrote:
> > > +static int svm_sev_lock_for_migration(struct kvm *kvm)
> > > +{
> > > +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > > +	int ret;
> > > +
> > > +	/*
> > > +	 * Bail if this VM is already involved in a migration to avoid deadlock
> > > +	 * between two VMs trying to migrate to/from each other.
> > > +	 */
> > > +	spin_lock(&sev->migration_lock);
> > > +	if (sev->migration_in_progress)
> > > +		ret = -EBUSY;
> > > +	else {
> > > +		/*
> > > +		 * Otherwise indicate VM is migrating and take the KVM lock.
> > > +		 */
> > > +		sev->migration_in_progress = true;
> > > +		mutex_lock(&kvm->lock);

Deadlock aside, mutex_lock() can sleep, which is not allowed while holding a
spinlock, i.e. this patch does not work.  That's why my suggestion did the
crazy dance of "acquiring" a flag.

What I don't know is why on earth I suggested a global spinlock; a simple
atomic should work, e.g.

	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
		return -EBUSY;

	mutex_lock(&kvm->lock);

and on the backend...

	mutex_unlock(&kvm->lock);

	atomic_set_release(&sev->migration_in_progress, 0);

> > > +		ret = 0;
> > > +	}
> > > +	spin_unlock(&sev->migration_lock);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static void svm_unlock_after_migration(struct kvm *kvm)
> > > +{
> > > +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> > > +
> > > +	mutex_unlock(&kvm->lock);
> > > +	WRITE_ONCE(sev->migration_in_progress, false);
> > > +}
> > > +
> >
> > This entire locking scheme seems over-complicated to me. Can we simply
> > rely on `migration_lock` and get rid of `migration_in_progress`? I was
> > chatting about these patches with Peter while he worked on this new
> > version, and he mentioned that this locking scheme had been suggested
> > by Sean in a previous review. Sean: what do you think? My rationale
> > was that this is called via a VM-level ioctl, so serializing the
> > entire code path on `migration_lock` seems fine. But maybe I'm
> > missing something?
>
> Marc, I think that only having the spin lock could result in
> deadlocking. If userspace double migrated two VMs, A and B for
> discussion, A could grab VM_A.spin_lock then VM_A.kvm_mutex. Meanwhile
> B could grab VM_B.spin_lock and VM_B.kvm_mutex. Then A attempts to
> grab VM_B.spin_lock and we have a deadlock. If the same happens with
> the proposed scheme when A attempts to lock B, VM_B.spin_lock will be
> open, but the bool will mark the VM as under migration, so A will
> unlock and bail. Sean originally proposed a global spin lock, but I
> thought a lock per kvm_sev_info struct would also be safe.

Close.  The issue is taking kvm->lock from both VM_A and VM_B.  If userspace
double migrates, we'll end up with lock orderings A->B and B->A, so we need a
way to guarantee one of those wins.  My proposed solution is to use a flag as
a sort of one-off "try lock" to detect a mean userspace.
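
Stitching the fragments above into the patch's two helpers, here's a minimal
sketch of the suggested scheme (this assumes migration_in_progress is
converted from the spinlock-protected bool to an atomic_t; untested,
illustration only):

	static int svm_sev_lock_for_migration(struct kvm *kvm)
	{
		struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

		/*
		 * One-off "try lock": if the flag is already set, this VM is
		 * already involved in a migration, so bail with -EBUSY
		 * instead of risking an ABBA deadlock on kvm->lock.
		 */
		if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
			return -EBUSY;

		mutex_lock(&kvm->lock);
		return 0;
	}

	static void svm_unlock_after_migration(struct kvm *kvm)
	{
		struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;

		mutex_unlock(&kvm->lock);

		/*
		 * Release semantics pair with the acquire above: everything
		 * done under kvm->lock is visible before the flag clears.
		 */
		atomic_set_release(&sev->migration_in_progress, 0);
	}

With this, a double migration of A and B can't ABBA-deadlock on kvm->lock:
whichever side loses the race to set the other VM's flag gets -EBUSY and
never reaches the second mutex_lock().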