From: Peter Gonda
To: kvm@vger.kernel.org
Cc: Peter Gonda, John Sperbeck, David Rientjes, Sean Christopherson,
    Paolo Bonzini, Hillf Danton, linux-kernel@vger.kernel.org
Subject: [PATCH v4] KVM: SEV: Mark nested locking of vcpu->lock
Date: Mon, 2 May 2022 09:58:07 -0700
Message-Id: <20220502165807.529624-1-pgonda@google.com>

svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
source and target vcpu->locks. Unfortunately, lockdep has a limit of 8
subclasses, so a separate subclass cannot be used for each vCPU. Instead,
maintain ownership of only the first vcpu's mutex.dep_map, using a
role-specific subclass (source vs. target), and release the other vcpus'
mutex.dep_maps.

Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
Reported-by: John Sperbeck
Suggested-by: David Rientjes
Suggested-by: Sean Christopherson
Suggested-by: Paolo Bonzini
Cc: Hillf Danton
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Peter Gonda
---
V4
 * Due to the 8-subclass limit, keep the dep_map on only the first vcpu
   and release the others.
V3
 * Updated the signature to an enum to self-document the argument.
 * Updated the comment as seanjc@ suggested.

Tested by running sev_migrate_tests with lockdep enabled. Before this
change we see a warning from sev_lock_vcpus_for_migration(); after it
there are no warnings.
---
 arch/x86/kvm/svm/sev.c | 46 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 75fa6dd268f0..0239def64eaa 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1591,24 +1591,55 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
         atomic_set_release(&src_sev->migration_in_progress, 0);
 }
 
+/*
+ * To suppress lockdep false positives, subclass all vCPU mutex locks by
+ * assigning even numbers to the source vCPUs and odd numbers to destination
+ * vCPUs based on the vCPU's index.
+ */
+enum sev_migration_role {
+        SEV_MIGRATION_SOURCE = 0,
+        SEV_MIGRATION_TARGET,
+        SEV_NR_MIGRATION_ROLES,
+};
 
-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+static int sev_lock_vcpus_for_migration(struct kvm *kvm,
+                                        enum sev_migration_role role)
 {
         struct kvm_vcpu *vcpu;
         unsigned long i, j;
+        bool first = true;
 
         kvm_for_each_vcpu(i, vcpu, kvm) {
-                if (mutex_lock_killable(&vcpu->mutex))
+                if (mutex_lock_killable_nested(&vcpu->mutex, role))
                         goto out_unlock;
+
+                if (first) {
+                        /*
+                         * Reset the role to one that avoids colliding with
+                         * the role used for the first vcpu mutex.
+                         */
+                        role = SEV_NR_MIGRATION_ROLES;
+                        first = false;
+                } else {
+                        mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
+                }
         }
 
         return 0;
 
 out_unlock:
+
+        first = true;
         kvm_for_each_vcpu(j, vcpu, kvm) {
                 if (i == j)
                         break;
+
+                if (first)
+                        first = false;
+                else
+                        mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
+
                 mutex_unlock(&vcpu->mutex);
         }
         return -EINTR;
@@ -1618,8 +1649,15 @@ static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
 {
         struct kvm_vcpu *vcpu;
         unsigned long i;
+        bool first = true;
 
         kvm_for_each_vcpu(i, vcpu, kvm) {
+                if (first)
+                        first = false;
+                else
+                        mutex_acquire(&vcpu->mutex.dep_map,
+                                      SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_);
+
                 mutex_unlock(&vcpu->mutex);
         }
 }
@@ -1745,10 +1783,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
                 charged = true;
         }
 
-        ret = sev_lock_vcpus_for_migration(kvm);
+        ret = sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_SOURCE);
         if (ret)
                 goto out_dst_cgroup;
-        ret = sev_lock_vcpus_for_migration(source_kvm);
+        ret = sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_TARGET);
         if (ret)
                 goto out_dst_vcpu;
--
2.36.0.464.gb9c8b46e94-goog
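
For readers following the lockdep reasoning in the patch, the sketch below
walks through what lockdep ends up tracking when sev_vm_move_enc_context_from()
locks two VMs that each have three vCPUs. It is an illustrative fragment, not
part of the patch: the dst_vcpuN/src_vcpuN names and the three-vCPU count are
assumptions made for the example, and only helpers that already appear in the
diff (mutex_lock_killable_nested(), mutex_release(), mutex_acquire()) are used.

/* Illustrative sketch only; vCPU names and counts are assumed for the example. */

/* sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_SOURCE): */
mutex_lock_killable_nested(&dst_vcpu0->mutex, SEV_MIGRATION_SOURCE);
/* lockdep: dst_vcpu0->mutex held at subclass 0 */

mutex_lock_killable_nested(&dst_vcpu1->mutex, SEV_NR_MIGRATION_ROLES);
mutex_release(&dst_vcpu1->mutex.dep_map, _THIS_IP_);
/* the mutex stays locked, but lockdep no longer counts it as held */

mutex_lock_killable_nested(&dst_vcpu2->mutex, SEV_NR_MIGRATION_ROLES);
mutex_release(&dst_vcpu2->mutex.dep_map, _THIS_IP_);

/* sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_TARGET): */
mutex_lock_killable_nested(&src_vcpu0->mutex, SEV_MIGRATION_TARGET);
/* lockdep: src_vcpu0->mutex held at subclass 1 */

mutex_lock_killable_nested(&src_vcpu1->mutex, SEV_NR_MIGRATION_ROLES);
mutex_release(&src_vcpu1->mutex.dep_map, _THIS_IP_);
mutex_lock_killable_nested(&src_vcpu2->mutex, SEV_NR_MIGRATION_ROLES);
mutex_release(&src_vcpu2->mutex.dep_map, _THIS_IP_);

/*
 * Regardless of vCPU count, lockdep now sees only two held vcpu->mutex
 * instances (subclasses 0 and 1), staying under the 8-subclass limit.
 * sev_unlock_vcpus_for_migration() balances the annotations by calling
 * mutex_acquire(&vcpu->mutex.dep_map, SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_)
 * on every mutex after the first, immediately before mutex_unlock().
 */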