From: Peter Gonda
Date: Thu, 28 Apr 2022 15:28:07 -0600
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
To: Paolo Bonzini
Cc: John Sperbeck, kvm list, David Rientjes, Sean Christopherson, LKML
References: <20220407195908.633003-1-pgonda@google.com>
 <62e9ece1-5d71-f803-3f65-2755160cf1d1@redhat.com>
 <4c0edc90-36a1-4f4c-1923-4b20e7bdbb4c@redhat.com>

On Wed, Apr 27, 2022 at 2:18 PM Peter Gonda wrote:
>
> On Wed, Apr 27, 2022 at 10:04 AM Paolo Bonzini wrote:
> >
> > On 4/26/22 21:06, Peter Gonda wrote:
> > > On Thu, Apr 21, 2022 at 9:56 AM Paolo Bonzini wrote:
> > >>
> > >> On 4/20/22 22:14, Peter Gonda wrote:
> > >>>>>> svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
> > >>>>>> source and target vcpu->locks. Mark the nested subclasses to avoid false
> > >>>>>> positives from lockdep.
> > >>>> Nope. Good catch, I didn't realize there was a limit of 8 subclasses:
> > >>> Does anyone have thoughts on how we can resolve this vCPU locking with
> > >>> the 8 subclass max?
> > >>
> > >> The documentation does not have anything. Maybe you can call
> > >> mutex_release manually (and mutex_acquire before unlocking).
> > >>
> > >> Paolo
> > >
> > > Hmm, this seems to be working, thanks Paolo. To lock I have been using:
> > >
> > > ...
> > > if (mutex_lock_killable_nested(
> > >                 &vcpu->mutex, i * SEV_NR_MIGRATION_ROLES + role))
> > >         goto out_unlock;
> > > mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> > > ...
> > >
> > > To unlock:
> > >
> > > ...
> > > mutex_acquire(&vcpu->mutex.dep_map, 0, 0, _THIS_IP_);
> > > mutex_unlock(&vcpu->mutex);
> > > ...
> > >
> > > If I understand correctly we are fully disabling lockdep by doing
> > > this. If this is the case, should I just remove all the '_nested' usage
> > > and switch to mutex_lock_killable(), removing the per-vCPU subclass?
> >
> > Yes, though you could also do:
> >
> >         bool acquired = false;
> >         kvm_for_each_vcpu(...) {
> >                 if (acquired)
> >                         mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
> >                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                         goto out_unlock;
> >                 acquired = true;
> >                 ...
> >
> > and to unlock:
> >
> >         bool acquired = true;
> >         kvm_for_each_vcpu(...) {
> >                 if (!acquired)
> >                         mutex_acquire(&vcpu->mutex.dep_map, 0, role, _THIS_IP_);
> >                 mutex_unlock(&vcpu->mutex);
> >                 acquired = false;
> >         }

When actually trying this out I noticed that we are releasing the
current vcpu iterator, but we haven't actually taken that lock yet.
So we'd need to maintain a prev_* pointer and release that one
instead. That seems a bit more complicated than just doing this:

To lock:

        bool acquired = false;
        kvm_for_each_vcpu(...) {
                if (!acquired) {
                        if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                goto out_unlock;
                        acquired = true;
                } else {
                        if (mutex_lock_killable(&vcpu->mutex))
                                goto out_unlock;
                }
        }

To unlock:

        kvm_for_each_vcpu(...) {
                mutex_unlock(&vcpu->mutex);
        }

This way, instead of mock-acquiring and releasing the lockdep map, we
just lock the first vcpu with mutex_lock_killable_nested(). I think
this maintains the property you suggested of "coalesces all the
mutexes for a vm in a single subclass". Thoughts?

> >
> > where role is either 0 or SINGLE_DEPTH_NESTING and is passed to
> > sev_{,un}lock_vcpus_for_migration.
> >
> > That coalesces all the mutexes for a vm in a single subclass, essentially.
>
> Ah, that's a great idea to allow lockdep to still work. I'll try
> that out, thanks again Paolo.
>
> >
> > Paolo
> >
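
For reference, here is a minimal sketch of how the proposal above could
look when folded back into the two helpers under discussion,
sev_lock_vcpus_for_migration() and sev_unlock_vcpus_for_migration() in
arch/x86/kvm/svm/sev.c. This is only an illustration of the idea in this
thread, not the merged patch; the sev_migration_role enum spelling and
the partial-failure unwind are assumptions based on the snippets quoted
here:

        enum sev_migration_role {
                SEV_MIGRATION_SOURCE = 0,
                SEV_MIGRATION_TARGET,
                SEV_NR_MIGRATION_ROLES,
        };

        static int sev_lock_vcpus_for_migration(struct kvm *kvm,
                                                enum sev_migration_role role)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i, j;
                bool first = true;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (first) {
                                /*
                                 * Only the first vcpu->mutex of each VM is taken
                                 * with an explicit lockdep subclass (source = 0,
                                 * target = 1), so the number of subclasses used
                                 * stays fixed no matter how many vCPUs the VMs have.
                                 */
                                if (mutex_lock_killable_nested(&vcpu->mutex, role))
                                        goto out_unlock;
                                first = false;
                        } else {
                                if (mutex_lock_killable(&vcpu->mutex))
                                        goto out_unlock;
                        }
                }
                return 0;

        out_unlock:
                /* Drop the locks taken so far, stopping at the vcpu that failed. */
                kvm_for_each_vcpu(j, vcpu, kvm) {
                        if (i == j)
                                break;
                        mutex_unlock(&vcpu->mutex);
                }
                return -EINTR;
        }

        static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
        {
                struct kvm_vcpu *vcpu;
                unsigned long i;

                kvm_for_each_vcpu(i, vcpu, kvm)
                        mutex_unlock(&vcpu->mutex);
        }

Whether this keeps lockdep happy for the non-first vcpus is exactly the
open question in the thread; the point of the sketch is only that the
source and target VMs each contribute a single subclass, staying well
below the 8-subclass limit (MAX_LOCKDEP_SUBCLASSES) mentioned above.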