From: Peter Gonda
Date: Fri, 29 Apr 2022 09:35:45 -0600
Subject: Re: [PATCH v3] KVM: SEV: Mark nested locking of vcpu->lock
To: Paolo Bonzini
Cc: John Sperbeck, kvm list, David Rientjes, Sean Christopherson, LKML
X-Mailing-List: linux-kernel@vger.kernel.org
In-Reply-To: <0d282be4-d612-374d-84ba-067994321bab@redhat.com>
References: <20220407195908.633003-1-pgonda@google.com> <62e9ece1-5d71-f803-3f65-2755160cf1d1@redhat.com> <4c0edc90-36a1-4f4c-1923-4b20e7bdbb4c@redhat.com> <0d282be4-d612-374d-84ba-067994321bab@redhat.com>

On Thu, Apr 28, 2022 at 5:59 PM Paolo Bonzini wrote:
>
> On 4/28/22 23:28, Peter Gonda wrote:
> >
> > So when actually trying this out I noticed that we are releasing the
> > current vcpu iterator but really we haven't
> > actually taken that lock yet. So we'd need to maintain a prev_*
> > pointer and release that one.
>
> Not entirely true, because all vcpu->mutex.dep_maps will be for the
> same lock. The dep_map is essentially a fancy string, in this case
> "&vcpu->mutex".
>
> See the definition of mutex_init:
>
>     #define mutex_init(mutex)                                   \
>     do {                                                        \
>             static struct lock_class_key __key;                 \
>                                                                 \
>             __mutex_init((mutex), #mutex, &__key);              \
>     } while (0)
>
> and the dep_map field is initialized with
>
>     lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP);
>
> (i.e. all vcpu->mutexes share the same name and key because they have a
> single mutex_init-ialization site). Lockdep is as crude in theory as it
> is effective in practice!
>
> > bool acquired = false;
> > kvm_for_each_vcpu(...) {
> >         if (!acquired) {
> >                 if (mutex_lock_killable_nested(&vcpu->mutex, role))
> >                         goto out_unlock;
> >                 acquired = true;
> >         } else {
> >                 if (mutex_lock_killable(&vcpu->mutex, role))
> >                         goto out_unlock;
>
> This will cause a lockdep splat because it uses subclass 0. All the
> *_nested functions do is allow you to specify a subclass other than
> zero.

OK, got it. I now have this to lock:

        kvm_for_each_vcpu(i, vcpu, kvm) {
                if (prev_vcpu != NULL) {
                        mutex_release(&prev_vcpu->mutex.dep_map, _THIS_IP_);
                        prev_vcpu = NULL;
                }

                if (mutex_lock_killable_nested(&vcpu->mutex, role))
                        goto out_unlock;
                prev_vcpu = vcpu;
        }

But I've noticed the unlocking is in the wrong order, since we are using
kvm_for_each_vcpu(). I think we need a kvm_for_each_vcpu_rev() or
something, which may be a bit of work:
https://elixir.bootlin.com/linux/latest/source/lib/xarray.c#L1119

Then I think we could do something like this to unlock:

        bool acquired = true;

        kvm_for_each_vcpu_rev(i, vcpu, kvm) {
                if (!acquired) {
                        mutex_acquire(&vcpu->mutex.dep_map, 0, role, _THIS_IP_);
                }
                mutex_unlock(&vcpu->mutex);
                acquired = false;
        }

> Paolo
>
> > }
> > }
> >
> > To unlock:
> >
> > kvm_for_each_vcpu(...)
> > {
> >         mutex_unlock(&vcpu->mutex);
> > }
> >
> > This way, instead of mocking and releasing the lockdep map, we just
> > lock the first vcpu with mutex_lock_killable_nested(). I think this
> > maintains the property you suggested of "coalesces all the mutexes
> > for a vm in a single subclass". Thoughts?