Date: Fri, 2 Apr 2021 14:59:18 +0000
From: Sean Christopherson
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
    James Morse, Julien Thierry, Suzuki K Poulose, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-mips@vger.kernel.org, kvm@vger.kernel.org,
    kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon
Subject: Re: [PATCH v2 09/10] KVM: Don't take mmu_lock for range invalidation unless necessary
References: <20210402005658.3024832-1-seanjc@google.com> <20210402005658.3024832-10-seanjc@google.com> <417bd6b5-b7d0-ed22-adae-02150cdbfebe@redhat.com>
In-Reply-To: <417bd6b5-b7d0-ed22-adae-02150cdbfebe@redhat.com>

On Fri, Apr 02,
2021, Paolo Bonzini wrote:
> On 02/04/21 02:56, Sean Christopherson wrote:
> > Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
> > notifications.  Because mmu_notifier_count must be modified while holding
> > mmu_lock for write, and must always be paired across start->end to stay
> > balanced, lock elision must happen in both or none.  To meet that
> > requirement, add a rwsem to prevent memslot updates across range_start()
> > and range_end().
> >
> > Use a rwsem instead of a rwlock since most notifiers _allow_ blocking,
> > and the lock will be held across the entire start() ... end() sequence.
> > If anything in the sequence sleeps, including the caller or a different
> > notifier, holding the spinlock would be disastrous.
> >
> > For notifiers that _disallow_ blocking, e.g. OOM reaping, simply go down
> > the slow path of unconditionally acquiring mmu_lock.  The sane
> > alternative would be to try to acquire the lock and force the notifier
> > to retry on failure.  But since OOM is currently the _only_ scenario
> > where blocking is disallowed, attempting to optimize a guest that has
> > been marked for death is pointless.
> >
> > Unconditionally define and use mmu_notifier_slots_lock in the memslots
> > code, purely to avoid more #ifdefs.  The overhead of acquiring the lock
> > is negligible when the lock is uncontested, which will always be the
> > case when the MMU notifiers are not used.
> >
> > Note, technically flag-only memslot updates could be allowed in
> > parallel, but stalling a memslot update for a relatively short amount
> > of time is not a scalability issue, and this is all more than complex
> > enough.
>
> Proposal for the locking documentation:

Argh, sorry!  Looks great, I owe you.
> diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> index b21a34c34a21..3e4ad7de36cb 100644
> --- a/Documentation/virt/kvm/locking.rst
> +++ b/Documentation/virt/kvm/locking.rst
> @@ -16,6 +16,13 @@ The acquisition orders for mutexes are as follows:
>  - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
>    them together is quite rare.
>
> +- The kvm->mmu_notifier_slots_lock rwsem ensures that pairs of
> +  invalidate_range_start() and invalidate_range_end() callbacks
> +  use the same memslots array.  kvm->slots_lock is taken outside the
> +  write-side critical section of kvm->mmu_notifier_slots_lock, so
> +  MMU notifiers must not take kvm->slots_lock.  No other write-side
> +  critical sections should be added.
> +
>  On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.
>
> Everything else is a leaf: no other lock is taken inside the critical
> sections.
>
> Paolo