From: Ben Gardon
Date: Wed, 28 Apr 2021 09:40:34 -0700
Subject: Re: [PATCH 5/6] KVM: x86/mmu: Protect kvm->memslots with a mutex
To: Paolo Bonzini
Cc: LKML, kvm, Peter Xu, Sean Christopherson, Peter Shier, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 27, 2021 at 11:25 PM Paolo Bonzini wrote:
>
> On 28/04/21 00:36, Ben Gardon wrote:
> > +void kvm_arch_assign_memslots(struct kvm *kvm, int as_id,
> > +                              struct kvm_memslots *slots)
> > +{
> > +        mutex_lock(&kvm->arch.memslot_assignment_lock);
> > +        rcu_assign_pointer(kvm->memslots[as_id], slots);
> > +        mutex_unlock(&kvm->arch.memslot_assignment_lock);
> > +}
>
> Does the assignment also need the lock, or only the rmap allocation? I
> would prefer the hook to be something like kvm_arch_setup_new_memslots.
The assignment does need to be under the lock to prevent the following race:

1. Thread 1 (installing a new memslot): Acquires the memslot assignment
   lock (or perhaps in this case rmap_allocation_lock would be more apt).
2. Thread 1: Checks alloc_memslot_rmaps (it is false).
3. Thread 1: Does not allocate memslot rmaps for the new slot.
4. Thread 1: Releases the memslot assignment lock.
5. Thread 2 (allocating a shadow root): Acquires the memslot assignment lock.
6. Thread 2: Sets alloc_memslot_rmaps = true.
7. Thread 2: Allocates rmaps for all existing slots.
8. Thread 2: Releases the memslot assignment lock.
9. Thread 2: Sets shadow_mmu_active = true.
10. Thread 1: Installs the new memslots.
11. Thread 3: Null pointer dereference when trying to access rmaps on the
    new slot.

Putting the assignment under the lock prevents steps 5-8 from happening
between steps 2 and 10. I'm open to other ideas on how to prevent this
race, though; I admit this solution is not the most elegant looking.

> (Also it is useful to have a comment somewhere explaining why the
> slots_lock does not work. IIUC there would be a deadlock because you'd
> be taking the slots_lock inside an SRCU critical region, while usually
> the slots_lock critical section is the one that includes a
> synchronize_srcu; I should dig that up and document that ordering in
> Documentation/virt/kvm too).

Yeah, sorry about that. I should have added a comment to that effect. As
you suspected, it's because of the slots_lock / SRCU deadlock. Using the
slots_lock was my original implementation, until the deadlock issue came
up. I can add comments about that in a v2.

> Paolo
>