Date: Thu, 19 Jan 2023 21:36:32 +0000
From: Sean Christopherson
To: "Huang, Kai"
Cc: "pbonzini@redhat.com", "isaku.yamahata@gmail.com", "Shahar, Sagi",
    "linux-kernel@vger.kernel.org", "Aktas, Erdem", "kvm@vger.kernel.org",
    "dmatlack@google.com", "zhi.wang.linux@gmail.com",
    "sean.j.christopherson@intel.com", "Yamahata, Isaku"
Subject: Re: [PATCH v11 018/113] KVM: TDX: create/destroy VM structure
In-Reply-To: <02b0e551647beed9ec3a2fefd3b659eb52c4846c.camel@intel.com>
References: <20230113151258.00006a6d@gmail.com>
 <20230114111621.00001840@gmail.com>
 <20230117214414.00003229@gmail.com>
 <02b0e551647beed9ec3a2fefd3b659eb52c4846c.camel@intel.com>

On Thu, Jan 19, 2023, Huang, Kai wrote:
> On Thu, 2023-01-19 at 15:37 +0000, Sean Christopherson wrote:
> > On Thu, Jan 19, 2023, Huang, Kai wrote:
> > > On Tue, 2023-01-17 at 21:01 +0000, Sean Christopherson wrote:
> > > > On Tue, Jan 17, 2023, Sean Christopherson wrote:
> > > > > On Tue, Jan 17, 2023, Zhi Wang wrote:
> > > > Oh, the other important piece I forgot to mention is that dropping mmu_lock
> > > > deep in KVM's MMU in order to wait isn't always an option.  Most flows would
> > > > play nice with dropping mmu_lock and sleeping, but some paths, e.g. from the
> > > > mmu_notifier, (conditionally) disallow sleeping.
> > >
> > > Could we do something similar to tdp_mmu_iter_cond_resched(), but not simply
> > > busy retrying "X times", at least on those paths that can release mmu_lock?
> >
> > That's effectively what happens by unwinding up the stack with an error code.
> > Eventually the page fault handler will get the error and retry the guest.
> >
> > > Basically we treat TDX_OPERAND_BUSY as seamcall_needbreak(), similar to
> > > rwlock_needbreak().  I haven't thought about details though.
> >
> > I am strongly opposed to that approach.  I do not want to pollute KVM's MMU
> > code with a bunch of retry logic and error handling just because the TDX
> > module is ultra paranoid and hostile to hypervisors.
>
> Right.  But IIUC there are legal cases where SEPT SEAMCALLs can return BUSY
> due to multiple threads trying to read/modify the SEPT simultaneously in the
> case of the TDP MMU.  For instance, parallel page faults on different vcpus
> on private pages.  I believe this is the main reason to retry.

Um, crud.  I think there's a bigger issue.  KVM always operates on its copy of
the S-EPT tables and assumes that the real S-EPT tables will always be
synchronized with KVM's mirror.  That assumption doesn't hold true without
serializing SEAMCALLs in some way.  E.g. if a SPTE is zapped and mapped at the
same time, we can end up with:

  vCPU0                       vCPU1
  =====                       =====
                              mirror[x] = xyz

  old_spte = mirror[x]
  mirror[x] = REMOVED_SPTE
  sept[x] = REMOVED_SPTE

                              sept[x] = xyz

In other words, when mmu_lock is held for read, KVM relies on atomic SPTE
updates.  With the mirror=>s-ept scheme, updates are no longer atomic and
everything falls apart.
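Roughly, the lock-free path boils down to the sketch below (illustrative only,
not code from this series; propagate_to_s_ept() is a made-up stand-in for
whatever SEAMCALL wrapper pushes the change into the real S-EPT):

/*
 * Illustrative sketch only: the mirror update is atomic, but the S-EPT update
 * that follows is not ordered against other writers.  propagate_to_s_ept() is
 * a made-up placeholder for the real SEAMCALL wrapper.
 */
static int set_mirrored_spte(u64 *mirror_sptep, u64 old_spte, u64 new_spte)
{
	/* Atomic: exactly one writer wins the mirror update. */
	if (!try_cmpxchg64(mirror_sptep, &old_spte, new_spte))
		return -EBUSY;

	/*
	 * Not atomic with the cmpxchg above: a vCPU whose mirror update won an
	 * earlier race may not have pushed its S-EPT update yet.  If its write
	 * lands after this one, the real S-EPT diverges from the mirror, as in
	 * the zap vs. map example above.
	 */
	return propagate_to_s_ept(new_spte);
}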
Gracefully retrying only papers over the visible failures; the really
problematic scenarios are those where multiple updates race and _don't_
trigger conflicts in the TDX module.

> We previously used a spinlock around the SEAMCALLs to avoid this, but looks
> like that is not preferred.

That doesn't address the race above either.  And even if it did, serializing
all S-EPT SEAMCALLs for a VM is not an option, at least not in the long term.

The least invasive idea I have is to expand the TDP MMU's concept of "frozen"
SPTEs and freeze (a.k.a. lock) the SPTE (KVM's mirror) until the corresponding
S-EPT update completes.

The other idea is to scrap the mirror concept entirely, though I gotta imagine
that would provide pretty awful performance.

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0d8deefee66c..bcb398e71475 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -198,9 +198,9 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
 /* Removed SPTEs must not be misconstrued as shadow present PTEs. */
 static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
 
-static inline bool is_removed_spte(u64 spte)
+static inline bool is_frozen_spte(u64 spte)
 {
-	return spte == REMOVED_SPTE;
+	return spte == REMOVED_SPTE || spte & FROZEN_SPTE;
 }
 
 /* Get an SPTE's index into its parent's page table (and the spt array). */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..7f34eccadf98 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -651,6 +651,9 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
+	if (<is TDX> && new_spte != REMOVED_SPTE)
+		new_spte |= FROZEN_SPTE;
+
 	/*
 	 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
 	 * does not hold the mmu_lock.
@@ -662,6 +665,9 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 			      new_spte, iter->level, true);
 	handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
 
+	if (<is TDX> && new_spte != REMOVED_SPTE)
+		__kvm_tdp_mmu_write_spte(iter->sptep, new_spte);
+
 	return 0;
 }
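To spell out the ordering the above is going for (illustrative sketch only;
the <is TDX> placeholder presumably gates on the SPTE being a TDX/mirrored
one, mirror_cmpxchg() and sept_seamcall_set_spte() are made-up names, and in
the diff the freeze happens inside tdp_mmu_set_spte_atomic() with the S-EPT
update presumably done via the handle_changed_spte() path in this series):

/*
 * Illustrative writer-side sequence under the FROZEN_SPTE scheme.  The
 * helpers are made-up placeholders: mirror_cmpxchg() stands in for the
 * try_cmpxchg64() on KVM's mirror SPTE, sept_seamcall_set_spte() for the
 * SEAMCALL that updates the real S-EPT.
 */
static int frozen_spte_update(u64 *mirror_sptep, u64 old_spte, u64 new_spte)
{
	/* 1) Freeze: racing writers see is_frozen_spte() and back off. */
	if (!mirror_cmpxchg(mirror_sptep, old_spte, new_spte | FROZEN_SPTE))
		return -EBUSY;

	/* 2) Update the real S-EPT while the mirror entry is frozen. */
	sept_seamcall_set_spte(new_spte);

	/* 3) Publish the final value, unfreezing the entry for other walkers. */
	WRITE_ONCE(*mirror_sptep, new_spte);
	return 0;
}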