Reply-To: Sean Christopherson
Date: Tue, 22 Jun 2021 10:56:57 -0700
In-Reply-To: <20210622175739.3610207-1-seanjc@google.com>
Message-Id: <20210622175739.3610207-13-seanjc@google.com>
References: <20210622175739.3610207-1-seanjc@google.com>
X-Mailer: git-send-email 2.32.0.288.g62a8d224e6-goog
Subject: [PATCH 12/54] KVM: x86/mmu: Drop the intermediate "transient" __kvm_sync_page()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Yu Zhang, Maxim Levitsky
X-Mailing-List: linux-kernel@vger.kernel.org

Move the kvm_unlink_unsync_page() call out of kvm_sync_page() and into
its sole caller, and fold __kvm_sync_page() into kvm_sync_page() since
the latter becomes a pure pass-through. There really should be no
reason for code to do a complete sync of a shadow page outside of the
full kvm_mmu_sync_roots(), e.g. the one use case that crept in turned
out to be flawed and counter-productive.
Update the comment in kvm_mmu_get_page() regarding its sync_page()
usage, which is anything but obvious. Drop the stale comment about
@sp->gfn needing to be write-protected, as it directly contradicts the
kvm_mmu_get_page() usage. No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2e2d66319325..77296ce6215f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1780,18 +1780,6 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
-/* @sp->gfn should be write-protected at the call site */
-static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			    struct list_head *invalid_list)
-{
-	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
-		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
-		return false;
-	}
-
-	return true;
-}
-
 static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
 					struct list_head *invalid_list,
 					bool remote_flush)
@@ -1833,8 +1821,12 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
-	kvm_unlink_unsync_page(vcpu->kvm, sp);
-	return __kvm_sync_page(vcpu, sp, invalid_list);
+	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
+		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
+		return false;
+	}
+
+	return true;
 }
 
 struct mmu_page_path {
@@ -1931,6 +1923,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 		}
 
 		for_each_sp(pages, sp, parents, i) {
+			kvm_unlink_unsync_page(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list);
 			mmu_pages_clear_parents(&parents);
 		}
@@ -2008,10 +2001,19 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			goto trace_get_page;
 
 		if (sp->unsync) {
-			/* The page is good, but __kvm_sync_page might still end
-			 * up zapping it.  If so, break in order to rebuild it.
+			/*
+			 * The page is good, but is stale.  "Sync" the page to
+			 * get the latest guest state, but don't write-protect
+			 * the page and don't mark it synchronized!  KVM needs
+			 * to ensure the mapping is valid, but doesn't need to
+			 * fully sync (write-protect) the page until the guest
+			 * invalidates the TLB mapping.  This allows multiple
+			 * SPs for a single gfn to be unsync.
+			 *
+			 * If the sync fails, the page is zapped.  If so, break
+			 * in order to rebuild it.
 			 */
-			if (!__kvm_sync_page(vcpu, sp, &invalid_list))
+			if (!kvm_sync_page(vcpu, sp, &invalid_list))
 				break;
 
 			WARN_ON(!list_empty(&invalid_list));
-- 
2.32.0.288.g62a8d224e6-goog