From: "Hou Wenlong" <houwenlong.hwl@antgroup.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
	Lai Jiangshan, linux-kernel@vger.kernel.org
Subject: [PATCH] KVM: x86/mmu: Don't rebuild page when the page is synced and no tlb flushing is required
Date: Tue, 15 Mar 2022 17:35:13 +0800
Message-Id: <0dabeeb789f57b0d793f85d073893063e692032d.1647336064.git.houwenlong.hwl@antgroup.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Before commit c3e5e415bc1e6 ("KVM: X86: Change kvm_sync_page() to return
true when remote flush is needed"), the return value of kvm_sync_page()
indicated whether the page was synced, and kvm_mmu_get_page() would
rebuild the page when the sync failed. Now, however, kvm_sync_page()
also returns false when the page is synced but no TLB flush is required,
which causes kvm_mmu_get_page() to rebuild a page that is already
synced.

Fix this by returning the result of mmu->sync_page() directly and
checking it in kvm_mmu_get_page(): only a negative value (sync failure)
triggers a rebuild. In mmu_sync_children(), if the sync fails the page
is zapped and invalid_list is non-empty, so unconditionally setting
flush to true there remains correct.
Fixes: c3e5e415bc1e6 ("KVM: X86: Change kvm_sync_page() to return true when remote flush is needed")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..8efd165ee27c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1866,17 +1866,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
-static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
 	int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
 
-	if (ret < 0) {
+	if (ret < 0)
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
-		return false;
-	}
-
-	return !!ret;
+	return ret;
 }
 
 static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
@@ -2039,6 +2036,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	struct hlist_head *sp_list;
 	unsigned quadrant;
 	struct kvm_mmu_page *sp;
+	int ret;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
@@ -2091,11 +2089,13 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			 * If the sync fails, the page is zapped.  If so, break
 			 * in order to rebuild it.
 			 */
-			if (!kvm_sync_page(vcpu, sp, &invalid_list))
+			ret = kvm_sync_page(vcpu, sp, &invalid_list);
+			if (ret < 0)
 				break;
 
 			WARN_ON(!list_empty(&invalid_list));
-			kvm_flush_remote_tlbs(vcpu->kvm);
+			if (ret > 0)
+				kvm_flush_remote_tlbs(vcpu->kvm);
 		}
 
 		__clear_sp_write_flooding_count(sp);
-- 
2.31.1