From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Takuya Yoshikawa, Xiao Guangrong
Subject: [PATCH 4/9] KVM: MMU: cleanup __kvm_sync_page and its callers
Date: Mon,  7 Mar 2016 15:15:50 +0100
Message-Id: <1457360155-9610-6-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1457360155-9610-1-git-send-email-pbonzini@redhat.com>
References: <1457360155-9610-1-git-send-email-pbonzini@redhat.com>

Calling kvm_unlink_unsync_page in the middle of __kvm_sync_page makes
things unnecessarily tricky: if kvm_mmu_prepare_zap_page is called, it
calls kvm_unlink_unsync_page too.  So kvm_unlink_unsync_page can just
as well be called at the beginning or the end of __kvm_sync_page...
which means we can do it in kvm_sync_page instead and remove the
parameter.

kvm_sync_page ends up being the same code that kvm_sync_pages had
before the previous patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 56be33714036..88a1a79c869e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1917,16 +1917,13 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 /* @sp->gfn should be write-protected at the call site */
 static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			   struct list_head *invalid_list, bool clear_unsync)
+			   struct list_head *invalid_list)
 {
 	if (sp->role.cr4_pae != !!is_pae(vcpu)) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return 1;
 	}
 
-	if (clear_unsync)
-		kvm_unlink_unsync_page(vcpu->kvm, sp);
-
 	if (vcpu->arch.mmu.sync_page(vcpu, sp)) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return 1;
@@ -1956,7 +1953,7 @@ static int kvm_sync_page_transient(struct kvm_vcpu *vcpu,
 	LIST_HEAD(invalid_list);
 	int ret;
 
-	ret = __kvm_sync_page(vcpu, sp, &invalid_list, false);
+	ret = __kvm_sync_page(vcpu, sp, &invalid_list);
 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, !ret);
 
 	return ret;
@@ -1972,7 +1969,8 @@ static void mmu_audit_disable(void) { }
 static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			 struct list_head *invalid_list)
 {
-	return __kvm_sync_page(vcpu, sp, invalid_list, true);
+	kvm_unlink_unsync_page(vcpu->kvm, sp);
+	return __kvm_sync_page(vcpu, sp, invalid_list);
 }
 
 /* @gfn should be write-protected at the call site */
-- 
1.8.3.1
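
[Editor's note: the equivalence argument in the commit message above can
be checked with a minimal userspace toy model.  This is a sketch, not
kernel code; every name suffixed _model below is a hypothetical stand-in
for the real KVM structures and functions.  It only demonstrates that
unlinking unconditionally in the caller leaves the page unlinked on every
path, exactly as the old clear_unsync=true behavior did.]

/* Toy model of the post-patch control flow; compile with any C99 compiler. */
#include <stdbool.h>
#include <stdio.h>

struct page_model {
	bool unsync;		/* stands in for the sp->unsync bookkeeping */
	bool stale_role;	/* stands in for the cr4_pae/is_pae mismatch */
	bool sync_fails;	/* stands in for sync_page() failing */
};

static void unlink_unsync_model(struct page_model *sp)
{
	sp->unsync = false;
}

static void prepare_zap_model(struct page_model *sp)
{
	/* kvm_mmu_prepare_zap_page() unlinks unsync pages as well, which
	 * is why hoisting the unlink out of __kvm_sync_page() is safe. */
	unlink_unsync_model(sp);
}

/* Models __kvm_sync_page() after the patch: no clear_unsync parameter. */
static int sync_page_inner_model(struct page_model *sp)
{
	if (sp->stale_role) {
		prepare_zap_model(sp);
		return 1;
	}
	if (sp->sync_fails) {
		prepare_zap_model(sp);
		return 1;
	}
	return 0;
}

/* Models kvm_sync_page() after the patch: the unlink moves to the caller. */
static int sync_page_model(struct page_model *sp)
{
	unlink_unsync_model(sp);
	return sync_page_inner_model(sp);
}

int main(void)
{
	struct page_model cases[] = {
		{ .unsync = true, .stale_role = true },
		{ .unsync = true, .sync_fails = true },
		{ .unsync = true /* successful sync */ },
	};

	for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		int zapped = sync_page_model(&cases[i]);
		/* Every path prints unsync=0: the page always ends up
		 * unlinked, matching the old clear_unsync=true semantics. */
		printf("case %u: zapped=%d unsync=%d\n",
		       i, zapped, (int)cases[i].unsync);
	}
	return 0;
}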