Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751099AbdIEKoU (ORCPT );
	Tue, 5 Sep 2017 06:44:20 -0400
Received: from mga01.intel.com ([192.55.52.88]:15104 "EHLO mga01.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750762AbdIEKoS (ORCPT );
	Tue, 5 Sep 2017 06:44:18 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.41,479,1498546800"; d="scan'208";a="897198362"
From: changbin.du@intel.com
To: pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de,
	mingo@redhat.com, hpa@zytor.com
Cc: x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Changbin Du <changbin.du@intel.com>
Subject: [RESEND][PATCH] kvm: x86: Do not handle MMIO request in fast_page_fault
Date: Tue, 5 Sep 2017 18:37:42 +0800
Message-Id: <1504607862-8497-1-git-send-email-changbin.du@intel.com>
X-Mailer: git-send-email 2.7.4
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1395
Lines: 46

From: Changbin Du <changbin.du@intel.com>

If it is an MMIO request, it should be handled by the slow path.
This patch fixes the warning below when MMU debugging is enabled.

WARNING: CPU: 5 PID: 2282 at arch/x86/kvm/mmu.c:226 fast_page_fault+0x41b/0x520
CPU: 5 PID: 2282 Comm: qemu-system-x86 Not tainted 4.13.0-rc6+ #34
task: ffff9b47f5286000 task.stack: ffffb18d03b28000
RIP: 0010:fast_page_fault+0x41b/0x520
Call Trace:
 tdp_page_fault+0xfb/0x290
 kvm_mmu_page_fault+0x61/0x120
 handle_ept_misconfig+0x1ba/0x1c0
 vmx_handle_exit+0xb8/0xd70
 ? kvm_arch_vcpu_ioctl_run+0x9b6/0x18e0
 kvm_arch_vcpu_ioctl_run+0xa5a/0x18e0
 ? kvm_arch_vcpu_load+0x62/0x230
 kvm_vcpu_ioctl+0x340/0x6c0
 ? kvm_vcpu_ioctl+0x340/0x6c0
 ? lock_acquire+0xf5/0x1f0
 do_vfs_ioctl+0xa2/0x670
 ? __fget+0x107/0x200
 SyS_ioctl+0x79/0x90
 entry_SYSCALL_64_fastpath+0x23/0xc2

Signed-off-by: Changbin Du <changbin.du@intel.com>
---
 arch/x86/kvm/mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9d3f275..63c3360 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3180,6 +3180,9 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 		    iterator.level < level)
 			break;
 
+		if (is_mmio_spte(spte))
+			break;
+
 		sp = page_header(__pa(iterator.sptep));
 		if (!is_last_spte(spte, sp->role.level))
 			break;
-- 
2.7.4