Date: Wed, 7 Oct 2020 13:55:52 -0700
From: Sean Christopherson
To: Ben Gardon
Cc: LKML, kvm, Cannon Matthews,
    Paolo Bonzini, Peter Xu, Peter Shier, Peter Feiner, Junaid Shahid,
    Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong
Subject: Re: [PATCH 10/22] kvm: mmu: Add TDP MMU PF handler
Message-ID: <20201007205552.GB2138@linux.intel.com>
References: <20200925212302.3979661-1-bgardon@google.com>
 <20200925212302.3979661-11-bgardon@google.com>
 <20200930163740.GD32672@linux.intel.com>

On Tue, Oct 06, 2020 at 03:33:21PM -0700, Ben Gardon wrote:
> On Wed, Sep 30, 2020 at 9:37 AM Sean Christopherson wrote:
> >
> > On Fri, Sep 25, 2020 at 02:22:50PM -0700, Ben Gardon wrote:
> > > @@ -4113,8 +4088,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > >  	if (page_fault_handle_page_track(vcpu, error_code, gfn))
> > >  		return RET_PF_EMULATE;
> > >
> > > -	if (fast_page_fault(vcpu, gpa, error_code))
> > > -		return RET_PF_RETRY;
> > > +	if (!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > > +		if (fast_page_fault(vcpu, gpa, error_code))
> > > +			return RET_PF_RETRY;
> >
> > It'll probably be easier to handle is_tdp_mmu() in fast_page_fault().
>
> I'd prefer to keep this check here because then in the fast page fault
> path, we can just handle the case where we do have a tdp mmu root with
> the tdp mmu fast pf handler and it'll mirror the split below with
> __direct_map and the TDP MMU PF handler.

Hmm, what about adding wrappers for these few cases where TDP MMU splits
cleanly from the existing paths?  The thought being that it would keep the
control flow somewhat straightforward, and might also help us keep the two
paths aligned (more below; rough sketches at the bottom of this mail).

> > >
> > >  	r = mmu_topup_memory_caches(vcpu, false);
> > >  	if (r)
> > > @@ -4139,8 +4115,14 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> > >  	r = make_mmu_pages_available(vcpu);
> > >  	if (r)
> > >  		goto out_unlock;
> > > -	r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
> > > -			 prefault, is_tdp && lpage_disallowed);
> > > +
> > > +	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> > > +		r = kvm_tdp_mmu_page_fault(vcpu, write, map_writable, max_level,
> > > +					   gpa, pfn, prefault,
> > > +					   is_tdp && lpage_disallowed);
> > > +	else
> > > +		r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
> > > +				 prefault, is_tdp && lpage_disallowed);

Somewhat tangentially related to the above, it feels like the TDP MMU helper
here would be better named tdp_mmu_map() or so.  KVM has already done the
"fault" part, in that it has faulted in the page (if relevant) and obtained
a pfn.  What's left is the actual insertion into the TDP page tables.

And again related to the helper, ideally tdp_mmu_map() and __direct_map()
would have identical prototypes.  Ditto for the fast page fault paths.  In
theory, that would allow the compiler to generate identical preamble, with
only the final check being different.  And if the compiler isn't smart
enough to do that on its own, we might even make the wrapper non-inline,
with an "unlikely" annotation to coerce the compiler to generate a tail
call for the preferred path (again, see the second sketch below).

> > >
> > >  out_unlock:
> > >  	spin_unlock(&vcpu->kvm->mmu_lock);
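
To make the wrapper idea a bit more concrete, here's a rough sketch of the
map-path split.  Illustrative only: direct_map() is a made-up wrapper name,
tdp_mmu_map() is the suggested rename (with the argument list reordered to
match __direct_map() exactly, unlike the current kvm_tdp_mmu_page_fault()),
and the parameter names/types are guessed from the call site in the quoted
hunk rather than taken from the series.

static int direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, bool write,
		      bool map_writable, int max_level, kvm_pfn_t pfn,
		      bool prefault, bool account_disallowed_nx_lpage)
{
	/*
	 * Sketch only: both helpers take the exact same arguments, so the
	 * two paths stay aligned and only the final call differs.
	 */
	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
		return tdp_mmu_map(vcpu, gpa, write, map_writable, max_level,
				   pfn, prefault,
				   account_disallowed_nx_lpage);

	return __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
			    prefault, account_disallowed_nx_lpage);
}

direct_page_fault() would then keep a single call site, e.g.

	r = direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
		       prefault, is_tdp && lpage_disallowed);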
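
And for the fast page fault split, again just a sketch: fast_page_fault_tdp()
is a hypothetical TDP MMU fast PF handler with the same prototype and bool
return as fast_page_fault() has at the call site above.  Which branch gets
the unlikely() obviously depends on which MMU we expect to be the common
case; the legacy path is shown as the preferred/tail-call target here.

static noinline bool do_fast_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa,
					u32 error_code)
{
	/*
	 * Out of line on purpose: with the TDP MMU check marked unlikely(),
	 * the hope is that the compiler emits a tail call to
	 * fast_page_fault() for the common case, so the preamble is shared
	 * and the preferred path only pays for the root check.
	 */
	if (unlikely(is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)))
		return fast_page_fault_tdp(vcpu, gpa, error_code);

	return fast_page_fault(vcpu, gpa, error_code);
}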