Message-ID: <43098446667829fc592b7cc7d5fd463319d37562.camel@intel.com>
Subject: Re: [RFC PATCH v5 038/104] KVM: x86/mmu: Allow per-VM override of the TDP max page level
From: Kai Huang
To: Sean Christopherson
Cc: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl
Date: Sat, 02 Apr 2022 11:27:03 +1300
References: <5cc4b1c90d929b7f4f9829a42c0b63b52af0c1ed.1646422845.git.isaku.yamahata@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 2022-04-01 at 14:08 +0000, Sean Christopherson wrote:
> On Fri, Apr 01, 2022, Kai Huang wrote:
> > On Fri, 2022-03-04 at 11:48 -0800, isaku.yamahata@intel.com wrote:
> > > From: Sean Christopherson
> > > 
> > > In the existing x86 KVM MMU code, there is already a max_level member
> > > in struct kvm_page_fault, initialized to KVM_MAX_HUGEPAGE_LEVEL. The
> > > KVM page fault handler denies page sizes larger than max_level.
> > > 
> > > Add a per-VM member to indicate the allowed maximum page size, with
> > > KVM_MAX_HUGEPAGE_LEVEL as the default value, and initialize max_level
> > > in struct kvm_page_fault with it.
> > > 
> > > For a guest TD, set the per-VM value to limit the maximum page size
> > > to 4KB. The only allowed page size is then 4KB, which means large
> > > pages are disabled.
> > 
> > Not supporting large pages for TDs is the reason you want this change,
> > not the result. Please refine a little bit.
> 
> Not supporting huge pages was fine for the PoC, but I'd prefer not to
> merge TDX without support for huge pages. Has any work been put into
> enabling huge pages? If so, what's the technical blocker? If not...

Hi Sean,

Is there any reason large page support must be included in the initial
merge of TDX? Large pages are more of a performance improvement, I think.
Given this series is already very big, perhaps we can do it later.

-- 
Thanks,
-Kai