From: isaku.yamahata@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
	Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, erdemaktas@google.com, Connor Kuehl,
	Sean Christopherson, x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com,
	Sean Christopherson, Kai Huang
Subject: [RFC PATCH v2 27/69] KVM: x86: Add flag to mark TSC as immutable (for TDX)
Date: Fri, 2 Jul 2021 15:04:33 -0700
Message-Id:
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Sean Christopherson

The TSC for TDX1 guests is fixed at TD creation time.  Add tsc_immutable
to reflect that the TSC of the guest cannot be changed in any way, and
use it to short-circuit all paths that lead to one of the myriad TSC
adjustment flows.

Suggested-by: Kai Huang
Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
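Note (illustrative only, not part of this patch): tsc_immutable is meant
to be set once by a backend whose guest TSC is fixed at creation time
(e.g. a TDX TD), before any vCPU is created, so that
kvm_arch_vcpu_postcreate() and the other paths touched below never reach
a TSC adjustment flow.  A minimal sketch, assuming a hypothetical VM-init
hook name:

#include <linux/kvm_host.h>

/*
 * Hypothetical VM-init hook for a backend whose guest TSC is fixed at
 * creation time.  Setting tsc_immutable before any vCPU exists keeps
 * kvm_arch_vcpu_postcreate() from calling kvm_synchronize_tsc() and
 * short-circuits the other adjustment paths patched below.
 */
static int example_fixed_tsc_vm_init(struct kvm *kvm)
{
	kvm->arch.tsc_immutable = true;
	return 0;
}

With the flag set, KVM_SET_TSC_KHZ on such a VM's vCPUs fails with
-EINVAL and host-side TSC synchronization is skipped entirely.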
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/x86.c              | 35 +++++++++++++++++++++++++--------
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 09e51c5e86b3..5d6143643cd1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1044,6 +1044,7 @@ struct kvm_arch {
 	int audit_point;
 #endif
 
+	bool tsc_immutable;
 	bool backwards_tsc_observed;
 	bool boot_vcpu_runs_old_kvmclock;
 	u32 bsp_vcpu_id;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 681fc3be2b2b..cd9407982366 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2184,7 +2184,9 @@ static int set_tsc_khz(struct kvm_vcpu *vcpu, u32 user_tsc_khz, bool scale)
 	u64 ratio;
 
 	/* Guest TSC same frequency as host TSC? */
-	if (!scale) {
+	if (!scale || vcpu->kvm->arch.tsc_immutable) {
+		if (scale)
+			pr_warn_ratelimited("Guest TSC immutable, scaling not supported\n");
 		vcpu->arch.tsc_scaling_ratio = kvm_default_tsc_scaling_ratio;
 		return 0;
 	}
@@ -2360,6 +2362,9 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 	bool already_matched;
 	bool synchronizing = false;
 
+	if (WARN_ON_ONCE(vcpu->kvm->arch.tsc_immutable))
+		return;
+
 	raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
 	offset = kvm_compute_tsc_offset(vcpu, data);
 	ns = get_kvmclock_base_ns();
@@ -2791,6 +2796,10 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 	u8 pvclock_flags;
 	bool use_master_clock;
 
+	/* Unable to update guest time if the TSC is immutable. */
+	if (ka->tsc_immutable)
+		return 0;
+
 	kernel_ns = 0;
 	host_tsc = 0;
 
@@ -4142,7 +4151,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		if (tsc_delta < 0)
 			mark_tsc_unstable("KVM discovered backwards TSC");
 
-		if (kvm_check_tsc_unstable()) {
+		if (kvm_check_tsc_unstable() &&
+		    !vcpu->kvm->arch.tsc_immutable) {
 			u64 offset = kvm_compute_tsc_offset(vcpu,
 						vcpu->arch.last_guest_tsc);
 			kvm_vcpu_write_tsc_offset(vcpu, offset);
@@ -4156,7 +4166,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * On a host with synchronized TSC, there is no need to update
 	 * kvmclock on vcpu->cpu migration
 	 */
-	if (!vcpu->kvm->arch.use_master_clock || vcpu->cpu == -1)
+	if ((!vcpu->kvm->arch.use_master_clock || vcpu->cpu == -1) &&
+	    !vcpu->kvm->arch.tsc_immutable)
 		kvm_make_request(KVM_REQ_GLOBAL_CLOCK_UPDATE, vcpu);
 	if (vcpu->cpu != cpu)
 		kvm_make_request(KVM_REQ_MIGRATE_TIMER, vcpu);
@@ -5126,10 +5137,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SET_TSC_KHZ: {
-		u32 user_tsc_khz;
+		u32 user_tsc_khz = (u32)arg;
 
 		r = -EINVAL;
-		user_tsc_khz = (u32)arg;
+		if (vcpu->kvm->arch.tsc_immutable)
+			goto out;
 
 		if (kvm_has_tsc_control &&
 		    user_tsc_khz >= kvm_max_guest_tsc_khz)
@@ -10499,9 +10511,12 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 	if (mutex_lock_killable(&vcpu->mutex))
 		return;
-	vcpu_load(vcpu);
-	kvm_synchronize_tsc(vcpu, 0);
-	vcpu_put(vcpu);
+
+	if (!kvm->arch.tsc_immutable) {
+		vcpu_load(vcpu);
+		kvm_synchronize_tsc(vcpu, 0);
+		vcpu_put(vcpu);
+	}
 
 	/* poll control enabled by default */
 	vcpu->arch.msr_kvm_poll_control = 1;
@@ -10696,6 +10711,10 @@ int kvm_arch_hardware_enable(void)
 	if (backwards_tsc) {
 		u64 delta_cyc = max_tsc - local_tsc;
 		list_for_each_entry(kvm, &vm_list, vm_list) {
+			if (kvm->arch.tsc_immutable) {
+				pr_warn_ratelimited("Backwards TSC observed and guest with immutable TSC active\n");
+				continue;
+			}
 			kvm->arch.backwards_tsc_observed = true;
 			kvm_for_each_vcpu(i, vcpu, kvm) {
 				vcpu->arch.tsc_offset_adjustment += delta_cyc;
-- 
2.25.1