From: Atish Patra <atishp@rivosinc.com>
To: linux-kernel@vger.kernel.org
Cc: Atish Patra, Alexandre Ghiti, Andrew Jones, Andrew Morton, Anup Patel,
	Björn Töpel, Suzuki K Poulose, Will Deacon, Marc Zyngier,
	Sean Christopherson, linux-coco@lists.linux.dev, Dylan Reid,
	abrestic@rivosinc.com, Samuel Ortiz, Christoph Hellwig, Conor Dooley,
	Greg Kroah-Hartman, Guo Ren, Heiko Stuebner, Jiri Slaby,
	kvm-riscv@lists.infradead.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, Mayuresh Chitale, Palmer Dabbelt,
	Paolo Bonzini, Paul Walmsley, Rajnesh Kanwal, Uladzislau Rezki
Subject: [RFC 28/48] RISC-V: KVM: Add interrupt management functions for TVM
Date: Wed, 19 Apr 2023 15:16:56 -0700
Message-Id: <20230419221716.3603068-29-atishp@rivosinc.com>
In-Reply-To: <20230419221716.3603068-1-atishp@rivosinc.com>
References: <20230419221716.3603068-1-atishp@rivosinc.com>

The COVI SBI extension defines the functions related to interrupt
management for TVMs. These functions are the glue logic between the AIA
code and the CoVE Interrupt SBI extension (COVI).

Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
 arch/riscv/include/asm/kvm_cove.h |  34 ++++
 arch/riscv/kvm/cove.c             | 256 ++++++++++++++++++++++++++++++
 2 files changed, 290 insertions(+)
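[ Illustrative note, not part of the patch: a minimal sketch of how a caller
  could drive the new API, under the assumption that the AIA glue code wires
  these up during TVM setup and at runtime. The helper name below is
  hypothetical; the functions and signatures are the ones added by this
  patch. ]

	/* Hypothetical example: set up TVM interrupt state and inject an IRQ. */
	static int example_tvm_irq_setup(struct kvm *kvm, struct kvm_vcpu *vcpu,
					 unsigned long iid)
	{
		int ret;

		/* Describe the TVM's IMSIC geometry to the TSM (once per TVM). */
		ret = kvm_riscv_cove_aia_init(kvm);
		if (ret)
			return ret;

		/* Program the guest physical address of this vCPU's IMSIC. */
		ret = kvm_riscv_cove_vcpu_imsic_addr(vcpu);
		if (ret)
			return ret;

		/* At runtime: inject external interrupt 'iid' into the TVM vCPU. */
		return kvm_riscv_cove_vcpu_inject_interrupt(vcpu, iid);
	}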
diff --git a/arch/riscv/include/asm/kvm_cove.h b/arch/riscv/include/asm/kvm_cove.h
index b63682f..74bad2f 100644
--- a/arch/riscv/include/asm/kvm_cove.h
+++ b/arch/riscv/include/asm/kvm_cove.h
@@ -61,10 +61,19 @@ struct kvm_riscv_cove_page {
 	unsigned long gpa;
 };
 
+struct imsic_tee_state {
+	bool bind_required;
+	bool bound;
+	int vsfile_hgei;
+};
+
 struct kvm_cove_tvm_vcpu_context {
 	struct kvm_vcpu *vcpu;
 	/* Pages storing each vcpu state of the TVM in TSM */
 	struct kvm_riscv_cove_page vcpu_state;
+
+	/* Per VCPU imsic state */
+	struct imsic_tee_state imsic;
 };
 
 struct kvm_cove_tvm_context {
@@ -133,6 +142,16 @@ int kvm_riscv_cove_vm_add_memreg(struct kvm *kvm, unsigned long gpa, unsigned lo
 int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa, unsigned long hva);
 /* Fence related function */
 int kvm_riscv_cove_tvm_fence(struct kvm_vcpu *vcpu);
+
+/* AIA related CoVE functions */
+int kvm_riscv_cove_aia_init(struct kvm *kvm);
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid);
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_cpu);
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask);
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu);
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa);
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu);
 #else
 static inline bool kvm_riscv_cove_enabled(void) {return false; };
 static inline int kvm_riscv_cove_init(void) { return -1; }
@@ -162,6 +181,21 @@ static inline int kvm_riscv_cove_vm_measure_pages(struct kvm *kvm,
 }
 static inline int kvm_riscv_cove_gstage_map(struct kvm_vcpu *vcpu, gpa_t gpa,
 					    unsigned long hva) {return -1; }
+/* AIA related TEE functions */
+static inline int kvm_riscv_cove_aia_init(struct kvm *kvm) { return -1; }
+static inline int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu,
+						       unsigned long iid) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu,
+						   int old_cpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu,
+						 unsigned long imsic_mask) { return -1; }
+static inline int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu,
+						 phys_addr_t imsic_pa) { return -1; }
+static inline int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu,
+						   phys_addr_t imsic_pa) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu) { return -1; }
+static inline int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu,
+						   int old_pcpu) { return -1; }
 #endif /* CONFIG_RISCV_COVE_HOST */
 
 #endif /* __KVM_RISCV_COVE_H */
diff --git a/arch/riscv/kvm/cove.c b/arch/riscv/kvm/cove.c
index 4a8a8db..154b01a 100644
--- a/arch/riscv/kvm/cove.c
+++ b/arch/riscv/kvm/cove.c
@@ -8,6 +8,7 @@
  * Atish Patra <atishp@rivosinc.com>
  */
 
+#include <linux/smp.h>
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kvm_host.h>
@@ -137,6 +138,247 @@ __always_inline bool kvm_riscv_cove_enabled(void)
 	return riscv_cove_enabled;
 }
 
+static void kvm_cove_imsic_clone(void *info)
+{
+	int rc;
+	struct kvm_vcpu *vcpu = info;
+	struct kvm *kvm = vcpu->kvm;
+
+	rc = sbi_covi_rebind_vcpu_imsic_clone(kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (rc)
+		kvm_err("Imsic clone failed guest %ld vcpu %d pcpu %d\n",
+			kvm->arch.tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
+static void kvm_cove_imsic_unbind(void *info)
+{
+	struct kvm_vcpu *vcpu = info;
+	struct kvm_cove_tvm_context *tvmc = vcpu->kvm->arch.tvmc;
+
+	/* TODO: We probably want to return an error here, but a remote function call cannot return one */
+	if (sbi_covi_unbind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	/* This may issue IPIs to running vcpus. */
+	if (kvm_riscv_cove_tvm_fence(vcpu))
+		return;
+
+	if (sbi_covi_unbind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx))
+		return;
+
+	kvm_info("Unbind success for guest %ld vcpu %d pcpu %d\n",
+		 tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+}
+
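+/*
+ * Tell the TSM the guest physical address of this vCPU's virtual IMSIC
+ * via the COVI set-address call.
+ */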
+int kvm_riscv_cove_vcpu_imsic_addr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_set_vcpu_imsic_addr(tvmc->tvm_guest_id, vcpu->vcpu_idx, vaia->imsic_addr);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_convert_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_convert_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	ret = kvm_riscv_cove_fence();
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int kvm_riscv_cove_aia_claim_imsic(struct kvm_vcpu *vcpu, phys_addr_t imsic_pa)
+{
+	int ret;
+	struct kvm *kvm = vcpu->kvm;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	ret = sbi_covi_reclaim_imsic(imsic_pa);
+	if (ret)
+		return -EPERM;
+
+	return 0;
+}
+
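+/*
+ * Move a vCPU's IMSIC binding to the current physical CPU: start the rebind
+ * in the TSM, fence the TVM's vCPUs, clone the old interrupt file state on
+ * the old pCPU, then complete the rebind here.
+ */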
+int kvm_riscv_cove_vcpu_imsic_rebind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_rebind_vcpu_imsic_begin(tvmc->tvm_guest_id, vcpu->vcpu_idx,
+					       BIT(tvcpu->imsic.vsfile_hgei));
+	if (ret) {
+		kvm_err("Imsic rebind begin failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	ret = kvm_riscv_cove_tvm_fence(vcpu);
+	if (ret)
+		return ret;
+
+	cpumask_clear(&tmpmask);
+	cpumask_set_cpu(old_pcpu, &tmpmask);
+	on_each_cpu_mask(&tmpmask, kvm_cove_imsic_clone, vcpu, 1);
+
+	ret = sbi_covi_rebind_vcpu_imsic_end(tvmc->tvm_guest_id, vcpu->vcpu_idx);
+	if (ret) {
+		kvm_err("Imsic rebind end failed guest %ld vcpu %d pcpu %d\n",
+			tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+
+	tvcpu->imsic.bound = true;
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_bind(struct kvm_vcpu *vcpu, unsigned long imsic_mask)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_bind_vcpu_imsic(tvmc->tvm_guest_id, vcpu->vcpu_idx, imsic_mask);
+	if (ret) {
+		kvm_err("Imsic bind failed for imsic %lx guest %ld vcpu %d pcpu %d\n",
+			imsic_mask, tvmc->tvm_guest_id, vcpu->vcpu_idx, smp_processor_id());
+		return ret;
+	}
+	tvcpu->imsic.bound = true;
+	kvm_info("%s: bind success vcpu %d hgei %d pcpu %d\n", __func__,
+		 vcpu->vcpu_idx, tvcpu->imsic.vsfile_hgei, smp_processor_id());
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_imsic_unbind(struct kvm_vcpu *vcpu, int old_pcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_cove_tvm_vcpu_context *tvcpu = vcpu->arch.tc;
+	cpumask_t tmpmask;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	/* No need to unbind if it is not bound already */
+	if (!tvcpu->imsic.bound)
+		return 0;
+
+	/* Clear the flag first so that a failure does not lead to a retry */
+	tvcpu->imsic.bound = false;
+
+	if (smp_processor_id() == old_pcpu) {
+		kvm_cove_imsic_unbind(vcpu);
+	} else {
+		/* Unbind can be invoked from a different physical cpu */
+		cpumask_clear(&tmpmask);
+		cpumask_set_cpu(old_pcpu, &tmpmask);
+		on_each_cpu_mask(&tmpmask, kvm_cove_imsic_unbind, vcpu, 1);
+	}
+
+	return 0;
+}
+
+int kvm_riscv_cove_vcpu_inject_interrupt(struct kvm_vcpu *vcpu, unsigned long iid)
+{
+	struct kvm_cove_tvm_context *tvmc;
+	struct kvm *kvm = vcpu->kvm;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	ret = sbi_covi_inject_external_interrupt(tvmc->tvm_guest_id, vcpu->vcpu_idx, iid);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
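+/*
+ * Describe the TVM's IMSIC geometry (group/hart/guest index bits and the
+ * base address of the interrupt file with group, hart & guest ID of 0) to
+ * the TSM. TVMs require a physical guest interrupt file, i.e. HWACCEL mode.
+ */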
+int kvm_riscv_cove_aia_init(struct kvm *kvm)
+{
+	struct kvm_aia *aia = &kvm->arch.aia;
+	struct sbi_cove_tvm_aia_params *tvm_aia;
+	struct kvm_vcpu *vcpu;
+	struct kvm_cove_tvm_context *tvmc;
+	int ret;
+
+	if (!kvm->arch.tvmc)
+		return -EINVAL;
+
+	tvmc = kvm->arch.tvmc;
+
+	/* Sanity check */
+	if (aia->aplic_addr != KVM_RISCV_AIA_UNDEF_ADDR)
+		return -EINVAL;
+
+	/* TVMs must have a physical guest interrupt file */
+	if (aia->mode != KVM_DEV_RISCV_AIA_MODE_HWACCEL)
+		return -ENODEV;
+
+	tvm_aia = kzalloc(sizeof(*tvm_aia), GFP_KERNEL);
+	if (!tvm_aia)
+		return -ENOMEM;
+
+	/* Address of the IMSIC with group ID, hart ID & guest ID of 0 */
+	vcpu = kvm_get_vcpu_by_id(kvm, 0);
+	tvm_aia->imsic_base_addr = vcpu->arch.aia_context.imsic_addr;
+
+	tvm_aia->group_index_bits = aia->nr_group_bits;
+	tvm_aia->group_index_shift = aia->nr_group_shift;
+	tvm_aia->hart_index_bits = aia->nr_hart_bits;
+	tvm_aia->guest_index_bits = aia->nr_guest_bits;
+	/* Nested TVMs are not supported yet */
+	tvm_aia->guests_per_hart = 0;
+
+	ret = sbi_covi_tvm_aia_init(tvmc->tvm_guest_id, tvm_aia);
+	if (ret)
+		kvm_err("TVM AIA init failed with rc %d\n", ret);
+
+	return ret;
+}
+
 void kvm_riscv_cove_vcpu_load(struct kvm_vcpu *vcpu)
 {
 	kvm_riscv_vcpu_timer_restore(vcpu);
@@ -283,6 +525,7 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
 	void *nshmem;
 	struct kvm_guest_timer *gt = &kvm->arch.timer;
+	struct kvm_cove_tvm_vcpu_context *tvcpuc = vcpu->arch.tc;
 
 	if (!kvm->arch.tvmc)
 		return;
@@ -301,6 +544,19 @@ void noinstr kvm_riscv_cove_vcpu_switchto(struct kvm_vcpu *vcpu, struct kvm_cpu_
 		tvmc->finalized_done = true;
 	}
 
+	/*
+	 * Bind the vsfile here instead of during the new vsfile allocation
+	 * because the COVH bind call requires the TVM to be in a finalized
+	 * state.
+	 */
+	if (tvcpuc->imsic.bind_required) {
+		tvcpuc->imsic.bind_required = false;
+		rc = kvm_riscv_cove_vcpu_imsic_bind(vcpu, BIT(tvcpuc->imsic.vsfile_hgei));
+		if (rc) {
+			kvm_err("bind failed with rc %d\n", rc);
+			return;
+		}
+	}
+
 	rc = sbi_covh_run_tvm_vcpu(tvmc->tvm_guest_id, vcpu->vcpu_idx);
 	if (rc) {
 		trap->scause = EXC_CUSTOM_KVM_COVE_RUN_FAIL;
-- 
2.25.1