From: Shenming Lu
To: Marc Zyngier, Eric Auger, Will Deacon
Cc: Alex Williamson, Cornelia Huck, Lorenzo Pieralisi
Subject: [PATCH v5 4/6] KVM: arm64: GICv4.1: Try to save VLPI state in save_pending_tables
Date: Mon, 22 Mar 2021 14:01:56 +0800
Message-ID: <20210322060158.1584-5-lushenming@huawei.com>
In-Reply-To: <20210322060158.1584-1-lushenming@huawei.com>
References: <20210322060158.1584-1-lushenming@huawei.com>
X-Mailer: git-send-email 2.27.0.windows.1
X-Mailing-List: linux-kernel@vger.kernel.org

After pausing all vCPUs and devices capable of interrupting, in order to
save the states of all interrupts, besides flushing the states in kvm's
vgic, we also try to flush the states of VLPIs in the virtual pending
tables into guest RAM. But this is only possible on GICv4.1, and the
vPEs have to be safely unmapped first.

Saving VSGI state needs the vPEs to be mapped, so it might seem to
conflict with the saving of VLPIs. But since we map the vPEs back at
the end of save_pending_tables, and both save paths require kvm->lock
to be held (so they can only happen serially), this works fine.

Signed-off-by: Shenming Lu <lushenming@huawei.com>
---
 arch/arm64/kvm/vgic/vgic-v3.c | 66 +++++++++++++++++++++++++++++++----
 1 file changed, 60 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 6f530925a231..41ecf219c333 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
 #include <linux/irqchip/arm-gic-v3.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <kvm/arm_vgic.h>
@@ -356,6 +358,32 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
 	return 0;
 }
 
+/*
+ * The deactivation of the doorbell interrupt will trigger the
+ * unmapping of the associated vPE.
+ */
+static void unmap_all_vpes(struct vgic_dist *dist)
+{
+	struct irq_desc *desc;
+	int i;
+
+	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
+	}
+}
+
+static void map_all_vpes(struct vgic_dist *dist)
+{
+	struct irq_desc *desc;
+	int i;
+
+	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
+		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
+		irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
+	}
+}
+
 /**
  * vgic_v3_save_pending_tables - Save the pending tables into guest RAM
  * kvm lock and all vcpu lock must be held
@@ -365,13 +393,28 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
 	struct vgic_dist *dist = &kvm->arch.vgic;
 	struct vgic_irq *irq;
 	gpa_t last_ptr = ~(gpa_t)0;
-	int ret;
+	bool vlpi_avail = false;
+	int ret = 0;
 	u8 val;
 
+	if (unlikely(!vgic_initialized(kvm)))
+		return -ENXIO;
+
+	/*
+	 * A preparation for getting any VLPI states.
+	 * The above vgic initialized check also ensures that the allocation
+	 * and enabling of the doorbells have already been done.
+	 */
+	if (kvm_vgic_global_state.has_gicv4_1) {
+		unmap_all_vpes(dist);
+		vlpi_avail = true;
+	}
+
 	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
 		int byte_offset, bit_nr;
 		struct kvm_vcpu *vcpu;
 		gpa_t pendbase, ptr;
+		bool is_pending;
 		bool stored;
 
 		vcpu = irq->target_vcpu;
@@ -387,24 +430,35 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
 		if (ptr != last_ptr) {
 			ret = kvm_read_guest_lock(kvm, ptr, &val, 1);
 			if (ret)
-				return ret;
+				goto out;
 			last_ptr = ptr;
 		}
 
 		stored = val & (1U << bit_nr);
-		if (stored == irq->pending_latch)
+
+		is_pending = irq->pending_latch;
+
+		if (irq->hw && vlpi_avail)
+			vgic_v4_get_vlpi_state(irq, &is_pending);
+
+		if (stored == is_pending)
 			continue;
 
-		if (irq->pending_latch)
+		if (is_pending)
 			val |= 1 << bit_nr;
 		else
 			val &= ~(1 << bit_nr);
 
 		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
 		if (ret)
-			return ret;
+			goto out;
 	}
-	return 0;
+
+out:
+	if (vlpi_avail)
+		map_all_vpes(dist);
+
+	return ret;
 }
 
 /**
-- 
2.19.1
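
---

A note for context, not part of the series: vgic_v3_save_pending_tables() is
what backs the KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES device attribute, i.e. it
runs when a VMM asks KVM to flush LPI pending state into the guest's pending
tables before migration. Below is a minimal sketch of how userspace reaches
this path; the vgic_fd variable and the surrounding VM setup are assumptions
for illustration only.

	/*
	 * Illustrative sketch: trigger the pending-table save from
	 * userspace. Assumes vgic_fd is a KVM_DEV_TYPE_ARM_VGIC_V3
	 * device fd obtained earlier via KVM_CREATE_DEVICE, and that
	 * all vCPUs have already been paused by the VMM.
	 */
	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <stdio.h>

	static int save_lpi_pending_tables(int vgic_fd)
	{
		struct kvm_device_attr attr = {
			.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
			.attr  = KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES,
		};

		/*
		 * In the kernel this takes kvm->lock and ends up in
		 * vgic_v3_save_pending_tables(); with this patch, on
		 * GICv4.1 it also unmaps the vPEs so that VLPI state
		 * can be read back from the virtual pending tables.
		 */
		if (ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr) < 0) {
			perror("KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES");
			return -1;
		}
		return 0;
	}

With this patch applied, that single ioctl captures VLPI pending state that
previously lived only in hardware-owned vPTs, which is what makes the
migration of GICv4.1 guests with directly injected LPIs possible.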