X-Mailing-List: linux-kernel@vger.kernel.org
From: Gavin Shan
To: kvmarm@lists.cs.columbia.edu
Cc: linux-kernel@vger.kernel.org, james.morse@arm.com, mark.rutland@arm.com,
    Jonathan.Cameron@huawei.com, will@kernel.org, maz@kernel.org,
    pbonzini@redhat.com
Subject: [PATCH v4 13/21] KVM: arm64: Implement SDEI event delivery
Date: Sun, 15 Aug 2021 08:13:44 +0800
Message-Id: <20210815001352.81927-14-gshan@redhat.com>
In-Reply-To: <20210815001352.81927-1-gshan@redhat.com>
References: <20210815001352.81927-1-gshan@redhat.com>

This implements kvm_sdei_deliver() to support SDEI event delivery. The
function is called when the KVM_REQ_SDEI request is raised. The following
rules are applied according to the SDEI specification:

* x0 - x17 are saved. All of them are cleared except the following
  registers:

  x0: number of the SDEI event to be delivered
  x1: parameter associated with the SDEI event
  x2: PC of the interrupted context
  x3: PState of the interrupted context

* PC is set to the handler of the SDEI event, which was provided during
  its registration. PState is modified accordingly.

* An SDEI event with critical priority can preempt one with normal
  priority.
Signed-off-by: Gavin Shan
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_sdei.h |  1 +
 arch/arm64/kvm/arm.c              |  3 ++
 arch/arm64/kvm/sdei.c             | 84 +++++++++++++++++++++++++++++++
 4 files changed, 89 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index aedf901e1ec7..46f363aa6524 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -47,6 +47,7 @@
 #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
 #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
 #define KVM_REQ_RELOAD_PMU	KVM_ARCH_REQ(5)
+#define KVM_REQ_SDEI		KVM_ARCH_REQ(6)
 
 #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
				     KVM_DIRTY_LOG_INITIALLY_SET)
diff --git a/arch/arm64/include/asm/kvm_sdei.h b/arch/arm64/include/asm/kvm_sdei.h
index b0abc13a0256..7f5f5ad689e6 100644
--- a/arch/arm64/include/asm/kvm_sdei.h
+++ b/arch/arm64/include/asm/kvm_sdei.h
@@ -112,6 +112,7 @@ KVM_SDEI_FLAG_FUNC(enabled)
 void kvm_sdei_init_vm(struct kvm *kvm);
 void kvm_sdei_create_vcpu(struct kvm_vcpu *vcpu);
 int kvm_sdei_hypercall(struct kvm_vcpu *vcpu);
+void kvm_sdei_deliver(struct kvm_vcpu *vcpu);
 void kvm_sdei_destroy_vcpu(struct kvm_vcpu *vcpu);
 void kvm_sdei_destroy_vm(struct kvm *kvm);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2f021aa41632..0c3db1ef1ba9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -689,6 +689,9 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu))
 			kvm_reset_vcpu(vcpu);
 
+		if (kvm_check_request(KVM_REQ_SDEI, vcpu))
+			kvm_sdei_deliver(vcpu);
+
 		/*
 		 * Clear IRQ_PENDING requests that were made to guarantee
 		 * that a VCPU sees new virtual interrupts.
diff --git a/arch/arm64/kvm/sdei.c b/arch/arm64/kvm/sdei.c
index 62efee2b67b8..b5d6d1ed3858 100644
--- a/arch/arm64/kvm/sdei.c
+++ b/arch/arm64/kvm/sdei.c
@@ -671,6 +671,90 @@ int kvm_sdei_hypercall(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+void kvm_sdei_deliver(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_sdei_kvm *ksdei = kvm->arch.sdei;
+	struct kvm_sdei_vcpu *vsdei = vcpu->arch.sdei;
+	struct kvm_sdei_event *kse = NULL;
+	struct kvm_sdei_kvm_event *kske = NULL;
+	struct kvm_sdei_vcpu_event *ksve = NULL;
+	struct kvm_sdei_vcpu_regs *regs = NULL;
+	unsigned long pstate;
+	int index = 0;
+
+	/* Sanity check */
+	if (!(ksdei && vsdei))
+		return;
+
+	/* The critical event can't be preempted */
+	spin_lock(&vsdei->lock);
+	if (vsdei->critical_event)
+		goto unlock;
+
+	/*
+	 * The normal event can be preempted by the critical event.
+	 * However, the normal event can't be preempted by another
+	 * normal event.
+	 */
+	ksve = list_first_entry_or_null(&vsdei->critical_events,
+			struct kvm_sdei_vcpu_event, link);
+	if (!ksve && !vsdei->normal_event) {
+		ksve = list_first_entry_or_null(&vsdei->normal_events,
+				struct kvm_sdei_vcpu_event, link);
+	}
+
+	if (!ksve)
+		goto unlock;
+
+	kske = ksve->kske;
+	kse = kske->kse;
+	if (kse->state.priority == SDEI_EVENT_PRIORITY_CRITICAL) {
+		vsdei->critical_event = ksve;
+		vsdei->state.critical_num = ksve->state.num;
+		regs = &vsdei->state.critical_regs;
+	} else {
+		vsdei->normal_event = ksve;
+		vsdei->state.normal_num = ksve->state.num;
+		regs = &vsdei->state.normal_regs;
+	}
+
+	/* Save registers: x0 -> x17, PC, PState */
+	for (index = 0; index < ARRAY_SIZE(regs->regs); index++)
+		regs->regs[index] = vcpu_get_reg(vcpu, index);
+
+	regs->pc = *vcpu_pc(vcpu);
+	regs->pstate = *vcpu_cpsr(vcpu);
+
+	/*
+	 * Inject SDEI event: x0 -> x3, PC, PState. We needn't take lock
+	 * for the KVM event as it can't be destroyed because of its
+	 * reference count.
+	 */
+	for (index = 0; index < ARRAY_SIZE(regs->regs); index++)
+		vcpu_set_reg(vcpu, index, 0);
+
+	index = (kse->state.type == SDEI_EVENT_TYPE_PRIVATE) ?
+		vcpu->vcpu_idx : 0;
+	vcpu_set_reg(vcpu, 0, kske->state.num);
+	vcpu_set_reg(vcpu, 1, kske->state.params[index]);
+	vcpu_set_reg(vcpu, 2, regs->pc);
+	vcpu_set_reg(vcpu, 3, regs->pstate);
+
+	pstate = regs->pstate;
+	pstate |= (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT);
+	pstate &= ~PSR_MODE_MASK;
+	pstate |= PSR_MODE_EL1h;
+	pstate &= ~PSR_MODE32_BIT;
+
+	vcpu_write_sys_reg(vcpu, regs->pstate, SPSR_EL1);
+	*vcpu_cpsr(vcpu) = pstate;
+	*vcpu_pc(vcpu) = kske->state.entries[index];
+
+unlock:
+	spin_unlock(&vsdei->lock);
+}
+
 void kvm_sdei_init_vm(struct kvm *kvm)
 {
	struct kvm_sdei_kvm *ksdei;
-- 
2.23.0