From: Yong-Xuan Wang <yongxuan.wang@sifive.com>
To: linux-riscv@lists.infradead.org, kvm-riscv@lists.infradead.org
Cc: greentime.hu@sifive.com, vincent.chen@sifive.com, Yong-Xuan Wang,
    Anup Patel, Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/2] RISCV: KVM: Introduce mp_state_lock to avoid lock inversion in SBI_EXT_HSM_HART_START
Date: Wed, 17 Apr 2024 15:45:25 +0800
Message-Id: <20240417074528.16506-2-yongxuan.wang@sifive.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240417074528.16506-1-yongxuan.wang@sifive.com>
References: <20240417074528.16506-1-yongxuan.wang@sifive.com>

Documentation/virt/kvm/locking.rst advises that kvm->lock should be
acquired outside vcpu->mutex and kvm->srcu. However, when KVM/RISC-V
handles SBI_EXT_HSM_HART_START, the lock ordering is vcpu->mutex,
kvm->srcu, then kvm->lock. Although lockdep no longer complains about
this ordering after commit f0f44752f5f6 ("rcu: Annotate SRCU's
update-side lockdep dependencies"), it is still necessary to replace
kvm->lock with a new dedicated lock so that only one hart at a time can
execute the SBI_EXT_HSM_HART_START call for a given target hart.

Additionally, this patch renames "power_off" to "mp_state" with two
possible values. The new vcpu->mp_state_lock also protects access to
vcpu->mp_state.

Signed-off-by: Yong-Xuan Wang <yongxuan.wang@sifive.com>
---
 arch/riscv/include/asm/kvm_host.h |  7 ++--
 arch/riscv/kvm/vcpu.c             | 56 ++++++++++++++++++++++++-------
 arch/riscv/kvm/vcpu_sbi.c         |  7 ++--
 arch/riscv/kvm/vcpu_sbi_hsm.c     | 23 ++++++++-----
 4 files changed, 68 insertions(+), 25 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 484d04a92fa6..64d35a8c908c 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -252,8 +252,9 @@ struct kvm_vcpu_arch {
 	/* Cache pages needed to program page tables with spinlock held */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
-	/* VCPU power-off state */
-	bool power_off;
+	/* VCPU power state */
+	struct kvm_mp_state mp_state;
+	spinlock_t mp_state_lock;
 
 	/* Don't run the VCPU (blocked) */
 	bool pause;
@@ -375,7 +376,9 @@ void kvm_riscv_vcpu_flush_interrupts(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_sync_interrupts(struct kvm_vcpu *vcpu);
 bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask);
 void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu);
+void __kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu);
+bool kvm_riscv_vcpu_stopped(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_sbi_sta_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_record_steal_time(struct kvm_vcpu *vcpu);
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index b5ca9f2e98ac..70937f71c3c4 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -102,6 +102,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *cntx;
 	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
 
+	spin_lock_init(&vcpu->arch.mp_state_lock);
+
 	/* Mark this VCPU never ran */
 	vcpu->arch.ran_atleast_once = false;
 	vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
@@ -201,7 +203,7 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
 {
 	return (kvm_riscv_vcpu_has_interrupts(vcpu, -1UL) &&
-		!vcpu->arch.power_off && !vcpu->arch.pause);
+		!kvm_riscv_vcpu_stopped(vcpu) && !vcpu->arch.pause);
 }
 
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
@@ -429,26 +431,50 @@ bool kvm_riscv_vcpu_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
 	return kvm_riscv_vcpu_aia_has_interrupts(vcpu, mask);
 }
 
-void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
+static void __kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.power_off = true;
+	vcpu->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
 	kvm_make_request(KVM_REQ_SLEEP, vcpu);
 	kvm_vcpu_kick(vcpu);
 }
 
-void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_power_off(struct kvm_vcpu *vcpu)
+{
+	spin_lock(&vcpu->arch.mp_state_lock);
+	__kvm_riscv_vcpu_power_off(vcpu);
+	spin_unlock(&vcpu->arch.mp_state_lock);
+}
+
+void __kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.power_off = false;
+	vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
 	kvm_vcpu_wake_up(vcpu);
 }
 
+void kvm_riscv_vcpu_power_on(struct kvm_vcpu *vcpu)
+{
+	spin_lock(&vcpu->arch.mp_state_lock);
+	__kvm_riscv_vcpu_power_on(vcpu);
+	spin_unlock(&vcpu->arch.mp_state_lock);
+}
+
+bool kvm_riscv_vcpu_stopped(struct kvm_vcpu *vcpu)
+{
+	bool ret;
+
+	spin_lock(&vcpu->arch.mp_state_lock);
+	ret = vcpu->arch.mp_state.mp_state == KVM_MP_STATE_STOPPED;
+	spin_unlock(&vcpu->arch.mp_state_lock);
+
+	return ret;
+}
+
 int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
-	if (vcpu->arch.power_off)
-		mp_state->mp_state = KVM_MP_STATE_STOPPED;
-	else
-		mp_state->mp_state = KVM_MP_STATE_RUNNABLE;
+	spin_lock(&vcpu->arch.mp_state_lock);
+	*mp_state = vcpu->arch.mp_state;
+	spin_unlock(&vcpu->arch.mp_state_lock);
 
 	return 0;
 }
@@ -458,17 +484,21 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 {
 	int ret = 0;
 
+	spin_lock(&vcpu->arch.mp_state_lock);
+
 	switch (mp_state->mp_state) {
 	case KVM_MP_STATE_RUNNABLE:
-		vcpu->arch.power_off = false;
+		vcpu->arch.mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
 		break;
 	case KVM_MP_STATE_STOPPED:
-		kvm_riscv_vcpu_power_off(vcpu);
+		__kvm_riscv_vcpu_power_off(vcpu);
 		break;
 	default:
 		ret = -EINVAL;
 	}
 
+	spin_unlock(&vcpu->arch.mp_state_lock);
+
 	return ret;
 }
 
@@ -584,11 +614,11 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_SLEEP, vcpu)) {
 			kvm_vcpu_srcu_read_unlock(vcpu);
 			rcuwait_wait_event(wait,
-				(!vcpu->arch.power_off) && (!vcpu->arch.pause),
+				(!kvm_riscv_vcpu_stopped(vcpu)) && (!vcpu->arch.pause),
 				TASK_INTERRUPTIBLE);
 			kvm_vcpu_srcu_read_lock(vcpu);
 
-			if (vcpu->arch.power_off || vcpu->arch.pause) {
+			if (kvm_riscv_vcpu_stopped(vcpu) || vcpu->arch.pause) {
 				/*
 				 * Awaken to handle a signal, request to
 				 * sleep again later.
diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
index 72a2ffb8dcd1..1851fc979bd2 100644
--- a/arch/riscv/kvm/vcpu_sbi.c
+++ b/arch/riscv/kvm/vcpu_sbi.c
@@ -138,8 +138,11 @@ void kvm_riscv_vcpu_sbi_system_reset(struct kvm_vcpu *vcpu,
 	unsigned long i;
 	struct kvm_vcpu *tmp;
 
-	kvm_for_each_vcpu(i, tmp, vcpu->kvm)
-		tmp->arch.power_off = true;
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		spin_lock(&vcpu->arch.mp_state_lock);
+		tmp->arch.mp_state.mp_state = KVM_MP_STATE_STOPPED;
+		spin_unlock(&vcpu->arch.mp_state_lock);
+	}
 	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
 
 	memset(&run->system_event, 0, sizeof(run->system_event));
diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
index 7dca0e9381d9..115a6c6525fd 100644
--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
+++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
@@ -18,12 +18,18 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_vcpu *target_vcpu;
 	unsigned long target_vcpuid = cp->a0;
+	int ret = 0;
 
 	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
 	if (!target_vcpu)
 		return SBI_ERR_INVALID_PARAM;
-	if (!target_vcpu->arch.power_off)
-		return SBI_ERR_ALREADY_AVAILABLE;
+
+	spin_lock(&target_vcpu->arch.mp_state_lock);
+
+	if (target_vcpu->arch.mp_state.mp_state != KVM_MP_STATE_STOPPED) {
+		ret = SBI_ERR_ALREADY_AVAILABLE;
+		goto out;
+	}
 
 	reset_cntx = &target_vcpu->arch.guest_reset_context;
 	/* start address */
@@ -34,14 +40,18 @@ static int kvm_sbi_hsm_vcpu_start(struct kvm_vcpu *vcpu)
 	reset_cntx->a1 = cp->a2;
 	kvm_make_request(KVM_REQ_VCPU_RESET, target_vcpu);
 
-	kvm_riscv_vcpu_power_on(target_vcpu);
+	__kvm_riscv_vcpu_power_on(target_vcpu);
+
+out:
+	spin_unlock(&target_vcpu->arch.mp_state_lock);
 
 	return 0;
 }
 
 static int kvm_sbi_hsm_vcpu_stop(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.power_off)
+	if (kvm_riscv_vcpu_stopped(vcpu))
 		return SBI_ERR_FAILURE;
 
 	kvm_riscv_vcpu_power_off(vcpu);
@@ -58,7 +68,7 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
 	target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
 	if (!target_vcpu)
 		return SBI_ERR_INVALID_PARAM;
-	if (!target_vcpu->arch.power_off)
+	if (!kvm_riscv_vcpu_stopped(target_vcpu))
 		return SBI_HSM_STATE_STARTED;
 	else if (vcpu->stat.generic.blocking)
 		return SBI_HSM_STATE_SUSPENDED;
@@ -71,14 +81,11 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 {
 	int ret = 0;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
-	struct kvm *kvm = vcpu->kvm;
 	unsigned long funcid = cp->a6;
 
 	switch (funcid) {
 	case SBI_EXT_HSM_HART_START:
-		mutex_lock(&kvm->lock);
 		ret = kvm_sbi_hsm_vcpu_start(vcpu);
-		mutex_unlock(&kvm->lock);
 		break;
 	case SBI_EXT_HSM_HART_STOP:
 		ret = kvm_sbi_hsm_vcpu_stop(vcpu);
-- 
2.17.1
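
For readers following the locking argument without the RISC-V KVM tree at
hand, the pattern the patch adopts can be summarized in a small stand-alone
sketch: a per-vCPU spinlock guards the power state, locked wrappers serve
ordinary callers, and "__" helpers serve callers that already hold the lock,
so the HSM start path never needs a VM-wide mutex. Everything below (the
vcpu struct, the pthread spinlock, the error value) is illustrative only,
not the KVM/RISC-V API.

/*
 * Stand-alone user-space sketch of the per-vCPU power-state lock pattern.
 * Build with: cc -Wall -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

enum mp_state { MP_STATE_RUNNABLE, MP_STATE_STOPPED };

struct vcpu {
	pthread_spinlock_t mp_state_lock;	/* stands in for spinlock_t */
	enum mp_state mp_state;
};

/* "__" variant: the caller already holds vcpu->mp_state_lock. */
static void __vcpu_power_on(struct vcpu *vcpu)
{
	vcpu->mp_state = MP_STATE_RUNNABLE;
}

/* Locked wrapper for callers that do not hold the lock. */
static void vcpu_power_off(struct vcpu *vcpu)
{
	pthread_spin_lock(&vcpu->mp_state_lock);
	vcpu->mp_state = MP_STATE_STOPPED;
	pthread_spin_unlock(&vcpu->mp_state_lock);
}

/*
 * HSM-style HART_START: the check and the state change happen under the
 * target's own lock, so two racing starts of the same target cannot both
 * succeed, and no VM-wide mutex (the source of the inversion) is needed.
 */
static int vcpu_hsm_start(struct vcpu *target)
{
	int ret = 0;

	pthread_spin_lock(&target->mp_state_lock);
	if (target->mp_state != MP_STATE_STOPPED)
		ret = -1;			/* already available */
	else
		__vcpu_power_on(target);	/* lock already held */
	pthread_spin_unlock(&target->mp_state_lock);

	return ret;
}

int main(void)
{
	struct vcpu v;

	pthread_spin_init(&v.mp_state_lock, PTHREAD_PROCESS_PRIVATE);
	vcpu_power_off(&v);
	printf("first start:  %d\n", vcpu_hsm_start(&v));	/* 0: started */
	printf("second start: %d\n", vcpu_hsm_start(&v));	/* -1: already running */
	pthread_spin_destroy(&v.mp_state_lock);
	return 0;
}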