Subject: Re: [PATCH v5 1/2] arm/arm64: KVM: Add KVM_GET/SET_VCPU_EVENTS
To: James Morse
References: <1529960309-2513-1-git-send-email-gengdongjiu@huawei.com> <1529960309-2513-2-git-send-email-gengdongjiu@huawei.com>
From: gengdongjiu
Message-ID: <606b9734-7f1f-5fc6-69e0-8f2c72f9fc55@huawei.com>
Date: Mon, 2 Jul 2018 17:13:01 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

Hi James,

On 2018/6/29 23:59, James Morse wrote:
> Hi Dongjiu Geng,
>
> On 25/06/18 21:58, Dongjiu Geng wrote:
>> For migrating VMs, user space may need to know the exception
>> state. For example, if on machine A KVM has made an SError pending,
>> then after migrating to machine B, KVM also needs to pend an SError.
>>
>> This new IOCTL exports user-invisible state related to SError.
>> Together with appropriate user space changes, user space can get/set
>> the SError exception state to do migrate/snapshot/suspend.
>
>
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>> index 469de8a..357304a 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -335,6 +335,11 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
>>  int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
>>  int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>>  int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
>> +int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
>> +        struct kvm_vcpu_events *events);
>> +
>> +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
>> +        struct kvm_vcpu_events *events);
>>
>
> (Nit: funny indentation)

The indentation was chosen to keep the lines under 80 characters. I will
change it as below to make it more readable:

+int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
+                            struct kvm_vcpu_events *events);
+
+int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
+                            struct kvm_vcpu_events *events);

>
>
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 56a0260..8be14cc 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -289,6 +289,49 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
>
>> +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
>> +        struct kvm_vcpu_events *events)
>> +{
>> +        int i;
>> +        bool serror_pending = events->exception.serror_pending;
>> +        bool has_esr = events->exception.serror_has_esr;
>> +
>> +        /* check whether the reserved field is zero */
>> +        for (i = 0; i < ARRAY_SIZE(events->reserved); i++)
>> +                if (events->reserved[i])
>> +                        return -EINVAL;
>> +
>> +        /* check whether the pad field is zero */
>> +        for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++)
>> +                if (events->exception.pad[i])
>> +                        return -EINVAL;
>> +
>> +        if (serror_pending && has_esr) {
>> +                if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
>> +                        return -EINVAL;
>> +
>
>> +                kvm_set_sei_esr(vcpu, events->exception.serror_esr);
>
> This silently discards all but the bottom 24 bits of serror_esr.
>
> It makes sense that this field is 64 bit, because the register is 64 bit,
> and it would let us use this API to migrate any new state that appears in
> the higher bits... But those bits will come with an ID/feature field; we
> shouldn't accept an attempt to restore them on a CPU that doesn't support
> the feature. If that happens here, it silently succeeds, but the kernel
> just threw the extra bits away.
>
> You added documentation that only the bottom 24 bits can be set; can we
> add checks to enforce this, so the bits can be used later?

Yes, sure, I will add that check. Thanks for the suggestion.
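Something like the following, perhaps (just an untested sketch; it assumes
ESR_ELx_ISS_MASK from <asm/esr.h> is the right mask for the architected
24-bit ISS field that user space is allowed to set):

        if (serror_pending && has_esr) {
                if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
                        return -EINVAL;

                /* Reject bits outside the architected 24-bit ISS field. */
                if (events->exception.serror_esr & ~((u64)ESR_ELx_ISS_MASK))
                        return -EINVAL;

                kvm_set_sei_esr(vcpu, events->exception.serror_esr);
        }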
>
>
>> +        } else if (serror_pending) {
>> +                kvm_inject_vabt(vcpu);
>> +        }
>> +
>> +        return 0;
>> +}
>
>> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
>> index a4c1b76..4e6f366 100644
>> --- a/virt/kvm/arm/arm.c
>> +++ b/virt/kvm/arm/arm.c
>> @@ -1107,6 +1107,27 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>                  r = kvm_arm_vcpu_has_attr(vcpu, &attr);
>>                  break;
>>          }
>> +#ifdef __KVM_HAVE_VCPU_EVENTS
>
> So it's this #ifdef, or a uapi struct for a feature 32-bit doesn't support.
> I think the right thing to do is wire this up for 32-bit; it also calls
> kvm_inject_vabt() in handle_exit.c, so it must have the same migration
> problems.
>
> I'll post a patch to do this as I've got something I can test it on.

OK, so for now I will use "#ifdef __KVM_HAVE_VCPU_EVENTS" here to avoid
build errors on 32-bit platforms; once you post your patch, the "#ifdef"
can be removed. Thanks.

>
>
>> +        case KVM_GET_VCPU_EVENTS: {
>> +                struct kvm_vcpu_events events;
>> +
>> +                if (kvm_arm_vcpu_get_events(vcpu, &events))
>> +                        return -EINVAL;
>> +
>> +                if (copy_to_user(argp, &events, sizeof(events)))
>> +                        return -EFAULT;
>> +
>> +                return 0;
>> +        }
>> +        case KVM_SET_VCPU_EVENTS: {
>> +                struct kvm_vcpu_events events;
>> +
>> +                if (copy_from_user(&events, argp, sizeof(events)))
>> +                        return -EFAULT;
>> +
>> +                return kvm_arm_vcpu_set_events(vcpu, &events);
>> +        }
>> +#endif
>
> (It bugs me that the architecture has some rules about merging multiple
> architected ESR values, which we neither enforce nor document as user
> space's problem. It doesn't matter for RAS, but might for any future ESR
> encodings. But I guess user space wouldn't be aware of them anyway, and it
> can already put bogus values in SPSR/ESR/ELR etc.)
>
>
> With a check against the top bits of ESR:
> Reviewed-by: James Morse

Thanks for the "Reviewed-by"; I will add the check against the top bits
of the ESR.

>
>
> Thanks,
>
> James
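As a footnote on how user space is expected to drive these ioctls during
migration, the flow is roughly as below (a hypothetical sketch only, not
part of this series; error handling and the rest of the vCPU state are
omitted, and vcpu_fd is assumed to be an open KVM vCPU file descriptor):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Source side: snapshot any pending SError state for this vCPU. */
static int save_vcpu_events(int vcpu_fd, struct kvm_vcpu_events *events)
{
        return ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, events);
}

/* Destination side: re-pend the SError before the vCPU first runs. */
static int restore_vcpu_events(int vcpu_fd, struct kvm_vcpu_events *events)
{
        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, events);
}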