Date: Wed, 24 Aug 2022 22:25:26 +0000
From: Sean Christopherson
To: Maxim Levitsky
Cc: kvm@vger.kernel.org, Borislav Petkov, Dave Hansen,
	linux-kernel@vger.kernel.org, Wanpeng Li, Ingo Molnar, x86@kernel.org,
	Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin",
	Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: Re: [PATCH v3 08/13] KVM: x86: emulator/smm: use smram structs in the common code
References: <20220803155011.43721-1-mlevitsk@redhat.com>
	<20220803155011.43721-9-mlevitsk@redhat.com>
In-Reply-To: <20220803155011.43721-9-mlevitsk@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 03, 2022, Maxim Levitsky wrote:
> Switch from using a raw array to 'union kvm_smram'.
>
> Signed-off-by: Maxim Levitsky
> ---
>  arch/x86/include/asm/kvm_host.h |  5 +++--
>  arch/x86/kvm/emulate.c          | 12 +++++++-----
>  arch/x86/kvm/kvm_emulate.h      |  3 ++-
>  arch/x86/kvm/svm/svm.c          |  8 ++++++--
>  arch/x86/kvm/vmx/vmx.c          |  4 ++--
>  arch/x86/kvm/x86.c              | 16 ++++++++--------
>  6 files changed, 28 insertions(+), 20 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e8281d64a4315a..d752fabde94ad2 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -204,6 +204,7 @@ typedef enum exit_fastpath_completion fastpath_t;
>
>  struct x86_emulate_ctxt;
>  struct x86_exception;
> +union kvm_smram;
>  enum x86_intercept;
>  enum x86_intercept_stage;
>
> @@ -1600,8 +1601,8 @@ struct kvm_x86_ops {
>  	void (*setup_mce)(struct kvm_vcpu *vcpu);
>
>  	int (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
> -	int (*enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
> -	int (*leave_smm)(struct kvm_vcpu *vcpu, const char *smstate);
> +	int (*enter_smm)(struct kvm_vcpu *vcpu, union kvm_smram *smram);
> +	int (*leave_smm)(struct kvm_vcpu *vcpu, const union kvm_smram *smram);
>  	void (*enable_smi_window)(struct kvm_vcpu *vcpu);
>
>  	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index 55d9328e6074a2..610978d00b52b0 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -2594,16 +2594,18 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
>  static int em_rsm(struct x86_emulate_ctxt *ctxt)
>  {
>  	unsigned long cr0, cr4, efer;
> -	char buf[512];
> +	const union kvm_smram smram;

This is blatantly wrong, ctxt->ops->read_phys() writes to the buffer.  I assume
you did this to make it more difficult to modify the buffer after reading from
guest memory, but IMO that's not worth misleading readers.

>  	u64 smbase;
>  	int ret;
>
> +	BUILD_BUG_ON(sizeof(smram) != 512);
> +
>  	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0)
>  		return emulate_ud(ctxt);
>
>  	smbase = ctxt->ops->get_smbase(ctxt);
>
> -	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf));
> +	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, (void *)&smram, sizeof(smram));

The point of the union + bytes is so that KVM doesn't have to cast.

	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, smram.bytes, sizeof(smram));

>  	if (ret != X86EMUL_CONTINUE)
>  		return X86EMUL_UNHANDLEABLE;
>
> @@ -2653,15 +2655,15 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
>  	 * state (e.g. enter guest mode) before loading state from the SMM
>  	 * state-save area.
>  	 */
> -	if (ctxt->ops->leave_smm(ctxt, buf))
> +	if (ctxt->ops->leave_smm(ctxt, &smram))
>  		goto emulate_shutdown;
>
>  #ifdef CONFIG_X86_64
>  	if (emulator_has_longmode(ctxt))
> -		ret = rsm_load_state_64(ctxt, buf);
> +		ret = rsm_load_state_64(ctxt, (const char *)&smram);
>  	else
>  #endif
> -		ret = rsm_load_state_32(ctxt, buf);
> +		ret = rsm_load_state_32(ctxt, (const char *)&smram);

Same thing here, though this is temporary.  And it's kinda silly, but I think it
makes sense to avoid the cast here by tweaking the rsm_load_state_*() helpers to
take "u8 *" instead of "char *".

>  	if (ret != X86EMUL_CONTINUE)
>  		goto emulate_shutdown;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 38f873cb6f2c14..688315d1dfabd1 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4433,12 +4433,14 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
>  	return 1;
>  }
>
> -static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
> +static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
> {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  	struct kvm_host_map map_save;
>  	int ret;
>
> +	char *smstate = (char *)smram;

Again temporary, but since this is new code, just make it

	u8 *smstate = smram->bytes;

> +
>  	if (!is_guest_mode(vcpu))
>  		return 0;
>
> @@ -4480,7 +4482,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
>  	return 0;
>  }
>
> -static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
> +static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  	struct kvm_host_map map, map_save;
> @@ -4488,6 +4490,8 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
>  	struct vmcb *vmcb12;
>  	int ret;
>
> +	const char *smstate = (const char *)smram;
> +

And here.

>  	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
>  		return 0;
>

E.g.
this compiles cleanly on top

---
 arch/x86/kvm/emulate.c | 17 +++++++++--------
 arch/x86/kvm/svm/svm.c |  4 ++--
 arch/x86/kvm/x86.c     |  7 ++++---
 3 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index dd0a08af1dd9..b2ef63cf6cff 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2357,7 +2357,7 @@ static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags)
 	desc->type = (flags >> 8) & 15;
 }
 
-static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
+static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const u8 *smstate,
 			   int n)
 {
 	struct desc_struct desc;
@@ -2379,7 +2379,7 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
 }
 
 #ifdef CONFIG_X86_64
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate,
+static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const u8 *smstate,
 			   int n)
 {
 	struct desc_struct desc;
@@ -2446,7 +2446,7 @@ static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 }
 
 static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     const u8 *smstate)
 {
 	struct desc_struct desc;
 	struct desc_ptr dt;
@@ -2507,7 +2507,7 @@ static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
 
 #ifdef CONFIG_X86_64
 static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
-			     const char *smstate)
+			     const u8 *smstate)
 {
 	struct desc_struct desc;
 	struct desc_ptr dt;
@@ -2580,7 +2580,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
 static int em_rsm(struct x86_emulate_ctxt *ctxt)
 {
 	unsigned long cr0, cr4, efer;
-	const union kvm_smram smram;
+	union kvm_smram smram;
 	u64 smbase;
 	int ret;
 
@@ -2591,7 +2591,8 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 
 	smbase = ctxt->ops->get_smbase(ctxt);
 
-	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, (void *)&smram, sizeof(smram));
+	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00,
+				   smram.bytes, sizeof(smram));
 	if (ret != X86EMUL_CONTINUE)
 		return X86EMUL_UNHANDLEABLE;
 
@@ -2646,10 +2647,10 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 
 #ifdef CONFIG_X86_64
 	if (emulator_has_longmode(ctxt))
-		ret = rsm_load_state_64(ctxt, (const char *)&smram);
+		ret = rsm_load_state_64(ctxt, smram.bytes);
 	else
 #endif
-		ret = rsm_load_state_32(ctxt, (const char *)&smram);
 
 	if (ret != X86EMUL_CONTINUE)
 		goto emulate_shutdown;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 5d748b10c5be..ecf11c8a052e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4439,7 +4439,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
 	struct kvm_host_map map_save;
 	int ret;
 
-	char *smstate = (char *)smram;
+	u8 *smstate = smram->bytes;
 
 	if (!is_guest_mode(vcpu))
 		return 0;
@@ -4490,7 +4490,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	struct vmcb *vmcb12;
 	int ret;
 
-	const char *smstate = (const char *)smram;
+	const char *smstate = smram->bytes;
 
 	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
 		return 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ca558674b07b..09268c2335a8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9985,10 +9985,10 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	memset(smram.bytes, 0, sizeof(smram.bytes));
 #ifdef CONFIG_X86_64
 	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
-		enter_smm_save_state_64(vcpu, (char *)&smram);
+		enter_smm_save_state_64(vcpu, smram.bytes);
 	else
 #endif
-		enter_smm_save_state_32(vcpu, (char *)&smram);
+		enter_smm_save_state_32(vcpu, smram.bytes);
 
 	/*
 	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
@@ -9998,7 +9998,8 @@ static void enter_smm(struct kvm_vcpu *vcpu)
 	static_call(kvm_x86_enter_smm)(vcpu, &smram);
 
 	kvm_smm_changed(vcpu, true);
-	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, &smram, sizeof(smram));
+	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00,
+			     smram.bytes, sizeof(smram));
 
 	if (static_call(kvm_x86_get_nmi_mask)(vcpu))
 		vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;

base-commit: 0708faef18ff51a2b2dba546961d843223331138
--