Subject: Re: [PATCH v2 03/15] KVM: VMX: Rename "vmx_find_msr_index" to "vmx_find_loadstore_msr_slot"
To: Sean Christopherson
Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20200923180409.32255-1-sean.j.christopherson@intel.com>
    <20200923180409.32255-4-sean.j.christopherson@intel.com>
From: Paolo Bonzini
Message-ID: <86b26125-1f02-f3d7-2834-c8a1ed8aebf4@redhat.com>
Date: Sat, 26 Sep 2020 00:04:36 +0200
In-Reply-To: <20200923180409.32255-4-sean.j.christopherson@intel.com>

On 23/09/20 20:03, Sean Christopherson wrote:
> Add "loadstore" to vmx_find_msr_index() to differentiate it from the so
> called shared MSRs helpers (which will soon be renamed), and replace
> "index" with "slot" to better convey that the helper returns the slot in
> the array, not the MSR index (the value that gets stuffed into ECX).
>
> No functional change intended.

"slot" is definitely better, I'll adjust SVM to use it too.

Paolo

> Signed-off-by: Sean Christopherson
> ---
>  arch/x86/kvm/vmx/nested.c | 16 ++++++++--------
>  arch/x86/kvm/vmx/vmx.c    | 10 +++++-----
>  arch/x86/kvm/vmx/vmx.h    |  2 +-
>  3 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index f818a406302a..87e5d606582e 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -938,11 +938,11 @@ static bool nested_vmx_get_vmexit_msr_value(struct kvm_vcpu *vcpu,
>                   * VM-exit in L0, use the more accurate value.
>                   */
>                  if (msr_index == MSR_IA32_TSC) {
> -                        int index = vmx_find_msr_index(&vmx->msr_autostore.guest,
> -                                                       MSR_IA32_TSC);
> +                        int i = vmx_find_loadstore_msr_slot(&vmx->msr_autostore.guest,
> +                                                            MSR_IA32_TSC);
>
> -                        if (index >= 0) {
> -                                u64 val = vmx->msr_autostore.guest.val[index].value;
> +                        if (i >= 0) {
> +                                u64 val = vmx->msr_autostore.guest.val[i].value;
>
>                                  *data = kvm_read_l1_tsc(vcpu, val);
>                                  return true;
> @@ -1031,12 +1031,12 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
>          struct vcpu_vmx *vmx = to_vmx(vcpu);
>          struct vmx_msrs *autostore = &vmx->msr_autostore.guest;
>          bool in_vmcs12_store_list;
> -        int msr_autostore_index;
> +        int msr_autostore_slot;
>          bool in_autostore_list;
>          int last;
>
> -        msr_autostore_index = vmx_find_msr_index(autostore, msr_index);
> -        in_autostore_list = msr_autostore_index >= 0;
> +        msr_autostore_slot = vmx_find_loadstore_msr_slot(autostore, msr_index);
> +        in_autostore_list = msr_autostore_slot >= 0;
>          in_vmcs12_store_list = nested_msr_store_list_has_msr(vcpu, msr_index);
>
>          if (in_vmcs12_store_list && !in_autostore_list) {
> @@ -1057,7 +1057,7 @@ static void prepare_vmx_msr_autostore_list(struct kvm_vcpu *vcpu,
>                  autostore->val[last].index = msr_index;
>          } else if (!in_vmcs12_store_list && in_autostore_list) {
>                  last = --autostore->nr;
> -                autostore->val[msr_autostore_index] = autostore->val[last];
> +                autostore->val[msr_autostore_slot] = autostore->val[last];
>          }
>  }
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e99f3bbfa6e9..35291fd90ca0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -824,7 +824,7 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
>          vm_exit_controls_clearbit(vmx, exit);
>  }
>
> -int vmx_find_msr_index(struct vmx_msrs *m, u32 msr)
> +int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr)
>  {
>          unsigned int i;
>
> @@ -858,7 +858,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
>                  }
>                  break;
>          }
> -        i = vmx_find_msr_index(&m->guest, msr);
> +        i = vmx_find_loadstore_msr_slot(&m->guest, msr);
>          if (i < 0)
>                  goto skip_guest;
>          --m->guest.nr;
> @@ -866,7 +866,7 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
>          vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
>
>  skip_guest:
> -        i = vmx_find_msr_index(&m->host, msr);
> +        i = vmx_find_loadstore_msr_slot(&m->host, msr);
>          if (i < 0)
>                  return;
>
> @@ -925,9 +925,9 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
>                  wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
>          }
>
> -        i = vmx_find_msr_index(&m->guest, msr);
> +        i = vmx_find_loadstore_msr_slot(&m->guest, msr);
>          if (!entry_only)
> -                j = vmx_find_msr_index(&m->host, msr);
> +                j = vmx_find_loadstore_msr_slot(&m->host, msr);
>
>          if ((i < 0 && m->guest.nr == MAX_NR_LOADSTORE_MSRS) ||
>              (j < 0 && m->host.nr == MAX_NR_LOADSTORE_MSRS)) {
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 9a418c274880..26887082118d 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -353,7 +353,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
>  struct shared_msr_entry *find_msr_entry(struct vcpu_vmx *vmx, u32 msr);
>  void pt_update_intercept_for_msr(struct vcpu_vmx *vmx);
>  void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
> -int vmx_find_msr_index(struct vmx_msrs *m, u32 msr);
> +int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
>  void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
>
>  #define POSTED_INTR_ON 0
>
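
For readers skimming the thread: the diff context above elides the body of the renamed helper, so here is a minimal, self-contained sketch of the slot-vs-MSR-index distinction the commit message describes. It is an illustration only, not the in-tree code; the struct layout and the MAX_NR_LOADSTORE_MSRS value below are assumptions inferred from the callers visible in the patch.

    /*
     * Illustrative sketch only -- mirrors what the callers in the patch
     * expect (a vmx_msrs array of {index, value} entries with an 'nr'
     * count); the actual kernel definitions are not shown in the diff.
     */
    #include <errno.h>
    #include <stdint.h>

    #define MAX_NR_LOADSTORE_MSRS 8          /* assumed size for the sketch */

    struct vmx_msr_entry {
            uint32_t index;                  /* MSR index: the value stuffed into ECX */
            uint64_t value;
    };

    struct vmx_msrs {
            unsigned int nr;
            struct vmx_msr_entry val[MAX_NR_LOADSTORE_MSRS];
    };

    /* Returns the slot (array position) holding 'msr', or -ENOENT if absent. */
    static int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, uint32_t msr)
    {
            unsigned int i;

            for (i = 0; i < m->nr; i++)
                    if (m->val[i].index == msr)
                            return i;
            return -ENOENT;
    }

Callers such as prepare_vmx_msr_autostore_list() then use the returned slot to address autostore->val[slot], while the MSR index itself (the ECX value) lives in val[slot].index, which is what the rename is meant to keep distinct.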