From: Vitaly Kuznetsov
To: Roman Kagan
Cc: kvm@vger.kernel.org, x86@kernel.org, Paolo Bonzini, Radim Krčmář, "K. Y.
Srinivasan", Haiyang Zhang, Stephen Hemminger, "Michael Kelley (EOSG)", Mohammed Gamal, Cathy Avery, Wanpeng Li, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/5] KVM: x86: hyperv: introduce vp_index_to_vcpu_idx mapping
References: <20180628135313.17468-1-vkuznets@redhat.com> <20180628135313.17468-3-vkuznets@redhat.com> <20180629101134.GA15656@rkaganb.sw.ru>
Date: Fri, 29 Jun 2018 12:26:23 +0200
In-Reply-To: <20180629101134.GA15656@rkaganb.sw.ru> (Roman Kagan's message of "Fri, 29 Jun 2018 13:11:36 +0300")
Message-ID: <87y3exdh2o.fsf@vitty.brq.redhat.com>

Roman Kagan writes:

> On Thu, Jun 28, 2018 at 03:53:10PM +0200, Vitaly Kuznetsov wrote:
>> While it is easy to get the VP index from a vCPU index, the reverse task
>> is hard. Basically, to solve it we have to walk all vCPUs, checking if
>> their VP index matches. For hypercalls like
>> HvFlushVirtualAddress{List,Space}* and the upcoming
>> HvSendSyntheticClusterIpi*, where a single CPU may be specified in the
>> whole set, this is obviously sub-optimal.
>>
>> As the VP index can be set to anything <= U32_MAX by userspace, a plain
>> [0..MAX_VP_INDEX] array is not a viable option. Use a condensed sorted
>> array with logarithmic search complexity instead. Use RCU to make read
>> access as fast as possible and maintain atomicity of updates.
>
> Quoting TLFS 5.0C section 7.8.1:
>
>> Virtual processors are identified by using an index (VP index). The
>> maximum number of virtual processors per partition supported by the
>> current implementation of the hypervisor can be obtained through CPUID
>> leaf 0x40000005. A virtual processor index must be less than the
>> maximum number of virtual processors per partition.
>
> so this is a dense index, and VP_INDEX >= KVM_MAX_VCPUS is invalid. I
> think we're better off enforcing this in kvm_hv_set_msr and keeping the
> translation simple. If the algorithm in get_vcpu_by_vpidx is not good
> enough (and yes, it can be made to return NULL early on vpidx >=
> KVM_MAX_VCPUS instead of taking the slow path), then a simple index
> array of KVM_MAX_VCPUS entries should certainly do.

Sure, we can use a pre-allocated [0..KVM_MAX_VCPUS] array instead and put
limits on what userspace can assign VP_INDEX to. However, while thinking
about it I decided to go with the more complex condensed-array approach
because the tendency is for KVM_MAX_VCPUS to grow, and we would be
pre-allocating more and more memory for no particular reason (so I think
even 'struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]' in 'struct kvm' will need to
be converted to something else eventually).

Anyway, I'm flexible, and if you think we should go this way now, I'll do
it in v3. We can re-think this when we later decide to raise
KVM_MAX_VCPUS significantly.

-- 
Vitaly