From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Roman Kagan
Cc: kvm@vger.kernel.org, x86@kernel.org, Paolo Bonzini, Radim Krčmář,
    "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger,
    "Michael Kelley (EOSG)", Mohammed Gamal, Cathy Avery, Wanpeng Li,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/5] KVM: x86: hyperv: introduce vp_index_to_vcpu_idx mapping
References: <20180628135313.17468-1-vkuznets@redhat.com>
    <20180628135313.17468-3-vkuznets@redhat.com>
    <20180629101134.GA15656@rkaganb.sw.ru>
    <87y3exdh2o.fsf@vitty.brq.redhat.com>
    <20180629111227.GB15656@rkaganb.sw.ru>
    <87tvplddrr.fsf@vitty.brq.redhat.com>
    <20180629125216.GC15656@rkaganb.sw.ru>
    <87h8lld9hl.fsf@vitty.brq.redhat.com>
    <20180629143212.GD15656@rkaganb.sw.ru>
Date: Fri, 29 Jun 2018 17:25:56 +0200
In-Reply-To: <20180629143212.GD15656@rkaganb.sw.ru> (Roman Kagan's message of
    "Fri, 29 Jun 2018 17:32:13 +0300")
Message-ID: <878t6xd37f.fsf@vitty.brq.redhat.com>

Roman Kagan writes:

> On Fri, Jun 29, 2018 at 03:10:14PM +0200, Vitaly Kuznetsov wrote:
>> Roman Kagan writes:
>>
>> > On Fri, Jun 29, 2018 at 01:37:44PM +0200, Vitaly Kuznetsov wrote:
>> >> The problem we're trying to solve here is: with PV TLB flush and IPI we
>> >> need to walk through the supplied list of VP_INDEXes and get VCPU
>> >> ids. Usually they match. But in case they don't [...]
>> >
>> > Why wouldn't they *in practice*? Only if the userspace wanted to be
>> > funny and assigned VP_INDEXes randomly? I'm not sure we need to
>> > optimize for this case.
>>
>> Can someone please remind me why we allow userspace to change it in the
>> first place?
>
> I can ;)
>
> We used not to, and reported KVM's vcpu index as the VP_INDEX. However,
> later we realized that VP_INDEX needed to be persistent across
> migrations and otherwise also known to userspace. Relying on the kernel
> to always initialize its indices in the same order was unacceptable, and
> we came up with no better way of synchronizing VP_INDEX between
> userspace and the kernel than to let the former set it explicitly.
>
> However, this is basically a future-proofing feature; in practice, both
> QEMU and KVM initialize their indices in the same order.

Thanks! But in the (theoretical) case where these indices do come to
differ after migration, users will notice a slowdown that will be hard
to explain, right? Does that justify the need for vp_idx_to_vcpu_idx?

In any case, I sent v3 with vp_idx_to_vcpu_idx dropped for now; I hope
Radim is OK with us decoupling these discussions.

-- 
Vitaly
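
For illustration, a minimal, self-contained sketch of the lookup under
discussion, assuming a plain array of vCPUs rather than the real KVM
structures; all names below are hypothetical stand-ins, not kernel code.
The fast path uses the guest-supplied VP_INDEX directly because in
practice it equals the vcpu index; only a mismatched assignment falls
back to the linear walk that a vp_idx_to_vcpu_idx table would avoid.

/*
 * Hypothetical sketch (NOT the real KVM code): resolve a guest-supplied
 * Hyper-V VP_INDEX to a vCPU.  The common case is an O(1) hit because
 * userspace assigns vp_index == vcpu_idx; only a "funny" assignment
 * forces the O(n) walk.
 */
#include <stddef.h>
#include <stdio.h>

struct vcpu {
	unsigned int vcpu_idx;	/* KVM-internal index */
	unsigned int vp_index;	/* Hyper-V VP_INDEX, set by userspace */
};

static struct vcpu *get_vcpu_by_vpidx(struct vcpu *vcpus, size_t n,
				      unsigned int vpidx)
{
	size_t i;

	/* Fast path: indices match (what QEMU does in practice). */
	if (vpidx < n && vcpus[vpidx].vp_index == vpidx)
		return &vcpus[vpidx];

	/* Slow path: linear walk over all vCPUs. */
	for (i = 0; i < n; i++)
		if (vcpus[i].vp_index == vpidx)
			return &vcpus[i];

	return NULL;
}

int main(void)
{
	struct vcpu vcpus[] = {
		{ .vcpu_idx = 0, .vp_index = 0 },
		{ .vcpu_idx = 1, .vp_index = 2 },	/* deliberately shuffled */
		{ .vcpu_idx = 2, .vp_index = 1 },
	};
	struct vcpu *v = get_vcpu_by_vpidx(vcpus, 3, 2);

	if (v)
		printf("VP_INDEX 2 -> vcpu_idx %u\n", v->vcpu_idx);
	return 0;
}

In this sketch, a precomputed vp_index -> vcpu_idx table would simply
replace the fallback loop, keeping each lookup during a PV TLB flush or
IPI at O(1) even if userspace shuffled the indices.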