From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Wanpeng Li
Cc: the arch/x86 maintainers, devel@linuxdriverproject.org, LKML,
        "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger,
        Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
        Tianyu.Lan@microsoft.com, "Michael Kelley (EOSG)"
Subject: Re: [PATCH 0/4] x86/hyper-v: optimize PV IPIs
Date: Thu, 28 Jun 2018 18:27:46 +0200
Message-ID: <87o9fuev0d.fsf@vitty.brq.redhat.com>
In-Reply-To: <8736x8h8wu.fsf@vitty.brq.redhat.com> (Vitaly Kuznetsov's
        message of "Wed, 27 Jun 2018 11:32:17 +0200")
References: <20180622170625.30688-1-vkuznets@redhat.com>
        <8736x8h8wu.fsf@vitty.brq.redhat.com>

Vitaly Kuznetsov <vkuznets@redhat.com> writes:

> Wanpeng Li writes:
>
>> Hi Vitaly, (fix my reply mess this time)
>>
>> On Sat, 23 Jun 2018 at 01:09, Vitaly Kuznetsov wrote:
>>>
>>> When reviewing my "x86/hyper-v: use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_
>>> {LIST,SPACE} hypercalls when possible" patch, Michael suggested applying
>>> the same idea to PV IPIs. Here we go!
>>>
>>> Despite what the Hyper-V TLFS says about the HVCALL_SEND_IPI hypercall,
>>> it can actually be 'fast' (passing parameters through registers). Use
>>> that too.
>>>
>>> This series can collide with my "KVM: x86: hyperv: PV IPI support for
>>> Windows guests" series, as I rename the ipi_arg_non_ex/ipi_arg_ex
>>> structures there. Depending on which one gets in first we may need to
>>> make tiny adjustments.
>>
>> As Hyper-V PV TLB flush has already been merged, are there any other
>> obvious multicast IPI scenarios? qemu has supported interrupt remapping
>> for two years now; I think a Windows guest can switch to cluster mode
>> after entering x2APIC mode and then send IPIs per cluster. In addition,
>> could you also post benchmark results for this PV IPI optimization, even
>> though it also fixes the bug you mentioned above?
>
> I got confused, which of my patch series are you actually looking at?
> :-)
>
> This particular one ("x86/hyper-v: optimize PV IPIs") is not about
> KVM/qemu, it is for Linux running on top of a real Hyper-V server. We
> already support PV IPIs and here I'm just trying to optimize the way we
> send them by switching to a cheaper hypercall (and using the 'fast'
> version of it) when possible. I don't actually have a good benchmark
> (and I don't remember seeing one when K.Y. posted PV IPI support) but
> this can be arranged, I guess: I can write a dumb 'IPI sender' in the
> kernel and send e.g. 1000 IPIs.

So I used the IPI benchmark (https://lkml.org/lkml/2017/12/19/141,
thanks for the tip!) on this series. On a 16 vCPU guest (WS2016) I'm
getting the following:

Before:
Dry-run:          0           203110
Self-IPI:         6167430     11645550
Normal IPI:       380479300   475881820
Broadcast IPI:    0           2557371420

After:
Dry-run:          0           214280      (not interesting)
Self-IPI:         5706210     10697640    (- 8%)
Normal IPI:       379330010   450158830   (- 5%)
Broadcast IPI:    0           2340427160  (- 8%)

-- 
Vitaly