Subject: Re: [PATCH 0/5] IPI virtualization support for VM
From: Zeng Guang
To: Wanpeng Li
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
 Jim Mattson, Joerg Roedel, kvm, Dave Hansen, Tony Luck, Kan Liang,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
 Kim Phillips, Jarkko Sakkinen, Jethro Beekman, Kai Huang,
 the arch/x86 maintainers, LKML, Robert Hu, Gao Chao
Date: Mon, 19 Jul 2021 15:26:38 +0800
References: <20210716064808.14757-1-guang.zeng@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On 7/16/2021 5:25 PM, Wanpeng Li wrote:
> On Fri, 16 Jul 2021 at 15:14, Zeng Guang wrote:
>> Currently, IPIs in a guest VM are handled by virtualizing writes to
>> the interrupt command register (ICR) of the local APIC, which always
>> causes a VM-exit on the source vCPU. Frequent VM-exits can accumulate
>> considerable overhead when running IPI-intensive tasks.
>>
>> IPI virtualization, a new VT-x feature, aims to eliminate VM-exits
>> when issuing IPIs on the source vCPU. It introduces a new
>> VM-execution control - "IPI virtualization" (bit 4) of the tertiary
>> processor-based VM-execution controls - and new data structures - the
>> "PID-pointer table address" and "Last PID-pointer index" referenced
>> by the VMCS. When "IPI virtualization" is enabled, the processor
>> emulates the following kinds of writes to APIC registers that would
>> send IPIs, without causing VM-exits:
>> - Memory-mapped ICR writes
>> - MSR-mapped ICR writes
>> - SENDUIPI execution
>>
>> This patch series implements IPI virtualization support in KVM.
>>
>> Patches 1-3 add the tertiary processor-based VM-execution control
>> framework.
>>
>> Patch 4 implements interrupt dispatch support in x2APIC mode with
>> the APIC-write VM exit. On previous platforms, no CPU would produce
>> an APIC-write VM exit with exit qualification 300H while the
>> "virtual x2APIC mode" VM-execution control was 1.
>>
>> Patch 5 implements the IPI virtualization functionality: feature
>> enabling through the tertiary processor-based VM-execution controls
>> in the various VMCS configuration scenarios, PID table setup at vCPU
>> creation, and vCPU blocking considerations.
>>
>> Documentation for IPI virtualization is available in the latest
>> "Intel Architecture Instruction Set Extensions Programming
>> Reference".
>>
>> Document link:
>> https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
>>
>> We ran an experiment with kvm-unit-tests to measure the average time
>> from the source vCPU sending an IPI to the target vCPU completing
>> the IPI handling, with and without IPI virtualization. With IPI
>> virtualization enabled, cycle consumption is reduced by 22.21% in
>> xAPIC mode and by 15.98% in x2APIC mode.
>>
>> KVM unit test: vmexit/ipi, 2 vCPUs; the AP runs without halting to
>> ensure no VM-exit impact on the target vCPU.
>>
>>                        Cycles of IPI
>>                 xAPIC mode           x2APIC mode
>> test        w/o IPIv   w/ IPIv   w/o IPIv   w/ IPIv
>> 1               6106      4816       4265      3768
>> 2               6244      4656       4404      3546
>> 3               6165      4658       4233      3474
>> 4               5992      4710       4363      3430
>> 5               6083      4741       4215      3551
>> 6               6238      4904       4304      3547
>> 7               6164      4617       4263      3709
>> 8               5984      4763       4518      3779
>> 9               5931      4712       4645      3667
>> 10              5955      4530       4332      3724
>> 11              5897      4673       4283      3569
>> 12              6140      4794       4178      3598
>> 13              6183      4728       4363      3628
>> 14              5991      4994       4509      3842
>> 15              5866      4665       4520      3739
>> 16              6032      4654       4229      3701
>> 17              6050      4653       4185      3726
>> 18              6004      4792       4319      3746
>> 19              5961      4626       4196      3392
>> 20              6194      4576       4433      3760
>>
>> Average cycles  6059    4713.1    4337.85    3644.8
>> %Reduction             -22.21%              -15.98%
>>
> Commit a9ab13ff6e (KVM: X86: Improve latency for single target IPI
> fastpath) mentioned that the whole IPI fastpath feature reduces the
> latency from 4238 to 3293 cycles, around 22.3%, on an SKX server. Why
> is your IPIv hardware acceleration worse than the software emulation?

Actually, this performance data was measured on top of the fastpath
optimization, with the CPU running at base frequency. In other words,
IPI virtualization gives an additional 15.98% cycle reduction over the
IPI fastpath in x2APIC mode (4337.85 -> 3644.8 cycles on average).
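For reference, the feature enabling that patches 1-3 and 5 add boils
down to roughly the sketch below. This is only an illustration: the
new VMCS field names (TERTIARY_VM_EXEC_CONTROL, PID_POINTER_TABLE,
LAST_PID_POINTER_INDEX) are placeholders in the style of the existing
VMX definitions, not necessarily the identifiers the series uses.

/* Primary controls bit 17 activates the tertiary controls. */
#define CPU_BASED_ACTIVATE_TERTIARY_CONTROLS	(1U << 17)
/* "IPI virtualization" is bit 4 of the tertiary controls. */
#define TERTIARY_EXEC_IPI_VIRT			(1ULL << 4)

/* Called with the vCPU's VMCS loaded. */
static void enable_ipiv(u64 pid_table_pa, u16 last_index)
{
	/* Unlike the 32-bit primary/secondary controls, the tertiary
	 * controls are a 64-bit VMCS field. */
	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL,
		     vmcs_read32(CPU_BASED_VM_EXEC_CONTROL) |
		     CPU_BASED_ACTIVATE_TERTIARY_CONTROLS);
	vmcs_write64(TERTIARY_VM_EXEC_CONTROL,
		     vmcs_read64(TERTIARY_VM_EXEC_CONTROL) |
		     TERTIARY_EXEC_IPI_VIRT);

	/* Per-VM table of posted-interrupt descriptor (PID) pointers,
	 * indexed by vCPU ID; the CPU consults it to post virtual IPIs
	 * to the target without any VM-exit. */
	vmcs_write64(PID_POINTER_TABLE, pid_table_pa);
	vmcs_write16(LAST_PID_POINTER_INDEX, last_index);
}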
> In addition, please post the IPI microbenchmark scores w/ and w/o the
> patchset
> (https://lore.kernel.org/kvm/20171219085010.4081-1-ynorov@caviumnetworks.com);
> I found that the hardware acceleration is not always outstanding:
> https://lore.kernel.org/kvm/CANRm+Cx597FNRUCyVz1D=B6Vs2GX3Sw57X7Muk+yMpi_hb+v1w@mail.gmail.com
>
> Wanpeng
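P.S. To make the patch 4 behavior concrete for readers without the
updated reference at hand: an ICR write that cannot be virtualized now
produces an APIC-write VM exit with exit qualification 300H even in
x2APIC mode, and KVM replays the IPI from the virtual-APIC page. A
simplified sketch follows; helper names are modeled on current
lapic.c conventions and are approximations, not the exact patch.

/* The low 12 bits of the APIC-write exit qualification hold the
 * APIC register offset; 0x300 is the ICR. */
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
{
	struct kvm_lapic *apic = vcpu->arch.apic;

	if (apic_x2apic_mode(apic) && offset == APIC_ICR) {
		/* In x2APIC mode the ICR is a single 64-bit register in
		 * the virtual-APIC page; read both halves and dispatch
		 * the IPI directly. */
		u32 icr_lo = kvm_lapic_get_reg(apic, APIC_ICR);
		u32 icr_hi = kvm_lapic_get_reg(apic, APIC_ICR2);

		kvm_apic_send_ipi(apic, icr_lo, icr_hi);
	} else {
		/* Other registers: replay the 32-bit write. */
		kvm_lapic_reg_write(apic, offset,
				    kvm_lapic_get_reg(apic, offset));
	}
}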