Date: Tue, 27 Jun 2017 16:00:08 +0200
From: Radim Krčmář <rkrcmar@redhat.com>
To: Yang Zhang
Cc: Wanpeng Li, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
 Paolo Bonzini, the arch/x86 maintainers, Jonathan Corbet,
 tony.luck@intel.com, Borislav Petkov, Peter Zijlstra,
 mchehab@kernel.org, Andrew Morton, krzk@kernel.org,
 jpoimboe@redhat.com, Andy Lutomirski, Christian Borntraeger,
 Thomas Garnier, Robert Gerst, Mathias Krause, douly.fnst@cn.fujitsu.com,
 Nicolai Stange, Frederic Weisbecker, dvlasenk@redhat.com,
 Daniel Bristot de Oliveira, yamada.masahiro@socionext.com,
 mika.westerberg@linux.intel.com, Chen Yu, aaron.lu@intel.com,
 Steven Rostedt, Kyle Huey, Len Brown, Prarit Bhargava,
 hidehiro.kawai.ez@hitachi.com, fengtiantian@huawei.com,
 pmladek@suse.com, jeyu@redhat.com, Larry.Finger@lwfinger.net,
 zijun_hu@htc.com, luisbg@osg.samsung.com, johannes.berg@intel.com,
 niklas.soderlund+renesas@ragnatech.se, zlpnobody@gmail.com,
 Alexey Dobriyan, fgao@48lvckh6395k16k5.yundunddos.com,
 ebiederm@xmission.com, Subash Abhinov Kasiviswanathan, Arnd Bergmann,
 Matt Fleming, Mel Gorman, "linux-kernel@vger.kernel.org",
 linux-doc@vger.kernel.org, linux-edac@vger.kernel.org, kvm
Subject: Re: [PATCH 0/2] x86/idle: add halt poll support
Message-ID: <20170627140008.GA1503@potion>
References: <1498130534-26568-1-git-send-email-root@ip-172-31-39-62.us-west-2.compute.internal>

2017-06-23 14:49+0800, Yang Zhang:
> On 2017/6/23 12:35, Wanpeng Li wrote:
> > 2017-06-23 12:08 GMT+08:00 Yang Zhang:
> > > On 2017/6/22 19:50, Wanpeng Li wrote:
> > > > 2017-06-22 19:22 GMT+08:00 root:
> > > > > From: Yang Zhang
> > > > >
> > > > > Some latency-intensive workloads see an obvious performance
> > > > > drop when running inside a VM. The main reason is that the
> > > > > overhead is amplified when running inside a VM. The biggest
> > > > > cost I have seen is in the idle path.
> > > > >
> > > > > This patch introduces a new mechanism to poll for a while
> > > > > before entering the idle state. If a reschedule is needed
> > > > > during the poll, we do not have to go through the heavy
> > > > > overhead path.
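
To restate the mechanism in code: a minimal sketch of such a poll loop,
where halt_poll_then_idle() and poll_limit_ns are hypothetical names,
not what the series actually uses:

#include <linux/ktime.h>
#include <linux/sched.h>
#include <asm/processor.h>	/* cpu_relax(), default_idle() */

/* Hypothetical knob for the maximum poll window, in nanoseconds. */
static unsigned long poll_limit_ns = 100000;

static void halt_poll_then_idle(void)
{
	ktime_t deadline = ktime_add_ns(ktime_get(), poll_limit_ns);

	while (!need_resched()) {
		if (ktime_after(ktime_get(), deadline)) {
			/* Nothing arrived in time; take the normal halt path. */
			default_idle();
			return;
		}
		cpu_relax();
	}
	/* need_resched() fired while polling: skip the halt entirely. */
}

A wakeup that arrives within the window is then handled without paying
for the halt and the subsequent VM exit and vCPU kick.
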
> > > > > Here is the data I get when running the contextswitch
> > > > > benchmark (https://github.com/tsuna/contextswitch)
> > > > >
> > > > > before patch:
> > > > > 2000000 process context switches in 4822613801ns (2411.3ns/ctxsw)
> > > > > after patch:
> > > > > 2000000 process context switches in 3584098241ns (1792.0ns/ctxsw)
> > > >
> > > > Did you test this after disabling the adaptive halt-polling in
> > > > KVM? What is the performance data with this patchset and without
> > > > the adaptive halt-polling in KVM, and without this patchset and
> > > > with the adaptive halt-polling in KVM? In addition, both Linux
> > > > and Windows guests can get this benefit, as we have already done
> > > > this in KVM.
> > >
> > > I will provide more data in the next version. But it doesn't
> > > conflict with
> >
> > Another case I can think of is with both this patchset and the
> > adaptive halt-polling in KVM.
> >
> > > the current halt polling inside KVM. This is just another
> > > enhancement.
> >
> > I didn't look closely at the patchset; however, there may sometimes
> > be another poll in the KVM part if the poll in the guest fails. In
> > addition, the adaptive halt-polling in KVM has a performance penalty
> > when the pCPU is heavily overcommitted, even though there is a
> > single_task_running() check. In my testing, it is hard to accurately
> > tell from inside the guest whether there are other tasks waiting on
> > the pCPU, which makes it worse. Depending on vcpu_is_preempted() or
> > steal time may not be accurate or direct either.
> >
> > So I'm not sure how much sense adaptive halt-polling in both the
> > guest and KVM makes. I prefer to just keep the adaptive halt-polling
> > in KVM (then Linux, Windows, and other guests all benefit) and avoid
> > churning the core x86 path.
>
> This mechanism is not specific to KVM. It is a kernel feature which
> can benefit a guest running inside any x86 virtualization environment:
> KVM, Xen, VMware, or Hyper-V. An administrator can configure KVM to
> use adaptive halt polling, but cannot control whether users poll
> inside the guest. Lots of users set idle=poll inside the guest to
> improve performance, which occupies even more CPU cycles. This
> mechanism is an enhancement to that, not to KVM halt polling.

Users of idle=poll shouldn't overcommit, so the goal seems to be energy
savings without crippling the guest performance too much ...

Wouldn't switching to idle=mwait work as well?
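
And if the guest-side polling stays, it should probably be gated on the
signals mentioned above, even though, as Wanpeng notes, neither of them
is fully accurate. A hypothetical sketch (guest_poll_worthwhile() is a
made-up name, assumed to be called from the idle path with preemption
already disabled):

#include <linux/sched.h>	/* vcpu_is_preempted() fallback */
#include <linux/sched/stat.h>	/* single_task_running() */
#include <linux/smp.h>

static bool guest_poll_worthwhile(void)
{
	/* Poll only when this vCPU has nothing else to run ... */
	if (!single_task_running())
		return false;

	/* ... and does not appear to be preempted on the host. */
	return !vcpu_is_preempted(smp_processor_id());
}

Thanks.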