From: root
To: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, pbonzini@redhat.com
Cc: x86@kernel.org, corbet@lwn.net, tony.luck@intel.com, bp@alien8.de, peterz@infradead.org, mchehab@kernel.org, akpm@linux-foundation.org, krzk@kernel.org, jpoimboe@redhat.com, luto@kernel.org, borntraeger@de.ibm.com, thgarnie@google.com, rgerst@gmail.com, minipli@googlemail.com, douly.fnst@cn.fujitsu.com, nicstange@gmail.com, fweisbec@gmail.com, dvlasenk@redhat.com, bristot@redhat.com, yamada.masahiro@socionext.com, mika.westerberg@linux.intel.com, yu.c.chen@intel.com, aaron.lu@intel.com, rostedt@goodmis.org, me@kylehuey.com, len.brown@intel.com, prarit@redhat.com, hidehiro.kawai.ez@hitachi.com, fengtiantian@huawei.com, pmladek@suse.com, jeyu@redhat.com, Larry.Finger@lwfinger.net, zijun_hu@htc.com, luisbg@osg.samsung.com, johannes.berg@intel.com, niklas.soderlund+renesas@ragnatech.se, zlpnobody@gmail.com, adobriyan@gmail.com, fgao@48lvckh6395k16k5.yundunddos.com, ebiederm@xmission.com, subashab@codeaurora.org, arnd@arndb.de, matt@codeblueprint.co.uk, mgorman@techsingularity.net, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-edac@vger.kernel.org, kvm@vger.kernel.org, Yang Zhang
Subject: [PATCH 0/2] x86/idle: add halt poll support
Date: Thu, 22 Jun 2017 11:22:12 +0000
Message-Id: <1498130534-26568-1-git-send-email-root@ip-172-31-39-62.us-west-2.compute.internal>

From: Yang Zhang

Some latency-intensive workloads see an obvious performance drop when
running inside a VM. The main reason is that overheads are amplified
inside a VM, and the biggest cost I have seen is in the idle path.

This series introduces a new mechanism that polls for a while before
entering the idle state. If a reschedule becomes necessary during the
poll, we avoid going through the heavy-overhead path altogether. (A
rough sketch of the idea appears at the end of this mail.)

Here is the data I get when running the contextswitch benchmark
(https://github.com/tsuna/contextswitch):

before patch:
2000000 process context switches in 4822613801ns (2411.3ns/ctxsw)

after patch:
2000000 process context switches in 3584098241ns (1792.0ns/ctxsw)

Yang Zhang (2):
  x86/idle: add halt poll for halt idle
  x86/idle: use dynamic halt poll

 Documentation/sysctl/kernel.txt          | 24 ++++++++++
 arch/x86/include/asm/processor.h         |  6 +++
 arch/x86/kernel/apic/apic.c              |  6 +++
 arch/x86/kernel/apic/vector.c            |  1 +
 arch/x86/kernel/cpu/mcheck/mce_amd.c     |  2 +
 arch/x86/kernel/cpu/mcheck/therm_throt.c |  2 +
 arch/x86/kernel/cpu/mcheck/threshold.c   |  2 +
 arch/x86/kernel/irq.c                    |  5 ++
 arch/x86/kernel/irq_work.c               |  2 +
 arch/x86/kernel/process.c                | 80 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/smp.c                    |  6 +++
 include/linux/kernel.h                   |  5 ++
 kernel/sched/idle.c                      |  3 ++
 kernel/sysctl.c                          | 23 +++++++++
 14 files changed, 167 insertions(+)

--
1.8.3.1
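
The following is a minimal sketch of the polling idea, not the code
from the patches themselves. It assumes a knob named poll_threshold_ns
(a hypothetical name here; the real patches expose their knob via
sysctl as documented in Documentation/sysctl/kernel.txt) and it sits
conceptually where the x86 idle path would otherwise halt right away.
The actual hook points and the dynamic adjustment of the poll window
are what the two patches above implement.

/* Sketch only: poll briefly before halting, under assumed names. */
#include <linux/ktime.h>
#include <linux/sched.h>
#include <asm/processor.h>	/* cpu_relax() */
#include <asm/irqflags.h>	/* safe_halt() on x86 */

static unsigned long poll_threshold_ns;	/* assumed knob; 0 = no polling */

static void halt_poll_then_halt(void)
{
	if (poll_threshold_ns) {
		ktime_t start = ktime_get();

		do {
			/*
			 * A task became runnable while we were polling:
			 * return without halting, skipping the expensive
			 * halt/wakeup path entirely.
			 */
			if (need_resched())
				return;
			cpu_relax();
		} while (ktime_to_ns(ktime_sub(ktime_get(), start)) <
			 poll_threshold_ns);
	}

	/* Nothing showed up during the poll window: halt as usual. */
	safe_halt();
}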