Message-ID: <1429577566.4352.68.camel@freescale.com>
Subject: Re: [PATCH 0/2] powerpc/kvm: Enable running guests on RT Linux
From: Scott Wood
To: Purcareata Bogdan
CC: Sebastian Andrzej Siewior, Paolo Bonzini, Alexander Graf, Bogdan Purcareata, Thomas Gleixner
Date: Mon, 20 Apr 2015 19:52:46 -0500
In-Reply-To: <5534DAA4.3050809@freescale.com>

On Mon, 2015-04-20 at 13:53 +0300, Purcareata Bogdan wrote:
> On 10.04.2015 02:53, Scott Wood wrote:
> > On Thu, 2015-04-09 at 10:44 +0300, Purcareata Bogdan wrote:
> >> So at this point I was getting kinda frustrated, so I decided to measure
> >> the time spent in kvm_mpic_write and kvm_mpic_read. I assumed these were
> >> the main entry points into the in-kernel MPIC and were basically executed
> >> while holding the spinlock. The scenario was the same - a 24-VCPU guest
> >> with 24 virtio+vhost interfaces, only this time I ran 24 ping flood
> >> threads to another board instead of netperf. I assumed this would impose
> >> a heavier stress.
> >>
> >> The latencies look pretty ok, around 1-2 us on average, with the max
> >> shown below:
> >>
> >> .kvm_mpic_read   14.560
> >> .kvm_mpic_write  12.608
> >>
> >> Those are also microseconds. This was run for about 15 mins.
> >
> > What about other entry points such as kvm_set_msi() and
> > kvmppc_mpic_set_epr()?
>
> Thanks for the pointers! I redid the measurements, this time for the
> functions run with the openpic lock held:
>
> .kvm_mpic_read_internal (.kvm_mpic_read)      1.664
> .kvmppc_mpic_set_epr                          6.880
> .kvm_mpic_write_internal (.kvm_mpic_write)    7.840
> .openpic_msi_write (.kvm_set_msi)            10.560
>
> Same scenario, 15 mins, numbers are microseconds.
>
> There was a weird situation for .kvmppc_mpic_set_epr - its corresponding
> inner function is kvmppc_set_epr, which is a static inline. Removing the
> static inline yields a compiler crash (Segmentation fault (core dumped) -
> scripts/Makefile.build:441: recipe for target 'arch/powerpc/kvm/kvm.o'
> failed), but that's a different story, so I just let it be for now. The
> point is that the measured time may include other work done after the lock
> has been released, but before the function actually returned. I noticed
> this was the case for .kvm_set_msi, which could take up to 90 ms, not
> actually under the lock. This made me change what I'm looking at.

kvm_set_msi does pretty much nothing outside the lock -- I suspect you're
measuring an interrupt that happened as soon as the lock was released.

> So far it looks pretty decent. Are there any other MPIC entry points
> worthy of investigation?

I don't think so.

> Or perhaps a different stress scenario involving a lot of VCPUs
> and external interrupts?

You could instrument the MPIC code to find out how many loop iterations you
maxed out on, and compare that to the theoretical maximum.

-Scott
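
As a rough illustration of the instrumentation suggested above, a helper
along the following lines could be dropped next to the pending-source scan
that runs under the openpic lock. The names (mpic_count_scan,
mpic_scan_worst) and the standalone-helper shape are assumptions for
illustration only, not the actual arch/powerpc/kvm/mpic.c code:

    /*
     * Hypothetical sketch: count how many pending sources a scan would walk
     * while the openpic lock is held, remember the worst case observed, and
     * report it so it can be compared against the theoretical maximum of
     * one iteration per configured interrupt source.
     */
    #include <linux/bitops.h>
    #include <linux/kernel.h>
    #include <linux/printk.h>

    static unsigned int mpic_scan_worst;	/* worst-case iteration count seen */

    static void mpic_count_scan(const unsigned long *pending,
    			    unsigned int nr_sources)
    {
    	unsigned int iters = 0;
    	unsigned int irq;

    	/* Walk every pending source, as the real scan loop would. */
    	for_each_set_bit(irq, pending, nr_sources)
    		iters++;

    	if (iters > mpic_scan_worst) {
    		mpic_scan_worst = iters;
    		pr_info("mpic: worst-case scan so far: %u of %u sources\n",
    			iters, nr_sources);
    	}
    }

If the reported worst case stays well below the number of configured sources,
the work done under the lock is bounded in practice; a worst case close to
the maximum would point at exactly the long-scan latency being asked about.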