Date: Mon, 11 May 2015 14:50:43 +0200
From: Christian Borntraeger
To: Paolo Bonzini, Joerg Roedel
CC: Joerg Roedel, Gleb Natapov, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] kvm: irqchip: Break up high order allocations of kvm_irq_routing_table
Message-ID: <5550A5A3.5010403@de.ibm.com>
In-Reply-To: <5550964B.6020001@redhat.com>
References: <1431088304-11365-1-git-send-email-joro@8bytes.org> <554CE3A5.7000101@redhat.com> <20150511112522.GJ5438@suse.de> <5550964B.6020001@redhat.com>

On 11.05.2015 at 13:45, Paolo Bonzini wrote:
>
> On 11/05/2015 13:25, Joerg Roedel wrote:
>>>> It probably doesn't matter much indeed, but can you time the difference?
>>>> kvm_set_irq_routing is not too frequent, but happens often enough that
>>>> we had to use a separate SRCU instance just to speed it up (see commit
>>>> 719d93cd5f5, "kvm/irqchip: Speed up KVM_SET_GSI_ROUTING", 2014-01-16).
>>
>> The results vary a lot, but what I can say for sure is that the
>> kvm_set_irq_routing function takes at least twice as long (~10,000 vs.
>> ~22,000 cycles) as before on my AMD Kaveri machine (the maximum was
>> between 3 and 4 times as long).
>>
>> On the other hand, this function is only called twice at boot in my
>> test, so I couldn't detect a noticeable effect on the overall boot time
>> of the guest (37 disks were attached).

x86 probably has only a few irq routes for this (or Joerg is using
virtio-scsi). s390 has a route per device, but even with 100 virtio-blk
devices the difference seems to be pretty much on the "don't care" side.
The qemu aio-poll/drain code seems to cause much more delay now that we
have eliminated the kernel-side delays by using synchronize_srcu_expedited.

> Christian, can you test this?

The guest comes up and performance is OK. I did not run any additional
checks (lockdep, kmemleak), but I think the generic approach is good: in
case the host is overcommitted and paging, order-0 allocations should be
much faster and much more reliable than one big order-2, 3 or 4
allocation.

Bonus points for the future: we might be able to rework this to re-use
the old allocations of struct kvm_kernel_irq_routing_entry (basically
replacing only chip, nr_rt_entries and the hlists).

(For reference, I am appending two rough sketches below: one of the
allocation split, one of the SRCU swap Paolo referred to.)

Christian
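
Sketch 1: the allocation split. This is only an illustration of the idea,
not Joerg's actual patch: the "demo_" structs are simplified stand-ins for
kvm_irq_routing_table / kvm_kernel_irq_routing_entry, and error unwinding
is omitted. The point is just that the flat variant grows into an
order-2..4 request as the number of routes grows (a route per device on
s390), while the split variant stays at order 0 per entry.

/*
 * Sketch only: "demo_" structs are simplified stand-ins for the real
 * kvm_irq_routing_table / kvm_kernel_irq_routing_entry.
 */
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/types.h>

struct demo_route_entry {
	struct hlist_node link;		/* chained into the per-GSI list */
	u32 gsi;
};

struct demo_routing_table {
	u32 nr_rt_entries;
	struct hlist_head map[0];	/* one list head per GSI */
};

/* Old scheme: header, map and all entries in one big blob. */
static struct demo_routing_table *alloc_flat(u32 nr_gsi, u32 nr_entries)
{
	return kzalloc(sizeof(struct demo_routing_table) +
		       nr_gsi * sizeof(struct hlist_head) +
		       nr_entries * sizeof(struct demo_route_entry),
		       GFP_KERNEL);
}

/* New scheme: each entry is a separate order-0 allocation. */
static struct demo_routing_table *alloc_split(u32 nr_gsi, u32 nr_entries)
{
	struct demo_routing_table *rt;
	u32 i;

	rt = kzalloc(sizeof(*rt) + nr_gsi * sizeof(struct hlist_head),
		     GFP_KERNEL);
	if (!rt)
		return NULL;
	rt->nr_rt_entries = nr_gsi;
	for (i = 0; i < nr_gsi; i++)
		INIT_HLIST_HEAD(&rt->map[i]);

	for (i = 0; i < nr_entries; i++) {
		struct demo_route_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

		if (!e)
			return NULL;	/* error unwinding omitted in this sketch */
		e->gsi = i % nr_gsi;
		hlist_add_head(&e->link, &rt->map[e->gsi]);
	}
	return rt;
}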
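
Sketch 2: the SRCU swap Paolo mentioned (commit 719d93cd5f5). Written from
memory and simplified, so do not take it as the exact upstream code: the
function name publish_irq_routing is made up for the sketch, and the real
kvm_set_irq_routing() also rebuilds the table from userspace input and
updates the irqfds, which I am leaving out here.

#include <linux/kvm_host.h>
#include <linux/srcu.h>
#include <linux/slab.h>

/* Sketch, not the exact upstream code. */
static void publish_irq_routing(struct kvm *kvm,
				struct kvm_irq_routing_table *new)
{
	struct kvm_irq_routing_table *old;

	mutex_lock(&kvm->irq_lock);
	old = rcu_dereference_protected(kvm->irq_routing,
					lockdep_is_held(&kvm->irq_lock));
	rcu_assign_pointer(kvm->irq_routing, new);
	mutex_unlock(&kvm->irq_lock);

	/*
	 * Readers walk the table under kvm->irq_srcu; the expedited
	 * grace period is what keeps this path cheap for the writer.
	 */
	synchronize_srcu_expedited(&kvm->irq_srcu);

	kfree(old);
}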