From: Paolo Bonzini
To: Igor Mammedov, "Michael S. Tsirkin"
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, andrey@xdel.ru
Date: Mon, 22 Jun 2015 11:45:59 +0200
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions

On 22/06/2015 09:10, Igor Mammedov wrote:
> So far the HVA approach is unusable even if we make this assumption
> and let the guest crash: virtio_net doesn't work with it anyway.
> Translation of GPA to HVA for the descriptors works as expected
> (correctly), but virtio backed by the vhost+HVA hack still can't
> send or receive packets.
>
> That's why I prefer to merge the kernel solution first, as a stable
> solution that doesn't introduce any issues, and to work on the
> userspace approach on top of that.

Also, let's do some math.  Assume 3 network devices per VM, one vhost
device per queue, and one queue per VCPU per network device.  Assume
also that the host is overcommitted 3:1, i.e. 3 VCPUs per physical
CPU.  We then have 3*3=9 times as many vhost devices as physical CPUs,
which works out to about 108K per physical CPU (see the sketch at the
end of this message).

From a relative point of view, assuming 1 GB of memory per physical
CPU (a pretty low amount if you're overcommitting CPU 3:1), this is
0.01% of total memory.

From an absolute point of view, it takes a system with 60 physical
CPUs to reach the same memory usage as the vmlinuz binary of a typical
distro kernel (not counting the modules).

Paolo

> Hopefully it could be done, but we would still need time to iron out
> the side effects/issues it causes or could cause, so that the fix
> becomes stable enough for production.
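
A quick sketch of the arithmetic above, as a standalone C program.
The device count and the 1 GB-per-CPU figure are the assumptions from
this message; the ~12 KiB per-device cost is simply the value implied
by the 108K total (108K / 9 devices), not something computed from the
vhost structures:

#include <stdio.h>

int main(void)
{
	int nics_per_vm = 3;	/* network devices per VM */
	int overcommit  = 3;	/* VCPUs per physical CPU (3:1) */

	/* One queue per VCPU per NIC, one vhost device per queue:
	 * each physical CPU ends up behind 3 * 3 = 9 vhost devices. */
	int devs_per_pcpu = nics_per_vm * overcommit;

	long per_dev  = 12 * 1024;		  /* ~12 KiB per device */
	long per_pcpu = devs_per_pcpu * per_dev;  /* 108 KiB */
	double mem_per_pcpu = 1024.0 * 1024 * 1024;  /* 1 GB per pCPU */

	printf("per physical CPU: %ld KiB (%.2f%% of 1 GB)\n",
	       per_pcpu / 1024, 100.0 * per_pcpu / mem_per_pcpu);

	/* Around 60 physical CPUs this matches a typical vmlinuz. */
	printf("60 physical CPUs: %.1f MiB\n",
	       60.0 * per_pcpu / (1024 * 1024));
	return 0;
}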
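
And for reference, the uapi interface that the region limit in
$SUBJECT applies to, as a minimal userspace sketch (not QEMU's actual
code; "vhost_fd" is assumed to be an already-opened vhost device fd).
The structures are the ones in <linux/vhost.h>; the limit itself lives
in drivers/vhost/vhost.c (VHOST_MEMORY_MAX_NREGIONS, 64 at the time):

#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hand the vhost device a table of GPA -> HVA regions.  Each
 * vhost_memory_region is four __u64s (32 bytes), so even a table of
 * several hundred regions stays within a few pages. */
static int set_mem_table(int vhost_fd,
			 const struct vhost_memory_region *regs,
			 unsigned int n)
{
	struct vhost_memory *mem;
	int ret;

	mem = calloc(1, sizeof(*mem) + n * sizeof(*regs));
	if (!mem)
		return -1;
	mem->nregions = n;
	memcpy(mem->regions, regs, n * sizeof(*regs));

	/* The kernel rejects a table with more regions than its
	 * limit; with more than 64 memslots (e.g. many hotplugged
	 * DIMMs) this call is where the old cap bites. */
	ret = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
	free(mem);
	return ret;
}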