Date: Wed, 17 Jun 2015 21:11:10 +0200
From: "Michael S. Tsirkin"
To: Paolo Bonzini
Cc: Igor Mammedov, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH 3/5] vhost: support upto 509 memory regions
Message-ID: <20150617205949-mutt-send-email-mst@redhat.com>
In-Reply-To: <5581A496.5060503@redhat.com>

On Wed, Jun 17, 2015 at 06:47:18PM +0200, Paolo Bonzini wrote:
> On 17/06/2015 18:41, Michael S. Tsirkin wrote:
> > On Wed, Jun 17, 2015 at 06:38:25PM +0200, Paolo Bonzini wrote:
> >> On 17/06/2015 18:34, Michael S. Tsirkin wrote:
> >>> On Wed, Jun 17, 2015 at 06:31:32PM +0200, Paolo Bonzini wrote:
> >>>> On 17/06/2015 18:30, Michael S. Tsirkin wrote:
> >>>>> Meanwhile old tools are vulnerable to OOM attacks.
> >>>>
> >>>> For each vhost device there will be likely one tap interface, and I
> >>>> suspect that it takes way, way more than 16KB of memory.
> >>>
> >>> That's not true. We have a vhost device per queue; all queues
> >>> are part of a single tap device.
> >>
> >> s/tap/VCPU/ then. A KVM VCPU also takes more than 16KB of memory.
> >
> > That's up to you as a kvm maintainer :)
>
> Not easy, when the CPU alone requires three (albeit non-consecutive)
> pages for the VMCS, the APIC access page and the EPT root.
>
> > People are already concerned about vhost device
> > memory usage; I'm not happy to define our user/kernel interface
> > in a way that forces even more memory to be used up.
>
> So, the questions to ask are:
>
> 1) What is the memory usage like immediately after vhost is brought up,
> apart from these 16K?

About 24K, but most of that is iov pool arrays, kept around as an
optimization to avoid kmalloc on the data path. Less than 1K tracks
persistent state. Recently people have been complaining about these
pools, so I've been thinking about switching to a per-cpu array, or
something similar.

> 2) Is there anything in vhost that allocates a user-controllable amount
> of memory?

Definitely not in vhost-net.

> 3) What is the size of the data structures that support one virtqueue
> (there are two of them)?

Around 256 bytes.

> Does it depend on the size of the virtqueues?

No.

> 4) Would it make sense to share memory regions between multiple vhost
> devices? Would it be hard to implement?

It's not trivial, and it would absolutely require userspace ABI
extensions.

> It would also make memory
> operations O(1) rather than O(#cpus).
>
> Paolo

We'd save the kmalloc/memcpy/kfree, that is true. But we'd still need
to flush all VQs, so it's still O(#cpus); we'd just be doing less work
inside that O(#cpus).
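
For concreteness, here is a quick standalone check of where the 16K
figure above comes from. The two struct layouts mirror the vhost uapi
in linux/vhost.h; the rest is just illustrative arithmetic, not vhost
code:

#include <stdio.h>
#include <stdint.h>

/* Same layout as struct vhost_memory_region in linux/vhost.h:
 * four 64-bit fields, so 32 bytes per region. */
struct vhost_memory_region {
	uint64_t guest_phys_addr;
	uint64_t memory_size;
	uint64_t userspace_addr;
	uint64_t flags_padding;
};

/* Header of struct vhost_memory, followed by the region array. */
struct vhost_memory {
	uint32_t nregions;
	uint32_t padding;
	struct vhost_memory_region regions[];
};

int main(void)
{
	size_t n = 509;	/* proposed region limit */
	size_t table = sizeof(struct vhost_memory)
		       + n * sizeof(struct vhost_memory_region);

	/* Prints: 32 bytes/region, 16296 bytes (~15.9K) for 509 regions */
	printf("%zu bytes/region, %zu bytes (~%.1fK) for %zu regions\n",
	       sizeof(struct vhost_memory_region), table,
	       table / 1024.0, n);
	return 0;
}

Each vhost device keeps its own copy of this table, and updating it is
roughly the pattern described above: kmalloc a new table, copy it in
from userspace, swap it in, flush every VQ so no worker still sees the
old table, then kfree the old one. That per-VQ flush is the O(#cpus)
part that sharing the table would not remove.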
--
MST