Date: Sat, 5 Aug 2017 10:27:04 +0100
From: "Richard W.M. Jones" <rjones@redhat.com>
To: Christoph Hellwig
Cc: linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org,
    "Martin K. Petersen", pbonzini@redhat.com
Subject: Re: Increased memory usage with scsi-mq
Message-ID: <20170805092704.GD20914@redhat.com>
References: <20170804210035.GA10017@redhat.com> <20170805084436.GA14264@lst.de>
In-Reply-To: <20170805084436.GA14264@lst.de>

On Sat, Aug 05, 2017 at 10:44:36AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 04, 2017 at 10:00:47PM +0100, Richard W.M. Jones wrote:
> > I read your slides about scsi-mq and it seems like a significant
> > benefit to large machines, but could the out of the box defaults be
> > made more friendly for small memory machines?
>
> The default number of queues and queue depth, and thus the memory
> usage, is set by the LLDD.
>
> Try to reduce the can_queue value in virtio_scsi and/or make sure
> you use the single queue variant in your VM (which should be tunable
> in qemu).

Thanks, this is interesting.  Virtio-scsi seems to have a few settable
parameters that might be related to this:

    DEFINE_PROP_UINT32("num_queues", VirtIOSCSI,
                       parent_obj.conf.num_queues, 1),
    DEFINE_PROP_UINT32("max_sectors", VirtIOSCSI,
                       parent_obj.conf.max_sectors, 0xFFFF),
    DEFINE_PROP_UINT32("cmd_per_lun", VirtIOSCSI,
                       parent_obj.conf.cmd_per_lun, 128),

Unfortunately (assuming I'm setting them right - see below), none of
them has any effect on the number of disks that I can add to the VM.

I am testing them by placing them in the ‘-device virtio-scsi-pci’
parameter, i.e. as a property of the controller, not a property of
the LUN, e.g.:

  -device virtio-scsi-pci,cmd_per_lun=32,id=scsi \
  -drive file=/home/rjones/d/libguestfs/tmp/libguestfshXImTv/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
  -device scsi-hd,drive=hd0 \

The debugging output is a bit too large to attach to this email, but I
have placed it at the link below.  It contains (if you scroll down a
bit) the full qemu command line and the full kernel output.

  http://oirase.annexia.org/tmp/bz1478201-log.txt

I can add some extra debugging into the kernel if you like.  Just
point me to the right place.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
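
P.S. One combination I have not tried yet is explicitly forcing the
single queue variant and a lower queue depth at the same time, in case
neither tunable is enough on its own.  A sketch of that invocation
(untested; num_queues and cmd_per_lun are the controller properties
quoted above, and the scratch disk is the same one as before):

  -device virtio-scsi-pci,num_queues=1,cmd_per_lun=32,id=scsi \
  -drive file=/home/rjones/d/libguestfs/tmp/libguestfshXImTv/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
  -device scsi-hd,drive=hd0 \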