Subject: Re: [PATCH v3 0/1] xen/blkback: Squeeze page pools if a memory pressure
To: "Durrant, Paul", "Park, Seongjae", "axboe@kernel.dk", "konrad.wilk@oracle.com", "roger.pau@citrix.com"
Cc: "linux-block@vger.kernel.org", "linux-kernel@vger.kernel.org", "sj38.park@gmail.com", "xen-devel@lists.xenproject.org"
References: <20191209085839.21215-1-sjpark@amazon.com> <954f7beb-9d40-253e-260b-4750809bf808@suse.com> <026ba79524da43648371e5bca437a5e4@EX13D32EUC003.ant.amazon.com>
From: Jürgen Groß
Date: Mon, 9 Dec 2019 11:15:22 +0100
In-Reply-To: <026ba79524da43648371e5bca437a5e4@EX13D32EUC003.ant.amazon.com>
On 09.12.19 10:46, Durrant, Paul wrote:
>> -----Original Message-----
>> From: Jürgen Groß
>> Sent: 09 December 2019 09:39
>> To: Park, Seongjae; axboe@kernel.dk; konrad.wilk@oracle.com; roger.pau@citrix.com
>> Cc: linux-block@vger.kernel.org; linux-kernel@vger.kernel.org; Durrant, Paul; sj38.park@gmail.com; xen-devel@lists.xenproject.org
>> Subject: Re: [PATCH v3 0/1] xen/blkback: Squeeze page pools if a memory pressure
>>
>> On 09.12.19 09:58, SeongJae Park wrote:
>>> Each `blkif` has a free pages pool for the grant mapping. The size of
>>> the pool starts from zero and is increased on demand while processing
>>> I/O requests. When the handling of the current I/O requests is finished,
>>> or 100 milliseconds have passed since the last I/O requests were handled,
>>> the pool is checked and shrunk so that it does not exceed the size limit,
>>> `max_buffer_pages`.
>>>
>>> Therefore, guests running `blkfront` can cause memory pressure in the
>>> guest running `blkback` by attaching a large number of block devices and
>>> inducing I/O.
>>
>> I'm having problems understanding how a guest can attach a large number
>> of block devices without those having been configured by the host admin
>> before.
>>
>> If those devices have been configured, dom0 should be ready for that
>> number of devices, e.g. by having enough spare memory area for ballooned
>> pages.
>>
>> So either I'm missing something here, or your reasoning for the need of
>> the patch is wrong.
>>
>
> I think the underlying issue is that persistent grant support is hogging memory in the backends, thereby compromising scalability. IIUC this patch is essentially a band-aid to get back to the scalability that was possible before persistent grant support was added. Ultimately the right answer would be to get rid of persistent grant support and use grant copy instead, but such a change is clearly more invasive and would need far more testing.
Persistent grants are hogging ballooned pages, which is equivalent to hogging memory only if the backend domain's memory is equal, or rather close, to its maximum memory size. So configuring the backend domain with enough spare area for ballooned pages should make this problem much less serious.

Another problem in this area is the number of maptrack frames configured for a driver domain, which limits the number of concurrent foreign mappings of that domain.

So instead of a blkback-specific solution I'd rather have a common callback for backends to release foreign mappings, in order to enable global resource management.


Juergen