Subject: Re: [v2 PATCH] mm: shmem: allow split THP when truncating THP partially
To: David Hildenbrand, Alexander Duyck
Cc: "Michael S. Tsirkin", Hugh Dickins, "Kirill A. Shutemov",
Shutemov" , Andrea Arcangeli , Andrew Morton , linux-mm , LKML References: <1575420174-19171-1-git-send-email-yang.shi@linux.alibaba.com> <9b8ff9ca-75b0-c256-cf37-885ccd786de7@linux.alibaba.com> <9c30a891-011b-e041-2647-444d09fa7b8a@linux.alibaba.com> <84492260-726c-dda1-3ec8-b445fe269cad@redhat.com> From: Yang Shi Message-ID: <00338ec9-76fc-bc0e-39af-495ff3b5c7e9@linux.alibaba.com> Date: Wed, 26 Feb 2020 10:00:55 -0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Thunderbird/52.7.0 MIME-Version: 1.0 In-Reply-To: <84492260-726c-dda1-3ec8-b445fe269cad@redhat.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit Content-Language: en-US Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 2/26/20 9:45 AM, David Hildenbrand wrote: > On 26.02.20 18:31, Yang Shi wrote: >> >> On 2/21/20 4:24 PM, Alexander Duyck wrote: >>> On Fri, Feb 21, 2020 at 10:24 AM Yang Shi wrote: >>>> >>>> On 2/20/20 10:16 AM, Alexander Duyck wrote: >>>>> On Tue, Dec 3, 2019 at 4:43 PM Yang Shi wrote: >>>>>> Currently when truncating shmem file, if the range is partial of THP >>>>>> (start or end is in the middle of THP), the pages actually will just get >>>>>> cleared rather than being freed unless the range cover the whole THP. >>>>>> Even though all the subpages are truncated (randomly or sequentially), >>>>>> the THP may still be kept in page cache. This might be fine for some >>>>>> usecases which prefer preserving THP. >>>>>> >>>>>> But, when doing balloon inflation in QEMU, QEMU actually does hole punch >>>>>> or MADV_DONTNEED in base page size granulairty if hugetlbfs is not used. >>>>>> So, when using shmem THP as memory backend QEMU inflation actually doesn't >>>>>> work as expected since it doesn't free memory. But, the inflation >>>>>> usecase really needs get the memory freed. Anonymous THP will not get >>>>>> freed right away too but it will be freed eventually when all subpages are >>>>>> unmapped, but shmem THP would still stay in page cache. >>>>>> >>>>>> Split THP right away when doing partial hole punch, and if split fails >>>>>> just clear the page so that read to the hole punched area would return >>>>>> zero. >>>>>> >>>>>> Cc: Hugh Dickins >>>>>> Cc: Kirill A. Shutemov >>>>>> Cc: Andrea Arcangeli >>>>>> Signed-off-by: Yang Shi >>>>> One question I would have is if this is really the desired behavior we >>>>> are looking for? >>>>> >>>>> By proactively splitting the THP you are likely going to see a >>>>> performance regression with the virtio-balloon driver enabled in QEMU. >>>>> I would suspect the response to that would be to update the QEMU code >>>>> to identify the page size of the shared memory ramblock. At that >>>>> point I suspect it would start behaving the same as how it currently >>>>> handles anonymous memory, and the work done here would essentially >>>>> have been wasted other than triggering the desire to resolve this in >>>>> QEMU to avoid a performance regression. 
>>>>>
>>>>> The code for inflating the balloon in virtio-balloon in QEMU can be
>>>>> found here:
>>>>> https://github.com/qemu/qemu/blob/master/hw/virtio/virtio-balloon.c#L66
>>>>>
>>>>> If there is a way for us to just populate the value obtained via
>>>>> qemu_ram_pagesize with the THP page size instead of leaving it at 4K,
>>>>> which is the size I am assuming it is at since you indicated that it
>>>>> is just freeing the base page size, then we could address the same
>>>>> issue and likely get the desired outcome of freeing the entire THP
>>>>> page when it is no longer used.
>>>> If qemu could punch a hole (this is how qemu frees file-backed memory)
>>>> in THP units, then with or without the patch the THP won't get split,
>>>> since the whole THP will get truncated. But if qemu has to free memory
>>>> in sub-THP sizes for whatever reason (for example, 1MB out of every 2MB
>>>> section), then we have to split the THP, otherwise no memory will
>>>> actually be freed with the current code. It is not about performance,
>>>> it is about really giving memory back to the host.
>>> I get that, but at the same time I am not sure if everyone will be
>>> happy with the trade-off. That is my concern.
>>>
>>> You may want to change the patch description above if that is the
>>> case. Based on the description above it makes it sound as if the issue
>>> is that QEMU is using hole punch or MADV_DONTNEED with the wrong
>>> granularity. Based on your comment here it sounds like you want to
>>> have the ability to break up the larger THP page as soon as you want
>>> to push out a single 4K page from it.
>> Yes, you are right. The commit log may be confusing. What I wanted to
>> convey is that QEMU has no idea whether THP is used or not, so it treats
>> the memory with base page size unless hugetlbfs is used, since in that
>> case QEMU is aware that huge pages are used.
>> This may sound irrelevant to the problem; I would just remove that.
>>
>>> I am not sure the description of the behavior of anonymous THP with
>>> respect to QEMU makes sense either. Based on the description you made
>>> it sound like it was somehow using the same process used for huge
>>> pages. That isn't the case, right? My understanding is that in the case
>>> of an anonymous THP it is getting broken into 4K subpages and then
>>> those are freed individually. That should leave you with the same
>>> performance regression that I had brought up earlier.
>> No, anonymous THP won't get split immediately, and that memory also
>> won't get freed immediately if QEMU does MADV_DONTNEED on a sub-THP
>> range (for example, a 1MB range in a THP). The THP will get freed when:
>> 1. The host has memory pressure. The THP will get split and the unmapped
>>    pages will be freed.
>> 2. The other subpages in the same THP are MADV_DONTNEED'ed (eventually
>>    the whole THP gets unmapped).
>>
>> The difference between shmem and anonymous pages is that shmem will not
>> get freed unless the whole THP is hole punched, while anonymous pages
>> will get freed sooner or later.
>>
> As far as I understood Hugh, the "page size" we'll see in QEMU via
> fstatfs() is 4k, not 2MB. IMHO, that's the block size of the "device",
> and breaking up THP is the right thing to do to obey the documentation
> of "FALLOC_FL_PUNCH_HOLE".

This is what the patch attempts to accomplish.

> IMHO THP is called "transparent" because it shouldn't have any such
> visible side effects.

AFAICT, the lazy split is due to locking issues in the partial unmap paths.
Please refer to the "Partial unmap and deferred_split_huge_page()" section
in Documentation/vm/transhuge.rst.
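To make the split-or-clear fallback described in the commit message concrete,
here is a minimal sketch of the policy. It is not the actual shmem_undo_range()
change from the patch; the function name is invented, and locking, refcounting,
retry, and page cache bookkeeping are deliberately omitted.

#include <linux/huge_mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/*
 * Sketch only: handle a hole punch covering subpages [start, end) of a THP
 * whose head page is @head (indices are relative to the THP).  Try to split
 * the huge page so the punched subpages can really be freed; if the split
 * fails (e.g. extra references), fall back to zeroing the punched subpages
 * so reads of the hole return zero.
 */
static void punch_partial_thp(struct page *head, pgoff_t start, pgoff_t end)
{
	pgoff_t i;

	if (!trylock_page(head))
		return;		/* the real code retries */

	if (!split_huge_page(head)) {
		/* Split succeeded: the caller can now truncate and free
		 * the punched subpages individually. */
		unlock_page(head);
		return;
	}

	/* Split failed: just clear the punched subpages. */
	for (i = start; i < end; i++)
		clear_highpage(head + i);
	unlock_page(head);
}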
>
> As always, anybody correct me if I am wrong here.
>
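As an aside on the fstatfs() point above: tmpfs reports its base page size as
the filesystem block size, independent of whether a file is currently backed
by huge pages. This can be checked with a snippet like the one below; the
/dev/shm mount point is just an assumed example.

#include <stdio.h>
#include <sys/vfs.h>

int main(void)
{
	struct statfs st;

	/* tmpfs sets f_bsize to PAGE_SIZE (typically 4096), not the THP
	 * size, so this is the "device block size" userspace sees. */
	if (statfs("/dev/shm", &st) == 0)
		printf("f_bsize = %ld\n", (long)st.f_bsize);
	return 0;
}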