Date: Wed, 26 Feb 2020 16:56:00 -0800 (PST)
From: Hugh Dickins
To: Yang Shi
cc: Alexander Duyck, "Michael S. Tsirkin", David Hildenbrand, Hugh Dickins,
Shutemov" , Andrea Arcangeli , Andrew Morton , linux-mm , LKML Subject: Re: [v2 PATCH] mm: shmem: allow split THP when truncating THP partially In-Reply-To: <9c30a891-011b-e041-2647-444d09fa7b8a@linux.alibaba.com> Message-ID: References: <1575420174-19171-1-git-send-email-yang.shi@linux.alibaba.com> <9b8ff9ca-75b0-c256-cf37-885ccd786de7@linux.alibaba.com> <9c30a891-011b-e041-2647-444d09fa7b8a@linux.alibaba.com> User-Agent: Alpine 2.11 (LSU 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, 26 Feb 2020, Yang Shi wrote: > On 2/21/20 4:24 PM, Alexander Duyck wrote: > > On Fri, Feb 21, 2020 at 10:24 AM Yang Shi > > wrote: > > > On 2/20/20 10:16 AM, Alexander Duyck wrote: > > > > On Tue, Dec 3, 2019 at 4:43 PM Yang Shi > > > > wrote: > > > > > Currently when truncating shmem file, if the range is partial of THP > > > > > (start or end is in the middle of THP), the pages actually will just > > > > > get > > > > > cleared rather than being freed unless the range cover the whole THP. > > > > > Even though all the subpages are truncated (randomly or > > > > > sequentially), > > > > > the THP may still be kept in page cache. This might be fine for some > > > > > usecases which prefer preserving THP. > > > > > > > > > > But, when doing balloon inflation in QEMU, QEMU actually does hole > > > > > punch > > > > > or MADV_DONTNEED in base page size granulairty if hugetlbfs is not > > > > > used. > > > > > So, when using shmem THP as memory backend QEMU inflation actually > > > > > doesn't > > > > > work as expected since it doesn't free memory. But, the inflation > > > > > usecase really needs get the memory freed. Anonymous THP will not > > > > > get > > > > > freed right away too but it will be freed eventually when all > > > > > subpages are > > > > > unmapped, but shmem THP would still stay in page cache. > > > > > > > > > > Split THP right away when doing partial hole punch, and if split > > > > > fails > > > > > just clear the page so that read to the hole punched area would > > > > > return > > > > > zero. > > > > > > > > > > Cc: Hugh Dickins > > > > > Cc: Kirill A. Shutemov > > > > > Cc: Andrea Arcangeli > > > > > Signed-off-by: Yang Shi > > > > One question I would have is if this is really the desired behavior we > > > > are looking for? > > > > > > > > By proactively splitting the THP you are likely going to see a > > > > performance regression with the virtio-balloon driver enabled in QEMU. > > > > I would suspect the response to that would be to update the QEMU code > > > > to identify the page size of the shared memory ramblock. At that > > > > point I suspect it would start behaving the same as how it currently > > > > handles anonymous memory, and the work done here would essentially > > > > have been wasted other than triggering the desire to resolve this in > > > > QEMU to avoid a performance regression. 
> > > > The code for inflating the balloon in virtio-balloon in QEMU can be
> > > > found here:
> > > > https://github.com/qemu/qemu/blob/master/hw/virtio/virtio-balloon.c#L66
> > > >
> > > > If there is a way for us to just populate the value obtained via
> > > > qemu_ram_pagesize with the THP page size instead of leaving it at
> > > > 4K, which is the size I am assuming it is at since you indicated
> > > > that it is just freeing the base page size, then we could address
> > > > the same issue and likely get the desired outcome of freeing the
> > > > entire THP page when it is no longer used.
> > >
> > > If qemu could punch holes (this is how qemu frees file-backed memory)
> > > in THP units, then with or without the patch the THP won't get split,
> > > since the whole THP will get truncated. But if qemu has to free
> > > memory in sub-THP sizes for whatever reason (for example, 1MB out of
> > > every 2MB section), then we have to split the THP, otherwise no
> > > memory will actually be freed with the current code. It is not about
> > > performance, it is about really giving memory back to the host.
> >
> > I get that, but at the same time I am not sure if everyone will be
> > happy with the trade-off. That is my concern.
> >
> > You may want to change the patch description above if that is the
> > case. Based on the description above it makes it sound as if the issue
> > is that QEMU is using hole punch or MADV_DONTNEED with the wrong
> > granularity. Based on your comment here it sounds like you want to
> > have the ability to break up the larger THP page as soon as you want
> > to push out a single 4K page from it.
>
> Yes, you are right. The commit log may be confusing. What I wanted to
> convey is that QEMU has no idea whether THP is used or not, so it treats
> memory at base page size unless hugetlbfs is used, since in that case
> QEMU is aware that huge pages are used. This may sound irrelevant to the
> problem, so I would just remove that.

Oh, I'm sad to read that, since I was yanking most of your commit message
(as "Yang Shi writes") into my version, to give stronger and independent
justification for the change. If I try to write about QEMU and ballooning
myself, nonsense is sure to emerge; but I don't know what part "I would
just remove that" refers to. May I beg you for an updated paragraph or
two, explaining why you want to see the change?

Thanks,
Hugh
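
[Editorial illustration, not part of the original thread: the partial hole
punch being discussed can be reproduced from userspace with a short C
sketch. The file path and sizes below are arbitrary assumptions; the sketch
only shows the fallocate(FALLOC_FL_PUNCH_HOLE) pattern that QEMU-style
ballooning performs, punching 1MB out of a 2MB extent of a tmpfs-backed
file, i.e. the case where, without the patch, the shmem THP is cleared but
not freed.]

/*
 * Minimal sketch (assumed path /dev/shm/thp-demo, sizes chosen only for
 * illustration): populate 2MB of a shmem-backed file, then punch a 1MB
 * hole -- the partial-THP truncation case discussed above.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const size_t thp = 2UL << 20;	/* 2MB, a typical x86_64 THP size */
	int fd = open("/dev/shm/thp-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
	char *buf;

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Write 2MB so shmem allocates pages (one THP if huge pages are on). */
	buf = calloc(1, thp);
	if (!buf || pwrite(fd, buf, thp, 0) != (ssize_t)thp) {
		perror("pwrite");
		return 1;
	}

	/*
	 * Punch only the first 1MB: without splitting the THP, shmem keeps
	 * the whole huge page in the page cache and no memory is freed.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, thp / 2) < 0)
		perror("fallocate");

	free(buf);
	close(fd);
	return 0;
}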