Date: Thu, 27 Feb 2020 11:47:04 +0300
From: "Kirill A. Shutemov"
To: Hugh Dickins
Cc: Andrew Morton, Yang Shi, Alexander Duyck, "Michael S. Tsirkin",
    David Hildenbrand, "Kirill A. Shutemov", Matthew Wilcox,
    Andrea Arcangeli, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] huge tmpfs: try to split_huge_page() when punching hole
Message-ID: <20200227084704.aolem5nktpricrzo@box>

On Wed, Feb 26, 2020 at 08:06:33PM -0800, Hugh Dickins wrote:
> Yang Shi writes:
>
> Currently, when truncating a shmem file, if the range is partly in a THP
> (start or end is in the middle of THP), the pages actually will just get
> cleared rather than being freed, unless the range covers the whole THP.
> Even though all the subpages are truncated (randomly or sequentially),
> the THP may still be kept in page cache.
>
> This might be fine for some usecases which prefer preserving THP, but
> balloon inflation is handled in base page size. So when using shmem THP
> as memory backend, QEMU inflation actually doesn't work as expected since
> it doesn't free memory. But the inflation usecase really needs to get
> the memory freed. (Anonymous THP will also not get freed right away,
> but will be freed eventually when all subpages are unmapped: whereas
> shmem THP still stays in page cache.)
>
> Split THP right away when doing partial hole punch, and if split fails
> just clear the page so that read of the punched area will return zeroes.
>
> Hugh Dickins adds:
>
> Our earlier "team of pages" huge tmpfs implementation worked in the way
> that Yang Shi proposes; and we have been using this patch to continue to
> split the huge page when hole-punched or truncated, since converting over
> to the compound page implementation. Although huge tmpfs gives out huge
> pages when available, if the user specifically asks to truncate or punch
> a hole (perhaps to free memory, perhaps to reduce the memcg charge), then
> the filesystem should do so as best it can, splitting the huge page.

I'm still uncomfortable with the proposition of using truncate or
hole-punch operations to manage memory footprint. These operations are
about managing the storage footprint, not memory; for tmpfs the two just
happen to coincide.

I wonder if we should consider limiting the behaviour to the one
operation that explicitly combines memory and storage management:
MADV_REMOVE. That way we can avoid future misunderstandings with THP
backed by a real filesystem.
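For illustration, a minimal userspace sketch of the two interfaces on a
tmpfs-backed mapping is below. It is only a sketch: the memfd, the
MADV_HUGEPAGE call, the 2MB/1MB sizes and the assumption that
shmem_enabled permits huge pages are all illustrative, not taken from
the patch.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MB	(1024UL * 1024)
#define LEN	(4 * MB)

int main(void)
{
	/* Anonymous tmpfs file; assumes shmem_enabled permits huge pages. */
	int fd = memfd_create("balloon", 0);

	if (fd < 0 || ftruncate(fd, LEN)) {
		perror("memfd_create/ftruncate");
		return 1;
	}

	char *map = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Ask for huge pages and fault the whole file in. */
	madvise(map, LEN, MADV_HUGEPAGE);
	memset(map, 0xaa, LEN);

	/*
	 * Storage-style interface: punch a hole over half of the first 2MB
	 * page. Without the patch the partially covered THP is kept in page
	 * cache, so the range is cleared but no memory is given back.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, MB))
		perror("fallocate");

	/*
	 * Memory+storage interface: MADV_REMOVE frees the range and its
	 * backing store (supported on tmpfs), i.e. the operation suggested
	 * above for ballooning-style memory management.
	 */
	if (madvise(map + 2 * MB, MB, MADV_REMOVE))
		perror("madvise(MADV_REMOVE)");

	munmap(map, LEN);
	close(fd);
	return 0;
}

Watching ShmemHugePages in /proc/meminfo around each call is one way to
see whether the partially covered huge page was actually split and freed.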
> }
>
> /*
> + * Check whether a hole-punch or truncation needs to split a huge page,
> + * returning true if no split was required, or the split has been successful.
> + *
> + * Eviction (or truncation to 0 size) should never need to split a huge page;
> + * but in rare cases might do so, if shmem_undo_range() failed to trylock on
> + * head, and then succeeded to trylock on tail.
> + *
> + * A split can only succeed when there are no additional references on the
> + * huge page: so the split below relies upon find_get_entries() having stopped
> + * when it found a subpage of the huge page, without getting further references.
> + */
> +static bool shmem_punch_compound(struct page *page, pgoff_t start, pgoff_t end)
> +{
> +	if (!PageTransCompound(page))
> +		return true;
> +
> +	/* Just proceed to delete a huge page wholly within the range punched */
> +	if (PageHead(page) &&
> +	    page->index >= start && page->index + HPAGE_PMD_NR <= end)
> +		return true;
> +
> +	/* Try to split huge page, so we can truly punch the hole or truncate */
> +	return split_huge_page(page) >= 0;
> +}

I wanted to recommend taking khugepaged_max_ptes_none into account here,
but that would nullify the usefulness of the feature for ballooning.
Oh, well...

-- 
 Kirill A. Shutemov