Date: Fri, 30 Jul 2021 00:25:37 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Hugh Dickins, Shakeel Butt, "Kirill A. Shutemov", Yang Shi, Miaohe Lin,
    Mike Kravetz, Michal Hocko, Rik van Riel, Christoph Hellwig,
    Matthew Wilcox, "Eric W. Biederman", Alexey Gladkov, Chris Wilson,
    Matthew Auld, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-api@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 01/16] huge tmpfs: fix fallocate(vanilla) advance over huge pages
In-Reply-To: <2862852d-badd-7486-3a8e-c5ea9666d6fb@google.com>
References: <2862852d-badd-7486-3a8e-c5ea9666d6fb@google.com>

shmem_fallocate() goes to a lot of trouble to leave its newly allocated pages !Uptodate, partly to identify and undo them on failure, partly to leave the overhead of clearing them until later.
But the huge page case did not skip to the end of the extent: it walked through the tail pages one by one and appeared to work just fine, but in doing so it cleared and Uptodated the huge page, so there was no way to undo it on failure.

Now advance immediately to the end of the huge extent, with a comment on why this is more than just an optimization. But although this speeds up huge tmpfs fallocation, it does leave the clearing until first use, and some users may have come to appreciate slow fallocate but fast first use: if they complain, then we can consider adding a pass to clear at the end.

Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
Signed-off-by: Hugh Dickins
---
 mm/shmem.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 70d9ce294bb4..0cd5c9156457 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2736,7 +2736,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 	inode->i_private = &shmem_falloc;
 	spin_unlock(&inode->i_lock);
 
-	for (index = start; index < end; index++) {
+	for (index = start; index < end; ) {
 		struct page *page;
 
 		/*
@@ -2759,13 +2759,26 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto undone;
 		}
 
+		index++;
+		/*
+		 * Here is a more important optimization than it appears:
+		 * a second SGP_FALLOC on the same huge page will clear it,
+		 * making it PageUptodate and un-undoable if we fail later.
+		 */
+		if (PageTransCompound(page)) {
+			index = round_up(index, HPAGE_PMD_NR);
+			/* Beware 32-bit wraparound */
+			if (!index)
+				index--;
+		}
+
 		/*
 		 * Inform shmem_writepage() how far we have reached.
 		 * No need for lock or barrier: we have the page lock.
 		 */
-		shmem_falloc.next++;
 		if (!PageUptodate(page))
-			shmem_falloc.nr_falloced++;
+			shmem_falloc.nr_falloced += index - shmem_falloc.next;
+		shmem_falloc.next = index;
 
 		/*
 		 * If !PageUptodate, leave it that way so that freeable pages
-- 
2.26.2