From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Kirill A. Shutemov",
    "Matthew Wilcox (Oracle)", Andrew Morton, SeongJae Park, Huang Ying,
    Linus Torvalds, Sasha Levin
Shutemov" , "Matthew Wilcox (Oracle)" , Andrew Morton , SeongJae Park , Huang Ying , Linus Torvalds , Sasha Levin Subject: [PATCH 5.9 508/757] mm/huge_memory: fix split assumption of page size Date: Tue, 27 Oct 2020 14:52:38 +0100 Message-Id: <20201027135514.293517457@linuxfoundation.org> X-Mailer: git-send-email 2.29.1 In-Reply-To: <20201027135450.497324313@linuxfoundation.org> References: <20201027135450.497324313@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Kirill A. Shutemov [ Upstream commit 8cce54756806e5777069c46011c5f54f9feac717 ] File THPs may now be of arbitrary size, and we can't rely on that size after doing the split so remember the number of pages before we start the split. Signed-off-by: Kirill A. Shutemov Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Andrew Morton Reviewed-by: SeongJae Park Cc: Huang Ying Link: https://lkml.kernel.org/r/20200908195539.25896-6-willy@infradead.org Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- mm/huge_memory.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index dbac774103769..d37e205d3eae7 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2335,13 +2335,13 @@ static void unmap_page(struct page *page) VM_BUG_ON_PAGE(!unmap_success, page); } -static void remap_page(struct page *page) +static void remap_page(struct page *page, unsigned int nr) { int i; if (PageTransHuge(page)) { remove_migration_ptes(page, page, true); } else { - for (i = 0; i < HPAGE_PMD_NR; i++) + for (i = 0; i < nr; i++) remove_migration_ptes(page + i, page + i, true); } } @@ -2416,6 +2416,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, struct lruvec *lruvec; struct address_space *swap_cache = NULL; unsigned long offset = 0; + unsigned int nr = thp_nr_pages(head); int i; lruvec = mem_cgroup_page_lruvec(head, pgdat); @@ -2431,7 +2432,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, xa_lock(&swap_cache->i_pages); } - for (i = HPAGE_PMD_NR - 1; i >= 1; i--) { + for (i = nr - 1; i >= 1; i--) { __split_huge_page_tail(head, i, lruvec, list); /* Some pages can be beyond i_size: drop them from page cache */ if (head[i].index >= end) { @@ -2451,7 +2452,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, ClearPageCompound(head); - split_page_owner(head, HPAGE_PMD_NR); + split_page_owner(head, nr); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { @@ -2470,9 +2471,9 @@ static void __split_huge_page(struct page *page, struct list_head *list, spin_unlock_irqrestore(&pgdat->lru_lock, flags); - remap_page(head); + remap_page(head, nr); - for (i = 0; i < HPAGE_PMD_NR; i++) { + for (i = 0; i < nr; i++) { struct page *subpage = head + i; if (subpage == page) continue; @@ -2725,7 +2726,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) fail: if (mapping) xa_unlock(&mapping->i_pages); spin_unlock_irqrestore(&pgdata->lru_lock, flags); - remap_page(head); + remap_page(head, thp_nr_pages(head)); ret = -EBUSY; } -- 2.25.1