From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Yang Shi, Zi Yan, "Kirill A. Shutemov", Hugh Dickins, Alistair Popple,
    Jan Kara, Jue Wang, "Matthew Wilcox (Oracle)", Miaohe Lin, Minchan Kim,
    Naoya Horiguchi, Oscar Salvador, Peter Xu, Ralph Campbell, Shakeel Butt,
    Wang Yugui, Andrew Morton, Linus Torvalds, Greg Kroah-Hartman
Shutemov" , Hugh Dickins , Alistair Popple , Jan Kara , Jue Wang , "Matthew Wilcox (Oracle)" , Miaohe Lin , Minchan Kim , Naoya Horiguchi , Oscar Salvador , Peter Xu , Ralph Campbell , Shakeel Butt , Wang Yugui , Andrew Morton , Linus Torvalds , Greg Kroah-Hartman Subject: [PATCH 5.12 089/110] mm: thp: replace DEBUG_VM BUG with VM_WARN when unmap fails for split Date: Mon, 28 Jun 2021 10:18:07 -0400 Message-Id: <20210628141828.31757-90-sashal@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210628141828.31757-1-sashal@kernel.org> References: <20210628141828.31757-1-sashal@kernel.org> MIME-Version: 1.0 X-KernelTest-Patch: http://kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.12.14-rc1.gz X-KernelTest-Tree: git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git X-KernelTest-Branch: linux-5.12.y X-KernelTest-Patches: git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git X-KernelTest-Version: 5.12.14-rc1 X-KernelTest-Deadline: 2021-06-30T14:18+00:00 X-stable: review X-Patchwork-Hint: Ignore Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Yang Shi commit 504e070dc08f757bccaed6d05c0f53ecbfac8a23 upstream. When debugging the bug reported by Wang Yugui [1], try_to_unmap() may fail, but the first VM_BUG_ON_PAGE() just checks page_mapcount() however it may miss the failure when head page is unmapped but other subpage is mapped. Then the second DEBUG_VM BUG() that check total mapcount would catch it. This may incur some confusion. As this is not a fatal issue, so consolidate the two DEBUG_VM checks into one VM_WARN_ON_ONCE_PAGE(). [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/ Link: https://lkml.kernel.org/r/d0f0db68-98b8-ebfb-16dc-f29df24cf012@google.com Signed-off-by: Yang Shi Reviewed-by: Zi Yan Acked-by: Kirill A. 
Signed-off-by: Hugh Dickins
Cc: Alistair Popple
Cc: Jan Kara
Cc: Jue Wang
Cc: "Matthew Wilcox (Oracle)"
Cc: Miaohe Lin
Cc: Minchan Kim
Cc: Naoya Horiguchi
Cc: Oscar Salvador
Cc: Peter Xu
Cc: Ralph Campbell
Cc: Shakeel Butt
Cc: Wang Yugui
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/huge_memory.c | 24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c71a61e4c59..44c455dbbd63 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2360,15 +2360,15 @@ static void unmap_page(struct page *page)
 {
         enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_SYNC |
                 TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
-        bool unmap_success;
 
         VM_BUG_ON_PAGE(!PageHead(page), page);
 
         if (PageAnon(page))
                 ttu_flags |= TTU_SPLIT_FREEZE;
 
-        unmap_success = try_to_unmap(page, ttu_flags);
-        VM_BUG_ON_PAGE(!unmap_success, page);
+        try_to_unmap(page, ttu_flags);
+
+        VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
 }
 
 static void remap_page(struct page *page, unsigned int nr)
@@ -2679,7 +2679,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
         struct deferred_split *ds_queue = get_deferred_split_queue(head);
         struct anon_vma *anon_vma = NULL;
         struct address_space *mapping = NULL;
-        int count, mapcount, extra_pins, ret;
+        int extra_pins, ret;
         pgoff_t end;
 
         VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2738,7 +2738,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
         }
 
         unmap_page(head);
-        VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
         /* block interrupt reentry in xa_lock and spinlock */
         local_irq_disable();
@@ -2756,9 +2755,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
         /* Prevent deferred_split_scan() touching ->_refcount */
         spin_lock(&ds_queue->split_queue_lock);
-        count = page_count(head);
-        mapcount = total_mapcount(head);
-        if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
+        if (page_ref_freeze(head, 1 + extra_pins)) {
                 if (!list_empty(page_deferred_list(head))) {
                         ds_queue->split_queue_len--;
                         list_del(page_deferred_list(head));
@@ -2778,16 +2775,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
                 __split_huge_page(page, list, end);
                 ret = 0;
         } else {
-                if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
-                        pr_alert("total_mapcount: %u, page_count(): %u\n",
-                                        mapcount, count);
-                        if (PageTail(page))
-                                dump_page(head, NULL);
-                        dump_page(page, "total_mapcount(head) > 0");
-                        BUG();
-                }
                 spin_unlock(&ds_queue->split_queue_lock);
-fail:           if (mapping)
+fail:
+                if (mapping)
                         xa_unlock(&mapping->i_pages);
                 local_irq_enable();
                 remap_page(head, thp_nr_pages(head));
-- 
2.30.2
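
P.S. For readers following the reasoning in the commit message: the two old
checks saw different things. page_mapcount() on the head page does not notice
a tail subpage that is still mapped by a PTE, while page_mapped() /
total_mapcount() do, and that is exactly the state a failed try_to_unmap()
can leave behind. The standalone program below is only an illustrative model
under that assumption; struct toy_thp and its helpers are simplified
stand-ins, not the kernel's actual page structures or APIs.

/*
 * Toy model (not kernel code): why a head-only mapcount check can miss a
 * partially unmapped THP, while a "total" check catches it.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBPAGES 512                 /* 2 MiB THP with 4 KiB base pages */

struct toy_thp {
        int mapcount[NR_SUBPAGES];      /* per-subpage PTE mapcounts */
        int compound_mapcount;          /* PMD mappings of the whole THP */
};

/* Rough stand-in for the old head-only check. */
static bool head_only_mapped(const struct toy_thp *thp)
{
        return thp->compound_mapcount > 0 || thp->mapcount[0] > 0;
}

/* Rough stand-in for a check across all subpages (total mapcount). */
static bool any_subpage_mapped(const struct toy_thp *thp)
{
        if (thp->compound_mapcount > 0)
                return true;
        for (int i = 0; i < NR_SUBPAGES; i++)
                if (thp->mapcount[i] > 0)
                        return true;
        return false;
}

int main(void)
{
        struct toy_thp thp = { 0 };

        /*
         * Model a failed/partial unmap: the PMD mapping and the head page
         * are gone, but one tail subpage is still mapped by a PTE.
         */
        thp.mapcount[7] = 1;

        printf("head-only check sees mapped: %d\n", head_only_mapped(&thp));   /* 0: misses it */
        printf("total check sees mapped:     %d\n", any_subpage_mapped(&thp)); /* 1: catches it */
        return 0;
}

In the patch itself, the single VM_WARN_ON_ONCE_PAGE(page_mapped(page), page)
after try_to_unmap() plays the role of the "total" check, and it only warns
once instead of calling BUG(), since a still-mapped subpage at this point is
noisy but not fatal.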