From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Hugh Dickins,
 Andrew Morton, "Kirill A. Shutemov", Andrea Arcangeli,
 Mike Kravetz, Song Liu, Linus Torvalds
Subject: [PATCH 5.8 079/232] khugepaged: collapse_pte_mapped_thp() protect the pmd lock
Date: Thu, 20 Aug 2020 11:18:50 +0200
Message-Id: <20200820091616.640725029@linuxfoundation.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200820091612.692383444@linuxfoundation.org>
References: <20200820091612.692383444@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Hugh Dickins

commit 119a5fc16105b2b9383a6e2a7800b2ef861b2975 upstream.

When retract_page_tables() removes a page table to make way for a huge
pmd, it holds huge page lock, i_mmap_lock_write, mmap_write_trylock and
pmd lock; but when collapse_pte_mapped_thp() does the same (to handle
the case when the original mmap_write_trylock had failed), only
mmap_write_trylock and pmd lock are held.

That's not enough.  One machine has twice crashed under load, with
"BUG: spinlock bad magic" and GPF on 6b6b6b6b6b6b6b6b.  Examining the
second crash, page_vma_mapped_walk_done()'s spin_unlock of pvmw->ptl
(serving page_referenced() on a file THP, that had found a page table
at *pmd) discovers that the page table page and its lock have already
been freed by the time it comes to unlock.

Follow the example of retract_page_tables(), but we only need one of
huge page lock or i_mmap_lock_write to secure against this: because
it's the narrower lock, and because it simplifies
collapse_pte_mapped_thp() to know the hpage earlier, choose to rely on
huge page lock here.

Fixes: 27e1f8273113 ("khugepaged: enable collapse pmd for pte-mapped THP")
Signed-off-by: Hugh Dickins
Signed-off-by: Andrew Morton
Acked-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Mike Kravetz
Cc: Song Liu
Cc: [5.4+]
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021213070.27773@eggly.anvils
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/khugepaged.c |   44 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)

--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1412,7 +1412,7 @@ void collapse_pte_mapped_thp(struct mm_s
 {
 	unsigned long haddr = addr & HPAGE_PMD_MASK;
 	struct vm_area_struct *vma = find_vma(mm, haddr);
-	struct page *hpage = NULL;
+	struct page *hpage;
 	pte_t *start_pte, *pte;
 	pmd_t *pmd, _pmd;
 	spinlock_t *ptl;
@@ -1432,9 +1432,17 @@ void collapse_pte_mapped_thp(struct mm_s
 	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
 		return;
 
+	hpage = find_lock_page(vma->vm_file->f_mapping,
+			       linear_page_index(vma, haddr));
+	if (!hpage)
+		return;
+
+	if (!PageHead(hpage))
+		goto drop_hpage;
+
 	pmd = mm_find_pmd(mm, haddr);
 	if (!pmd)
-		return;
+		goto drop_hpage;
 
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
@@ -1453,30 +1461,11 @@ void collapse_pte_mapped_thp(struct mm_s
 
 		page = vm_normal_page(vma, addr, *pte);
 
-		if (!page || !PageCompound(page))
-			goto abort;
-
-		if (!hpage) {
-			hpage = compound_head(page);
-			/*
-			 * The mapping of the THP should not change.
-			 *
-			 * Note that uprobe, debugger, or MAP_PRIVATE may
-			 * change the page table, but the new page will
-			 * not pass PageCompound() check.
-			 */
-			if (WARN_ON(hpage->mapping != vma->vm_file->f_mapping))
-				goto abort;
-		}
-
 		/*
-		 * Confirm the page maps to the correct subpage.
-		 *
-		 * Note that uprobe, debugger, or MAP_PRIVATE may change
-		 * the page table, but the new page will not pass
-		 * PageCompound() check.
+		 * Note that uprobe, debugger, or MAP_PRIVATE may change the
+		 * page table, but the new page will not be a subpage of hpage.
 		 */
-		if (WARN_ON(hpage + i != page))
+		if (hpage + i != page)
 			goto abort;
 		count++;
 	}
@@ -1495,7 +1484,7 @@ void collapse_pte_mapped_thp(struct mm_s
 	pte_unmap_unlock(start_pte, ptl);
 
 	/* step 3: set proper refcount and mm_counters. */
-	if (hpage) {
+	if (count) {
 		page_ref_sub(hpage, count);
 		add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
 	}
@@ -1506,10 +1495,15 @@ void collapse_pte_mapped_thp(struct mm_s
 	spin_unlock(ptl);
 	mm_dec_nr_ptes(mm);
 	pte_free(mm, pmd_pgtable(_pmd));
+
+drop_hpage:
+	unlock_page(hpage);
+	put_page(hpage);
 	return;
 
 abort:
 	pte_unmap_unlock(start_pte, ptl);
+	goto drop_hpage;
 }
 
 static int khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)