From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Kravetz, Naoya Horiguchi, Michal Hocko, Hugh Dickins, Andrea Arcangeli, "Kirill A. Shutemov", Davidlohr Bueso, Prakash Sangappa, Andrew Morton, Linus Torvalds
Subject: [PATCH 3.18 90/90] hugetlbfs: fix kernel BUG at fs/hugetlbfs/inode.c:444!
Date: Mon, 19 Nov 2018 17:30:12 +0100
Message-Id: <20181119162634.181292799@linuxfoundation.org>
In-Reply-To: <20181119162620.585061184@linuxfoundation.org>
References: <20181119162620.585061184@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review

3.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mike Kravetz

commit 5e41540c8a0f0e98c337dda8b391e5dda0cde7cf upstream.

This bug has been experienced several times by the Oracle DB team.  The
BUG is triggered in remove_inode_hugepages() as follows:

	/*
	 * If page is mapped, it was faulted in after being
	 * unmapped in caller.  Unmap (again) now after taking
	 * the fault mutex.  The mutex will prevent faults
	 * until we finish removing the page.
	 *
	 * This race can only happen in the hole punch case.
	 * Getting here in a truncate operation is a bug.
	 */
	if (unlikely(page_mapped(page))) {
		BUG_ON(truncate_op);

In this case, the elevated map count is not the result of a race.
Rather it was incorrectly incremented as the result of a bug in the huge
pmd sharing code.  Consider the following:

- Process A maps a hugetlbfs file of sufficient size and alignment
  (PUD_SIZE) that a pmd page could be shared.

- Process B maps the same hugetlbfs file with the same size and
  alignment such that a pmd page is shared.

- Process B then calls mprotect() to change protections for the mapping
  with the shared pmd.  As a result, the pmd is 'unshared'.

- Process B then calls mprotect() again to change protections for the
  mapping back to their original value.  The pmd remains unshared.

- Process B then forks and process C is created.  During the fork
  process, we do dup_mm -> dup_mmap -> copy_page_range to copy page
  tables.
Copying page tables for hugetlb mappings is done in the routine
copy_hugetlb_page_range().  In copy_hugetlb_page_range(), the destination
pte is obtained by:

	dst_pte = huge_pte_alloc(dst, addr, sz);

If pmd sharing is possible, the returned pointer will be to a pte in an
existing page table.  In the situation above, process C could share with
either process A or process B.  Since process A is first in the list, the
returned pte is a pointer to a pte in process A's page table.  However,
the check for pmd sharing in copy_hugetlb_page_range() is:

	/* If the pagetables are shared don't copy or take references */
	if (dst_pte == src_pte)
		continue;

Since process C is sharing with process A instead of process B, the above
test fails.  The code in copy_hugetlb_page_range() which follows assumes
dst_pte points to a huge_pte_none pte.  It copies the pte entry from
src_pte to dst_pte and increments the map count of the associated page.
This is how we end up with an elevated map count.

To solve, check the dst_pte entry for huge_pte_none.  If !none, this
implies pmd sharing, so do not copy.

Link: http://lkml.kernel.org/r/20181105212315.14125-1-mike.kravetz@oracle.com
Fixes: c5c99429fa57 ("fix hugepages leak due to pagetable page sharing")
Signed-off-by: Mike Kravetz
Reviewed-by: Naoya Horiguchi
Cc: Michal Hocko
Cc: Hugh Dickins
Cc: Andrea Arcangeli
Cc: "Kirill A. Shutemov"
Cc: Davidlohr Bueso
Cc: Prakash Sangappa
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/hugetlb.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2576,7 +2576,7 @@ static int is_hugetlb_entry_hwpoisoned(p
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			    struct vm_area_struct *vma)
 {
-	pte_t *src_pte, *dst_pte, entry;
+	pte_t *src_pte, *dst_pte, entry, dst_entry;
 	struct page *ptepage;
 	unsigned long addr;
 	int cow;
@@ -2604,15 +2604,30 @@ int copy_hugetlb_page_range(struct mm_st
 			break;
 		}
 
-		/* If the pagetables are shared don't copy or take references */
-		if (dst_pte == src_pte)
+		/*
+		 * If the pagetables are shared don't copy or take references.
+		 * dst_pte == src_pte is the common case of src/dest sharing.
+		 *
+		 * However, src could have 'unshared' and dst shares with
+		 * another vma.  If dst_pte !none, this implies sharing.
+		 * Check here before taking page table lock, and once again
+		 * after taking the lock below.
+		 */
+		dst_entry = huge_ptep_get(dst_pte);
+		if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
 			continue;
 
 		dst_ptl = huge_pte_lock(h, dst, dst_pte);
 		src_ptl = huge_pte_lockptr(h, src, src_pte);
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 		entry = huge_ptep_get(src_pte);
-		if (huge_pte_none(entry)) { /* skip none entry */
+		dst_entry = huge_ptep_get(dst_pte);
+		if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
+			/*
+			 * Skip if src entry none.  Also, skip in the
+			 * unlikely case dst entry !none as this implies
+			 * sharing with another vma.
+			 */
+			;
 		} else if (unlikely(is_hugetlb_entry_migration(entry) ||
 				    is_hugetlb_entry_hwpoisoned(entry))) {