Date: Fri, 2 Mar 2018 13:26:37 -0800
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: kernel-hardening@lists.openwall.com, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov"
Subject: [RFC] Handle mapcount overflows
Message-ID: <20180302212637.GB671@bombadil.infradead.org>
References: <20180208021112.GB14918@bombadil.infradead.org>
In-Reply-To: <20180208021112.GB14918@bombadil.infradead.org>

Here's my third effort to handle page->_mapcount overflows.  The idea
is to minimise overhead, so we keep a list of users with more than 5000
mappings.  In order to overflow _mapcount, you have to have 2 billion
mappings, so you'd need 400,000 tasks to evade the tracking, and your
sysadmin has probably accused you of forkbombing the system long before
then.  Not to mention the 6GB of RAM you'd consume just in stacks and
the 24GB of RAM you'd consume in page tables ... but I digress.

Let's assume the sysadmin has increased the number of processes to
100,000.  You'd then need to create more than 20,000 mappings per
process to overflow _mapcount, and those processes would end up on the
'heavy_users' list.  Not everybody on the heavy_users list is going to
be guilty, but if we hit an overflow, we look at everybody on the
heavy_users list, and anyone who has the page mapped more than 1000
times gets a SIGKILL.
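For concreteness, here's roughly what an abuser looks like from a
single task's point of view.  This is a hypothetical, simplified
sketch; a real attack would spread it across hundreds of thousands of
tasks, since 2^31 / 5000 is about 400,000:

/* Hypothetical sketch: every MAP_SHARED mapping of the same file page
 * bumps that page's _mapcount once the page has been faulted in.
 */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/victim", O_RDWR | O_CREAT, 0600);
	long i;

	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;

	for (i = 0; i < 20000; i++) {	/* capped by sysctl_max_map_count */
		volatile char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED,
					fd, 0);
		if (p == MAP_FAILED)
			break;
		(void)*p;		/* fault it in: _mapcount++ */
	}
	pause();			/* hold the mappings open */
	return 0;
}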
I'm not entirely sure how to forcibly tear down a task's mappings, so
for now I've just left a comment in kill_mm() to do that.  Looking for
feedback on this approach.
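One possible shape for that teardown, strictly as a rough sketch: it
assumes mmap_sem is already held for write and uses the current
four-argument do_munmap(); entirely untested:

/* Sketch only: unmap everything before delivering the fatal signal.
 * Assumes the caller holds mm->mmap_sem for write.
 */
static void tear_down_mappings(struct mm_struct *mm)
{
	struct vm_area_struct *vma = mm->mmap;

	while (vma) {
		/* do_munmap() unlinks and frees the VMAs in the range,
		 * so grab vm_next before it goes away.
		 */
		struct vm_area_struct *next = vma->vm_next;

		do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start,
				NULL);
		vma = next;
	}
}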
diff --git a/mm/internal.h b/mm/internal.h
index 7059a8389194..977852b8329e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -97,6 +97,11 @@ extern void putback_lru_page(struct page *page);
  */
 extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
+#ifdef CONFIG_64BIT
+extern void mm_mapcount_overflow(struct page *page);
+#else
+static inline void mm_mapcount_overflow(struct page *page) { }
+#endif
 /*
  * in mm/page_alloc.c
  */
diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021ad22..575766ec02f8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1315,6 +1315,115 @@ static inline int mlock_future_check(struct mm_struct *mm,
 	return 0;
 }
 
+#ifdef CONFIG_64BIT
+/*
+ * Machines with more than 2TB of memory can create enough VMAs to
+ * overflow page->_mapcount if they all point to the same page.
+ * 32-bit machines do not need to be concerned.
+ */
+/*
+ * Experimentally determined.  gnome-shell currently uses fewer than
+ * 3000 mappings, so should have zero effect on desktop users.
+ */
+#define mm_track_threshold	5000
+static DEFINE_SPINLOCK(heavy_users_lock);
+static DEFINE_IDR(heavy_users);
+
+static void mmap_track_user(struct mm_struct *mm, int max)
+{
+	struct mm_struct *entry;
+	unsigned int id;
+
+	idr_preload(GFP_KERNEL);
+	spin_lock(&heavy_users_lock);
+	idr_for_each_entry(&heavy_users, entry, id) {
+		if (entry == mm)
+			break;
+		if (entry->map_count < mm_track_threshold)
+			idr_remove(&heavy_users, id);
+	}
+	if (!entry)
+		idr_alloc(&heavy_users, mm, 0, 0, GFP_ATOMIC);
+	spin_unlock(&heavy_users_lock);
+}
+
+static void mmap_untrack_user(struct mm_struct *mm)
+{
+	struct mm_struct *entry;
+	unsigned int id;
+
+	spin_lock(&heavy_users_lock);
+	idr_for_each_entry(&heavy_users, entry, id) {
+		if (entry == mm) {
+			idr_remove(&heavy_users, id);
+			break;
+		}
+	}
+	spin_unlock(&heavy_users_lock);
+}
+
+static void kill_mm(struct task_struct *tsk)
+{
+	/* Tear down the mappings first */
+	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, tsk, true);
+}
+
+static void kill_abuser(struct mm_struct *mm)
+{
+	struct task_struct *tsk;
+
+	for_each_process(tsk)
+		if (tsk->mm == mm)
+			break;
+
+	if (down_write_trylock(&mm->mmap_sem)) {
+		kill_mm(tsk);
+		up_write(&mm->mmap_sem);
+	} else {
+		do_send_sig_info(SIGKILL, SEND_SIG_FORCED, tsk, true);
+	}
+}
+
+void mm_mapcount_overflow(struct page *page)
+{
+	struct mm_struct *entry = current->mm;
+	unsigned int id;
+	struct vm_area_struct *vma;
+	struct address_space *mapping = page_mapping(page);
+	unsigned long pgoff = page_to_pgoff(page);
+	unsigned int count = 0;
+
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff + 1) {
+		if (vma->vm_mm == entry)
+			count++;
+		if (count > 1000)
+			kill_mm(current);
+	}
+
+	rcu_read_lock();
+	idr_for_each_entry(&heavy_users, entry, id) {
+		count = 0;
+
+		vma_interval_tree_foreach(vma, &mapping->i_mmap,
+				pgoff, pgoff + 1) {
+			if (vma->vm_mm == entry)
+				count++;
+			if (count > 1000) {
+				kill_abuser(entry);
+				goto out;
+			}
+		}
+	}
+	if (!entry)
+		panic("No abusers found but mapcount exceeded\n");
+out:
+	rcu_read_unlock();
+}
+#else
+static void mmap_track_user(struct mm_struct *mm, int max) { }
+static void mmap_untrack_user(struct mm_struct *mm) { }
+#endif
+
 /*
  * The caller must hold down_write(&current->mm->mmap_sem).
  */
@@ -1357,6 +1466,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	/* Too many mappings? */
 	if (mm->map_count > sysctl_max_map_count)
 		return -ENOMEM;
+	if (mm->map_count > mm_track_threshold)
+		mmap_track_user(mm, mm_track_threshold);
 
 	/* Obtain the address to map to. we verify (or select) it and ensure
 	 * that it represents a valid section of the address space.
@@ -2997,6 +3108,8 @@ void exit_mmap(struct mm_struct *mm)
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
 
+	mmap_untrack_user(mm);
+
 	if (mm->locked_vm) {
 		vma = mm->mmap;
 		while (vma) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 47db27f8049e..d88acf5c98e9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1190,6 +1190,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
 		__inc_node_page_state(page, NR_SHMEM_PMDMAPPED);
 	} else {
+		int v;
 		if (PageTransCompound(page) && page_mapping(page)) {
 			VM_WARN_ON_ONCE(!PageLocked(page));
 
@@ -1197,8 +1198,13 @@ void page_add_file_rmap(struct page *page, bool compound)
 			if (PageMlocked(page))
 				clear_page_mlock(compound_head(page));
 		}
-		if (!atomic_inc_and_test(&page->_mapcount))
+		v = atomic_inc_return(&page->_mapcount);
+		if (likely(v > 0))
 			goto out;
+		if (unlikely(v < 0)) {
+			mm_mapcount_overflow(page);
+			goto out;
+		}
 	}
 	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
 out:
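As a footnote on the rmap.c change: the overflow check works because
_mapcount is biased at -1, so atomic_inc_return() yields 0 for the
first mapping, a positive value for an already-mapped page, and a
negative value once the counter wraps past INT_MAX.  A standalone
userspace illustration of the idiom (not kernel code):

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	/* pretend 2^31 - 1 mappings already exist */
	atomic_int mapcount = INT_MAX;
	int v = atomic_fetch_add(&mapcount, 1) + 1;	/* ~inc_return() */

	if (v < 0)	/* wrapped: this is the mm_mapcount_overflow() case */
		printf("overflow detected (v = %d)\n", v);
	return 0;
}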