From: Ajay Kaher
CC: Jann Horn
Subject: [PATCH v2 1/8] mm: make page ref count overflow check tighter and more explicit
Date: Wed, 9 Oct 2019 06:14:16 +0530
Message-ID: <1570581863-12090-2-git-send-email-akaher@vmware.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1570581863-12090-1-git-send-email-akaher@vmware.com>
References: <1570581863-12090-1-git-send-email-akaher@vmware.com>
X-Mailing-List: linux-kernel@vger.kernel.org
From: Linus Torvalds

commit f958d7b528b1b40c44cfda5eabe2d82760d868c3 upstream.

We have a VM_BUG_ON() to check that the page reference count doesn't
underflow (or get close to overflow) by checking the sign of the count.

That's all fine, but we actually want to allow people to use a "get page
ref unless it's already very high" helper function, and we want that one
to use the sign of the page ref (without triggering this VM_BUG_ON).

Change the VM_BUG_ON to only check for small underflows (or _very_
close to overflowing), and ignore overflows which have strayed into
negative territory.

Acked-by: Matthew Wilcox
Cc: Jann Horn
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
[ 4.4.y backport notes:
  Ajay: Open-coded atomic refcount access due to missing
        page_ref_count() helper in 4.4.y
  Srivatsa: Added overflow check to get_page_foll() and related code. ]
Signed-off-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Ajay Kaher
---
 include/linux/mm.h | 6 +++++-
 mm/internal.h      | 5 +++--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed653ba..701088e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -488,6 +488,10 @@ static inline void get_huge_page_tail(struct page *page)
 
 extern bool __get_page_tail(struct page *page);
 
+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) atomic_read(&page->_count) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	if (unlikely(PageTail(page)))
@@ -497,7 +501,7 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	atomic_inc(&page->_count);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index f63f439..67015e5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -81,7 +81,8 @@ static inline void __get_page_tail_foll(struct page *page,
 	 * speculative page access (like in
 	 * page_cache_get_speculative()) on tail pages.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&compound_head(page)->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(compound_head(page)),
+		       page);
 	if (get_page_head)
 		atomic_inc(&compound_head(page)->_count);
 	get_huge_page_tail(page);
@@ -106,7 +107,7 @@ static inline void get_page_foll(struct page *page)
 		 * Getting a normal page or the head of a compound page
 		 * requires to already have an elevated page->_count.
 		 */
-		VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+		VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 		atomic_inc(&page->_count);
 	}
 }
-- 
2.7.4