From: Ajay Kaher <akaher@vmware.com>
Subject: [PATCH 1/8] mm: make page ref count overflow check tighter and more explicit
Date: Tue, 23 Jul 2019 16:38:24 +0530
Message-ID: <1563880111-19058-2-git-send-email-akaher@vmware.com>
In-Reply-To: <1563880111-19058-1-git-send-email-akaher@vmware.com>
References: <1563880111-19058-1-git-send-email-akaher@vmware.com>
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-kernel@vger.kernel.org

From: Linus Torvalds

commit f958d7b528b1b40c44cfda5eabe2d82760d868c3 upstream.

We have a VM_BUG_ON() to check that the page reference count doesn't
underflow (or get close to overflow) by checking the sign of the count.

That's all fine, but we actually want to allow people to use a "get page
ref unless it's already very high" helper function, and we want that one
to use the sign of the page ref (without triggering this VM_BUG_ON).

Change the VM_BUG_ON to only check for small underflows (or _very_
close to overflowing), and ignore overflows which have strayed into
negative territory.

Acked-by: Matthew Wilcox
Cc: Jann Horn
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
[ 4.4.y backport notes:
  Ajay: Open-coded atomic refcount access due to missing
        page_ref_count() helper in 4.4.y
  Srivatsa: Added overflow check to get_page_foll() and related code. ]
Signed-off-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Ajay Kaher
---
 include/linux/mm.h | 6 +++++-
 mm/internal.h      | 5 +++--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed653ba..701088e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -488,6 +488,10 @@ static inline void get_huge_page_tail(struct page *page)
 
 extern bool __get_page_tail(struct page *page);
 
+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) atomic_read(&page->_count) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	if (unlikely(PageTail(page)))
@@ -497,7 +501,7 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	atomic_inc(&page->_count);
 }
diff --git a/mm/internal.h b/mm/internal.h
index f63f439..67015e5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -81,7 +81,8 @@ static inline void __get_page_tail_foll(struct page *page,
 	 * speculative page access (like in
 	 * page_cache_get_speculative()) on tail pages.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&compound_head(page)->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(compound_head(page)),
+		       page);
 	if (get_page_head)
 		atomic_inc(&compound_head(page)->_count);
 	get_huge_page_tail(page);
@@ -106,7 +107,7 @@ static inline void get_page_foll(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	atomic_inc(&page->_count);
 	}
 }
-- 
2.7.4