From: Ajay Kaher
Subject: [PATCH 2/8] mm: add 'try_get_page()' helper function
Date: Tue, 23 Jul 2019 16:38:25 +0530
Message-ID: <1563880111-19058-3-git-send-email-akaher@vmware.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1563880111-19058-1-git-send-email-akaher@vmware.com>
References: <1563880111-19058-1-git-send-email-akaher@vmware.com>

From: Linus Torvalds

commit 88b1a17dfc3ed7728316478fae0f5ad508f50397 upstream.

This is the same as the traditional 'get_page()' function, but instead
of unconditionally incrementing the reference count of the page, it
only does so if the count was "safe". It returns whether the reference
count was incremented (and is marked __must_check, since the caller
obviously has to be aware of it).

Also like 'get_page()', you can't use this function unless you already
had a reference to the page. The intent is that you can use this
exactly like get_page(), but in situations where you want to limit the
maximum reference count.

The code currently does an unconditional WARN_ON_ONCE() if we ever hit
the reference count issues (either zero or negative), as a notification
that the conditional non-increment actually happened.

NOTE! The count access for the "safety" check is inherently racy, but
that doesn't matter since the buffer we use is basically half the range
of the reference count (ie we look at the sign of the count).

Acked-by: Matthew Wilcox
Cc: Jann Horn
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds

[ 4.4.y backport notes:
  Srivatsa:
  - Adapted try_get_page() to match the get_page() implementation in
    4.4.y, except for the refcount check.
  - Added try_get_page_foll() which will be needed in a subsequent
    patch. ]

Signed-off-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Ajay Kaher
---
 include/linux/mm.h | 12 ++++++++++++
 mm/internal.h      | 23 +++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 701088e..52edaf1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -505,6 +505,18 @@ static inline void get_page(struct page *page)
 	atomic_inc(&page->_count);
 }
 
+static inline __must_check bool try_get_page(struct page *page)
+{
+	if (unlikely(PageTail(page)))
+		if (likely(__get_page_tail(page)))
+			return true;
+
+	if (WARN_ON_ONCE(atomic_read(&page->_count) <= 0))
+		return false;
+	atomic_inc(&page->_count);
+	return true;
+}
+
 static inline struct page *virt_to_head_page(const void *x)
 {
 	struct page *page = virt_to_page(x);
diff --git a/mm/internal.h b/mm/internal.h
index 67015e5..d83afc9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -112,6 +112,29 @@ static inline void get_page_foll(struct page *page)
 	}
 }
 
+static inline __must_check bool try_get_page_foll(struct page *page)
+{
+	if (unlikely(PageTail(page))) {
+		if (WARN_ON_ONCE(atomic_read(&compound_head(page)->_count) <= 0))
+			return false;
+		/*
+		 * This is safe only because
+		 * __split_huge_page_refcount() can't run under
+		 * get_page_foll() because we hold the proper PT lock.
+		 */
+		__get_page_tail_foll(page, true);
+	} else {
+		/*
+		 * Getting a normal page or the head of a compound page
+		 * requires to already have an elevated page->_count.
+		 */
+		if (WARN_ON_ONCE(atomic_read(&page->_count) <= 0))
+			return false;
+		atomic_inc(&page->_count);
+	}
+	return true;
+}
+
 extern unsigned long highest_memmap_pfn;
 
 /*
-- 
2.7.4
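
As a rough illustration of the intended calling convention (not part of
this patch), a caller that already holds a reference to a page, e.g.
through the page tables, might take an extra pin with try_get_page()
along the lines of the sketch below. The helper name pin_page_example()
and the -ENOMEM error policy are assumptions for illustration only; the
real callers are introduced by later patches in this series.

#include <linux/mm.h>
#include <linux/errno.h>

/*
 * Illustrative sketch only: take an extra reference on a page we
 * already hold a reference to, but refuse if the refcount is zero
 * or has overflowed into negative territory.
 */
static int pin_page_example(struct page *page, struct page **pinned)
{
	/*
	 * try_get_page() is __must_check: a false return means the
	 * reference was NOT taken and the caller must back out.
	 */
	if (!try_get_page(page))
		return -ENOMEM;	/* error policy is an assumption */

	*pinned = page;
	return 0;
}

On success the caller owns an extra reference and drops it with
put_page() when done, exactly as it would after a plain get_page().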