From: Pasha Tatashin
Date: Tue, 26 Oct 2021 21:21:29 -0400
Subject: Re: [RFC 1/8] mm: add overflow and underflow checks for page->_refcount
To: Matthew Wilcox
Cc: LKML, linux-mm, linux-m68k@lists.linux-m68k.org, Anshuman Khandual,
 Andrew Morton, william.kucharski@oracle.com, Mike Kravetz, Vlastimil Babka,
 Geert Uytterhoeven, schmitzmic@gmail.com, Steven Rostedt, Ingo Molnar,
 Johannes Weiner, Roman Gushchin, Muchun Song, weixugc@google.com, Greg Thelen

On Tue, Oct 26, 2021 at 5:34 PM Pasha Tatashin wrote:
>
> On Tue, Oct 26, 2021 at 3:50 PM Matthew Wilcox wrote:
> >
> > On Tue, Oct 26, 2021 at 05:38:15PM +0000, Pasha Tatashin wrote:
> > >  static inline void page_ref_add(struct page *page, int nr)
> > >  {
> > > -	atomic_add(nr, &page->_refcount);
> > > +	int ret;
> > > +
> > > +	VM_BUG_ON(nr <= 0);
> > > +	ret = atomic_add_return(nr, &page->_refcount);
> > > +	VM_BUG_ON_PAGE(ret <= 0, page);
> >
> > This isn't right.  _refcount is allowed to overflow into the negatives.
> > See page_ref_zero_or_close_to_overflow() and the conversations that led
> > to it being added.
>
> #define page_ref_zero_or_close_to_overflow(page) \
>	((unsigned int) page_ref_count(page) + 127u <= 127u)
>
> Uh, right, I saw the macro but did not realize there was an
> (unsigned int) cast.
>
> OK, I think we can move this macro into include/linux/page_ref.h
> and modify it to something like this:
>
> #define page_ref_zero_or_close_to_overflow(page, v) \
>	((unsigned int) page_ref_count(page) + v + 127u <= v + 127u)
>
> The sub/dec variants can also be fixed to ensure that we do not
> underflow while still working with the fact that we use all 32 bits
> of _refcount.

I think we can do that by using atomic_fetch_*() and checking for
overflow/underflow after the operation.

I will send the updated series soon.

Pasha
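
P.S. For page_ref_add() I am thinking of something along these lines
(an untested sketch just to illustrate the atomic_fetch_add() idea;
the exact check may end up different in the updated series):

static inline void page_ref_add(struct page *page, int nr)
{
	int old_val, new_val;

	VM_BUG_ON(nr <= 0);
	old_val = atomic_fetch_add(nr, &page->_refcount);
	new_val = old_val + nr;

	/*
	 * nr is positive, so as an unsigned value the refcount must
	 * strictly grow; if it did not, the add wrapped around 32 bits.
	 */
	VM_BUG_ON_PAGE((unsigned int)new_val <= (unsigned int)old_val, page);
}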