To: Zhaoyang Huang, Zhaoyang Huang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1606995362-16413-1-git-send-email-zhaoyang.huang@unisoc.com>
From: Vlastimil Babka
Subject: Re: [PATCH] mm: fix a race on nr_swap_pages
Message-ID: <4c9b5a0c-9971-9960-b6a2-4e2966fb145b@suse.cz>
Date: Thu, 3 Dec 2020 19:03:50 +0100
In-Reply-To: <1606995362-16413-1-git-send-email-zhaoyang.huang@unisoc.com>

On 12/3/20 12:36 PM, Zhaoyang Huang wrote:
> The scenario on which "Free swap -4kB" happens in my system, which is caused by
> get_swap_page_of_type or get_swap_pages racing with show_mem. Remove the race
> here.
>
> Signed-off-by: Zhaoyang Huang
> ---
>  mm/swapfile.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index cf63b5f..13201b6 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -974,6 +974,8 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
>  	/* Only single cluster request supported */
>  	WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
>
> +	spin_lock(&swap_avail_lock);
> +
>  	avail_pgs = atomic_long_read(&nr_swap_pages) / size;
>  	if (avail_pgs <= 0)
>  		goto noswap;

This goto will leave with the spin lock locked, so that's a bug.

> @@ -986,8 +988,6 @@
>
>  	atomic_long_sub(n_goal * size, &nr_swap_pages);
>
> -	spin_lock(&swap_avail_lock);
> -

Is the problem that while we adjust n_goal with a min3(..., avail_pgs),
somebody else can decrease nr_swap_pages in the meantime and then we
underflow? If yes, the spin lock won't eliminate all such cases, it seems,
as e.g. get_swap_page_of_type isn't done under the same lock, AFAIK.

> start_over:
>  	node = numa_node_id();
>  	plist_for_each_entry_safe(si, next, &swap_avail_heads[node], avail_lists[node]) {
> @@ -1061,14 +1061,13 @@ swp_entry_t get_swap_page_of_type(int type)
>
>  	spin_lock(&si->lock);
>  	if (si->flags & SWP_WRITEOK) {
> -		atomic_long_dec(&nr_swap_pages);
>  		/* This is called for allocating swap entry, not cache */
>  		offset = scan_swap_map(si, 1);
>  		if (offset) {
> +			atomic_long_dec(&nr_swap_pages);
>  			spin_unlock(&si->lock);
>  			return swp_entry(type, offset);
>  		}
> -		atomic_long_inc(&nr_swap_pages);

This hunk looks safer, unless I miss something. Did you check if it's
enough to prevent the negative values on your systems?

>  	}
>  	spin_unlock(&si->lock);
> fail: