Date: Fri, 1 Feb 2019 13:45:28 +0100
From: Michal Hocko
To: "Uladzislau Rezki (Sony)"
Cc: Andrew Morton, linux-mm@kvack.org, Matthew Wilcox, LKML,
	Thomas Garnier, Oleksiy Avramchenko, Steven Rostedt,
	Joel Fernandes, Thomas Gleixner, Ingo Molnar, Tejun Heo
Subject: Re: [PATCH 1/1] mm/vmalloc: convert vmap_lazy_nr to atomic_long_t
Message-ID: <20190201124528.GN11599@dhcp22.suse.cz>
References: <20190131162452.25879-1-urezki@gmail.com>
In-Reply-To: <20190131162452.25879-1-urezki@gmail.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu 31-01-19 17:24:52, Uladzislau Rezki (Sony) wrote:
> vmap_lazy_nr variable has atomic_t type that is 4 bytes integer
> value on both 32 and
> 64 bit systems. lazy_max_pages() deals with
> "unsigned long" that is 8 bytes on 64 bit system, thus vmap_lazy_nr
> should be 8 bytes on 64 bit as well.

But do we really need a 64b number of _pages_? I have a hard time
imagining that we would have that many lazy pages to accumulate.

>
> Signed-off-by: Uladzislau Rezki (Sony)
> ---
>  mm/vmalloc.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index abe83f885069..755b02983d8d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -632,7 +632,7 @@ static unsigned long lazy_max_pages(void)
>  	return log * (32UL * 1024 * 1024 / PAGE_SIZE);
>  }
>  
> -static atomic_t vmap_lazy_nr = ATOMIC_INIT(0);
> +static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
>  
>  /*
>   * Serialize vmap purging.  There is no actual criticial section protected
> @@ -650,7 +650,7 @@ static void purge_fragmented_blocks_allcpus(void);
>   */
>  void set_iounmap_nonlazy(void)
>  {
> -	atomic_set(&vmap_lazy_nr, lazy_max_pages()+1);
> +	atomic_long_set(&vmap_lazy_nr, lazy_max_pages()+1);
>  }
>  
>  /*
> @@ -658,10 +658,10 @@ void set_iounmap_nonlazy(void)
>   */
>  static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  {
> +	unsigned long resched_threshold;
>  	struct llist_node *valist;
>  	struct vmap_area *va;
>  	struct vmap_area *n_va;
> -	int resched_threshold;
>  
>  	lockdep_assert_held(&vmap_purge_lock);
>  
> @@ -681,16 +681,16 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  	}
>  
>  	flush_tlb_kernel_range(start, end);
> -	resched_threshold = (int) lazy_max_pages() << 1;
> +	resched_threshold = lazy_max_pages() << 1;
>  
>  	spin_lock(&vmap_area_lock);
>  	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
> -		int nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> +		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
>  
>  		__free_vmap_area(va);
> -		atomic_sub(nr, &vmap_lazy_nr);
> +		atomic_long_sub(nr, &vmap_lazy_nr);
>  
> -		if (atomic_read(&vmap_lazy_nr) < resched_threshold)
> +		if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
>  			cond_resched_lock(&vmap_area_lock);
>  	}
>  	spin_unlock(&vmap_area_lock);
> @@ -727,10 +727,10 @@ static void purge_vmap_area_lazy(void)
>   */
>  static void free_vmap_area_noflush(struct vmap_area *va)
>  {
> -	int nr_lazy;
> +	unsigned long nr_lazy;
>  
> -	nr_lazy = atomic_add_return((va->va_end - va->va_start) >> PAGE_SHIFT,
> -					&vmap_lazy_nr);
> +	nr_lazy = atomic_long_add_return((va->va_end - va->va_start) >>
> +					PAGE_SHIFT, &vmap_lazy_nr);
>  
>  	/* After this point, we may free va at any time */
>  	llist_add(&va->purge_list, &vmap_purge_list);
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs