Date: Wed, 1 Apr 2020 11:54:39 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: Uladzislau Rezki
Cc: Joel Fernandes, linux-kernel@vger.kernel.org, linux-mm@kvack.org, rcu@vger.kernel.org, willy@infradead.org, peterz@infradead.org, neilb@suse.com, vbabka@suse.cz, mgorman@suse.de, Andrew Morton, Josh Triplett, Lai Jiangshan, Mathieu Desnoyers, Steven Rostedt
Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free memory pattern
Message-ID: <20200401185439.GG19865@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200331131628.153118-1-joel@joelfernandes.org> <20200331140433.GA26498@pc636> <20200331150911.GC236678@google.com> <20200331160119.GA27614@pc636> <20200331183000.GD236678@google.com> <20200401122550.GA32593@pc636> <20200401134745.GV19865@paulmck-ThinkPad-P72> <20200401181601.GA4042@pc636> <20200401182615.GE19865@paulmck-ThinkPad-P72> <20200401183745.GA5960@pc636>
In-Reply-To: <20200401183745.GA5960@pc636>
List-ID: linux-kernel.vger.kernel.org

On Wed, Apr 01, 2020 at 08:37:45PM +0200, Uladzislau Rezki wrote:
> On Wed, Apr 01, 2020 at 11:26:15AM -0700, Paul E. McKenney wrote:
> > On Wed, Apr 01, 2020 at 08:16:01PM +0200, Uladzislau Rezki wrote:
> > > > > >
> > > > > > Right. As discussed with Paul, it is better if we pre-allocate
> > > > > > N array blocks per CPU and use them for the cache. The default
> > > > > > for N would be 1, tunable with a boot parameter. I agree with this.
> > > > > >
> > > > > As discussed before, we can make use of the memory pool API for
> > > > > such a purpose.
> > > > > But I am not sure whether it should be one pool per CPU, or
> > > > > one pool shared by all CPUs that would contain NR_CPUS * N
> > > > > pre-allocated blocks.
> > > >
> > > > There are advantages and disadvantages either way.  The advantage of the
> > > > per-CPU pool is that you don't have to worry about something like lock
> > > > contention causing even more pain during an OOM event.  One potential
> > > > problem with the per-CPU pool can happen when callbacks are offloaded,
> > > > in which case the CPUs needing the memory might never get it,
> > > > because in the offloaded case (RCU_NOCB_CPU=y) the CPU posting callbacks
> > > > might never be invoking them.
> > > >
> > > > But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
> > > > either don't have heavy callback loads (HPC systems) or are carefully
> > > > configured (real-time systems).  Plus large systems would probably end
> > > > up needing something pretty close to a slab allocator to keep from dying
> > > > from lock contention, and it is hard to justify that level of complexity
> > > > at this point.
> > > >
> > > > Or is there some way to mark a specific slab allocator instance as being
> > > > able to keep some amount of memory no matter what the OOM conditions are?
> > > > If not, the current per-CPU pre-allocated cache is a better choice in the
> > > > near term.
> > > >
> > > As for the mempool API:
> > >
> > > mempool_alloc() just tries a regular allocation, taking into account
> > > the passed gfp_t bitmask. If that fails due to memory pressure, it
> > > falls back to a reserved pool of pre-allocated elements (filled with
> > > the desired number of elements when the pool is created).
> > >
> > > mempool_free() returns an element to the pool: if it detects that the
> > > current number of reserved elements is lower than the minimum allowed,
> > > it adds the element to the reserved pool, i.e. refills it. Otherwise
> > > it just calls kfree() or whatever we define as the "element-freeing
> > > function."
> >
> > Unless I am missing something, mempool_alloc() acquires a per-mempool
> > lock on each invocation under OOM conditions.  For our purposes, this
> > is essentially a global lock.  This will not be at all acceptable on a
> > large system.
> >
> It uses pool->lock to access the reserved objects, so if we have one
> memory pool per CPU then it would be serialized.

I am having difficulty parsing your sentence.  It looks like your
thought is to invoke mempool_create() for each CPU, so that the
locking would be on a per-CPU basis, as in 128 invocations of
mempool_init() on a system having 128 hardware threads.

Is that your intent?  Rough sketches of both approaches appear below
the signature.

							Thanx, Paul
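
For reference, here is a minimal sketch of the mempool behavior described
above, using only the stock <linux/mempool.h> API.  The pool name, element
size, and reserve size below are made up for illustration; nothing here is
taken from the RFC patch itself.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

#define KRC_OBJ_SIZE	64	/* hypothetical element size */
#define KRC_POOL_MIN	16	/* reserve pre-allocated at create time */

static mempool_t *krc_pool;	/* hypothetical name */

static int __init krc_pool_setup(void)
{
	/* Pre-allocates KRC_POOL_MIN elements into the reserve. */
	krc_pool = mempool_create_kmalloc_pool(KRC_POOL_MIN, KRC_OBJ_SIZE);
	return krc_pool ? 0 : -ENOMEM;
}

static void *krc_alloc(void)
{
	/*
	 * Tries kmalloc() first; under memory pressure it falls back to
	 * the pre-allocated reserve.  Without __GFP_DIRECT_RECLAIM in the
	 * mask, it returns NULL once the reserve is exhausted rather than
	 * sleeping for an element to be freed.
	 */
	return mempool_alloc(krc_pool, GFP_NOWAIT);
}

static void krc_free(void *p)
{
	/*
	 * Refills the reserve if it holds fewer than KRC_POOL_MIN
	 * elements, otherwise hands the element back to kfree().
	 */
	mempool_free(p, krc_pool);
}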
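
And here is a sketch of the per-CPU variant being asked about: one
mempool_init() per possible CPU, so that pool->lock is only ever taken
for that CPU's own reserve.  Again, all names are placeholders, and
KRC_CPU_POOL_MIN corresponds to the boot-tunable N from earlier in the
thread.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/percpu.h>
#include <linux/slab.h>

#define KRC_CPU_POOL_MIN	1	/* N blocks per CPU, boot-tunable */

static DEFINE_PER_CPU(mempool_t, krc_cpu_pool);	/* placeholder name */

static int __init krc_cpu_pools_init(void)
{
	int cpu, ret;

	for_each_possible_cpu(cpu) {
		/* One mempool_init() per CPU: locking becomes per-CPU. */
		ret = mempool_init_kmalloc_pool(per_cpu_ptr(&krc_cpu_pool, cpu),
						KRC_CPU_POOL_MIN, PAGE_SIZE);
		if (ret)
			return ret;
	}
	return 0;
}

static void *krc_cpu_alloc(void)
{
	/* Each CPU allocates from (and later refills) its own reserve. */
	return mempool_alloc(raw_cpu_ptr(&krc_cpu_pool), GFP_NOWAIT);
}

Note the caveat raised earlier in the thread: with offloaded callbacks
(RCU_NOCB_CPU=y), the CPU invoking the callbacks (and thus doing the
mempool_free()) may not be the CPU that allocated, so one CPU's reserve
could be drained while another CPU's is refilled.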