Date: Wed, 1 Apr 2020 12:34:05 -0700
From: "Paul E. McKenney"
To: Uladzislau Rezki
Cc: Joel Fernandes, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 rcu@vger.kernel.org, willy@infradead.org, peterz@infradead.org,
 neilb@suse.com, vbabka@suse.cz, mgorman@suse.de, Andrew Morton,
 Josh Triplett, Lai Jiangshan, Mathieu Desnoyers, Steven Rostedt
Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free memory pattern
Message-ID: <20200401193405.GH19865@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200331150911.GC236678@google.com>
 <20200331160119.GA27614@pc636>
 <20200331183000.GD236678@google.com>
 <20200401122550.GA32593@pc636>
 <20200401134745.GV19865@paulmck-ThinkPad-P72>
 <20200401181601.GA4042@pc636>
 <20200401182615.GE19865@paulmck-ThinkPad-P72>
 <20200401183745.GA5960@pc636>
 <20200401185439.GG19865@paulmck-ThinkPad-P72>
 <20200401190548.GA6360@pc636>
In-Reply-To: <20200401190548.GA6360@pc636>

On Wed, Apr 01, 2020 at 09:05:48PM +0200, Uladzislau Rezki wrote:
> On Wed, Apr 01, 2020 at 11:54:39AM -0700, Paul E. McKenney wrote:
> > On Wed, Apr 01, 2020 at 08:37:45PM +0200, Uladzislau Rezki wrote:
> > > On Wed, Apr 01, 2020 at 11:26:15AM -0700, Paul E. McKenney wrote:
> > > > On Wed, Apr 01, 2020 at 08:16:01PM +0200, Uladzislau Rezki wrote:
> > > > > > > > >
> > > > > > > > Right. Per discussion with Paul, it is better if we
> > > > > > > > pre-allocate N array blocks per-CPU and use them for the cache.
> > > > > > > > Default for N being 1, tunable with a boot parameter. I agree
> > > > > > > > with this.
> > > > > > > >
> > > > > > > As discussed before, we can make use of the memory pool API for
> > > > > > > that purpose. But I am not sure whether it should be one pool per
> > > > > > > CPU, or one pool shared by all CPUs that contains NR_CPUS * N
> > > > > > > pre-allocated blocks.
> > > > > >
> > > > > > There are advantages and disadvantages either way. The advantage of the
> > > > > > per-CPU pool is that you don't have to worry about something like lock
> > > > > > contention causing even more pain during an OOM event. One potential
> > > > > > problem with the per-CPU pool can happen when callbacks are offloaded,
> > > > > > in which case the CPUs needing the memory might never get it, because
> > > > > > in the offloaded case (RCU_NOCB_CPU=y) the CPU posting callbacks might
> > > > > > never be invoking them.
> > > > > >
> > > > > > But from what I know now, systems built with CONFIG_RCU_NOCB_CPU=y
> > > > > > either don't have heavy callback loads (HPC systems) or are carefully
> > > > > > configured (real-time systems). Plus large systems would probably end
> > > > > > up needing something pretty close to a slab allocator to keep from
> > > > > > dying of lock contention, and it is hard to justify that level of
> > > > > > complexity at this point.
> > > > > >
> > > > > > Or is there some way to mark a specific slab allocator instance as being
> > > > > > able to keep some amount of memory no matter what the OOM conditions are?
> > > > > > If not, the current per-CPU pre-allocated cache is the better choice
> > > > > > in the near term.
> > > > >
> > > > > As for the mempool API:
> > > > >
> > > > > mempool_alloc() just tries a regular allocation, taking the passed
> > > > > gfp_t bitmask into account. If that fails due to memory pressure, it
> > > > > falls back to the reserved pre-allocated pool, which holds the number
> > > > > of elements requested when the pool was created.
> > > > >
> > > > > mempool_free() returns an element to the pool: if it detects that the
> > > > > current number of reserved elements is lower than the minimum allowed,
> > > > > it adds the element to the reserved pool, i.e. refills it. Otherwise it
> > > > > just calls kfree() or whatever we define as the "element-freeing
> > > > > function."
> > > >
> > > > Unless I am missing something, mempool_alloc() acquires a per-mempool
> > > > lock on each invocation under OOM conditions. For our purposes, this
> > > > is essentially a global lock. This will not be at all acceptable on a
> > > > large system.
> > >
> > > It uses pool->lock to access the reserved objects, so if we have one
> > > memory pool per CPU then it would be serialized.
> >
> > I am having difficulty parsing your sentence. It looks like your thought
> > is to invoke mempool_create() for each CPU, so that the locking would be
> > on a per-CPU basis, as in 128 invocations of mempool_init() on a system
> > having 128 hardware threads. Is that your intent?
>
> In order to serialize it, you need to have it per CPU. So if you have 128
> CPUs, that means:
>
> for_each_possible_cpu(...)
>     cpu_pool = mempool_create();
>
> but please keep in mind that this is not my intention; I just had a
> thought about the mempool API, because it has pre-reserve logic inside.

OK, fair point on use of the mempool API, but my guess is that extending
the current kfree_rcu() logic will be simpler.

							Thanx, Paul
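
For concreteness, here is a minimal sketch of the per-CPU mempool idea
floated above: one pool per possible CPU, each holding a small reserve.
It uses only the stock mempool API (mempool_create_kmalloc_pool(),
mempool_alloc(), mempool_free()); the names krc_pool, KRC_RESERVED_NR,
krc_alloc() and krc_free() are made up for illustration and are not
taken from any posted patch.

    #include <linux/gfp.h>
    #include <linux/mempool.h>
    #include <linux/percpu.h>

    #define KRC_RESERVED_NR 1	/* the "N" above; default 1 */

    static DEFINE_PER_CPU(mempool_t *, krc_pool);

    static int __init krc_pool_init(void)
    {
    	int cpu;

    	for_each_possible_cpu(cpu) {
    		/* Each pool keeps KRC_RESERVED_NR page-sized objects in reserve. */
    		per_cpu(krc_pool, cpu) =
    			mempool_create_kmalloc_pool(KRC_RESERVED_NR, PAGE_SIZE);
    		if (!per_cpu(krc_pool, cpu))
    			return -ENOMEM;	/* error unwinding elided for brevity */
    	}
    	return 0;
    }

    /*
     * mempool_alloc() tries the regular allocator first and dips into the
     * reserve only on failure; with GFP_NOWAIT it can still return NULL
     * once the reserve is exhausted.
     */
    static void *krc_alloc(void)
    {
    	return mempool_alloc(this_cpu_read(krc_pool), GFP_NOWAIT);
    }

    /* mempool_free() tops the reserve back up before actually freeing. */
    static void krc_free(void *p)
    {
    	mempool_free(p, this_cpu_read(krc_pool));
    }

This keeps the pool->lock per CPU, which addresses the contention concern,
but note Paul's caveat: with offloaded callbacks, frees may land on a
different CPU than the allocations, so reserves migrate between pools.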
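And, for contrast, a sketch of the direction the thread leans toward:
extending the existing kfree_rcu() per-CPU cache with N pre-allocated
array blocks, N defaulting to 1 and imagined here as a boot parameter.
Everything in this sketch (struct krc_block, krc_cache, krc_nr_blocks,
the block capacity) is hypothetical; the real cache lives in
kernel/rcu/tree.c.

    #include <linux/percpu.h>
    #include <linux/slab.h>

    #define KRC_PTRS_PER_BLOCK 500	/* illustrative capacity only */

    struct krc_block {
    	struct krc_block *next;
    	void *ptrs[KRC_PTRS_PER_BLOCK];
    };

    struct krc_cache {
    	struct krc_block *free_blocks;	/* pre-allocated blocks, LIFO */
    	int nr_free;
    };

    static DEFINE_PER_CPU(struct krc_cache, krc_cache);

    /* The tunable "N"; imagined as a boot parameter, default 1. */
    static int krc_nr_blocks = 1;

    static int __init krc_cache_prefill(void)
    {
    	int cpu, i;

    	for_each_possible_cpu(cpu) {
    		struct krc_cache *krc = &per_cpu(krc_cache, cpu);

    		for (i = 0; i < krc_nr_blocks; i++) {
    			struct krc_block *b = kmalloc(sizeof(*b), GFP_KERNEL);

    			if (!b)
    				return -ENOMEM;
    			b->next = krc->free_blocks;
    			krc->free_blocks = b;
    			krc->nr_free++;
    		}
    	}
    	return 0;
    }

    /* Pop a cached block; the caller is assumed to hold the per-CPU lock. */
    static struct krc_block *krc_block_get(struct krc_cache *krc)
    {
    	struct krc_block *b = krc->free_blocks;

    	if (b) {
    		krc->free_blocks = b->next;
    		krc->nr_free--;
    	}
    	return b;
    }

Because each CPU touches only its own cache, under the per-CPU lock that
kfree_rcu() already takes, there is no cross-CPU lock to contend on during
an OOM event, which is the property argued for above.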