References: <20190806212041.118146-1-joel@joelfernandes.org>
 <20190806235631.GU28441@linux.ibm.com>
 <20190807094504.GB169551@google.com>
 <20190807175215.GE28441@linux.ibm.com>
 <20190808095232.GA30401@X58A-UD3R>
 <20190808125607.GB261256@google.com>
In-Reply-To: <20190808125607.GB261256@google.com>
From: Byungchul Park
Date: Thu, 8 Aug 2019 23:23:17 +0900
Subject: Re: [PATCH RFC v1 1/2] rcu/tree: Add basic support for kfree_rcu batching
To: Joel Fernandes
Cc: Byungchul Park, "Paul E. McKenney", LKML, Rao Shoaib, kernel-team@android.com,
    kernel-team, Davidlohr Bueso, Josh Triplett, Lai Jiangshan,
    Mathieu Desnoyers, rcu, Steven Rostedt

On Thu, Aug 8, 2019 at 9:56 PM Joel Fernandes wrote:
>
> On Thu, Aug 08, 2019 at 06:52:32PM +0900, Byungchul Park wrote:
> > On Wed, Aug 07, 2019 at 10:52:15AM -0700, Paul E. McKenney wrote:
> > > > > On Tue, Aug 06, 2019 at 05:20:40PM -0400, Joel Fernandes (Google) wrote:
> > > [ . . . ]
> > > > > > +	for (; head; head = next) {
> > > > > > +		next = head->next;
> > > > > > +		head->next = NULL;
> > > > > > +		__call_rcu(head, head->func, -1, 1);
> > > > >
> > > > > We need at least a cond_resched() here. 200,000 times through this loop
> > > > > in a PREEMPT=n kernel might not always be pretty. Except that this is
> > > > > invoked directly from kfree_rcu() which might be invoked with interrupts
> > > > > disabled, which precludes calls to cond_resched(). So the realtime guys
> > > > > are not going to be at all happy with this loop.
> > > >
> > > > Ok, will add this here.
> > > >
> > > > > And this loop could be avoided entirely by having a third rcu_head list
> > > > > in the kfree_rcu_cpu structure. Yes, some of the batches would exceed
> > > > > KFREE_MAX_BATCH, but given that they are invoked from a workqueue, that
> > > > > should be OK, or at least more OK than queuing 200,000 callbacks with
> > > > > interrupts disabled. (If it turns out not to be OK, an array of rcu_head
> > > > > pointers can be used to reduce the probability of oversized batches.)
> > > > > This would also mean that the equality comparisons with KFREE_MAX_BATCH
> > > > > need to become greater-or-equal comparisons or some such.
> > > >
> > > > Yes, certainly we can do these kinds of improvements after this patch, and
> > > > then add more tests to validate the improvements.
> > >
> > > Out of pity for people bisecting, we need this fixed up front.
> > >
> > > My suggestion is to just allow ->head to grow until ->head_free becomes
> > > available. That way you are looping with interrupts and preemption
> > > enabled in workqueue context, which is much less damaging than doing so
> > > with interrupts disabled, and possibly even from hard-irq context.
> >
> > Agree.
> >
> > Or after introducing another limit like KFREE_MAX_BATCH_FORCE(>=
> > KFREE_MAX_BATCH):
> >
> > 1. Try to drain it on hitting KFREE_MAX_BATCH as it does.
> >
> >    On success: Same as now.
> >    On fail: let ->head grow and drain if possible, until reaching to
> >    KFREE_MAX_BATCH_FORCE.

I should've explained this in more detail. What I actually meant is:

   On fail: let ->head grow, and queue the rcu_work whenever
   ->head_free == NULL, until reaching _FORCE.

> > 3. On hitting KFREE_MAX_BATCH_FORCE, give up batching but handle one by
> >    one from now on to prevent too many pending requests from being
> >    queued for batching work.

This means:

   3. On hitting KFREE_MAX_BATCH_FORCE, give up batching requests added
      from now on and instead handle them one by one, to prevent too many
      pending requests from being queued. Of course, the requests already
      queued in ->head should still be handled by the rcu_work whenever
      that becomes possible, which can be checked by the monitor or by
      kfree_rcu() itself on each call.

> I also agree. But this _FORCE thing will still not solve the issue Paul is
> raising which is doing this loop possibly in irq disabled / hardirq context.

I added more explanation above. What I suggested is a way to avoid not only
heavy work within the irq-disabled region of a single kfree_rcu() but also
too many requests being queued onto ->head.

> We can't even cond_resched() here. In fact since _FORCE is larger, it will be
> even worse. Consider a real-time system with a lot of memory, in this case
> letting ->head grow large is Ok, but looping for long time in IRQ disabled
> would not be Ok.

Please check the explanation above.
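To make it more concrete, below is a rough and completely untested sketch of
what I mean. ->head, ->head_free, KFREE_MAX_BATCH and the rcu_work/monitor
are the ones in your patch; KFREE_MAX_BATCH_FORCE, the ->count field and
queue_kfree_rcu_work() are only placeholders I made up for illustration, and
krcp locking is omitted:

static void kfree_rcu_batch_request(struct kfree_rcu_cpu *krcp,
				    struct rcu_head *head, rcu_callback_t func)
{
	/*
	 * 3. Past _FORCE: too much is pending already, so give up batching
	 *    this request and pass it to __call_rcu() one by one, so that
	 *    ->head stops growing. Whatever already sits in ->head is still
	 *    freed later through the ->head_free/rcu_work path.
	 */
	if (krcp->count >= KFREE_MAX_BATCH_FORCE) {
		__call_rcu(head, func, -1, 1);
		return;
	}

	/* Otherwise batch the request as the patch does today. */
	head->func = func;
	head->next = krcp->head;
	krcp->head = head;
	krcp->count++;

	if (krcp->count < KFREE_MAX_BATCH)
		return;

	/*
	 * 1./2. On hitting KFREE_MAX_BATCH, try to drain: if ->head_free is
	 * free, move ->head over to it and queue the rcu_work. If it is
	 * still busy, do nothing here; ->head simply keeps growing and the
	 * monitor retries the drain, until we hit _FORCE above.
	 */
	if (!krcp->head_free)
		queue_kfree_rcu_work(krcp);	/* moves ->head onto ->head_free */
}

This way a single kfree_rcu() never loops over the whole batch with irqs
disabled, and the number of requests batched on ->head is bounded by
KFREE_MAX_BATCH_FORCE.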
> But I could make it something like:
> 1. Letting ->head grow if ->head_free busy
> 2. If head_free is busy, then just queue/requeue the monitor to try again.

This is exactly what Paul said. The problem with this is ->head can grow
too much. That's why I suggested the above one.

> This would even improve performance, but will still risk going out of memory.
>
> Thoughts?
>
> thanks,
>
> - Joel
>
> >
> > This way, we can avoid both:
> > 1. too many requests being queued and
> > 2. __call_rcu() bunch of requests within a single kfree_rcu().
> >
> > Thanks,
> > Byungchul
> >
> > >
> > > But please feel free to come up with a better solution!
> > >
> > > [ . . . ]

--
Thanks,
Byungchul