From: Uladzislau Rezki
Date: Wed, 18 Sep 2019 11:58:11 +0200
To: "Joel Fernandes (Google)"
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, kernel-team@lge.com,
    Byungchul Park, Davidlohr Bueso, Josh Triplett, Lai Jiangshan,
    Mathieu Desnoyers, max.byungchul.park@gmail.com, "Paul E. McKenney",
McKenney" , Rao Shoaib , rcu@vger.kernel.org, Steven Rostedt Subject: Re: [PATCH v4 1/2] rcu/tree: Add basic support for kfree_rcu() batching Message-ID: <20190918095811.GA25821@pc636> References: <20190814160411.58591-1-joel@joelfernandes.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190814160411.58591-1-joel@joelfernandes.org> User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org > Recently a discussion about stability and performance of a system > involving a high rate of kfree_rcu() calls surfaced on the list [1] > which led to another discussion how to prepare for this situation. > > This patch adds basic batching support for kfree_rcu(). It is "basic" > because we do none of the slab management, dynamic allocation, code > moving or any of the other things, some of which previous attempts did > [2]. These fancier improvements can be follow-up patches and there are > different ideas being discussed in those regards. This is an effort to > start simple, and build up from there. In the future, an extension to > use kfree_bulk and possibly per-slab batching could be done to further > improve performance due to cache-locality and slab-specific bulk free > optimizations. By using an array of pointers, the worker thread > processing the work would need to read lesser data since it does not > need to deal with large rcu_head(s) any longer. > > Torture tests follow in the next patch and show improvements of around > 5x reduction in number of grace periods on a 16 CPU system. More > details and test data are in that patch. > > There is an implication with rcu_barrier() with this patch. Since the > kfree_rcu() calls can be batched, and may not be handed yet to the RCU > machinery in fact, the monitor may not have even run yet to do the > queue_rcu_work(), there seems no easy way of implementing rcu_barrier() > to wait for those kfree_rcu()s that are already made. So this means a > kfree_rcu() followed by an rcu_barrier() does not imply that memory will > be freed once rcu_barrier() returns. > > Another implication is higher active memory usage (although not > run-away..) until the kfree_rcu() flooding ends, in comparison to > without batching. More details about this are in the second patch which > adds an rcuperf test. > > Finally, in the near future we will get rid of kfree_rcu() special casing > within RCU such as in rcu_do_batch and switch everything to just > batching. Currently we don't do that since timer subsystem is not yet up > and we cannot schedule the kfree_rcu() monitor as the timer subsystem's > lock are not initialized. That would also mean getting rid of > kfree_call_rcu_nobatch() entirely. > Hello, Joel. First of all thank you for improving it. I also noticed a high pressure on RCU-machinery during performing some vmalloc tests when kfree_rcu() flood occurred. Therefore i got rid of using kfree_rcu() there. I have just a small question related to workloads and performance evaluation. Are you aware of any specific workloads which benefit from it for example mobile area, etc? I am asking because i think about backporting of it and reuse it on our kernel. Thank you! -- Vlad Rezki