Date: Thu, 15 Apr 2021 15:40:10 -0400
From: Johannes Weiner
To: Waiman Long
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka, Roman Gushchin, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt,
	Muchun Song, Alex Shi, Chris Down, Yafang Shao, Wei Yang,
	Masayoshi Mizuma, Xing Zhengjun
Subject: Re: [PATCH v3 2/5] mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
References: <20210414012027.5352-1-longman@redhat.com>
	<20210414012027.5352-3-longman@redhat.com>
	<1c85e8f6-e8b9-33e1-e29b-81fbadff959f@redhat.com>
	<8a104fd5-64c7-3f41-981c-9cfa977c78a6@redhat.com>
In-Reply-To: <8a104fd5-64c7-3f41-981c-9cfa977c78a6@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 15, 2021 at 02:47:31PM -0400, Waiman Long wrote:
> On 4/15/21 2:10 PM, Johannes Weiner wrote:
> > On Thu, Apr 15, 2021 at 12:35:45PM -0400, Waiman Long wrote:
> > > On 4/15/21 12:30 PM, Johannes Weiner wrote:
> > > > On Tue, Apr 13, 2021 at 09:20:24PM -0400, Waiman Long wrote:
> > > > > In memcg_slab_free_hook()/pcpu_memcg_free_hook(), obj_cgroup_uncharge()
> > > > > is followed by mod_objcg_state()/mod_memcg_state(). Each of these
> > > > > function calls goes through a separate irq_save/irq_restore cycle. That
> > > > > is inefficient. Introduce a new function obj_cgroup_uncharge_mod_state()
> > > > > that combines them with a single irq_save/irq_restore cycle.
> > > > >
> > > > > @@ -3292,6 +3296,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
> > > > >  	refill_obj_stock(objcg, size);
> > > > >  }
> > > > > +void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
> > > > > +				   struct pglist_data *pgdat, int idx)
> > > >
> > > > The optimization makes sense.
> > > >
> > > > But please don't combine independent operations like this into a
> > > > single function. It makes for an unclear parameter list, it's a pain
> > > > in the behind to change the constituent operations later on, and it
> > > > has a habit of attracting more random bools over time. E.g. what if
> > > > the caller already has irqs disabled? What if it KNOWS that irqs are
> > > > enabled and it could use local_irq_disable() instead of save?
> > > >
> > > > Just provide an __obj_cgroup_uncharge() that assumes irqs are
> > > > disabled, combine with the existing __mod_memcg_lruvec_state(), and
> > > > bubble the irq handling up to those callsites which know better.
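
Purely for illustration, one possible shape of that at a callsite like
memcg_slab_free_hook() (objcg, page and s as in that function; the
__-prefixed uncharge variant and the exact parameters are assumptions,
not code from the series):

	struct lruvec *lruvec;
	unsigned long flags;

	rcu_read_lock();
	lruvec = mem_cgroup_lruvec(obj_cgroup_memcg(objcg),
				   page_pgdat(page));

	local_irq_save(flags);
	/* both updates share one irq-off section owned by the caller */
	__obj_cgroup_uncharge(objcg, obj_full_size(s));
	__mod_memcg_lruvec_state(lruvec, cache_vmstat_idx(s),
				 -obj_full_size(s));
	local_irq_restore(flags);
	rcu_read_unlock();

The helpers themselves stay context-oblivious; only the callsite,
which knows whether irqs are already off, pays for the save/restore.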
> > > That will also work. However, the reason I did that was because of patch 5
> > > in the series. I could put the get_obj_stock() and put_obj_stock() code in
> > > slab.h and allow them to be used directly in various places, but hiding it
> > > in one function is easier.
> >
> > Yeah it's more obvious after getting to patch 5.
> >
> > But with the irq disabling gone entirely, is there still an incentive
> > to combine the atomic section at all? Disabling preemption is pretty
> > cheap, so it wouldn't matter to just do it twice.
> >
> > I.e. couldn't the final sequence in slab code simply be
> >
> >	objcg_uncharge()
> >	mod_objcg_state()
> >
> > again, with each function disabling preemption (and in the rare case
> > irqs) as it sees fit?
> >
> > You lose the irqsoff batching in the cold path, but as you say, hit
> > rates are pretty good, and it doesn't seem worth complicating the code
> > for the cold path.
> >
> That does make sense, though a little bit of performance may be lost. I will
> try that out to see how it works out performance-wise. Thanks.

Even if we still end up doing it, it's great to have that cost
isolated, so we know how much extra code complexity corresponds to how
much performance gain. It seems the task/irq split could otherwise be
a pretty localized change with no API implications.
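
For reference, a rough sketch of what such a localized task/irq split
might look like around the per-cpu stock access -- the helper names and
the locking granularity here are assumptions, not the actual patches:

	/*
	 * Hypothetical helpers (names assumed, not from the series),
	 * assuming task and irq/softirq contexts use separate per-cpu
	 * stocks so that preempt-off is sufficient in task context.
	 */
	static inline unsigned long get_obj_stock_flags(void)
	{
		unsigned long flags = 0;

		if (likely(in_task()))
			preempt_disable();	/* cheap, common case */
		else
			local_irq_save(flags);	/* irq/softirq callers */
		return flags;
	}

	static inline void put_obj_stock_flags(unsigned long flags)
	{
		if (likely(in_task()))
			preempt_enable();
		else
			local_irq_restore(flags);
	}

Each of obj_cgroup_uncharge() and mod_objcg_state() would then open and
close its own such section around the stock access, and the slab
callsite stays a plain back-to-back sequence with no combined API.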