Date: Thu, 15 Apr 2021 14:10:30 -0400
From: Johannes Weiner
To: Waiman Long
Cc: Michal Hocko, Vladimir Davydov, Andrew Morton, Tejun Heo,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Vlastimil Babka, Roman Gushchin, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt,
    Muchun Song, Alex Shi, Chris Down, Yafang Shao, Wei Yang,
    Masayoshi Mizuma, Xing Zhengjun
Subject: Re: [PATCH v3 2/5] mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
Message-ID:
References: <20210414012027.5352-1-longman@redhat.com>
 <20210414012027.5352-3-longman@redhat.com>
 <1c85e8f6-e8b9-33e1-e29b-81fbadff959f@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1c85e8f6-e8b9-33e1-e29b-81fbadff959f@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 15, 2021 at 12:35:45PM -0400, Waiman Long wrote:
> On 4/15/21 12:30 PM, Johannes Weiner wrote:
> > On Tue, Apr 13, 2021 at 09:20:24PM -0400, Waiman Long wrote:
> > > In memcg_slab_free_hook()/pcpu_memcg_free_hook(), obj_cgroup_uncharge()
> > > is followed by mod_objcg_state()/mod_memcg_state(). Each of these
> > > function call goes through a separate irq_save/irq_restore cycle. That
> > > is inefficient. Introduce a new function obj_cgroup_uncharge_mod_state()
> > > that combines them with a single irq_save/irq_restore cycle.
> > >
> > > @@ -3292,6 +3296,25 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
> > >  	refill_obj_stock(objcg, size);
> > >  }
> > > +void obj_cgroup_uncharge_mod_state(struct obj_cgroup *objcg, size_t size,
> > > +				   struct pglist_data *pgdat, int idx)
> >
> > The optimization makes sense.
> >
> > But please don't combine independent operations like this into a
> > single function. It makes for an unclear parameter list, it's a pain
> > in the behind to change the constituent operations later on, and it
> > has a habit of attracting more random bools over time. E.g. what if
> > the caller already has irqs disabled? What if it KNOWS that irqs are
> > enabled and it could use local_irq_disable() instead of save?
> >
> > Just provide an __obj_cgroup_uncharge() that assumes irqs are
> > disabled, combine with the existing __mod_memcg_lruvec_state(), and
> > bubble the irq handling up to those callsites which know better.
>
> That will also work. However, the reason I did that was because of
> patch 5 in the series. I could put the get_obj_stock() and
> put_obj_stock() code in slab.h and allowed them to be used directly in
> various places, but hiding in one function is easier.

Yeah, it's more obvious after getting to patch 5.

But with the irq disabling gone entirely, is there still an incentive
to combine the atomic section at all? Disabling preemption is pretty
cheap, so it wouldn't matter to just do it twice.

I.e. couldn't the final sequence in the slab code simply be

	objcg_uncharge()
	mod_objcg_state()

again, with each function disabling preemption (and in the rare case,
irqs) as it sees fit? You lose the irqs-off batching in the cold path,
but as you say, hit rates are pretty good, and it doesn't seem worth
complicating the code for the cold path.
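
To put that sequence into concrete terms, here is a completely
untested sketch of the shape I mean for the slab free hook. The
wrapper name is made up, and the signatures are only approximations
of what this series adds; it's for illustration, not a proposal for
the final code:

	/*
	 * Hypothetical per-object uncharge path: two independent
	 * calls, each responsible for its own protection.
	 */
	static void uncharge_slab_obj(struct kmem_cache *s, void *p,
				      struct obj_cgroup *objcg)
	{
		struct pglist_data *pgdat = page_pgdat(virt_to_page(p));

		/* refill the per-cpu objcg stock; protects itself */
		obj_cgroup_uncharge(objcg, obj_full_size(s));

		/* update the vmstat counters; likewise protects itself */
		mod_objcg_state(objcg, pgdat, cache_vmstat_idx(s),
				-obj_full_size(s));
	}

i.e. each call disables preemption (or irqs when it has to) internally,
and the caller doesn't need to know or care.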