Date: Tue, 7 Jul 2020 06:59:04 +0000 (UTC)
From: Christopher Lameter
To: Xunlei Pang
Cc: Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
In-Reply-To: <1593678728-128358-1-git-send-email-xlpang@linux.alibaba.com>

On Thu, 2 Jul 2020, Xunlei Pang wrote:

> This patch introduces two counters to maintain the actual number
> of partial objects dynamically instead of iterating the partial
> page lists with list_lock held.
>
> New counters of kmem_cache_node are: pfree_objects, ptotal_objects.
> The main operations are under list_lock in slow path, its performance
> impact is minimal.

If at all, then these counters need to be under CONFIG_SLUB_DEBUG.

> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -616,6 +616,8 @@ struct kmem_cache_node {
>  #ifdef CONFIG_SLUB
>  	unsigned long nr_partial;
>  	struct list_head partial;
> +	atomic_long_t pfree_objects; /* partial free objects */
> +	atomic_long_t ptotal_objects; /* partial total objects */

Please put these in the CONFIG_SLUB_DEBUG section. Without
CONFIG_SLUB_DEBUG we need to build with a minimal memory footprint.

>  #ifdef CONFIG_SLUB_DEBUG
>  	atomic_long_t nr_slabs;
>  	atomic_long_t total_objects;
> diff --git a/mm/slub.c b/mm/slub.c

Also, this looks to be quite heavy on the cache and on execution time.
Note that the list_lock could be taken frequently in the
performance-sensitive case of freeing an object that is not in the
partial lists.