Date: Mon, 4 May 2020 14:19:51 -0700 (PDT)
From: David Rientjes
To: Konstantin Khlebnikov
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
    Christoph Lameter, Pekka Enberg, Joonsoo Kim
Subject: Re: [PATCH] slub: limit count of partial slabs scanned to gather statistics
In-Reply-To: <158860845968.33385.4165926113074799048.stgit@buzz>
References: <158860845968.33385.4165926113074799048.stgit@buzz>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
X-Mailing-List: linux-kernel@vger.kernel.org
On Mon, 4 May 2020, Konstantin Khlebnikov wrote:

> To get an exact count of free and used objects slub has to scan the list
> of partial slabs. This may take a long time. Scanning holds the spinlock
> and blocks allocations which move partial slabs to per-cpu lists and back.
>
> Example found in the wild:
>
> # cat /sys/kernel/slab/dentry/partial
> 14478538 N0=7329569 N1=7148969
> # time cat /sys/kernel/slab/dentry/objects
> 286225471 N0=136967768 N1=149257703
>
> real    0m1.722s
> user    0m0.001s
> sys     0m1.721s
>
> The same problem in slab was addressed in commit f728b0a5d72a ("mm, slab:
> faster active and free stats") by adding more kmem cache statistics.
> For slub the same approach would require an atomic op on the fast path
> when an object is freed.
>
> Let's simply limit the count of scanned slabs and print a warning.
> The limit is set in /sys/module/slub/parameters/max_partial_to_count.
> The default is 10000, which should be enough for most sane cases.
>
> Return a linear approximation if the list of partials is longer than the
> limit. Nobody should notice the difference.
>

Hi Konstantin,

Do you only exhibit this on slub for SO_ALL|SO_OBJECTS? I notice the
timing in the changelog is only looking at "objects" and not "partial".

If so, it seems this is also a problem for get_slabinfo() since it also
uses the count_free() callback for count_partial().

Concern would be that the kernel has now drastically changed a statistic
that it exports to userspace. There was some discussion about this back
in 2016[*] and one idea was that slabinfo would truncate its scanning and
append a '+' to the end of the value to indicate it exceeds the max, i.e.
10000+. I think that '+' would itself have caused problems for userspace
processes parsing the value.

I think the patch is too far reaching, however, since it impacts all
count_partial() counting and not only the case cited in the changelog.
Are there examples for things other than the count_free() callback?

 [*] https://lore.kernel.org/patchwork/patch/708427/

> Signed-off-by: Konstantin Khlebnikov
> ---
>  mm/slub.c |   15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 9bf44955c4f1..86a366f7acb6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2407,16 +2407,29 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
>  #endif /* CONFIG_SLUB_DEBUG */
>
>  #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
> +
> +static unsigned long max_partial_to_count __read_mostly = 10000;
> +module_param(max_partial_to_count, ulong, 0644);
> +
>  static unsigned long count_partial(struct kmem_cache_node *n,
>                                     int (*get_count)(struct page *))
>  {
> +       unsigned long counted = 0;
>         unsigned long flags;
>         unsigned long x = 0;
>         struct page *page;
>
>         spin_lock_irqsave(&n->list_lock, flags);
> -       list_for_each_entry(page, &n->partial, slab_list)
> +       list_for_each_entry(page, &n->partial, slab_list) {
>                 x += get_count(page);
> +
> +               if (++counted > max_partial_to_count) {
> +                       pr_warn_once("SLUB: too many partial slabs to count all objects, increase max_partial_to_count.\n");
> +                       /* Approximate total count of objects */
> +                       x = mult_frac(x, n->nr_partial, counted);
> +                       break;
> +               }
> +       }
>         spin_unlock_irqrestore(&n->list_lock, flags);
>         return x;
> }
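
For reference, the approximation in the patch is a straight linear scale:
stop after `counted` slabs and multiply the running sum by
nr_partial / counted. Below is a standalone userspace sketch of that
extrapolation; mult_frac() reimplements the kernel macro's
overflow-avoiding multiply-then-divide, and the per-slab free count is a
made-up constant just for illustration:

#include <stdio.h>

/* overflow-avoiding (x * numer) / denom, like the kernel's mult_frac() */
static unsigned long mult_frac(unsigned long x, unsigned long numer,
                               unsigned long denom)
{
        unsigned long quot = x / denom;
        unsigned long rem  = x % denom;

        return quot * numer + rem * numer / denom;
}

int main(void)
{
        unsigned long nr_partial = 14478538;    /* from the changelog */
        unsigned long limit = 10000;            /* max_partial_to_count */
        unsigned long x = 0, counted = 0;

        /* pretend every slab on the partial list has ~9 free objects */
        while (counted < nr_partial) {
                x += 9;
                if (++counted > limit) {
                        /* scan truncated: scale the sum to the full list */
                        x = mult_frac(x, nr_partial, counted);
                        break;
                }
        }
        printf("approximated free objects: %lu\n", x);
        return 0;
}

With a uniform per-slab count the extrapolation is exact (it prints
9 * nr_partial); skew between the scanned head of the list and the rest
is where the reported value would drift from the true one.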
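To make the "too far reaching" point concrete: count_partial() is a single
helper parameterized by a callback, so a limit inside it truncates every
statistic derived from the partial list, not only the "objects" read shown
in the changelog. Here is a reduced, compile-and-run mock: the one-line
callback bodies mirror mm/slub.c, but struct page is cut down to the two
fields they read, purely for illustration:

#include <stdio.h>

/* reduced mock of the two struct page fields the callbacks read */
struct page {
        unsigned int objects;   /* object capacity of the slab */
        unsigned int inuse;     /* currently allocated objects */
};

/* the three callbacks count_partial() is given in mm/slub.c */
static int count_free(struct page *page)  { return page->objects - page->inuse; }
static int count_inuse(struct page *page) { return page->inuse; }
static int count_total(struct page *page) { return page->objects; }

int main(void)
{
        struct page slab = { .objects = 21, .inuse = 13 };

        /*
         * get_slabinfo() and the sysfs "objects" read go through
         * count_free(); other sysfs reads use count_inuse() and
         * count_total().  All of them share the same count_partial()
         * loop, so the proposed scan limit clips every one of them.
         */
        printf("free=%d inuse=%d total=%d\n",
               count_free(&slab), count_inuse(&slab), count_total(&slab));
        return 0;
}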