Subject: Re: [PATCH] slub: limit count of partial slabs scanned to gather statistics
To: Konstantin Khlebnikov, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, Wen Yang
References: <158860845968.33385.4165926113074799048.stgit@buzz>
	<09e66344-4d30-9a67-24b8-14a910709157@suse.cz>
From: Vlastimil Babka
Message-ID: <9579b38f-87a2-269a-7598-f857394bc0a9@suse.cz>
Date: Thu, 7 May 2020 16:12:26 +0200

On 5/7/20 7:25 AM, Konstantin Khlebnikov wrote:
> On 06/05/2020 14.56, Vlastimil Babka wrote:
>> On 5/4/20 6:07 PM, Konstantin Khlebnikov wrote:
>>> To get an exact count of free and used objects SLUB has to scan the
>>> list of partial slabs. This may take a long time. The scan holds a
>>> spinlock and blocks allocations which move partial slabs to per-cpu
>>> lists and back.
>>>
>>> Example found in the wild:
>>>
>>> # cat /sys/kernel/slab/dentry/partial
>>> 14478538 N0=7329569 N1=7148969
>>> # time cat /sys/kernel/slab/dentry/objects
>>> 286225471 N0=136967768 N1=149257703
>>>
>>> real	0m1.722s
>>> user	0m0.001s
>>> sys	0m1.721s
>>>
>>> The same problem in SLAB was addressed in commit f728b0a5d72a ("mm, slab:
>>> faster active and free stats") by adding more kmem cache statistics.
>>> For SLUB the same approach would require an atomic op on the fast path
>>> when an object is freed.
>>
>> In general yeah, but are you sure about this one? AFAICS this is about
>> pages on the n->partial list, where manipulations happen under
>> n->list_lock and shouldn't be fast path. It should be feasible to add a
>> counter under the same lock, so it wouldn't even need to be atomic?
>
> SLUB allocates objects from prepared per-cpu slabs; their free objects
> could be subtracted from the count in advance, under this lock, when a
> slab is moved off this list.
>
> But on the freeing path an object might belong to any slab, including
> the global partial lists.

Right, freeing can indeed modify a slab on the global partial list without
taking the lock. Nevermind then.
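
For reference, the slow path under discussion is the partial-list walk
behind the sysfs counters. A simplified sketch of what count_partial() in
mm/slub.c does (not verbatim; exact helper and list-member names vary
between kernel versions):

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct page *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct page *page;

	/*
	 * Walk every slab page on the node's partial list under
	 * n->list_lock and sum a per-page quantity, e.g. free objects
	 * as page->objects - page->inuse. With ~14 million partial
	 * slabs this is what keeps the lock held for over a second.
	 */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);

	return x;
}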
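And for completeness, the counter idea above would look roughly like the
hypothetical sketch below (field and helper names invented, untested). It
fails for exactly the reason stated: the free fast path can return an
object to a slab that already sits on ->partial via a cmpxchg on the page
freelist, without ever taking ->list_lock, so the counter would drift.

/* Hypothetical sketch only, not a proposal. */
struct kmem_cache_node {
	spinlock_t list_lock;
	unsigned long nr_partial;
	struct list_head partial;
	unsigned long partial_free_objs;	/* invented field */
	/* ... */
};

static void __add_partial_counted(struct kmem_cache_node *n,
				  struct page *page)
{
	/* Only called with list_lock held, so a plain add suffices... */
	lockdep_assert_held(&n->list_lock);
	list_add(&page->slab_list, &n->partial);
	n->nr_partial++;
	n->partial_free_objs += page->objects - page->inuse;
}

/*
 * ...but __slab_free() can free into a page that stays on ->partial
 * using only a cmpxchg on the page's freelist/counters, never touching
 * ->list_lock, so nothing keeps partial_free_objs accurate.
 */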