Subject: Re: [PATCH] mm/slub: Detach node lock from counting free objects
To: Roman Gushchin
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Xunlei Pang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20200201031502.92218-1-wenyang@linux.alibaba.com>
    <20200212145247.bf89431272038de53dd9d975@linux-foundation.org>
    <20200218205312.GA3156@carbon>
From: Wen Yang <wenyang@linux.alibaba.com>
Date: Thu, 20 Feb 2020 21:53:26 +0800

On 2020/2/19 4:53 AM, Roman Gushchin wrote:
> On Sun, Feb 16, 2020 at 12:15:54PM +0800, Wen Yang wrote:
>>
>> On 2020/2/13 6:52 AM, Andrew Morton wrote:
>>> On Sat, 1 Feb 2020 11:15:02 +0800 Wen Yang wrote:
>>>
>>>> The lock protecting the node partial list is taken when counting the free
>>>> objects resident in that list. This introduces lock contention when pages
>>>> are moved between the CPU and node partial lists in the allocation path on
>>>> another CPU. So reading "/proc/slabinfo" can possibly block slab allocation
>>>> on another CPU for a while, 200ms in extreme cases. If the slab object is
>>>> carrying a network packet targeting a far-end disk array, this causes block
>>>> IO jitter.
>>>>
>>>> This fixes the block IO jitter issue by caching the total inuse objects in
>>>> the node in advance. The value is retrieved without taking the node partial
>>>> list lock on reading "/proc/slabinfo".
>>>>
>>>> ...
>>>>
>>>> @@ -1768,7 +1774,9 @@ static void free_slab(struct kmem_cache *s, struct page *page)
>>>>  static void discard_slab(struct kmem_cache *s, struct page *page)
>>>>  {
>>>> -	dec_slabs_node(s, page_to_nid(page), page->objects);
>>>> +	int inuse = page->objects;
>>>> +
>>>> +	dec_slabs_node(s, page_to_nid(page), page->objects, inuse);
>>>
>>> Is this right?  dec_slabs_node(..., page->objects, page->objects)?
>>>
>>> If no, we could simply pass the page* to inc_slabs_node/dec_slabs_node
>>> and save a function argument.
>>>
>>> If yes then why?
>>>
>>
>> Thanks for your comments.
>> We are happy to improve this patch based on your suggestions.
>>
>> When the user reads /proc/slabinfo, in order to obtain the active_objs
>> information, the kernel traverses all slabs and executes the following
>> code snippet:
>>
>> static unsigned long count_partial(struct kmem_cache_node *n,
>>                                    int (*get_count)(struct page *))
>> {
>>         unsigned long flags;
>>         unsigned long x = 0;
>>         struct page *page;
>>
>>         spin_lock_irqsave(&n->list_lock, flags);
>>         list_for_each_entry(page, &n->partial, slab_list)
>>                 x += get_count(page);
>>         spin_unlock_irqrestore(&n->list_lock, flags);
>>         return x;
>> }
>>
>> This may cause performance issues.
>>
>> Christoph suggested "you could cache the value in the userspace application?
>> Why is this value read continually?", but reading /proc/slabinfo is
>> initiated by the user program. As a cloud provider, we cannot control user
>> behavior. If a user program inadvertently executes cat /proc/slabinfo, it
>> may affect other user programs.
>>
>> As Christoph said: "The count is not needed for any operations. Just for the
>> slabinfo output. The value has no operational value for the allocator
>> itself. So why use extra logic to track it in potentially performance
>> critical paths?"
>>
>> Given that, could we show an approximate value of active_objs in
>> /proc/slabinfo?
>>
>> This is based on the following observations:
>> - in discard_slab(), page->inuse is equal to page->total_objects;
>> - in allocate_slab(), page->inuse is also equal to page->total_objects
>>   (with one exception: for kmem_cache_node, page->inuse equals 1);
>> - page->inuse only changes continuously while objects are being constantly
>>   allocated or released (this is the performance-critical path emphasized
>>   by Christoph).
>>
>> When users query the global slabinfo information, we may use total_objects
>> to approximate active_objs.
>
> Well, from one point of view, it makes no sense, because the ratio between
> these two numbers is very meaningful: it's the slab utilization rate.
>
> On the other hand, with per-cpu partial lists enabled, active_objs has
> nothing to do with reality anyway, so I agree with you: calling
> count_partial() is almost useless.
>
> That said, I wonder if the right thing to do is something like the patch below?
>
> Thanks!
>
> Roman
>
> --
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 1d644143f93e..ba0505e75ecc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2411,14 +2411,16 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
>  static unsigned long count_partial(struct kmem_cache_node *n,
>  					int (*get_count)(struct page *))
>  {
> -	unsigned long flags;
>  	unsigned long x = 0;
> +#ifdef CONFIG_SLUB_CPU_PARTIAL
> +	unsigned long flags;
>  	struct page *page;
>
>  	spin_lock_irqsave(&n->list_lock, flags);
>  	list_for_each_entry(page, &n->partial, slab_list)
>  		x += get_count(page);
>  	spin_unlock_irqrestore(&n->list_lock, flags);
> +#endif
>  	return x;
>  }
>  #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
>

Hi Roman,

Thanks for your comments.

In server scenarios, SLUB_CPU_PARTIAL is turned on by default and can
improve the performance of cloud servers, as its Kconfig entry describes:

	default y
	depends on SLUB && SMP
	bool "SLUB per cpu partial cache"
	help
	  Per cpu partial caches accelerate objects allocation and freeing
	  that is local to a processor at the price of more indeterminism
	  in the latency of the free. On overflow these caches will be cleared
	  which requires the taking of locks that may cause latency spikes.
	  Typically one would choose no for a realtime system.

Best wishes,
Wen
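
For reference, the caching approach the original patch describes ("caching the
total inuse objects in the node in advance") can be sketched roughly as below.
This is a minimal illustration only, not the actual patch: the
kmem_cache_node_sketch struct, the total_inuse field and the *_sketch helpers
are hypothetical names chosen for this example and do not match the
identifiers used in mm/slub.c.

#include <linux/atomic.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Hypothetical per-node bookkeeping: alongside the existing counters,
 * keep a cached total of in-use objects that is maintained on the
 * allocation/free paths.
 */
struct kmem_cache_node_sketch {
	spinlock_t list_lock;
	unsigned long nr_partial;
	struct list_head partial;
	atomic_long_t nr_slabs;
	atomic_long_t total_objects;
	atomic_long_t total_inuse;	/* cached count of in-use objects */
};

/* Called when a slab page is added to a node (allocation path). */
static inline void inc_slabs_node_sketch(struct kmem_cache_node_sketch *n,
					 int objects, int inuse)
{
	atomic_long_inc(&n->nr_slabs);
	atomic_long_add(objects, &n->total_objects);
	atomic_long_add(inuse, &n->total_inuse);
}

/* Called when a slab page is discarded from a node (free path). */
static inline void dec_slabs_node_sketch(struct kmem_cache_node_sketch *n,
					 int objects, int inuse)
{
	atomic_long_dec(&n->nr_slabs);
	atomic_long_sub(objects, &n->total_objects);
	atomic_long_sub(inuse, &n->total_inuse);
}

/*
 * The slabinfo path reads the cached value instead of walking n->partial
 * under n->list_lock, which avoids the contention described above.
 */
static inline unsigned long node_nr_inuse_sketch(struct kmem_cache_node_sketch *n)
{
	return atomic_long_read(&n->total_inuse);
}

The trade-off Christoph raises still applies to such a scheme: the cached
counter is touched on the allocation/free fast paths purely to serve slabinfo
reporting, which is exactly the kind of extra logic he objects to.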