Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
From: Xunlei Pang <xlpang@linux.alibaba.com>
To: Pekka Enberg
Cc: Christoph Lameter, Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin, linux-mm@kvack.org, LKML
Date: Fri, 31 Jul 2020 10:57:38 +0800
On 2020/7/7 11:23 PM, Pekka Enberg wrote:
> Hi!
>
> (Sorry for the delay, I missed your response.)
>
> On Fri, Jul 3, 2020 at 12:38 PM xunlei wrote:
>>
>> On 2020/7/2 7:59 PM, Pekka Enberg wrote:
>>> On Thu, Jul 2, 2020 at 11:32 AM Xunlei Pang wrote:
>>>> The node list_lock in count_partial() spends a long time iterating
>>>> when there is a large number of partial page lists, which can cause
>>>> a thundering herd effect on the list_lock contention, e.g. it causes
>>>> business response-time jitters when accessing "/proc/slabinfo" in
>>>> our production environments.
>>>
>>> Would you have any numbers to share to quantify this jitter? I have no
>>
>> We have HSF RT (High-speed Service Framework Response-Time) monitors;
>> the RT figures fluctuated randomly, so we deployed a tool that detects
>> "irq off" and "preempt off" periods and dumps the culprit's calltrace.
>> It captured the list_lock costing up to 100ms with irqs off, triggered
>> by "ss"; this also caused network timeouts.
>
> Thanks for the follow up. This sounds like a good enough motivation
> for this patch, but please include it in the changelog.
>
>>> objections to this approach, but I think the original design
>>> deliberately made reading "/proc/slabinfo" more expensive to avoid
>>> atomic operations in the allocation/deallocation paths. It would be
>>> good to understand what the gain of this approach is before we switch
>>> to it. Maybe even run some slab-related benchmark (not sure if there's
>>> something better than hackbench these days) to see if the overhead of
>>> this approach shows up.
>>
>> I thought about that before, but most of the atomic operations are
>> serialized by the list_lock. Another possible way is to hold list_lock
>> in __slab_free(); then these two counters can be changed from atomic
>> to long.
>>
>> I also have no idea what the standard SLUB benchmark for regression
>> testing is; any specific suggestion?
>
> I don't know what people use these days. When I did benchmarking in
> the past, hackbench and netperf were known to be slab-allocation
> intensive macro-benchmarks. Christoph also had some SLUB
> micro-benchmarks, but I don't think we ever merged them into the tree.

I tested hackbench on a 24-CPU machine; here are the results of
"hackbench 20 thread 1000":

== original (without any patch)
Time: 53.793
Time: 54.305
Time: 54.073

== with my patches 1~2
Time: 54.036
Time: 53.840
Time: 54.066
Time: 53.449

== with my patches 1~2, plus using a percpu partial free objects counter
Time: 53.303
Time: 52.994
Time: 53.218
Time: 53.268
Time: 53.739
Time: 53.072

The results show no performance regression; oddly, the figures even get
a little better when using the percpu counter.

Thanks,
Xunlei
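
For reference, here is a rough sketch of the percpu partial free objects
counter idea tested above. This is only an illustration of the concept,
not the actual patch: the counter name (partial_free_objs), the helper
functions, and the exact hook points in the allocation/free paths are
assumptions.

#include <linux/percpu.h>
#include <linux/cpumask.h>

/* Per-CPU count of free objects sitting on the per-node partial lists. */
static DEFINE_PER_CPU(long, partial_free_objs);

/*
 * Called from the allocation/free paths whenever free objects appear on
 * (delta > 0) or are taken from (delta < 0) a partial slab.
 */
static inline void partial_free_objs_add(long delta)
{
	this_cpu_add(partial_free_objs, delta);
}

/*
 * Would replace the list walk in count_partial() for the free-object
 * count: summing the per-CPU values needs no list_lock, so reading
 * /proc/slabinfo no longer stalls the allocation/free paths.
 */
static unsigned long partial_free_objs_count(void)
{
	long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(partial_free_objs, cpu);

	/* Per-CPU deltas can transiently drive the global sum negative. */
	return sum > 0 ? sum : 0;
}

The trade-off Pekka raised still applies: the update helper adds a cheap,
cache-local write to the hot paths in exchange for turning the
"/proc/slabinfo" read side into an O(nr_cpus) sum instead of a full
partial-list walk under list_lock.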