Subject: Re: [PATCH 00/14] per memcg lru_lock
To: Daniel Jordan, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Mel Gorman, Tejun Heo, Michal Hocko
References: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com> <6ba1ffb0-fce0-c590-c373-7cbc516dbebd@oracle.com>
From: Alex Shi <alex.shi@linux.alibaba.com>
Message-ID: <348495d2-b558-fdfd-a411-89c75d4a9c78@linux.alibaba.com>
In-Reply-To: <6ba1ffb0-fce0-c590-c373-7cbc516dbebd@oracle.com>
Date: Thu, 22 Aug 2019 19:56:59 +0800

On 2019/8/22 2:00 AM, Daniel Jordan wrote:
>
> This is system-wide right, not per container?  Even per container,
> 89 usec isn't much contention over 20 seconds.

Yes, perf lock shows the host info.

> You may want to give this a try:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>
> It's also synthetic but it stresses lru_lock more than just anon
> alloc/free.  It hits the page activate path, which is where we see
> this lock in our database, and if enough memory is configured
> lru_lock also gets stressed during reclaim, similar to [1].

Thanks for sharing. This patchset cannot help the [1] case, since for
now it only relieves the per-container lock contention. Yes, the
readtwice case could be more sensitive to these lru_lock changes in
containers; I may try it in a container with some tuning. But anyway,
aim9 is also pretty good at showing the problem and the solution. :)

> It'd be better though, as Michal suggests, to use the real workload
> that's causing problems.  Where are you seeing contention?

We repeatedly create and delete a lot of different containers
according to server load/usage, so a normal workload causes lots of
page allocation and removal. aim9 reflects part of these scenarios. I
don't know about the DB scenario yet.

>> With this patch series, lruvec->lru_lock shows no contentions:
>>
>>          &(&lruvec->lru_l...          8          0          0          0          0          0
>>
>> and aim9 page_test/brk_test performance increased 5%~50%.
>
> Where does the 50% number come in?  The numbers below seem to only
> show ~4% boost.

The ~50% comes from the Stddev/CoeffVar rows, which improve by about
50%, i.e. run-to-run variance is roughly halved. One container's
mmtests results are as follows:

Stddev    page_test     245.15 (   0.00%)     189.29 (  22.79%)
Stddev    brk_test     1258.60 (   0.00%)     629.16 (  50.01%)
CoeffVar  page_test       0.71 (   0.00%)       0.53 (  26.05%)
CoeffVar  brk_test        1.32 (   0.00%)       0.64 (  51.14%)
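
As an aside for anyone reading along: below is a tiny userspace
analogy of the locking change under discussion, assuming one lruvec
per container. It is only a sketch that borrows the patchset's names
(lruvec, lru_lock); it is not the actual kernel code, and the helper
names and file name are made up for illustration.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NCONTAINERS	4
#define OPS		(1L << 22)

/* Userspace stand-in for a memcg/node lruvec. */
struct lruvec {
	pthread_mutex_t	lru_lock;	/* per-container lock ("split" mode) */
	long		nr_pages;	/* stand-in for the LRU lists */
};

static struct lruvec lruvecs[NCONTAINERS];
/* Stand-in for the old node-wide lock ("shared" mode). */
static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;
static int use_split;

static void *worker(void *arg)
{
	struct lruvec *lv = arg;
	pthread_mutex_t *lock = use_split ? &lv->lru_lock : &node_lock;

	/* Each iteration adds then removes a "page" under the LRU lock. */
	for (long i = 0; i < OPS; i++) {
		pthread_mutex_lock(lock);
		lv->nr_pages++;
		lv->nr_pages--;
		pthread_mutex_unlock(lock);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tids[NCONTAINERS];

	use_split = argc > 1 && !strcmp(argv[1], "split");
	for (int i = 0; i < NCONTAINERS; i++) {
		pthread_mutex_init(&lruvecs[i].lru_lock, NULL);
		pthread_create(&tids[i], NULL, worker, &lruvecs[i]);
	}
	for (int i = 0; i < NCONTAINERS; i++)
		pthread_join(tids[i], NULL);
	printf("%s locking done\n", use_split ? "per-container" : "node-wide");
	return 0;
}

Building with "cc -O2 -pthread lru_demo.c" and comparing "time ./a.out
shared" against "time ./a.out split" should show the single shared
lock costing noticeably more wall time: the same kind of contention
that perf lock was measuring on the host above, and that the patchset
moves from the node into each memcg's lruvec.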