From: Alexander Duyck
Date: Thu, 16 Jul 2020 07:11:12 -0700
Subject: Re: [PATCH v16 00/22] per memcg lru_lock
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov, Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot, linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang, "Kirill A. Shutemov"
In-Reply-To: <1594429136-20002-1-git-send-email-alex.shi@linux.alibaba.com>
List-ID: linux-kernel@vger.kernel.org

On Fri, Jul 10, 2020 at 5:59 PM Alex Shi wrote:
>
> This new version is based on v5.8-rc4, and adds 2 more patches:
>   'mm/thp: remove code path which never got into'
>   'mm/thp: add tail pages into lru anyway in split_huge_page()'
> and modifies 'mm/mlock: reorder isolation sequence during munlock'.
>
> The current lru_lock is per node (pgdat->lru_lock) and guards the lru
> lists, but the lru lists themselves were moved into memcg long ago.
> Keeping a per-node lru_lock is clearly unscalable: pages from all the
> memcgs on a node have to compete with each other for a single lock.
> This patchset tries to use a per-lruvec/memcg lru_lock in place of the
> per-node lru lock to guard the lru lists, making them scalable across
> memcgs and gaining performance.
>
> Currently lru_lock still guards both the lru list and the page's lru
> bit, and that is fine. But if we want to take the page's specific
> lruvec lock, we need to pin down the page's lruvec/memcg while
> locking: just taking the lruvec lock first can be undermined by a
> concurrent memcg charge/migration of the page. To fix this, we hoist
> out the clearing of the page's lru bit and use it as the pin-down
> action that blocks memcg changes. That is the reason for the new
> atomic helper TestClearPageLRU. Isolating a page now requires both
> actions: TestClearPageLRU and holding the lru_lock.
>
> The typical user of this is isolate_migratepages_block() in
> compaction.c: we have to clear the lru bit before taking the lru
> lock, which serializes page isolation against memcg page
> charge/migration, since those can change the page's lruvec and hence
> which lru_lock applies.
>
> The above solution was suggested by Johannes Weiner and builds on his
> new memcg charge path, which led to this patchset. (Hugh Dickins
> tested and contributed much code, from the compaction fix to general
> code polish -- thanks a lot!)
>
> The patchset includes 3 parts:
> 1. some code cleanup and minimal optimization as preparation;
> 2. use TestClearPageLRU as the precondition for page isolation;
> 3. replace the per-node lru_lock with a per-memcg, per-node lru_lock.
>
> Following Daniel Jordan's suggestion, I ran 208 'dd' tasks in 104
> containers on a 2-socket * 26-core * HT box with a modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
> With this patchset, the readtwice performance increased by about 80%
> with concurrent containers.
>
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up
> this idea 8 years ago, and to the others who gave comments as well:
> Daniel Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox etc.
>
> Thanks for testing support from Intel 0day and Rong Chen, Fengguang
> Wu, and Yun Wang. Hugh Dickins also shared his kbuild-swap case.
> Thanks!

Hi Alex,

I think I am seeing a regression with this patch set when I run the
will-it-scale/page_fault3 test. Specifically, the processes result
drops from 56371083 to 43127382 when I apply these patches. I haven't
had a chance to bisect and figure out what is causing it, and wanted to
let you know in case you are aware of anything specific that may be
causing this.

Thanks.

- Alex