From: Alexander Duyck
Date: Tue, 4 Aug 2020 07:29:58 -0700
Subject: Re: [PATCH v17 21/21] mm/lru: revise the comments of lru_lock
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
    Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
    linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
    "Kirill A. Shutemov", Rong Chen, Andrey Ryabinin, Jann Horn
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
            <1595681998-19193-22-git-send-email-alex.shi@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Shutemov" , Rong Chen , Andrey Ryabinin , Jann Horn Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Aug 4, 2020 at 3:04 AM Alex Shi wrote: > > > > =E5=9C=A8 2020/8/4 =E4=B8=8A=E5=8D=886:37, Alexander Duyck =E5=86=99=E9= =81=93: > >> > >> shrink_inactive_list() also diverts any unevictable pages that it fin= ds on the > >> -inactive lists to the appropriate zone's unevictable list. > >> +inactive lists to the appropriate node's unevictable list. > >> > >> shrink_inactive_list() should only see SHM_LOCK'd pages that became S= HM_LOCK'd > >> after shrink_active_list() had moved them to the inactive list, or pa= ges mapped > > Same here. > > lruvec is used per memcg per node actually, and it fallback to node if me= mcg disabled. > So the comments are still right. > > And most of changes just fix from zone->lru_lock to pgdat->lru_lock chang= e. Actually in my mind one thing that might work better would be to explain what the lruvec is and where it resides. Then replace zone with lruvec since that is really where the unevictable list resides. Then it would be correct for both the memcg and pgdat case. > > > >> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h > >> index 64ede5f150dc..44738cdb5a55 100644 > >> --- a/include/linux/mm_types.h > >> +++ b/include/linux/mm_types.h > >> @@ -78,7 +78,7 @@ struct page { > >> struct { /* Page cache and anonymous pages */ > >> /** > >> * @lru: Pageout list, eg. active_list protect= ed by > >> - * pgdat->lru_lock. Sometimes used as a gener= ic list > >> + * lruvec->lru_lock. Sometimes used as a gene= ric list > >> * by the page owner. > >> */ > >> struct list_head lru; > >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h > >> index 8af956aa13cf..c92289a4e14d 100644 > >> --- a/include/linux/mmzone.h > >> +++ b/include/linux/mmzone.h > >> @@ -115,7 +115,7 @@ static inline bool free_area_empty(struct free_are= a *area, int migratetype) > >> struct pglist_data; > >> > >> /* > >> - * zone->lock and the zone lru_lock are two of the hottest locks in t= he kernel. > >> + * zone->lock and the lru_lock are two of the hottest locks in the ke= rnel. > >> * So add a wild amount of padding here to ensure that they fall into= separate > >> * cachelines. There are very few zone structures in the machine, so= space > >> * consumption is not a concern here. > > So I don't believe you are using ZONE_PADDING in any way to try and > > protect the LRU lock currently. At least you aren't using it in the > > lruvec. As such it might make sense to just drop the reference to the > > lru_lock here. That reminds me that we still need to review the > > placement of the lru_lock and determine if there might be a better > > placement and/or padding that might improve performance when under > > heavy stress. > > > > Right, is it the following looks better? > > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h > index ccc76590f823..0ed520954843 100644 > --- a/include/linux/mmzone.h > +++ b/include/linux/mmzone.h > @@ -113,8 +113,7 @@ static inline bool free_area_empty(struct free_area *= area, int migratetype) > struct pglist_data; > > /* > - * zone->lock and the lru_lock are two of the hottest locks in the kerne= l. > - * So add a wild amount of padding here to ensure that they fall into se= parate > + * Add a wild amount of padding here to ensure datas fall into separate > * cachelines. 
> >
> >> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >> index 64ede5f150dc..44738cdb5a55 100644
> >> --- a/include/linux/mm_types.h
> >> +++ b/include/linux/mm_types.h
> >> @@ -78,7 +78,7 @@ struct page {
> >>                 struct {        /* Page cache and anonymous pages */
> >>                         /**
> >>                          * @lru: Pageout list, eg. active_list protected by
> >> -                        * pgdat->lru_lock.  Sometimes used as a generic list
> >> +                        * lruvec->lru_lock.  Sometimes used as a generic list
> >>                          * by the page owner.
> >>                          */
> >>                         struct list_head lru;
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index 8af956aa13cf..c92289a4e14d 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -115,7 +115,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
> >>  struct pglist_data;
> >>
> >>  /*
> >> - * zone->lock and the zone lru_lock are two of the hottest locks in the kernel.
> >> + * zone->lock and the lru_lock are two of the hottest locks in the kernel.
> >>   * So add a wild amount of padding here to ensure that they fall into separate
> >>   * cachelines.  There are very few zone structures in the machine, so space
> >>   * consumption is not a concern here.
> > So I don't believe you are using ZONE_PADDING in any way to try and
> > protect the LRU lock currently. At least you aren't using it in the
> > lruvec. As such it might make sense to just drop the reference to the
> > lru_lock here. That reminds me that we still need to review the
> > placement of the lru_lock and determine if there might be a better
> > placement and/or padding that might improve performance when under
> > heavy stress.
> >
>
> Right, does the following look better?
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index ccc76590f823..0ed520954843 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -113,8 +113,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
>  struct pglist_data;
>
>  /*
> - * zone->lock and the lru_lock are two of the hottest locks in the kernel.
> - * So add a wild amount of padding here to ensure that they fall into separate
> + * Add a wild amount of padding here to ensure datas fall into separate
>   * cachelines.  There are very few zone structures in the machine, so space
>   * consumption is not a concern here.
>   */
>
> Thanks!
> Alex

I would maybe tweak it to make it clear that we are using this to pad
out items that are likely to cause cache thrash, such as various hot
spinlocks.
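To show what I mean, here is a purely illustrative sketch
(example_node_data and its fields are invented; ZONE_PADDING is the
existing macro in mmzone.h that expands to a zero-size,
cacheline-aligned member on SMP builds):

struct example_node_data {
	spinlock_t	hot_lock_a;	/* heavily contended */
	/* ... fields mostly written under hot_lock_a ... */

	/* Everything after the padding starts on a fresh cacheline, so
	 * writers bouncing the line that holds hot_lock_a do not also
	 * bounce the line that holds hot_lock_b. */
	ZONE_PADDING(_pad1_)

	spinlock_t	hot_lock_b;	/* another heavily contended lock */
	/* ... fields mostly written under hot_lock_b ... */
	ZONE_PADDING(_pad2_)

	/* ... read-mostly fields ... */
};

With that spelled out, the comment can simply say the padding keeps
hot, independently written fields (locks like the ones above) on
separate cachelines, without naming lru_lock at all.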