From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Andrew Morton, Mel Gorman, Tejun Heo
Cc: Alex Shi, Jason Gunthorpe, Dan Williams, Vlastimil Babka, Ira Weiny,
    Jesper Dangaard Brouer, Andrey Ryabinin, Jann Horn, Logan Gunthorpe,
    Souptick Joarder, Ralph Campbell, "Tobin C. Harding", Michal Hocko,
    Oscar Salvador, Wei Yang, Johannes Weiner, Pavel Tatashin, Arun KS,
    Matthew Wilcox,
    "Darrick J. Wong", Amir Goldstein, Dave Chinner, Josef Bacik,
    "Kirill A. Shutemov", Jérôme Glisse, Mike Kravetz, Hugh Dickins,
    Kirill Tkhai, Daniel Jordan, Yafang Shao, Yang Shi
Subject: [PATCH 14/14] mm/lru: fix the comments of lru_lock
Date: Tue, 20 Aug 2019 17:48:37 +0800
Message-Id: <1566294517-86418-15-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com>

Since pgdat->lru_lock has been replaced by lruvec->lru_lock, fix the
now-incorrect comments in the code. Also fix some stale zone->lru_lock
comments left over from ancient times.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Dan Williams
Cc: Vlastimil Babka
Cc: Ira Weiny
Cc: Jesper Dangaard Brouer
Cc: Andrey Ryabinin
Cc: Jann Horn
Cc: Logan Gunthorpe
Cc: Souptick Joarder
Cc: Ralph Campbell
Cc: "Tobin C. Harding"
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Mel Gorman
Cc: Wei Yang
Cc: Johannes Weiner
Cc: Pavel Tatashin
Cc: Arun KS
Cc: Matthew Wilcox
Cc: "Darrick J. Wong"
Cc: Amir Goldstein
Cc: Dave Chinner
Cc: Josef Bacik
Cc: "Kirill A. Shutemov"
Cc: "Jérôme Glisse"
Cc: Mike Kravetz
Cc: Hugh Dickins
Cc: Kirill Tkhai
Cc: Daniel Jordan
Cc: Yafang Shao
Cc: Yang Shi
Cc: cgroups@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
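A note for reviewers, not part of the patch: the comment fixes below all
describe the same structural change made earlier in this series, namely
that the LRU lock moves from the node-level pglist_data into each lruvec.
The user-space sketch below shows that before/after locking shape, with
pthread spinlocks standing in for the kernel's spinlock_t; all struct
layouts here are simplified stand-ins, not the kernel's real definitions.

#include <pthread.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures; illustrative only. */
struct lruvec {
	pthread_spinlock_t lru_lock;	/* after: one lock per lruvec */
	int nr_pages;			/* stand-in for the LRU lists */
};

struct pglist_data {
	/* before the series, a single node-wide lru_lock lived here */
	struct lruvec lruvec;
};

/*
 * Old pattern: spin_lock(&pgdat->lru_lock) serialized all LRU work on
 * the node.  New pattern: take the lock of the lruvec the page belongs
 * to, so lruvecs (per memcg, per node) no longer contend on one lock.
 */
static void lru_add(struct lruvec *lruvec)
{
	pthread_spin_lock(&lruvec->lru_lock);
	lruvec->nr_pages++;
	pthread_spin_unlock(&lruvec->lru_lock);
}

int main(void)
{
	struct pglist_data pgdat = { .lruvec = { .nr_pages = 0 } };

	pthread_spin_init(&pgdat.lruvec.lru_lock, PTHREAD_PROCESS_PRIVATE);
	lru_add(&pgdat.lruvec);
	printf("pages on lruvec: %d\n", pgdat.lruvec.nr_pages);
	pthread_spin_destroy(&pgdat.lruvec.lru_lock);
	return 0;
}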
 include/linux/mm_types.h | 2 +-
 include/linux/mmzone.h   | 4 ++--
 mm/filemap.c             | 4 ++--
 mm/rmap.c                | 2 +-
 mm/vmscan.c              | 6 +++---
 5 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6a7a1083b6fb..f9f990d8f08f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -79,7 +79,7 @@ struct page {
 		struct {	/* Page cache and anonymous pages */
 			/**
 			 * @lru: Pageout list, eg. active_list protected by
-			 * pgdat->lru_lock. Sometimes used as a generic list
+			 * lruvec->lru_lock. Sometimes used as a generic list
 			 * by the page owner.
 			 */
 			struct list_head lru;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8d0076d084be..d2f782263e42 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -159,7 +159,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
 struct pglist_data;
 
 /*
- * zone->lock and the zone lru_lock are two of the hottest locks in the kernel.
+ * zone->lock and the lru_lock are two of the hottest locks in the kernel.
  * So add a wild amount of padding here to ensure that they fall into separate
  * cachelines. There are very few zone structures in the machine, so space
  * consumption is not a concern here.
@@ -295,7 +295,7 @@ struct zone_reclaim_stat {
 struct lruvec {
 	struct list_head		lists[NR_LRU_LISTS];
-	/* move lru_lock to per lruvec for memcg */
+	/* per lruvec lru_lock for memcg */
 	spinlock_t			lru_lock;
 	struct zone_reclaim_stat	reclaim_stat;
diff --git a/mm/filemap.c b/mm/filemap.c
index d0cf700bf201..0a604c8284f2 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -100,8 +100,8 @@
  *    ->swap_lock		(try_to_unmap_one)
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
- *    ->pgdat->lru_lock		(follow_page->mark_page_accessed)
- *    ->pgdat->lru_lock		(check_pte_range->isolate_lru_page)
+ *    ->lruvec->lru_lock	(follow_page->mark_page_accessed)
+ *    ->lruvec->lru_lock	(check_pte_range->isolate_lru_page)
  *    ->private_lock		(page_remove_rmap->set_page_dirty)
  *    ->i_pages lock		(page_remove_rmap->set_page_dirty)
  *    bdi.wb->list_lock		(page_remove_rmap->set_page_dirty)
diff --git a/mm/rmap.c b/mm/rmap.c
index 003377e24232..6bee4aebced6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -27,7 +27,7 @@
  *         mapping->i_mmap_rwsem
  *           anon_vma->rwsem
  *             mm->page_table_lock or pte_lock
- *               pgdat->lru_lock (in mark_page_accessed, isolate_lru_page)
+ *               lruvec->lru_lock (in mark_page_accessed, isolate_lru_page)
 *               swap_lock (in swap_duplicate, swap_info_get)
 *                 mmlist_lock (in mmput, drain_mmlist and others)
 *                 mapping->private_lock (in __set_page_dirty_buffers)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ea5c2f3f2567..1328eb182a3e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1662,7 +1662,7 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 }
 
 /**
- * pgdat->lru_lock is heavily contended. Some of the functions that
+ * lruvec->lru_lock is heavily contended. Some of the functions that
  * shrink the lists perform better by taking out a batch of pages
  * and working on them outside the LRU lock.
 *
@@ -1864,9 +1864,9 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * processes, from rmap.
  *
  * If the pages are mostly unmapped, the processing is fast and it is
- * appropriate to hold lru_lock across the whole operation. But if
+ * appropriate to hold lru_lock across the whole operation. But if
  * the pages are mapped, the processing is slow (page_referenced()) so we
- * should drop lru_lock around each page. It's impossible to balance
+ * should drop lru_lock around each page. It's impossible to balance
  * this, so instead we remove the pages from the LRU while processing them.
  * It is safe to rely on PG_active against the non-LRU pages in here because
  * nobody will play with that bit on a non-LRU page.
-- 
1.8.3.1
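
P.S. The vmscan.c comment above is the heart of why this lock is worth
splitting: shrinkers isolate a batch of pages under lru_lock and then do
the slow per-page work with the lock dropped. A rough user-space sketch
of that batching pattern follows, using a simplified singly linked LRU
and a made-up isolate_batch() helper; it is not the kernel's
isolate_lru_pages(), just the shape of the idea.

#include <pthread.h>
#include <stddef.h>

struct page {
	struct page *next;
};

struct lruvec {
	pthread_spinlock_t lru_lock;
	struct page *lru;	/* simplified LRU: singly linked list */
};

/*
 * Detach up to 'batch' (>= 1) pages under lru_lock, then let the
 * caller do the expensive per-page work with the lock dropped, so
 * other CPUs are not blocked for the whole scan.
 */
struct page *isolate_batch(struct lruvec *lruvec, int batch)
{
	struct page *head, *p;

	pthread_spin_lock(&lruvec->lru_lock);
	head = lruvec->lru;
	p = head;
	while (p && p->next && --batch > 0)
		p = p->next;
	if (p) {
		lruvec->lru = p->next;	/* remainder stays on the LRU */
		p->next = NULL;		/* terminate the private batch */
	}
	pthread_spin_unlock(&lruvec->lru_lock);

	return head;	/* process these pages without holding lru_lock */
}

A caller would walk the returned batch, do its page_referenced()-style
work unlocked, and splice survivors back under a second short lock hold,
which is exactly the trade-off the comment describes.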