From: Jiang Liu
To: Andrew Morton
Cc: Jiang Liu, David Rientjes, Wen Congyang, Mel Gorman, Minchan Kim,
    KAMEZAWA Hiroyuki, Michal Hocko, James Bottomley, Sergei Shtylyov,
    David Howells, Mark Salter, Jianguo Wu, linux-mm@kvack.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
    Michel Lespinasse, Rik van Riel
Subject: [PATCH v5, part3 11/15] mm: use a dedicated lock to protect totalram_pages and zone->managed_pages
Date: Wed, 8 May 2013 23:17:10 +0800
Message-Id: <1368026235-5976-12-git-send-email-jiang.liu@huawei.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1368026235-5976-1-git-send-email-jiang.liu@huawei.com>
References: <1368026235-5976-1-git-send-email-jiang.liu@huawei.com>

Currently lock_memory_hotplug()/unlock_memory_hotplug() are used to
protect totalram_pages and zone->managed_pages.  But besides the
memory hotplug driver, totalram_pages and zone->managed_pages may also
be modified at runtime by other drivers, such as the Xen balloon and
virtio_balloon drivers.  For those cases the memory hotplug lock is
too heavy, so introduce a dedicated lock to protect totalram_pages
and zone->managed_pages.

Now we have simplified locking rules for totalram_pages and
zone->managed_pages:
1) no locking for read accesses, because both are unsigned long and
   an aligned word is read with a single, untearable access;
2) no locking for write accesses at boot time, in single-threaded
   context;
3) serialize write accesses at runtime by acquiring the dedicated
   managed_page_count_lock.

Also adjust zone->managed_pages when freeing reserved pages into the
buddy system, to keep totalram_pages and zone->managed_pages
consistent.  Both cases are illustrated by the sketches following
the diffstat below.

Signed-off-by: Jiang Liu
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Michel Lespinasse
Cc: Rik van Riel
Cc: Minchan Kim
Cc: linux-mm@kvack.org (open list:MEMORY MANAGEMENT)
Cc: linux-kernel@vger.kernel.org (open list)
---
 include/linux/mm.h     |  6 ++----
 include/linux/mmzone.h | 14 ++++++++++----
 mm/page_alloc.c        | 12 ++++++++++++
 3 files changed, 24 insertions(+), 8 deletions(-)
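A couple of usage sketches, not part of the patch itself.  First,
rules 1 and 3 as seen from a balloon-style driver: runtime writers go
through adjust_managed_page_count(), readers take no lock at all.
The demo_* names are made up for illustration:

#include <linux/mm.h>

/*
 * Runtime writers (rule 3): adjust_managed_page_count() takes
 * managed_page_count_lock internally, so concurrent updates from
 * balloon drivers and from memory hotplug cannot lose counts.
 */
static void demo_balloon_inflate_one(struct page *page)
{
	/* The page has just been hidden from the buddy system. */
	adjust_managed_page_count(page, -1);
}

static void demo_balloon_deflate_one(struct page *page)
{
	/* The page is about to be given back to the buddy system. */
	adjust_managed_page_count(page, 1);
}

/*
 * Reader (rule 1): no lock is taken.  An aligned unsigned long is
 * read with a single access and cannot tear; the value may merely be
 * slightly stale while a balloon operation is in flight.
 */
static unsigned long demo_totalram_snapshot(void)
{
	return totalram_pages;
}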
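Second, the reserved-page case: when a page that was reserved at boot
is handed to the buddy system at runtime, the counters must be bumped
as well.  Again a sketch with a made-up name, combining the two
helpers touched by this patch:

#include <linux/mm.h>

static void demo_release_reserved_page(struct page *page)
{
	/* __free_reserved_page() clears PG_reserved and frees the
	 * page into the buddy system; it does not touch the page
	 * counters itself. */
	__free_reserved_page(page);

	/* Keep zone->managed_pages and totalram_pages consistent. */
	adjust_managed_page_count(page, 1);
}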
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9f7006f..86014c9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1303,6 +1303,7 @@ extern void free_initmem(void);
  */
 extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
 					int poison, char *s);
+
 #ifdef CONFIG_HIGHMEM
 /*
  * Free a highmem page into the buddy system, adjusting totalhigh_pages
@@ -1311,10 +1312,7 @@ extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
 extern void free_highmem_page(struct page *page);
 #endif
 
-static inline void adjust_managed_page_count(struct page *page, long count)
-{
-	totalram_pages += count;
-}
+extern void adjust_managed_page_count(struct page *page, long count);
 
 /* Free the reserved page into the buddy system, so it gets managed. */
 static inline void __free_reserved_page(struct page *page)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8c9f859..14ca1a9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -474,10 +474,16 @@ struct zone {
 	 * frequently read in proximity to zone->lock.  It's good to
 	 * give them a chance of being in the same cacheline.
	 *
-	 * Write access to present_pages and managed_pages at runtime should
-	 * be protected by lock_memory_hotplug()/unlock_memory_hotplug().
-	 * Any reader who can't tolerant drift of present_pages and
-	 * managed_pages should hold memory hotplug lock to get a stable value.
+	 * Write access to present_pages at runtime should be protected by
+	 * lock_memory_hotplug()/unlock_memory_hotplug().  Any reader who
+	 * cannot tolerate drift of present_pages should hold the memory
+	 * hotplug lock to get a stable value.
+	 *
+	 * Read access to managed_pages is safe because it is an unsigned
+	 * long.  Write accesses to zone->managed_pages and totalram_pages
+	 * are protected by managed_page_count_lock at runtime.  Ideally,
+	 * only adjust_managed_page_count() should be used instead of
+	 * touching zone->managed_pages and totalram_pages directly.
 	 */
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5c0988..9d8209a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -100,6 +100,9 @@ nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
 };
 EXPORT_SYMBOL(node_states);
 
+/* Protect totalram_pages and zone->managed_pages */
+static DEFINE_SPINLOCK(managed_page_count_lock);
+
 unsigned long totalram_pages __read_mostly;
 unsigned long totalreserve_pages __read_mostly;
 /*
@@ -5186,6 +5189,15 @@ early_param("movablecore", cmdline_parse_movablecore);
 
 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
 
+void adjust_managed_page_count(struct page *page, long count)
+{
+	spin_lock(&managed_page_count_lock);
+	page_zone(page)->managed_pages += count;
+	totalram_pages += count;
+	spin_unlock(&managed_page_count_lock);
+}
+EXPORT_SYMBOL(adjust_managed_page_count);
+
 unsigned long free_reserved_area(unsigned long start, unsigned long end,
 				 int poison, char *s)
 {
-- 
1.7.9.5