From: Sonny Rao
To: linux-kernel@vger.kernel.org
Cc: Fengguang Wu, Peter Zijlstra, Andrew Morton, Michal Hocko,
	linux-mm@kvack.org, Mandeep Singh Baines, Johannes Weiner,
	Olof Johansson, Will Drewry, Kees Cook, Aaron Durbin, Sonny Rao,
	Puneet Kumar
Subject: [PATCH] mm: Fix calculation of dirtyable memory
Date: Thu, 8 Nov 2012 15:25:35 -0800
Message-Id: <1352417135-25122-1-git-send-email-sonnyrao@chromium.org>

The kernel uses global_dirtyable_memory() to calculate the number of
dirtyable pages, i.e. pages that can be allocated to the page cache.
A bug in the calculation causes an unsigned underflow, which makes the
page count look like a huge unsigned number. This in turn confuses the
dirty writeback throttling into aggressively writing back pages as they
become dirty (usually one page at a time). The fix is to ensure the
subtraction cannot underflow.

Signed-off-by: Sonny Rao
Signed-off-by: Puneet Kumar
---
 mm/page-writeback.c |   17 +++++++++++++----
 1 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b..2a6356c 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -194,11 +194,19 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 	unsigned long x = 0;
 
 	for_each_node_state(node, N_HIGH_MEMORY) {
+		unsigned long nr_pages;
 		struct zone *z =
 			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
 
-		x += zone_page_state(z, NR_FREE_PAGES) +
-		     zone_reclaimable_pages(z) - z->dirty_balance_reserve;
+		nr_pages = zone_page_state(z, NR_FREE_PAGES) +
+			zone_reclaimable_pages(z);
+		/*
+		 * Unreclaimable memory (kernel memory or anonymous memory
+		 * without swap) can bring down the dirtyable pages below
+		 * the zone's dirty balance reserve.
+		 */
+		if (nr_pages >= z->dirty_balance_reserve)
+			x += nr_pages - z->dirty_balance_reserve;
 	}
 	/*
 	 * Make sure that the number of highmem pages is never larger
@@ -222,8 +230,9 @@ static unsigned long global_dirtyable_memory(void)
 {
 	unsigned long x;
 
-	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
-	    dirty_balance_reserve;
+	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
+	if (x >= dirty_balance_reserve)
+		x -= dirty_balance_reserve;
 
 	if (!vm_highmem_is_dirtyable)
 		x -= highmem_dirtyable_memory(x);
--
1.7.7.3
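
For illustration only (not part of the patch): a minimal user-space
sketch, using made-up page counts rather than real zone statistics, of
the unsigned wraparound the patch guards against and of the clamped
subtraction it introduces.

#include <stdio.h>

int main(void)
{
	/* Hypothetical values for a zone where the dirty balance reserve
	 * exceeds free + reclaimable pages, e.g. because most memory is
	 * unreclaimable kernel memory or anonymous memory without swap. */
	unsigned long free_pages = 100;
	unsigned long reclaimable_pages = 50;
	unsigned long dirty_balance_reserve = 500;

	/* Old calculation: the subtraction underflows and wraps around
	 * to an enormous unsigned value. */
	unsigned long old_dirtyable =
		free_pages + reclaimable_pages - dirty_balance_reserve;

	/* Patched calculation: clamp at zero instead of wrapping. */
	unsigned long nr_pages = free_pages + reclaimable_pages;
	unsigned long new_dirtyable = 0;
	if (nr_pages >= dirty_balance_reserve)
		new_dirtyable = nr_pages - dirty_balance_reserve;

	printf("old: %lu\n", old_dirtyable); /* 18446744073709551266 on 64-bit */
	printf("new: %lu\n", new_dirtyable); /* 0 */
	return 0;
}

With the clamp, a zone whose free plus reclaimable pages fall below its
dirty balance reserve simply contributes no dirtyable pages, rather than
an absurdly large count that skews the writeback throttling.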