Date: Thu, 8 Nov 2012 15:37:56 -0800
From: Andrew Morton
To: Sonny Rao
Cc: linux-kernel@vger.kernel.org, Fengguang Wu, Peter Zijlstra, Michal Hocko, linux-mm@kvack.org, Mandeep Singh Baines, Johannes Weiner, Olof Johansson, Will Drewry, Kees Cook, Aaron Durbin, Puneet Kumar
Subject: Re: [PATCH] mm: Fix calculation of dirtyable memory
Message-Id: <20121108153756.cca505da.akpm@linux-foundation.org>
In-Reply-To: <1352417135-25122-1-git-send-email-sonnyrao@chromium.org>
References: <1352417135-25122-1-git-send-email-sonnyrao@chromium.org>

On Thu, 8 Nov 2012 15:25:35 -0800 Sonny Rao wrote:

> The system uses global_dirtyable_memory() to calculate the number
> of dirtyable pages, i.e. pages that can be allocated to the page
> cache. A bug causes an underflow, making the page count look like
> a very large unsigned number. This in turn confuses the dirty
> writeback throttling into aggressively writing back pages as they
> become dirty (usually one page at a time).
>
> The fix is to ensure there is no underflow while doing the math.
>
> Signed-off-by: Sonny Rao
> Signed-off-by: Puneet Kumar
> ---
>  mm/page-writeback.c | 17 +++++++++++++----
>  1 files changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 830893b..2a6356c 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -194,11 +194,19 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
>  	unsigned long x = 0;
>  
>  	for_each_node_state(node, N_HIGH_MEMORY) {
> +		unsigned long nr_pages;
>  		struct zone *z =
>  			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
>  
> -		x += zone_page_state(z, NR_FREE_PAGES) +
> -		     zone_reclaimable_pages(z) - z->dirty_balance_reserve;
> +		nr_pages = zone_page_state(z, NR_FREE_PAGES) +
> +			   zone_reclaimable_pages(z);
> +		/*
> +		 * Unreclaimable memory (kernel memory or anonymous memory
> +		 * without swap) can bring down the dirtyable pages below
> +		 * the zone's dirty balance reserve.
> +		 */
> +		if (nr_pages >= z->dirty_balance_reserve)
> +			x += nr_pages - z->dirty_balance_reserve;

If the system has two nodes and one is below its dirty_balance_reserve,
we could end up with something like:

	x = 0;
	...
	x += 1000;
	...
	x += -100;

In this case, your fix would cause highmem_dirtyable_memory() to return
1000. Would it be better to instead return 900? IOW, we could instead
add logic along the lines of:

	if ((long)x < 0)
		x = 0;
	return x;