Date: Thu, 8 Jul 2010 13:00:48 -0700
From: Andrew Morton
To: KOSAKI Motohiro
Cc: LKML, linux-mm, Mel Gorman, Rik van Riel, Minchan Kim, Johannes Weiner
Subject: Re: [PATCH v2 1/2] vmscan: don't subtraction of unsined
Message-Id: <20100708130048.fccfcdad.akpm@linux-foundation.org>
In-Reply-To: <20100708163401.CD34.A69D9226@jp.fujitsu.com>
References: <20100708163401.CD34.A69D9226@jp.fujitsu.com>

On Thu, 8 Jul 2010 16:38:10 +0900 (JST) KOSAKI Motohiro wrote:

> 'slab_reclaimable' and 'nr_pages' are unsigned, so the subtraction is
> unsafe.
>
> Signed-off-by: KOSAKI Motohiro
> Acked-by: Christoph Lameter
> ---
>  mm/vmscan.c |   14 +++++++-------
>  1 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9c7e57c..8715da1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2588,7 +2588,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>                  .swappiness = vm_swappiness,
>                  .order = order,
>          };
> -        unsigned long slab_reclaimable;
> +        unsigned long n, m;

Please use better identifiers.

>          disable_swap_token();
>          cond_resched();
> @@ -2615,8 +2615,8 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>                  } while (priority >= 0 && sc.nr_reclaimed < nr_pages);
>          }
>
> -        slab_reclaimable = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> -        if (slab_reclaimable > zone->min_slab_pages) {
> +        n = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> +        if (n > zone->min_slab_pages) {
>                  /*
>                   * shrink_slab() does not currently allow us to determine how
>                   * many pages were freed in this zone. So we take the current
> @@ -2628,16 +2628,16 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>                   * take a long time.
>                   */
>                  while (shrink_slab(sc.nr_scanned, gfp_mask, order) &&
> -                        zone_page_state(zone, NR_SLAB_RECLAIMABLE) >
> -                                slab_reclaimable - nr_pages)
> +                        (zone_page_state(zone, NR_SLAB_RECLAIMABLE) + nr_pages > n))
>                          ;
>
>                  /*
>                   * Update nr_reclaimed by the number of slab pages we
>                   * reclaimed from this zone.
>                   */
> -                sc.nr_reclaimed += slab_reclaimable -
> -                        zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> +                m = zone_page_state(zone, NR_SLAB_RECLAIMABLE);
> +                if (m < n)
> +                        sc.nr_reclaimed += n - m;

And it's not a completely trivial objection.  Your patch made the above
code snippet quite a lot harder to read (and hence harder to maintain).
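
As an aside for readers following the thread, here is a minimal userspace
sketch of why the open-coded unsigned subtraction misbehaves and how the
patch's rewritten comparison and guarded update avoid it.  This is not the
mm/vmscan.c code itself; the values and the names reclaimable_before,
reclaimable_now, nr_pages and freed are purely illustrative.

#include <stdio.h>

int main(void)
{
        /* Illustrative case: fewer reclaimable slab pages than requested. */
        unsigned long reclaimable_before = 10;  /* snapshot before shrinking */
        unsigned long reclaimable_now = 8;      /* counter after some shrinking */
        unsigned long nr_pages = 32;            /* pages the caller asked for */
        unsigned long freed = 0;

        /*
         * Old-style test: when nr_pages > reclaimable_before, the expression
         * "reclaimable_before - nr_pages" wraps around to a huge unsigned
         * value, so the comparison is false even though more shrinking is
         * clearly wanted.
         */
        if (reclaimable_now > reclaimable_before - nr_pages)
                printf("old test: keep shrinking\n");
        else
                printf("old test: stop early (RHS wrapped to %lu)\n",
                       reclaimable_before - nr_pages);

        /*
         * Patched-style test: move nr_pages to the other side so only an
         * addition is performed and no wraparound occurs for these magnitudes.
         */
        if (reclaimable_now + nr_pages > reclaimable_before)
                printf("new test: keep shrinking\n");

        /* Patched-style accumulation: check the ordering before subtracting. */
        if (reclaimable_now < reclaimable_before)
                freed = reclaimable_before - reclaimable_now;
        printf("slab pages counted as reclaimed: %lu\n", freed);

        return 0;
}

On a 64-bit build the first comparison should go the wrong way because its
right-hand side wraps to a huge value; the rewritten forms behave as intended.
That wraparound is the bug the patch addresses, and Andrew's objection above
concerns the readability of the fix rather than its correctness.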