From: Luis Henriques
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org, kernel-team@lists.ubuntu.com
Cc: Kent Overstreet, NeilBrown, Luis Henriques
Subject: [PATCH 3.11 141/182] md/raid1: r1buf_pool_alloc: free allocate pages when subsequent allocation fails.
Date: Thu, 24 Apr 2014 09:51:06 +0100
Message-Id: <1398329507-5911-142-git-send-email-luis.henriques@canonical.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1398329507-5911-1-git-send-email-luis.henriques@canonical.com>
References: <1398329507-5911-1-git-send-email-luis.henriques@canonical.com>
X-Extended-Stable: 3.11
X-Mailing-List: linux-kernel@vger.kernel.org

3.11.10.9 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: NeilBrown

commit da1aab3dca9aa88ae34ca392470b8943159e25fe upstream.

When performing a user-requested check/repair (MD_RECOVERY_REQUESTED is
set) on a raid1, we allocate multiple bios, each with its own set of
pages.

If the page allocation for one bio fails, we currently do *not* free
the pages allocated for the previous bios, nor do we free the bio
itself.

This patch frees all the already-allocated pages, and makes sure that
all the bios are freed as well.

This bug can cause a memory leak which can ultimately OOM a machine.
It was introduced in 3.10-rc1.

Fixes: a07876064a0b73ab5ef1ebcf14b1cf0231c07858
Cc: Kent Overstreet
Reported-by: Russell King - ARM Linux
Signed-off-by: NeilBrown
Signed-off-by: Luis Henriques
---
 drivers/md/raid1.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 6edc2db..66c4aee 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -94,6 +94,7 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 	struct pool_info *pi = data;
 	struct r1bio *r1_bio;
 	struct bio *bio;
+	int need_pages;
 	int i, j;
 
 	r1_bio = r1bio_pool_alloc(gfp_flags, pi);
@@ -116,15 +117,15 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 	 * RESYNC_PAGES for each bio.
 	 */
 	if (test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery))
-		j = pi->raid_disks;
+		need_pages = pi->raid_disks;
 	else
-		j = 1;
-	while(j--) {
+		need_pages = 1;
+	for (j = 0; j < need_pages; j++) {
 		bio = r1_bio->bios[j];
 		bio->bi_vcnt = RESYNC_PAGES;
 
 		if (bio_alloc_pages(bio, gfp_flags))
-			goto out_free_bio;
+			goto out_free_pages;
 	}
 	/* If not user-requested, copy the page pointers to all bios */
 	if (!test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) {
@@ -138,6 +139,14 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 
 	return r1_bio;
 
+out_free_pages:
+	while (--j >= 0) {
+		struct bio_vec *bv;
+
+		bio_for_each_segment_all(bv, r1_bio->bios[j], i)
+			__free_page(bv->bv_page);
+	}
+
 out_free_bio:
 	while (++j < pi->raid_disks)
 		bio_put(r1_bio->bios[j]);
--
1.9.1
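
------------------

For readers outside the kernel tree, the same unwind-on-partial-failure
pattern the fix adopts can be sketched in plain userspace C. This is a
minimal illustration only, not the patch itself: the helper name
alloc_buffers() and the fixed counts and sizes are assumptions made for
the example; the kernel code frees bio pages and bios rather than
malloc'd buffers.

	/*
	 * Illustrative sketch (hypothetical helper, not a kernel API):
	 * allocate N resources, and on a partial failure walk backwards
	 * over the ones that succeeded so nothing is leaked.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define NBUFS	4
	#define BUFSZ	4096

	static char *bufs[NBUFS];

	/* Allocate every buffer, or free the already-allocated ones and fail. */
	static int alloc_buffers(void)
	{
		int j;

		for (j = 0; j < NBUFS; j++) {
			bufs[j] = malloc(BUFSZ);
			if (!bufs[j])
				goto out_free;	/* analogous to goto out_free_pages */
		}
		return 0;

	out_free:
		/* j is the index that failed; rewind over indices j-1 .. 0,
		 * mirroring the patch's "while (--j >= 0)" loop. */
		while (--j >= 0) {
			free(bufs[j]);
			bufs[j] = NULL;
		}
		return -1;
	}

	int main(void)
	{
		if (alloc_buffers())
			fprintf(stderr, "allocation failed, nothing leaked\n");
		else
			printf("all %d buffers allocated\n", NBUFS);
		return 0;
	}

Counting down from the failing index releases exactly the resources that
were successfully allocated and no others, which is why the fix replaces
the open-coded while(j--) with a for loop whose index can be rewound in
the out_free_pages unwind path.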