Date: Wed, 4 Apr 2018 15:23:24 -0700
From: Andrew Morton
To: Xidong Wang, Vitaly Wool, Mike Rapoport, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/1] z3fold: fix memory leak
Message-Id: <20180404152324.14c8ed8af41b0ec8b3516b7f@linux-foundation.org>
In-Reply-To: <20180404152039.aadbe5bbed5bc91da8c5fa99@linux-foundation.org>
References: <1522803111-29209-1-git-send-email-wangxidong_97@163.com> <20180404152039.aadbe5bbed5bc91da8c5fa99@linux-foundation.org>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 4 Apr 2018 15:20:39 -0700 Andrew Morton wrote:

> On Wed, 4 Apr 2018 08:51:51 +0800 Xidong Wang wrote:
>
> > In function z3fold_create_pool(), the memory allocated by
> > __alloc_percpu()
> > is not released on the error path where pool->compact_wq, which holds
> > the return value of create_singlethread_workqueue(), is NULL. This
> > results in a memory leak.
> >
> > ...
> >
> > --- a/mm/z3fold.c
> > +++ b/mm/z3fold.c
> > @@ -490,6 +490,7 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
> >  out_wq:
> >  	destroy_workqueue(pool->compact_wq);
> >  out:
> > +	free_percpu(pool->unbuddied);
> >  	kfree(pool);
> >  	return NULL;
> >  }
>
> That isn't right.  If the initial kzalloc fails we'll goto out with
> pool==NULL.
>
> Please check:
>
> --- a/mm/z3fold.c~z3fold-fix-memory-leak-fix
> +++ a/mm/z3fold.c
> @@ -479,7 +479,7 @@ static struct z3fold_pool *z3fold_create
>  	pool->name = name;
>  	pool->compact_wq = create_singlethread_workqueue(pool->name);
>  	if (!pool->compact_wq)
> -		goto out;
> +		goto out_unbuddied;
>  	pool->release_wq = create_singlethread_workqueue(pool->name);
>  	if (!pool->release_wq)
>  		goto out_wq;
> @@ -489,9 +489,10 @@ static struct z3fold_pool *z3fold_create
>
>  out_wq:
>  	destroy_workqueue(pool->compact_wq);
> -out:
> +out_unbuddied:
>  	free_percpu(pool->unbuddied);
>  	kfree(pool);
> +out:
>  	return NULL;
>  }

We may as well check that __alloc_percpu() return value, too:

--- a/mm/z3fold.c~z3fold-fix-memory-leak-fix
+++ a/mm/z3fold.c
@@ -467,6 +467,8 @@ static struct z3fold_pool *z3fold_create
 	spin_lock_init(&pool->lock);
 	spin_lock_init(&pool->stale_lock);
 	pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2);
+	if (!pool->unbuddied)
+		goto out_pool;
 	for_each_possible_cpu(cpu) {
 		struct list_head *unbuddied =
 				per_cpu_ptr(pool->unbuddied, cpu);
@@ -479,7 +481,7 @@ static struct z3fold_pool *z3fold_create
 	pool->name = name;
 	pool->compact_wq = create_singlethread_workqueue(pool->name);
 	if (!pool->compact_wq)
-		goto out;
+		goto out_unbuddied;
 	pool->release_wq = create_singlethread_workqueue(pool->name);
 	if (!pool->release_wq)
 		goto out_wq;
@@ -489,9 +491,11 @@ static struct z3fold_pool *z3fold_create
 out_wq:
 	destroy_workqueue(pool->compact_wq);
-out:
+out_unbuddied:
 	free_percpu(pool->unbuddied);
+out_pool:
 	kfree(pool);
+out:
 	return NULL;
 }

End result:

static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
		const struct z3fold_ops *ops)
{
	struct z3fold_pool *pool = NULL;
	int i, cpu;

	pool = kzalloc(sizeof(struct z3fold_pool), gfp);
	if (!pool)
		goto out;
	spin_lock_init(&pool->lock);
	spin_lock_init(&pool->stale_lock);
	pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2);
	if (!pool->unbuddied)
		goto out_pool;
	for_each_possible_cpu(cpu) {
		struct list_head *unbuddied =
				per_cpu_ptr(pool->unbuddied, cpu);

		for_each_unbuddied_list(i, 0)
			INIT_LIST_HEAD(&unbuddied[i]);
	}
	INIT_LIST_HEAD(&pool->lru);
	INIT_LIST_HEAD(&pool->stale);
	atomic64_set(&pool->pages_nr, 0);
	pool->name = name;
	pool->compact_wq = create_singlethread_workqueue(pool->name);
	if (!pool->compact_wq)
		goto out_unbuddied;
	pool->release_wq = create_singlethread_workqueue(pool->name);
	if (!pool->release_wq)
		goto out_wq;
	INIT_WORK(&pool->work, free_pages_work);
	pool->ops = ops;
	return pool;

out_wq:
	destroy_workqueue(pool->compact_wq);
out_unbuddied:
	free_percpu(pool->unbuddied);
out_pool:
	kfree(pool);
out:
	return NULL;
}