Date: Sun, 30 Sep 2018 05:50:38 -0700
From: Matthew Wilcox
To: zhong jiang
Cc: gregkh@linux-foundation.org, cl@linux.com, penberg@kernel.org,
    rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    mhocko@kernel.org, mgorman@suse.de, vbabka@suse.cz, andrea@kernel.org,
    kirill@shutemov.name, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [STABLE PATCH] slub: make ->cpu_partial unsigned int
Message-ID: <20180930125038.GA2533@bombadil.infradead.org>
References: <1538303301-61784-1-git-send-email-zhongjiang@huawei.com>
In-Reply-To: <1538303301-61784-1-git-send-email-zhongjiang@huawei.com>
User-Agent: Mutt/1.9.2 (2017-12-15)

On Sun, Sep 30, 2018 at 06:28:21PM +0800, zhong jiang wrote:
> From: Alexey Dobriyan
>
> [ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]
>
> /*
>  * cpu_partial determined the maximum number of objects
>  * kept in the per cpu partial lists of a processor.
>  */
>
> Can't be negative.
>
> I hit a real issue where this results in a severe memory leak. Because
> freeing slabs happens in interrupt context, put_cpu_partial can be
> interrupted more than once. Due to the union of lru and pobjects in
> struct page, when another core handles the page->lru list (for example,
> remove_partial in the slab-freeing code flow), pobjects can end up with
> a negative value (0xdead0000). As a result, a large number of slabs are
> added to the per-cpu partial list.
>
> I had posted the issue to the community before; the detailed description
> is here:
>
> https://www.spinics.net/lists/kernel/msg2870979.html
>
> After applying the patch, the issue is fixed, so the patch is an
> effective bugfix and should go into stable.
>
> Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
> Signed-off-by: Alexey Dobriyan
> Acked-by: Christoph Lameter

Hang on.  Christoph acked the _original_ patch going into upstream.
When he reviewed this patch for _stable_ last week, he asked for more
investigation.  Including this patch in stable is misleading.
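For context, the aliasing the changelog describes can be sketched in a
few lines of user-space C.  This is only an illustration: fake_page
below is a made-up stand-in for the old struct page union (list_head
overlapping the pages/pobjects pair), and it assumes a 64-bit
little-endian layout; it is not the kernel code.

#include <stdio.h>

/*
 * Hedged sketch, not the real struct page: lru.prev shares storage with
 * the pages/pobjects pair, so poisoning the list pointer with a
 * LIST_POISON2-style value (0xdead000000000200 with the default poison
 * offset) leaves pobjects holding 0xdead0000.
 */
struct fake_page {
	union {
		struct {
			void *next;	/* lru.next */
			void *prev;	/* lru.prev */
		} lru;
		struct {
			void *partial_next;
			int pages;
			int pobjects;
		};
	};
};

int main(void)
{
	struct fake_page page = { 0 };

	page.pages = 1;
	page.pobjects = 5;

	/* Another CPU takes the page off a list; list_del() poisons prev. */
	page.lru.prev = (void *)0xdead000000000200UL;

	printf("pobjects after poisoning: %#x (%d)\n",
	       (unsigned int)page.pobjects, page.pobjects);
	return 0;
}

On x86_64 this should print "pobjects after poisoning: 0xdead0000
(-559087616)", i.e. the value quoted in the changelog, which is negative
when read back as a signed int.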
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Cc: # 4.4.x
> Signed-off-by: Andrew Morton
> Signed-off-by: Linus Torvalds
> Signed-off-by: zhong jiang
> ---
>  include/linux/slub_def.h | 3 ++-
>  mm/slub.c                | 6 +++---
>  2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 3388511..9b681f2 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -67,7 +67,8 @@ struct kmem_cache {
>  	int size;		/* The size of an object including meta data */
>  	int object_size;	/* The size of an object without meta data */
>  	int offset;		/* Free pointer offset. */
> -	int cpu_partial;	/* Number of per cpu partial objects to keep around */
> +	/* Number of per cpu partial objects to keep around */
> +	unsigned int cpu_partial;
>  	struct kmem_cache_order_objects oo;
>
>  	/* Allocation and freeing of slabs */
> diff --git a/mm/slub.c b/mm/slub.c
> index 2284c43..c33b0e1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>  {
>  	struct page *page, *page2;
>  	void *object = NULL;
> -	int available = 0;
> +	unsigned int available = 0;
>  	int objects;
>
>  	/*
> @@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
>  static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
>  				 size_t length)
>  {
> -	unsigned long objects;
> +	unsigned int objects;
>  	int err;
>
> -	err = kstrtoul(buf, 10, &objects);
> +	err = kstrtouint(buf, 10, &objects);
>  	if (err)
>  		return err;
>  	if (objects && !kmem_cache_has_cpu_partial(s))
> --
> 1.7.12.4
>
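As for what the type change itself does to the drain check in
put_cpu_partial() ("pobjects > s->cpu_partial"): with a signed
cpu_partial, a corrupted negative pobjects never reaches the threshold,
while an unsigned cpu_partial promotes pobjects to unsigned in the
comparison, so the oversized value does trip the drain.  Another hedged
user-space sketch with made-up values, not the kernel code:

#include <stdio.h>

/*
 * Sketch of the comparison only; pobjects stays a signed int in
 * struct page, the patch changes just the type of cpu_partial.
 */
int main(void)
{
	int pobjects = (int)0xdead0000;		/* corrupted counter, negative as an int */

	int cpu_partial_old = 30;		/* before: int cpu_partial */
	unsigned int cpu_partial_new = 30;	/* after: unsigned int cpu_partial */

	/* Signed comparison: a negative pobjects never exceeds the
	 * limit, so the per-cpu partial list is never drained. */
	printf("old check drains: %s\n",
	       pobjects > cpu_partial_old ? "yes" : "no");

	/* Unsigned comparison: pobjects is promoted to unsigned int,
	 * the huge value exceeds the limit and draining kicks in. */
	printf("new check drains: %s\n",
	       pobjects > cpu_partial_new ? "yes" : "no");

	return 0;
}

The old check prints "no", the new one prints "yes": only the comparison
semantics change, the underlying list corruption is a separate question.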