From: Jianyu Zhan
Date: Wed, 2 Mar 2016 23:12:32 +0800
Subject: Re: a question about slub in function __slab_free()
To: Xishi Qiu
Cc: LKML, Linux MM, js1304@gmail.com
In-Reply-To: <56D6DC13.8060008@huawei.com>

On Wed, Mar 2, 2016 at 8:26 PM, Xishi Qiu wrote:
> __slab_free()
>         prior = page->freelist;    // prior is NULL
>         was_frozen = new.frozen;   // was_frozen is 0
>         ...
>         /*
>          * Slab was on no list before and will be
>          * partially empty
>          * We can defer the list move and instead
>          * freeze it.
>          */
>         new.frozen = 1;
>         ...
>
> I don't understand why "Slab was on no list before"?

In this __slab_free() code path we are freeing an object back to a
slab page that belongs to a remote CPU.

Consider the condition that leads to this branch:

        new.inuse && !prior && !was_frozen

new.inuse means the slab page will still have objects in use after
this free operation completes. !prior && !was_frozen together mean
the slab page previously had all of its objects allocated and was
then forgotten (SLUB does not keep track of a slab page once all of
its objects are allocated).

Taken together, the three conditions say: a slab page on a remote CPU
had all of its objects allocated and was forgotten by SLUB, hence
"Slab was on no list before"; now we (on the local CPU) are freeing
an object back to it, which "will make the slab page partially
empty". We do not bother to add it back to the node partial list
right away (that would contend on list_lock), hence "we can defer the
list move".

How do we handle this instead? Easy: just mark the page frozen, and
later a CPU's per-cpu freelist can pick it up and allocate from it.

Regards,
Jianyu Zhan
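
P.S. In case it helps, here is a minimal userspace sketch of that
three-way test. The struct and helper names below are mine, not the
kernel's; only the condition itself mirrors __slab_free():

/*
 * Illustrative userspace model, not kernel code: decide whether a
 * page we just freed into should be frozen instead of being moved
 * onto the node partial list.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct slab_page {
	void *freelist;		/* NULL when every object is allocated */
	int inuse;		/* objects currently allocated */
	bool frozen;		/* owned by some CPU's per-cpu queue? */
};

static bool should_freeze(const struct slab_page *page, int freed)
{
	void *prior = page->freelist;	/* NULL: page was full */
	bool was_frozen = page->frozen;	/* false: on no CPU queue */
	int new_inuse = page->inuse - freed;

	/*
	 * new_inuse != 0: still holds live objects after the free
	 * !prior:         was fully allocated, hence on no list
	 * !was_frozen:    was not owned by any CPU either
	 */
	return new_inuse && !prior && !was_frozen;
}

int main(void)
{
	/* A full, unfrozen page with 8 objects; free one of them. */
	struct slab_page page = { NULL, 8, false };

	if (should_freeze(&page, 1)) {
		page.frozen = true;	/* defer the list move */
		printf("freeze it: was on no list, now partially empty\n");
	}
	return 0;
}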