Date: Tue, 27 Nov 2018 00:36:38 +0000
From: Wei Yang
To: Wengang Wang
Cc: Wei Yang, zhong jiang, Christopher Lameter, penberg@kernel.org,
	David Rientjes, iamjoonsoo.kim@lge.com, Andrew Morton,
	Linux-MM, Linux Kernel Mailing List
Subject: Re: [PATCH] mm: use this_cpu_cmpxchg_double in put_cpu_partial
Message-ID: <20181127003638.2oyudcyene6hb6sb@master>
References: <20181117013335.32220-1-wen.gang.wang@oracle.com>
	<5BF36EE9.9090808@huawei.com>
	<476b5d35-1894-680c-2bd9-b399a3f4d9ed@oracle.com>
In-Reply-To: <476b5d35-1894-680c-2bd9-b399a3f4d9ed@oracle.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 26, 2018 at 08:57:54AM -0800, Wengang Wang wrote:
>
>On 2018/11/25 17:59, Wei Yang wrote:
>> On Tue, Nov 20, 2018 at 10:58 AM zhong jiang wrote:
>> > On 2018/11/17 9:33, Wengang Wang wrote:
>> > > The this_cpu_cmpxchg makes the do-while loop pass as long as
>> > > s->cpu_slab->partial has the same value. It doesn't care what happened
>> > > to that slab.
>> > > Interrupts are not disabled, and new alloc/free can happen in the
>> > > interrupt handlers. Theoretically, after we have a reference to it,
>> > > stored in _oldpage_, the first slab on the partial list of this CPU
>> > > can be moved to kmem_cache_node, then moved to a different
>> > > kmem_cache_cpu, and then somehow be added back as the head of the
>> > > partial list of the current kmem_cache_cpu, though that is a very
>> > > rare case. If that rare case really happened, the read of
>> > > oldpage->pobjects may unexpectedly get 0xdead0000, stored in
>> > > _pobjects_, if the read happens just after another CPU removed the
>> > > slab from kmem_cache_node, setting lru.prev to LIST_POISON2
>> > > (0xdead000000000200). The wrong (negative) _pobjects_ then prevents
>> > > slabs from being moved to kmem_cache_node and finally freed.
>> > >
>> > > We see in a vmcore that 375210 slabs are kept on the partial list of
>> > > one kmem_cache_cpu, but only 305 in-use objects are in the same list
>> > > for the kmalloc-2048 cache. We see negative values for page.pobjects;
>> > > the last page with a negative _pobjects_ has the value 0xdead0004,
>> > > and the next page looks good (_pobjects_ is 1).
>> > >
>> > > For the fix, I wanted to call this_cpu_cmpxchg_double with
>> > > oldpage->pobjects, but failed due to the size difference between
>> > > oldpage->pobjects and cpu_slab->partial. So I changed it to call
>> > > this_cpu_cmpxchg_double with _tid_. The intent is not to prevent any
>> > > alloc/free from happening in between, but just to make sure the first
>> > > slab did experience a remove and re-add. This patch is more a call
>> > > for ideas.
>> >
>> > Have you hit the real issue, or did you just review the code?
>> >
>> > I did hit the issue, and it was fixed upstream, incidentally, by the
>> > following patch:
>> > e5d9998f3e09 ("slub: make ->cpu_partial unsigned int")
>>
>> Zhong,
>>
>> I took a look into your upstream patch, but I am confused how your
>> patch fixes this issue.
>>
>> In put_cpu_partial(), the cmpxchg compares cpu_slab->partial (a page
>> pointer), not cpu_partial (an unsigned integer). I didn't get the
>> point of this fix.
>
>I think the patch can't prevent pobjects from being set to 0xdead0000
>(the high 4 bytes of LIST_POISON2).
>But if pobjects is treated as an unsigned integer,
>
>2266                 pobjects = oldpage->pobjects;
>2267                 pages = oldpage->pages;
>2268                 if (drain && pobjects > s->cpu_partial) {
>2269                         unsigned long flags;
>

Ehh..., you mean (0xdead0000 > 0x02)?

This is really a bad thing, if it works around the problem like this.

I strongly don't agree this is a *fix*. This is too tricky.

>line 2268 will be true in put_cpu_partial(), so the code goes to
>unfreeze_partials(). This way the slabs on the cpu partial list can be
>moved to kmem_cache_node and then freed. So it fixes (or say, works
>around) the problem I see here (a huge number of empty slabs staying on
>the cpu partial list).
>
>thanks
>wengang
>
>> > Thanks,
>> > zhong jiang

-- 
Wei Yang
Help you, Help me