Subject: Re: [PATCH] mm: use this_cpu_cmpxchg_double in put_cpu_partial
From: Wengang Wang
Organization: Oracle Corporation
To: Wei Yang
Cc: zhong jiang, Christopher Lameter, penberg@kernel.org, David Rientjes, iamjoonsoo.kim@lge.com, Andrew Morton, Linux-MM, Linux Kernel Mailing List
Date: Mon, 26 Nov 2018 17:42:28 -0800
Message-ID: <6e9efd77-b6af-40f4-56b0-c0572930b3e0@oracle.com>
In-Reply-To: <20181127003638.2oyudcyene6hb6sb@master>
References: <20181117013335.32220-1-wen.gang.wang@oracle.com> <5BF36EE9.9090808@huawei.com> <476b5d35-1894-680c-2bd9-b399a3f4d9ed@oracle.com> <20181127003638.2oyudcyene6hb6sb@master>
On 2018/11/26 16:36, Wei Yang wrote:
> On Mon, Nov 26, 2018 at 08:57:54AM -0800, Wengang Wang wrote:
>>
>> On 2018/11/25 17:59, Wei Yang wrote:
>>> On Tue, Nov 20, 2018 at 10:58 AM zhong jiang wrote:
>>>> On 2018/11/17 9:33, Wengang Wang wrote:
>>>>> The this_cpu_cmpxchg makes the do-while loop pass as long as
>>>>> s->cpu_slab->partial has the same value. It doesn't care what happened to
>>>>> that slab. Interrupts are not disabled, and new alloc/free can happen in
>>>>> the interrupt handlers. Theoretically, after we have a reference to it,
>>>>> stored in _oldpage_, the first slab on the partial list on this CPU can be
>>>>> moved to kmem_cache_node, then moved to a different kmem_cache_cpu, and
>>>>> then somehow added back as the head of the partial list of the current
>>>>> kmem_cache_cpu, though that is a very rare case. If that rare case really
>>>>> happens, the read of oldpage->pobjects may unexpectedly return 0xdead0000,
>>>>> stored in _pobjects_, if the read happens just after another CPU removed
>>>>> the slab from kmem_cache_node, setting lru.prev to LIST_POISON2
>>>>> (0xdead000000000200). The wrong (negative) _pobjects_ then prevents slabs
>>>>> from being moved to kmem_cache_node and finally freed.
>>>>>
>>>>> In a vmcore we see 375210 slabs kept on the partial list of one
>>>>> kmem_cache_cpu, but only 305 in-use objects in the same list, for the
>>>>> kmalloc-2048 cache. We see negative values for page.pobjects; the last
>>>>> page with a negative _pobjects_ has the value 0xdead0004, and the next
>>>>> page looks good (_pobjects_ is 1).
>>>>>
>>>>> For the fix, I wanted to call this_cpu_cmpxchg_double with
>>>>> oldpage->pobjects, but failed due to the size difference between
>>>>> oldpage->pobjects and cpu_slab->partial. So I changed to calling
>>>>> this_cpu_cmpxchg_double with _tid_. I don't really need to guarantee that
>>>>> no alloc/free happens in between; I just want to make sure the first slab
>>>>> did experience a remove and re-add. This patch is more a call for ideas.
>>>> Have you hit the real issue, or did you just review the code?
>>>>
>>>> I did hit the issue and fixed it, unexpectedly, with the following upstream patch:
>>>> e5d9998f3e09 ("slub: make ->cpu_partial unsigned int")
>>>>
>>> Zhong,
>>>
>>> I took a look at your upstream patch, but I am confused about how it
>>> fixes this issue.
>>>
>>> In put_cpu_partial(), the cmpxchg compares cpu_slab->partial (a page struct)
>>> instead of cpu_partial (an unsigned integer). I didn't get the
>>> point of this fix.
>> I think the patch can't prevent pobjects from being set to 0xdead0000 (the
>> high 4 bytes of LIST_POISON2).
>> But if pobjects is treated as an unsigned integer,
>>
>> 2266                pobjects = oldpage->pobjects;
>> 2267                pages = oldpage->pages;
>> 2268                if (drain && pobjects > s->cpu_partial) {
>> 2269                        unsigned long flags;
>>
> Ehh..., you mean (0xdead0000 > 0x02) ?

Yes.

> This is really a bad thing, if it works around the problem like this.

It does.
> I strongly don't agree this is a *fix*. This is too tricky.

It's tricky. I don't know what purpose the patch went in for. The commit
message just said _pobjects_ should be "unsigned"...

thanks,
wengang

>> line 2268 will be true in put_cpu_partial(), thus the code goes to
>> unfreeze_partials(). This way the slabs in the cpu partial list can be moved
>> to kmem_cache_node and then freed. So it fixes (or say, works around) the
>> problem I see here (a huge number of empty slabs staying in the cpu partial
>> list).
>>
>> thanks
>> wengang
>>
>>>> Thanks,
>>>> zhong jiang
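
The exchange above can be illustrated outside the kernel. Below is a minimal
userspace sketch, not the kernel's struct page or put_cpu_partial(): struct
fake_page and its fields are stand-ins, and it assumes a little-endian 64-bit
machine. It shows (a) how LIST_POISON2 written through lru.prev surfaces as
0xdead0000 in the int field that shares its high 4 bytes, and (b) why the
"pobjects > s->cpu_partial" test flips from false to true once cpu_partial is
an unsigned int, which is how commit e5d9998f3e09 ends up masking the symptom.

/*
 * Minimal userspace sketch -- NOT the kernel's struct page or
 * put_cpu_partial().  The union below only mimics how pages/pobjects
 * overlay the bytes of lru.prev, so list poisoning can leak 0xdead0000
 * into pobjects on a little-endian 64-bit machine.
 */
#include <stdio.h>

struct fake_page {
	union {
		struct {                /* stand-in for struct list_head lru */
			void *next;
			void *prev;
		} lru;
		struct {                /* stand-in for the SLUB per-cpu partial view */
			struct fake_page *next;
			int pages;      /* overlays the low 4 bytes of lru.prev  */
			int pobjects;   /* overlays the high 4 bytes of lru.prev */
		};
	};
};

int main(void)
{
	struct fake_page page = { 0 };
	int cpu_partial_signed = 2;             /* old: int cpu_partial          */
	unsigned int cpu_partial_unsigned = 2;  /* e5d9998f3e09: unsigned int    */

	/* list_del() poisons the prev pointer with LIST_POISON2. */
	page.lru.prev = (void *)0xdead000000000200UL;

	/* The high 4 bytes of the poison show up in pobjects. */
	printf("pobjects = 0x%x (%d)\n", page.pobjects, page.pobjects);

	/* Signed compare: a negative pobjects never exceeds cpu_partial,
	 * so the drain/unfreeze path would not be taken. */
	printf("signed:   drain? %d\n", page.pobjects > cpu_partial_signed);

	/* Unsigned compare: pobjects is implicitly converted to
	 * 0xdead0000u, which is > 2u, so the partial list gets drained. */
	printf("unsigned: drain? %d\n", page.pobjects > cpu_partial_unsigned);

	return 0;
}

Compiled with gcc on x86-64, this should print pobjects = 0xdead0000
(-559087616), "signed: drain? 0" and "unsigned: drain? 1", matching the
"(0xdead0000 > 0x02)" point made in the thread: the unsigned comparison makes
line 2268 take the drain branch, which is why the upstream patch hides the
pile-up of empty slabs without preventing the poisoned read itself.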