Date: Thu, 22 Nov 2018 00:36:56 +0000
From: Wei Yang
To: Wengang Wang
Cc: Wei Yang, cl@linux.com, penberg@kernel.org, rientjes@google.com,
	iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: use this_cpu_cmpxchg_double in put_cpu_partial
Message-ID: <20181122003656.wmaoncvgjhlnei5m@master>
References: <20181117013335.32220-1-wen.gang.wang@oracle.com>
	<20181118010229.esa32zk5hpob67y7@master>
	<20181121030241.h7rgyjtlfcnm3hki@master>
	<9e238df6-d018-68b8-1c79-0c248abf0804@oracle.com>
In-Reply-To: <9e238df6-d018-68b8-1c79-0c248abf0804@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Nov 20, 2018 at 07:18:13PM -0800, Wengang Wang wrote:
>Hi Wei,
>
>I think you will receive my reply to Zhong, but I am copying my comments on
>that patch here (again):
>
>Copy starts ==>
>
>I am not sure if the patch you mentioned intended to fix the problem here.
>With that patch the negative page->pobjects would become a large positive
>value, so it will win the compare with s->cpu_partial and go ahead to
>unfreeze the partial slabs.
>Though it may not be a perfect fix for this issue, it really fixes (or
>works around) the issue here.
>I'd like to skip my patch.
>
><=== Copy ends

Thanks. I still don't get the point. Let's see whether I receive your reply
to that thread.

>
>thanks,
>
>wengang
>
>
>On 2018/11/20 19:02, Wei Yang wrote:
>> On Tue, Nov 20, 2018 at 09:58:58AM -0800, Wengang Wang wrote:
>> > Hi Wei,
>> >
>> >
>> > On 2018/11/17 17:02, Wei Yang wrote:
>> > > On Fri, Nov 16, 2018 at 05:33:35PM -0800, Wengang Wang wrote:
>> > > > The this_cpu_cmpxchg makes the do-while loop pass as long as the
>> > > > s->cpu_slab->partial as the same value. It doesn't care what happened to
>> > > > that slab. Interrupt is not disabled, and new alloc/free can happen in the
>> > > Well, I seem to understand your description.
>> > >
>> > > There are two slabs:
>> > >
>> > > * one in which put_cpu_partial() is trying to free an object
>> > > * one which is the first slab in the cpu_partial list
>> > >
>> > > The tricky case is that the first slab in the cpu_partial list we
>> > > reference can change, since interrupts are not disabled.
>> > Yes, two slabs are involved here, just as you said above.
>> > And yes, the case is really tricky, but it's there.
>> >
>> > > > interrupt handlers. Theoretically, after we have a reference to the it,
>> > >                                                                   ^^^
>> > > one word too many?
>> > Sorry, "the" should not be there.
>> >
>> > > > stored in _oldpage_, the first slab on the partial list on this CPU can be
>> > >                                             ^^^
>> > > One little suggestion here: maybe using cpu_partial would be easier to
>> > > understand. I confused this with the partial list in kmem_cache_node at
>> > > first. :-)
>> > Right, making it easy for others to understand is very important. I just
>> > meant cpu_partial.
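[For readers following along: the do-while under discussion can be sketched
outside the kernel. Below is a minimal user-space analogue, not the actual
slub code; the names `page_stub`, `partial_head` and `push_partial` are made
up for the illustration. The point is that the compare-exchange only checks
that the head pointer still holds the same value, so it cannot tell whether
the old head was removed and re-added in between, which is the window the
thread describes.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct page_stub {
    struct page_stub *next;   /* analogous to page->next in the partial list */
    int pobjects;             /* analogous to page->pobjects */
};

/* One list head standing in for this CPU's cpu_slab->partial. */
static _Atomic(struct page_stub *) partial_head;

/*
 * Push a page, accumulating pobjects from the old head, the way
 * put_cpu_partial() does. The compare-exchange succeeds as long as the
 * head pointer is unchanged -- it cannot detect that the old head was
 * removed and re-added (possibly with poisoned fields) in the meantime.
 */
static void push_partial(struct page_stub *page, int objects)
{
    struct page_stub *oldpage;
    do {
        oldpage = atomic_load(&partial_head);
        page->next = oldpage;
        page->pobjects = (oldpage ? oldpage->pobjects : 0) + objects;
    } while (!atomic_compare_exchange_weak(&partial_head, &oldpage, page));
}
```

If an interrupt removes and re-adds the old head between the load and the
compare-exchange, the pointer compare still matches, even though
oldpage->pobjects was read from a page in a different state.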
>> >
>> > > > moved to kmem_cache_node and then moved to different kmem_cache_cpu and
>> > > > then somehow can be added back as head to partial list of current
>> > > > kmem_cache_cpu, though that is a very rare case. If that rare case really
>> > > Actually, no matter what happens after the removal of the first slab in
>> > > cpu_partial, it would lead to a problem.
>> > Maybe you are right; what I see is the problem with page->pobjects.
>> >
>> > > > happened, the reading of oldpage->pobjects may get a 0xdead0000
>> > > > unexpectedly, stored in _pobjects_, if the reading happens just after
>> > > > another CPU removed the slab from kmem_cache_node, setting lru.prev to
>> > > > LIST_POISON2 (0xdead000000000200). The wrong (negative) _pobjects_ then
>> > > > prevents slabs from being moved to kmem_cache_node and finally freed.
>> > > >
>> > > > We see in a vmcore that there are 375210 slabs kept in the partial list
>> > > > of one kmem_cache_cpu, but only 305 in-use objects in the same list for
>> > > > the kmalloc-2048 cache. We see negative values for page.pobjects; the
>> > > > last page with a negative _pobjects_ has the value 0xdead0004, and the
>> > > > next page looks good (_pobjects_ is 1).
>> > > >
>> > > > For the fix, I wanted to call this_cpu_cmpxchg_double with
>> > > > oldpage->pobjects, but failed due to the size difference between
>> > > > oldpage->pobjects and cpu_slab->partial. So I changed to calling
>> > > > this_cpu_cmpxchg_double with _tid_. I don't really need no alloc/free to
>> > > > happen in between; I just want to make sure the first slab did experience
>> > > > a remove and re-add. This patch is more a call for ideas.
>> > > Maybe not an exact solution.
>> > >
>> > > I took a look at the code and change log.
>> > >
>> > > _tid_ is introduced by commit 8a5ec0ba42c4 ('Lockless (and preemptless)
>> > > fastpaths for slub'), which is used to guard cpu_freelist, while we don't
>> > > modify _tid_ when cpu_partial changes.
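[Aside for readers: the 0xdead0000 reading described above follows from field
overlap in struct page -- the slub (next, pages, pobjects) fields share
storage with the lru list_head, so on a little-endian 64-bit machine
pobjects lines up with the high 32 bits of lru.prev. The union below is an
illustration of that aliasing under those assumptions, not the real
struct page definition; `page_overlay` and `poisoned_pobjects` are invented
names.]

```c
#include <assert.h>
#include <stdint.h>

/* list_del() poison value for ->prev (include/linux/poison.h, with the
 * default POISON_POINTER_DELTA of 0). */
#define LIST_POISON2 0xdead000000000200ULL

/* Rough stand-in for the union inside struct page: the slub fields
 * overlay the lru list_head. Little-endian layout assumed. */
union page_overlay {
    struct {
        uint64_t lru_next;
        uint64_t lru_prev;    /* becomes LIST_POISON2 after list_del() */
    } lru;
    struct {
        uint64_t next;
        int pages;            /* low 32 bits of lru_prev */
        int pobjects;         /* high 32 bits of lru_prev */
    } slub;
};

/* What the accumulation in put_cpu_partial() computes when it races
 * with a list_del() on another CPU. */
static int poisoned_pobjects(int new_objects)
{
    union page_overlay page;
    page.lru.lru_prev = LIST_POISON2;          /* slab was just removed */
    return page.slub.pobjects + new_objects;   /* reads 0xdead0000 */
}
```

The 0xdead0004 seen in the vmcore is exactly this pattern plus the four
objects accounted on the racing push, and since pobjects is a signed int
the value compares as negative.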
>> > >
>> > > Maybe we need another _tid_ for cpu_partial?
>> > Right, _tid_ changes later than cpu_partial changes.
>> >
>> > As pointed out by Zhong Jiang, the pobjects issue is fixed by commit
>> Where did you discuss this issue? Any reference I could take a look at?
>>
>> > e5d9998f3e09 (not sure if by side effect, see my reply there),
>> I took a look at commit e5d9998f3e09 ('slub: make ->cpu_partial
>> unsigned int'), but don't see the relationship between them.
>>
>> Would you mind showing me a link or cc'ing me in case you have further
>> discussion?
>>
>> Thanks.
>>
>> > I'd skip this patch. If we find other problems regarding the change of
>> > cpu_partial, let's fix them. What do you think?
>> >
>> > thanks,
>> > wengang

-- 
Wei Yang
Help you, Help me
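[Postscript on why commit e5d9998f3e09 ('slub: make ->cpu_partial unsigned
int') can paper over the accounting bug: the unfreeze check boils down to
"pobjects > s->cpu_partial", and once cpu_partial is unsigned, the usual
arithmetic conversions promote a poisoned-negative pobjects to a huge
unsigned value that wins the comparison. A sketch of just that comparison,
using the 0xdead0004 value from the vmcore; this is not the kernel code
itself, and the function names are invented.]

```c
#include <assert.h>
#include <stdbool.h>

/* The check in put_cpu_partial(), with cpu_partial as a signed int
 * (pre-e5d9998f3e09): a poisoned negative pobjects never triggers
 * the unfreeze, so slabs pile up on the per-cpu partial list. */
static bool unfreeze_signed(int pobjects, int cpu_partial)
{
    return pobjects > cpu_partial;      /* signed compare: negative loses */
}

/* The same check with cpu_partial as unsigned int (post-e5d9998f3e09):
 * pobjects is converted to unsigned, so the poisoned value becomes
 * huge and the unfreeze path runs after all. */
static bool unfreeze_unsigned(int pobjects, unsigned int cpu_partial)
{
    return pobjects > cpu_partial;      /* unsigned compare: negative wins */
}
```

This matches Wengang's observation that with the commit the negative
page->pobjects becomes "a large positive value" and wins the compare with
s->cpu_partial -- a side effect rather than a direct fix.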