Subject: Re: [PATCH] mm: slub: Ensure that slab_unlock() is atomic
From: Vlastimil Babka
Date: Tue, 8 Mar 2016 16:32:29 +0100
To: Vineet Gupta, linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Noam Camus, stable@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org

On 03/08/2016 03:30 PM, Vineet Gupta wrote:
> We observed livelocks on an ARC SMP setup when running hackbench with SLUB.
> This hardware configuration lacks atomic instructions (LLOCK/SCOND), thus
> the kernel resorts to a central @smp_bitops_lock to protect any R-M-W ops
> such as test_and_set_bit().

Sounds like this architecture should then redefine __clear_bit_unlock()
and perhaps other non-atomic __X_bit() variants to be atomic, and not
defer this requirement to places that use the API? (A rough sketch of what
I mean is below the quoted patch.)

> The spinlock itself is implemented using the atomic [EX]change instruction,
> which is always available.
>
> The race happened when both cores tried to slab_lock() the same page:
>
>  c1			c0
>  -----------		-----------
>  slab_lock
> 			slab_lock
>  slab_unlock
> 			Not observing the unlock
>
> This in turn happened because slab_unlock() doesn't serialize properly
> (doesn't use an atomic clear) with a concurrently running
> slab_lock()->test_and_set_bit().
>
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: David Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc: Noam Camus
> Cc:
> Cc:
> Cc:
> Cc:
> Signed-off-by: Vineet Gupta
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d8fbd4a6ed59..b7d345a508dc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -345,7 +345,7 @@ static __always_inline void slab_lock(struct page *page)
>  static __always_inline void slab_unlock(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(PageTail(page), page);
> -	__bit_spin_unlock(PG_locked, &page->flags);
> +	bit_spin_unlock(PG_locked, &page->flags);
>  }
>
>  static inline void set_page_slub_counters(struct page *page, unsigned long counters_new)
>
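
Just to illustrate the suggestion above: a minimal, completely untested
sketch of making the "non-atomic" unlock variant fall back to the atomic
one on a configuration without LL/SC. The CONFIG_ARC_HAS_LLSC guard and
placing this in the arch's asm/bitops.h are my assumptions, not something
taken from the patch:

/*
 * Hypothetical sketch, e.g. in arch/arc/include/asm/bitops.h (assumed
 * location; CONFIG_ARC_HAS_LLSC assumed to gate LLOCK/SCOND support):
 * without LL/SC, route the "non-atomic" unlock to the fully atomic
 * clear_bit_unlock() so that it serializes against a concurrent
 * test_and_set_bit() in slab_lock().
 */
#ifndef CONFIG_ARC_HAS_LLSC
#undef __clear_bit_unlock
#define __clear_bit_unlock(nr, addr)	clear_bit_unlock(nr, addr)
#endif

That would keep the workaround in the architecture code instead of
pessimizing slab_unlock() for every other architecture.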