Subject: Re: [PATCH] mm/list_lru.c: use cond_resched_lock() for nlru->lock
From: Sahitya Tummala <stummala@codeaurora.org>
To: Andrew Morton
Cc: Alexander Polakov, Vladimir Davydov, Jan Kara, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Date: Fri, 16 Jun 2017 20:14:00 +0530
Message-ID: <3c478a65-6cd1-0ee9-2470-7ca368dd88bf@codeaurora.org>
In-Reply-To: <20170615140523.76f8fc3ca21dae3704f06a56@linux-foundation.org>
References: <1497228440-10349-1-git-send-email-stummala@codeaurora.org> <20170615140523.76f8fc3ca21dae3704f06a56@linux-foundation.org>

On 6/16/2017 2:35 AM, Andrew Morton wrote:
>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>> index 5d8dffd..1af0709 100644
>> --- a/mm/list_lru.c
>> +++ b/mm/list_lru.c
>> @@ -249,6 +249,8 @@ restart:
>>  		default:
>>  			BUG();
>>  		}
>> +		if (cond_resched_lock(&nlru->lock))
>> +			goto restart;
>>  	}
>>
>>  	spin_unlock(&nlru->lock);
>
> This is rather worrying.
>
> a) Why are we spending so long holding that lock that this is occurring?
At the time of the crash, I see that __list_lru_walk_one() shows the number
of entries isolated as 1774475, with nr_items still pending at 130748. On my
system, __list_lru_walk_one() takes around 75ms to walk 100000 dentries. So
for the total of 1900000 dentries in the issue scenario, it will take up to
1425ms, which explains why the spin lockup condition got hit on the other
CPU. It looks like __list_lru_walk_one() is expected to take more time when
more dentries are present, and I think having that many dentries is a valid
scenario.

> b) With this patch, we're restarting the entire scan.  Are there
>    situations in which this loop will never terminate, or will take a
>    very long time?  Suppose that this process is getting rescheds
>    blasted at it for some reason?

In the above scenario, I observed that the dentry entries are removed from
the LRU list every time, i.e. LRU_REMOVED is returned from the isolate
(dentry_lru_isolate()) callback. I don't know of a case where we would skip
several entries in the lru list and restart several times due to this
cond_resched_lock(). That can happen even with the existing code, if
LRU_RETRY is returned often from the isolate callback.

> IOW this looks like a bit of a band-aid and a deeper analysis and
> understanding might be needed.

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.