From: Waiman Long
To: Catalin Marinas, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song,
    Waiman Long
Subject: [PATCH] mm/kmemleak: Prevent soft lockup in kmemleak_scan()'s object iteration loops
Date: Thu, 20 Oct 2022 13:56:19 -0400
Message-Id: <20221020175619.366317-1-longman@redhat.com>

Commit 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first object
iteration loop of kmemleak_scan()") adds cond_resched() in the first
object iteration loop of kmemleak_scan().
However, it turns out that the 2nd object iteration loop can still
cause a soft lockup in some cases. So add a cond_resched() call in the
2nd and 3rd loops as well to prevent that and for completeness.

Signed-off-by: Waiman Long
---
 mm/kmemleak.c | 61 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 19 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 1eddc0132f7f..613d34b57c5d 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1463,6 +1463,27 @@ static void scan_gray_list(void)
 	WARN_ON(!list_empty(&gray_list));
 }
 
+/*
+ * Conditionally call resched() in an object iteration loop while making sure
+ * that the given object won't go away without RCU read lock by performing a
+ * get_object() if !pinned.
+ *
+ * Return: false if can't do a cond_resched() due to get_object() failure
+ *	   true otherwise
+ */
+static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned)
+{
+	if (!pinned && !get_object(object))
+		return false;
+
+	rcu_read_unlock();
+	cond_resched();
+	rcu_read_lock();
+	if (!pinned)
+		put_object(object);
+	return true;
+}
+
 /*
  * Scan data sections and all the referenced memory blocks allocated via the
  * kernel's standard allocators. This function must be called with the
@@ -1474,7 +1495,7 @@ static void kmemleak_scan(void)
 	struct zone *zone;
 	int __maybe_unused i;
 	int new_leaks = 0;
-	int loop1_cnt = 0;
+	int loop_cnt = 0;
 
 	jiffies_last_scan = jiffies;
 
@@ -1483,7 +1504,6 @@ static void kmemleak_scan(void)
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		bool obj_pinned = false;
 
-		loop1_cnt++;
 		raw_spin_lock_irq(&object->lock);
 #ifdef DEBUG
 		/*
@@ -1517,24 +1537,11 @@ static void kmemleak_scan(void)
 		raw_spin_unlock_irq(&object->lock);
 
 		/*
-		 * Do a cond_resched() to avoid soft lockup every 64k objects.
-		 * Make sure a reference has been taken so that the object
-		 * won't go away without RCU read lock.
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
 		 */
-		if (!(loop1_cnt & 0xffff)) {
-			if (!obj_pinned && !get_object(object)) {
-				/* Try the next object instead */
-				loop1_cnt--;
-				continue;
-			}
-
-			rcu_read_unlock();
-			cond_resched();
-			rcu_read_lock();
-
-			if (!obj_pinned)
-				put_object(object);
-		}
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, obj_pinned))
+			loop_cnt--;	/* Try again on next object */
 	}
 	rcu_read_unlock();
 
@@ -1601,7 +1608,15 @@ static void kmemleak_scan(void)
 	 * scan and color them gray until the next scan.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in
@@ -1635,7 +1650,15 @@ static void kmemleak_scan(void)
 	 * Scanning result reporting.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in
-- 
2.31.1
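
A note on the throttling idiom for readers following along: the test
!(++loop_cnt & 0xffff) is true exactly once every 65536 iterations,
because the increment wraps the counter's low 16 bits and the mask
discards everything above them. The stand-alone user-space sketch below
only illustrates that arithmetic; the object count and the printf() are
made-up placeholders for kmemleak's object list and the cond_resched()
call, not part of the patch.

#include <stdio.h>

int main(void)
{
	unsigned int loop_cnt = 0;
	unsigned long i, nr_objects = 200000;	/* made-up object count */

	for (i = 0; i < nr_objects; i++) {
		/* the per-object scanning work would go here */

		/*
		 * True once per 64k iterations: the low 16 bits of the
		 * counter wrap and the 0xffff mask keeps only those bits.
		 */
		if (!(++loop_cnt & 0xffff))
			printf("yield point at object %lu\n", i);
	}
	return 0;
}

With the made-up count of 200000 the message prints at objects 65535 and
131071. In kmemleak_scan() the same test bounds how often the RCU read
lock is dropped, and kmemleak_cond_resched() pins the current object via
get_object() first so that it cannot go away while the lock is released.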