From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Zhu Yanjun, Santosh Shilimkar, "David S. Miller"
Miller" Subject: [PATCH 4.19 09/51] net: rds: fix memory leak in rds_ib_flush_mr_pool Date: Sun, 9 Jun 2019 18:41:50 +0200 Message-Id: <20190609164127.631681340@linuxfoundation.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190609164127.123076536@linuxfoundation.org> References: <20190609164127.123076536@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Zhu Yanjun [ Upstream commit 85cb928787eab6a2f4ca9d2a798b6f3bed53ced1 ] When the following tests last for several hours, the problem will occur. Server: rds-stress -r 1.1.1.16 -D 1M Client: rds-stress -r 1.1.1.14 -s 1.1.1.16 -D 1M -T 30 The following will occur. " Starting up.... tsks tx/s rx/s tx+rx K/s mbi K/s mbo K/s tx us/c rtt us cpu % 1 0 0 0.00 0.00 0.00 0.00 0.00 -1.00 1 0 0 0.00 0.00 0.00 0.00 0.00 -1.00 1 0 0 0.00 0.00 0.00 0.00 0.00 -1.00 1 0 0 0.00 0.00 0.00 0.00 0.00 -1.00 " >From vmcore, we can find that clean_list is NULL. >From the source code, rds_mr_flushd calls rds_ib_mr_pool_flush_worker. Then rds_ib_mr_pool_flush_worker calls " rds_ib_flush_mr_pool(pool, 0, NULL); " Then in function " int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool, int free_all, struct rds_ib_mr **ibmr_ret) " ibmr_ret is NULL. In the source code, " ... list_to_llist_nodes(pool, &unmap_list, &clean_nodes, &clean_tail); if (ibmr_ret) *ibmr_ret = llist_entry(clean_nodes, struct rds_ib_mr, llnode); /* more than one entry in llist nodes */ if (clean_nodes->next) llist_add_batch(clean_nodes->next, clean_tail, &pool->clean_list); ... " When ibmr_ret is NULL, llist_entry is not executed. clean_nodes->next instead of clean_nodes is added in clean_list. So clean_nodes is discarded. It can not be used again. The workqueue is executed periodically. So more and more clean_nodes are discarded. Finally the clean_list is NULL. Then this problem will occur. Fixes: 1bc144b62524 ("net, rds, Replace xlist in net/rds/xlist.h with llist") Signed-off-by: Zhu Yanjun Acked-by: Santosh Shilimkar Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/rds/ib_rdma.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) --- a/net/rds/ib_rdma.c +++ b/net/rds/ib_rdma.c @@ -428,12 +428,14 @@ int rds_ib_flush_mr_pool(struct rds_ib_m wait_clean_list_grace(); list_to_llist_nodes(pool, &unmap_list, &clean_nodes, &clean_tail); - if (ibmr_ret) + if (ibmr_ret) { *ibmr_ret = llist_entry(clean_nodes, struct rds_ib_mr, llnode); - + clean_nodes = clean_nodes->next; + } /* more than one entry in llist nodes */ - if (clean_nodes->next) - llist_add_batch(clean_nodes->next, clean_tail, &pool->clean_list); + if (clean_nodes) + llist_add_batch(clean_nodes, clean_tail, + &pool->clean_list); }