From: Håkon Bugge
To: Yishai Hadas, Doug Ledford, Jason Gunthorpe, jackm@dev.mellanox.co.il, majd@mellanox.com
Cc: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] IB/mlx4: Increase the timeout for CM cache
Date: Sun, 17 Feb 2019 15:45:12 +0100
Message-Id: <20190217144512.1171546-1-haakon.bugge@oracle.com>

Using CX-3 virtual functions, either from a bare-metal machine or
pass-through from a VM, MAD packets are proxied through the PF driver.
Since the VF drivers have separate namespaces for MAD Transaction Ids
(TIDs), the PF driver has to re-map the TIDs and keep the bookkeeping
in a cache. Following the RDMA Connection Manager (CM) protocol, it is
clear when an entry has to be evicted from the cache.
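The bookkeeping described above can be pictured as a map from a VF-local CM
id to a PF-level id, with entries evicted either by the CM protocol or, as a
fallback, by a cleanup timeout. A minimal Python sketch of that idea (all
names hypothetical; this is not the mlx4 code, which keeps the entries in an
rb-tree and uses delayed work for the timeout):

```python
import time

# Analogue of CM_CLEANUP_CACHE_TIMEOUT; the patch raises the kernel
# value from 5 to 30 seconds.
CLEANUP_CACHE_TIMEOUT = 5.0

class TidCache:
    """Maps (slave, sl_cm_id) to a PF-level CM id, with timeout eviction."""

    def __init__(self, timeout=CLEANUP_CACHE_TIMEOUT, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.entries = {}       # (slave, sl_cm_id) -> (pv_cm_id, last_used)
        self.next_pv_id = 0

    def map(self, slave, sl_cm_id):
        """Return a PF-level id, creating a fresh mapping if needed."""
        key = (slave, sl_cm_id)
        if key in self.entries:
            pv_id, _ = self.entries[key]
        else:
            pv_id = self.next_pv_id
            self.next_pv_id += 1
        self.entries[key] = (pv_id, self.clock())   # refresh timestamp
        return pv_id

    def lookup(self, slave, sl_cm_id):
        """Return the PF-level id, or None if never created or expired."""
        self.expire()
        entry = self.entries.get((slave, sl_cm_id))
        return entry[0] if entry else None

    def expire(self):
        """Drop entries unused within the timeout (peer presumed gone)."""
        now = self.clock()
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v[1] < self.timeout}
```

In this model, a lookup that returns None after expiry corresponds to the
situation behind the "sl_cm_id ... is NULL!" pr_debug message quoted in the
commit message: the cache entry was wiped before a late DREQ/DREP arrived.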
But life is not perfect; remote peers may die or be rebooted. Hence,
there is a timeout to wipe out a cache entry, after which the PF driver
assumes the remote peer has gone. During workloads where a high number
of QPs are destroyed concurrently, an excessive number of CM DREQ
retries has been observed.

The problem can be demonstrated in a bare-metal environment, where two
nodes have instantiated 8 VFs each. This is using dual-ported HCAs, so
we have 16 vPorts per physical server. 64 processes are associated
with each vPort and create and destroy one QP for each of the remote
64 processes. That is, 1024 QPs per vPort, all in all 16K QPs. The QPs
are created/destroyed using the CM.

When tearing down these 16K QPs, excessive CM DREQ retries (and
duplicates) are observed. With some cat/paste/awk wizardry on the
infiniband_cm sysfs, we observe, summed over the 16 vPorts on one of
the nodes:

cm_rx_duplicates:
      dreq  2102
cm_rx_msgs:
      drep  1989
      dreq  6195
      rep   3968
      req   4224
      rtu   4224
cm_tx_msgs:
      drep  4093
      dreq 27568
      rep   4224
      req   3968
      rtu   3968
cm_tx_retries:
      dreq 23469

Note that the active/passive side is equally distributed between the
two nodes.

Enabling pr_debug in cm.c gives tons of:

[171778.814239] mlx4_ib_multiplex_cm_handler: id{slave: 1,sl_cm_id: 0xd393089f} is NULL!

By increasing the CM_CLEANUP_CACHE_TIMEOUT from 5 to 30 seconds, the
tear-down phase of the application is reduced from approximately 90 to
50 seconds. Retries/duplicates are also significantly reduced:

cm_rx_duplicates:
      dreq  2460
[]
cm_tx_retries:
      dreq  3010
      req     47

Increasing the timeout further didn't help, as these duplicates and
retries stem from a too-short CMA timeout, which was 20 (~4 seconds)
on the systems. By increasing the CMA timeout to 22 (~17 seconds), the
numbers fell to about 10 for both of them. Adjustment of the CMA
timeout is not part of this commit.
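The CMA timeout values mentioned above (20 and 22) are exponents, not
seconds: IB CM timeouts are encoded as 4.096 µs × 2^exponent. A quick
check that this encoding reproduces the numbers in the text:

```python
def cm_timeout_seconds(exponent):
    """Decode an IBTA CM timeout field: 4.096 us * 2**exponent."""
    return 4.096e-6 * 2 ** exponent

# Exponent 20 -> ~4 s, exponent 22 -> ~17 s, matching the commit message.
print(cm_timeout_seconds(20))  # -> 4.294967296
print(cm_timeout_seconds(22))  # -> 17.179869184
```

Each +1 on the exponent doubles the timeout, which is why the jump from
20 to 22 quadruples the wait from ~4 to ~17 seconds.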
Signed-off-by: Håkon Bugge
---
v1 -> v2:
   * Reworded commit message to reflect the new test-setup using
     multiple VFs
---
 drivers/infiniband/hw/mlx4/cm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index fedaf8260105..8c79a480f2b7 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -39,7 +39,7 @@

 #include "mlx4_ib.h"

-#define CM_CLEANUP_CACHE_TIMEOUT (5 * HZ)
+#define CM_CLEANUP_CACHE_TIMEOUT (30 * HZ)

 struct id_map_entry {
 	struct rb_node node;
--
2.20.1