From: Håkon Bugge
To: Santosh Shilimkar, "David S. Miller"
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
    rds-devel@oss.oracle.com, linux-kernel@vger.kernel.org,
    knut.omang@oracle.com
Subject: [PATCH net v2] rds: Fix incorrect statistics counting
Date: Wed, 6 Sep 2017 18:35:51 +0200
Message-Id: <20170906163551.20387-1-Haakon.Bugge@oracle.com>

In rds_send_xmit() there is logic to batch the sends. However, if
another thread has acquired the send lock and incremented send_gen,
that is considered a race and we yield. The code incrementing the
s_send_lock_queue_raced statistics counter did not count this event
correctly: the counter was incremented when send_gen was unchanged,
i.e. in the non-raced case.

This commit counts the race condition correctly.

Changes from v1:
  - Removed check for *someone_on_xmit()*
  - Fixed incorrect indentation

Signed-off-by: Håkon Bugge
Reviewed-by: Knut Omang
---
 net/rds/send.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 058a407..b52cdc8 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -428,14 +428,18 @@ int rds_send_xmit(struct rds_conn_path *cp)
 	 * some work and we will skip our goto
 	 */
 	if (ret == 0) {
+		bool raced;
+
 		smp_mb();
+		raced = send_gen != READ_ONCE(cp->cp_send_gen);
+
 		if ((test_bit(0, &conn->c_map_queued) ||
-		     !list_empty(&cp->cp_send_queue)) &&
-		    send_gen == READ_ONCE(cp->cp_send_gen)) {
-			rds_stats_inc(s_send_lock_queue_raced);
+		     !list_empty(&cp->cp_send_queue)) && !raced) {
			if (batch_count < send_batch_count)
				goto restart;
			queue_delayed_work(rds_wq, &cp->cp_send_w, 1);
+		} else if (raced) {
+			rds_stats_inc(s_send_lock_queue_raced);
 		}
 	}
 out:
-- 
2.9.3
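
[Editor's note] For readers unfamiliar with the pattern this patch
relies on, here is a minimal userspace sketch of the generation-counter
race check. C11 atomics stand in for the kernel's smp_mb() and
READ_ONCE(); struct path, try_xmit() and the other names are
hypothetical, not part of RDS. The idea: record the generation counter
while holding the send lock, drop the lock, then re-read the counter; a
changed value means another thread took the lock in between, which is
exactly the "raced" case the patch now counts.

	/* Sketch only; all names here are made up. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct path {
		atomic_flag send_lock;	/* stands in for the per-path send lock */
		atomic_uint send_gen;	/* bumped by every thread taking the lock */
	};

	static void try_xmit(struct path *p)
	{
		unsigned int send_gen;
		bool raced;

		if (atomic_flag_test_and_set(&p->send_lock))
			return;	/* another thread is already transmitting */

		/* Claim our generation; fetch_add returns the old value. */
		send_gen = atomic_fetch_add(&p->send_gen, 1) + 1;

		/* ... drain the send queue here ... */

		atomic_flag_clear(&p->send_lock);

		/*
		 * The seq_cst load plays the role of the smp_mb() +
		 * READ_ONCE() pair in rds_send_xmit(): if send_gen has
		 * moved on, another thread acquired the lock after us
		 * and will do (or did) the remaining work, so we yield.
		 */
		raced = send_gen != atomic_load(&p->send_gen);
		if (raced)
			printf("raced: bump s_send_lock_queue_raced\n");
		else
			printf("not raced: restart batch or requeue work\n");
	}

	int main(void)
	{
		struct path p = { .send_lock = ATOMIC_FLAG_INIT, .send_gen = 0 };

		try_xmit(&p);
		return 0;
	}

In the kernel the re-read must be ordered after the lock release, hence
the explicit smp_mb() before the READ_ONCE(); the sequentially
consistent atomics above give the sketch the same ordering guarantee.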