From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mike Marciniszyn,
 Sebastian Sanchez, Dennis Dalessandro, Doug Ledford
Subject: [PATCH 4.17 093/220] IB/hfi1: Optimize kthread pointer locking when queuing CQ entries
Date: Sun, 1 Jul 2018 18:21:57 +0200
Message-Id: <20180701160912.298804652@linuxfoundation.org>
In-Reply-To: <20180701160908.272447118@linuxfoundation.org>
References: <20180701160908.272447118@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sebastian Sanchez

commit af8aab71370a692eaf7e7969ba5b1a455ac20113 upstream.
All threads queuing CQ entries on different CQs are unnecessarily
synchronized by a spin lock to check if the CQ kthread worker hasn't
been destroyed before queuing a CQ entry. The lock used in
6efaf10f163d ("IB/rdmavt: Avoid queuing work into a destroyed cq
kthread worker") is a device global lock and will have poor performance
at scale as completions are entered from a large number of CPUs.

Convert to use RCU where the read side of RCU is rvt_cq_enter() to
determine that the worker is alive prior to triggering the completion
event. Apply write side RCU semantics in rvt_driver_cq_init() and
rvt_cq_exit().

Fixes: 6efaf10f163d ("IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker")
Cc: # 4.14.x
Reviewed-by: Mike Marciniszyn
Signed-off-by: Sebastian Sanchez
Signed-off-by: Dennis Dalessandro
Signed-off-by: Doug Ledford
Signed-off-by: Greg Kroah-Hartman
---
 drivers/infiniband/sw/rdmavt/cq.c |   31 +++++++++++++++++++------------
 include/rdma/rdma_vt.h            |    2 +-
 2 files changed, 20 insertions(+), 13 deletions(-)

--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -120,17 +120,20 @@ void rvt_cq_enter(struct rvt_cq *cq, str
 	if (cq->notify == IB_CQ_NEXT_COMP ||
 	    (cq->notify == IB_CQ_SOLICITED &&
 	     (solicited || entry->status != IB_WC_SUCCESS))) {
+		struct kthread_worker *worker;
+
 		/*
 		 * This will cause send_complete() to be called in
 		 * another thread.
 		 */
-		spin_lock(&cq->rdi->n_cqs_lock);
-		if (likely(cq->rdi->worker)) {
+		rcu_read_lock();
+		worker = rcu_dereference(cq->rdi->worker);
+		if (likely(worker)) {
 			cq->notify = RVT_CQ_NONE;
 			cq->triggered++;
-			kthread_queue_work(cq->rdi->worker, &cq->comptask);
+			kthread_queue_work(worker, &cq->comptask);
 		}
-		spin_unlock(&cq->rdi->n_cqs_lock);
+		rcu_read_unlock();
 	}
 
 	spin_unlock_irqrestore(&cq->lock, flags);
@@ -512,7 +515,7 @@ int rvt_driver_cq_init(struct rvt_dev_in
 	int cpu;
 	struct kthread_worker *worker;
 
-	if (rdi->worker)
+	if (rcu_access_pointer(rdi->worker))
 		return 0;
 
 	spin_lock_init(&rdi->n_cqs_lock);
@@ -524,7 +527,7 @@ int rvt_driver_cq_init(struct rvt_dev_in
 		return PTR_ERR(worker);
 
 	set_user_nice(worker->task, MIN_NICE);
-	rdi->worker = worker;
+	RCU_INIT_POINTER(rdi->worker, worker);
 	return 0;
 }
 
@@ -536,15 +539,19 @@ void rvt_cq_exit(struct rvt_dev_info *rd
 {
 	struct kthread_worker *worker;
 
-	/* block future queuing from send_complete() */
-	spin_lock_irq(&rdi->n_cqs_lock);
-	worker = rdi->worker;
+	if (!rcu_access_pointer(rdi->worker))
+		return;
+
+	spin_lock(&rdi->n_cqs_lock);
+	worker = rcu_dereference_protected(rdi->worker,
+					   lockdep_is_held(&rdi->n_cqs_lock));
 	if (!worker) {
-		spin_unlock_irq(&rdi->n_cqs_lock);
+		spin_unlock(&rdi->n_cqs_lock);
 		return;
 	}
-	rdi->worker = NULL;
-	spin_unlock_irq(&rdi->n_cqs_lock);
+	RCU_INIT_POINTER(rdi->worker, NULL);
+	spin_unlock(&rdi->n_cqs_lock);
+	synchronize_rcu();
 	kthread_destroy_worker(worker);
 }
 
--- a/include/rdma/rdma_vt.h
+++ b/include/rdma/rdma_vt.h
@@ -402,7 +402,7 @@ struct rvt_dev_info {
 	spinlock_t pending_lock; /* protect pending mmap list */
 
 	/* CQ */
-	struct kthread_worker *worker; /* per device cq worker */
+	struct kthread_worker __rcu *worker; /* per device cq worker */
 	u32 n_cqs_allocated;    /* number of CQs allocated for device */
 	spinlock_t n_cqs_lock;  /* protect count of in use cqs */