From: leonid.ravich@dell.com
To: james.smart@broadcom.com
Cc: lravich@gmail.com, Leonid Ravich, Christoph Hellwig, Sagi Grimberg,
    Chaitanya Kulkarni, linux-nvme@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2] nvmet-fc: associations list protected by rcu, instead of spinlock_irq where possible.
Date: Sun, 3 Jan 2021 20:12:53 +0200
Message-Id: <1609697575-103348-1-git-send-email-leonid.ravich@dell.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <20201224110542.22219-1-leonid.ravich@dell.com>
References: <20201224110542.22219-1-leonid.ravich@dell.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Leonid Ravich

Searching assoc_list is now protected by rcu_read_lock(), following the
RCU list rules, since the list is not modified in the lookup path.

The queue array embedded in nvmet_fc_tgt_assoc is likewise protected
according to the RCU dereference/assign rules.

Queue and assoc objects are freed only after an RCU grace period
(kfree_rcu()).

The tgtport lock is still taken when changing assoc_list.
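For reference, a minimal self-contained sketch of the reader/updater split
adopted here; all demo_* names are hypothetical and only illustrate the
idiom, they are not part of the driver:

/*
 * Sketch only: lock-free readers, updates still serialized by the
 * existing spinlock, objects reclaimed after a grace period.
 */
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define DEMO_NR_QUEUES	4

struct demo_queue {
	u16		qid;
	struct rcu_head	rcu;
};

struct demo_assoc {
	u64			association_id;
	struct demo_queue __rcu	*queues[DEMO_NR_QUEUES + 1];
	struct list_head	a_list;
	struct rcu_head		rcu;
};

static LIST_HEAD(demo_assoc_list);
static DEFINE_SPINLOCK(demo_lock);	/* still serializes updaters */

/* Reader: lock-free lookup under rcu_read_lock(). */
static struct demo_queue *demo_find_queue(u64 association_id, u16 qid)
{
	struct demo_assoc *assoc;
	struct demo_queue *queue = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(assoc, &demo_assoc_list, a_list) {
		if (assoc->association_id == association_id) {
			queue = rcu_dereference(assoc->queues[qid]);
			break;
		}
	}
	/*
	 * A real caller must take a reference on the object before
	 * rcu_read_unlock() (the patch uses nvmet_fc_tgt_q_get()),
	 * otherwise the pointer may be freed once the read section ends.
	 */
	rcu_read_unlock();
	return queue;
}

/* Updater: publish a queue slot for readers. */
static void demo_publish_queue(struct demo_assoc *assoc, u16 qid,
			       struct demo_queue *queue)
{
	rcu_assign_pointer(assoc->queues[qid], queue);
}

/* Updater: list changes stay under the existing lock. */
static void demo_add_assoc(struct demo_assoc *assoc)
{
	spin_lock(&demo_lock);
	list_add_tail_rcu(&assoc->a_list, &demo_assoc_list);
	spin_unlock(&demo_lock);
}

/* Updater: unlink under the lock, free only after a grace period. */
static void demo_del_assoc(struct demo_assoc *assoc)
{
	spin_lock(&demo_lock);
	list_del_rcu(&assoc->a_list);
	spin_unlock(&demo_lock);

	kfree_rcu(assoc, rcu);
}

The point of the split is that lookups no longer take the tgtport lock at
all, while list and pointer updates remain serialized by it, and objects
are only reclaimed once all RCU readers have finished.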
Reviewed-by: Eldad Zinger
Reviewed-by: Elad Grupi
Signed-off-by: Leonid Ravich
---
Changes since v1:
1) fixed style issues
2) queues array protected by RCU as well

 drivers/nvme/target/fc.c | 81 +++++++++++++++++++++++-------------------------
 1 file changed, 38 insertions(+), 43 deletions(-)

diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index cd4e73a..c14c60b 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -145,6 +145,7 @@ struct nvmet_fc_tgt_queue {
 	struct list_head	avail_defer_list;
 	struct workqueue_struct	*work_q;
 	struct kref		ref;
+	struct rcu_head		rcu;
 	struct nvmet_fc_fcp_iod	fod[];		/* array of fcp_iods */
 } __aligned(sizeof(unsigned long long));
 
@@ -167,6 +168,7 @@ struct nvmet_fc_tgt_assoc {
 	struct nvmet_fc_tgt_queue	*queues[NVMET_NR_QUEUES + 1];
 	struct kref			ref;
 	struct work_struct		del_work;
+	struct rcu_head			rcu;
 };
 
 
@@ -790,7 +792,6 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 			u16 qid, u16 sqsize)
 {
 	struct nvmet_fc_tgt_queue *queue;
-	unsigned long flags;
 	int ret;
 
 	if (qid > NVMET_NR_QUEUES)
@@ -829,9 +830,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 		goto out_fail_iodlist;
 
 	WARN_ON(assoc->queues[qid]);
-	spin_lock_irqsave(&assoc->tgtport->lock, flags);
-	assoc->queues[qid] = queue;
-	spin_unlock_irqrestore(&assoc->tgtport->lock, flags);
+	rcu_assign_pointer(assoc->queues[qid], queue);
 
 	return queue;
 
@@ -851,11 +850,8 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 {
 	struct nvmet_fc_tgt_queue *queue =
 		container_of(ref, struct nvmet_fc_tgt_queue, ref);
-	unsigned long flags;
 
-	spin_lock_irqsave(&queue->assoc->tgtport->lock, flags);
-	queue->assoc->queues[queue->qid] = NULL;
-	spin_unlock_irqrestore(&queue->assoc->tgtport->lock, flags);
+	rcu_assign_pointer(queue->assoc->queues[queue->qid], NULL);
 
 	nvmet_fc_destroy_fcp_iodlist(queue->assoc->tgtport, queue);
 
@@ -863,7 +859,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 
 	destroy_workqueue(queue->work_q);
 
-	kfree(queue);
+	kfree_rcu(queue, rcu);
 }
 
 static void
@@ -965,24 +961,23 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 	struct nvmet_fc_tgt_queue *queue;
 	u64 association_id = nvmet_fc_getassociationid(connection_id);
 	u16 qid = nvmet_fc_getqueueid(connection_id);
-	unsigned long flags;
 
 	if (qid > NVMET_NR_QUEUES)
 		return NULL;
 
-	spin_lock_irqsave(&tgtport->lock, flags);
-	list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
 		if (association_id == assoc->association_id) {
-			queue = assoc->queues[qid];
+			queue = rcu_dereference(assoc->queues[qid]);
 			if (queue &&
 			    (!atomic_read(&queue->connected) ||
 			     !nvmet_fc_tgt_q_get(queue)))
 				queue = NULL;
-			spin_unlock_irqrestore(&tgtport->lock, flags);
+			rcu_read_unlock();
 			return queue;
 		}
 	}
-	spin_unlock_irqrestore(&tgtport->lock, flags);
+	rcu_read_unlock();
 
 	return NULL;
 }
@@ -1137,7 +1132,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 		}
 		if (!needrandom) {
 			assoc->association_id = ran;
-			list_add_tail(&assoc->a_list, &tgtport->assoc_list);
+			list_add_tail_rcu(&assoc->a_list, &tgtport->assoc_list);
 		}
 		spin_unlock_irqrestore(&tgtport->lock, flags);
 	}
@@ -1167,7 +1162,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 
 	nvmet_fc_free_hostport(assoc->hostport);
 	spin_lock_irqsave(&tgtport->lock, flags);
-	list_del(&assoc->a_list);
+	list_del_rcu(&assoc->a_list);
 	oldls = assoc->rcv_disconn;
 	spin_unlock_irqrestore(&tgtport->lock, flags);
 	/* if pending Rcv Disconnect Association LS, send rsp now */
@@ -1177,7 +1172,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 	dev_info(tgtport->dev,
 		"{%d:%d} Association freed\n",
 		tgtport->fc_target_port.port_num, assoc->a_id);
-	kfree(assoc);
+	kfree_rcu(assoc, rcu);
 	nvmet_fc_tgtport_put(tgtport);
 }
 
@@ -1198,7 +1193,6 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 {
 	struct nvmet_fc_tgtport *tgtport = assoc->tgtport;
 	struct nvmet_fc_tgt_queue *queue;
-	unsigned long flags;
 	int i, terminating;
 
 	terminating = atomic_xchg(&assoc->terminating, 1);
@@ -1207,19 +1201,23 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 	if (terminating)
 		return;
 
-	spin_lock_irqsave(&tgtport->lock, flags);
+
 	for (i = NVMET_NR_QUEUES; i >= 0; i--) {
-		queue = assoc->queues[i];
-		if (queue) {
-			if (!nvmet_fc_tgt_q_get(queue))
-				continue;
-			spin_unlock_irqrestore(&tgtport->lock, flags);
-			nvmet_fc_delete_target_queue(queue);
-			nvmet_fc_tgt_q_put(queue);
-			spin_lock_irqsave(&tgtport->lock, flags);
+		rcu_read_lock();
+		queue = rcu_dereference(assoc->queues[i]);
+		if (!queue) {
+			rcu_read_unlock();
+			continue;
+		}
+
+		if (!nvmet_fc_tgt_q_get(queue)) {
+			rcu_read_unlock();
+			continue;
 		}
+		rcu_read_unlock();
+		nvmet_fc_delete_target_queue(queue);
+		nvmet_fc_tgt_q_put(queue);
 	}
-	spin_unlock_irqrestore(&tgtport->lock, flags);
 
 	dev_info(tgtport->dev,
 		"{%d:%d} Association deleted\n",
@@ -1234,10 +1232,9 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 {
 	struct nvmet_fc_tgt_assoc *assoc;
 	struct nvmet_fc_tgt_assoc *ret = NULL;
-	unsigned long flags;
 
-	spin_lock_irqsave(&tgtport->lock, flags);
-	list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
 		if (association_id == assoc->association_id) {
 			ret = assoc;
 			if (!nvmet_fc_tgt_a_get(assoc))
@@ -1245,7 +1242,7 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 			break;
 		}
 	}
-	spin_unlock_irqrestore(&tgtport->lock, flags);
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -1473,19 +1470,17 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 static void
 __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
 {
-	struct nvmet_fc_tgt_assoc *assoc, *next;
-	unsigned long flags;
+	struct nvmet_fc_tgt_assoc *assoc;
 
-	spin_lock_irqsave(&tgtport->lock, flags);
-	list_for_each_entry_safe(assoc, next,
-				&tgtport->assoc_list, a_list) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
 		if (!nvmet_fc_tgt_a_get(assoc))
 			continue;
 		if (!schedule_work(&assoc->del_work))
 			/* already deleting - release local reference */
 			nvmet_fc_tgt_a_put(assoc);
 	}
-	spin_unlock_irqrestore(&tgtport->lock, flags);
+	rcu_read_unlock();
 }
 
 /**
@@ -1568,16 +1563,16 @@ static void nvmet_fc_xmt_ls_rsp(struct nvmet_fc_tgtport *tgtport,
 			continue;
 		spin_unlock_irqrestore(&nvmet_fc_tgtlock, flags);
 
-		spin_lock_irqsave(&tgtport->lock, flags);
-		list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
-			queue = assoc->queues[0];
+		rcu_read_lock();
+		list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
+			queue = rcu_dereference(assoc->queues[0]);
 			if (queue && queue->nvme_sq.ctrl == ctrl) {
 				if (nvmet_fc_tgt_a_get(assoc))
 					found_ctrl = true;
 				break;
 			}
 		}
-		spin_unlock_irqrestore(&tgtport->lock, flags);
+		rcu_read_unlock();
 
 		nvmet_fc_tgtport_put(tgtport);
 
-- 
1.9.3