From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Divya Indi, Jason Gunthorpe, Sasha Levin
Subject: [PATCH 5.4 045/109] IB/sa: Resolv use-after-free in ib_nl_make_request()
Date: Tue, 14 Jul 2020 20:43:48 +0200
Message-Id: <20200714184107.680715975@linuxfoundation.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200714184105.507384017@linuxfoundation.org>
References:
<20200714184105.507384017@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Divya Indi

[ Upstream commit f427f4d6214c183c474eeb46212d38e6c7223d6a ]

There is a race condition where ib_nl_make_request() inserts the request
data into the linked list, but the timer in ib_nl_request_timeout() can
see it and destroy it before ib_nl_send_msg() is done touching it. This
could happen, for instance, if there is a long delay allocating memory
during nlmsg_new().

This causes a use-after-free in the send_mad() thread:

[] ? ib_pack+0x17b/0x240 [ib_core]
[] ib_sa_path_rec_get+0x181/0x200 [ib_sa]
[] rdma_resolve_route+0x3c0/0x8d0 [rdma_cm]
[] ? cma_bind_port+0xa0/0xa0 [rdma_cm]
[] ? rds_rdma_cm_event_handler_cmn+0x850/0x850 [rds_rdma]
[] rds_rdma_cm_event_handler_cmn+0x22c/0x850 [rds_rdma]
[] rds_rdma_cm_event_handler+0x10/0x20 [rds_rdma]
[] addr_handler+0x9e/0x140 [rdma_cm]
[] process_req+0x134/0x190 [ib_addr]
[] process_one_work+0x169/0x4a0
[] worker_thread+0x5b/0x560
[] ? flush_delayed_work+0x50/0x50
[] kthread+0xcb/0xf0
[] ? __schedule+0x24a/0x810
[] ? __schedule+0x24a/0x810
[] ? kthread_create_on_node+0x180/0x180
[] ret_from_fork+0x47/0x90
[] ? kthread_create_on_node+0x180/0x180

The ownership rule is: once the request is on the list, ownership
transfers to the list and the local thread can't touch it any more, just
like for the normal MAD case in send_mad(). Thus, instead of adding
before the send and then trying to delete afterwards on errors, move the
entire thing under the spinlock so that the send and the update of the
lists are atomic to the concurrent threads. Lightly reorganize things so
spinlock-safe memory allocations are done in the final NL send path and
the rest of the setup work is done before and outside the lock.
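[Editor's note] The locking change described above can be illustrated with a minimal user-space sketch. Everything here is an illustrative stand-in, not kernel API: a pthread mutex plays the role of ib_nl_request_lock, send_msg() plays rdma_nl_multicast(), and the GFP constant values are made up.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

typedef unsigned gfp_t;
#define GFP_ATOMIC 0x3u   /* illustrative values only, not the kernel's */
#define GFP_NOWAIT 0x1u
#define GFP_KERNEL 0x10u

struct request {
	struct request *next;
	int on_list;
};

static pthread_mutex_t request_lock = PTHREAD_MUTEX_INITIALIZER;
static struct request *request_list;

/* Mirrors the patch's conversion: any mask weaker than GFP_ATOMIC is
 * downgraded to GFP_NOWAIT, which never sleeps while the lock is held. */
static gfp_t safe_gfp(gfp_t gfp_mask)
{
	return ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC : GFP_NOWAIT;
}

/* Stand-in for the netlink send; pretend it succeeds. */
static int send_msg(struct request *req, gfp_t gfp_flag)
{
	(void)req; (void)gfp_flag;
	return 0;
}

/*
 * The fixed ordering: the send and the list insertion sit in one
 * critical section, so a timeout worker that also takes request_lock
 * can never observe (and free) the request while the send is still in
 * progress, and the request is enqueued only if the send succeeded.
 */
static int make_request(struct request *req, gfp_t gfp_mask)
{
	int ret;

	pthread_mutex_lock(&request_lock);
	ret = send_msg(req, safe_gfp(gfp_mask));
	if (ret)
		goto out;
	req->next = request_list;
	request_list = req;
	req->on_list = 1;
out:
	pthread_mutex_unlock(&request_lock);
	return ret;
}
```

The GFP downgrade is what makes this safe: sleeping allocations are illegal while a spinlock is held, so the send path must only use non-sleeping flags once it is inside the critical section.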
Fixes: 3ebd2fd0d011 ("IB/sa: Put netlink request into the request list before sending")
Link: https://lore.kernel.org/r/1592964789-14533-1-git-send-email-divya.indi@oracle.com
Signed-off-by: Divya Indi
Signed-off-by: Jason Gunthorpe
Signed-off-by: Sasha Levin
---
 drivers/infiniband/core/sa_query.c | 38 +++++++++++++-----------------
 1 file changed, 17 insertions(+), 21 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index bddb5434fbed2..d2d70c89193ff 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -829,13 +829,20 @@ static int ib_nl_get_path_rec_attrs_len(ib_sa_comp_mask comp_mask)
 	return len;
 }
 
-static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
+static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
 {
 	struct sk_buff *skb = NULL;
 	struct nlmsghdr *nlh;
 	void *data;
 	struct ib_sa_mad *mad;
 	int len;
+	unsigned long flags;
+	unsigned long delay;
+	gfp_t gfp_flag;
+	int ret;
+
+	INIT_LIST_HEAD(&query->list);
+	query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
 
 	mad = query->mad_buf->mad;
 	len = ib_nl_get_path_rec_attrs_len(mad->sa_hdr.comp_mask);
@@ -860,36 +867,25 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
 	/* Repair the nlmsg header length */
 	nlmsg_end(skb, nlh);
 
-	return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
-}
+	gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
+		GFP_NOWAIT;
 
-static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
-{
-	unsigned long flags;
-	unsigned long delay;
-	int ret;
+	spin_lock_irqsave(&ib_nl_request_lock, flags);
+	ret = rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_flag);
 
-	INIT_LIST_HEAD(&query->list);
-	query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
+	if (ret)
+		goto out;
 
-	/* Put the request on the list first.*/
-	spin_lock_irqsave(&ib_nl_request_lock, flags);
+	/* Put the request on the list.*/
 	delay = msecs_to_jiffies(sa_local_svc_timeout_ms);
 	query->timeout = delay + jiffies;
 	list_add_tail(&query->list, &ib_nl_request_list);
 	/* Start the timeout if this is the only request */
 	if (ib_nl_request_list.next == &query->list)
 		queue_delayed_work(ib_nl_wq, &ib_nl_timed_work, delay);
-	spin_unlock_irqrestore(&ib_nl_request_lock, flags);
 
-	ret = ib_nl_send_msg(query, gfp_mask);
-	if (ret) {
-		ret = -EIO;
-		/* Remove the request */
-		spin_lock_irqsave(&ib_nl_request_lock, flags);
-		list_del(&query->list);
-		spin_unlock_irqrestore(&ib_nl_request_lock, flags);
-	}
+out:
+	spin_unlock_irqrestore(&ib_nl_request_lock, flags);
 
 	return ret;
 }
-- 
2.25.1