Subject: Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before free the queue
To: Jianchao Wang, keith.busch@intel.com, axboe@fb.com, hch@lst.de
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
From: Sagi Grimberg
Date: Wed, 9 May 2018 18:06:46 +0300
Message-ID: <7096b02a-956a-6765-7839-227c154d8336@grimberg.me>
In-Reply-To: <1525420958-9537-1-git-send-email-jianchao.w.wang@oracle.com>

On 05/04/2018 11:02 AM, Jianchao Wang wrote:
> When nvme_init_identify in nvme_rdma_configure_admin_queue fails,
> ctrl->queues[0] is freed but the NVME_RDMA_Q_LIVE flag is still set.
> If nvme_rdma_stop_queue is invoked afterwards, we incur a
> use-after-free that can cause memory corruption.
>
> BUG: KASAN: use-after-free in rdma_disconnect+0x1f/0xe0 [rdma_cm]
> Read of size 8 at addr ffff8801dc3969c0 by task kworker/u16:3/9304
>
> CPU: 3 PID: 9304 Comm: kworker/u16:3 Kdump: loaded Tainted: G W 4.17.0-rc3+ #20
> Workqueue: nvme-delete-wq nvme_delete_ctrl_work
> Call Trace:
>  dump_stack+0x91/0xeb
>  print_address_description+0x6b/0x290
>  kasan_report+0x261/0x360
>  rdma_disconnect+0x1f/0xe0 [rdma_cm]
>  nvme_rdma_stop_queue+0x25/0x40 [nvme_rdma]
>  nvme_rdma_shutdown_ctrl+0xf3/0x150 [nvme_rdma]
>  nvme_delete_ctrl_work+0x98/0xe0
>  process_one_work+0x3ca/0xaa0
>  worker_thread+0x4e2/0x6c0
>  kthread+0x18d/0x1e0
>  ret_from_fork+0x24/0x30
>
> To fix it, clear NVME_RDMA_Q_LIVE before freeing ctrl->queues[0].
> The queue is about to be freed, so it certainly is not LIVE any more.
>
> Signed-off-by: Jianchao Wang
> ---
>  drivers/nvme/host/rdma.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index fd965d0..ffbfe82 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -812,6 +812,11 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
>  	if (new)
>  		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
>  out_free_queue:
> +	/*
> +	 * The queue will be freed, so it is not LIVE any more.
> +	 * This avoids a use-after-free in nvme_rdma_stop_queue.
> +	 */
> +	clear_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>  	nvme_rdma_free_queue(&ctrl->queues[0]);
>  	return error;
> }

The correct fix would be to add a goto label for stop_queue and call
nvme_rdma_stop_queue() in all the failure cases after
nvme_rdma_start_queue().
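
That way any failure after the queue went live unwinds it explicitly,
instead of papering over the flag. Roughly something like this (untested
sketch only; the exact labels, hunk context and teardown ordering in
rdma.c may differ from what is shown here):

	 	error = nvme_init_identify(&ctrl->ctrl);
	 	if (error)
	-		goto out_cleanup_queue;
	+		goto out_stop_queue;

	 	...

	+out_stop_queue:
	+	/* undo nvme_rdma_start_queue on every later failure path */
	+	nvme_rdma_stop_queue(&ctrl->queues[0]);
	 out_cleanup_queue:
	 	if (new)
	 		blk_cleanup_queue(ctrl->ctrl.admin_q);

With that, the queue is always stopped while it is still allocated, and
the LIVE bit never needs to be cleared by hand in the error path.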