Subject: Re: [PATCH v2] nvme-multipath: Early exit if no path is available
To: Hannes Reinecke, Sagi Grimberg, Daniel Wagner
CC: Jens Axboe, Keith Busch, Christoph Hellwig
References: <20210127103033.15318-1-dwagner@suse.de>
 <20210128075837.u5u56t23fq5gu6ou@beryllium.lan>
 <69575290-200e-b4a1-4269-c71e4c2cc37b@huawei.com>
 <20210128094004.erwnszjqcxlsi2kd@beryllium.lan>
 <675d3cf7-1ae8-adc5-b6d0-359fe10f6b23@grimberg.me>
 <59cd053e-46cb-0235-141f-4ce919c93f48@huawei.com>
 <65392653-6b03-9195-f686-5fe4b3290bd2@suse.de>
 <81b22bbf-4dd3-6161-e63a-9699690a4e4f@huawei.com>
 <715dd943-0587-be08-2840-e0948cf0bc62@suse.de>
From: Chao Leng
Message-ID: <6ceff3cb-c9e9-7e74-92f0-dd745987c943@huawei.com>
Date: Mon, 1 Feb 2021 10:16:35 +0800

On 2021/1/29 17:20, Hannes Reinecke wrote:
> On 1/29/21 9:46 AM, Chao Leng wrote:
>>
>>
>> On 2021/1/29 16:33, Hannes Reinecke wrote:
>>> On 1/29/21 8:45 AM, Chao Leng wrote:
>>>>
>>>>
>>>> On 2021/1/29 15:06, Hannes Reinecke wrote:
>>>>> On 1/29/21 4:07 AM, Chao Leng wrote:
>>>>>>
>>>>>>
>>>>>> On 2021/1/29 9:42, Sagi Grimberg wrote:
>>>>>>>
>>>>>>>>> You can't see exactly where it dies but I followed the assembly to
>>>>>>>>> nvme_round_robin_path(). Maybe it's not the initial nvme_next_ns(head,
>>>>>>>>> old) which returns NULL but nvme_next_ns() is returning NULL eventually
>>>>>>>>> (list_next_or_null_rcu()).
>>>>>>>> So there is another bug that makes nvme_next_ns misbehave.
>>>>>>>> I reviewed the code around head->list and head->current_path and found
>>>>>>>> two bugs that may cause this:
>>>>>>>> First, I have already sent a patch, see:
>>>>>>>> https://lore.kernel.org/linux-nvme/20210128033351.22116-1-lengchao@huawei.com/
>>>>>>>> Second, in nvme_ns_remove, list_del_rcu runs before
>>>>>>>> nvme_mpath_clear_current_path. This means "old" may already be deleted
>>>>>>>> from "head" while it is still being used. I'm not sure whether there is
>>>>>>>> any other consideration here; I will check it and try to fix it.
>>>>>>>
>>>>>>> The reason why we first remove from head->list and only then clear
>>>>>>> current_path is because the other way around there is no way
>>>>>>> to guarantee that the ns won't be assigned as current_path
>>>>>>> again (because it is in head->list).
>>>>>> OK, I see.
>>>>>>>
>>>>>>> nvme_ns_remove fences the rest of the ns deletion by synchronizing
>>>>>>> the srcu such that the current_path clearance is guaranteed to be visible.
>>>>>> The list will then look like this:
>>>>>> head->next = ns1;
>>>>>> ns1->next = head;
>>>>>> old->next = ns1;
>>>>>
>>>>> Where is 'old' pointing to?
>>>>>
>>>>>> This may cause an infinite loop in nvme_round_robin_path:
>>>>>> for (ns = nvme_next_ns(head, old);
>>>>>>      ns != old;
>>>>>>      ns = nvme_next_ns(head, ns))
>>>>>> The ns will always be ns1, so we loop forever.
>>>>>
>>>>> No. nvme_next_ns() will return NULL.
>>>> If there is just one path (the "old" one) and "old" is deleted,
>>>> nvme_next_ns() will return NULL.
>>>> The list then looks like this:
>>>> head->next = head;
>>>> old->next = head;
>>>> If there are two or more paths and "old" is deleted,
>>>> the "for" loop spins forever, because nvme_next_ns() keeps returning
>>>> the paths still on the list, never "old", so the loop condition stays
>>>> true forever.
>>>
>>> But that will be caught by the statement above:
>>>
>>> if (list_is_singular(&head->list))
>>>
>>> no?
>> Two paths is just a simple example.
>> With exactly two paths we do enter that branch, but it may report no
>> usable path even though one path actually remains.
>> It falsely assumes that "old" has not been deleted.
>> If there are more than two paths, it causes an infinite loop.
> So you mean we'll need something like this?
>
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 71696819c228..8ffccaf9c19a 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -202,10 +202,12 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
>  static struct nvme_ns *nvme_next_ns(struct nvme_ns_head *head,
>                 struct nvme_ns *ns)
>  {
> -       ns = list_next_or_null_rcu(&head->list, &ns->siblings, struct nvme_ns,
> -                       siblings);
> -       if (ns)
> -               return ns;
> +       if (ns) {
> +               ns = list_next_or_null_rcu(&head->list, &ns->siblings,
> +                                          struct nvme_ns, siblings);
> +               if (ns)
> +                       return ns;
> +       }

No, in this scenario ns should not be NULL. Maybe we can do it like this instead:

diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 282b7a4ea9a9..b895011a2cbd 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -199,30 +199,24 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
        return found;
 }
 
-static struct nvme_ns *nvme_next_ns(struct nvme_ns_head *head,
-               struct nvme_ns *ns)
-{
-       ns = list_next_or_null_rcu(&head->list, &ns->siblings, struct nvme_ns,
-                       siblings);
-       if (ns)
-               return ns;
-       return list_first_or_null_rcu(&head->list, struct nvme_ns, siblings);
-}
+#define nvme_next_ns_condition(head, current, condition) \
+({ \
+       struct nvme_ns *__ptr = list_next_or_null_rcu(&(head)->list, \
+               &(current)->siblings, struct nvme_ns, siblings); \
+       __ptr ? __ptr : (condition) ? (condition) = false, \
+               list_first_or_null_rcu(&(head)->list, struct nvme_ns, \
+                       siblings) : NULL; \
+})
 
 static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
                int node, struct nvme_ns *old)
 {
        struct nvme_ns *ns, *found = NULL;
+       bool first_half = true;
 
-       if (list_is_singular(&head->list)) {
-               if (nvme_path_is_disabled(old))
-                       return NULL;
-               return old;
-       }
-
-       for (ns = nvme_next_ns(head, old);
+       for (ns = nvme_next_ns_condition(head, old, first_half);
             ns && ns != old;
-            ns = nvme_next_ns(head, ns)) {
+            ns = nvme_next_ns_condition(head, ns, first_half)) {
                if (nvme_path_is_disabled(ns))
                        continue;

>         return list_first_or_null_rcu(&head->list, struct nvme_ns, siblings);
>  }
>
> Cheers,
>
> Hannes
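
To make the termination argument easy to check in isolation, here is a
minimal user-space sketch. The types and next_ns_once() below are
simplified, hypothetical stand-ins for struct nvme_ns / struct
nvme_ns_head and for the nvme_next_ns_condition() macro above, not the
real kernel code. It shows why a walk that stops only on "ns != old"
cannot terminate once "old" has been unlinked from head->list, and how
allowing at most one wrap-around bounds the walk:

/*
 * User-space sketch only: simplified stand-ins for the kernel types and
 * for the wrap-at-most-once idea discussed in this thread.
 */
#include <stdbool.h>
#include <stdio.h>

struct ns {
	const char *name;
	struct ns *next;	/* next entry on the list, NULL at the end */
};

struct head {
	struct ns *first;	/* first entry on head->list */
};

/*
 * Return the next entry; wrap to the first entry at most once, then
 * return NULL so the round-robin walk is bounded even when "old" is no
 * longer reachable from the list.
 */
static struct ns *next_ns_once(struct head *h, struct ns *cur, bool *first_half)
{
	if (cur->next)
		return cur->next;
	if (*first_half) {
		*first_half = false;
		return h->first;
	}
	return NULL;
}

int main(void)
{
	/* List state from the mail: head -> ns1; old->next still points at ns1. */
	struct ns ns1 = { "ns1", NULL };
	struct ns old = { "old (already deleted)", &ns1 };
	struct head h = { &ns1 };
	bool first_half = true;
	struct ns *ns;

	/*
	 * With a helper that always wraps, this loop would never see "old"
	 * again and would spin forever.  With the single-wrap helper it
	 * visits the remaining path and then stops.
	 */
	for (ns = next_ns_once(&h, &old, &first_half);
	     ns && ns != &old;
	     ns = next_ns_once(&h, ns, &first_half))
		printf("considering %s\n", ns->name);

	return 0;
}

Compiled with any C compiler, this prints the remaining path twice (once
before and once after the single permitted wrap) and then exits, instead
of looping forever as a termination condition of only "ns != old" would.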