From: Leon Romanovsky
To: Daniel Vacek <neelx@redhat.com>
Cc: Jason Gunthorpe, linux-rdma@vger.kernel.org,
 linux-kernel@vger.kernel.org, Yuya Fujita-bishamonten
Subject: Re: [PATCH 1/2] IB/ipoib: Fix mcast list locking
Date: Mon, 11 Dec 2023 17:06:57 +0200
Message-ID: <20231211150657.GH4870@unreal>
References: <20231211130426.1500427-1-neelx@redhat.com>
 <20231211130426.1500427-2-neelx@redhat.com>
 <20231211134542.GG4870@unreal>

On Mon, Dec 11, 2023 at 03:25:39PM +0100, Daniel Vacek wrote:
> On Mon, Dec 11, 2023 at 2:45 PM Leon Romanovsky wrote:
> >
> > On Mon, Dec 11, 2023 at 02:04:24PM +0100, Daniel Vacek wrote:
> > > We need additional protection against list removal between ipoib_mcast_join_task()
> > > and ipoib_mcast_dev_flush() in case &priv->lock needs to be dropped while
> > > iterating &priv->multicast_list in ipoib_mcast_join_task(). If the mcast
> > > is removed while the lock is dropped, the for loop spins forever, resulting
> > > in a hard lockup (as was reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):
> > >
> > > Task A (kworker/u72:2 below)       | Task B (kworker/u72:0 below)
> > > -----------------------------------+-----------------------------------
> > > ipoib_mcast_join_task(work)        | ipoib_ib_dev_flush_light(work)
> > >   spin_lock_irq(&priv->lock)       | __ipoib_ib_dev_flush(priv, ...)
> > >   list_for_each_entry(mcast,       |  ipoib_mcast_dev_flush(dev = priv->dev)
> > >     &priv->multicast_list, list)   |   mutex_lock(&priv->mcast_mutex)
> > >   ipoib_mcast_join(dev, mcast)     |
> > >     spin_unlock_irq(&priv->lock)   |
> > >                                    |   spin_lock_irqsave(&priv->lock, flags)
> > >                                    |   list_for_each_entry_safe(mcast, tmcast,
> > >                                    |     &priv->multicast_list, list)
> > >                                    |     list_del(&mcast->list);
> > >                                    |     list_add_tail(&mcast->list, &remove_list)
> > >                                    |   spin_unlock_irqrestore(&priv->lock, flags)
> > >     spin_lock_irq(&priv->lock)     |
> > >                                    |   ipoib_mcast_remove_list(&remove_list)
> > > (Here, mcast is no longer on the   |     list_for_each_entry_safe(mcast, tmcast,
> > >  &priv->multicast_list and we keep |       remove_list, list)
> > >  spinning on the &remove_list of   \ >>>   wait_for_completion(&mcast->done)
> > >  the other thread, which is blocked|
> > >  and whose list is still valid on  |
> > >  its stack.)                       |   mutex_unlock(&priv->mcast_mutex)
> > >
> > > Fix this by adding mutex_lock(&priv->mcast_mutex) to ipoib_mcast_join_task().
> >
> > I don't entirely understand the issue and the proposed solution.
> > There is only one spin_unlock_irq() in the middle of
> > list_for_each_entry(mcast, &priv->multicast_list, list), and it is
> > right before a return statement which breaks the loop. So how can
> > the loop spin forever?
>
> There's another unlock/lock pair around the ib_sa_join_multicast() call
> in ipoib_mcast_join(), regardless of the outcome of the condition.
> ib_sa_join_multicast() cannot be called with the lock held, because its
> GFP_KERNEL allocation can sleep. That's what's causing the issue.
>
> Actually, if you check the code, the lock is released and re-acquired
> only when the mentioned condition is false (i.e. when the loop is not
> broken by returning from ipoib_mcast_join_task()), creating the window
> for ipoib_ib_dev_flush_light()/ipoib_mcast_dev_flush() to break the
> list. The vmcore data confirms that clearly.
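
To make the failure mode concrete, here is a minimal userspace sketch of
the spin (illustrative only; the helpers mimic the kernel's circular
struct list_head, and all names are made up rather than taken from the
driver):

/* sketch.c - once the current entry has been spliced onto another
 * list, the termination test "pos != head" can never fire again.
 * Build with: cc -o sketch sketch.c */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

int main(void)
{
	struct list_head multicast_list, remove_list, a, b;
	struct list_head *pos;
	long steps = 0;

	init_list(&multicast_list);
	init_list(&remove_list);
	list_add_tail(&a, &multicast_list);	/* two mcast entries */
	list_add_tail(&b, &multicast_list);

	/* "Task A" stops at the first entry and drops the lock... */
	pos = multicast_list.next;

	/* ..."Task B" flushes both entries onto its remove_list. */
	list_del(&a);
	list_add_tail(&a, &remove_list);
	list_del(&b);
	list_add_tail(&b, &remove_list);

	/* Task A resumes: pos now cycles through remove_list and can
	 * never compare equal to &multicast_list again. */
	while (pos != &multicast_list && ++steps < 1000000)
		pos = pos->next;

	printf("%s after %ld steps\n", pos == &multicast_list ?
	       "reached head" : "still spinning", steps);
	return 0;
}

The bounded loop above reports "still spinning"; the kernel's iteration
has no such bound, hence the hard lockup.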

Thanks, it is clearer now. What about the following change instead of
adding an extra lock to the already overly complicated IPoIB?
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 5b3154503bf4..bca80fe07584 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -531,21 +531,17 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
 		if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
 			rec.join_state = SENDONLY_FULLMEMBER_JOIN;
 	}
-	spin_unlock_irq(&priv->lock);
 	multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
-					 &rec, comp_mask, GFP_KERNEL,
+					 &rec, comp_mask, GFP_ATOMIC,
 					 ipoib_mcast_join_complete, mcast);
-	spin_lock_irq(&priv->lock);
 	if (IS_ERR(multicast)) {
 		ret = PTR_ERR(multicast);
 		ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
 		/* Requeue this join task with a backoff delay */
 		__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
 		clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
-		spin_unlock_irq(&priv->lock);
 		complete(&mcast->done);
-		spin_lock_irq(&priv->lock);
 	}
 	return 0;
 }

>
> --nX
>
> >
> > Thanks
> >
> > > Unfortunately, we could not reproduce the lockup to confirm this fix,
> > > but based on the code review I think this fix should address such lockups.
> > >
> > > crash> bc 31
> > > PID: 747    TASK: ff1c6a1a007e8000  CPU: 31  COMMAND: "kworker/u72:2"
> > > --
> > >     [exception RIP: ipoib_mcast_join_task+0x1b1]
> > >     RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
> > >     RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
> > >                                 work (&priv->mcast_task{,.work})
> > >     RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
> > >          &mcast->list
> > >     RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
> > >     R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
> > >                                                        mcast
> > >     R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
> > >          dev                    priv (&priv->lock)     &priv->multicast_list (aka head)
> > >     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
> > > ---  ---
> > >  #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
> > >  #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967
> > >
> > > crash> rx ff646f199a8c7e68
> > > ff646f199a8c7e68:  ff1c6a1a04dc82f8  <<< work = &priv->mcast_task.work
> > >
> > > crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
> > > (empty)
> > >
> > > crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
> > >   mcast_task.work.func = 0xffffffffc0944910 <ipoib_mcast_join_task>,
> > >   mcast_mutex.owner.counter = 0xff1c69998efec000
> > >
> > > crash> b 8
> > > PID: 8      TASK: ff1c69998efec000  CPU: 33  COMMAND: "kworker/u72:0"
> > > --
> > >  #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
> > >  #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
> > >  #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
> > >  #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
> > >  #7 [ff646f1980153e98] process_one_work+0x1a7 at ffffffff9bf10967
> > >
> > > crash> rx ff646f1980153e68
> > > ff646f1980153e68:  ff1c6a1a04dc83f0  <<< work = &priv->flush_light
> > >
> > > crash> ipoib_dev_priv.flush_light.func,broadcast ff1c6a1a04dc8000
> > >   flush_light.func = 0xffffffffc0943820 <ipoib_ib_dev_flush_light>,
> > >   broadcast = 0x0,
> > >
> > > The mcast(s) on the &remove_list (the remaining part of the ex &priv->multicast_list):
> > >
> > > crash> list -s ipoib_mcast.done.done ipoib_mcast.list -H ff646f1980153e10 | paste - -
> > > ff1c6a192bd0c200  done.done = 0x0,
> > > ff1c6a192d60ac00  done.done = 0x0,
> > >
> > > Reported-by: Yuya Fujita-bishamonten
> > > Signed-off-by: Daniel Vacek <neelx@redhat.com>
> > > ---
> > >  drivers/infiniband/ulp/ipoib/ipoib_multicast.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> > > index 5b3154503bf4..8e4f2c8839be 100644
> > > --- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> > > +++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> > > @@ -580,6 +580,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
> > >  	}
> > >  	netif_addr_unlock_bh(dev);
> > >
> > > +	mutex_lock(&priv->mcast_mutex);
> > >  	spin_lock_irq(&priv->lock);
> > >  	if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
> > >  		goto out;
> > > @@ -634,6 +635,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
> > >  			/* Found the next unjoined group */
> > >  			if (ipoib_mcast_join(dev, mcast)) {
> > >  				spin_unlock_irq(&priv->lock);
> > > +				mutex_unlock(&priv->mcast_mutex);
> > >  				return;
> > >  			}
> > >  		} else if (!delay_until ||
> > > @@ -655,6 +657,7 @@ void ipoib_mcast_join_task(struct work_struct *work)
> > >  		ipoib_mcast_join(dev, mcast);
> > >
> > >  	spin_unlock_irq(&priv->lock);
> > > +	mutex_unlock(&priv->mcast_mutex);
> > >  }
> > >
> > >  void ipoib_mcast_start_thread(struct net_device *dev)
> > > --
> > > 2.43.0
> > >
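
For completeness, the serialization the patch above relies on, as a
condensed sketch (a hypothetical shape of the two paths, not the actual
driver source): the mutex brackets the whole list walk, so the flush can
only run strictly before or strictly after it, never inside the
unlock/relock window around ib_sa_join_multicast():

/* Condensed sketch, not real ipoib code. */
static void join_task(struct ipoib_dev_priv *priv)	/* Task A */
{
	struct ipoib_mcast *mcast;

	mutex_lock(&priv->mcast_mutex);
	spin_lock_irq(&priv->lock);
	list_for_each_entry(mcast, &priv->multicast_list, list) {
		/* ipoib_mcast_join() may drop and re-take priv->lock
		 * around ib_sa_join_multicast(), but mcast_mutex is
		 * still held, so dev_flush() below cannot splice the
		 * list out from under the iteration. */
	}
	spin_unlock_irq(&priv->lock);
	mutex_unlock(&priv->mcast_mutex);
}

static void dev_flush(struct ipoib_dev_priv *priv)	/* Task B */
{
	LIST_HEAD(remove_list);
	unsigned long flags;

	mutex_lock(&priv->mcast_mutex);	/* waits for the whole walk */
	spin_lock_irqsave(&priv->lock, flags);
	/* move &priv->multicast_list entries onto remove_list */
	spin_unlock_irqrestore(&priv->lock, flags);
	/* wait for completions and free the entries */
	mutex_unlock(&priv->mcast_mutex);
}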