From: jin yiting
To: Jay Vosburgh
Subject: Re: [PATCH] bonding: 3ad: update slave arr after initialize
Date: Wed, 21 Apr 2021 11:34:55 +0800
Message-ID: <612b5e32-ea11-428e-0c17-e2977185f045@huawei.com>
In-Reply-To: <492.1618895040@famine>
References: <1618537982-454-1-git-send-email-jinyiting@huawei.com>
 <17733.1618547307@famine>
 <1165c45f-ae7f-48c1-5c65-a879c7bf978a@huawei.com>
 <492.1618895040@famine>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/4/20 13:04, Jay Vosburgh wrote:
> jin yiting wrote:
> [...]
>>>	The described issue is a race condition (in that
>>> ad_agg_selection_logic clears agg->is_active under mode_lock, but
>>> bond_open -> bond_update_slave_arr is inspecting agg->is_active outside
>>> the lock). I don't see how the above change will reliably manage this;
>>> the real issue looks to be that bond_update_slave_arr is committing
>>> changes to the array (via bond_reset_slave_arr) based on a racy
>>> inspection of the active aggregator state while it is in flux.
>>>
>>>	Also, the description of the issue says "The best aggregator in
>>> ad_agg_selection_logic has not changed, no need to update slave arr,"
>>> but the change above does the opposite, and will set update_slave_arr
>>> when the aggregator has not changed (update_slave_arr remains false at
>>> return of ad_agg_selection_logic).
>>>
>>>	I believe I understand the described problem, but I don't see
>>> how the patch fixes it. I suspect (but haven't tested) that the proper
>>> fix is to acquire mode_lock in bond_update_slave_arr while calling
>>> bond_3ad_get_active_agg_info to avoid conflict with the state machine.
>>>
>>>	-J
>>>
>>> ---
>>>	-Jay Vosburgh, jay.vosburgh@canonical.com
>>> .
>>>
>>
>> Thank you for your reply. The last patch did indeed update the slave
>> arr redundantly; thank you for the correction.
>>
>> As you said, holding mode_lock in bond_update_slave_arr while
>> calling bond_3ad_get_active_agg_info can avoid conflict with the state
>> machine. I have tested this patch with ifdown/ifup operations on the
>> bond and its slaves.
>>
>> But bond_update_slave_arr is expected to hold RTNL only and NO
>> other lock, and it has WARN_ON(lockdep_is_held(&bond->mode_lock)) in
>> bond_update_slave_arr. I'm not sure that holding mode_lock in
>> bond_update_slave_arr while calling bond_3ad_get_active_agg_info is the
>> correct approach.
>
>	That WARN_ON came up in discussion recently, and my opinion is
> that it's incorrect; it is trying to ensure that bond_update_slave_arr is
> safe for a potential sleep when allocating memory.
>
> https://lore.kernel.org/netdev/20210322123846.3024549-1-maximmi@nvidia.com/
>
>	The original authors haven't replied, so I would suggest you
> remove the WARN_ON and the surrounding CONFIG_LOCKDEP ifdefs as part of
> your patch and replace it with a call to might_sleep.
>
>	The other callers of bond_3ad_get_active_agg_info are generally
> obtaining the state in order to report it to user space, so I think it's
> safe to leave those calls not holding the mode_lock. The race is still
> there, but the data returned to user space is a snapshot and so may
> reflect an incomplete state during a transition. Further, having the
> inspection functions acquire the mode_lock permits user space to spam
> the lock with little effort.
>
>	-J
>
>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>> index 74cbbb2..db988e5 100644
>> --- a/drivers/net/bonding/bond_main.c
>> +++ b/drivers/net/bonding/bond_main.c
>> @@ -4406,7 +4406,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>  		struct ad_info ad_info;
>>
>> +		spin_lock_bh(&bond->mode_lock);
>>  		if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
>> +			spin_unlock_bh(&bond->mode_lock);
>>  			pr_debug("bond_3ad_get_active_agg_info failed\n");
>>  			/* No active aggragator means it's not safe to use
>>  			 * the previous array.
>> @@ -4414,6 +4416,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>  			bond_reset_slave_arr(bond);
>>  			goto out;
>>  		}
>> +		spin_unlock_bh(&bond->mode_lock);
>>  		agg_id = ad_info.aggregator_id;
>>  	}
>>  	bond_for_each_slave(bond, slave, iter) {
>
> ---
>	-Jay Vosburgh, jay.vosburgh@canonical.com
> .
>

I have removed the WARN_ON and the surrounding CONFIG_LOCKDEP ifdefs in
the new patch and replaced them with a call to might_sleep(), and I will
send a new patch. Thank you for your guidance.

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 74cbbb2..83ef62d 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4391,9 +4391,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	int agg_id = 0;
 	int ret = 0;
 
-#ifdef CONFIG_LOCKDEP
-	WARN_ON(lockdep_is_held(&bond->mode_lock));
-#endif
+	might_sleep();
 
 	usable_slaves = kzalloc(struct_size(usable_slaves, arr,
 					    bond->slave_cnt), GFP_KERNEL);
@@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
 		struct ad_info ad_info;
 
+		spin_lock_bh(&bond->mode_lock);
 		if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
+			spin_unlock_bh(&bond->mode_lock);
 			pr_debug("bond_3ad_get_active_agg_info failed\n");
 			/* No active aggragator means it's not safe to use
 			 * the previous array.
@@ -4414,6 +4414,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 			bond_reset_slave_arr(bond);
 			goto out;
 		}
+		spin_unlock_bh(&bond->mode_lock);
 		agg_id = ad_info.aggregator_id;
 	}
 	bond_for_each_slave(bond, slave, iter) {
--
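
[Editor's note] For readers following the thread outside the kernel tree, below is a
minimal, self-contained userspace sketch of the locking pattern settled on above: the
state machine mutates the active-aggregator state only while holding a lock, and the
array-rebuild path takes the same lock just long enough to copy a consistent snapshot.
This is only an analogy built with pthreads, not the bonding driver's code; every name
in it (agg_state, selection_logic, get_active_agg_snapshot) is hypothetical and merely
mirrors the roles of bond->mode_lock, ad_agg_selection_logic() and
bond_3ad_get_active_agg_info() as called from the patched bond_update_slave_arr().

/*
 * Userspace analogy only -- NOT the bonding driver code. Hypothetical names:
 * "lock" stands in for bond->mode_lock, selection_logic() for
 * ad_agg_selection_logic(), and get_active_agg_snapshot() for
 * bond_3ad_get_active_agg_info() taking a snapshot under the lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct agg_state {
	pthread_mutex_t lock;   /* analogue of bond->mode_lock */
	bool is_active;         /* analogue of agg->is_active */
	int aggregator_id;
};

static struct agg_state state = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.is_active = true,
	.aggregator_id = 1,
};

/* State-machine side: only ever changes the state while holding the lock. */
static void *selection_logic(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&state.lock);
	state.is_active = false;        /* reselection in progress */
	state.aggregator_id = 2;
	state.is_active = true;
	pthread_mutex_unlock(&state.lock);
	return NULL;
}

/* Rebuild side: takes the lock only to copy a consistent snapshot. */
static int get_active_agg_snapshot(int *agg_id)
{
	int ret = -1;

	pthread_mutex_lock(&state.lock);
	if (state.is_active) {
		*agg_id = state.aggregator_id;
		ret = 0;
	}
	pthread_mutex_unlock(&state.lock);
	return ret;
}

int main(void)
{
	pthread_t t;
	int agg_id;

	pthread_create(&t, NULL, selection_logic, NULL);

	if (get_active_agg_snapshot(&agg_id) == 0)
		printf("active aggregator id: %d\n", agg_id);
	else
		printf("no active aggregator; would reset the slave array\n");

	pthread_join(t, NULL);
	return 0;
}

It builds with "cc -pthread". The point of the sketch is only the shape of the fix:
the reader never inspects the in-flux state directly, it copies it under the same lock
the state machine uses, which is what the spin_lock_bh()/spin_unlock_bh() pair around
bond_3ad_get_active_agg_info() accomplishes in the patch.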