Date: Fri, 23 Apr 2021 13:07:48 -0700 (PDT)
Message-Id: <20210423.130748.1071901004935481894.davem@davemloft.net>
To: jinyiting@huawei.com
Cc: j.vosburgh@gmail.com, vfalico@gmail.com, andy@greyhouse.net,
    kuba@kernel.org, netdev@vger.kernel.org, security@kernel.org,
    linux-kernel@vger.kernel.org, xuhanbing@huawei.com,
    wangxiaogang3@huawei.com
Subject: Re: [PATCH] bonding: 3ad: Fix the conflict between bond_update_slave_arr and the state machine
From: David Miller <davem@davemloft.net>
In-Reply-To: <1618994301-1186-1-git-send-email-jinyiting@huawei.com>
References: <1618994301-1186-1-git-send-email-jinyiting@huawei.com>
From: jinyiting <jinyiting@huawei.com>
Date: Wed, 21 Apr 2021 16:38:21 +0800

> The bond works in mode 4, and after down/up operations on a bond that
> has negotiated normally, there is a chance that bond->slave_arr ends up
> NULL.
>
> Test commands:
>    ifconfig bond1 down
>    ifconfig bond1 up
>
> The conflict occurs in the following process:
>
>  __dev_open (CPU A)
>  --bond_open
>    --queue_delayed_work(bond->wq, &bond->ad_work, 0);
>    --bond_update_slave_arr
>      --bond_3ad_get_active_agg_info
>
>  ad_work (CPU B)
>  --bond_3ad_state_machine_handler
>    --ad_agg_selection_logic
>
> ad_work runs on CPU B. In ad_agg_selection_logic, every agg->is_active
> is cleared. Before the new active aggregator is selected on CPU B,
> bond_3ad_get_active_agg_info fails on CPU A and bond->slave_arr is set
> to NULL. Since the best aggregator chosen by ad_agg_selection_logic has
> not changed, there was no need to update the slave array at all.
>
> The conflict arises because ad_agg_selection_logic clears
> agg->is_active under mode_lock, while bond_open -> bond_update_slave_arr
> inspects agg->is_active outside the lock.
>
> Also, bond_update_slave_arr may legitimately sleep when allocating
> memory, so replace the WARN_ON with a call to might_sleep.
>
> Signed-off-by: jinyiting <jinyiting@huawei.com>
> ---
>
> Previous versions:
>  * https://lore.kernel.org/netdev/612b5e32-ea11-428e-0c17-e2977185f045@huawei.com/
>
>  drivers/net/bonding/bond_main.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 74cbbb2..83ef62d 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>  		struct ad_info ad_info;
>
> +		spin_lock_bh(&bond->mode_lock);

The code paths that call this function with mode_lock held will now
deadlock.
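
To make the concern concrete, here is a minimal sketch (hypothetical
function names, not the actual bonding call sites): kernel spinlocks are
not recursive, so re-taking mode_lock inside the function while a caller
on the same CPU already holds it spins forever.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(mode_lock);	/* stand-in for bond->mode_lock */

/* roughly what the patch turns bond_update_slave_arr() into */
static void update_slave_arr_sketch(void)
{
	spin_lock_bh(&mode_lock);
	/* ... inspect aggregator state, rebuild the slave array ... */
	spin_unlock_bh(&mode_lock);
}

/*
 * A caller that already holds mode_lock and then invokes the function
 * above never returns: it spins on a lock its own CPU owns.
 */
static void caller_already_holding_mode_lock(void)
{
	spin_lock_bh(&mode_lock);
	update_slave_arr_sketch();	/* deadlock */
	spin_unlock_bh(&mode_lock);
}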