Subject: Re: [PATCH V2] fs/ceph: fix double unlock in handle_cap_export()
From: Wu Bo
To: Jeff Layton
Date: Wed, 29 Apr 2020 08:46:33 +0800
Message-ID: <6c99072a-f92b-b7e8-9aef-509d1a9ee985@huawei.com>
References: <1588079622-423774-1-git-send-email-wubo40@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/4/28 22:48, Jeff Layton wrote:
> On Tue, 2020-04-28 at 21:13 +0800, Wu Bo wrote:
>> If ceph_mdsc_open_export_target_session() fails, take
>> i_ceph_lock again before jumping to the out_unlock label,
>> to avoid a double unlock: the lock is released at both the
>> retry and out_unlock labels.
>>
>
> The problem looks real, but...
>
>> --
>> v1 -> v2:
>> add spin_lock(&ci->i_ceph_lock) before goto out_unlock tag.
>>
>> Signed-off-by: Wu Bo <wubo40@huawei.com>
>> ---
>>  fs/ceph/caps.c | 27 +++++++++++++++------------
>>  1 file changed, 15 insertions(+), 12 deletions(-)
>>
>> diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
>> index 185db76..414c0e2 100644
>> --- a/fs/ceph/caps.c
>> +++ b/fs/ceph/caps.c
>> @@ -3731,22 +3731,25 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
>>
>>  	/* open target session */
>>  	tsession = ceph_mdsc_open_export_target_session(mdsc, target);
>> -	if (!IS_ERR(tsession)) {
>> -		if (mds > target) {
>> -			mutex_lock(&session->s_mutex);
>> -			mutex_lock_nested(&tsession->s_mutex,
>> -					  SINGLE_DEPTH_NESTING);
>> -		} else {
>> -			mutex_lock(&tsession->s_mutex);
>> -			mutex_lock_nested(&session->s_mutex,
>> -					  SINGLE_DEPTH_NESTING);
>> -		}
>> -		new_cap = ceph_get_cap(mdsc, NULL);
>> -	} else {
>> +	if (IS_ERR(tsession)) {
>>  		WARN_ON(1);
>>  		tsession = NULL;
>>  		target = -1;
>> +		mutex_lock(&session->s_mutex);
>> +		spin_lock(&ci->i_ceph_lock);
>> +		goto out_unlock;
>
> Why did you make this case goto out_unlock instead of retrying as it did
> before?
>

If the failure occurs, target has been set to -1, so going back to the
retry label would just call __get_cap_for_mds() and possibly
__ceph_remove_cap() again, and then jump to out_unlock anyway. I think
all of that is unnecessary, so this version goes to out_unlock directly
instead of retrying.

Thanks,
Wu Bo

>> +	}
>> +
>> +	if (mds > target) {
>> +		mutex_lock(&session->s_mutex);
>> +		mutex_lock_nested(&tsession->s_mutex,
>> +				  SINGLE_DEPTH_NESTING);
>> +	} else {
>> +		mutex_lock(&tsession->s_mutex);
>> +		mutex_lock_nested(&session->s_mutex,
>> +				  SINGLE_DEPTH_NESTING);
>>  	}
>> +	new_cap = ceph_get_cap(mdsc, NULL);
>>  	goto retry;
>>
>>  out_unlock:
>