Subject: Re: [Ocfs2-devel] [PATCH V3] ocfs2: fix dead lock caused by ocfs2_defrag_extent
To: Changwei Ge, Joseph Qi
Cc: linux-kernel@vger.kernel.org, ocfs2-devel@oss.oracle.com, Andrew Morton
References: <20181101071422.14470-1-lchen@suse.com> <63ADC13FD55D6546B7DECE290D39E37301277DE397@H3CMLB12-EX.srv.huawei-3com.com>
From: Larry Chen
Message-ID: <24a08d67-dd33-7fc1-628a-af55cd2de1fe@suse.com>
Date: Thu, 1 Nov 2018 20:39:26 +0800
In-Reply-To: <63ADC13FD55D6546B7DECE290D39E37301277DE397@H3CMLB12-EX.srv.huawei-3com.com>
Hi Joseph,

On 11/1/18 7:52 PM, Changwei Ge wrote:
> Hello Joseph,
>
> On 2018/11/1 17:01, Joseph Qi wrote:
>> Hi Larry,
>>
>> On 18/11/1 15:14, Larry Chen wrote:
>>> ocfs2_defrag_extent may fall into deadlock:
>>>
>>> ocfs2_ioctl_move_extents
>>>   ocfs2_ioctl_move_extents
>>>   ocfs2_move_extents
>>>   ocfs2_defrag_extent
>>>   ocfs2_lock_allocators_move_extents
>>>
>>>   ocfs2_reserve_clusters
>>>   inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>
>>>   __ocfs2_flush_truncate_log
>>>   inode_lock GLOBAL_BITMAP_SYSTEM_INODE
>>>
>>> As the backtrace above shows, ocfs2_reserve_clusters() calls
>>> inode_lock() on the global bitmap if the local allocator does not
>>> have sufficient clusters. If the global bitmap can meet the demand,
>>> ocfs2_reserve_clusters() returns success with the global bitmap
>>> still locked.
>>>
>>> After ocfs2_reserve_clusters(), if the truncate log is full,
>>> __ocfs2_flush_truncate_log() will definitely deadlock, because it
>>> needs to inode_lock() the global bitmap, which is already locked.
>>>
>>> To fix this bug, remove the code that locks the global allocator
>>> from ocfs2_lock_allocators_move_extents(), and move it after
>>> __ocfs2_flush_truncate_log().
>>>
>>> ocfs2_lock_allocators_move_extents() is referenced in two places:
>>> one is here, and the other does not need the data allocator context,
>>> which means this patch does not affect that caller.
>>>
>>> Change log:
>>> 1. Correct the function comment.
>>> 2. Remove unused argument from ocfs2_lock_meta_allocator_move_extents().
>>>
>>> Signed-off-by: Larry Chen
>>> ---
>>>  fs/ocfs2/move_extents.c | 47 ++++++++++++++++++++++++++---------------------
>>>  1 file changed, 26 insertions(+), 21 deletions(-)
>>>
>
>> IMO, here clusters_to_move is only for data_ac. Since we change this
>> function to only handle meta_ac, I'm afraid the clusters_to_move
>> related logic has to be changed correspondingly.
>
> I think we can't remove *clusters_to_move* from here, as clusters can
> be reserved later, outside this function, but we still have to reserve
> metadata (extents) in advance. So we need that argument.
>

Yeah, I think clusters_to_move should still be reserved, in order to
keep the original logic as it was.

But I'm curious why max_recs_needed should be equal to
2 * extents_to_split + clusters_to_move. Does that mean that, in the
worst case, each moved cluster might form its own extent record?
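Just to confirm I am reading the new ordering correctly, here is a
simplified sketch of ocfs2_defrag_extent() after this patch. Error
paths, the tl_inode mutex handling, and the cleanup labels are omitted,
so this only illustrates the lock ordering, not the actual code:

	/*
	 * 1. Reserve metadata blocks / extent records only.  After this
	 *    patch the global bitmap inode is NOT locked here.
	 */
	ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
						     *len, 1,
						     &context->meta_ac,
						     extra_blocks, &credits);

	/*
	 * 2. Flush the truncate log if it is full.  This takes and drops
	 *    inode_lock on GLOBAL_BITMAP_SYSTEM_INODE internally, which
	 *    is exactly why nothing else may hold that lock yet.
	 */
	ret = __ocfs2_flush_truncate_log(osb);

	/*
	 * 3. Only now reserve the data clusters.  This may take the
	 *    global bitmap inode_lock and keep it held for the rest of
	 *    the operation.
	 */
	ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);

	handle = ocfs2_start_trans(osb, credits);

With this ordering, step 2 can never block on the lock acquired in
step 3, so the deadlock in the changelog cannot occur.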
Thanks,
Larry

> Thanks,
> Changwei
>
>> Thanks,
>> Joseph
>>>  					    u32 extents_to_split,
>>>  					    struct ocfs2_alloc_context **meta_ac,
>>> -					    struct ocfs2_alloc_context **data_ac,
>>>  					    int extra_blocks,
>>>  					    int *credits)
>>>  {
>>> @@ -192,13 +188,6 @@ static int ocfs2_lock_allocators_move_extents(struct inode *inode,
>>>  		goto out;
>>>  	}
>>>
>>> -	if (data_ac) {
>>> -		ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac);
>>> -		if (ret) {
>>> -			mlog_errno(ret);
>>> -			goto out;
>>> -		}
>>> -	}
>>>
>>>  	*credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el);
>>>
>>> @@ -257,10 +246,10 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>  		}
>>>  	}
>>>
>>> -	ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1,
>>> -						 &context->meta_ac,
>>> -						 &context->data_ac,
>>> -						 extra_blocks, &credits);
>>> +	ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>> +						     *len, 1,
>>> +						     &context->meta_ac,
>>> +						     extra_blocks, &credits);
>>>  	if (ret) {
>>>  		mlog_errno(ret);
>>>  		goto out;
>>> @@ -283,6 +272,21 @@ static int ocfs2_defrag_extent(struct ocfs2_move_extents_context *context,
>>>  		}
>>>  	}
>>>
>>> +	/*
>>> +	 * Make sure ocfs2_reserve_clusters() is called after
>>> +	 * __ocfs2_flush_truncate_log(); otherwise we may deadlock on
>>> +	 * the global bitmap inode.
>>> +	 */
>>> +	ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac);
>>> +	if (ret) {
>>> +		mlog_errno(ret);
>>> +		goto out_unlock_mutex;
>>> +	}
>>> +
>>>  	handle = ocfs2_start_trans(osb, credits);
>>>  	if (IS_ERR(handle)) {
>>>  		ret = PTR_ERR(handle);
>>> @@ -600,9 +604,10 @@ static int ocfs2_move_extent(struct ocfs2_move_extents_context *context,
>>>  		}
>>>  	}
>>>
>>> -	ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1,
>>> -						 &context->meta_ac,
>>> -						 NULL, extra_blocks, &credits);
>>> +	ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et,
>>> +						     len, 1,
>>> +						     &context->meta_ac,
>>> +						     extra_blocks, &credits);
>>>  	if (ret) {
>>>  		mlog_errno(ret);
>>>  		goto out;
>>>
>>
>> _______________________________________________
>> Ocfs2-devel mailing list
>> Ocfs2-devel@oss.oracle.com
>> https://oss.oracle.com/mailman/listinfo/ocfs2-devel
>>
>