From: Mark Fasheh
To: linux-kernel@vger.kernel.org
Cc: ocfs2-devel@oss.oracle.com, Joel Becker, Sunil Mushran, Mark Fasheh
Subject: [PATCH 22/32] ocfs2/dlm: Indent dlm_cleanup_master_list()
Date: Tue, 17 Mar 2009 13:01:12 -0700
Message-Id: <1237320082-12921-23-git-send-email-mfasheh@suse.com>
X-Mailer: git-send-email 1.5.6
In-Reply-To: <1237320082-12921-1-git-send-email-mfasheh@suse.com>
References: <1237320082-12921-1-git-send-email-mfasheh@suse.com>

From: Sunil Mushran

The previous patch explicitly did not indent dlm_cleanup_master_list()
so as to make the patch readable. This patch properly indents the
function.

Signed-off-by: Sunil Mushran
Signed-off-by: Mark Fasheh
---
 fs/ocfs2/dlm/dlmmaster.c |  106 ++++++++++++++++++++++-----------------------
 1 files changed, 52 insertions(+), 54 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
index 8045581..604552e 100644
--- a/fs/ocfs2/dlm/dlmmaster.c
+++ b/fs/ocfs2/dlm/dlmmaster.c
@@ -3320,66 +3320,64 @@ top:
 			mle = hlist_entry(list, struct dlm_master_list_entry,
 					  master_hash_node);
 
-		BUG_ON(mle->type != DLM_MLE_BLOCK &&
-		       mle->type != DLM_MLE_MASTER &&
-		       mle->type != DLM_MLE_MIGRATION);
-
-		/* MASTER mles are initiated locally. the waiting
-		 * process will notice the node map change
-		 * shortly. let that happen as normal. */
-		if (mle->type == DLM_MLE_MASTER)
-			continue;
-
+			BUG_ON(mle->type != DLM_MLE_BLOCK &&
+			       mle->type != DLM_MLE_MASTER &&
+			       mle->type != DLM_MLE_MIGRATION);
+
+			/* MASTER mles are initiated locally. The waiting
+			 * process will notice the node map change shortly.
+			 * Let that happen as normal. */
+			if (mle->type == DLM_MLE_MASTER)
+				continue;
+
+			/* BLOCK mles are initiated by other nodes. Need to
+			 * clean up if the dead node would have been the
+			 * master. */
+			if (mle->type == DLM_MLE_BLOCK) {
+				dlm_clean_block_mle(dlm, mle, dead_node);
+				continue;
+			}
 
-		/* BLOCK mles are initiated by other nodes.
-		 * need to clean up if the dead node would have
-		 * been the master. */
-		if (mle->type == DLM_MLE_BLOCK) {
-			dlm_clean_block_mle(dlm, mle, dead_node);
-			continue;
+			/* Everything else is a MIGRATION mle */
+
+			/* The rule for MIGRATION mles is that the master
+			 * becomes UNKNOWN if *either* the original or the new
+			 * master dies. All UNKNOWN lockres' are sent to
+			 * whichever node becomes the recovery master. The new
+			 * master is responsible for determining if there is
+			 * still a master for this lockres, or if he needs to
+			 * take over mastery. Either way, this node should
+			 * expect another message to resolve this. */
+
+			if (mle->master != dead_node &&
+			    mle->new_master != dead_node)
+				continue;
+
+			/* If we have reached this point, this mle needs to be
+			 * removed from the list and freed. */
+			dlm_clean_migration_mle(dlm, mle);
+
+			mlog(0, "%s: node %u died during migration from "
+			     "%u to %u!\n", dlm->name, dead_node, mle->master,
+			     mle->new_master);
+
+			/* If we find a lockres associated with the mle, we've
+			 * hit this rare case that messes up our lock ordering.
+			 * If so, we need to drop the master lock so that we can
+			 * take the lockres lock, meaning that we will have to
+			 * restart from the head of list. */
+			res = dlm_reset_mleres_owner(dlm, mle);
+			if (res)
+				/* restart */
+				goto top;
+
+			/* This may be the last reference */
+			__dlm_put_mle(mle);
 		}
-
-		/* everything else is a MIGRATION mle */
-
-		/* the rule for MIGRATION mles is that the master
-		 * becomes UNKNOWN if *either* the original or
-		 * the new master dies. all UNKNOWN lockreses
-		 * are sent to whichever node becomes the recovery
-		 * master. the new master is responsible for
-		 * determining if there is still a master for
-		 * this lockres, or if he needs to take over
-		 * mastery. either way, this node should expect
-		 * another message to resolve this. */
-		if (mle->master != dead_node &&
-		    mle->new_master != dead_node)
-			continue;
-
-		/* if we have reached this point, this mle needs to
-		 * be removed from the list and freed. */
-		dlm_clean_migration_mle(dlm, mle);
-
-		mlog(0, "%s: node %u died during migration from "
-		     "%u to %u!\n", dlm->name, dead_node,
-		     mle->master, mle->new_master);
-
-		/* If we find a lockres associated with the mle, we've
-		 * hit this rare case that messes up our lock ordering.
-		 * If so, we need to drop the master lock so that we can
-		 * take the lockres lock, meaning that we will have to
-		 * restart from the head of list. */
-		res = dlm_reset_mleres_owner(dlm, mle);
-		if (res)
-			/* restart */
-			goto top;
-
-		/* this may be the last reference */
-		__dlm_put_mle(mle);
-	}
 	}
 	spin_unlock(&dlm->master_lock);
 }
 
-
 int dlm_finish_migration(struct dlm_ctxt *dlm, struct dlm_lock_resource *res,
 			 u8 old_master)
 {
-- 
1.5.6
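
[Editorial note, not part of the patch.] The hunk above is whitespace-only, but it
helps to keep the loop's control flow in mind while reading it. The stand-alone
sketch below condenses the per-mle decision described by the comments in the hunk:
skip MASTER mles, hand BLOCK mles to dlm_clean_block_mle(), and tear down a
MIGRATION mle only when the dead node is its original or new master. The enum,
helper and node numbers here are illustrative stand-ins, not the kernel's own
definitions, and the master_lock, the dlm_reset_mleres_owner() re-lookup and the
"goto top" restart are all omitted.

#include <stdio.h>

/* Illustrative stand-ins for the three mle types; in the kernel these are
 * DLM_MLE_BLOCK, DLM_MLE_MASTER and DLM_MLE_MIGRATION. */
enum mle_type { MLE_BLOCK, MLE_MASTER, MLE_MIGRATION };

/* What the cleanup loop does with one mle after dead_node has died. */
enum mle_action {
	MLE_SKIP,		/* leave it alone; handled elsewhere */
	MLE_CLEAN_BLOCK,	/* hand off to the BLOCK cleanup path */
	MLE_CLEAN_MIGRATION,	/* remove and free the MIGRATION mle */
};

/* Condensed version of the dispatch in the re-indented loop: MASTER mles are
 * initiated locally and the waiting process notices the node map change on
 * its own; BLOCK mles go to the BLOCK cleanup helper (dlm_clean_block_mle()
 * in the kernel); MIGRATION mles are cleaned up only if either the original
 * master or the migration target is the node that died. */
static enum mle_action mle_cleanup_action(enum mle_type type,
					  unsigned char master,
					  unsigned char new_master,
					  unsigned char dead_node)
{
	if (type == MLE_MASTER)
		return MLE_SKIP;
	if (type == MLE_BLOCK)
		return MLE_CLEAN_BLOCK;
	if (master != dead_node && new_master != dead_node)
		return MLE_SKIP;
	return MLE_CLEAN_MIGRATION;
}

int main(void)
{
	/* A migration from node 3 to node 7; node 7 dies mid-migration,
	 * so the mle must be removed and recovery resolves ownership. */
	enum mle_action act = mle_cleanup_action(MLE_MIGRATION, 3, 7, 7);

	printf("%s\n", act == MLE_CLEAN_MIGRATION ?
	       "clean up migration mle" : "skip");
	return 0;
}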