From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "Joseph Qi", "Mark Fasheh",
	"Linus Torvalds", "alex chen", "Jun Piao", "Junxiao Bi",
	"Joel Becker"
Date: Thu, 07 Jun 2018 15:05:21 +0100
Subject: [PATCH 3.16 010/410] ocfs2: subsystem.su_mutex is required while accessing the item->ci_parent
X-Mailer: LinuxStableQueue (scripts by bwh)
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

3.16.57-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: alex chen <alex.chen@huawei.com>

commit 853bc26a7ea39e354b9f8889ae7ad1492ffa28d2 upstream.

The subsystem.su_mutex is required while accessing item->ci_parent;
otherwise a NULL pointer dereference of item->ci_parent is triggered
in the following situation:

add node                       delete node
sys_write
 vfs_write
  configfs_write_file
   o2nm_node_store
    o2nm_node_local_write
                               do_rmdir
                                vfs_rmdir
                                 configfs_rmdir
                                  mutex_lock(&subsys->su_mutex);
                                  unlink_obj
                                   item->ci_group = NULL;
                                   item->ci_parent = NULL;
     to_o2nm_cluster_from_node
      node->nd_item.ci_parent->ci_parent
      BUG due to NULL pointer dereference of nd_item.ci_parent

Moreover, access to the o2nm_cluster should also be protected by
subsystem.su_mutex.

[alex.chen@huawei.com: v2]
  Link: http://lkml.kernel.org/r/59EEAA69.9080703@huawei.com
Link: http://lkml.kernel.org/r/59E9B36A.10700@huawei.com
Signed-off-by: Alex Chen
Reviewed-by: Jun Piao
Reviewed-by: Joseph Qi
Cc: Mark Fasheh
Cc: Joel Becker
Cc: Junxiao Bi
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings
---
 fs/ocfs2/cluster/nodemanager.c | 63 +++++++++++++++++++++++++++++-----
 1 file changed, 55 insertions(+), 8 deletions(-)
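As an aside for reviewers, and not part of the applied patch: the
locking rule the fix enforces is easiest to see in a minimal
standalone userspace sketch.  Everything below is illustrative only;
pthread_mutex_t stands in for subsystem.su_mutex, and the struct and
function names are invented, not the kernel's:

/*
 * Illustrative userspace model only -- NOT kernel code.  A mutex
 * (standing in for subsystem.su_mutex) protects a parent pointer
 * (standing in for item->ci_parent) that the teardown path clears.
 */
#include <pthread.h>
#include <stdio.h>

struct cluster { int cl_has_local; };

struct item {
	struct cluster *parent;		/* like nd_item.ci_parent */
};

static pthread_mutex_t su_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct cluster the_cluster;
static struct item the_item = { .parent = &the_cluster };

/* Model of configfs_rmdir(): unlink the item under su_mutex. */
static void *rmdir_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&su_mutex);
	the_item.parent = NULL;			/* unlink_obj() */
	pthread_mutex_unlock(&su_mutex);
	return NULL;
}

/* Model of the fixed store path: re-resolve the parent under the
 * mutex and bail out if the item has already been unlinked. */
static int store_path(void)
{
	int ret = 0;

	pthread_mutex_lock(&su_mutex);
	if (!the_item.parent) {			/* cluster == NULL */
		ret = -1;			/* -EINVAL in the patch */
		goto out;
	}
	the_item.parent->cl_has_local = 1;	/* safe: rmdir is excluded */
out:
	pthread_mutex_unlock(&su_mutex);
	return ret;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, rmdir_path, NULL);
	pthread_join(t, NULL);
	printf("store after rmdir: %d (no crash)\n", store_path());
	return 0;
}

The point mirrored from the patch: the teardown path clears the parent
pointer under the mutex, so each writer must re-resolve and NULL-check
that pointer under the same mutex instead of caching it beforehand.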
--- a/fs/ocfs2/cluster/nodemanager.c
+++ b/fs/ocfs2/cluster/nodemanager.c
@@ -40,6 +40,9 @@ char *o2nm_fence_method_desc[O2NM_FENCE_
 	"panic",	/* O2NM_FENCE_PANIC */
 };
 
+static inline void o2nm_lock_subsystem(void);
+static inline void o2nm_unlock_subsystem(void);
+
 struct o2nm_node *o2nm_get_node_by_num(u8 node_num)
 {
 	struct o2nm_node *node = NULL;
@@ -181,7 +184,10 @@ static struct o2nm_cluster *to_o2nm_clus
 {
 	/* through the first node_set .parent
 	 * mycluster/nodes/mynode == o2nm_cluster->o2nm_node_group->o2nm_node */
-	return to_o2nm_cluster(node->nd_item.ci_parent->ci_parent);
+	if (node->nd_item.ci_parent)
+		return to_o2nm_cluster(node->nd_item.ci_parent->ci_parent);
+	else
+		return NULL;
 }
 
 enum {
@@ -194,7 +200,7 @@ enum {
 static ssize_t o2nm_node_num_write(struct o2nm_node *node, const char *page,
 				   size_t count)
 {
-	struct o2nm_cluster *cluster = to_o2nm_cluster_from_node(node);
+	struct o2nm_cluster *cluster;
 	unsigned long tmp;
 	char *p = (char *)page;
 
@@ -213,6 +219,13 @@ static ssize_t o2nm_node_num_write(struc
 	    !test_bit(O2NM_NODE_ATTR_PORT, &node->nd_set_attributes))
 		return -EINVAL; /* XXX */
 
+	o2nm_lock_subsystem();
+	cluster = to_o2nm_cluster_from_node(node);
+	if (!cluster) {
+		o2nm_unlock_subsystem();
+		return -EINVAL;
+	}
+
 	write_lock(&cluster->cl_nodes_lock);
 	if (cluster->cl_nodes[tmp])
 		p = NULL;
@@ -222,6 +235,8 @@ static ssize_t o2nm_node_num_write(struc
 		set_bit(tmp, cluster->cl_nodes_bitmap);
 	}
 	write_unlock(&cluster->cl_nodes_lock);
+	o2nm_unlock_subsystem();
+
 	if (p == NULL)
 		return -EEXIST;
 
@@ -261,7 +276,7 @@ static ssize_t o2nm_node_ipv4_address_wr
 					    const char *page,
 					    size_t count)
 {
-	struct o2nm_cluster *cluster = to_o2nm_cluster_from_node(node);
+	struct o2nm_cluster *cluster;
 	int ret, i;
 	struct rb_node **p, *parent;
 	unsigned int octets[4];
@@ -278,6 +293,13 @@ static ssize_t o2nm_node_ipv4_address_wr
 		be32_add_cpu(&ipv4_addr, octets[i] << (i * 8));
 	}
 
+	o2nm_lock_subsystem();
+	cluster = to_o2nm_cluster_from_node(node);
+	if (!cluster) {
+		o2nm_unlock_subsystem();
+		return -EINVAL;
+	}
+
 	ret = 0;
 	write_lock(&cluster->cl_nodes_lock);
 	if (o2nm_node_ip_tree_lookup(cluster, ipv4_addr, &p, &parent))
@@ -287,6 +309,8 @@ static ssize_t o2nm_node_ipv4_address_wr
 		rb_insert_color(&node->nd_ip_node, &cluster->cl_node_ip_tree);
 	}
 	write_unlock(&cluster->cl_nodes_lock);
+	o2nm_unlock_subsystem();
+
 	if (ret)
 		return ret;
 
@@ -303,7 +327,7 @@ static ssize_t o2nm_node_local_read(stru
 static ssize_t o2nm_node_local_write(struct o2nm_node *node, const char *page,
 				     size_t count)
 {
-	struct o2nm_cluster *cluster = to_o2nm_cluster_from_node(node);
+	struct o2nm_cluster *cluster;
 	unsigned long tmp;
 	char *p = (char *)page;
 	ssize_t ret;
@@ -321,17 +345,26 @@ static ssize_t o2nm_node_local_write(str
 	    !test_bit(O2NM_NODE_ATTR_PORT, &node->nd_set_attributes))
 		return -EINVAL; /* XXX */
 
+	o2nm_lock_subsystem();
+	cluster = to_o2nm_cluster_from_node(node);
+	if (!cluster) {
+		ret = -EINVAL;
+		goto out;
+	}
+
 	/* the only failure case is trying to set a new local node
 	 * when a different one is already set */
 	if (tmp && tmp == cluster->cl_has_local &&
-	    cluster->cl_local_node != node->nd_num)
-		return -EBUSY;
+	    cluster->cl_local_node != node->nd_num) {
+		ret = -EBUSY;
+		goto out;
+	}
 
 	/* bring up the rx thread if we're setting the new local node. */
 	if (tmp && !cluster->cl_has_local) {
 		ret = o2net_start_listening(node);
 		if (ret)
-			return ret;
+			goto out;
 	}
 
 	if (!tmp && cluster->cl_has_local &&
@@ -346,7 +379,11 @@ static ssize_t o2nm_node_local_write(str
 		cluster->cl_local_node = node->nd_num;
 	}
 
-	return count;
+	ret = count;
+
+out:
+	o2nm_unlock_subsystem();
+	return ret;
 }
 
 struct o2nm_node_attribute {
@@ -889,6 +926,16 @@ static struct o2nm_cluster_group o2nm_cl
 	},
 };
 
+static inline void o2nm_lock_subsystem(void)
+{
+	mutex_lock(&o2nm_cluster_group.cs_subsys.su_mutex);
+}
+
+static inline void o2nm_unlock_subsystem(void)
+{
+	mutex_unlock(&o2nm_cluster_group.cs_subsys.su_mutex);
+}
+
 int o2nm_depend_item(struct config_item *item)
 {
 	return configfs_depend_item(&o2nm_cluster_group.cs_subsys, item);
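
One further illustrative note on the error paths above: once
o2nm_node_local_write() takes the subsystem mutex up front, every
failure return has to funnel through a single unlock point, which is
why the patch converts the early returns into "goto out".  A minimal
standalone sketch of that idiom, again with invented names and a plain
pthread mutex rather than the kernel's su_mutex:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* All exits go through "out", so the mutex taken at the top is
 * released exactly once no matter which check fails. */
static long store_value(int valid, int busy, long count)
{
	long ret;

	pthread_mutex_lock(&lock);
	if (!valid) {
		ret = -EINVAL;	/* a bare return here would leak the lock */
		goto out;
	}
	if (busy) {
		ret = -EBUSY;
		goto out;
	}
	ret = count;		/* success: report bytes consumed */
out:
	pthread_mutex_unlock(&lock);
	return ret;
}

int main(void)
{
	printf("%ld %ld %ld\n", store_value(0, 0, 5),
	       store_value(1, 1, 5), store_value(1, 0, 5));
	return 0;
}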