From: Miles Chen
To: Johannes Weiner, Michal Hocko, Vladimir Davydov
Cc: Miles Chen
Subject: [PATCH v3] mm: memcontrol: fix use after free in mem_cgroup_iter()
Date: Sat, 27 Jul 2019 08:00:02 +0800
Message-ID: <20190727000002.17844-1-miles.chen@mediatek.com>
X-Mailer: git-send-email 2.18.0
X-Mailing-List: linux-kernel@vger.kernel.org

This patch reports a use-after-free in mem_cgroup_iter() that survives
commit be2657752e9e ("mm: memcg: fix use after free in
mem_cgroup_iter()"). I work with the Android kernel trees (4.9 & 4.14),
and commit be2657752e9e has been merged into both trees. However, I can
still observe the use-after-free issues addressed by that commit
(on low-end devices, a few times this month).

backtrace:
        css_tryget <- crash here
        mem_cgroup_iter
        shrink_node
        shrink_zones
        do_try_to_free_pages
        try_to_free_pages
        __perform_reclaim
        __alloc_pages_direct_reclaim
        __alloc_pages_slowpath
        __alloc_pages_nodemask

To debug, I poisoned mem_cgroup before freeing it:

static void __mem_cgroup_free(struct mem_cgroup *memcg)
{
        int node;

        for_each_node(node)
                free_mem_cgroup_per_node_info(memcg, node);
        free_percpu(memcg->stat);
+       /* poison memcg before freeing it */
+       memset(memcg, 0x78, sizeof(struct mem_cgroup));
        kfree(memcg);
}

The coredump shows that position=0xdbbc2a00 has been freed:

(gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8]
$13 = {position = 0xdbbc2a00, generation = 0x2efd}

0xdbbc2a00:     0xdbbc2e00      0x00000000      0xdbbc2800      0x00000100
0xdbbc2a10:     0x00000200      0x78787878      0x00026218      0x00000000
0xdbbc2a20:     0xdcad6000      0x00000001      0x78787800      0x00000000
0xdbbc2a30:     0x78780000      0x00000000      0x0068fb84      0x78787878
0xdbbc2a40:     0x78787878      0x78787878      0x78787878      0xe3fa5cc0
0xdbbc2a50:     0x78787878      0x78787878      0x00000000      0x00000000
0xdbbc2a60:     0x00000000      0x00000000      0x00000000      0x00000000
0xdbbc2a70:     0x00000000      0x00000000      0x00000000      0x00000000
0xdbbc2a80:     0x00000000      0x00000000      0x00000000      0x00000000
0xdbbc2a90:     0x00000001      0x00000000      0x00000000      0x00100000
0xdbbc2aa0:     0x00000001      0xdbbc2ac8      0x00000000      0x00000000
0xdbbc2ab0:     0x00000000      0x00000000      0x00000000      0x00000000
0xdbbc2ac0:     0x00000000      0x00000000      0xe5b02618      0x00001000
0xdbbc2ad0:     0x00000000      0x78787878      0x78787878      0x78787878
0xdbbc2ae0:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2af0:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b00:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b10:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b20:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b30:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b40:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b50:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b60:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b70:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2b80:     0x78787878      0x78787878      0x00000000      0x78787878
0xdbbc2b90:     0x78787878      0x78787878      0x78787878      0x78787878
0xdbbc2ba0:     0x78787878      0x78787878      0x78787878      0x78787878

In the reclaim path, try_to_free_pages() does not set up
sc.target_mem_cgroup, and sc is passed down to do_try_to_free_pages(),
..., shrink_node(). In mem_cgroup_iter(), root is set to
root_mem_cgroup because sc->target_mem_cgroup is NULL, so it is
possible for a memcg to be assigned to root_mem_cgroup.nodeinfo.iter
in mem_cgroup_iter():

try_to_free_pages
        struct scan_control sc = {...};  /* target_mem_cgroup is NULL */
        do_try_to_free_pages
                shrink_zones
                        shrink_node
                                mem_cgroup *root = sc->target_mem_cgroup;
                                memcg = mem_cgroup_iter(root, NULL, &reclaim);
                                mem_cgroup_iter()
                                        if (!root)
                                                root = root_mem_cgroup;
                                        ...
                                        css = css_next_descendant_pre(css, &root->css);
                                        memcg = mem_cgroup_from_css(css);
                                        cmpxchg(&iter->position, pos, memcg);

My device uses memcg non-hierarchical mode. When we release a memcg,
invalidate_reclaim_iterators() reaches only dead_memcg and its parents.
In non-hierarchical mode, that walk never reaches root_mem_cgroup:

static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
{
        struct mem_cgroup *memcg = dead_memcg;

        for (; memcg; memcg = parent_mem_cgroup(memcg))
        ...
}
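To make the miss easier to see, here is a minimal userspace model of
the stale cache (illustration only: model_memcg, iter_position and
model_invalidate are names invented for this sketch, not kernel APIs).
It assumes, like the non-hierarchical case above, that the dying
memcg's parent chain does not include the root, so the parent-only
walk leaves the root's cached iterator pointing at freed memory:

/* Minimal model of the stale reclaim-iterator pointer. */
#include <stdio.h>
#include <stdatomic.h>

struct model_memcg {
        struct model_memcg *parent;     /* NULL in non-hierarchical mode */
        _Atomic(struct model_memcg *) iter_position;
};

/* Mirrors the old invalidate_reclaim_iterators(): parents only. */
static void model_invalidate(struct model_memcg *dead)
{
        for (struct model_memcg *m = dead; m; m = m->parent) {
                struct model_memcg *expected = dead;

                atomic_compare_exchange_strong(&m->iter_position,
                                               &expected,
                                               (struct model_memcg *)NULL);
        }
}

int main(void)
{
        struct model_memcg root = { .parent = NULL };
        struct model_memcg child = { .parent = NULL }; /* non-hierarchical */

        /* mem_cgroup_iter() caches the child in the root's iterator. */
        atomic_store(&root.iter_position, &child);

        /* The child dies; the walk starts at child and never sees root. */
        model_invalidate(&child);

        printf("root iter still points at freed child: %s\n",
               atomic_load(&root.iter_position) == &child ? "yes" : "no");
        return 0;
}

Built with gcc -std=c11, this prints "yes": the cached pointer survives
the invalidation walk, which is exactly the window the patch below
closes.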
So the use-after-free scenario looks like:

CPU1                                            CPU2
try_to_free_pages
do_try_to_free_pages
shrink_zones
shrink_node
mem_cgroup_iter()
    if (!root)
        root = root_mem_cgroup;
    ...
    css = css_next_descendant_pre(css, &root->css);
    memcg = mem_cgroup_from_css(css);
    cmpxchg(&iter->position, pos, memcg);

                                                invalidate_reclaim_iterators(memcg);
                                                ...
                                                __mem_cgroup_free()
                                                        kfree(memcg);

try_to_free_pages
do_try_to_free_pages
shrink_zones
shrink_node
mem_cgroup_iter()
    if (!root)
        root = root_mem_cgroup;
    ...
    mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id);
    iter = &mz->iter[reclaim->priority];
    pos = READ_ONCE(iter->position);
    css_tryget(&pos->css)  <- use after free

To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter
in invalidate_reclaim_iterators().

Changes since v1:
- Add a comment to explain why we need to handle root_mem_cgroup
  separately.
- Rename invalid_root to invalidate_root.

Changes since v2:
- Add Fixes tag.

Fixes: 5ac8fb31ad2e ("mm: memcontrol: convert reclaim iterator to simple css refcounting")
Cc: Johannes Weiner
Cc: Michal Hocko
Signed-off-by: Miles Chen
Acked-by: Michal Hocko
---
 mm/memcontrol.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cdbb7a84cb6e..7d079e862646 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1130,26 +1130,44 @@ void mem_cgroup_iter_break(struct mem_cgroup *root,
 		css_put(&prev->css);
 }
 
-static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+static void __invalidate_reclaim_iterators(struct mem_cgroup *from,
+					struct mem_cgroup *dead_memcg)
 {
-	struct mem_cgroup *memcg = dead_memcg;
 	struct mem_cgroup_reclaim_iter *iter;
 	struct mem_cgroup_per_node *mz;
 	int nid;
 	int i;
 
-	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
-		for_each_node(nid) {
-			mz = mem_cgroup_nodeinfo(memcg, nid);
-			for (i = 0; i <= DEF_PRIORITY; i++) {
-				iter = &mz->iter[i];
-				cmpxchg(&iter->position,
-					dead_memcg, NULL);
-			}
+	for_each_node(nid) {
+		mz = mem_cgroup_nodeinfo(from, nid);
+		for (i = 0; i <= DEF_PRIORITY; i++) {
+			iter = &mz->iter[i];
+			cmpxchg(&iter->position,
+				dead_memcg, NULL);
 		}
 	}
 }
 
+/*
+ * When cgroup1 non-hierarchy mode is used, parent_mem_cgroup() does
+ * not walk all the way up to the cgroup root (root_mem_cgroup). So
+ * we have to handle dead_memcg from the cgroup root separately.
+ */
+static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+{
+	struct mem_cgroup *memcg = dead_memcg;
+	int invalidate_root = 0;
+
+	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+		__invalidate_reclaim_iterators(memcg, dead_memcg);
+		if (memcg == root_mem_cgroup)
+			invalidate_root = 1;
+	}
+
+	if (!invalidate_root)
+		__invalidate_reclaim_iterators(root_mem_cgroup, dead_memcg);
+}
+
 /**
  * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy
  * @memcg: hierarchy root
-- 
2.18.0