Date: Tue, 30 Jul 2019 09:27:24 +0200
From: Michal Hocko
To: Miles Chen
Cc: Johannes Weiner, Vladimir Davydov, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-mediatek@lists.infradead.org, wsd_upstream@mediatek.com,
	Andrew Morton
Subject: Re: [PATCH v4] mm: memcontrol: fix use after free in mem_cgroup_iter()
Message-ID: <20190730072724.GM9330@dhcp22.suse.cz>
References: <20190730015729.4406-1-miles.chen@mediatek.com>
In-Reply-To: <20190730015729.4406-1-miles.chen@mediatek.com>
X-Mailing-List: linux-kernel@vger.kernel.org

[Cc Andrew to pick up the patch]

On Tue 30-07-19 09:57:29, Miles Chen wrote:
> This patch reports a use-after-free
> in mem_cgroup_iter()
> after merging commit be2657752e9e ("mm: memcg: fix use after free in
> mem_cgroup_iter()").
>
> I work with the Android kernel trees (4.9 & 4.14), and commit
> be2657752e9e ("mm: memcg: fix use after free in mem_cgroup_iter()") has
> been merged into those trees. However, I can still observe the
> use-after-free issue addressed by that commit
> (on low-end devices, a few times this month).
>
> backtrace:
> 	css_tryget <- crash here
> 	mem_cgroup_iter
> 	shrink_node
> 	shrink_zones
> 	do_try_to_free_pages
> 	try_to_free_pages
> 	__perform_reclaim
> 	__alloc_pages_direct_reclaim
> 	__alloc_pages_slowpath
> 	__alloc_pages_nodemask
>
> To debug, I poisoned mem_cgroup before freeing it:
>
> static void __mem_cgroup_free(struct mem_cgroup *memcg)
> {
> 	for_each_node(node)
> 		free_mem_cgroup_per_node_info(memcg, node);
> 	free_percpu(memcg->stat);
> +	/* poison memcg before freeing it */
> +	memset(memcg, 0x78, sizeof(struct mem_cgroup));
> 	kfree(memcg);
> }
>
> The coredump shows that position=0xdbbc2a00 is freed.
>
> (gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8]
> $13 = {position = 0xdbbc2a00, generation = 0x2efd}
>
> 0xdbbc2a00:	0xdbbc2e00	0x00000000	0xdbbc2800	0x00000100
> 0xdbbc2a10:	0x00000200	0x78787878	0x00026218	0x00000000
> 0xdbbc2a20:	0xdcad6000	0x00000001	0x78787800	0x00000000
> 0xdbbc2a30:	0x78780000	0x00000000	0x0068fb84	0x78787878
> 0xdbbc2a40:	0x78787878	0x78787878	0x78787878	0xe3fa5cc0
> 0xdbbc2a50:	0x78787878	0x78787878	0x00000000	0x00000000
> 0xdbbc2a60:	0x00000000	0x00000000	0x00000000	0x00000000
> 0xdbbc2a70:	0x00000000	0x00000000	0x00000000	0x00000000
> 0xdbbc2a80:	0x00000000	0x00000000	0x00000000	0x00000000
> 0xdbbc2a90:	0x00000001	0x00000000	0x00000000	0x00100000
> 0xdbbc2aa0:	0x00000001	0xdbbc2ac8	0x00000000	0x00000000
> 0xdbbc2ab0:	0x00000000	0x00000000	0x00000000	0x00000000
> 0xdbbc2ac0:	0x00000000	0x00000000	0xe5b02618	0x00001000
> 0xdbbc2ad0:	0x00000000	0x78787878	0x78787878	0x78787878
> 0xdbbc2ae0:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2af0:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b00:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b10:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b20:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b30:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b40:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b50:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b60:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b70:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2b80:	0x78787878	0x78787878	0x00000000	0x78787878
> 0xdbbc2b90:	0x78787878	0x78787878	0x78787878	0x78787878
> 0xdbbc2ba0:	0x78787878	0x78787878	0x78787878	0x78787878
>
> In the reclaim path, try_to_free_pages() does not set up
> sc.target_mem_cgroup, and sc is passed to do_try_to_free_pages(), ...,
> shrink_node().
>
> In mem_cgroup_iter(), root is set to root_mem_cgroup because
> sc->target_mem_cgroup is NULL.
> It is possible to assign a memcg to root_mem_cgroup.nodeinfo.iter in
> mem_cgroup_iter().
>
> try_to_free_pages
> 	struct scan_control sc = {...}; /* target_mem_cgroup is 0x0 */
> 	do_try_to_free_pages
> 	shrink_zones
> 	shrink_node
> 		mem_cgroup *root = sc->target_mem_cgroup;
> 		memcg = mem_cgroup_iter(root, NULL, &reclaim);
> 		mem_cgroup_iter()
> 			if (!root)
> 				root = root_mem_cgroup;
> 			...
> 			css = css_next_descendant_pre(css, &root->css);
> 			memcg = mem_cgroup_from_css(css);
> 			cmpxchg(&iter->position, pos, memcg);
>
> My device uses memcg non-hierarchical mode.
> When we release a memcg, invalidate_reclaim_iterators() reaches only
> dead_memcg and its parents. If non-hierarchical mode is used,
> invalidate_reclaim_iterators() never reaches root_mem_cgroup.
>
> static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
> {
> 	struct mem_cgroup *memcg = dead_memcg;
>
> 	for (; memcg; memcg = parent_mem_cgroup(memcg))
> 		...
> }
>
> So the use-after-free scenario looks like:
>
> CPU1						CPU2
>
> try_to_free_pages
> do_try_to_free_pages
> shrink_zones
> shrink_node
> mem_cgroup_iter()
>     if (!root)
>         root = root_mem_cgroup;
>     ...
>     css = css_next_descendant_pre(css, &root->css);
>     memcg = mem_cgroup_from_css(css);
>     cmpxchg(&iter->position, pos, memcg);
>
> 						invalidate_reclaim_iterators(memcg);
> 						...
> 						__mem_cgroup_free()
> 							kfree(memcg);
>
> try_to_free_pages
> do_try_to_free_pages
> shrink_zones
> shrink_node
> mem_cgroup_iter()
>     if (!root)
>         root = root_mem_cgroup;
>     ...
>     mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id);
>     iter = &mz->iter[reclaim->priority];
>     pos = READ_ONCE(iter->position);
>     css_tryget(&pos->css) <- use after free
>
> To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter in
> invalidate_reclaim_iterators().
>
> Change since v1:
> Add a comment to explain why we need to handle root_mem_cgroup separately.
> Rename invalid_root to invalidate_root.
>
> Change since v2:
> Add fix tag
>
> Change since v3:
> Remove confusing 'invalidate_root', make the code easier to read
>
> Fixes: 5ac8fb31ad2e ("mm: memcontrol: convert reclaim iterator to simple css refcounting")
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Signed-off-by: Miles Chen

Acked-by: Michal Hocko

> ---
>  mm/memcontrol.c | 39 +++++++++++++++++++++++++++++----------
>  1 file changed, 29 insertions(+), 10 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index cdbb7a84cb6e..8a2a2d5cfc26 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1130,26 +1130,45 @@ void mem_cgroup_iter_break(struct mem_cgroup *root,
>  		css_put(&prev->css);
>  }
>  
> -static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
> +static void __invalidate_reclaim_iterators(struct mem_cgroup *from,
> +					struct mem_cgroup *dead_memcg)
>  {
> -	struct mem_cgroup *memcg = dead_memcg;
>  	struct mem_cgroup_reclaim_iter *iter;
>  	struct mem_cgroup_per_node *mz;
>  	int nid;
>  	int i;
>  
> -	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
> -		for_each_node(nid) {
> -			mz = mem_cgroup_nodeinfo(memcg, nid);
> -			for (i = 0; i <= DEF_PRIORITY; i++) {
> -				iter = &mz->iter[i];
> -				cmpxchg(&iter->position,
> -					dead_memcg, NULL);
> -			}
> +	for_each_node(nid) {
> +		mz = mem_cgroup_nodeinfo(from, nid);
> +		for (i = 0; i <= DEF_PRIORITY; i++) {
> +			iter = &mz->iter[i];
> +			cmpxchg(&iter->position,
> +				dead_memcg, NULL);
>  		}
>  	}
>  }
>  
> +static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
> +{
> +	struct mem_cgroup *memcg = dead_memcg;
> +	struct mem_cgroup *last;
> +
> +	do {
> +		__invalidate_reclaim_iterators(memcg, dead_memcg);
> +		last = memcg;
> +	} while ((memcg = parent_mem_cgroup(memcg)));
> +
> +	/*
> +	 * When cgroup1 non-hierarchy mode is used,
> +	 * parent_mem_cgroup() does not walk all the way up to the
> +	 * cgroup root (root_mem_cgroup). So we have to handle
> +	 * dead_memcg from the cgroup root separately.
> +	 */
> +	if (last != root_mem_cgroup)
> +		__invalidate_reclaim_iterators(root_mem_cgroup,
> +						dead_memcg);
> +}
> +
>  /**
>   * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy
>   * @memcg: hierarchy root
> -- 
> 2.18.0

-- 
Michal Hocko
SUSE Labs