Subject: Re: cgroup pointed by sock is leaked on mode switch
From: Zefan Li
To: Yang Yingliang, Tejun Heo
CC: "Libin (Huawei)", linux-kernel@vger.kernel.org
Date: Wed, 6 May 2020 10:16:27 +0800
Message-ID: <0a6ae984-e647-5ada-8849-3fa2fb994ff3@huawei.com>
References: <03dab6ab-0ffe-3cae-193f-a7f84e9b14c5@huawei.com> <20200505160639.GG12217@mtj.thefacebook.com>

On 2020/5/6 9:50, Yang Yingliang wrote:
> +cc lizefan@huawei.com
>
> On 2020/5/6 0:06, Tejun Heo wrote:
>> Hello, Yang.
>>
>> On Sat, May 02, 2020 at 06:27:21PM +0800, Yang Yingliang wrote:
>>> I find the number nr_dying_descendants is increasing:
>>>
>>> linux-dVpNUK:~ # find /sys/fs/cgroup/ -name cgroup.stat -exec grep '^nr_dying_descendants [^0]' {} +
>>> /sys/fs/cgroup/unified/cgroup.stat:nr_dying_descendants 80
>>> /sys/fs/cgroup/unified/system.slice/cgroup.stat:nr_dying_descendants 1
>>> /sys/fs/cgroup/unified/system.slice/system-hostos.slice/cgroup.stat:nr_dying_descendants 1
>>> /sys/fs/cgroup/unified/lxc/cgroup.stat:nr_dying_descendants 79
>>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/cgroup.stat:nr_dying_descendants 78
>>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/system.slice/cgroup.stat:nr_dying_descendants 78
>>
>> Those numbers are nowhere close to causing oom issues. There are some
>> aspects of page and other cache draining which are being improved, but
>> unless you're seeing numbers multiple orders of magnitude higher, this
>> isn't the source of your problem.
>>
>>> The situation is the same as the one commit bd1060a1d671 ("sock, cgroup:
>>> add sock->sk_cgroup") describes:
>>> "On mode switch, cgroup references which are already being pointed to by
>>> socks may be leaked."
>>
>> I'm doubtful that you're hitting that issue. Mode switching means memcg
>> being switched between cgroup1 and cgroup2 hierarchies, which is unlikely
>> to be what's happening when you're launching docker containers.
>>
>> The first step would be identifying where memory is going and finding out
>> whether memcg is actually being switched between cgroup1 and 2 - look at
>> the hierarchy number in /proc/cgroups; if that's switching between 0 and
>> something not zero, it is switching.

I think there's a bug here which can lead to an unlimited memory leak.
This should reproduce it:

# mount -t cgroup -o netprio xxx /cgroup/netprio
# mkdir /cgroup/netprio/xxx
# echo PID > /cgroup/netprio/xxx/tasks
/* the process with this PID does some network activity and then exits */
# rmdir /cgroup/netprio/xxx
/* now this cgroup will never be freed */

Look at the code:

static inline void sock_update_netprioidx(struct sock_cgroup_data *skcd)
{
	...
	sock_cgroup_set_prioidx(skcd, task_netprioidx(current));
}

static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
					   u16 prioidx)
{
	...
	if (sock_cgroup_prioidx(&skcd_buf) == prioidx)
		return;
	...
	skcd_buf.prioidx = prioidx;
	WRITE_ONCE(skcd->val, skcd_buf.val);
}

task_netprioidx() will be the cgrp id of xxx, which is not 1, but
sock_cgroup_prioidx(&skcd_buf) is 1 because the socket's sock_cgroup_data
is still in v2 (pointer) mode. Now we have a memory leak.
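
For reference, sock_cgroup_prioidx() falls back to the default prioidx
whenever the union still holds a v2 cgroup pointer. Quoting
include/linux/cgroup-defs.h roughly from memory, so double-check the
exact code against your tree:

static inline u16 sock_cgroup_prioidx(struct sock_cgroup_data *skcd)
{
	/* fallback to 1 which is always the ID of the root cgroup */
	return (skcd->is_data & 1) ? skcd->prioidx : 1;
}

So the early return in sock_cgroup_set_prioidx() is not taken, and the
WRITE_ONCE() clobbers the refcounted v2 cgroup pointer stored in
skcd->val; the reference taken by cgroup_sk_alloc() at socket creation
is then never put.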
I think the easiest fix is to do the mode switch here:

diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
index b905747..2397866 100644
--- a/net/core/netprio_cgroup.c
+++ b/net/core/netprio_cgroup.c
@@ -240,6 +240,8 @@ static void net_prio_attach(struct cgroup_taskset *tset)
 	struct task_struct *p;
 	struct cgroup_subsys_state *css;
 
+	cgroup_sk_alloc_disable();
+
 	cgroup_taskset_for_each(p, css, tset) {
 		void *v = (void *)(unsigned long)css->cgroup->id;
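
For context, cgroup_sk_alloc_disable() is the existing helper that
declares exactly this mode switch. Roughly, again from memory
(kernel/cgroup/cgroup.c):

static bool cgroup_sk_alloc_disabled __read_mostly;

void cgroup_sk_alloc_disable(void)
{
	if (cgroup_sk_alloc_disabled)
		return;

	pr_info("cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation\n");
	cgroup_sk_alloc_disabled = true;
}

Once the flag is set, cgroup_sk_alloc() stops storing refcounted v2
cgroup pointers in sock_cgroup_data for new sockets, so the unbounded
leak degrades into the known one-time mode-switch leak that
bd1060a1d671 already documents. IIRC write_priomap() already makes this
call; the cgroup1 tasks-file attach path was just missing the same call.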