Subject: Re: cgroup pointed by sock is leaked on mode switch
From: Zefan Li
To: Yang Yingliang, Tejun Heo
CC: "Libin (Huawei)"
References: <03dab6ab-0ffe-3cae-193f-a7f84e9b14c5@huawei.com>
 <20200505160639.GG12217@mtj.thefacebook.com>
 <0a6ae984-e647-5ada-8849-3fa2fb994ff3@huawei.com>
In-Reply-To: <0a6ae984-e647-5ada-8849-3fa2fb994ff3@huawei.com>
Message-ID: <1edd6b6c-ab3c-6a51-6460-6f5d7f37505e@huawei.com>
Date: Wed, 6 May 2020 15:51:31 +0800
List-ID: linux-kernel@vger.kernel.org
On 2020/5/6 10:16, Zefan Li wrote:
> On 2020/5/6 9:50, Yang Yingliang wrote:
>> +cc lizefan@huawei.com
>>
>> On 2020/5/6 0:06, Tejun Heo wrote:
>>> Hello, Yang.
>>>
>>> On Sat, May 02, 2020 at 06:27:21PM +0800, Yang Yingliang wrote:
>>>> I find the number nr_dying_descendants is increasing:
>>>>
>>>> linux-dVpNUK:~ # find /sys/fs/cgroup/ -name cgroup.stat -exec grep '^nr_dying_descendants [^0]' {} +
>>>> /sys/fs/cgroup/unified/cgroup.stat:nr_dying_descendants 80
>>>> /sys/fs/cgroup/unified/system.slice/cgroup.stat:nr_dying_descendants 1
>>>> /sys/fs/cgroup/unified/system.slice/system-hostos.slice/cgroup.stat:nr_dying_descendants 1
>>>> /sys/fs/cgroup/unified/lxc/cgroup.stat:nr_dying_descendants 79
>>>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/cgroup.stat:nr_dying_descendants 78
>>>> /sys/fs/cgroup/unified/lxc/5f1fdb8c54fa40c3e599613dab6e4815058b76ebada8a27bc1fe80c0d4801764/system.slice/cgroup.stat:nr_dying_descendants 78
>>>
>>> Those numbers are nowhere close to causing oom issues. There are some
>>> aspects of page and other cache draining which are being improved, but
>>> unless you're seeing numbers multiple orders of magnitude higher, this
>>> isn't the source of your problem.
>>>
>>>> The situation is the same as the one that commit bd1060a1d671 ("sock,
>>>> cgroup: add sock->sk_cgroup") describes:
>>>> "On mode switch, cgroup references which are already being pointed to by
>>>> socks may be leaked."
>>>
>>> I'm doubtful that you're hitting that issue. Mode switching means memcg
>>> being switched between cgroup1 and cgroup2 hierarchies, which is unlikely
>>> to be what's happening when you're launching docker containers.
>>>
>>> The first step would be identifying where memory is going and finding out
>>> whether memcg is actually being switched between cgroup1 and 2 - look at
>>> the hierarchy number in /proc/cgroups; if that's switching between 0 and
>>> something not zero, it is switching.
>>>
>
> I think there's a bug here which can lead to an unlimited memory leak.
> This should reproduce the bug:
>
>    # mount -t cgroup -o netprio xxx /cgroup/netprio
>    # mkdir /cgroup/netprio/xxx
>    # echo PID > /cgroup/netprio/xxx/tasks
>    /* this PID process starts to do some network thing and then exits */
>    # rmdir /cgroup/netprio/xxx
>    /* now this cgroup will never be freed */
>

Correction (still not tested):

# mount -t cgroup2 none /cgroup/v2
# mkdir /cgroup/v2/xxx
# echo PID > /cgroup/v2/xxx/cgroup.procs
/* this PID process starts to do some network thing */

# mount -t cgroup -o netprio xxx /cgroup/netprio
# mkdir /cgroup/netprio/xxx
# echo PID > /cgroup/netprio/xxx/tasks
...
/* the PID process exits */

# rmdir /cgroup/netprio/xxx
# rmdir /cgroup/v2/xxx
/* now it looks like this v2 cgroup will never be freed */

> Look at the code:
>
> static inline void sock_update_netprioidx(struct sock_cgroup_data *skcd)
> {
>     ...
>     sock_cgroup_set_prioidx(skcd, task_netprioidx(current));
> }
>
> static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
>                     u16 prioidx)
> {
>     ...
>     if (sock_cgroup_prioidx(&skcd_buf) == prioidx)
>         return;
>     ...
>     skcd_buf.prioidx = prioidx;
>     WRITE_ONCE(skcd->val, skcd_buf.val);
> }
>
> task_netprioidx() will be the cgrp id of xxx, which is not 1, but
> sock_cgroup_prioidx(&skcd_buf) is 1 because it thinks it is in v2 mode.
> Now we have a memory leak.
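
To make the failure mode easier to see without digging through
include/linux/cgroup-defs.h, here is a small userspace stand-in for
sock_cgroup_data (illustration only: the union layout and the fake_cgroup /
skcd_* names below are made up for this sketch, they are not the kernel's).
It just mimics the behaviour described above: in pointer mode the prioidx
reads as the default 1, so storing a real prioidx falls through and zeroes
the word that used to hold the cgroup pointer, without the reference it
stands for ever being put.

/*
 * Userspace sketch, NOT kernel code: one word is either a cgroup pointer
 * (cgroup2 socket matching) or packed v1 data, and switching to v1 data
 * zeroes the pointer without dropping the reference behind it.
 *
 * Build: gcc -o skcd-sketch skcd-sketch.c
 */
#include <stdint.h>
#include <stdio.h>

struct fake_cgroup { int refcnt; };        /* stand-in for struct cgroup */

union skcd {                               /* stand-in for sock_cgroup_data */
	struct {
		uint64_t is_data:1;        /* 0: pointer mode, 1: v1 data */
		uint64_t prioidx:16;
		uint64_t rest:47;
	};
	uint64_t val;
};

static uint16_t skcd_prioidx(const union skcd *s)
{
	/* pointer mode reads as the default prioidx 1 */
	return s->is_data ? s->prioidx : 1;
}

static void skcd_set_prioidx(union skcd *s, uint16_t prioidx)
{
	union skcd buf = { .val = s->val };

	if (skcd_prioidx(&buf) == prioidx)
		return;
	if (!buf.is_data) {
		buf.val = 0;               /* cgroup pointer overwritten ...      */
		buf.is_data = 1;           /* ... and its reference is never put */
	}
	buf.prioidx = prioidx;
	s->val = buf.val;
}

int main(void)
{
	struct fake_cgroup cgrp = { .refcnt = 1 };     /* the ref the sock holds */
	union skcd s = { .val = (uintptr_t)&cgrp };    /* aligned, so is_data == 0 */

	/* netprio attach: store a prioidx derived from the v1 cgroup (not 1) */
	skcd_set_prioidx(&s, 2);

	printf("refcnt is still %d, but nothing remembers the pointer: leaked\n",
	       cgrp.refcnt);
	return 0;
}

The real sock_cgroup_data is more subtle (the pointer and the packed data
share the same word, distinguished by the lowest bit), but the net effect
of a mode switch is the same: the cgroup reference the sock was holding is
forgotten instead of being put.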
>
> I think the easiest fix is to do the mode switch here:
>
> diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
> index b905747..2397866 100644
> --- a/net/core/netprio_cgroup.c
> +++ b/net/core/netprio_cgroup.c
> @@ -240,6 +240,8 @@ static void net_prio_attach(struct cgroup_taskset *tset)
>         struct task_struct *p;
>         struct cgroup_subsys_state *css;
>
> +       cgroup_sk_alloc_disable();
> +
>         cgroup_taskset_for_each(p, css, tset) {
>                 void *v = (void *)(unsigned long)css->cgroup->id;
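
For context on why calling cgroup_sk_alloc_disable() there helps: as far as
I recall the kernel/cgroup/cgroup.c code of this era, it is roughly the
following one-way switch (paraphrased from memory, not a verbatim quote):

/* Paraphrase of kernel/cgroup/cgroup.c, not a verbatim quote. */
static bool cgroup_sk_alloc_disabled __read_mostly;

void cgroup_sk_alloc_disable(void)
{
	if (cgroup_sk_alloc_disabled)
		return;
	pr_info("cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation\n");
	cgroup_sk_alloc_disabled = true;
}

Once the flag is set, cgroup_sk_alloc() returns early, so newly created
sockets keep sk_cgrp_data in "data" mode and never take a v2 cgroup
reference in the first place; this mirrors what (if I remember correctly)
write_priomap() and write_classid() already do when a non-default value is
written. If I read it right, sockets that already hold a cgroup pointer at
attach time still get switched (and still leak that one reference), so this
stops the leak from growing without bound rather than releasing what has
already leaked.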