From: Roman Gushchin
To: Jan Kara, Tejun Heo
CC: Alexander Viro, Dennis Zhou, Dave Chinner, Roman Gushchin
Subject: [PATCH v8 0/8] cgroup, blkcg: prevent dirty inodes to pin dying memory cgroups
Date: Mon, 7 Jun 2021 18:31:15 -0700
Message-ID: <20210608013123.1088882-1-guro@fb.com>
X-Mailing-List: linux-kernel@vger.kernel.org

When an inode is getting dirty for the first time it's associated with
a wb structure (see __inode_attach_wb()). It can later be switched to
another wb (if e.g. some other cgroup is writing a lot of data to the
same inode), but otherwise stays attached to the original wb until
being reclaimed.

The problem is that the wb structure holds a reference to the original
memory and blkcg cgroups. So if an inode has been dirtied once and is
later actively used in read-only mode, it has a good chance of pinning
the original memory and blkcg cgroups forever. This is often the case
with services bringing data for other services, e.g. updating some rpm
packages.
In real life it becomes a problem due to the large size of the memcg
structure, which can easily be 1000x larger than an inode. Also, a
really large number of dying cgroups can raise different scalability
issues, e.g. making memory reclaim costly and less effective.

To solve the problem, inodes should eventually be detached from the
corresponding writeback structure. It's inefficient to do it after
every writeback completion. Instead it can be done whenever the
original memory cgroup is offlined and the writeback structure is
getting killed. Scanning a (potentially long) list of inodes and
detaching them from the writeback structure can take quite some time.
To avoid scanning all inodes, attached inodes are kept on a new list
(b_attached). To make it less noticeable to a user, the scanning and
switching is performed from a work context.

Big thanks to Jan Kara, Dennis Zhou, Hillf Danton and Tejun Heo for
their ideas and contributions to this patchset.

v8:
  - switch inodes to the nearest living ancestor wb instead of the root wb
  - added two inode-switching fixes suggested by Jan Kara

v7:
  - shared locking for multiple inode switching
  - introduced the inode_prepare_wbs_switch() helper
  - extended the pre-switch inode check for I_WILL_FREE
  - added comments here and there

v6:
  - extended and reused the wbs switching functionality to switch inodes
    on cgwb cleanup
  - fixed offline_list handling
  - switched to the unbound_wq
  - other minor fixes

v5:
  - switch inodes to bdi->wb instead of zeroing inode->i_wb
  - split the single patch into two
  - only cgwbs maintain lists of attached inodes
  - added cond_resched()
  - fixed !CONFIG_CGROUP_WRITEBACK handling
  - extended the list of prohibited inode flags
  - other small fixes

Roman Gushchin (8):
  writeback, cgroup: do not switch inodes with I_WILL_FREE flag
  writeback, cgroup: add smp_mb() to cgroup_writeback_umount()
  writeback, cgroup: increment isw_nr_in_flight before grabbing an inode
  writeback, cgroup: switch to rcu_work API in inode_switch_wbs()
  writeback, cgroup: keep list of inodes attached to bdi_writeback
  writeback, cgroup: split out the functional part of
    inode_switch_wbs_work_fn()
  writeback, cgroup: support switching multiple inodes at once
  writeback, cgroup: release dying cgwbs by switching attached inodes

 fs/fs-writeback.c                | 323 +++++++++++++++++++++----------
 include/linux/backing-dev-defs.h |  20 +-
 include/linux/writeback.h        |   1 +
 mm/backing-dev.c                 |  69 ++++++-
 4 files changed, 312 insertions(+), 101 deletions(-)

-- 
2.31.1