From: Roman Gushchin
To: Jan Kara, Tejun Heo
Cc: Alexander Viro, Dennis Zhou, Dave Chinner, Roman Gushchin
Subject: [PATCH v6 4/5] writeback, cgroup: support switching multiple inodes at once
Date: Wed, 2 Jun 2021 17:55:16 -0700
Message-ID: <20210603005517.1403689-5-guro@fb.com>
In-Reply-To: <20210603005517.1403689-1-guro@fb.com>
References: <20210603005517.1403689-1-guro@fb.com>

Currently only a single inode can be switched to another writeback
structure at once. That means switching an inode requires allocating a
separate inode_switch_wbs_context structure and scheduling a separate
rcu callback and work item. This is fine for the existing ad-hoc
switching, which does not happen often, but sub-optimal for the mass
switching required to release a writeback structure. To prepare for
that, add support for switching multiple inodes at once.
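The batching idea can be illustrated in userspace: one heap-allocated
context carries a NULL-terminated array of pointers, so one allocation
and one deferred callback cover a whole batch instead of one per item.
This is only a sketch under stated assumptions — the struct and function
names below are made up for illustration and are not kernel code:

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Userspace stand-in for the kernel's switch context: a fixed header
 * followed by a NULL-terminated flexible array of pointers (plain ints
 * stand in for struct inode).
 */
struct switch_ctx {
	int new_wb_id;	/* stands in for struct bdi_writeback *new_wb */
	int *items[];	/* NULL-terminated, like isw->inodes[] */
};

/*
 * Allocate one context with room for n items plus the NULL terminator.
 * calloc() zero-fills, so the terminator is already in place — the same
 * property kzalloc() provides in the kernel.
 */
static struct switch_ctx *ctx_alloc(int n)
{
	return calloc(1, sizeof(struct switch_ctx) + (n + 1) * sizeof(int *));
}

/*
 * Walk the array the way a work function would: iterate until the NULL
 * sentinel, acting on each entry once.
 */
static int ctx_count(const struct switch_ctx *ctx)
{
	int * const *p;
	int count = 0;

	for (p = &ctx->items[0]; *p; p++)
		count++;
	return count;
}
```

The sentinel-terminated array avoids storing an element count in the
context, at the cost of one extra pointer slot per batch.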
Instead of containing a single inode pointer, inode_switch_wbs_context
will contain a NULL-terminated array of inode pointers.
inode_do_switch_wbs() will be called for each inode.

Signed-off-by: Roman Gushchin
---
 fs/fs-writeback.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 212494d89cc2..49d7b23a7cfe 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,10 +335,10 @@ static struct bdi_writeback *inode_to_wb_and_lock_list(struct inode *inode)
 }
 
 struct inode_switch_wbs_context {
-	struct inode		*inode;
-	struct bdi_writeback	*new_wb;
+	struct rcu_work		work;
 
-	struct rcu_work		work;
+	struct bdi_writeback	*new_wb;
+	struct inode		*inodes[];
 };
 
 static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi)
@@ -473,10 +473,14 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
 {
 	struct inode_switch_wbs_context *isw =
 		container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+	struct inode **inodep;
+
+	for (inodep = &isw->inodes[0]; *inodep; inodep++) {
+		inode_do_switch_wbs(*inodep, isw->new_wb);
+		iput(*inodep);
+	}
 
-	inode_do_switch_wbs(isw->inode, isw->new_wb);
 	wb_put(isw->new_wb);
-	iput(isw->inode);
 	kfree(isw);
 	atomic_dec(&isw_nr_in_flight);
 }
@@ -503,7 +507,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	if (atomic_read(&isw_nr_in_flight) > WB_FRN_MAX_IN_FLIGHT)
 		return;
 
-	isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
+	isw = kzalloc(sizeof(*isw) + 2 * sizeof(struct inode *), GFP_ATOMIC);
 	if (!isw)
 		return;
 
@@ -528,7 +532,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	__iget(inode);
 	spin_unlock(&inode->i_lock);
 
-	isw->inode = inode;
+	isw->inodes[0] = inode;
 
 	/*
 	 * In addition to synchronizing among switchers, I_WB_SWITCH tells
-- 
2.31.1
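The `sizeof(*isw) + 2 * sizeof(struct inode *)` sizing in the patch
reserves one array slot for the single inode being switched plus one
extra slot that, because kzalloc() zero-fills, acts as the NULL
terminator. A minimal userspace sketch of that layout follows; the
types and the helper name are stand-ins invented for the example, with
calloc() playing the role of kzalloc():

```c
#include <stdlib.h>

/* Stand-ins for the kernel types; illustrative only. */
struct inode_stub { int i; };

struct switch_ctx {
	void *new_wb;			/* like isw->new_wb */
	struct inode_stub *inodes[];	/* like isw->inodes[], NULL-terminated */
};

/*
 * Mirror "isw = kzalloc(sizeof(*isw) + 2 * sizeof(struct inode *), ...)":
 * one slot for the inode, one zero-filled slot as the NULL sentinel.
 */
static struct switch_ctx *alloc_ctx_for_one(struct inode_stub *inode)
{
	struct switch_ctx *isw =
		calloc(1, sizeof(*isw) + 2 * sizeof(struct inode_stub *));

	if (isw)
		isw->inodes[0] = inode;	/* like "isw->inodes[0] = inode;" */
	return isw;
}
```

Later patches in the series can grow the same allocation to N + 1 slots
and fill inodes[0..N-1], with no change to the loop that consumes the
array.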