Date: Thu, 3 Jun 2021 12:10:57 +0200
From: Jan Kara
To: Roman Gushchin
Cc: Jan Kara, Tejun Heo, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Viro,
	Dennis Zhou, Dave Chinner, cgroups@vger.kernel.org
Subject: Re: [PATCH v6 4/5] writeback, cgroup: support switching multiple inodes at once
Message-ID: <20210603101057.GH23647@quack2.suse.cz>
References: <20210603005517.1403689-1-guro@fb.com> <20210603005517.1403689-5-guro@fb.com>
In-Reply-To: <20210603005517.1403689-5-guro@fb.com>

On Wed 02-06-21 17:55:16, Roman Gushchin wrote:
> Currently only a single inode can be switched to another writeback
> structure at once. That means that to switch an inode, a separate
> inode_switch_wbs_context structure must be allocated, and a separate
> rcu callback and work must be scheduled.
>
> That is fine for the existing ad-hoc switching, which does not happen
> that often, but it is sub-optimal for the massive switching required
> to release a writeback structure. To prepare for that, let's add
> support for switching multiple inodes at once.
>
> Instead of containing a single inode pointer, inode_switch_wbs_context
> will contain a NULL-terminated array of inode pointers.
> inode_do_switch_wbs() will be called for each inode.
>
> Signed-off-by: Roman Gushchin

Two small comments below:

> @@ -473,10 +473,14 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
>  {
>  	struct inode_switch_wbs_context *isw =
>  		container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
> +	struct inode **inodep;
> +
> +	for (inodep = &isw->inodes[0]; *inodep; inodep++) {
		      ^^^^ why not just isw->inodes?
> +		inode_do_switch_wbs(*inodep, isw->new_wb);
> +		iput(*inodep);
> +	}

I was kind of hoping that we would save the repeated locking of
bdi->wb_switch_rwsem, old_wb->list_lock, and new_wb->list_lock when
switching multiple inodes. Maybe we can have 'old_wb' as part of isw as
well and assert that all inodes are still attached to the old_wb at this
point to make this a bit simpler. Or we can fetch old_wb from the first
inode and then just lock & assert using that one.

								Honza
-- 
Jan Kara
SUSE Labs, CR
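
For illustration, here is a minimal sketch of the batched-locking variant
suggested above: old_wb is fetched from the first inode and all three locks
are taken once for the whole batch. It assumes inode_do_switch_wbs() is
refactored to take the already-locked old_wb and new_wb as parameters and
to drop its internal locking; that refactored signature, and the elided
tail of the function, are assumptions for the sketch, not the posted patch.

static void inode_switch_wbs_work_fn(struct work_struct *work)
{
	struct inode_switch_wbs_context *isw =
		container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
	struct backing_dev_info *bdi = inode_to_bdi(isw->inodes[0]);
	struct bdi_writeback *old_wb = isw->inodes[0]->i_wb;
	struct bdi_writeback *new_wb = isw->new_wb;
	struct inode **inodep;

	/*
	 * Take the switching locks once for the whole batch instead of
	 * once per inode; the two list_locks are ordered with _nested
	 * to keep lockdep happy, as in the per-inode path.
	 */
	down_read(&bdi->wb_switch_rwsem);
	spin_lock(&old_wb->list_lock);
	spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);

	for (inodep = isw->inodes; *inodep; inodep++) {
		/* Every inode in the batch should still sit on old_wb. */
		WARN_ON_ONCE((*inodep)->i_wb != old_wb);
		inode_do_switch_wbs(*inodep, old_wb, new_wb);
	}

	spin_unlock(&new_wb->list_lock);
	spin_unlock(&old_wb->list_lock);
	up_read(&bdi->wb_switch_rwsem);

	/* iput() may sleep, so drop the pinning references unlocked. */
	for (inodep = isw->inodes; *inodep; inodep++)
		iput(*inodep);

	/* ... drop the wb reference, free isw, etc., as in the original. */
}

The WARN_ON_ONCE() assertion only holds if nothing can re-switch an inode
between scheduling and running this work item; if that is not guaranteed,
the loop would have to skip inodes whose i_wb has changed rather than
assert on them.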