Date: Thu, 17 Dec 2020 10:57:28 +0100
From: Jan Kara
To: Al Viro
Cc: Vivek Goyal, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-unionfs@vger.kernel.org, jlayton@kernel.org, amir73il@gmail.com,
    sargun@sargun.me, miklos@szeredi.hu, willy@infradead.org, jack@suse.cz,
    neilb@suse.com, Christoph Hellwig
Subject: Re: [PATCH 1/3] vfs: add new f_op->syncfs vector
Message-ID: <20201217095728.GB6989@quack2.suse.cz>
References: <20201216233149.39025-1-vgoyal@redhat.com>
 <20201216233149.39025-2-vgoyal@redhat.com>
 <20201217004935.GN3579531@ZenIV.linux.org.uk>
In-Reply-To: <20201217004935.GN3579531@ZenIV.linux.org.uk>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu 17-12-20 00:49:35, Al Viro wrote:
> [Christoph added
> to Cc...]
>
> On Wed, Dec 16, 2020 at 06:31:47PM -0500, Vivek Goyal wrote:
> > The current implementation of __sync_filesystem() ignores the return code
> > from ->sync_fs(). I am not sure why that's the case. There must have
> > been some historical reason for this.
> >
> > Ignoring the ->sync_fs() return code is problematic for overlayfs, where
> > it can return an error if sync_filesystem() on the upper super block failed.
> > That error will simply be lost and syncfs(overlay_fd) will report
> > success (despite the fact it failed).
> >
> > If we modify the existing implementation, there is a concern that it will
> > lead to user-space-visible behavior changes and break things. So
> > instead implement a new file_operations->syncfs() call which will
> > be called in the syncfs() syscall path. The return code from this new
> > call will be captured, and all the writeback error detection
> > logic can go in there as well. Only filesystems which implement
> > this call are affected by this change. Others continue to fall back
> > to the existing mechanism.
>
> That smells like a massive source of confusion down the road. I'd just
> looked through the existing instances; many always return 0, but quite
> a few sometimes try to return an error:
>
> fs/btrfs/super.c:2412:    .sync_fs = btrfs_sync_fs,
> fs/exfat/super.c:204:     .sync_fs = exfat_sync_fs,
> fs/ext4/super.c:1674:     .sync_fs = ext4_sync_fs,
> fs/f2fs/super.c:2480:     .sync_fs = f2fs_sync_fs,
> fs/gfs2/super.c:1600:     .sync_fs = gfs2_sync_fs,
> fs/hfsplus/super.c:368:   .sync_fs = hfsplus_sync_fs,
> fs/nilfs2/super.c:689:    .sync_fs = nilfs_sync_fs,
> fs/ocfs2/super.c:139:     .sync_fs = ocfs2_sync_fs,
> fs/overlayfs/super.c:399: .sync_fs = ovl_sync_fs,
> fs/ubifs/super.c:2052:    .sync_fs = ubifs_sync_fs,
>
> is the list of such. There are 4 method callers: dquot_quota_sync(),
> dquot_disable(), __sync_filesystem() and sync_fs_one_sb().
> For sync_fs_one_sb() we want to ignore the return value; for
> __sync_filesystem() we almost certainly do *not* - it ends with
> return __sync_blockdev(sb->s_bdev, wait), after all. The question
> for that one is whether we want __sync_blockdev() called even in
> case of ->sync_fs() reporting a failure, and I suspect that it's
> safer to call it anyway and return the first error value we'd got.
> No idea about the quota situation.

WRT the quota situation: all the ->sync_fs() calls there are for cache
coherency reasons (we need to get quota changes to disk, then prune the
quota file's page cache, and then userspace can read current quota
structures from the disk). We don't want to fail dquot_disable() just
because caches might be incoherent, so ignoring the ->sync_fs() return
value there is fine. With dquot_quota_sync() it might make some sense to
return the error - that's just a backend for the Q_SYNC quotactl(2).
OTOH I'm not sure anybody really cares - Q_SYNC is rarely used.

								Honza
-- 
Jan Kara
SUSE Labs, CR