Date: Thu, 16 Sep 2021 15:15:29 -0400
From: "Theodore Ts'o"
To: James Bottomley
Cc: Chris Mason, Johannes Weiner, Kent Overstreet, Matthew Wilcox,
	Linus Torvalds, linux-mm@kvack.org, linux-fsdevel,
	linux-kernel@vger.kernel.org, Andrew Morton, "Darrick J. Wong",
	Christoph Hellwig, David Howells, ksummit@lists.linux.dev
Subject: Re: [MAINTAINER SUMMIT] Folios as a potential Kernel/Maintainers Summit topic?
On Thu, Sep 16, 2021 at 01:11:21PM -0400, James Bottomley wrote:
> Actually, I don't see who should ack being an unknown. The MAINTAINERS
> file covers most of the kernel and a set of scripts will tell you based
> on your code who the maintainers are ... that would seem to be the
> definitive ack list.

It's *really* not that simple. It is *not* the case that if a change
touches a single line of fs/ext4 (as well as 60+ other filesystems),
for example:

-	ei = kmem_cache_alloc(ext4_inode_cachep, GFP_NOFS);
+	ei = alloc_inode_sb(sb, ext4_inode_cachep, GFP_NOFS);

then the submitter *must* get an ACK from me --- or that I am entitled
to NACK the entire 79-patch series for any reason I feel like, or to
withhold my ACK as a hostage until the submitter does some development
work that I want.

What typically happens, if someone tries to play games like this
inside, say, the Networking subsystem, is that past a certain point
David Miller will just take the patch series, ignoring any NACKs that
can't be justified. The difference is that Andrew Morton (the titular
maintainer for all of Memory Management, per the MAINTAINERS file)
seems to take a much lighter touch to how the mm subsystem is run.

> I think the problem is the ack list for features covering large areas
> is large, and the problems come when the ackers don't agree ... some
> like it, some don't. The only deadlock-breaking mechanism we have for
> this is either Linus yelling at everyone or something happening to get
> everyone into alignment (like an MM summit meeting).
> Our current model seems to be that every acker has a foot on the
> brake, which means a single NACK can derail the process. It gets even
> worse if you get a couple of NACKs, each requesting mutually
> conflicting things.
>
> We also have this other problem of subsystems not being entirely
> collaborative. If one subsystem really likes a change and another
> doesn't, there's a fear in the maintainers of simply being overridden
> by the pull request going through the liking subsystem's tree. This
> could be seen as a deadlock-breaking mechanism, but fear of it
> happening drives overreactions.
>
> We could definitely do with a clear definition of who is allowed to
> NACK, and when that can be overridden.

Well, yes. And this is why I think there is a process issue here that
*is* within the Maintainers Summit's purview. If we need a technical
BOF to settle the specific question of what needs to happen --- whether
that happens at LPC, or it needs to happen after LPC --- then let's
have it happen. I'd be really disappointed if we had to wait until
December 2022 for the next LSF/MM, and then, if we don't get consensus
there, until late 2023, a la DAX, and so on. As others have said, this
is holding up some work that file system developers would really like
to see.

					- Ted