Date: Mon, 16 Feb 2015 19:02:54 -0500
From: Jeff Layton
To: Linus Torvalds
Cc: "Kirill A. Shutemov", linux-fsdevel, Linux Kernel Mailing List,
 "J. Bruce Fields", Christoph Hellwig, Dave Chinner, Sasha Levin
Subject: Re: [GIT PULL] please pull file-locking related changes for v3.20
Message-ID: <20150216190254.3b66a9ba@tlielax.poochiereds.net>
In-Reply-To:
References: <20150209055540.2f2a3689@tlielax.poochiereds.net>
 <20150216133200.GB3270@node.dhcp.inet.fi>
 <20150216090054.62455465@tlielax.poochiereds.net>

On Mon, 16 Feb 2015 11:24:03 -0800
Linus Torvalds wrote:

> On Mon, Feb 16, 2015 at 10:46 AM, Linus Torvalds wrote:
> >
> > This code is so broken that my initial reaction is "We need to just
> > revert the crap".
>
> How the hell is flock_lock_file() supposed to work at all, btw?
>
> Say we have an existing flock, and now do a new one that conflicts. I
> see what looks like three separate bugs.
>
>  - We go through the first loop, find a lock of another type, and
>    delete it in preparation for replacing it
>
>  - we *drop* the lock context spinlock.
>
>  - BUG #1? So now there is no lock at all, and somebody can come in
>    and see that unlocked state. Is that really valid?
>
>  - another thread comes in while the first thread dropped the lock
>    context lock, and wants to add its own lock.
>    It doesn't see the deleted or pending locks, so it just adds it
>
>  - the first thread gets the context spinlock again, and adds the lock
>    that replaced the original
>
>  - BUG #2? So now there are *two* locks on the thing, and the next
>    time you do an unlock (or when you close the file), it will only
>    remove/replace the first one.
>
> Both of those bugs are due to the whole "drop the lock in the middle",
> which is pretty much always a mistake. BUG #2 could easily explain the
> warning Kirill reports, afaik.
>
> BUG #3 seems to be independent, and is about somebody replacing an
> existing lock, but the new lock conflicts. Again, the first loop will
> remove the old lock, and then the second loop will see the conflict,
> and return an error (and we may then end up waiting for it for the
> FILE_LOCK_DEFERRED case). Now the original lock is gone. Is that
> really right? That sounds bogus. *Failing* to insert a flock causing
> the old flock to go away?
>
> Now, flock semantics are pretty much insane, so maybe all these bugs
> except for #2 aren't actually bugs, and are "features" of flock. But
> bug #2 can't be a semantic feature.
>
> Is there something I'm missing here?
>
> This was all just looking at a *single* function. Quite frankly, I
> hate how the code also just does
>
>     if (filp->f_op->flock)
>         filp->f_op->flock(filp, F_SETLKW, &fl);
>     else
>         flock_lock_file(filp, &fl);
>
> and blithely assumes that some random filesystem will get the flock
> semantics right, when even the core code screwed it up this badly.
>
> And maybe I'm wrong, and there's some reason why none of the things
> above can actually happen, but it looks really bad to me.
>
>                    Linus

Now that I look, it may be best to just revert this whole set for now.
Linus, are you amenable to doing that? While I think this could be a
nice cleanup, it obviously needs more testing and scrutiny, and it
won't hurt to wait another release or two to make sure I get this
right.
There may be some merge conflicts with Bruce's tree, but I'd rather
deal with those than break the file locking code. Once we revert, we
can introduce smaller patches to fix the bugs you spotted, but at
least we'll be proceeding from a spot that is known to work.

I'll start preparing a pull request that does that...

Thanks,
-- 
Jeff Layton