Date: Sat, 9 Apr 2005 10:52:30 -0600
From: "Eric D. Mudama"
To: Roman Zippel
Subject: Re: Kernel SCM saga..
Cc: Linus Torvalds, David Woodhouse, Kernel Mailing List

On Apr 8, 2005 4:52 PM, Roman Zippel wrote:
> The problem is you pay a price for this. There must be a reason developers
> were adding another GB of memory just to run BK.
> Preserving the complete merge history does indeed make repeated merges
> simpler, but it builds up complex metadata, which has to be managed
> forever. I doubt that this is really an advantage in the long term. I
> expect we would be better off serializing changesets in the main
> repository.
> For example bk does something like this:
>
>   A1 -> A2 -> A3 -> BM
>     \-> B1 -> B2 --^
>
> and instead of creating the merge changeset, one could merge them like
> this:
>
>   A1 -> A2 -> A3 -> B1 -> B2
>
> This results in a simpler repository, which is more scalable and which
> is easier for users to work with (e.g. binary bug search).
> The disadvantage would be that it causes more minor conflicts when
> changes are pulled back into the original tree, but these should be
> easily resolvable most of the time.

The kicker is that B1 was developed based on A1, so any test results
were based on B1 being a single changeset delta away from A1. If the
resulting 'BM' fails testing, and you've converted to the linear model
above where B2 has failed, you lose the ability to isolate B1's changes
and trace where they came from, to revalidate the developer's results.

With bugs and fixes that can be validated in a few hours, this may not
be a problem, but when chasing a bug that takes days or weeks to
manifest, one that a developer swears they fixed, you have to be able to
reproduce their exact test environment.

I believe that flattening the change graph makes history reproduction
impossible. Alternatively, you are requiring each developer to test the
merge result of B1 + A1..3 before submission, but in doing so the test
time may require additional test periods, and with sufficient velocity
the process might never close. This is the same problem CVS has if you
don't create micro branches for every single modification.

--eric
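[Editor's note: the two diagrams above can be modeled as parent graphs. The following Python sketch is my own illustration, not from either mail; the revision names follow the diagrams. It shows what serialization costs: in the merged graph the state the developer actually tested, B1 on top of A1 alone, is still reachable from B1, while in the flattened graph B1's recorded parent becomes A3, so the tested state can no longer be reconstructed.]

```python
# Each history maps a revision to its list of parent revisions.
merge_history = {
    "A1": [],
    "A2": ["A1"],
    "A3": ["A2"],
    "B1": ["A1"],        # B1 was developed on top of A1
    "B2": ["B1"],
    "BM": ["A3", "B2"],  # explicit merge changeset
}

linear_history = {
    "A1": [],
    "A2": ["A1"],
    "A3": ["A2"],
    "B1": ["A3"],        # after serializing, B1's recorded parent is A3
    "B2": ["B1"],
}

def ancestors(history, rev):
    """All revisions reachable from rev, i.e. the exact tree state tested."""
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(history[r])
    return seen

# Merged graph: the developer's tested state (B1 on A1) is reconstructible.
assert ancestors(merge_history, "B1") == {"A1", "B1"}

# Serialized graph: that state is gone; B1 now implies A2 and A3 as well.
assert ancestors(linear_history, "B1") == {"A1", "A2", "A3", "B1"}
```

If 'BM' fails testing in the merged graph, you can still check out exactly what the B developer validated; in the linear graph every checkout of B1 drags A2 and A3 along with it.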