Questions about task driven branching

I'm considering switching from HG to Plastic SCM (http://www.plasticscm.com), mainly because it seems to offer much nicer VS integration. They promote "task driven branching", that is, branching from the mainline for every feature. This makes sense, but I have a few questions:

  1. They recommend not merging your tasks back to the mainline after they're completed. This seems very counter-intuitive: I'd have thought that, after testing, one would want to merge back to tip immediately so that you don't have to rebase later on. Not to mention that if tasks aren't merged back and, say, a new release is coming up, you need to merge in possibly hundreds of different branches and make sure they all play nice with each other in a short period of time (testing in isolation doesn't mean they'll play nice with others, imho). So this seems like it's bound to fail; am I wrong? Do you practice this method?
  2. Let's say I'm wrong about the above. Given the following scenario: tasks A, B, and C, where B and C depend on A being completed, would it be better to complete A, merge it back to the mainline, and then branch from there to work on B/C, or to sub-branch your initial branch (the branch you created for A)? Is that even possible? Recommended? It seems slightly cleaner in my head if the same person is implementing A, B, and C; if not, merging back to the mainline obviously makes the most sense. (See the sketch after this list for the two layouts I mean.)
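
To make the two layouts concrete, here's roughly what I mean, sketched with Mercurial named branches since that's what I'm on today. The branch names are just placeholders, and I assume Plastic has its own equivalents for each step:

```python
# Rough sketch of the two layouts, using hg named branches. All branch
# names here are placeholders, and only one of the two options would be
# run against a real repository.
import subprocess

def hg(*args):
    """Run an hg command in the current repository and fail loudly on error."""
    subprocess.run(["hg", *args], check=True)

def merge_then_branch():
    """Option 1: merge the finished task A into the mainline, then start B there."""
    hg("update", "default")
    hg("merge", "task-A")                 # integrate A first
    hg("commit", "-m", "Merge task A into mainline")
    hg("branch", "task-B")                # B starts from a mainline that already has A
    hg("commit", "-m", "Start task B")

def sub_branch():
    """Option 2: branch B directly off the unmerged head of task A."""
    hg("update", "task-A")
    hg("branch", "task-B")                # B starts on top of A instead of the mainline
    hg("commit", "-m", "Start task B on top of task A")

if __name__ == "__main__":
    merge_then_branch()                   # or sub_branch(), whichever you'd recommend
```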

Let me know what you guys think!

Thanks.


In a rather good discussion about branch strategies we had recently, jgifford25's answer contained a link to what one of Subversion's developers calls the 'agile release strategy', which looks rather similar to what the Plastic guys are suggesting - a branch per feature, with merges into release branches rather than into the trunk. I didn't think that was a good idea, and I don't think this is a good idea. I also don't think it's a coincidence that in both cases the idea is being pushed by an SCM developer - I think those guys have a case of "everything looks like a nail", and think any process problem can be fixed with more and bigger SCM.

So why is this idea bad? Let's follow the Plastic guys' argument. They build this process around one central idea: 'keep the mainline pristine'. So far so good. They then advance a syllogism that looks like:

  1. If you check broken code into the trunk, the build breaks
  2. Broken builds are bad
  3. Therefore don't check code into the trunk

The problem with this is that it completely misunderstands why broken builds are bad. Broken builds are not bad in and of themselves (although they are unhelpful, because they stall development), they are bad because they mean that someone has checked in broken code. It's broken code that's the real problem, not broken builds - it's the broken code which actually has the potential to cause damage (lost user data, lost space probes, global thermonuclear war, that sort of thing).

Their solution, then, amounts to having people check their broken code in elsewhere, so that it doesn't break the build. This pretty obviously does nothing at all to deal with the actual problem of broken code - quite the opposite, it's a way of concealing broken code. Indeed, it's not clear to me at which point the brokenness gets detected - when the task branches are finalised and merged to the release branch? That sounds like a great way of deferring difficult work to late in your release cycle, which is a very poor idea.

The real solution, rather, is quite simply not to check broken code in at all. In the pursuit of that goal, a broken build is actually good, because it tells you that there is broken code, which lets you fix it. That, in fact, is the whole flipping point of the idea of continuous integration - you merge early and often into a single trunk which is the prototype of what will actually get released, so you detect problems with what you intend to release as early as possible. That absolutely requires the 'unstable trunk' model, or something isomorphic to it.
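
To put "check working code in, often" in concrete terms, the discipline can be as simple as a wrapper that runs the suite and refuses to check anything in unless it's green. A minimal sketch, assuming Mercurial and a pytest-style test command - both are assumptions on my part, nothing specific to Plastic:

```python
# Minimal sketch of "only working code goes in": run the suite, and only
# commit and push to the shared trunk if it passes. The test command and
# commit message handling are placeholders.
import subprocess
import sys

def run(cmd):
    """Run a command, returning True if it exited successfully."""
    return subprocess.run(cmd).returncode == 0

if __name__ == "__main__":
    if not run(["python", "-m", "pytest"]):        # your real build + test run here
        sys.exit("tests failed; nothing was checked in")
    message = sys.argv[1] if len(sys.argv) > 1 else "Integrate working code"
    run(["hg", "commit", "-m", message])
    run(["hg", "push"])                            # integrate now, not at release time
```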

The blog post that orangepips's answer links to mentions Ubuntu's idea about process as a driver for this idea. But look at what Shuttleworth actually said:

  • Keep trunk pristine
  • Keep features flowing
  • *Release on demand*

That's my emphasis on the last point, but it's Shuttleworth's end goal: he wants to be able to cut releases at any time. A process which defers merging and testing to the release process, as the Plastic model does, cannot possibly do this.

Rather, if you want to see what a process which can do it looks like, look at what the lean guys do: one codeline, continuous integration (on a scale of hours or even minutes, rather than days or weeks), no broken code.

So, in conclusion: don't do this. Have one codeline, and check working code into it as often as you can. Simple.

PS Okay, so you might want to make release branches to stabilise and bugfix actual releases. Ideally, you wouldn't, but you might need to.

PPS And if you have a CI test suite that is too slow to run before checking in (e.g. functional tests which take an hour), then something you could do with any DVCS is have two repositories: a dirty one, which developers merge into, and a clean one, which is pushed to by a script that watches the dirty repository for changes, builds and tests new versions coming into it, and pushes them to the clean repository if they pass. You can then run on-demand releases (for QA and so on) from the clean repository, and developers can update from the clean repository to stay current while developing. They will obviously have to update from the dirty repository immediately before merging, though.
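
To sketch what that watcher script might look like, assuming Mercurial, with all paths and the test command as placeholders (a real one would want logging, failure notifications, and per-changeset testing rather than only testing the tip):

```python
# Rough sketch of the dirty -> clean gatekeeper described above, assuming
# Mercurial. Paths and the test command are placeholders.
import subprocess
import time

DIRTY_REPO = "/srv/hg/dirty"                # developers merge/push here
CLEAN_REPO = "/srv/hg/clean"                # only ever receives tested changesets
WORKDIR = "/srv/hg/gatekeeper"              # a clone used for building and testing
TEST_COMMAND = ["python", "-m", "pytest"]   # stand-in for the real build + test run

def hg(*args):
    """Run an hg command inside the gatekeeper's working clone."""
    return subprocess.run(["hg", *args], cwd=WORKDIR).returncode

def new_changes_available():
    # 'hg incoming --quiet' exits 0 when the dirty repository has changesets
    # that the working clone hasn't seen yet.
    return hg("incoming", "--quiet", DIRTY_REPO) == 0

def test_and_promote():
    hg("pull", "--update", DIRTY_REPO)
    if subprocess.run(TEST_COMMAND, cwd=WORKDIR).returncode == 0:
        hg("push", CLEAN_REPO)              # publish only if the suite passed

if __name__ == "__main__":
    while True:
        if new_changes_available():
            test_and_promote()
        time.sleep(60)                      # poll the dirty repository once a minute
```

A CI server would of course do the same job with more polish, but the point is how little machinery the clean/dirty split actually needs.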


After reading the PR, it sounds as if they advocate a model where code is tested before it's merged into the trunk/main/baseline (see rule #4). This presupposes a suite of unit tests, and that those tests cover whatever changes have been made. For most projects I've been involved with, such a suite doesn't exist and likely never will be complete.

In my own experience using Subversion, the trunk is pristine, but it is not what releases are made from. Instead, the trunk is where backports and forward ports between versions flow. Releases come from version branches.

From the version branches, feature branches are created - sometimes. These branches allow for frequent commits that may break things. Once a feature branch is done, it's merged into the version branch; inevitably there are problems to resolve when this integration occurs. Finally, once a version has been built and validated, it's merged into the trunk.
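
For concreteness, here's roughly what that flow looks like driven through the svn command line; the repository URL, version number, and working-copy paths below are made up for the example:

```python
# Rough sketch of the flow described above (version branch -> feature branch
# -> back into the version branch -> eventually into the trunk), driven
# through the svn command line. URL, version, and paths are placeholders.
import subprocess

REPO = "https://svn.example.com/repo"

def svn(*args):
    subprocess.run(["svn", *args], check=True)

# 1. Cut a feature branch from the 2.1 version branch.
svn("copy", f"{REPO}/branches/2.1", f"{REPO}/branches/2.1-feature-x",
    "-m", "Create feature branch for feature X off the 2.1 version branch")

# 2. ...frequent, possibly breaking commits happen on 2.1-feature-x...

# 3. When the feature is done, merge it into a working copy of 2.1,
#    resolve whatever integration problems surface, and commit.
svn("merge", f"{REPO}/branches/2.1-feature-x", "wc-2.1")
svn("commit", "wc-2.1", "-m", "Merge feature X into the 2.1 version branch")

# 4. Once 2.1 has been built and validated, merge it up into the trunk.
svn("merge", f"{REPO}/branches/2.1", "wc-trunk")
svn("commit", "wc-trunk", "-m", "Port validated 2.1 changes to the trunk")
```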

So I think #1 is not realistic. As for #2, it depends. Does it seem certain that B and C will not change A? If so, merge A back, then branch for B and C. But most likely I would branch from A to make B and C, because the latter will likely change the former. Then, once done, roll up all three.
