Continuous integration future?

A few days ago I was re-reading the book "Continuous Integration" by Paul Duvall. I find it a really interesting read, especially if you use agile practices.
The book dates from mid 2007, so it's quite new, and there's a chapter at the end of it that really surprised me. It is titled "The future of continuous integration", and it focuses on two interesting questions: can broken builds be prevented, and how can test execution be made faster?
The first question is not a concern for us internally, but the second one is probably one of the toughest problems we've run into here at Codice. Can they be solved with version control?
The author starts examining the first question: can broken builds be prevented? And if so, how? Well, he states something that really shocked me:
Imagine if the only activity the developer needs to perform is to “commit” her code to the version control system. Before the repository accepts the code, it runs an integration build on a separate machine.
Only if the integration build is successful will it commit the code to the repository. This can significantly reduce broken integration builds and may reduce the need to perform manual integration builds.
Then he draws a nice graphic representing an "automated queued integration chain". He introduces something like a "two-phase" commit, so the code doesn't reach the mainline until the tests pass...
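The mechanics are easy to picture, though. Here is a minimal, purely hypothetical sketch of such a gatekeeper in Python: candidate changes wait in a queue as patch files, and a change only reaches the repository if an integration build of mainline plus change comes out green. The Subversion URL, the patch-per-change convention, and the "make test" target are placeholders for whatever your shop actually uses:

```python
import queue
import subprocess
import tempfile


def try_integrate(mainline_url: str, patch_file: str) -> bool:
    """Build mainline + candidate change on a separate machine; commit only on green."""
    with tempfile.TemporaryDirectory() as workdir:
        steps = [
            ["svn", "checkout", mainline_url, workdir],         # clean mainline
            ["patch", "-d", workdir, "-p0", "-i", patch_file],  # the candidate change
            ["make", "-C", workdir, "test"],                    # the integration build
        ]
        for step in steps:
            if subprocess.run(step).returncode != 0:
                return False  # a broken build never reaches the repository
        # Only now does the change become visible to everybody else.
        commit = ["svn", "commit", workdir, "-m", "verified change"]
        return subprocess.run(commit).returncode == 0


def gatekeeper(pending: "queue.Queue[str]", mainline_url: str) -> None:
    """Serialize candidate changes so integration builds never race each other."""
    while True:
        patch_file = pending.get()  # blocks until a developer submits a change
        if not try_integrate(mainline_url, patch_file):
            print(f"rejected {patch_file}: integration build failed")
```

Nothing exotic: the commit is staged, verified on a separate machine, and only then made permanent.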
I don't know if I'm missing something, because the answer seems all too obvious to me, something every plastic user knows by heart by now... commit to a separate branch, rebase it from the mainline, run the tests, and only merge up (which would be a "copy up") if the tests pass... Branching is the answer, isn't it?
I mean, I can't see why such a "futuristic" two-phase commit setup would be needed, when it is precisely what you already have with systems that offer good branch support.
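To spell out what I mean, here is the branch-per-task delivery as another small Python sketch, this time driving git commands purely for illustration (any system with decent branch support would do, and the branch names and test command are again placeholders). Any failing step raises and stops the flow before the mainline is ever touched:

```python
import subprocess


def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)  # a failing step raises and aborts the delivery


def deliver_task(task_branch: str, mainline: str = "main") -> None:
    """Merge a finished task into the mainline only if its tests pass on the branch."""
    run("git", "checkout", task_branch)
    run("git", "merge", mainline)     # rebase the task from the mainline first
    run("make", "test")               # the integration build, run on the branch
    run("git", "checkout", mainline)
    run("git", "merge", task_branch)  # the "merge up" (the "copy up")
```

No new infrastructure, no special two-phase repository mode: the task branch itself is the staging area.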
I understand that when he states "the only activity the developer needs to perform is to 'commit'", his problem is not actually checking the changes in, but having a place where the code can reside in some sort of intermediate status so that, while the tests run, the developer can continue working.
Again, I must be missing something here, because otherwise I only see one reason to call it a "future improvement": the author is always thinking in terms of "mainline development" (you know, working only on the main branch, or just a few more at most, and checking changes directly into that mainline). If you're used to patterns like "branch per task", you don't have this problem anymore: you deliver your changes to the version control system and continue working on something else, without ever breaking the mainline.
He continues with:
An alternative approach to preventing broken builds is to provide the capability for a developer to run an integration build using the integration build machine and his local changes (that haven’t been committed to the version control repository) along with any other changes committed to the version control repository.
Of course it is! That's why branch per task is a better alternative to mainline development in almost every development scenario I've been involved in!
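For completeness, here is roughly what that "local changes plus latest repository changes" build looks like without branches, again as a hypothetical Python sketch with placeholder commands and paths: the developer's uncommitted diff is captured, shipped to the build machine, and applied on top of a fresh mainline checkout, so the repository itself is never touched:

```python
import subprocess
import tempfile
from pathlib import Path


def preflight_build(working_copy: str, mainline_url: str) -> bool:
    """Build mainline + uncommitted local changes without committing anything."""
    # Capture the developer's uncommitted changes as a plain patch.
    diff = subprocess.run(["svn", "diff", working_copy],
                          capture_output=True, text=True, check=True).stdout
    with tempfile.TemporaryDirectory() as buildroot:
        subprocess.run(["svn", "checkout", mainline_url, buildroot], check=True)
        patch = Path(buildroot) / "local.patch"
        patch.write_text(diff)
        subprocess.run(["patch", "-d", buildroot, "-p0", "-i", str(patch)],
                       check=True)
        # Green or red, nothing has been committed: the mainline stays clean.
        return subprocess.run(["make", "-C", buildroot, "test"]).returncode == 0
```

It works, but notice it is just a poor man's task branch: the patch file plays the role the branch would play, with none of the history, tracking, or merge support.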
The problem behind all these statements has a name: the best-known version control tools out there (including the glorified Subversion, which is the tool the book focuses on) had (did I say had? I meant have) big problems dealing with branches. They don't necessarily fail at creating a large number of branches (which is what every SVN or CVS user tells me whenever I mention that plastic can handle thousands of branches... "mine too", they say). The problem is handling them after a few months (on the "test day" everything works great, doesn't it?): merging them, checking what has been modified on a branch, tracking branch evolution, and so on. And, believe it or not (and that's why we wrote plastic in the first place!), all of these well-known, widely available, sometimes-free tools lack proper visualization methods, proper merge tools (ok, there are third-party ones sometimes) and sometimes even basic features for dealing with branches, like true renaming and merge tracking.
I guess that's the reason why, after 200 pages of decent reading, I found such an obvious chapter, describing as a "future innovation" some well-known and widely used SCM best practices. I'd rather recommend going to the now-classic Software Configuration Management Patterns, which I still find the best SCM book ever written.
As for the second question, how to speed up test execution, it remained unsolved...