We’ve been working with git as a distributed version control system (DVCS) for Fedora Documentation this release. All of these documents (the Installation Guide, Release Notes, and various README files) are authored in DocBook XML, so they work great within a VCS.
Sure, it’s cool to work entirely offline, make granular commits, and merge cleanly with the codebase when I reconnect later. That’s all good. But what has been a real gift from the gods over the last 48 hours is how fast it is, especially for operations over the network; I see very little bandwidth used, and everything finishes quickly. No more ‘cvs ci’ and wait.
Another great aspect is how a DVCS such as git supports a distributed team working in both real time and non-real time. Unlike an older centralized VCS, it makes it much easier to work divergently and still merge everything back together. For example, after making a number of changes, I make granular commits, which exist only in my local repo (clone). I can do a commit per file, so each changelog entry is tied to a specific action, and it is easier to undo or manipulate from that commit, which has its own unique identifier (a SHA-1 hash). Then I only have to run ‘git pull --rebase’, which pulls down any changes pushed by other collaborators, updates my local repo, and replays my changes on top of this new copy of HEAD. Then I can ‘git push’ without any conflicts arising.
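The workflow above can be sketched end-to-end with a throwaway pair of clones. This is a minimal illustration, not the actual Fedora Docs setup; the repo, file, and author names here are made up for the example.

```shell
# Sketch of the commit-locally, rebase, then push workflow.
set -e
export GIT_AUTHOR_NAME=Me GIT_AUTHOR_EMAIL=me@example.com
export GIT_COMMITTER_NAME=Me GIT_COMMITTER_EMAIL=me@example.com
tmp=$(mktemp -d)
cd "$tmp"

# Seed a shared "central" repository (bare, so it can accept pushes).
git init -q seed && (cd seed && echo intro > Release-Notes.xml &&
  git add . && git commit -qm "Draft release notes")
git clone -q --bare seed origin.git

# Two collaborators clone it.
git clone -q origin.git alice
git clone -q origin.git me

# We commit locally, one commit per file; nothing touches the network yet.
cd me
echo install > Installation-Guide.xml
git add Installation-Guide.xml
git commit -qm "Update Installation Guide"

# Meanwhile a collaborator pushes a change upstream.
(cd ../alice && echo fixes >> Release-Notes.xml &&
  git commit -qam "Fix wording" && git push -q)

# Fetch her change, replay our commit on top of the new upstream HEAD,
# then push -- no merge commit needed, and no conflict in this case.
git pull -q --rebase
git push -q
git log --oneline
```

The key point is that ‘git pull --rebase’ leaves our local commit sitting on top of the collaborator’s, so the subsequent push is a simple fast-forward.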
In a centralized VCS, if you find yourself out of sync with the latest HEAD, you either have to sync and deal with potentially spurious merge errors, or perform a manual version of what git does automatically. How many times have you copied your changes out of a Subversion or CVS working directory, reverted back to HEAD from the central repository, and then manually reapplied your changes?
If anyone is making them … I want a t-shirt that says “praise git”.
I disagree that this is an improvement over the old practice of just keeping the documentation (especially the release notes) in the wiki and converting it to XML the day of the deadline. It’s easy to make a simple edit in the wiki; it’s a lot more work to make it in git (with a significant learning curve for most of the documentation writers), to the point where the recommendation [1] is to file all changes in Bugzilla rather than just committing them – hardly an efficient process.
[1] http://marilyn.frields.org:8080/~paul/wordpress/?p=1272
I think you disagree because you’ve never been involved in the process of converting from the wiki to XML. ;-P
The timeframe that is under discussion here is a small window from when we build the fedora-release-notes package for the Preview Release to when we freeze content changes for GA.
This timeframe is about seven days, and it actually occurs after the fedora-release-notes package is out in the PR. Up until the release of that package, it makes sense to do all work in the upstream source (a.k.a. the wiki). Once that package is out, there is a lot of work going on during that one week. It has consistently proved difficult to pull from the wiki during that time. As soon as we get a section complete, more changes come in, often in very raw language that requires the full treatment the rest of the notes got all month.
We honestly haven’t decided what the process is going to be next time; we just needed a solution this time to stem the bleeding. We’ve always done a good job of allowing people to make content changes as close as possible to the various stages of release. We have to balance that against the historical record of how many people actually make such last-hour changes (answer: relatively few) and how disruptive it is to keep that channel open until the last hour.
One surprise this time was a change in wiki behavior: I personally lost track of the release notes beats because MediaWiki stops sending watch alerts for a page if you don’t follow up on each watch email. Compared to an issue tracker such as Bugzilla or Trac, the wiki is simply not a great way to keep track of release blockers close to a release.