I wonder who Mark is referring to in his blog post:
If you’ve done what you want for Ubuntu, then move on. That’s normal – there’s no need to poison the well behind you just because you want to try something else.
Today marks the beginning of the end of me having an Ubuntu machine at home, and I have mixed feelings about that. By the weekend the last machine that I do have, my network file server and general dogsbody machine, will have been replaced and its replacement will not be running Ubuntu.
The primary purpose of the machine is to be a point of backup for my laptop and other devices, as well as a host for the large and valuable content collections such as my photos, music, purchased TV shows, movies, etc.
Since this collection is multiple terabytes in size, there just isn’t a viable cloud storage solution. Firstly, getting the content to the cloud would be a long and difficult process; secondly, the actual costs of that much storage are still fairly prohibitive compared to home solutions; and thirdly, since a lot of this content is high-quality media, while my home Internet connection can stream it, the download bandwidth costs of cloud storage providers are equally prohibitive.
So I still need some form of fast and reliable file storage at home, at least for the foreseeable future. And this is where Ubuntu comes up short.
For the last few years I’ve done what anyone would have done, I purchased a small form-factor machine, loaded it with SATA drives and installed Ubuntu 10.04 LTS using Software RAID to deal with the reliability factor.
This has all worked fine, the box even survived a transatlantic voyage; what it hasn’t survived is the upgrade to Ubuntu 12.04 LTS. At some point after the upgrade the box did not come back up after a reboot; after searching for a monitor to plug into it to find out what was going on, I was dismayed to see a message about the RAID being in degraded mode and the boot not continuing.
My first reaction, naturally, was that one of the disks had finally given out; so, knowing that the Ubuntu initramfs is too limited to debug, I booted a USB image and grabbed the various SMART utilities to figure out which disk had been thrown and needed replacing.
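For anyone wanting to do the same, the checks amount to something like this from the live environment (a sketch; your device names will differ):

sudo apt-get install smartmontools    # the SMART utilities
sudo smartctl -H /dev/sda             # quick health verdict for each array member
sudo smartctl -H /dev/sdb
sudo smartctl -a /dev/sda             # full attribute dump if one looks suspect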
Mysteriously they all checked out. I rebooted back into Ubuntu, and it came up just fine. Weird. And a subsequent reboot works fine too.
At this point my disk utilization is well over 90% and I’m already starting to consider my options for expanding it; I’m still thinking dodgy disk, and so begin accelerating that process. The most obvious option is just to buy larger disks; the next option would be to buy more smaller disks, but this would require additional SATA capacity in the machine; the final option would be to buy a proper RAID array or even a NAS of some description.
I’m wary of NAS; the last one I bought, while admittedly a relatively budget option (i.e. under $1,000), just didn’t perform. It didn’t have the power to actually get data from its disks and out of the network port in anything like a timely manner; certainly not enough for 1080p 7.1 streaming, for example.
And then the server throws a disk again, but at least this time a monitor is plugged into it so I can see the messages I missed last time. And this time I stay in the initramfs and do a little bit of poking around.
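The poking around is nothing sophisticated; from the initramfs shell it’s roughly this (a sketch; array and device names will vary):

cat /proc/mdstat                 # which arrays exist, and which are degraded
mdadm --detail /dev/md0          # member disks and their individual states
mdadm --assemble --scan --run    # force assembly, starting arrays even when degraded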
I realize there’s nothing wrong with the disks at all.
The problem is Ubuntu 12.04 LTS.
I do what anyone else would do with a problem, and hit Google, Stack Overflow and Launchpad to find a workaround. And what I find saddens me: huge numbers of people reporting that their RAIDs frequently boot in degraded mode. Bugs are marked “Invalid”, “Won’t Fix” or “Unassigned”.
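For completeness, the workaround most of those reports converge on is simply to tell the initramfs to carry on booting from a degraded array; roughly this, as I understand the current mechanism:

echo "BOOT_DEGRADED=true" | sudo tee /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u
# or, as a one-off, add bootdegraded=true to the kernel command line from GRUB

That papers over the symptom; it doesn’t explain why the array keeps coming up degraded in the first place.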
Now I know this used to work, because I wrote unreasonable amounts of the code that did it. So I quickly dived in to see if there was an obvious bug fix, only to find that all the code I’d written had been ripped out; not replaced with anything better, just gone. All that remained was the “upstream” code that had existed before I started, or at least an updated version of it.
I dug through the history to figure out if I was missing something, expecting that things were no longer required and that new ways of doing things had been put in place, but that wasn’t the case either. The history clearly showed a different story: faced with the pressure of updating to a newer upstream release of various utilities, for no reason other than to keep roughly in step with Debian, all of the bug fixes, patches and changes to make things work had been dropped because they were “hard to merge”.
Now I don’t want to come across as bitter at this point that my work had been dropped, because that’s not my feeling at all. I entirely understand and appreciate the decision that must have happened here.
Canonical has limited resources of its own, and a small hobbyist developer community around it. Those resources have to be spent wisely and not squandered. The Ubuntu focus right now is on the desktop, and on Unity; the Server focus is a lesser one, and entirely aimed at cloud hosting and guests — though given that the Canonical VP of Cloud couldn’t even be bothered to turn up for his scheduled panel at the most recent CloudOpen conference, it’s hard to fathom how much of a focus even that is for them anymore.
So if they have a low server focus, and what they do have is for cloud, then it’s not surprising that support for things like Software RAID isn’t a priority worth spending resources on. Cloud guests and hosts access storage over a network using protocols like NFS or (ugh) iSCSI.
Simply put, the home server is an uninteresting and dying product, and I’m a weird outlier for still having one at all.
This wasn’t quite the end though, I still had disks to replace and storage to sort out. If Ubuntu couldn’t do Software RAID reliably anymore, it could still at least do Hardware RAID. I looked around for Hardware RAID boxes, especially single enclosure ones that could just plug into the box and go.
This seemed like a good plan except that high performance Hardware RAID devices come in two fundamental flavors: Thunderbolt, which Ubuntu does not support; and Ethernet, which means the Ubuntu machine is superfluous to requirements.
With the nomination period beginning for the Ubuntu Technical Board, big changes like Unity having arrived in Ubuntu recently, and the upcoming UDS for what will likely be a new LTS release of Ubuntu, it’s as good a time as any to ask big questions about the development process, challenge assumptions, and make suggestions for big changes.
The Ubuntu release process is well known, and its developers talk regularly about the cadence of it. A new release of Ubuntu comes out every six months, and each release follows a predictable pattern. I’ve stolen the following image from OMG! Ubuntu’s recent series about Ubuntu Development.
Each developer working on Ubuntu follows this cycle. When Ubuntu 11.10 is released on October 13th, they’ll begin again. After they recover, of course.
First there’ll be a bit of a wait for the archive to open; this gets quicker each release, but since it depends on a toolchain being built and other similarly fundamental things, it tends to be a period where most people figure out what they’re going to discuss at UDS.
UDS is a bit late for the 12.04 cycle, so the merge period will probably occupy developer time both before and after UDS. This isn’t represented on Daniel’s chart above, but this is the time when massive amounts of updates arrive from Debian; it’s a time of great instability for Ubuntu. At some point there will be an Alpha 1, but you won’t want to try and install that.
Planning for UDS is going to take up some time, and writing up the results of the plans afterwards and turning that into work items. There’s also a UDS Hangover which nobody (except Robbie Williamson, when drafting the 10.10 Release Cycle) seems to like to talk about. Nothing gets done in the week or two following UDS, everybody is too wiped out.
So realistically speaking, development of features for 12.04 is going to start around mid-November at the earliest. And by features I mean the big headline things in Ubuntu; like Unity, like the Software Center, like the Installer. These things are important to get right.
Pretending for a moment that features are developed over the winter holidays like Thanksgiving, Christmas and New Year, you’ve got clear development time until Feature Freeze. The 12.04 Release Schedule isn’t published yet, but I figure that’s going to be somewhere around February 16th after which everyone switches to bug fixing and release testing.
That’s just 13 weeks of development time!
So you’re an Ubuntu developer working on features for the upcoming release, you don’t have anywhere near as much time as you’d expect to actually do the development work. What happens if you’re replacing something that works with something completely new? Can’t you just target a later release, and work continually until the feature freeze of that release?
It turns out that you can’t. There is an incredible emphasis in the Ubuntu planning process on targeting features for particular releases. This is the exact thing you’re not supposed to do with a time-based release schedule.
Unfortunately Canonical’s own performance review and management process is also based around this schedule. The Ubuntu developers it employs (the vast majority of Ubuntu developers) have such fundamentals as their pay, bonuses, etc. dictated by how many of their assigned features and work items make it into the release by feature freeze. It’s not the only requirement, but it’s the biggest one.
Say your new feature is going to take twelve months of development before it’s truly a replacement for the existing feature in Ubuntu. What you don’t do is spend those twelve months developing it and land it only when it’s a perfect replacement.
What you do do is develop it in 12-13 week bursts, which means it’s going to take you roughly four release cycles before it’s ready rather than two. And you land the quarter-complete feature in the first release, replacing the older stable feature.
If this were true, you would expect to see new features repeatedly arriving in Ubuntu before they were ready. Removing the old, deprecated feature and breaking things temporarily with the promise that everything will be better in the next release, certainly the one after that, definitely by the LTS.
Maybe you don’t believe that characterizes Ubuntu, in which case you should probably just stop reading now, because you’re not going to agree with my fundamental complaint.
But I will say this: I know I’m responsible for doing this on more than one occasion, because I had to; I saw the exact same pattern in others’ work; when I was a manager, my reports complained that they had to follow this pattern; and I still see the same pattern today with features such as Unity and the Software Center.
Follow this pattern and developers are going to complain that they need a release where they don’t have any features to work on, and can just spend the time stabilizing and bug fixing.
Worse, follow this pattern and you’re going to create a user expectation that releases are going to be largely unstable and contain sweeping changes that are going to be surprising to administrators of Enterprise desktop deployments, and discourage them from using your distribution at all.
A kludge for this would be to overlay a second release schedule onto your first one, with more of an emphasis on stability and support. It’s a target for your developers to complete their features, or at least stabilize them, in those 12 weeks; and it’s a target for your users to consider deployment. So three out of four of your releases are really just unstable previews of that final fourth release.
This second LTS release cycle solves the unstable release issue, so why is this a problem?
Because developer time is wasted; because user time is wasted; because user confidence is lost.
Because features can take longer than two years to develop; or even if a feature takes just two years, if it’s not begun immediately after the previous LTS release it’s not going to be ready for the next one, so you might postpone it and lose the lead.
Because you might expect a knock-on degeneracy effect in the LTS releases as well; with 12.04 LTS being less stable than 10.04 LTS, which was less stable than 8.04 LTS which was less stable than 6.06 LTS. And it’s far too late now to have considered the 10.10/11.04/11.10/12.04 cycle to have been a Super-Long-Term-Support release and kept back the complete replacement of the desktop environment.
Because the original reason for the six-month cycle has already been forgotten: features are targeted towards releases, rather than released when ready; because the original base for the release schedule (GNOME) is no longer a key component of the distribution; because no other key component has adopted this schedule.
Because there might be a better way.
What I’m going to suggest here is a completely new development process for Ubuntu, complete with details about how it would be implemented.
I’m going to suggest a monthly release process, beginning with the 11.10 release. It so happens that this fits perfectly with Ubuntu’s version numbering system, the next release would be 11.11, followed by 11.12, followed by 12.01 and so on.
This monthly release would be simply known as release in your sources.list, updates would be published to it on the first week of the month. There would be no codenames, and due to the rapid releases, changes would be largely unsurprising and iterative on the previous releases.
In order to provide user testing, a second release known as beta would exist. It’s from this release that release would be copied in that first week of the month. beta would be updated every two weeks: once in the first week of the month, just after it becomes the new release, and again at the half-way point of the month. Users who like a little bleeding on their edge can change their sources.list to use this more exciting release, or download appropriate disk images.
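To make that concrete, a user’s sources.list would contain nothing more exotic than this (an illustrative sketch; the archive URL and components are just today’s, reused):

deb http://archive.ubuntu.com/ubuntu release main restricted universe multiverse
# or, for those who want the fortnightly updates instead:
deb http://archive.ubuntu.com/ubuntu beta main restricted universe multiverse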
Developers wouldn’t run either of these, they would run the third release branch alpha. It’s from here that beta is updated; and from here that daily disk images would be generated.
Publishing from alpha to beta, and then from beta to release is handled semi-automatically. The release manager will track Release Critical bugs, and will hold up packages from copying from one to the other if they have outstanding problems. If this sounds familiar, it’s because this is exactly how the Debian testing distribution works and I recommend using the same software (which Ubuntu already uses to check for archive issues).
So where do developers upload? It’s tempting to just say to alpha, but if we say that, alpha will end up looking very different from release because it will be filled with unstable software that’s not ready for users yet. This will make it harder to fix problems in the release branch, because the stable components are no longer in alpha, having been replaced by something that’s not ready yet.
Developers will upload to an unpublished trunk branch. Packages will be copied to alpha provided:
I just introduced a bunch of new checks to the developer process there; I just introduced code review, mandatory unit tests and then piled functional tests and verification tests on top.
The first four are relatively self-explanatory; fail any of these tests and your upload marks the tree red. In which case not only will your package fail to copy to alpha, but you’re about to have a conversation with the Release Manager.
For functional and verification tests, this means doing more automated QA. A failing test could be an automated installer run, or an automated boot-and-test run, etc. They’ll run sometime after the fact and will make the entire tree red. The Release Manager or their team will have to examine the logs to figure out the culprit.
So things aren’t copying to alpha, now one of two things is going to happen.
While the tree is red, nobody else is allowed to upload unless it’s a fix for the problem. All effort should go to fixing the tree.
If the archive has to always remain stable, how do you develop large features such as Upstart, Unity, Ubiquity, Software Center, etc.? You use a PPA to do development, on your own timeline.
If your feature takes twelve months to develop, you take twelve months to develop it in that PPA. You’re going to be posting regularly to mailing lists or blogging about your feature to encourage users to add your PPA to their sources.list to gain testing. Obviously you’ll be doing various uploads to the main series over time to get all your dependencies in early where they don’t conflict with what’s already there.
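Testing a feature in progress would then be a one-liner for users; something like this, where the PPA name is obviously made up for illustration:

sudo add-apt-repository ppa:some-team/shiny-new-feature
sudo apt-get update && sudo apt-get upgrade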
My proposal is a radical change to the Ubuntu Release Process, but surprisingly it would take very little technical effort to implement because all the pieces are already there including the work on performing automated functional and verification tests.
I believe it solves the problem of landing unstable features before they’re ready, because it almost entirely removes releases as a thing. As a developer you simply work in a PPA until your work will pass review, and land a stable feature that can replace what was there before.
It solves the need for occasional stabilization and bug-fixing releases, because the main series is always stable and can receive bug fixes easily, separately from any development work going on. A developer can choose to focus on looking after the main series for some of their time in addition to their feature development work, or devote all of their time to it.
Another problem I’ve not talked about is that of building software on an unstable foundation, also solved by this change. Since developers will run alpha, and vendor developers can just run a relatively up-to-date, yet stable, release branch, software can be built on a solid foundation. Only the new feature or software itself is unstable until ready.
Canonical can keep its review schedule, and use developer uploads and work items; except rather than landing in a release, they can now land in a PPA.
Merges from Debian unstable can be handled pretty much continually as long as they keep the tree green; alternatively, one can decide that users ultimately don’t care about an updated version of cat and, until a case can be made (e.g. an open bug) for a package’s update, it need not be merged.
Users can now be confident of always receiving a stable operating system, because of the multiple testing and QA passes each change continually receives. Updates come in monthly, two-weekly or daily-ish batches depending on where in the main series they choose to run.
Enterprise administrators can run this stable release, because it only changes gradually with well-tested updates. The big changes and features have a long gestation period in PPAs, with many advance notices and blog posts about them. They’re not a surprise and can be planned for well in advance of their landing.
Downsides will, doubtless, be found in the comments below.
For your consideration.
UDS is over! And in the customary wrap-up I stood up and told the audience what the Foundations team have been discussing all week. One of the items is almost certainly going to get a little bit of publicity.
We are going to be doing the work to have btrfs as an installation option, and we have not ruled out making it the default.
I do stress the emphasis of that statement, a number of things would have to be true for us to take that decision:
It’s a tough gauntlet, and the decision would only be made with the knowledge that production servers and desktops can be run on Lucid as a fully supported version of Ubuntu at the same time. I’d give it a 1-in-5 chance.
As you know, improvements to the boot process have been something that Ubuntu has been working on for a few years now, and this led to the development of Upstart. We’re not the only ones working in this area; Intel have also been hard at work with different improvements of their own in the Moblin and MeeGo projects.
So it’s great to see some Fedora and OpenSuSE guys working on this too, and bringing some different ideas to the table!
I can’t say I disagree with some of Lennart’s observations about problems with Upstart; it’s certainly nowhere near perfect. Now that the stable period leading up to the release of Ubuntu 10.04 LTS is over, I’m looking forward to getting back into the code and trying to address them.
It’s far too early to tell which approach is going to work out better in the end; but that’s one of the great things about Linux. The different distributions are able to develop in different directions, and we’re able to try out many different things.
On a personal note, I’m particularly pleased that Lennart has continued the punny naming scheme I began with Upstart. System D is a French concept that embraces responding to challenges when they happen, thinking fast and on your feet and adapting and improvising to get the job done.
Graphics cards from different manufacturers are very different beasts, in fact, often different generations of graphics cards from the same manufacturer can be pretty different too. While there’s a great deal of standardisation for things such as resolutions, colour depths and talking to monitors; the software side has almost no standardisation whatsoever.
In fact, one of the most fundamental operations, setting the resolution and color depth we desire (aka. the Mode) can be different for each and every card, let alone each and every manufacturer.
Practically speaking, this means that there are only two things that know how to set the mode. The Video BIOS and the Graphics Driver. Historically, within Linux, the Graphics Driver has resided within the X Window System server.
There are lots of different places that we need to set the mode:
The two biggest problems here are switching from/to the X server VT, and resuming from suspend. In the former case, the biggest difficulty is actually switching between X servers on different VTs (e.g. user switching). The process ends up looking like this:
This leads to the black screen and flashing cursor between VT switches that everybody hates. In the resuming from suspend case, it’s really tricky; we don’t necessarily know the right video mode to restore, calling into the Video BIOS (even with VBE) is tricky, error-prone and doesn’t work if it wasn’t a VESA mode, and the X server’s mode setting code was never designed to be called externally. This means we used to basically rely on resuming into the X server and having that restore the mode for us.
All this changes with Kernel Mode Setting (KMS). Throughout the development of Lucid, you’ve probably seen that term mentioned a few times.
Simply put, Kernel Mode Setting is all about taking the bulk of the Graphics Driver code out of the X server and putting it into the Kernel. This means that the kernel has Graphics Drivers just like the kernel has Network Card Drivers, Wireless Drivers, USB Drivers, etc.
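If you’re curious whether your own system is using a KMS driver, a rough check looks like this (sketched for an Intel chipset; other drivers have equivalent module parameters):

ls /sys/class/drm/    # card0 and its connectors appear once a KMS driver is bound
# KMS can be toggled per driver on the kernel command line,
# e.g. i915.modeset=1 to force it on, or nouveau.modeset=0 to turn it off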
Most importantly, the kernel can set the mode whenever we need to and restore it on resume. The three most user-visible results of this are:
Behind the scenes there’s even more awesome waiting to be used in future releases.
Obviously most of the effort on writing these new drivers has gone to the “big three” graphics card vendors’ hardware. Intel themselves have contributed a large part of the KMS work, and their own drivers for it; nVidia owners are covered by the reverse-engineering effort that created the nouveau driver; and ATI owners are covered by the new radeon driver.
Those with graphics cards from other vendors are a little out in the cold here, but at least they’re no worse off than they were before. The biggest source of complaint comes from those with cards for which there is also a binary driver available (usually nVidia).
By switching from the in-kernel graphics driver (i.e. nouveau) to one supplied as a binary loaded into your X server (i.e. nvidia-glx), you will not have Kernel Mode Setting support and are effectively regressing your own system to Ubuntu 9.04 state.
I’m sure that nVidia users will be quick to point out that in Ubuntu 9.04, they had more than 16-colors for the splash screen, and that’s true.
One of the other changes made in 10.04 is the switch from using usplash to draw the splash screen to using Plymouth. Both had the ability to draw to frame buffers (basically kernel 2D graphics canvases), but Plymouth had much better support for multiple heads, native panel resolutions, and most importantly the smooth transition into X.
Supporting KMS properly for those using the open source drivers meant regressing slightly in prettiness for those who don’t.
usplash used to support higher colors and resolutions by using SVGAlib as a backend when a frame buffer was not available; writing a new Plymouth SVGAlib renderer was simply too much work on top of the existing integration work that needed to happen for KMS. (If someone wanted to do it as a personal project though, go right ahead!)
An alternative to using SVGAlib would have been to at least set a better mode through VESA or VBE. There are two common ways to do this, firstly by using the VESA frame buffer (vesafb) or secondly by having the boot loader set the graphics mode and keep it set when running the kernel, triggering the use of the EFI frame buffer (efifb). The problem with both of these is resuming from suspend. As discussed in detail above, we don’t have the ability to restore this mode on resume.
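For reference, the boot-loader approach would look roughly like this with GRUB 2; this is a sketch of that alternative, not what we actually shipped:

# /etc/default/grub
GRUB_GFXMODE=1024x768          # mode for the boot loader itself
GRUB_GFXPAYLOAD_LINUX=keep     # keep that mode when handing over to the kernel
# then: sudo update-grub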
So we picked the third alternative, which has the added attraction that it works on all hardware and doesn’t cause other issues. We use good, old, reliable 16-color VGA.
As you probably know by now, even if you’re not following the karmic development closely, Ubuntu has gained new splash screen software called xsplash. This is the hard work of Cody Russell and Ken VanDine of the Ubuntu Desktop Experience team.
There’s been some press coverage of this already, and various comments from different people raising some questions about why Ubuntu is going down this route. Since this was largely my idea, I felt I should try and answer a few of the common ones.
rhgb was the “RedHat Graphical Boot” system they used in RedHat and Fedora until the most recent Fedora releases. It worked by starting an X Window System server and running the splash screen inside that.
This sounds very similar to xsplash, but with one key difference: the X server used by rhgb was shut down when boot finished, and a new X server started for the user to login with.
Instead, xsplash uses the same X server as the login window, and the same X server as your desktop. In fact, it’s started by the GNOME Display Manager alongside the greeter, or alongside your automatically logged-in desktop.
Plymouth is RedHat’s replacement for rhgb, instead of using an X server it relies on Kernel Mode Setting to provide a framebuffer in the panel’s native resolution and colour depth and draws directly to that. When the X server starts, the X server takes over the framebuffer without requiring a mode switch or a screen clear.
In many ways, Plymouth is simply a reimplementation of our original usplash. Indeed, I found it quite ironic that some people have accused us of “NIH”ing xsplash instead of adopting Plymouth, when technically Plymouth is an “NIH”d usplash.
So really the question as to why we don’t use Plymouth is the same as to why we don’t use usplash. We did actually consider replacing usplash with Plymouth since the implementation is rather cleaner, but Plymouth only supports Kernel Mode Setting drivers whereas usplash has a fall-back SVGA mode (it always had framebuffer support, thus KMS support, due to the Ubuntu PowerPC port).
(or Plymouth, see above)
One of our main goals for Ubuntu is boot performance. For Ubuntu 10.04 (that’s karmic+1) we’re targeting a 10s boot from the boot loader to a fully logged in desktop with idle disks. To boot this fast requires some prioritisation.
We need the X Window System server for just about everything. Until we’ve started the X server, we don’t have a display so can’t even start basic things like the underlying services and agents that run for a user’s login session.
Or, put another way, anything we do during boot that isn’t directly required to start the X server is delaying the boot.
The boot sequence must be set up in such a way that starting the X server is prioritised; once the X server is up, we can begin starting the user’s session (or the login screen session). We can also start all those other system services that weren’t dependencies of the X server.
This pretty much turns the current sequence on its head, where the X server is one of the last things started rather than one of the first.
If we started a splash screen such as usplash before the X server was up, we’d still have to wait for the Kernel Mode Setting driver to be loaded (or hardware to be sufficiently probed to know that we don’t have a Kernel Mode Setting driver for it). This is one of the primary dependencies of the X server too (the other is a mounted, writable filesystem), so in many cases we can start the X server at the same time!
Now, you could suggest that we start the X server but also start the splash screen as well, and that the splash screen animates while the X server is running, with the X server not painting to the screen until the desktop is logged in or the login screen is ready. Unfortunately this simply doesn’t appear to be possible: the X server “takes over” the framebuffer in such a way that the splash screen simply freezes at that point (though we can prevent it clearing the screen). With an early X server start, it would spend more time frozen than it would animating.
This also wouldn’t work for the non-KMS case.
You could also argue that we could load the KMS drivers earlier, in the initramfs for example. While possible, this adds a significant time to the boot time for the extra loading and probing required. If we load the KMS driver in the initramfs, it takes about 8-10s to load the X server; if we load the KMS driver in the full system, it only takes about 6-8s to load the X server. Easy win.
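(If you want to experiment with the initramfs approach on your own machine, it’s roughly a case of the following, assuming an Intel chipset; just don’t expect it to make things faster:)

echo i915 | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u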
But what if we have to check the filesystem, or enter a decryption passphrase to mount it? That’s why we still have usplash. In those cases we will start usplash to display the progress or request the key. The usplash theme will be designed such that when it finishes, and X starts up, it looks ok frozen for the short time until xsplash replaces it with an identical theme.
Patience, my friend. The 10s target is for Ubuntu 10.04 (karmic+1), not karmic (9.10).
That being said, there are a number of updates planned for karmic that will boost the performance quite a bit, especially in terms of starting the X Window System server earlier. These have always been targeted to land between Feature Freeze and Beta, simply because it’s delicate work that needs a lot of testing first, and everyone else’s uploads prior to Feature Freeze kept throwing it off.
Look for a call for testing RSN.
The automotive industry, with its particular emphasis on efficient workflow and practices, has had a lot to teach the software world over the years. From the process of requirements, specification and design through to LEAN development practices, it is difficult to argue that we haven’t learned anything from them.
I think that there’s another practice from that industry it might be fun to adopt: the Concept Car (sometimes known as a Show or Halo Car).
These are cars where the designers and engineers have been allowed to let their imaginations run wild, and build something that shows off the limits of what’s possible. Often they’re also used to explore new technologies or ideas without having to commit to standards of production that would be required for a marketable road car.
And that’s pretty much the key point about these cars: they normally build just one or two, and take them around the car and motoring shows for everybody to look at and talk about.
Obviously I’m not suggesting that we build strange and outlandish cars, and drape them in fancy lights and scantily clad people on a slowly rotating podium; but I think the idea can translate to our world.
Thus I’d like to humbly introduce my idea of the Concept Distro.
The Concept Distro would be an engineering project to allow developers and maintainers to let their imaginations run wild. It’d be released, probably to demonstrate at a major event, and would explicitly not be supported. Not even basic security support, or a bug tracker, or even answering questions about why things don’t work.
On a Concept Car, it’s entirely normal that half of the doors don’t even open; likewise in the Concept Distro, it would be entirely expected that half of the icons were just placeholders and didn’t do anything if you clicked on them.
After release, engineering effort could be focussed either on integrating the successful technologies into Linux distributions proper, or on working on the next Concept Distro for the next big event a year or two down the line.
In the early days of Ubuntu, when we had two different CDs, we had a plan to do this kind of thing with the Live CD. Since that didn’t have an installer, it could be a little more experimental and a little more risqué. It was a good place to try out Network Manager before we integrated it with the distribution proper, and the intent was that the naked people would have had even less clothing (I didn’t mind the loss of this, the male model they picked was not the prettiest of the options).
Assuming we don’t resurrect the naked people, what kinds of things would we do with the Concept Distro?
It’s a chance to make some fundamental changes without having to worry about the support or upgrade implications of them. I’d like to see what we could do by assuming that the filesystem is a single mount of ext4 on LVM on RAID, which we grow onto additional disks as they are made available.
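Growing such a filesystem onto a new disk is already only a handful of commands; a sketch, assuming the new disk is /dev/sdc and the volume group and logical volume happen to be called vg0 and root:

sudo pvcreate /dev/sdc                       # prepare the new disk for LVM
sudo vgextend vg0 /dev/sdc                   # add it to the volume group
sudo lvextend -l +100%FREE /dev/vg0/root     # grow the logical volume into the new space
sudo resize2fs /dev/vg0/root                 # grow ext4 online to fill it

The Concept Distro part would be making all of that happen automatically when the disk appears.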
And since we wouldn’t have to worry about partitioning, it might be interesting to look into rearranging the hierarchy. Maybe having /Users really is better than /home.
If we went down that route, we could throw out the traditional package manager and experiment with some new approaches. What better way to upgrade the operating system than:
cd /System
bzr update
or switch to a new version with bzr switch? It works well enough to upgrade my WordPress installation, after all.
From a technology fetishist’s point of view, there’s plenty to play with and try out. Would we use ALSA and dmix instead of PulseAudio? Assuming we didn’t use the Concept Distro to try out going fully per-application with volume control, of course. It’d be a great place to see what we can do with Upstart, udev, D-Bus, DeviceKit (replacing HAL) and other plumbing-layer components.
In the desktop library layer, the bling guys could play with Multi-Pointer X with kernel-mode setting support and a resolution independent GTK+. Rendering could be fully indirect or entirely direct GL based, depending on preference.
And for the desktop itself, the user experience and interface designers have a completely blank canvas to play with. Since it’s just a Concept Distro, one needn’t worry about the ability of users to transition to new ways of working. Instead you can see how they react to seeing new ways of working in a demonstration or talk, perform usability testing in the lab and even see how they get on in the field.
It would be a very fun and exciting project.
Unfortunately, unlike the car world, there’s not necessarily the funding for such a thing. Who would want to finance an ongoing software development project that was explicitly intended to have no users?
In the automotive world, the Concept Car from a development point of view is important since companies cannot, for example, experiment with new engine technologies and expect their customers to be able to drive them on the road. In the software world, such “lab” projects are much easier to develop in isolation and tend to remain on our own workstations.
The Concept Car can also serve as a marketing tool, it draws potential customers to your show stand and while looking at the sexy car on the stand they’re ripe for being sold a somewhat more pedestrian road car. It also aids towards customer loyalty, since you’re more likely to buy another car from a manufacturer who is showing off the most advanced concepts.
In the Linux world, while we appear to have direct competition between the distributions, the reality is that we co-operate far more than you might expect, unless you’re involved with development. A Concept Distro would need upstream work from just about everybody.
And would such a thing help convert people from Windows or Mac OS? If it would, maybe it’s a good idea after all.
I’m afraid I have a confession to make. A couple of weeks ago, I purchased an iPhone. And to make matters worse, I’m wonderfully happy with it.
Now, I know that I should have got something more compatible with the community that I’m a member of. Maybe one of those OpenMoko powered Neo FreeRunner devices or even an Ubuntu Mobile powered prototype device.
But an iPhone it was. Why?
Well, frankly I needed something that works today.
The iPhone is a fascinating device. Don’t worry, I’m not going to go on about its features and all of its bling. What fascinates me is how easily Apple brought it to market, and now that the App Store is up and running, how quickly native applications are being written for it.
The most breath-taking thing is that this device is effectively running a version of Mac OS X ported to the ARM processor, and with any unnecessary bits for the smaller platform removed. The graphics, audio and other core libraries are basically the same as on the bigger brother computers.
In other words, Apple have done what Linux always promised; turned Darwin into a truly scalable platform.
What’s more, the pace at which new applications have been developed for it shows that this platform is easy to write for. My phone has rich, native applications for Twitter, Facebook, Flickr and Google; none of which came pre-installed.
I have a theory about how they’ve managed to scale their platform so quickly down to a size that fits in my pocket whilst also running on a machine that barely fits on my desk. The same theory explains why developers have been so quick to develop applications for it.
It’s not that their platform is better, or more capable, or even necessarily more flexible.
It’s that their platform is better componentised.
The core technologies of their platform are grouped into easy to understand components. It’s easy to draw boxes that show how these stack up to provide functionality to the developers, and it’s easy to see which boxes you can remove when scaling the platform down. Documentation is easier to write too, each component has a specific function and tech writers can turn that into a story and write simple to understand overviews and rich API documentation.
Audio playback is a great example here.
In Linux, you want to play sounds from your application, so you have a quick hunt around for Linux audio APIs. Your resulting list looks something like this:
And those are just the libraries and daemons installed by default, and I didn’t even include the format libraries such as libogg. If I were to include those, and the various other sound daemons, mixers and framework libraries (hello, Phonon), we’d be here all night.
Where is an application developer actually supposed to start?
Even I have no real idea where GStreamer, PulseAudio and ALSA begin and end; where they overlap and contradict each other; or which one I’m supposed to use.
Apple developers have it much easier. If you want to do anything with audio, you want Core Audio.
If I were to try and do something more interesting, like putting things on the screen, a somewhat common requirement for GUI applications, I’d have to read up on Clutter, Pigment, GTK+, GDK, Cairo, Pango, FreeType, Xft and X11. At least.
An analogy can be drawn with Lego.
When I was a young kid, if I wanted cars to sit on the roads around my Lego town, I had to build them from scratch. I didn’t really care about Lego cars, but the town looked silly without them, so it was a chore.
The chassis for each car was the same. A 1×4 flat at each end for the bumpers, with a 2×4 end on in the middle to make the wheel arches. These were joined by a 4×4 to make the car floor. (Sadly I couldn’t find any images on Google).
You had to know how to do it, but when you did there was a certain pride in being able to build a car from memory and knowing how all the pieces fit together. If you cared about cars, anyway.
Then an amazing thing occurred. Lego released a new car, and in the box was a single piece that made the chassis. No more mucking around and searching for lost bits, or realising you’d built it upside down. Now you could instead spend more time deciding what colour the body and windows would be, or if you really didn’t care, spend more time on the houses and other buildings that were more fun to build.
If the single piece wasn’t right, nothing stopped you building your own custom chassis, but it was a great time-saver. Nowadays they probably have a box where a complete car rolls out, but that’s ok too. Those are the boxes for people who really don’t care about cars, but understand that they need them to fill the multi-storey car park. They do other boxes with a thousand pieces to build a single car for those people who like making cars. Those are neat, the engines look like they’re working and everything!
Apple’s approach is somewhat like this. Their APIs are grouped into big components that you can quickly get to grips with, and spend your time on the interesting bits of the application. Linux’s API stack is more like a box of bits, you have to know how to fit them together and build the chassis before you start.
The only people that really delight in the differences between GTK+, GDK, Cairo and X11 are the authors of those particular parts of the platforms. The rest of us really wish we just had a single piece marked “InterfaceKit” that we could use.
The great thing about the Ubuntu distro team development sprints is that you get to sit around a table and share your knowledge about workarounds for all of the broken things in the current release: