There is quite a bit of software whose upstream authors decide that, as they are already using Git for development, GitHub should also be the main distribution channel. This allows for quite a bit of flexibility, which many authors have taken advantage of.
So, I just registered and set up http://githubredir.debian.net/ to make it easier for packagers to take advantage of it.
Specifically, what does this redirector do? Given that GitHub allows any given commit to be downloaded as a .zip or as a .tar.gz, it suddenly becomes enough to git tag with a version number, and GitHub magically makes that version available for download. Which is sweet!
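From the upstream author's side, the whole dance is roughly this - a minimal sketch using a throwaway local repository, with hypothetical user/project names, and the tarball/zipball URL shape written from memory:

```shell
# Minimal sketch: tag a release in a throwaway repository, exactly as an
# upstream author would (user/project names in the URLs are hypothetical).
mkdir demo && cd demo && git init -q
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "initial import"
git tag v1.2.0          # a plain tag is all GitHub needs
git tag -l              # lists: v1.2.0
# After `git push --tags`, GitHub offers the tagged tree at
#   http://github.com/<user>/<project>/tarball/v1.2.0  (.tar.gz)
#   http://github.com/<user>/<project>/zipball/v1.2.0  (.zip)
```

That is all it takes for a "release" to exist as far as downloaders are concerned.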
Sometimes it is a bit problematic, though, to follow their format. GitHub gives a listing of the tags for each particular project, and each of those tags has a download page, with both archiving formats.
I won't go into too much detail here - Thing is, going over several pages becomes painful for Debian's uscan, widely used for several of our QA processes. There are other redirectors already implemented, such as the one used for SourceForge.
This redirector is mainly meant to be consumed by Debian's uscan. Anybody who finds this system useful can freely use it, although you might be better served by the rich, official GitHub.com interface.
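For the uscan-minded, a debian/watch file using the redirector might look more or less like this - note that the project name is invented and the exact URL pattern is written from memory, so double-check it against the githubredir.debian.net base page:

```
version=3
# Hypothetical example: watch the tags of user "example"'s project "foo"
# through the redirector, instead of scraping GitHub's paginated tag lists.
http://githubredir.debian.net/github/example/foo /(.*)\.tar\.gz
```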
Anyway - Enough repeating what I said on the http://githubredir.debian.net/ base page. Find it useful? Go ahead and use it!
Thanks to some unexplained comments on some oldish entries on my blog, I found -with a couple of days of delay- RubyGems is from Mars, Apt-get is from Venus, in Pelle's weblog. And no, I have not yet read the huge amount of comments it generated... Still, I replied with the following text - And I am leaving this blog post in place to remind me to further extend my opinions later on.
Wow... Quite a few comments. And yes, given that the author wrote a (very well phrased and balanced) post, I feel obliged to reply. But given that he referred to me first, I'll just leave the chatter for later - I'm tired at this time of day ;-)
Pelle, I agree with you - This problem exists because we come from two very different mindsets. I have already said so - http://www.gwolf.org/soft/debian+rails is a witness to that point.
But I do not think the divide is between sysadmins and developers. I am a developer who grew from the sysadmin stance, but AFAICT that is not so much the case in Debian.
Thing is, in a distribution, we try to cater to common users. I have a couple of Rails apps under development that I expect to be able to package for Debian, and that I think can be very useful for the general public.
Now, how is the user experience when you install a desktop application, in whatever language/framework it is written? You don't care what the platform is - you care that it integrates nicely with your environment. Yes, the webapp arena is a bit more difficult - but we have made quite a bit of progress in that direction. Feel like using a PHP webapp? Just install it, and it's there. A Python webapp? Same thing. A Perl webapp? As long as you don't do some black magic (and that's one of the main factors that motivated me away from mod_perl), the same: Just ask apt-get to install it and you are set.
But... What about installing a Rails application? From a package manager? For a user who does not really care about what design philosophy you followed, who might not even know what a MVC pattern is?
Thing is, distributions aim at _users_. And yes, I have gradually adopted a user's point of view. I very seldom install anything not available as a .deb - and if I do, I try to keep it clean enough so I can package it for my personal use later on.
Anyway... I will post a copy of this message in my blog (http://gwolf.org/), partly as a reminder to come back here and read the rest of the buzz. And to go to the other post referenced here. And, of course, I invite other people involved in Ruby and Debian to continue sharing this - I am sure I am not the only person (or, to be fair, that Debian's pkg-ruby-extras team is not the only team) interested in bridging this huge divide and getting to a point where we can interact better - And I am sure that among the Rubyists many people will also value having their code usable by non-developers as well.
I love it when a lack-of-humor and lack-of-appropriateness-originated flamewar causes somebody to point me towards a very nice display of intelligent humor. Especially when it is so close to me, to my roots, to my family and my personal history. FWIW, for several years, while I was a BBS user, I used WereWolf as my nickname. Great thanks to Frank Küster - and, of course, to Christian Morgenstern.
The Werewolf - English translation by Alexander Gross
A Werewolf, troubled by his name,
Left wife and brood one night and came
To a hidden graveyard to enlist
The aid of a long-dead philologist.
"Oh sage, wake up, please don't berate me,"
He howled sadly, "Just conjugate me."
The seer arose a bit unsteady
Yawned twice, wheezed once, and then was ready.
"Well, 'Werewolf' is your plural past,
While 'Waswolf' is singularly cast:
There's 'Amwolf' too, the present tense,
And 'Iswolf,' 'Arewolf' in this same sense."
"I know that--I'm no mental cripple--
The future form and participle
Are what I crave," the beast replied.
The scholar paused--again he tried:
"A 'Will-be-wolf?' It's just too long:
'Shall-be-wolf?' 'Has-been-wolf?' Utterly wrong!
Such words are wounds beyond all suture--
I'm sorry, but you have no future."
The Werewolf knew better--his sons still slept
At home, and homewards now he crept,
Happy, humble, without apology
For such folly of philology.
Wouter insists that Ruby Gems are enough of an argument to keep Rails at a distance. Even though I agree with the basic claim and think that Gems are basically insane and sick, this should be put a bit more in perspective.
We are blessed. We are blessed to have Debian, such a rich OS with such a great package management system, and with superb integration between so many packages. Blessed are also the users of most Free Software based distributions, as they share the advantages of systems growing with full consciousness of the benefits of their interaction. However, integration with the rest of the world is not seamless.
Most scripting languages have their own infrastructure for managing their modules/libraries/packages/whatever and the dependencies between them. Perl has CPAN, PHP has PEAR, Ruby has Gems... I do see Gems as the most obnoxious of them all, but the basics are the same.
The Rails crowd started being Unix-centric, but the Windows (and MacOS - they are no better, believe me, at least in this regard) world has exerted its pressure. Gems caters very well to their needs, but we do suffer the lack of integration on the distro side.
The only sane way the proprietary-minded people have managed to stay clear of the well-known "DLL hell" is to ship everything a given program requires bundled together - that's the main reason for the bloat of lots of applications, and for the sloppiness of security support. Every application packager is responsible for shipping updated versions of any library it bundles, except for the very basic core that the OS itself provides. That seems so annoyingly backwards to us that... it is unbelievable.
So, yes, Rails application trees often include Rails itself. For $DEITY's sake, even I have grown used to working that way, as things tend to break under your nose otherwise. My proposal (which we talked over at DebConf, but have not pushed so far) is to support simultaneous versions of Rails installed in a Debian system (of course, via different packages), more or less in the way that simultaneous versions of Ruby, PHP or Python (and, in some limited fashion, Perl - although Perl does not suffer from these incompatible bumps. Yay for Perl!) can be installed.
...And, yes, together with the pkg-ruby-extras team, I have been trying to -slowly, yes- package whatever modules we often use so they don't have to be included in Rails applications.
So far, the best way (although by far not optimal) I have found to limit this explosion of trees is to include most libraries in my Rails application trees as git submodule trees - If they are not explicitly downloaded, the systemwide libraries will be used.
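As a sketch of that setup - all paths and the library name are hypothetical, and a toy local "upstream" repository stands in for a real remote (the protocol.file.allow bit is only needed by newer git for local-path submodules):

```shell
# Build a toy upstream library repo, then track it as a submodule from the app.
mkdir -p subdemo/upstream && cd subdemo/upstream && git init -q
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "library code"
cd .. && mkdir app && cd app && git init -q
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "app skeleton"
# Record the library as a submodule pointer under the app tree:
git -c protocol.file.allow=always \
    submodule add ../upstream vendor/plugins/mylib
git -c user.name=Demo -c user.email=demo@example.org \
    commit -q -m "Track mylib as a submodule"
# A fresh clone gets only the pointer; unless someone runs
#   git submodule update --init
# the directory stays empty, and the systemwide library is picked up instead.
```

The nice property is exactly that last comment: the bundled copy is opt-in per checkout, not baked into the tree.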
Yes, the "ship the whole thing as a bundle" approach is quite annoying. However, at least I must acknowledge that it works better than the approach I took with the previous (mod_perl-based) webapps I wrote... As relatively few people grok mod_perl, I ended up with quite a few apps which were not installed by anybody but me. Rails is obnoxious... but seems to be parsable by humanity at large.
It's sad that today in Planet Debian we have hit a Eurocentric, geographically discriminating meme. Particularly, one I'd love to take part in. Well, at least I can assure you we have reached the usual low temperature of 2 Celsius in Mexico City... As always, people say it's so cold that this year we _will_ get snow. And as always, I'm sure it's just wishful thinking ;-)
So, even with Marcelo's frozen Zócalo live again for this winter, I can only reinforce our tropical paradise stereotypes by reminding you that this is less than 500Km away from home - All year 'round:
Almost a month ago, Mauro pointed towards acerfand, a daemon to keep the Acer Aspire One's fan quiet while not needed. Thanks, Mauro, you made my life more pleasant ;-)
Today I had some free time in my hands (of course, putting aside everything else I should be doing), so I decided to un-uglify my machine. I hate having random stuff in /usr/local! So I packaged Rachel Greenham's acerfand for Debian. It should hit unstable soon.
Of course, it will not make it to Lenny - which is a shame, given how nicely Lenny recognizes everything in this sweet machine. So, I have set up a repository for it - Once the package is formally accepted in Debian, and once lenny-backports comes to life there, I will move it to backports.org. Anyway, you can add this to your /etc/apt/sources.list:
Note that in the future, this package might provide some more niceties... I decided to -at least for now- stash away acer_ec in /usr/share/acerfand, but it does open a nice window to the AAO's EC(?) registers... And could be useful for many other things.
[Update]: Following Matthew's comments, both on this blog post and on the ITP bug, I am not uploading acerfand to Debian. Still, I'm using the program and find it working fine, and quite useful. You can use it from my personal repository, as written above.
I was invited to participate at Festival Internacional de Software Libre (FISOL), in Tapachula, Chiapas. The other invited speakers were Sandino Flores (tigrux), Alexandro Colorado (jza), Eric Herrera (crac), John Hall (maddog) and Fernando Romo (pop), all well-known due to very different contributions to the Free Software movement in Mexico and abroad. Several other people also presented tutorials, but I was not involved in that part, and mentioning one while not the rest would be unfair.
The conference was quite massive - Tapachula is a medium-sized city (~200,000 people) in Mexico's Southernmost point - Sadly, due to its geographical location, it is mainly famous for being the region where illegal immigrants from Central America enter Mexico towards the USA, and it is a known spot for all kind of abuses, both from the authorities and from gangs of thieves.
This is the third time I come to this conference. The first two years (2005, 2006) it was organized by the local CUCS university and it was reasonably large, but this year many other universities in the region also took part. Attendance was... HUGE. We were told around 1600 students registered to participate, and I expect at least 1000 of them to have actually been there. Amazing and encouraging!
It is, by far, a base-level conference - Most attendees had had no previous contact with Free Software at all, or had at most toyed around with a distro for some hours. Some people, of course, _are_ already working and involved, to various degrees. All in all, quite encouraging.
But I did not only have fun (and get extremely tired!) at the conference, or at the beer sessions afterwards. I also got to push some more publicity (and work, of course!) towards my new favorite pet project: OpenStreetMap.
Like many other Debianers, I joined the fever last August, during DebConf. Since then, I have been quite busy tracing and mapping; I am quite fortunate to have caught the OSM addiction while living on the edge of the well-mapped area of Mexico City. So far, I have mostly worked on the Ciudad Universitaria and Coyoacán areas, where some noticeable improvement can be felt. Lots yet to do, for sure, but I'm making progress.
Still, mapping Coyoacán sometimes feels a bit futile. Why? Because each of my cycling/tracing/mapping sessions looks like little more than a blip on the overall state of my city's map, which is way better than what I expected - Most of the central city is done (although lots of work is still pending on the very large outskirts - but getting there can be a trip just by itself!)...
But this time, I had the opportunity to do something new, something that makes a noticeable difference. And, yes, it feels very good. How does the map of Tapachula look for just a weekend of mapping activity? And, yes, I only went out once (morning running) expressly to get some new traces; the rest happened while being driven by car to the conference-related activities. And I didn't even have to say "let's go by a different route" once! ;-)
Just for comparison: Last week, Tapachula's state was quite similar to what they have today on Google Maps - Just the major highways in the area. Besides, if you look at the satellite map for Tapachula, I estimate I managed to map somewhere between a tenth and a fifth of the city's surface.
So, have you got a GPS? Do you enjoy going out on the street, be it walking, running, cycling on driving? Or even if you don't enjoy it, are you sometimes forced into it? Start contributing to OpenStreetMap now!
I was already used to regularly receiving Bubulle's bug 500000 contest reports. Lately, he has been busy pushing translators to get d-i in shape - but expect notices from him soon! Right now, we sit at 499416 bug reports registered so far in the Debian BTS. We are really close to the half-megabug mark!
For about eight years, I was a very happy WindowMaker user. It was very lightweight, aesthetically pleasing, and I had internalized its behaviour and keybindings so much that I didn't feel I'd ever switch away. I periodically tried (forced myself even!) to use any of the other, more en vogue environments... Experimented using Gnome for a week, KDE for a week, XFCE for a week (so I would have enough time to learn their ways)... And always came back to my good, well-known wmaker.
In 2006, when we held DebConf6 in Mexico, I saw how other people worked with ion3. I fell immediately in love with it. Not because it is prettier, snappier or has nicer widgets than other window managers - but because it is enormously more usable. Quoting Tuomo Valkonen, the ion3 author,
Ion is not perfect and certainly not for everyone, but neither is any user interface. Usability is subjective.
Using a keyboard-oriented, tiling window manager represented -for the first time in 20 years (I had my first contact with a Macintosh in 1986)- a radical user interface change. For my way of working, I just don't need a desktop, I don't need having a background space or overlapping windows. What I need is a way to functionally organize the windows I have open at any given time, quickly switch between them (and not depending on the mouse, please!), maximizing screen space and all that. ion3 was a godsend.
Then, in 2007, there was (yet another) huge flamefest. Valkonen basically does not want distributors to distribute any version of ion3 that is not the latest, or to introduce changes not approved by him - his demands effectively make ion non-free. So, it was moved to the non-free section of Debian, where only Womble decided to keep supporting it. And, of course, by September 2007 Julien Danjou announced he had written another window manager: Awesome.
My main motivation for switching away from ion3 is that... I don't want to use non-free software. But I was very comfortable with ion3. It was only after I saw many other people using Awesome at DebConf8 that I decided to bite the bullet and switch. It looked at least as comfortable as ion3.
But... Well, I cannot come up with better phrasing than what Joey said when he switched to Awesome, almost exactly a month ago. When changing between the mainstream window managers, the differences are mostly cosmetic. But with these really different window managers... I cannot but reproduce Joey's words:
I wish I had a good analogy to explain to my nontechnical readers what changing to a new window manager is about.
One way to think about it is that it's like driving a car down the road, and suddenly swapping the steering wheel and brakes out for a tiller and gear shifter. And having to downshift for braking until you learn that the brakes moved to the turn indicator lever. By trial and error.
But that's really only part of it. Another way to look at it is adopting a new philosophy. Or, in some cases a cult. (In some cases, with crazy cult leaders.) Whether they use Windows or a Mac, or Linux, most computer users are members of a big established religion, with some implicit assumptions, like "thy windows shall be overlapping, like papers on the desktop, and thou shalt move them with thy mouse".
So, changing to a new window manager is a process of being dumped into a different environment, where nothing works like you've come to expect, and trying to construct a mental model that you can use to make sense of it. But it's also a process of modifying that environment to behave the way you like.
And when done whole-heartedly, this doesn't just mean trying to make it like the environment you were used to before. It means trying to absorb the underlying philosophy of the window manager, and think up new ways of doing things, inspired by that philosophy, and modify the environment to allow doing those things.
So ideally, "I switched to a new window manager" doesn't mean "my screen has some different widgets on it now". It means "I'm looking at the screen with new eyes."
So, what's so different?
Besides learning some new keybindings (expected, of course), Awesome has several suggested layouts to help you organize your workflow, usually (although not always) consisting of a main area and a side area, tiled side by side (or one above the other, or several stranger ways). This sounds rigid, but it is incredibly comfortable - and I've only been an Awesome user for three days!
But what sets Awesome apart from basically anything else is the concept of tags. When you see an Awesome session, you recognize something similar to the very well known workspaces concept we have had in any Unix-like environment for many years, right? Well, but they are not workspaces. They are tags. What is the difference?
When I work with workspaces, each of my windows lives in a single workspace. So in one, I'll have my mail-related stuff. In another one, I might be browsing. In another one, I have my development things, and I might be following some logfiles in yet another one.
Awesome allows you to use tags for categorization in a much more flexible way.
For example: I am mostly a web-oriented developer. I usually need four things when developing a system: A browser, Emacs, the log for my development server, and a console where I can peek and poke at my objects and interactions. Of course, cramming them all into the same screen makes no sense - It would be better to follow the good ol' desktop metaphor, and just switch the focus and raise the window, right?
In Awesome, I can have them all set to the maximum screen size - _and_ use the most common combinations as well. Each of the windows can carry more than one tag (and yes, this is immensely more flexible than the "always visible" hint on traditional WMs).
To begin with, I'll give each major process a tag to itself, to work full-screen. Emacs is on tag 1, my browser on tag 2. The log and the console share a terminal (e.g. via screen or terminator).
This console by itself is not too useful, so I'll set it to tag 4, and we will go back to it later.
So if I'm building a view or following online documentation, I will add tag 3 to both Emacs and the browser.
I'm also setting tag 4 to the browser - That allows me to use it next to my terminal, following the results of my website-clicking.
And, of course, tag 5 will be set to Emacs and to the console, so I can quickly check any quick API-related question that does not need the documentation or look at a newly written method.
By the way, have you noticed CapsLock is the most stupid key ever invented? Ok, I gave it a good use: It's called Mod4. Imagine it is just an extra Ctrl, Alt or Meta key.
So, Mod4-1 gives me Emacs. Mod4-2 gives me the browser. Mod4-3 gives Emacs+browser, Mod4-4 gives browser+console, and Mod4-5 gives Emacs+console. And, of course, the handy Mod4-0 gives me all of my open windows tiled side by side.
Even this is a newbie pattern, I know - I could keep 4 and 5 free, and just select several tags to be active simultaneously. How? Switch to tag 1 exclusively (Mod4-1), and activate tag 2 as well (Ctrl-Mod4-2). There, I have Emacs and the browser side by side. Want the console (now living on tag 3) instead of the browser? Simple. Ctrl-Mod4-2 (toggle off tag 2), Ctrl-Mod4-3 (toggle on tag 3).
Anyway - As you can see, I am excited at finding a very new and nice tool to help me work better. Today I was playing a bit with the Awesome widgets, but that's something to be talked about later.
In Debian, we are at a crossing point: Awesome is just reaching ion3's popularity. And I'm adding my two main machines' votes to Awesome.
This is Awesome. Quoting (yes, one last quote) the official Awesome site, This gonna be LEGEN... wait for it... DARY!.
Back from Argentina, back from DebConf. As always, the ~3 weeks I spent there were really great, on as many fronts as I can imagine or describe. But I won't go into that now - For the purposes of this posting, the single thing I got out of DebConf was looking with envy at all the people who had something that used to be called a sub-notebook some time ago, and has now morphed into the more modern(?) name, netbook.
Several people were seen with their tiny Asus Eee machines, of various models. And I definitely decided I want one - I was quite close to buying one in Argentina, as they are readily available there (surprisingly, in Mexico Asus sells motherboards, but no netbooks)... But I've always preferred waiting, or paying a little premium, to have an in-country seller and warranty.
Back home, Pooka told me that several stores in Mexico do sell the Acer Aspire One. After a little research, I decided to go for it. Office Depot sells the AAO for MX$4500 (around US$450). The only model they carry comes with Windows XP installed (instead of Linpus Linux), which is a shame I made thoroughly clear to the vendor - But it does come with 1GB RAM and a 120GB HDD, much better for my needs than the other model, with 512MB RAM and an 8GB SSD. This is, after all, a full (although very small) machine. It has an Intel Atom N270 CPU - I haven't yet measured how it fares, but it feels quite responsive so far for typical desktop tasks.
But what made me really happy about it is the Debian support. The only tricky part was to get the installer going, as it does not have a CD drive to boot from (and I didn't want to completely overwrite my only available USB stick's data). Don't try installing Debian Etch, as its kernel will not support the built-in Realtek RTL8101E network card (maybe etch-and-a-half's kernel does?). My greatest ally in this was, of course, the wiki.debian.org article on the Acer One - I rebooted with Lenny's debian-installer, and everything was smooth from that point on. Proprietary firmware is required for the wireless AR5007 card and the webcam, but -exactly as documented in the wiki- they are covered respectively by madwifi and linux-uvc.
I did a very regular install, with basically the default desktop and notebook setup. I continue to be amazed... Everything just works! It is not even fun, there are no funny drivers to recompile, no bang-your-head-against-the-wall... Even suspend-to-RAM. It just works.
The only glitch I found so far is that, after suspend-to-RAM, the madwifi module must be removed and reloaded to get the wireless network back. This is a well-known glitch that can be easily worked around. But besides that, it is... as easy as it gets. And, at such a price, and weighing under 1 kg... This computer will go out with me quite often! Battery life is just 2 hours, but for most situations that's more than enough.
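For the record, the workaround can be automated with a pm-utils sleep hook - a minimal sketch, assuming the relevant module is ath_pci (that's madwifi's usual name, but check lsmod on your own machine). The real file would go under /etc/pm/sleep.d/; I'm writing it to /tmp here purely for illustration:

```shell
# Write a suspend/resume hook that unloads madwifi before sleeping and
# reloads it on wakeup. The module name ath_pci is an assumption.
cat > /tmp/20madwifi <<'EOF'
#!/bin/sh
case "$1" in
    suspend)  rmmod ath_pci ;;
    resume)   modprobe ath_pci ;;
esac
EOF
chmod +x /tmp/20madwifi    # pm-utils only runs executable hooks
```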
Several weeks ago, the people in charge of maintaining the Windows machines in my institute were desperate because of a series of virus outbreaks - especially, as expected, in the public lab - but the whole network smelled virulent. After seeing their desperation, I asked Rolman to help me come up with a solution. He suggested replacing the local Windows installations with a server hosting several virtual machines, all regenerated from a clean image every day and exported as rdesktop sessions. He suggested using Xen for this, as it is the virtualization/paravirtualization solution currently best offered and supported by most Linux distributions (including, of course, RedHat, towards which he is biased, and Debian, towards which I am... more than biased, even bent). So far, no hassle, right?
Of course, I could just stay clear of this mess, as everything related to Windows is off my hands... But in October, we will be renewing ~150 antivirus licences. I want to save that money by giving a better solution, even if part of that money gets translated to a big server.
Get the hardware
But problems soon arose. The first issue was hardware. Xen can act in its paravirtualization mode on basically any x86 machine - but it requires a patched guest kernel. That means I can paravirtualize many different free OSs on just about any computer I lay my hands on here, but Windows requires full (or hardware-assisted) virtualization. And, of course, only one of the over 300 computers we have (around 100 of which are recent enough for me to expect to be usable as a proof of concept for this) has a CPU with VT extensions - And I'm not going to decommission my firewall to become a test server! ;-)
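By the way, checking whether a given box has those extensions is a one-liner: the CPU announces hardware virtualization support with the vmx flag (Intel VT) or svm (AMD-V) in /proc/cpuinfo:

```shell
# Count the /proc/cpuinfo lines announcing hardware virtualization
# support; 0 means the box can only do paravirtualization.
grep -cE 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization here"
```

That's how I confirmed that exactly one of our machines qualified.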
When software gets confused for hardware
So, I requested an Intel® Core™2 Quad Q9300 CPU, which I could just drop into any box with a fitting motherboard. But, of course, I'm not the only person requiring computer-related stuff. So, after pestering the people in charge of purchases on a daily basis for three weeks, the head of acquisitions came smiling to my office with a little box in his hands.
But no, it was not my Core 2 Quad CPU.
It was a box containing... Microsoft Visio. Yes, they spent their effort looking for the wrong computer-related thingy :-/ And meanwhile, Debconf 8 is getting nearer and nearer. Why does that matter? Because I have a deadline: By October, I want the institute to decide not to buy 150 antivirus licenses! Debconf will take some time off that target from me.
Anyway... The university vacations started on July 5. The first week of vacations I went to sweat my ass off in Monterrey; by Monday the 14th I came back to my office, and that same day I finally got the box, together with two 2GB DIMMs.
Experiences with a nice looking potential disaster
Anyway, by Tuesday I got the CPU running, and a regular Debian install in place. A very nice workhorse: 5GB RAM, quad core CPU at 2.5GHz, 6MB cache (which seems to be split in two 3MB banks, each for two cores - but that's pure speculation from me). I installed Lenny (Debian testing), which is very soon going to freeze and by the time this becomes a production server will be very close to being a stable release, and I wanted to take advantage of the newest Xen administration tools. Of course, the installation was for AMD64 - Because 64 bitness is a terrible thing to waste.
But I started playing with Xen - and all kinds of disasters struck. First, although there is a Xen-enabled 2.6.25 Linux kernel, it is -686 only (i.e. no 64-bit support). Ok, install a second system on a second partition. Oh, but this kernel is only domU-able (that is, it will correctly run as a Xen paravirtualized guest), not dom0-able (it cannot act as the root domain). Grmbl.
So, get Etch's 2.6.18 AMD64 Xen-enabled kernel, and hope for the best. After all, up to this point, I was basically aware of many of the facts I mentioned (i.e. up to this point I did reinstall once, but not three times)... And I hoped the kernel team would have good news regarding a forward-port of the Xen dom0 patches to 2.6.25 - because losing dom0 support was IMO a big regression.
But quite on time, this revealing thread came up on the debian-devel mailing list. In short: Xen is a disaster. The Xen developers have done their work quite far away from the kernel developers, and the last decent synchronization that was made was in 2.6.18, over two years ago. Not surprisingly, enterprise-editions of other Linux distributions also ship that kernel version. There are some forward-patches, but current support in Xen is... Lacking, to say the least. From my POV, Xen's future in the Linux kernel looks bleakish.
Now, on the lightweight side...
Xen is also a bit too complicated - Of course, its role is complicated as well, and it has a great deal of tunability. But I decided to keep a clean Lenny AMD64 install, and give KVM, the Kernel Virtual Machine, a go. My first gripe? What a bad choice of name. Not only do Google searches for KVM give completely unrelated answers - the name was already well known (for keyboard/video/mouse switches), even in the same context, even in the same community.
KVM takes a much, much simpler approach to virtualization (both para- and full-): We don't need no stinkin' hypervisors. The kernel can just do that task. And then, kvm becomes just another almost-regular process. How nice!
In fact, KVM borrows so very much from qemu that it even refers to qemu's manpage for everything but two command-line switches.
Qemu is a completely different project, which gets to a very similar place but from the other extreme - Qemu started off as a faster alternative to Bochs, a very slow but very useful multi-architecture emulator, adding all kinds of optimizations, and by now it is genuinely useful (i.e. I use it on my desktop whenever I need a W2K machine).
Instead of a heavyweight framework... KVM is just a modprobe away - Just ask Linux to modprobe kvm, and kvm -hda /path/to/your/hd/image gets you a working machine.
Anyway - I was immediately happy with KVM. It took me a week to get a whole "lab" of 15 virtual computers (256MB RAM works surprisingly well for a regular XP install!) configured to start at boot time off a single master image, via qcow images.
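The per-machine setup boils down to something like this - a sketch of my approach, not a polished script, and all image names are hypothetical:

```shell
# One pristine master image, installed once by hand, e.g.:
#   kvm -hda master-xp.img -cdrom xp-install.iso
# Then every lab machine is a cheap copy-on-write overlay on top of it:
qemu-img create -f qcow2 -b master-xp.img lab01.qcow2
kvm -hda lab01.qcow2 -m 256 -vnc :1 &
# Regenerating a clean machine each day is just deleting and re-creating
# the .qcow2 overlay - the master image itself is never touched.
```

The overlay files stay tiny (they only record the blocks each guest dirties), which is what makes 15 guests per server feasible.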
Xen has been in the enterprise for a long time already, and has a nice suite of administrative tools. While Xen depends on having a configuration file for each host, KVM expects everything to be passed on the command line. To get a bird's-eye view of the system, Xen has a load of utilities - KVM does not. And although RedHat's virt-manager is said to support KVM and qemu virtualization (besides its native Xen, of course), it falls short of what I need (i.e. it relies on a configuration file... which lacks the expressiveness to specify a snapshot-based HD image).
To my surprise, KVM has attained many of Xen's most amazing capabilities, such as live migration. And although it is easier to just use fully emulated devices (i.e. an emulation of the RTL8139 network card), as they require no drivers extraneous to the operating system, performance can be greatly enhanced by using the VirtIO devices. KVM is quickly evolving, and I predict it will largely overtake Xen's place (and of course VMware's and others').
Where I am now
So... Well, it seems those of us who adopt KVM and want to put it into production now will have some work to do building the tools to gracefully manage and report on it. I won't touch my setup much until after Debconf, but so far I have done some work on Freddie Cash's kvmctl script. I am submitting some patches to him to make his script (IMHO) more reliable and easier to automate (if you are interested, you can get my current version of the script as well). And... Starting September, I expect to start working on a control interface able to cover my other needs (such as distributing configuration to the terminals-to-be, or centrally managing the configurations).
Several people have approached me (or I've stumbled upon their sites) asking me about something called Debian 5.0 Beta 2.
It. Is. Not. That.
Please read carefully the announcement for Debian Installer lenny beta 2 - Yes, I understand this reached many people who are not involved in Debian but are enthusiastic users nevertheless. In short: The only thing that reached the beta is the debian-installer program (usually called just d-i), the amazing piece of code that handles a Debian installation on your system. And yes, it is meant for wide testing and work.
But please, do not take this as a preview of the new Debian release - it is not. If you install a system using this version of d-i, you will be tracking the Testing branch of Debian, and your system will be in a continuous state of flux. Yes, we do expect Lenny to freeze in the next couple of weeks, after which it will be quite close to a Beta release (i.e. almost no new versions, no fresh software, just bug fixes). But hey - a Beta is supposed to be close to release quality. And if you look at the release-critical bugs affecting the Testing branch (green line), you will clearly see we have over 400 bugs to fix before Lenny is allowed to be called stable. And that is only one of the criteria needed to reach Lenny - glance over the Debian Release Management page to quickly understand the nature of the changes still to come.
Oh, and of course - Even if it is not necessarily up-to-date, I have found the Wiki page created by Peter Eisentraut as an excellent place to start working whenever I have some free time: Lenny Release Goals.
So... If you are not yet working towards making Debian the best distribution ever, and Lenny the best Debian release ever, you now know where we need your help ;-)
(side note: d-i team, maybe the next announcement could use some words pointing out we are not doing a Debian beta program, just a d-i beta release?)
The recent OpenSSL incident cannot be hidden. It was a very important blow to the Debian project's public face and reputation. A major hole slipped in under the door in the form of a bugfix - and with the best of intentions. This was not a deliberate attack, nor was it the result of a bad or sloppy maintainer - it was an honest, although painful, human mistake.
Several people started laughing at our processes and supposed strengths right away. I do, however, feel this shows how Debian is stronger security-wise than any other system. And it also shows that the saying "given enough eyeballs, all bugs are shallow" not only has not lost its validity, but was reaffirmed. Free Software development was once again proven better than security through obscurity.
Were it not for OpenSSL (and, in this case in particular, Debian's packaging of it) being Free and subject to code audits, this problem might never have been found. I have been asking some friends who are part of different black-hat groups, and looking for this kind of information on the Web, and it seems that, were it not for Luciano's work, we would have kept running cryptographically weakened versions of OpenSSL for a long time. After all, 32768 possible keys is still quite a lot for a black-hat group to notice as uneven noise - as a lead pointing to the undeniable weakness.
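To put that keyspace in perspective: the broken generator was left seeding only from the process ID, and Linux PIDs topped out at 32768, so each key type and size had at most 2^15 possible values. A back-of-the-envelope sketch (the keys-per-second rate is an assumption, purely for illustration):

```shell
# Only the process ID fed the PRNG, and PIDs run from 1 to 32768:
keys=$((2 ** 15))
echo "$keys possible keys"            # prints: 32768 possible keys

# At a made-up rate of 10 candidate keys tried per second:
echo "$((keys / 10 / 60)) minutes to walk the whole keyspace"
# prints: 54 minutes to walk the whole keyspace
```

Small enough to brute-force once you know the flaw exists - but, as the paragraph above argues, not so small that the skew jumps out at you if you are merely watching keys go by.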
It took two years to find the bug, yes. But it was found doing quality assurance work on publicly available source code. It was promptly fixed, mitigating, as far as possible, the damage that could be caused. Tools for finding and fixing the defective keys were crafted and freed together with the announcement. Yes, I am sure there will be some compromises due to this, but an embarrassing hole has been dealt with in the best way possible.
Anyway... I am very happy - I was going over Luciano's NM report, and found something I only suspected but was not sure about. I can now state clearly: I have never been so happy to advocate somebody to become a DD.
Luciano wrote a very good blog post (in Spanish) with his viewpoints on the Debian OpenSSL incident. If you happen to understand Spanish and are reading this blog, please drop by Luciano's.
Luciano: Once again, my hat goes off to you :-)
I usually don't like me-too comments... But this is something that really disappoints me about my otherwise-favorite development framework. I must echo Matt Palmer's comment on Luke Kanies' entry:
Ruby. Has. A. Distribution. Problem.
Nice, good read. Sadly, many Rails pushers see distributability as something very minor, something that should not worry Rails developers right now, as there is too much other serious work to be done - better UTF-8 handling, a clearer language, better performance... And besides, any programmer can live just fine with gems. (Yes, that is all taken from a rant I had with a very convinced person.)
My gripe is that Rails is no longer a small, fringe project. Rails is an enterprise-grade development framework, with thousands of deployed production systems. And if the Rails developers don't start acting responsibly, if they keep treating these problems as low-priority, the culture of Rails developers (that is, its users) will become rigid - and that will seriously harm Rails' future.
Distributability and packageability are not only for OS distributors. It is not only we Debian zealots who care about software being easily packageable. By using Ruby Gems, you dramatically increase entropy and harm your systems' security.
Read Luke's text for more details. It is quite worth the time.
I'm about to leave for Monterrey, Nuevo León, some 700Km North. Why? Because I was invited to be at the Monterrey FLISOL. And what exactly is a FLISOL? A very nice and interesting idea: Festival Latinoamericano de Instalación de Software Libre, Latin American Free Software Install Festival.
So far, I have stayed away from install-fests. I don't like them. And I stand by what I have always said: I am going because I was invited to talk about network security (of course, giving more than a little bit of relevance to Free Software as, IMHO, the only way to get to a decent level of security). But I do want to be part of this. It is large. Very large. So large, you don't want to miss out.
According to Beatriz Busaniche, FLISOL will be simultaneously held in 210 cities all over Latin America, in Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay and Venezuela. In short: everywhere.
Spread the word. Spread the love. Spread the fun!