
On the number of attempts on brute-force login attacks

Submitted by gwolf on Fri, 02/06/2015 - 12:51

I would expect brute-force login attacks to be more common. And yes, at some point I got tired of ssh scans, and added rate-limiting firewall rules, even switched the daemon to a nonstandard port... But I have very seldom received an IMAP brute-force attack. I have received countless phishing scams on my users, and I know some of them have bitten, because the scammers then use their passwords on my servers to send tons of spam. But brute-force activity against IMAP is clearly atypical here.

Anyway, yesterday we got a brute-force attack on IMAP. A very childish attack, attempted from an IP in the largest ISP in Mexico, but using only usernames that would not belong in our culture (mostly English first names and some usual service account names).

What I find interesting is that each login was attempted a limited (and different) number of times: Four account names were attempted only once, eight were attempted twice, and so on — following this pattern:

 1 •
 2 ••
 3 ••
 4 •••••
 5 •••••••
 6 ••••••
 7 •••••
 8 ••••••••
 9 •••••••••
10 ••••••••
11 ••••••••
12 ••••••••••
13 •••••••
14 ••••••••••
15 •••••••••
16 ••••••••••••
17 •••••••••••
18 ••••••••••••••
19 •••••••••••••••
20 ••••••••••••
21 ••••••••••••
22 ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••

(each dot represents four attempts)

So... What's significant in all this? Very little, if anything at all. But for such a naïve login attack, it's interesting to see that the number of attempted passwords per login name varies so much. Yes, 273 accounts (over ¼ of the total) got 22 requests each, and another 200 got 18 or more. The rest... Fell quite short.

In case you want to play with the data, you can grab the list of attempts with the number of requests. I filtered out all other data, as it was basically meaningless. This file is the result of:

  $ grep LOGIN /var/log/syslog.1 | grep FAILED |
      awk '{print $7 " " $8}' | sort | uniq -c
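If you want to replay the toy analysis, an awk one-liner like this rebuilds the dot chart from a file of "count username" pairs such as the pipeline above produces (the input lines here are made up for illustration):

```shell
# Build a histogram: how many accounts were tried N times each.
printf '%s\n' '22 alice' '22 bob' '18 carol' '1 dave' |
  awk '{hist[$1]++}
       END { for (n in hist) {
               bar = ""; for (i = 0; i < hist[n]; i++) bar = bar "•"
               printf "%2d %s\n", n, bar } }' |
  sort -n
```

With this toy input the output is ` 1 •`, `18 •`, `22 ••` — note that here one dot is one account, while in the chart above each dot stands for four.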

Split regarding Docker

Submitted by gwolf on Tue, 04/29/2014 - 13:15

I have heard many good things about Docker, and decided to give it a spin on my systems. I think application-level virtualization has a lot to offer to my workflow...

But the process to understand and later adopt it has left me somewhat heart-torn.

Docker is clearly great technology, but its documentation is... Condescending and completely out of line with what I have grown used to in my years using Linux. First, there is so much simplistic self-praise sprinkled throughout it. There is almost no page I landed on that does not mention how user-friendly and user-centric Docker's command-line arguments are — They let you talk in almost plain1 English. What they don't mention is that... Well, that's the way of basically every command-line tool. Of course, as soon as you start specifying details to it, the plain-Englishness starts diluting into a more realistic English-inspiredness...

Then... Things that go against our historical culture. It is often said that Windows documentation tends to be repetitive because users don't have the patience to read a full document. And our man pages are succinct and to the point, because in our culture it is expected that users know how to search for the bit of information they are after. But reading documentation that's so excited with itself, that praises the same values and virtues again and again, but never gets to the point I am interested in (be it deployment, interoperation, a description of the on-disk images+overlays layout, or anything moderately technical)... makes me quite unhappy.

Last (for now)... Such a continuous sales pitch, an insistence on the good virtues, makes me wary of something they might be hiding.

Anyway, at least for now, I just wanted to play a bit with it; I will wait at least until there is a backport to the stable Debian version before I consider moving my LXC VMs setup over to Docker (and a backport does not seem trivial to achieve, as Docker has several updated low-level dependencies we are unlikely to see in Wheezy).

But I had to vent this. OK, now go back to your regular work ;-)

  • 1. Of course, plain as long as you agree to a formal grammar... Details, details

The joys of updating a webapp

Submitted by gwolf on Fri, 05/31/2013 - 13:23

I like Drupal. It's a very, very flexible CMS that evolved into a full-fledged Web development framework. Mind you, it's written in PHP, and that makes it a nightmare to develop for (in the ~6 years I have used it for all of my important websites, I have only once got around to developing a set of related modules for it).

PHP programming sucks and makes my eyes and fingers bleed, but happily there are people who disagree with me — And they tend to write code. All the better!

Minor upgrades with Drupal are quite easy to handle. Not as easy as I'd like — i.e. whenever I upgrade the core system or a module, I have to log in as root^Wadmin^WUser #1 to ~15 different sites and run http://someurl/update.php — but it very seldom causes any pain.

The updates that have to be run via this URL are usually on the database's structures, so I understand they have to be started (and watched) by a human. And yes, I know I could do that with Drush, the Drupal shell, but it is not very friendly to Debian-packaged Drupal... But easy enough.

But major updates are a royal pain, and they usually amount to quite a bit of work. First, disable all of the modules and revert to a known-safe theme. OK, it makes sense. Second, check whether the modules exist for the newer version (as the old ones won't work — Drupal changes enough between major versions that not only is it API-incompatible, I'd classify it as API-unrecognizable). OK, all set? Now for the live migration itself... It has to be triggered from the browser.

So yes, I am now staring at a window making clever AJAX status updates. I am sitting at 46 of 199, but following the lovely ways of programmers, it's impossible to foresee whether update #47 will just be an UPDATE foo SET bar=0 WHERE bar IS NULL or a full-scale conversion between unspeakable serialized binary structures while rearranging the whole database structure.

And yes, while the meter progresses I stand in fear that update #n+1 will bomb giving me an ugly red error. I must keep the magic AJAX running, or the update might be botched.

And, of course, the update has sat at #69 all the while I wrote the last two paragraphs. Sometimes the updates can progress after an interruption... And it seems I have no choice but to interrupt it.

/me crosses fingers...

[update] Wow... I am happy I got bored of looking at the meter and decided to write this blog post: After several minutes, and just as I was about to launch a second update session (130 updates to go), the meter advanced! I'm now sitting watching it at #75. Will it ever reach 199?

[update] And so it had to be... At around 115, I now got:

*sigh* The update process was aborted prematurely while running update #7000 in biblio.module...


LVM? DM-RAID? Both? None?

Submitted by gwolf on Sat, 09/17/2011 - 13:06

Patrick tells of his experience moving from LVM to RAID. Now, why do this? I have two machines set up with LVM-based mirroring, and they work like a charm - I even think they offer better flexibility than a RAID-controlled setup, as each of the partitions in a volume group can easily be set to use (or stop using) the mirroring independently, and the requirement of having similar devices (regarding size) also disappears. Of course, this flexibility allows you to do very stupid things (such as setting up a mirror on two areas of the same rotational device - good for toying around, but of course never to be considered for production). And the ability to grow and shrink partitions online is just great.
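As a sketch of that per-LV flexibility (device and volume group names here are made up — don't paste this into a production box), mirroring under LVM can be turned on and off for a single logical volume:

```shell
pvcreate /dev/sda2 /dev/sdb2      # two physical volumes, sizes need not match
vgcreate vg0 /dev/sda2 /dev/sdb2
lvcreate -n data -L 50G vg0       # this LV starts life unmirrored
lvconvert -m1 vg0/data            # add a mirror leg to just this LV
lvconvert -m0 vg0/data            # ...or drop the mirror again, online
lvextend -L +10G vg0/data         # grow it while mounted
```

The other LVs in vg0 are unaffected throughout, which is exactly the independence a whole-device RAID mirror cannot give you.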

So, Patrick, fellow readers, dear lazyweb, why would you prefer LVM-based mirroring to a RAID alternative? Or the other way around?


Ruby dissonance with Debian, again

Submitted by gwolf on Wed, 09/29/2010 - 11:48

Lucas has written two long, insightful posts on the frustration about the social aspects of integrating Ruby stuff into Debian – The first one, on September 12, and the second one, on September 29.

I cannot really blame (thoroughly) the Ruby guys for their position. After all, they have a vibrant community, and they are advancing great pieces of work. And they know who that code is meant for — Fellow programmers. And yes, although it is a pain to follow their API changes (and several of the Gems I regularly use often get refactorings and functionality enhancements which break compatibility but introduce very nice new features), they say that's solved by one of Gems' main features: the simultaneous installability of different versions.

The key difference between Debian's worldview and Ruby's is precisely that: they cater to fellow programmers. Even leaving aside heaps of differing positions and mindsets, we have a fundamental difference: Debian cares about its users, whatever that means. So, our users should not even care what language a given application is implemented in – They should only care that it works. We, as packagers, should take care of all the infrastructural stuff.

And yes, that's where we find the conflicting spot: We don't want to ship many versions of a system library (which in this case would be a Gem). Especially if later versions fix known bugs in earlier versions and backports are not available or supported. Especially if upstream authors' only response to a bug in an older release will be "upgrade and rewrite whatever breaks in your application".

As an example of this, I am not currently updating the gems I maintain, as Debian is in a freeze to get the next stable release out. Or, if at all, I am targeting those uploads at our Experimental branch, in order not to create a huge backlog for me when the freeze is over (just a series of rebuilds targeted at unstable). And yes, I will have to be responsible for any bugs that will most likely not be supported by most of my upstreams during the next ~2 years.

That's the role of a Linux distribution. And yes, as Lucas writes in the comments he got as responses to the first post – This dissonance comes in no small part from the fact that the Ruby developer community is mostly made up of non-Linuxers. People coming from a background where (mostly proprietary) applications bundle up everything they need, where static linking is more popular than dynamic libraries, and where there is no coordination between parts of the system are much less likely to understand our work.

And yes, the Perl community is a joy to work with in this regard. And the same, I understand, goes for the Python one. Because of their origins, and where their main strength was grown and remains.

PS - And yes, I will join the flock of people saying that... The specific person that attacked your work is a great programmer, but well known as intolerant and obnoxious. Fortunately, even if our respective cultures fail to mix, most of our interactions just end with a "sigh" of mutual lack of understanding, and not with the flames you got targeted with :-/


Damage control: Cleaning up compromised SSH keys

Submitted by gwolf on Wed, 09/22/2010 - 13:36

This morning, my laptop was stolen from my parked car while I was jogging. I do not want to make a big deal out of it.

Still, even though I am sure it was not targeted at my data (at least three other people reported similar thefts in the same area), and the laptop's disk will probably just be reformatted, I am trying to limit the possible impact of my cryptographic identification being in somebody else's hands.

GPG makes it easy: I had on that machine just my old 1024D key, so it is just a matter of generating a revocation certificate. I have done that, and uploaded it to the SKS keyservers - Anyway, here is my revocation certificate:

Version: GnuPG v1.4.10 (GNU/Linux)
Comment: A revocation certificate should follow


But… What worries me more is access to the computers my ssh key works on. Yes, the ssh key uses a nontrivial passphrase, but still — SSH keys cannot be revoked (and this makes sense, as SSH should not add the delay, or potential impossibility, of checking with a remote infrastructure whenever you want to start a session).

So, I generated a new key (and stored it at ~/.ssh/ / ~/.ssh/) and came up with this snippet:

  $ OLDKEY=xyHywJuHD3nsfLh03G1TqUEBKSj6NlzMfB1T759haoAQ
  $ for host in $(cut -f 1 -d ' ' .ssh/known_hosts | cut -f 1 -d , |
        sort | uniq); do
      echo == $host
      ssh-copy-id -i .ssh/ $host &&
        ssh $host "perl -n -i -e 'next if /$OLDKEY/;print' .ssh/authorized_keys"
    done

Points you might scratch your head about:

  • .ssh/known_hosts' lines start with the server's name (or names, if more than one, comma-separated), followed by the key algorithm and the key fingerprint (space-separated). That's the reason for the double cut – It could probably be better using a regex-enabled thingy understanding /[, ]/, but... I didn't think of that. Besides, the savings would be just for academic purposes ;-)
  • I thought about not having the ssh line conditionally depend on ssh-copy-id. But OTOH, this makes sure I only try to remove the old key from the servers it is present on, and that I don't start sending my new key everywhere just for the sake of it.
  • my $OLDKEY (declared in Shell, and only literally interpolated in the Perl one-liner below) contains the final bits of my old key. It is long enough for me not to think I'm risking collision with any other key. Why did I choose that particular length? Oh, it was a mouse motion.
  • perl -n -i -e is one of my favorite ways to invoke perl. -i means in-place editing; it allows me to modify a file on the fly. -n tells it to loop all the lines over the provided program (and, very similarly, -p would add a print at the end – which on this particular occasion I prefer not to have). This line just skips (removes) any lines containing $OLDKEY. It is a sed lookalike, if you wish, but with a full Perl behind it.
  • This assumes you have set HashKnownHosts no in your .ssh/config. It is a tradeoff, after all – I use tab-expansion a lot (via bash_completion) for hostnames, so I do keep the fully parseable list of hosts I have used on each of my computers.
  • I have always requested my account names to be gwolf. If you use more than one username... well, you will probably have to do more than one run, connecting to foo@$host instead.
  • Although most multiuser servers stick to the usual port 22, many people change the ports (me included), either because they perceive concealing them gives extra security, or (as in my case) because they are fed up with random connection attempts. Those hosts are stored as [hostname]:port (i.e. []:22000). Of course, a little refinement takes care of it all.
  • Oh, I am not storing results... I should, so that on successive runs I won't try to connect to a system I already reached, or that already denied me access. Why would I want that? Because some computers are not currently turned on. So I'll run this script at least a couple of times.

Oh, by the way: If you noticed me knocking on your SSH ports... please disregard. Possibly at some point I connected to that machine to do something, or it landed in my .ssh/known_hosts for some reason. I currently have 144 hosts registered. I am sure I triggered at least one raised eyebrow.

And I will do it from a couple of different computers, to make it less probable that I miss some I have never connected from while at the particular computer I am sitting at right now.

So... Any ideas on how to make this better?


OpenSSH 5.4 and netcat mode

Submitted by gwolf on Mon, 03/08/2010 - 12:32

The release of OpenSSH 5.4 was announced today. Its announced features include many small improvements, in usability and in crypto strength.

One of my favorite tricks using ssh is what Ganneff named ssh jumphosts – Many (most?) of my machines are not directly accessible from across the firewall, so the ability to specify in the configuration files where to jump through is most welcome. Well, with this "netcat mode" it will be much clearer to read and less of a hack… Of course, it loses a bit of the hackish æsthetic value, but becomes easier!
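For reference, a jumphost stanza using the new netcat mode (ssh -W) looks something like this (host names are of course made up):

```shell
# ~/.ssh/config
Host inner.example.org
    # Old hack: ProxyCommand ssh bastion.example.org nc %h %p
    # With OpenSSH >= 5.4 the bastion no longer needs netcat installed:
    ProxyCommand ssh -W %h:%p bastion.example.org
```

A plain `ssh inner.example.org` then transparently hops through the bastion.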

(yes, this post is basically a marker so I remember about it — But others might find it interesting)

Captchas are for humans...

Submitted by gwolf on Thu, 01/28/2010 - 08:35

Nobody cares about me, I thought. Whatever I say is just like throwing a bottle to the infinite ocean.

No comments, no hopes of getting any, for several days. Weeks maybe? Not even the spammers cared about me.

Until I read this mail, by Thijs Kinkhorst commenting to my yesterday post:

(BTW, I was unable to comment on your blog - couldn't even read one letter of the CAPTCHA...)

And, yes, Drupal module «captcha» introduced in its 2.1 release (January 2) feature #571344: Mix multiple fonts.

Only... no fonts were selected. Grah.


Packaging PKP OJS (Open Journals System)

Submitted by gwolf on Wed, 01/27/2010 - 15:23

New guidelines for periodic publications' websites at my University favor having the different journals use a standardized system — And it makes quite a bit of sense. It is quite hard to explain to the people I work with that the content is not only meant to be consumed by humans, but also by other systems; the reasons behind rich content tagging and deep hierarchies for what they would just see as a list of words (think of the list of authors for an article, the list of keywords, and so on). After all, aggregator databases such as Latindex and SciELO have managed to get this understanding through.

And I must be quite grateful, as the University's guidelines point to what appears to be a very well-thought and thorough system, the Open Journal Systems by the Public Knowledge Project, co-funded by several well-regarded universities. OJS is a GPL-2-covered PHP bundle.

Anyway… I am very glad at least one of my Institute's journals accepted the challenge and decided to go with OJS. I know I will quite probably be administering this system long-term. And, being as snobbish as I am, I know I loathe anything installed on my machines that is not either developed by myself or shipped in a Debian package. So, as it was not packaged, I made the package ☺

Note that I am still not filing an ITP (meaning I have not yet decided whether I will upload this to Debian) because I want first to make sure I have the needed long-term commitment — Besides, I am by far not a PHP person, and being responsible for a package… Carries a nontrivial weight. Still, you might be interested in getting it. If you are, you can either download the .deb package or add it to your apt repositories (and stay updated with any new releases) by adding this to your /etc/apt/sources.list:

deb lenny misc
deb-src lenny misc

Note: My packaging still has a small bug: the installer fails to create the PostgreSQL database. The MySQL database works fine. I will look into it soon.

So far, I am quite impressed with this program's functionality and the depth/quality of its (online) documentation. Besides, its usage statistics speak for themselves:

So, it is quite possible I will be uploading this into Debian in a couple of weeks (hopefully in time to be considered for Squeeze). The reasons I am making it available in my personal repository now are:

  • I want to make it available to other Debian and Ubuntu users at my University, as I am sure several people will be installing it soon. And after apt-getting it, it is ready to be used right away.
  • As I said, I am no PHP guy. So if you want to criticize my packaging (and even my minor patch, fixing a silly detail that comes from upstream's bundling of several PHP and Javascript libraries, and those libraries' authors not sticking to a published API in a well-distributed version), please go ahead!

Among the reasons that brought me to Debian...

Submitted by gwolf on Mon, 10/19/2009 - 23:42

Every now and then, people ask me: why Debian? Why, among so many projects to choose from, did I first like, then get into, and finally commit myself to Debian, and not anything else?

Of course, one of the main points —back in 2000-2001 when I started using it, and still to this very day— is a strong identification with the ideological side. Yes, I am a strong Free Software believer, and Debian is what best suits my ideology.

Still, I did not get into Debian only because of this — And I was reminded of it by an article in this month's Usenix ;login: magazine: an anecdotal piece by Thomas A. Limoncelli titled Hey! I have to install and maintain this crap too, ya know! (the article requires a ;login: subscription, but I'll be glad to share it with whoever requests it from me — I of course have no permission to openly put it here online in whole. Yes, I am expressly sending a copy of this text to the author, and I will update this if/when I hear from him) [update] The author has kindly allowed me to redistribute his article's PDF — Download it here.

Before anything else… I'll go on a short digression: I am writing a bit regarding the Free Software participants' culture, and this is a trait I love about it: The lack of formality. Even though ;login: (and Usenix as a whole) is not exactly Free Software, it runs quite close to it; it is a well-regarded magazine (and association) with an academic format and good (not deep or highly theoretical, but good) contents. Still, it is quite usual to see titles as informal and inviting as this one. And it happens not only here — I have been fearing having to explain at work, over and over, why I keep requesting permission to go to Yet Another Perl Conference, Festival de Software Libre or DebCamp, tagging them as academic settings. Or why I am wasting our library's resources on buying cookbooks, recipes and similar material on the most strange-sounding subjects.

Anyway, back on track… This article I found refers to the lack of value given to the system administrator's time when selling or purchasing (or more in general, as it happens also in Free Software, when offering or adopting) a product. Quoting Thomas:

A person purchasing a product is focused on the features and benefits and the salesperson is focused on closing the deal. If the topic of installation does come up, a user thinks, “Who cares! My sysadmin will install it for me!” as if such services are free. Ironically, it is the same non-technical executive who dismisses installation and upkeep as if they are “free” who might complain that IT costs are too high and go on a quest to kill IT spending. But I digress.

I can understand why a product might be difficult to install. It is hard enough to write software, and with the shortage of software developers it seems perfectly reasonable that the installation script becomes an afterthought, possibly given to a low-ranking developer. The person purchasing the product usually requires certain features, and ease of installation is not a consideration during the procurement process. However, my ability to install a product affects my willingness to purchase more of the product.

Thomas goes on to explain his experience with Silicon Graphics, how Irix was so great regarding install automation and how they blew it when switching to Windows NT; talks very briefly about IBM AIX's smit, a very nifty sysadmin aid which is basically a point-and-click interface to system administration with the very nice extra that allows you to view the commands smit executes to perform a given action (and then you can copy into a script and send over to your hundreds of AIX machines)… Incidentally, by the time I started digging out of what became the RedHat mess of the late 1990s and passed briefly through OpenBSD on my way to Debian enlightenment, I was temporarily the sysadmin for an AIX machine — And I too loved this Smit approach, having it as the ultimate pedagogical tool you could ever find.

Anyway, I won't comment on and paraphrase the full article. I'll just point out that… this was what ultimately sold me on Debian. The fact that I could just install anything and (by far) most of the time it would be configured and ready to use. Debian made my life so much easier! As a sysadmin, I didn't have to download, browse documentation, scratch my head, redo from start until I got a package working — Just apt-get into it, and I'd be set. Of course, one of the bits I learnt back then was that Debian was for lazy people — Everything works in a certain way. Policy is enforced throughout.

So as a sysadmin, I had better get well acquainted with the Debian Policy and know it by heart. In order to be able to enjoy my laziness, I should read it and study it. And so I did, and fell in love. And that is where my journey into becoming a Debian Developer started.

Why am I sounding so nostalgic here? Because I got this magazine in the mail just last weekend… And coincidentally, I also got bug report #551258 — I packaged and uploaded the Haml Ruby library (a Gem, as the Rubyists would call it). Haml is a great, succinct markup language which makes HTML generation less of a mess. It is even fun and amazing to write Haml, and the result is always nicely formatted, valid HTML! And well, one of Haml's components is haml-elisp, the Emacs Lisp major mode to do proper syntax highlighting in Haml files.

Of course, I am an Emacs guy (and have been for over 25 years), so I had to package it. But I don't do Emacs Lisp! So I just stuffed the file in its (supposed) place, copying some stuff over from other Emacs packages. During DebConf, I got the very valuable help of Axel Beckert to fix a simple bug which prevented my package from properly being installed, and thought I was basically done with it. I was happy just to add this to my ~/.emacs and get over with it:

  (require 'haml-mode)
  (add-to-list 'auto-mode-alist '("\\.haml$" . haml-mode))
  (require 'sass-mode)
  (add-to-list 'auto-mode-alist '("\\.sass$" . sass-mode))

However… As Mike Castleman points out: This requires manual intervention. So it is not the Debian Way!

Reading Mike's bug report, and reading Thomas' article, made me realize I was diluting something I held so dearly as to commit myself to the best Free Software-based distribution out there. And the solution, of course, was very simple: Debian allows us to be very lazy, not only as sysadmins, but as Debian packagers. Just drop this (simplified version) in $pkgroot/debian/haml-elisp.emacsen.startup and you are set!

  (let ((package-dir (concat "/usr/share/"
                             (symbol-name flavor)
                             "/site-lisp/haml-elisp")))
    ;; If package-dir does not exist, the haml-mode package must have been
    ;; removed but not purged, and we should skip the setup.
    (when (file-directory-p package-dir)
      ;; Use debian-pkg-add-load-path-item per §9 of the Debian Emacs subpolicy
      (debian-pkg-add-load-path-item package-dir)
      (autoload 'haml-mode "haml-mode"
        "Major mode for editing haml-mode files." t)
      (add-to-list 'auto-mode-alist '("\\.haml\\'" . haml-mode))
      ;; The same package provides HAML and SASS modes in the same
      ;; directory - So repeat only the last two instructions for sass
      (autoload 'sass-mode "sass-mode"
        "Major mode for editing sass-mode files." t)
      (add-to-list 'auto-mode-alist '("\\.sass\\'" . sass-mode))))

This will make the package just work as soon as it is installed, with no manual intervention required from the user. And it does not, contrary to what I feared, bloat up Emacs — Adding it to the auto-mode-alist leaves it known to Emacs, but it is not loaded or compiled unless it is required.

Deepest thanks to both of you! (and of course, thanks also to Manoj, for pointing out at the right spells in emacs-land)


Strange scanning on my server?

Submitted by gwolf on Thu, 10/01/2009 - 18:04

Humm... Has anybody else seen a pattern like this?

I am getting a flurry of root login attempts at my main server at the University since yesterday 7:30AM (GMT-5). Now, of the machines I run in the network (UNAM), only two listen to the world with ssh on port 22 — And yes, it is a very large network, but I am only getting this pattern on one of them (they are on different subnets, quite far apart). The attackers are all attempting to log in as root, with a frequency that varies wildly, but is consistently over three times a minute right now. This is a sample of what I get in my logs:

[update] Logs omitted from blog post, as it is too wide and breaks displays for most users. You can download the log file instead.

Anyway… This comes from all over the world, and all the attempts are made as root (no attempts from unprivileged users). Of course, I have PermitRootLogin set to no in /etc/ssh/sshd_config, but… I want to understand this as much as possible.

Initially it struck me that most of the attempts appeared to come from Europe (quite atypical for the usual botnet distribution), so I passed my logs through:

  #!/usr/bin/perl
  use Geo::IP;
  use IO::File;
  use strict;
  my ($geoip, $fh, %by_ip, %by_ctry);

  $fh = IO::File->new('/tmp/sshd_log');
  $geoip = Geo::IP->new(GEOIP_STANDARD);
  while (my $lin = <$fh>) { next unless $lin =~ /rhost=(\S+)/; $by_ip{$1}++ }

  print " Incidence by IP:\n", "Num Ctry IP\n", ('=' x 60), "\n";

  for my $ip (sort { $by_ip{$a} <=> $by_ip{$b} } keys %by_ip) {
      my $ctry = ($ip =~ /^[\d\.]+$/) ?
          $geoip->country_code_by_addr($ip) :
          $geoip->country_code_by_name($ip);

      $by_ctry{$ctry}++;
      printf "%3d %3s %s\n", $by_ip{$ip}, $ctry, $ip;
  }

  print " Incidence by country:\n", "Num Country\n", "============\n";
  map { printf "%3d %s\n", $by_ctry{$_}, $_ }
      sort { $by_ctry{$b} <=> $by_ctry{$a} }
      keys %by_ctry;

The top countries (where the number of attempts ≥ 5) are:

  104 CN
   78 US
   58 BR
   49 DE
   43 PL
   20 ES
   20 IN
   19 RU
   17 CO
   17 UA
   16 IT
   13 AR
   12 ZA
   10 CA
   10 CH
    8 GB
    8 AT
    8 JP
    8 FR
    7 KR
    7 HK
    7 PE
    7 ID
    6 PT
    5 CZ
    5 AU
    5 BE
    5 SE
    5 RO
    5 MX

I am attaching to this post the relevant log (filtering out all the information I could regarding legitimate users) as well as the full output. In case somebody has seen this kind of wormish botnetish behaviour lately… please comment.

[Update] I have tried getting some data regarding the attacking machines, running a simple nmap -O -vv against a random sample (five machines — I hope I am not being too aggressive in anybody's eyes). They all seem to be running some flavor of Linux (according to the OS fingerprinting), but the list of open ports varies wildly — I have seen the following:

  Not shown: 979 closed ports
  21/tcp open ftp
  22/tcp open ssh
  23/tcp open telnet
  111/tcp open rpcbind
  135/tcp filtered msrpc
  139/tcp filtered netbios-ssn
  445/tcp filtered microsoft-ds
  593/tcp filtered http-rpc-epmap
  992/tcp open telnets
  1025/tcp filtered NFS-or-IIS
  1080/tcp filtered socks
  1433/tcp filtered ms-sql-s
  1434/tcp filtered ms-sql-m
  2049/tcp open nfs
  4242/tcp filtered unknown
  4444/tcp filtered krb524
  6346/tcp filtered gnutella
  6881/tcp filtered bittorrent-tracker
  8888/tcp filtered sun-answerbook
  10000/tcp open snet-sensor-mgmt
  45100/tcp filtered unknown
  Device type: general purpose|WAP|PBX
  Running (JUST GUESSING) : Linux 2.6.X|2.4.X (96%), (…)

  Not shown: 993 filtered ports
  22/tcp open ssh
  25/tcp open smtp
  80/tcp open http
  443/tcp open https
  444/tcp open snpp
  3389/tcp open ms-term-serv
  4125/tcp closed rww
  Device type: general purpose|phone|WAP|router
  Running (JUST GUESSING) : Linux 2.6.X (91%), (…)

  Not shown: 994 filtered ports
  22/tcp open ssh
  25/tcp closed smtp
  53/tcp closed domain
  80/tcp open http
  113/tcp closed auth
  443/tcp closed https
  Device type: general purpose
  Running (JUST GUESSING) : Linux 2.6.X (90%)
  OS fingerprint not ideal because: Didn't receive UDP response. Please try again with -sSU
  Aggressive OS guesses: Linux 2.6.15 - 2.6.26 (90%), Linux 2.6.23 (89%), (…)

  Not shown: 982 closed ports
  21/tcp open ftp
  22/tcp open ssh
  37/tcp open time
  80/tcp open http
  113/tcp open auth
  135/tcp filtered msrpc
  139/tcp filtered netbios-ssn
  445/tcp filtered microsoft-ds
  1025/tcp filtered NFS-or-IIS
  1080/tcp filtered socks
  1433/tcp filtered ms-sql-s
  1434/tcp filtered ms-sql-m
  4242/tcp filtered unknown
  4444/tcp filtered krb524
  6346/tcp filtered gnutella
  6881/tcp filtered bittorrent-tracker
  8888/tcp filtered sun-answerbook
  45100/tcp filtered unknown
  Device type: general purpose|WAP|broadband router
  Running (JUST GUESSING) : Linux 2.6.X|2.4.X (95%), (…)

  Not shown: 994 filtered ports
  22/tcp open ssh
  25/tcp open smtp
  53/tcp open domain
  80/tcp open http
  110/tcp open pop3
  3389/tcp open ms-term-serv
  Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
  Device type: firewall|general purpose
  Running: Linux 2.6.X
  OS details: Smoothwall firewall (Linux, Linux 2.6.13 - 2.6.24, Linux 2.6.16

Of course, it strikes me that several of these machines seem to be running Linux, yet (appear to) offer Microsoft services. Oh, and they also have P2P clients.
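
In case anybody wants to tabulate such scans instead of eyeballing them, the port lines are regular enough to parse. A rough sketch, run over a few lines abbreviated from the listings above:

```python
import re

# A few port lines, abbreviated from the nmap listings above.
nmap_output = """\
21/tcp   open     ftp
22/tcp   open     ssh
135/tcp  filtered msrpc
1433/tcp filtered ms-sql-s
"""

# Matches "PORT/tcp STATE SERVICE" as printed by plain nmap output.
port_re = re.compile(r"^(\d+)/tcp\s+(open|closed|filtered)\s+(\S+)")
by_state = {}
for line in nmap_output.splitlines():
    m = port_re.match(line)
    if m:
        port, state, service = m.groups()
        by_state.setdefault(state, []).append((int(port), service))

for state, ports in by_state.items():
    print(state, ports)
```

Grouping by state makes the odd pattern stand out quickly: the open ports are Unix-ish, while the Microsoft-flavored ones are merely filtered.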


Drupal 6 Tour Centroamerica — Now in Mexico!

Submitted by gwolf on Wed, 09/02/2009 - 11:05

I met my friend Josef Daberning at the Central American Free Software Encounter last May; he did his Austrian Social Service working with Drupal at the Casa de los Tres Mundos NGO in Granada, Nicaragua. He told me that, on his way back to Austria, he would spend some days in Mexico and wanted to give a workshop on Drupal.

The course has just started and will take place today and tomorrow. You can follow the live stream; the videos will be uploaded soon as well, and I will post them on this same node.

This node will be used to make public whatever is needed for people following the talk. As of right now, you can download his presentation.


  • If you are following the stream and want to say something, connect via IRC to OFTC and join the #edusol channel.
  • The videos for the talks will be made available soon, licensed under CC-BY 3.0 (generic)
  • If you want to come, you are more than welcome — there is still space! We are in Ciudad Universitaria, south of Mexico City, at the Instituto de Investigaciones Económicas, UNAM. We are starting the second session (16:00-20:00); tomorrow we have a third and final session (10:00-14:00).
  • Of course, it is all over by now. I will be processing the videos and uploading them to the Open Source Movies archive. The three videos (for the three sessions) are available!

Serverless and mailless

Submitted by gwolf on Mon, 08/31/2009 - 10:56


Yesterday (Sunday, 31/08/09) I was far from any computer-like object for most of the day. When I got back home, of course, I promptly opened my laptop to check my mail — who knows what destiny might hold for me over a 24-hour period? Maybe I won yet another fortune I have to cash in Nigeria? Maybe there is (GASP!) a new RC bug in one of my packages?

But no, my mail server didn't feel like answering my ssh queries. The connection was established, but shut down before even sending the customary SSH-2.0-OpenSSH_5.1p1 protocol string. Fearing an overload (after all, the little bugger is just a Mac Mini running in another room of my house), I tried to check its Munin status via Web — Apache didn't want to listen either; it answered, but I got only an access denied. Things started worrying me… but (silly me) not enough. The machine runs headless, so I just danced the boring "raising elephants" song (the REISUB magic SysRq sequence).

I allowed a couple of minutes for everything to settle, and tried to connect again. Horror: now even pings didn't work!

So I ran to fetch my old, bulky and trusty monitor, went back to the machine, plugged it in, and switched it off and back on. Everything worked fine this time — at least apparently. I opened up mutt and started happily reading mail, while trying to understand, on another console, what had happened at 07:06 that didn't get logged anywhere and kept the machine dead for basically the whole day. And then, BRRRT-BRRRT-BRRRT, I started hearing the HDD seeking.

I was able to send a couple of mails, but decided to let the machine rest and... will reduce its disk usage to an absolute minimum. Fortunately, I already have the machine meant to replace it — a much nicer, beefier iMac G5, waiting to have its data vacated, a task which has suddenly become a priority.

So, in short: If you need to get in touch with me in the next day or two, don't count on my usual mail, as it is down. I hope to be able to get the data out of the poor little bugger painlessly after it rests a bit. And I hope not to drown in a sea of mails after I get the replacement back online :-/


The great firehole of Nicaragua

Submitted by gwolf on Mon, 06/15/2009 - 22:37


I have spent a couple of hours connected from Norman García's house, in Managua. Norman is most kindly hosting me at home for a couple of days before we leave (tomorrow) for Estelí, where the Central American Free Software Encounter will be held.

Now, the network feels really slow. However, it can sustain download rates of around 512Kbps, quite acceptable. Latency is what kills. But... I was stunned by mtr's results to my home server:

  Host               Loss%  Snt  Last   Avg  Best  Wrst StDev
   1. speedtouch.lan   0.0%  16 101.1  51.0   3.2 101.1  32.2
   2.                 12.5%  16 210.6 156.3  21.4 376.4  91.7
   3.                 86.7%  16  23.0  22.8  22.6  23.0   0.2
   4.                 66.7%  16 219.7 166.1  30.9 283.0  94.4
   5.                 86.7%  16 136.7 106.7  76.7 136.7  42.5
   6.                 66.7%  16 125.2 130.2  42.2 233.5  72.0
   7.                 66.7%  16 249.9 130.3  55.2 249.9  75.3
   8.                 92.9%  15  41.3  41.3  41.3  41.3   0.0
   9.                 66.7%  15  85.6 138.8  85.6 169.7  33.4
  10.                 64.3%  15 141.1 154.9  82.4 264.0  66.8
  11.                 78.6%  15  86.4 150.3  74.1 290.5 121.5
  12.                 86.7%  15 112.6  78.0  43.4 112.6  49.0
  13.                 57.1%  15 225.1 152.6  52.5 246.1  77.7
  14.                 53.8%  14  69.5 151.4  69.5 235.6  52.8
  15.                 84.6%  14 236.0 141.1  46.1 236.0 134.3
  16.                 71.4%  14 116.3 161.2  39.4 258.0 101.9
  17.                 84.6%  14  67.9  52.5  37.1  67.9  21.8
  18.                 76.9%  14 258.2 172.0 112.6 258.2  76.4
  19.                 61.5%  14 159.8 117.3  42.3 159.8  51.8
  20.                 78.6%  14 128.1 192.4 128.1 240.4  57.9
  21.                 92.3%  14  63.8  63.8  63.8  63.8   0.0
  22.                 58.3%  13 249.3 196.3 136.3 249.3  46.2
  23.                 91.7%  13 260.5 260.5 260.5 260.5   0.0
  24.                 69.2%  13 231.1 208.7  82.5 260.7  85.3
  25. ???
  26.                 63.6%  12 158.0 172.0 111.3 261.2  63.4
  27.                 90.9%  12 161.0 161.0 161.0 161.0   0.0
  28. ???

Please, somebody explain the basics of routing to Claro/Enitel. This just does not make any sense.
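
To put a rough number on it, each hop's loss can be weighted by the probes it was sent. A sketch, using two rows copied from the table above (the second hop's hostname, hidden in the output, gets a hypothetical name):

```python
# (host, loss %, probes sent) for two hops, copied from the mtr table above.
hops = [
    ("speedtouch.lan", 0.0, 16),
    ("hop-2",         12.5, 16),  # hypothetical name; loss/probes are real
]

lost = sum(snt * loss / 100.0 for _, loss, snt in hops)
sent = sum(snt for _, _, snt in hops)
print("probes lost: %g of %d (%.2f%%)" % (lost, sent, 100.0 * lost / sent))
```

One caveat worth keeping in mind: very high loss at intermediate hops in mtr often just means routers rate-limiting their own ICMP replies; loss that is real tends to persist through every subsequent hop.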

nisamox back online

Submitted by gwolf on Tue, 05/19/2009 - 14:39

You might have noticed that during the last week the Mexican Debian mirror (nisamox) went offline. The motherboard died on us, and Facultad de Ciencias was kind enough to give us a brand new one. So, excuse us for the blackout, but we are back – meaner and badder than ever before!

Now, Sergio (nisamox's main admin) preferred to rebuild the whole mirror, as there was a shadow of doubt regarding the data integrity. So, rsync was pulling as fast as it could for the whole weekend (leading to some people scratching their heads over the 404s for the missing files; sorry, we should have left Apache shut down until the mirror was complete!). After three days of sustaining a 10-20Mbps download from the main mirrors, all 364GB of Debian are finally installed and –as you can clearly see– we are back to normality, with small, regular mirror pulses and a nice sustained 5-10Mbps (with some peaks of up to 40Mbps — we have seen up to 100Mbps peaks in the past, and I doubt that with the current network infrastructure we will top that).
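
As a back-of-envelope sanity check, those three days are consistent with the quoted rates:

```python
size_bits = 364e9 * 8                   # 364 GB mirror, in bits
for mbps in (10, 15, 20):               # the sustained rates we saw
    days = size_bits / (mbps * 1e6) / 86400.0
    print("at %2d Mbps: %.1f days" % (mbps, days))
```

At a sustained 10Mbps the 364GB come out to roughly 3.4 days, i.e. the weekend-long pull we observed.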

You can see we still have plenty of disk space to fill up. Among our plans is to host the most popular ISOs, which are a common request, and... what else? Well, ask us and we shall do so (quite probably).

Ethernet usage over the last week

Disk space usage over the last week

So, if you switched away from this mirror due to our downtime, readjust your mirror settings. Nisamox is back!
