Dasein Cloud at OSCON

While working on Dell’s acquisition of Enstratius, one of the highlights for me was the work George Reese and his team have done on Dasein Cloud, an open source (Apache-licensed) cloud abstraction layer.  I’m pleased that Enstratius joined Dell, and that the work on building Dasein, and making it available for other uses, has only accelerated.

Please see George’s blog post on his views of Dasein’s progress in just the last few months, and if you’re at OSCON, stop by the Dell booth or the Dasein session and talk to George.

The Open Source Soul of Dell Multi-Cloud Manager

Why MirrorManager Matters

MirrorManager’s primary aim is to make sure end users get directed to the “best” mirror for them.  “Best” is defined in terms of network scopes, based on the concept that a mirror that is network-wise “close” to you will provide a better download experience than one that is “far” from you.

In a pure DNS-based round-robin mirror system, you would expect requests to be spread across all mirrors globally, with no preference for where you are on the network.  In a country-based DNS round-robin system, where the user has specified what country they are in, or it was automatically determined, you’d expect most hits to land in countries where you know you have mirrors.

MirrorManager’s scopes cover clients and mirrors on the same network blocks, then on the same Autonomous System Number, then jointly on Internet2 or its related regional high-speed research and education networks in the same country, then falling back to GeoIP to find mirrors in the same country, and finally the same continent.  Only in the rarest of cases does the GeoIP lookup fail, leaving us no idea where you are, and you get sent to some random mirror somewhere.
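
To see the effect of these scopes yourself, you can query the mirrorlist endpoint the same way yum does – the repo and arch values below are only examples, so adjust them for your release and architecture:

#!/bin/bash
# Ask MirrorManager for a mirror list; the URLs come back ordered from the
# most specific matching scope down to the least specific.
curl -s "https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-19&arch=x86_64" | head -n 20

The first entries returned are the mirrors MirrorManager considers closest to you under the scopes above.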

But how well does this work in practice?  MM 1.4 added logging, so we can generate statistics on how often each scope gets a hit.  Raw statistics:

Scope                  Percentage   Cumulative Percentage
netblocks                  16.10%                  16.10%
Autonomous System           5.61%                  21.71%
Internet2                   8.95%                  30.66%
GeoIP country              57.50%                  88.16%
GeoIP continent            10.34%                  98.51%
Global (any mirror)         1.38%                  99.88%

[Chart: per-scope hit percentages (mirrormanager-scope)]

In the case of MirrorManager, we take it three steps further than pure DNS round robin or GeoIP lookups.  By using Internet2 routing tables, ASN routing tables, and letting mirror admins specify their peer ASNs and their own netblocks, we keep client traffic completely local to the organization or upstream ISP for nearly 22% of all requests; adding in the Internet2 lookups, a whopping 30% of client traffic never hits the commodity Internet at all.  In 88% of all cases, you’re sent to a mirror within your own country – never having to deal with congested inter-country links.

MirrorManager 1.4 now in production in Fedora Infrastructure

After nearly 3 years in on-again/off-again development, MirrorManager 1.4 is now live in the Fedora Infrastructure, happily serving mirrorlists to yum, and directing Fedora users to their favorite ISOs – just in time for the Fedora 19 freeze.

Kudos go out to Kevin Fenzi, Seth Vidal, Stephen Smoogen, Toshio Kuratomi, Pierre-Yves Chibon, Patrick Uiterwijk, Adrian Reber, and Johan Cwiklinski for their assistance in making this happen.  Special thanks to Seth for moving the mirrorlist-serving processes to their own servers where they can’t harm other FI applications, and to Smooge, Kevin and Patrick, who gave up a lot of their Father’s Day weekend (both days and nights) to help find and fix latent bugs uncovered in production.

What does this bring the average Fedora user?  Not a lot…  More stability: fewer failures when yum retrieves the mirror lists (not that there were many, but the number was nonzero), and a list of public mirrors where the versions are sorted in numerical order.

What does this bring to a Fedora mirror administrator?  A few new tricks:

  • Mirror admins have been able to specify their own Autonomous System Number for several years.  Clients on the same AS get directed to that mirror.  MM 1.4 adds the ability for mirror admins to request additional “peer ASNs” – particularly helpful for mirrors located at a peering point (say, Hawaii), where listing lots of netblocks instead is unwieldy.  As this has the potential to be slightly dangerous (no, you can’t request ALL ASNs be sent your way), ask a Fedora sysadmin if you want to use this new feature – we can help you.
  • Multiple mirrors claiming the same netblock, or overlapping netblocks, used to be returned to clients in random order.  Now they are returned in ascending netblock size order.  This lets an organization run its own private mirror while its upstream ISP also runs one: most requests go to the private mirror first, falling back to the ISP’s mirror, which should save the organization some bandwidth.
  • If you provide rsync URLs, you’ll see reduced load from the MM crawler, as it will now use rsync to retrieve your content listing rather than a ton of HTTP or FTP requests (a rough example of the difference is sketched just below this list).
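
For the curious, a single recursive rsync listing replaces what used to be one HTTP or FTP request per directory – a minimal sketch, with a placeholder mirror URL:

#!/bin/bash
# One rsync connection pulls the full recursive file listing for the crawler.
# rsync://mirror.example.org/fedora/ is a placeholder; substitute your mirror's
# real rsync module path.
rsync -r --no-motd --list-only rsync://mirror.example.org/fedora/ > content-listing.txt
wc -l content-listing.txt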

What does this bring Fedora Infrastructure (or anyone else running MirrorManager)?

  • Reduced memory usage in the mirrorlist servers.  Given how poorly Python manages memory on x86_64 (e.g. reading in a 12MB pickle file blows the process up from 4MB to 120MB), this is critical.  It directly impacts the number of simultaneous users that can be served, the response latency, and the CPU overhead too – it’s a win-win-win-win.
  • An improved admin interface – replacing hand-coded pages that looked like they could have been served by BBS software on my Commodore 64 with something modern, more usable, and less error prone.
  • Code specifically intended for use by Debian/Ubuntu and CentOS communities, should they decide to use MM in the future.
  • A new method to upgrade database schemas – saner than SQLObject’s method.  This should make me less scared to make schema changes in the future to support new features.  (yes, we’re still using SQLObject – if it’s not completely broken, don’t fix it…)
  • Map generation moved to a separate subpackage, to avoid pulling the 165MB python-basemap and python-basemap-data packages onto all servers.

MM 1.4 is a good step forward, and hopefully I’ve laid the groundwork to make it easier to improve in the future.  I’m excited that more of the Fedora Infrastructure team has learned (the hard way) the internals of MM, so I’ll have additional help going forward too.

Fedora Project Board Town Hall Thursday 1900 UTC

I have the pleasure of moderating the Fedora Project Board Town Hall today, 1900 UTC, having served on the board for five years previously.  Held on IRC, these Town Halls give project members a chance to ask questions directly of the five Board candidates, so that you can make a more informed decision when casting your vote.  I hope you can join us.

MirrorManager at FUDCon Lawrence

Two weeks ago I once again had the opportunity to attend the Fedora User and Developer Conference, this time in Lawrence, KS.  My primary purpose in going was to work with the Fedora Infrastructure team, develop a plan for MirrorManager maintenance going forward, and learn about some of the faster-paced projects that Fedora is driving.

MirrorManager began as a labor of love immediately after the Fedora 6 launch, when our collection of mirrors was both significantly smaller and less well wrangled, leading to unacceptable download times for the release, and impacts to the Fedora and Red Hat networks and our few functional mirrors that we swore never to suffer or inflict again.  The Fedora 18 launch, 6 years later, was downloaded just as heavily, but with nearly 300 public mirrors and hundreds of private mirrors the release was nary a blip on the bandwidth charts – “many mirrors make for light traffic”.  To that end, MirrorManager continues to do its job well.

However, over the past 2 years, with changes in my job and outside responsibilities, I haven’t had as much time to devote to MirrorManager maintenance as I would have liked.  The MirrorManager 1.4 (future) branch has languished, with an occasional late-night prod but no significant effort.  This has prevented MirrorManager from being more widely adopted by other, non-Fedora distributions.  The list of hotfixes sitting in Fedora Infrastructure’s tree was becoming untenable.  And I hadn’t really taken advantage of numerous offers of help from potential new maintainers.

FUDCon gave me the opportunity to sit down with the Infrastructure team, including Kevin, Seth, Toshio, Pierre, Stephen, Ricky, Ian and now Ralph, to think through our goals for this year, specifically with MM.  Here’s what we came up with.

  1. I need to get MM 1.4 “finished” and into production.  This falls squarely on my shoulders, so I spent time both at FUDCon and since moving in that direction.  The backlog of hotfixes needed to get into the 1.4 branch.  The schema upgrade from 1.3 to 1.4 needed testing on a production database (PostgreSQL), not just my local database (MySQL) – that revealed additional work to be done.  Thanks to Toshio for getting me going on the staging environment again.  Now it’s just down to bug fixes.
  2. I need not to be the single point of knowledge about how the system works.  To that end, I talked through the MM architecture, which components did what, and how they interacted.  Hopefully the whole FI team has a better understanding of how it all fits together.
  3. I need to be more accepting of offers of assistance.  Stephen, Toshio, and Pierre have all offered, and I’m saying “yes”.  Stephen and I sat down, figured out a capability he wanted to see (better logging for mirrorlist requests to more easily root cause failure reports), he wrote the patch, and I accepted it.  +1 to the AUTHORS list.
  4. Ralph has been hard at work on fedmsg, the Fedora Infrastructure Message Bus.  This is starting to be really cool, and I hope to see it used to replace a lot of the cronjob-based backend work, and the cronjob-based rsyncs that all our mirrors do (a sketch of that status quo follows this list).  One step closer to a “push mirror” system.  Wouldn’t it be cool if Tier 2 mirrors listened on the message bus for their Tier 1 mirror to report “I have new content in this directory tree, now is a good time to come get it!”, and started their syncs, rather than the “we sync 2-6 times a day whenever we feel like it” approach mirrors use today?  I think so.
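
For reference, the status quo being replaced looks roughly like this – a minimal sketch with a placeholder upstream host and local path, not any particular mirror’s actual configuration:

#!/bin/bash
# Typical Tier 2 mirror sync today: fired from cron a few times a day
# (e.g. crontab entry: 0 */6 * * * /usr/local/bin/sync-fedora.sh),
# whether or not anything upstream has actually changed.
rsync -aH --delete-after --delay-updates \
    rsync://tier1.example.org/fedora-enchilada/ /srv/mirror/fedora/

A push-based system would run the same rsync, but only when the bus announces new content in a tree the mirror actually carries.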

Now, to get off (or really, on) the couch and make it happen!

A few other cool things I saw at FUDCon I wanted to share (snagged mostly from my twitter stream):

  1. OpenLMI = Open Linux Management Infrastructure software to manage systems based on DMTF standards. http://del.ly/6019VxOl
  2. Mark Langsdorf from @calxeda is demonstrating the ECX1000 #armserver SoC based build hardware going in PHX at #fudcon pic.twitter.com/hgfo2mO7
  3. @ralphbean talking about fedmsg at #fudcon http://del.ly/6015VxTD. I need to think about how @mirrormanager can leverage this.
  4. Hyperkitty is a new Mailman mailing list graphical front end, bringing email lists into the 21st century.

I look forward to next year’s FUDCon, wherever it happens to be.

logrotate and bash

It took me a while (longer than I should admit) to figure out how to make daemon processes written in bash work properly with logrotate, so that the output from bash gets properly rotated, compressed, closed, and re-opened.

Say, you’re doing this in bash:

#!/bin/bash
logfile=somelog.txt
while :; do
    # append one timestamped line per minute
    echo -n "Today's date is " >> ${logfile}
    date >> ${logfile}
    sleep 60
done

This will run forever, adding a line to the noted logfile every minute.  Easy enough, and if logrotate is asked to rotate the somelog.txt file, it will do so happily.

But what if bash has started a process that itself takes a long time to complete:

#!/bin/bash
logfile=somelog.txt
find / -type f -exec cat \{\} \; >>  ${logfile}

which, I think we’d agree, will take a long time.  During this time, it keeps the logfile open for writing.  If logrotate then fires to rotate it, we will lose all data written to the logfile after the rotate occurs.  The find continues to run, but the results are lost.  This isn’t really what we want.

The solution is to change how logs are written.  Instead of redirecting each command with >> ${logfile}, we’re going to let bash itself do the writing.

#!/bin/bash
logfile=somelog.txt
# redirect the shell's own stdout and stderr to the logfile
exec 1>>${logfile} 2>&1
find / -type f -exec cat \{\} \;

Now the output from the find command goes to its stdout, which is bash’s stdout, which the exec redirection sends to the logfile.  If logrotate fires here, we’ll still lose any data written after the rotate.  To solve this, we need bash to close and re-open its logfile.

Logrotate can send a signal, say SIGHUP, to a process when it rotates its logfile out from underneath it.  On receipt of that signal, the process should close its logfile and reopen it.  Here’s how that looks in bash:

#!/bin/bash
logfile=somelog.txt
pidfile=pidfile.txt

function sighup_handler()
{
    exec 1>>${logfile} 2>&1
}
trap sighup_handler HUP
trap "rm -f ${pidfile}" QUIT EXIT INT TERM
echo "$$" > ${pidfile}
# fire the sighup handler to redirect stdout/stderr to logfile
sighup_handler
find / -type f -exec cat \{\} \;

and we add this snippet to our logrotate configuration:

somelog.txt {
    daily
    rotate 7
    missingok
    ifempty
    compress
    compresscmd /usr/bin/bzip2
    uncompresscmd /usr/bin/bunzip2
    compressext .bz2
    dateext
    copytruncate
    postrotate
        /bin/kill -HUP `cat pidfile.txt 2>/dev/null` 2>/dev/null || true
    endscript
}

Now, when logrotate fires, it sends a SIGHUP signal to our long-running bash process.  Bash catches the SIGHUP, closes and re-opens its logfiles (via the exec command), and continues writing.  There is a brief window between when the logrotate fires, and when bash can re-open the logfile, where those messages may be lost, but that is often pretty minimal.
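
To check the behavior without waiting a day, you can force a rotation by hand – a quick test sketch, assuming the script above is already running and the logrotate stanza is saved as ./somelog.conf:

#!/bin/bash
# Force an immediate rotation using a private state file, then confirm the
# daemon keeps logging to the freshly truncated somelog.txt.
logrotate --force --state ./logrotate.state ./somelog.conf
sleep 61
tail -n 2 somelog.txt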

There you have it.  Effective log rotation of bash-generated log files.

(Update 7/5: missed the ‘copytruncate’ option in the logrotate config before, added it now.)

Dell Linux Engineers work over 5000 bugs with Red Hat

A post today by Dell’s Linux Engineering team announcing support for RHEL 5.8 on PowerEdge 12G servers made me stop and think.  In the post, they included a link to a list of fixes and enhancements worked in preparing RHEL 5.8 for our new servers.  The list was pretty short. But that list doesn’t tell the whole story.

A quick search in Bugzilla for issues which Dell has been involved in since 1999 yields 5420 bugs, 4959 of which are CLOSED, and only 380 of which are still in NEW or ASSIGNED state, many of them looking pretty close to being closed as well.  This is a testament to the hard work Dell puts into ensuring Linux “Just Works” on our servers, straight out of the box, with few to no extra driver disks or post-install updates needed to make your server fully functional.  You want a working new 12G server?  Simply grab the latest RHEL or SLES DVD image and go.  Want a different flavor of Linux?  Just be sure you’re running a recent upstream kernel – we push updates and fixes there regularly too.

Sure, we could make it harder for you, but why?

Congratulations to the Linux Engineering team for launching 12G PowerEdge with full support baked into Linux!  Keep up the good work!

s3cmd sync enhancements and call for help

Coming soon, Fedora and EPEL users with virtual machines in Amazon (US East for starters) will have super-fast updates.  I’ve been hacking away in Fedora Infrastructure and the Fedora Cloud SIG to place a mirror in Amazon S3.  A little more testing, and I’ll flip the switch in MirrorManager, and all Amazon EC2 US East users will be automatically directed to the S3 mirror first.  Yea!  Once that looks good, if there’s enough demand, we can put mirrors in other regions too.

I hadn’t done a lot of uploading into S3 before.  It seems the common tool people use is s3cmd.  I like to think of ‘s3cmd sync’ as a replacement for rsync.  It’s not – but with a few patches, and your help, I think it can be made more usable.  So far I’ve patched in --exclude-from so that it doesn’t walk the entire local file system only to later prune and exclude files – a speedup of over 20x in the Fedora case.  I added a --delete-after option, because there’s no reason to delete files early in the case of S3 – you’ve got virtually unlimited storage.  And I added a --delay-updates option, to minimize the amount of time the S3 mirror yum repositories are in an inconsistent state (now down to a few seconds, and could be even better).  I’m waiting on upstream to accept/reject/modify my patches, but Fedora Infrastructure is using my enhancements in the meantime.
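
Putting those together, a sync run looks roughly like the following – bucket name and paths are placeholders, and note that --delete-after and --delay-updates come from the patches described above, so they won’t be in a stock s3cmd until upstream merges them:

#!/bin/bash
# Illustrative invocation of the patched 's3cmd sync'.
# --exclude-from prunes excluded paths up front instead of walking the whole tree,
# --delete-after removes stale objects only after the new content is uploaded,
# --delay-updates holds the updates until the end of the run to shrink the
#   window where the yum repositories are inconsistent.
s3cmd sync --exclude-from /srv/mirror/excludes.txt \
    --delete-after --delay-updates \
    /srv/pub/fedora/linux/ s3://fedora-s3-mirror-example/pub/fedora/linux/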

One feature I’d really like to see added is to honor hardlinks.  Fedora extensively uses hardlinks to cut down on the number of files, amount of storage, and time needed to upload content.  Some files in the Fedora tree have 6 hardlinks, and over three quarters of the files have at least one hardlink sibling.  Unfortunately, S3 doesn’t natively understand anything about hardlinks.  Lacking that support, I expect that S3 COPY commands would be the best way to go about duplicating the effect of hardlinks (reduced file upload time), even if we don’t get all the benefits.  However, I don’t have a lot more time available in the next few weeks to create such a patch myself – hence my lazyweb plea for help.  If this sounds like something you’d like to take on, please do!

IPv6 on Dell Cloud and Rackspace Cloud Servers

IPv6 is coming – albeit slowly.  While the core Internet is IPv6-capable, getting that plumbed all the way through to your system, be it at home, in your company’s data center, or in a cloud offering, is still elusive.  When waiting isn’t an option, tunneling IPv6 over IPv4 has proven viable, at least for light uses.

Since 2006, I’ve been using the tunnel service provided by SixXS to have IPv6 at home.  Now that I’ve been making more use of cloud servers, first with Dell Cloud with VMware vCloud Datacenter Service, and now adding Rackspace Cloud Servers, I’ve wanted IPv6 connectivity to those servers too.  While both clouds have roadmap plans to add native IPv6 connectivity, I’m a little impatient, and I can afford to make the conversion once each is ready with native service.  So, I’ve expanded my use of SixXS into each of those clouds as well.

As it happens, both Dell Cloud and Rackspace Cloud Servers are network-located in the Dallas, TX area, where SixXS also has a PoP.  That means in both cases there’s only about a 2ms round trip time between my cloud servers and the PoP, which is an acceptable overhead.  In configuring my cloud servers, I requested a tunnel from SixXS, installed the aiccu program from the Linux distro repositories, and configured the /etc/aiccu.conf file with my credentials and tunnel ID.  Voila – IPv6 connectivity!  A quick update to /etc/sysconfig/ip6tables, and now my services are reachable through both IPv4 and IPv6.  As each tunnel also comes with a whole routed /48 subnet, as I stand up more cloud servers in each location I can route this subnet so I don’t have to configure separate tunnels for each server.
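
For anyone repeating this, the per-server setup boils down to a few lines – a minimal sketch for a Fedora/RHEL-style system, with placeholder credentials and tunnel ID; check the sample aiccu.conf shipped with your package for the exact set of keys:

#!/bin/bash
# Install aiccu and drop in SixXS credentials (all values below are placeholders).
yum -y install aiccu
cat > /etc/aiccu.conf <<'EOF'
username EXAMPLE-SIXXS
password not-my-real-password
protocol tic
server tic.sixxs.net
tunnel_id T00000
automatic true
EOF
service aiccu start

# Then open the IPv6 ports your services need, e.g. ssh, by adding a rule like
#   -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
# to /etc/sysconfig/ip6tables ahead of the final REJECT rule, and restart:
service ip6tables restart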

Free IPv6 connectivity for my cloud servers, without waiting for native connectivity.  That’s cool!

Dell 12G PowerEdge – IPMI interrupt and the death of kipmi0

A seemingly minor feature was added to our 12G PowerEdge servers announced this week – IPMI interrupt handling.  This is the culmination of work I started back in 2005 when we discovered that many actions utilizing IPMI, such as polling all the sensors for status during system startup, and performing firmware updates to the IPMI controller itself, took a very very long time.  System startup could be delayed by minutes while OMSA polled the sensors, and firmware updates could take 15 minutes or more.

At the time, hardware rarely had an interrupt line hooked up to the Baseboard Management Controller, which meant we had to rely on polling the IPMI status register for changes.  The polling interval, by default, was the 100Hz kernel timer, meaning we could transfer no more than 100 characters of information per second – reading a single sensor could take several seconds.  To speed up the process, I introduced the “kipmi0” kernel thread, which could poll much more quickly, but which PowerEdge users noted consumed far more CPU cycles than they would have liked.

Over time the Dell engineering team has made several enhancements to the IPMI driver to try to reduce the impact of the kipmi0 polling thread, but it could never be quite eliminated – until now.

With the launch of the 12G PowerEdge servers, we have a hardware interrupt line from the BMC hooked up and plumbed through the device driver.  This eliminates the need for the polling thread completely, and provides the best IPMI command performance while not needlessly consuming CPU cycles polling.
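
A quick way to check which mode a given system is running in – a hedged sketch, since the exact interrupt label varies by driver and platform:

#!/bin/bash
# If the BMC interrupt line is wired up and the driver is using it, an ipmi
# entry shows up in /proc/interrupts and no kipmi0 polling thread is needed.
grep -i ipmi /proc/interrupts
ps -e | grep '[k]ipmi0' || echo "no kipmi0 polling thread running"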

Congratulations to the Dell PowerEdge and Linux Engineering teams for finishing this effort!