bob's tech ramblings

where i ramble about technical things

6th June 2012

Early this morning I got alerts from several of our nagios installations saying they couldn't reach www.google.com. I logged into a production machine from my phone and checked it was still talking to the internet properly, then went back to sleep with an inkling this might be an issue with World IPv6 Launch day.

When I woke up refreshed and ready for action I went and investigated. It would seem that on our servers check_ping defaults to resolving the AAAA record, which isn't good when you don't have IPv6 connectivity. The easy fix is to pass -4 to check_ping, although I'm tempted to just remove the check: if the internet is that broken the alert can't be sent anyway.
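
A minimal sketch of the fixed check, assuming the standard Debian plugin path (the thresholds are illustrative, not our real config):

# -4 forces check_ping to resolve and ping the A record only
/usr/lib/nagios/plugins/check_ping -4 -H www.google.com -w 100,20% -c 500,60%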


30th August 2009

Capistrano is useful for deploying ruby applications. It can also be used to help with configuration and setup of things related to those applications.

For example, creating the ssl keys and certificates for the demo and staging sites. I've always used an openssl one-liner to do this, but I still needed to fill in the details for the certificate by hand, which is less than ideal if you want to automate the creation of the keys and certificates. So I dug around, found the right incantation to pass the certificate details to openssl, and made it into a capistrano recipe.

namespace :sslcert do

  desc "create a self signed ssl cert"
  task :create, :roles => :web do
    sudo "openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/private/#{application}.key -out /etc/ssl/certs/#{application}.crt  -days 9999 -nodes -subj \"/C=GB/ST=England/L=London/O=#{application}/OU=IT/CN=#{servername}\""
  end
end

As you can see, the magic happens with the -subj option.

This recipe puts the ssl certificate and key in the default locations for them on debian. You could of course change this and then not need to use sudo. In my actual work version I also make sure this recipe doesn't run on production deployments, since they should be using real ssl certificates.
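
If you want to confirm the recipe did the right thing, you can inspect the resulting certificate on the server. A quick sanity check, assuming an application called myapp:

# print the subject and validity dates of the generated certificate
openssl x509 -in /etc/ssl/certs/myapp.crt -noout -subject -dates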


16th August 2009

On Thursday the Randomness Guide to London was probably the target of a ddos attack. The machine it was hosted on went to a load of about 60, which is fine since it's Solaris. The main problem was that when the machine exhausted its memory and started to swap it would become unresponsive. To mitigate this I dropped the number of concurrent connections apache allowed. The machine suffered less, but the website was still unusable because you were fighting to get one of the limited number of available connections.
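
For reference, dropping the connection limit is a one-line change under the prefork MPM. A sketch with an illustrative value, not the number I actually used:

# cap the number of simultaneous apache workers
<IfModule mpm_prefork_module>
    MaxClients 20
</IfModule>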

Since I was in the pub, the other administrator of the Randomness Guide to London was renaming the CGIs so that the load would drop and she could get on to the machine. When I got back I whipped up some mod_rewrite rules so that we could see the site but everyone else got a holding page. Ten minutes after this the ddos stopped, so it was a bit late. However, a holding page is still a useful thing to have.

The next morning I refined it a bit to be more intelligent and to return a 503, which is the correct status code.

RewriteEngine On
RewriteCond %{REMOTE_ADDR} !AAA.BBB.CCC.DDD
RewriteCond %{DOCUMENT_ROOT}/holding.html -f
RewriteCond %{DOCUMENT_ROOT}/holding.enable -f
RewriteCond %{SCRIPT_FILENAME} !holding.html
RewriteRule ^.*$ /holding.html [R=503,L]
ErrorDocument 503 /holding.html

First turn on the rewrite engine. Then the conditions for which the RewriteRule applies:
  • Don't match your own IP address, so you can still see the site.
  • Make sure holding.html exists.
  • Check for the existence of a file called holding.enable. This is the means by which you turn the holding page on and off.
  • Don't apply the rule when serving holding.html itself, which avoids a loop.
Then the rule itself, which returns a 503 for everything. The ErrorDocument directive then serves holding.html as the body of the error page.
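
Turning the holding page on and off is then just a matter of creating or removing the flag file. A sketch, assuming a document root of /var/www:

touch /var/www/holding.enable   # holding page on
rm /var/www/holding.enable      # holding page off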

That's all you really need, although at work I add an extra line to stop stupid web caches showing the error page after the site is back:

Header always set Cache-Control "max-age=0, no-store"


10th August 2009

The first perl 5.10.1 release candidate was released the other day. As mst says, if you don't test it now and it breaks for you when it's released, it's your fault. As a good CPAN tester I've added it to my testing setup.

Since I care about OpenGuides for the Randomness Guide to London I made sure to submit a test report for it.

While I was doing this I also added perl 5.8.9 to my setup. I changed my setup slightly too, by pre-installing Moose and Catalyst, since that's what all the cool perl programmers are using these days, and installing and testing their dependency chains was burning a lot of cpu, time and bandwidth every time a new module based on them was updated or added to CPAN.
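
The pre-installation itself is nothing fancy. Something like the following, where the install prefix is hypothetical:

# install the heavy frameworks once so later test runs
# don't rebuild their dependency chains from scratch
/opt/perl-5.10.1-rc1/bin/cpan Moose Catalyst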


7th August 2009

So you've set up an apt repository following Dean's excellent instructions, you've tried to install a package, and you got the following warning:

WARNING: The following packages cannot be authenticated!

At this point you have several choices:

  • Press yes and carry on (not that useful if you're using puppet to install stuff).
  • echo 'APT::Get::AllowUnauthenticated "1";' >> /etc/apt/apt.conf.d/99unauth
  • Set up a secure repository

Let's go with setting up a secure repository.

  • Make yourself a gpg key - gpg --gen-key
  • Export your public key to a file - gpg --armor --export $keyid > public.key. You will need this later.
  • Create an apt-release.conf in your repository base containing APT::FTPArchive::Release::Suite "etch"; (I'm behind and should have written this post a year ago.)
  • Generate a release file - apt-ftparchive -c apt-release.conf release dists/etch/ > dists/etch/Release
  • Create a signed version - gpg --sign -ba -o dists/etch/Release.gpg dists/etch/Release
Your repository is now secure. Now you need to tell your machines about your key or apt-get will emit:
W: GPG error: http://debianrepo etch Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY $KEYID
To do this manually you can take the public.key you generated earlier, copy it to your machines, and then run apt-key add public.key.
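
Done by hand that looks something like this (the client hostname is illustrative):

scp public.key root@client:/etc/apt/public.key
ssh root@client '/usr/bin/apt-key add /etc/apt/public.key && /usr/bin/apt-get update'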

Of course in this day and age doing things like that for all your machines would be tedious, so I use puppet with a class something like the following.

class aptkey {

  # Distribute the repository's public key to every machine.
  file { "/etc/apt/public.key":
    mode   => "440",
    owner  => root,
    group  => root,
    source => [
        "puppet://puppet/host/public.key",
        "puppet://puppet/files/public.key"
        ],
  }

  # Add the key to apt's keyring, unless it is already there.
  # The grep matches the uid on the key.
  exec { "install-key":
     command => "/usr/bin/apt-key add /etc/apt/public.key",
     require => File["/etc/apt/public.key"],
     unless  => "/usr/bin/apt-key list | /bin/grep -q 'firstname.lastname'",
  }

  # Refresh the package lists, but only when a new key has been installed.
  exec { "key-update":
     command     => "/usr/bin/apt-get update",
     subscribe   => Exec["install-key"],
     refreshonly => true,
  }
}

4th August 2008

Dean wrote his Cron Commandments a while back but they got some link loving from Simon Willison relatively recently. The one he misses though is Thou shalt not rewrite cron. I'm looking at you, backgroundrb and rufus-scheduler.


22nd June 2008

If you don't know much about using and setting up cisco routers, I humbly suggest you buy Cisco Routers for the Desperate. It was very useful this week when I set up a pair of 2811s with HSRP. I would tell you how, but I think you should buy the book instead.


22nd May 2008

So recently I've backported[1] a couple of debian packages and needed somewhere to serve them from. The current work apt repo scares me, so I followed Dean's useful guide to setting one up instead. It worked very nicely and wasn't painful.

[1] post to come hopefully


21st January 2008

When I was setting up my Planet I noticed that Chronicle was outputting less than useful titles for the rss feed. Instead of whining I provided a patch, which Steve accepted.


13th January 2008

Today I have mostly been hating mailman

That's about an hour of my life I'm never getting back.