10 December, 2011

https in cpanel

working with someone who has a cpanel server. they want https on it. cpanel doesn't do that by default. google doesn't reveal much in the way of tutorials for this, so here's a note for people to find.

  1. generate a key pair and certificate using the Generate a SSL Certificate & Signing Request page. Copy the certificate onto your clipboard.
  2. go to the Install a SSL Certificate and Setup the Domain page. Paste in the certificate. click fetch on the key text field and it should populate that field for you. Set the username to nobody so that all users can use this key pair.
  3. When you save that page, apache will reload and you'll get https service on port 443, with a self-signed certificate (and so with consequent certificate mismatch error messages). But your existing domains won't work on that server - they'll go to the default cpanel parking page - cpanel only configures its virtual hosts on port 80... grr
  4. So next I made an apache mod_rewrite rule in the VirtualHost directive for the port 443 virtual server. That causes all the internal sites to appear on port 443.
        RewriteEngine on
        RewriteRule   ^(.+)          http://%{HTTP_HOST}$1 [P]
    
    That's an awkward hack to have to add to cpanel's generated config, but it seems to work (modulo invalid certificate warnings that all users ignore anyway)...

There's also a hole in the way that rewrite rule is implemented: with a custom http client, you can probably make this server act as an arbitrary proxy for you, depending on mod_proxy configuration.
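
One way to narrow that hole - a sketch, with example.com and example.org standing in for whatever domains are actually hosted on the box - is a RewriteCond that only proxies Host headers you recognise:

        RewriteEngine on
        RewriteCond   %{HTTP_HOST}  ^(www\.)?(example\.com|example\.org)$ [NC]
        RewriteRule   ^(.+)         http://%{HTTP_HOST}$1 [P]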

03 December, 2011

DHmPC

I have a server running inside EC2. It gets its network details using dhcp.

ubuntu@s0:~$ hostname
s0.barwen.ch
ubuntu@s0:~$ hostname --fqdn
hostname: Name or service not known

grr.

This happens pretty much every time the VM reboots. It's something to do with getting a new private IP address each time it reboots.

Although this manages to work:

hostname --all-fqdns
s0.barwen.ch polm.pl stacheldraht.it s0.barwen.ch

The autoconfigured resolv.conf looks like this:

nameserver 172.16.0.23
domain eu-west-1.compute.internal
search eu-west-1.compute.internal

and if I comment out or remove both the domain and search lines, then everything works...

Those lines are wrong anyway - this machine is in my barwen.ch domain. It just happens to be hosted in Amazon's network...
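
The fix I'd try (a sketch, assuming ISC dhclient, which is what Ubuntu uses): tell the client to override whatever the DHCP server sends, in /etc/dhcp/dhclient.conf:

supersede domain-name "barwen.ch";
supersede domain-search "barwen.ch";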

29 November, 2011

mysql rsync backup doh!

Well, I moved a big live database from one server to another for a customer. I was planning on taking the site offline first and doing an sql dump/move/restore, but someone accidentally deleted the old server before they were meant to (oops) - leaving me with some rsync backups that I'd made of the whole server over the previous days, to guard against such an eventuality.

Turns out rsyncing a live mysql database is not the right way to do things - I ended up with a database that managed to get mysqld to actually crash when I tried to dump out the database contents. I managed to fix this by dropping the tables that it was crashing on: it conveniently told me which ones it was crashing on, and it let me drop them, just not dump them. Luckily they were all cache tables so there didn't seem to be harm in dropping them and recreating them empty.

It also turns out (obviously, post facto) that mysql databases don't play nicely with rsync's --link-dest option, which is supposed to reduce disk space on the target file system when making multiple snapshots: --link-dest only optimises away duplicate data for identical files, but mysql stores the entire database (to a first approximation) in a single file, hiding its internal structure from rsync, and so the whole db gets duplicated each time.

Taking an SQL dump should solve the first problem. Not entirely sure what the right way to do an incremental backup of the database is. Maybe I shouldn't even try.
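
For the record, the dump I should have been taking looks something like this (a sketch; --single-transaction gives a consistent snapshot of InnoDB tables without locking the site out, while MyISAM tables would need --lock-all-tables instead):

mysqldump --single-transaction --all-databases | gzip > dump-$(date +%Y%m%d).sql.gz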

19 November, 2011

batch checking DNS delegation

I'm working with someone who has ten or so domains. All the domains are registered in different places, and with slightly different DNS settings. As part of tidying that up to get everything consistent, I wrote the following bash+dig script to display the delegation of each zone from its parent.

That is importantly not the same as what the name servers for the zone return as NS records. The "authoritative" source of NS (nameserver) records for a zone is that zone itself. Using dig to query the NS records seems to be returning those, unsurprisingly.

However, in order for a zone to be queried at all, there is a second place where name servers must be configured, separately from the zone itself: the parent zone. If those are wrong, then you can get awkward-to-diagnose problems: you can see from dig that the nameservers are right, yet lookups go to the wrong place.

Hence my script.

You can see even on my own domains there's a slight misconfiguration: barwen.ch claims to have s0.barwen.ch as a server, but the .ch registry isn't delegating to it. That won't cause bad DNS lookups, but will cause s0.barwen.ch to not be used as a nameserver sometimes. Worse is when the delegation points to an old server that then returns the new server's DNS data, giving the illusion that all is well, until you turn off the old server (which is the problem I have on other zones).

$ ./list-domains-NS.sh 
ZONE hawaga.org.uk
NAMESERVERS ACCORDING TO GOOGLE DNS
dildano.hawaga.org.uk.
paella.hawaga.org.uk.
NAMESERVERS ACCORDING TO org.uk
hawaga.org.uk.  172800 IN NS paella.hawaga.org.uk.
hawaga.org.uk.  172800 IN NS dildano.hawaga.org.uk.

ZONE barwen.ch
NAMESERVERS ACCORDING TO GOOGLE DNS
paella.hawaga.org.uk.
s0.barwen.ch.
dildano.hawaga.org.uk.
NAMESERVERS ACCORDING TO ch
barwen.ch.  3600 IN NS dildano.hawaga.org.uk.
barwen.ch.  3600 IN NS paella.hawaga.org.uk.

So here's the script:

#!/bin/bash
cat domainlist.txt | while read d ; do
  echo "ZONE $d"
  echo "NAMESERVERS ACCORDING TO GOOGLE DNS"
  dig @8.8.8.8 -t ns $d +short

  # the parent zone name: the query domain with its first label removed
  PARENT=$(echo $d | sed 's/^[^.]*\.//')
  echo "NAMESERVERS ACCORDING TO $PARENT"

  # pick any one nameserver for the parent zone
  PARENTNS=$(dig +short -t NS ${PARENT}. | head -n 1)

  # ask the parent's nameserver for the delegation records, without recursion
  dig @$PARENTNS -t NS +noall +authority +norecurse $d

  echo
done

12 November, 2011

country selector

I've had a rant about various forms of internationalisation/localisation queued up on this blog, but never completed it.

One part of the rant is about web forms that ask for your country. A common way to do this is a drop down list of "every country" (deliberately in quotes). That's awful.

It's especially awful when you are simultaneously from: Britain; the United Kingdom; Great Britain; and England - all "countries" by one definition or another. Not only do you have to scroll down to your country, you don't even know which of the four countries you are meant to scroll to until you get there and find that country missing.

I saw (on hacker news) this redesigned country selector: http://baymard.com/labs/country-selector which presents an autocompleting text box.

Type "eng" and it brings up "United Kingdom". woo!

It also converts "deut" into "Germany", but fails to convert "日" into Japan.

08 October, 2011

beep is not useful.

sometimes I get a beep on my laptop. my console has ~10 applications on 9 desktops, including being remotely connected to ~5 machines. a beep without any context is not useful.

01 October, 2011

POSSIBLE BREAK-IN ATTEMPT (not really)

SSH gives out error messages like this:
Sep 28 09:50:09 s0 sshd[27967]: reverse mapping checking 
                  getaddrinfo for adsl86-34-217-144.romtelecom.net 
                  [86.34.217.144] failed - POSSIBLE BREAK-IN ATTEMPT!
Why does it label it as POSSIBLE BREAK-IN ATTEMPT!? How is it more of a possible break-in attempt than a user attempting to connect more than a few times with a wrong password? This has bugged me a bit recently when helping a few people who aren't really used to linux - it's shouting at them that something is SERIOUSLY WRONG!!! and when they look through their log files, they've fixated on this (as far as I can tell) relatively minor misconfiguration of a remote user's network.
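
If the shouting bothers you, the reverse-mapping check (and the message) goes away if you tell sshd not to do DNS lookups at all - one line in /etc/ssh/sshd_config, assuming you don't rely on hostname patterns in things like from= options in authorized_keys:

UseDNS no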

24 September, 2011

♫ if i could turn back time ♪

The cooker has a one-way-only knob for setting time. You wind forwards until you reach now. If you miss now, you wind forwards another 24h (luckily the cooker doesn't have a date too) until you reach now again. On TZ shift (i.e. end/start of summertime / daylight savings) there's a wind-forward 1h event once per year. Fairly easy: you make the clock run faster than real time. But there's also a once-per-year wind-forward 23h event. BUT! Turns out if you can stop the clock, then you can let real time run forwards faster than clock time. So stop the clock, wait an hour, and then start the clock again. It involves more real time (you have to wait an hour) but a lot less tedious winding.

10 September, 2011

5 centuries ago...

I just queued up jobs for a physics simulation that are 5 compute-centuries long:

6 jobs x 11.5 hours x 16384 nodes x 4 cores/node = 5.16 centuries.

I'm expecting the runs to be finished in about a week.

If this were running on a single core and I wanted it to finish in about a week, I would have had to start it in the year 1495 (the most interesting Wikipedia note for that year is that Friar John Cor records the first known batch of Scotch whisky).
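
For anyone checking my arithmetic, bc agrees (core-hours first, then centuries at 24 x 365.25 x 100 hours each; and 2011 minus 516 years lands at 1495):

$ echo "6 * 11.5 * 16384 * 4" | bc
4521984.0
$ echo "scale=3; 4521984 / (24 * 365.25 * 100)" | bc
5.158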

03 September, 2011

frustrated-with-slow-simulation haiku

microseconds pass
simulation time stretched
long as winter's night


I can't complain too much though - this was 65536 cores on an IBM BlueGene/P

31 August, 2011

today is blog day

Today is blog day (well, today isn't, it's the 19th of May, but hurrah for autopost)

I'm supposed to recommend 5 blogs to you.

I'm also supposed to find *new* blogs. But FTS, I'm too lazy. Plus the ones listed here are ones that I know I go back to day after day and find useful.

Oglaf is some dude's NSFW comic that started out as an attempt at cartoon porn but now is mostly funny. The shield maiden is my favourite post.

Hacker News from ycombinator - kinda like slashdot in that it's a lot of interesting geek links. More links. Less discussion. Feels more intimate, and more focused around the bay area startup crowd.

Lambda the Ultimate - a programming languages weblog. A cross between programming and maths. If you like Haskell, you'll like this, though all sorts of language stuff (theory, design) is talked about. You can also laugh at the "amusing" paper titles that authors come up with.

In a bit of shameless self promotion, I'll point you at the drupal news feed for my hobby project, barwen.ch - this is partly blog, partly just a web page change tracker.

Finally, Bruce Schneier's security blog. Aside from his tendency to ramble on about squids, this is pretty interesting. It evolved out of his company newsletter, and talks mostly about computer security but also about other related topics such as social engineering and terrorism. And squid.

27 August, 2011

Exbibyte divergence.

Mostly used by debian crackheads, but slowly becoming more widespread (and apparently also defined by the IEC), is the use of kibibyte to mean a kilobyte measured as 1024 bytes, so that kilobyte can then unambiguously mean 1000 bytes. There's a 2.4% difference between the two.

Correspondingly there are mebibyte, gibibyte, tebibyte, pebibyte, exbibyte, zebibyte, and yobibyte.

While the difference is only 2.4% at the kibibyte scale, it gets bigger with larger prefixes: by the time you get to petabytes, the scale of the largest file systems I have access to these days, the difference is over 12%, and by yobibytes it's over 20%.
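
Here's the divergence at each prefix (2^10k vs 10^3k, kibi through yobi), computed with bc:

$ for k in $(seq 1 8); do echo "scale=3; 2^(10*$k) / 10^(3*$k)" | bc; done
1.024
1.048
1.073
1.099
1.125
1.152
1.180
1.208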

20 August, 2011

brazen sms charging

It's fairly obvious to anyone who thinks about it for a while that SMS is appallingly bad value per byte compared to IP or voice over the same system.

But Orange UK really made me laugh/cry with the brazen way in which they juxtaposed two menu options to make it so clear:

To buy 50 UK SMSes valid for a day for 1 pound, press 1.
 To buy 50 Mb of data valid for a day for 1 pound, press 2.

Yes, 1 UK SMS == 1 Mb of internet data.

W
T
F

13 August, 2011

y2.1k bug

a decade into this century, and I see plenty of people using 2-digit years. we all know where that's leading...

06 August, 2011

auctions in course allocation

Apparently some universities use an auction mechanism for allocation of places in courses, as detailed, for example, on page three of this memo from NYU.

Very briefly describing this in quotes:
  • Students will be allotted bidding points
  • After an auction is run, students admitted to a class will be charged a “clearing price” in points equal to the highest bid from a student not admitted to the class at that auction (so with, say, 20 places and more than 20 bidders, every admitted student pays the 21st-highest bid).

30 July, 2011

HTML Slidy

HTML Slidy is a neat style sheet for making powerpoint-like presentations that are: i) written in HTML, and ii) appear in a browser. I've used it for one presentation and quite liked it. My presentation was entirely text. I can imagine it being more awkward for graphics-intensive presentations where the automatic reflowing of content to fit the current window turns into a downside rather than an upside.

23 July, 2011

dnssec revisited

well I was pleased that my dnssec notes remained sufficiently intelligible after a few months to set up a new zone. but the machine I was just using took almost 5h to generate keys for the new zone. oh the entropy.
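
the usual dodge - at some theoretical cost in randomness quality - is to point BIND's dnssec-keygen at /dev/urandom so it doesn't block waiting for entropy. something like this (algorithm and key size being whatever you normally use):

dnssec-keygen -r /dev/urandom -a RSASHA1 -b 1024 -n ZONE example.com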

16 July, 2011

browserid in-browser shell logins

I've previously wired up my shell server barwen.ch to allow browser-based logins using OpenID and shellinabox. I've written about that on this blog before. I saw a few articles about Mozilla's BrowserID. The code snippets there looked like they would integrate well with the code I had already. So my evening project (which ended up only taking about half an hour) was to prototype BrowserID-based shell login.

It works basically the same as for the OpenID login on barwen.ch:

To get set up: you need to sign up for a barwen.ch account which will cost you 50 cents on PayPal; you need to send me the email that you use for BrowserID (instead of / in addition to an SSH key).

To actually log in, go to the login page http://s0.barwen.ch/~benc/browserid.html and log in. A terminal will appear in your browser. You do shell stuff.

This code is pretty crappy so I don't really want to release it until I've had a thought about the security for at least an hour (though you can find fragments of it elsewhere on this blog). I especially think that there might be some attacks possible by using freaky email addresses vs my unsanitised string handling. (I'm looking at you, Bobby Tables).

bbc 500

During the wedding of the Duke and Duchess of Cambridge in April, the BBC website was for a while overwhelmed and giving http 500 errors. I was amused by the graphic they presented:



which UK viewers above a certain age will find familiar.

09 July, 2011

blue traffic light

when I was a little boy, it always struck me that traffic lights should have a blue light in addition to the red and the green (and the amber), though I could never decide what it should be for.

anyway I discovered that in Hong Kong, the MTR East Rail line *does* have a blue light on its signals, so my childhood desire is finally satisfied. it even seems a sensible use:

Passenger trains have been running under an overlaid Automatic Train Protection (ATP) system since 1998. A blue aspect is displayed at signals when a train fitted with working ATP is approaching. A signal showing a blue aspect can be ignored by the driver, who will drive according to the information shown on the cab display.

02 July, 2011

mrtg user counts

I had a graph in mrtg of two variables: number of login sessions (the blue line, the count of users from the unix users command), and number of unique users logged in (the green solid area - formed by deduping the output of users).

I decided that the number of login sessions was a bit useless especially with people using screen. A more interesting graph is perhaps something else based on the number of users doing things rather than the number of things a user is doing. So I chose to count the number of (normal, not system) users who have at least one process running.

I count it this way:

#!/bin/bash

# first, unique users logged in
echo $(users | sed 's/ /\n/g' | sort | uniq | wc -l)

# number of users who have 3xxx user numbers and have a process running 
# 3xxx is the "normal user" range, though perhaps I could distinguish in other ways
U=$(ps -A --format user= | sort | uniq | while read a ; do id -u $a ; done | grep -e '^3...$' | wc -l)

echo $U
# mrtg also expects an uptime line and a target-name line - unused here
echo 0
echo 0


and here's the output: (green is logged-in users, blue is users with processes running)



If you click through to the full page, you might see the change happening in the middle of week 17 of 2011, which is the beginning of May.

24 June, 2011

non-cidr netmask: "worked in testing but broke colleagues' brains"

CIDR prefix length (for example, the 24 in 128.9.128.0/24) is a more concise notation for (a commonly used subset of) netmasks.

A prefix length contains less information - it can only represent netmasks that consist of a sequence of 1 bits, followed by 0 bits to the end. For example, /24 is 11111111111111111111111100000000 (24 1s and then 32-24=8 0s)

This is useful because that's how most people use netmasks.

But there's a set of netmasks that aren't representable this way - for example 11111111000000001111111100000000.

Did anyone ever use netmasks that weren't prefix-length-representable? Apparently yes:

Addresses were allocated from these networks sequentially, and the oldest
web sites tended to get the most traffic, so a straightforward setup that
spread the six /18s across the reverse proxies didn't balance the load
particularly well. I toyed with using 0xffff0003 netmasks to split the /16
so that successive addresses could be routed to each of the four London
reverse proxies in turn.

This worked in testing but I didn't deploy it because it broke my
colleagues' brains and non-contiguous netmasks were an unsupported
feature.
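
As an aside, testing whether a mask is prefix-length-representable is a small bit-twiddling exercise: invert the mask, and the result must have all its set bits at the bottom (i.e. be one less than a power of two). A bash sketch, assuming 32-bit masks given in hex:

#!/bin/bash
m=$(( $1 ))                # e.g. ./prefixable 0xffff0003
n=$(( ~m & 0xffffffff ))   # complement; a real prefix mask leaves 0...01...1
if (( (n & (n + 1)) == 0 )); then
  echo "prefix-representable"
else
  echo "non-contiguous"
fi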

18 June, 2011

password policy for ssh key hosts

I have a host with a handful of users. When they authenticate, they have to use either ssh public key or openid - there is no on-machine password. But some of the services that are running pretty much need a password: for example, IMAP, SMTP AUTH, web portal.

I'd like to give these users the ability to acquire a password.

Two ways are immediately apparent:
  • Implement a mechanism which presents the user with a new machine-generated password, sent for example to their registered email address (a sketch of this appears after this list).
  • Allow the user to run passwd in a shell, but not require them to enter their existing password first (instead, rely on the fact that they are logged in to be sufficient authentication).
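
A minimal sketch of the first option - a hypothetical script run as root, with the username and registered address passed on the command line (openssl to generate the password, chpasswd to set it):

#!/bin/bash
# usage: issue-password username email
USER=$1
EMAIL=$2
PW=$(openssl rand -base64 12)
echo "$USER:$PW" | chpasswd
echo "your new password is: $PW" | mail -s "new password on this host" "$EMAIL"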

Dear Reader, can you think of other ways? Do you have any opinions on the wiseness/unwiseness of these approaches?

10 June, 2011

roman numeral literals using Template Haskell

I've known about Template Haskell (TH) for ages but never used it. As a diversion from another project, I thought I'd try to implement roman numeral literals for Haskell, using TH -- this seems an easy way to get started without having to dig in too far.

I already wrote some Haskell roman numeral code which gives a function numeralsToInteger :: String -> Integer. I tweaked its source code a bit (it wasn't a module previously, for example) and can write this:

import Roman

main = do
  putStrLn "roman test 1 (this should print 1998 below)"
  print $ numeralsToInteger "MCMXCVIII"

What I want to do with TH is shorten the syntax and make the roman->literal conversion happen at compile time instead of at runtime.

I'll try to use TH's quasi-quoter syntax, which should allow me to write:
print [ro|MCMXCVIII|]

ro is the name I've chosen for my quasiquoter, and the [...|...|] bracket syntax comes from template haskell quasiquoting.

What will happen is that everything inside those [|brackets|] will be passed to my code, which can return an arbitrary Haskell expression to be substituted at compile time. Think C-style #define on steroids.

In my case, I'll always be returning an integer literal expression. The value of that literal will be determined by parsing the roman numeral.

I need to define ro :: Language.Haskell.TH.Quote.QuasiQuoter, and it needs to live in a different module (so that it can be compiled before attempting to parse anything that uses that quasiquoter)

I want to use my quasiquoter to return an expression. They can be used in more contexts than that (for example, when a type is needed). I'll make the other contexts into an error, which leaves only the hard one that parses and returns my expression. (after a bunch of fiddling in Agda, it feels like a relief to be able to say "oh, I'll just make that an error").

It turns out that the code is mostly plumbing. RomanTH.hs looks like this:

module RomanTH where

import Roman
import Language.Haskell.TH
import Language.Haskell.TH.Quote

ro :: Language.Haskell.TH.Quote.QuasiQuoter
ro = QuasiQuoter parser err err err

err = error "Roman numeral quasiquoter cannot be used in this context"

parser :: String -> Q Exp
parser s = litE (IntegerL (numeralsToInteger s))

which is all types and wiring except the function parser s = litE (IntegerL (numeralsToInteger s)), which means (right-to-left): convert the string to an Integer, embed it in a literal, and embed that inside an expression.

The complete test program looks like this:

import RomanTH

main = do
  putStrLn "roman test 2 (this should print 1998 below)"
  print [ro|MCMXCVIII|]

and runs like this:

$ ./test2 
roman test 2 (this should print 1998 below)
1998

Well, that took all of 36 minutes.

08 June, 2011

best ipv6 address seen so far

2600::

It even works, serving http for sprint.net

June 8th is world IPv6 day!

June 8th is ISOC's "world ipv6 day". According to the organisers:

On 8 June, 2011, Google, Facebook, Yahoo!, Akamai and Limelight Networks will be amongst some of the major organisations that will offer their content over IPv6 for a 24-hour “test flight”. The goal of the Test Flight Day is to motivate organizations across the industry – Internet service providers, hardware makers, operating system vendors and web companies – to prepare their services for IPv6 to ensure a successful transition as IPv4 addresses run out.

Some of us have been trying to do that for years ;)

Anyway, some things you can do:
  • Read all the ipv6 related posts on this blog
  • Make sure you are using IPv6 certified cable
  • Play loops of zen, an ipv6-only browser tile-puzzle
  • If your ISP does not provide IPv6 and you want easy IPv6 access for a single workstation, I recommend the teredo protocol (it's built into Windows, and you can apt-get miredo on Debian). For a server or network installation, take a look at Hurricane Electric's tunnel broker which will give you a /48 prefix routed over your normal internet connection.
  • If you're on ipv6 already and you're feeling hardcore you can use NAT64 and DNS64 to get rid of all your local ipv4 traffic, instead routing it all through a NAT gateway at (for example) Andrews and Arnold.

29 May, 2011

Panicker's guide to world ipv6 day

8th June 2011 is World IPv6 day. Maybe you haven't done anything about getting your website ipv6 enabled. It's taken the world 15 years to develop IPv6, so sure, it seems *totally* reasonable that you can get it deployed in 10 days.

What I'm going to tell you about here will get basic IPv6 access to your site. It won't do it in a particularly pretty way, and it's probably not the long-term way to do it. Also I just hammered this out this afternoon (though it's based on years of IPv6 use). It should work, I hope.

First some DOs and DONTs:

* DON'T deploy IPv6 on your production servers. If you don't know much about IPv6, then blindly sticking it on your real production resources is probably a good way to put even your IPv4 (read: the real IP that everyone actually uses) connectivity at risk (for various reasons that you'll understand when you understand...)

* DO deploy a http proxy server on its own machine, and have that proxy your IPv6 traffic. You shouldn't need to modify *anything* on your production machines.

* DON'T put the IPv6 address record (AAAA, as opposed to the IPv4 A record) in your normal DNS. If you do, then users who have both IPv4 and IPv6 will usually try to connect over IPv6. You don't know how well this is going to scale, or how good your IPv6 connectivity is going to be (or even how good your users' ipv6 connectivity is going to be, if everyone is going to be fucking around that day)

* DO put a DNS name (eg www.6.example.com, if your main site is on www.example.com) with the AAAA record. That way, users can choose to try using IPv6, and if it's broken can easily get back to your main site. You'll need to publicise this, though, because it's not going to get users connecting via IPv6 automatically, and at the same time you should provide some feedback channel: for example, an email address or a forum.


So what do you need to do:

* Get a dedicated server (either a physical hardware server, or a VPS) running a recent version of Linux. (Ubuntu 10.x would be enough)

* Connect that server to the ipv6 internet. If it's on a network with native IPv6, then your host will probably give you connection details. If not, then use Hurricane Electric's free tunnel broker which will connect you over a regular internet connection.

* However you connect, you'll end up with an IPv6 address for your machine. It will be a string something like 2001:470:1f09:1288::2 that you can get out of ifconfig (specifically, if you have a choice, choose the one that begins with a 2, not the one that begins with an f). Put that IPv6 address into an AAAA record in DNS (better hope your DNS hosting provider does AAAA records - the good ones do...) under a new DNS name. Don't put IPv4 addresses in there too. In my example, I'm going to configure:
blog.6.hawaga.org.uk AAAA 2001:470:1f09:1288::2

* Put apache httpd on your server, apt-get install apache2

* Now you'll need a client machine with IPv6 to try connecting to your new server. If you have a windows PC, you can probably turn on Teredo in the network configuration - it comes built in. On OS X, Linux or BSD, you can install miredo which is a Teredo client. Or you can set up another Hurricane Electric tunnel for your client machine. You can use test-ipv6.com to get a score for how well your new client machine is connected to the ipv6 internet.

* You should now be able to use a web browser to reach the hostname you configured in DNS back there - you should see apache's welcome/default page.

* Now, configure apache to forward all requests it receives onwards to your production website over IPv4. In the following example, my production IPv4 website is the one you are reading right now, benctechnicalblog.blogspot.com. Enable mod_proxy and mod_proxy_http, and then set up a virtual host directive like this (or put it in the base of your server config, seeing as this a host dedicated to forwarding ipv6 traffic):

<VirtualHost *:80>
  ServerName blog.6.hawaga.org.uk
  ProxyPass / http://benctechnicalblog.blogspot.com/
  <Proxy http://benctechnicalblog.blogspot.com/>
    Allow from All
  </Proxy>
</VirtualHost>

Once you've done that, visiting your ipv6 hostname (eg blog.6.hawaga.org.uk) should serve you the content of your real production website.

Now publish the new hostname in a news item and make it seem like you know what you're doing...

Some stuff will not work, for sure: if you have anything that does things based on the client's IPv4 address, that's all going to see the address of the proxy machine, not the real client IPv6 address. Things that might be affected include localisation (eg language) and rate/load limiting based on IP address.
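
One partial mitigation: mod_proxy adds an X-Forwarded-For header carrying the real client address, so the production side can at least log the IPv6 clients. A sketch for the backend apache (the log file name is just an example):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog /var/log/apache2/proxied.log proxied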

So, please ask questions in the comments and I'll see about answering them...

27 May, 2011

printing a message to stdout in fortran

write (*,'(''I wonder how someone kept a straight face when they invented this syntax for writing a string to the console'')')

21 May, 2011

ruby cloudwatch -> mrtg interface

I use mrtg to gather historical data on some of my servers. One of those servers lives in Amazon's Elastic Compute Cloud (EC2) and so is also monitored by Amazon CloudWatch.

Can I get cloudwatch data into mrtg?

mrtg has a fairly straightforward interface for plugging in arbitrary unix executables to collect data, so my first attempt was to use the main Java-based cloudwatch client to get data. that attempt started up one jvm for each metric collected, which massively overloaded my ec2 microinstance, keeping the load average around 3. pretty lame.

Amazon also provides a ruby interface. I had never programmed in ruby before, but it's often interesting to learn a new language.

Here's what I ended up with.

First the config block for mrtg, which calls out to the ruby-mrtg-cloudwatch3 program that I wrote:

Target[cloudwatch_network]: `/home/mrtg/ruby-mrtg-cloudwatch/ruby-mrtg-cloudwatch3 NetworkIn NetworkOut AWS/EC2 InstanceId=i-26bcaf51`
Title[cloudwatch_network]: Network traffic according to cloudwatch
options[cloudwatch_network]: growright,absolute,logscale
MaxBytes[cloudwatch_network]: 100000000

This gives a graph of network traffic according to cloudwatch. I can compare that alongside the network traffic graph for eth0 gathered from the local interface statistics. They should roughly match up, and they do (well hopefully they still do by the time you read this - these are live images):

According to the on-host network interface:


According to cloudwatch:


Now the actual ruby code:

#!/usr/bin/ruby1.8

require 'rubygems'
require 'time'    # for Time.parse
require 'AWS'

The two cloudwatch metric names - one that measures output data, one that measures input data - are given on the command line:
metrico=ARGV[0]
metrici=ARGV[1]

My code has hardcoded access keys at the moment which is a bit shitty:
ACCESS_KEY_ID='foo'
SECRET_ACCESS_KEY='bar'


Using the above credentials, a new cloudwatch object is made, @cw.

@cw = AWS::Cloudwatch::Base.new(:access_key_id => ACCESS_KEY_ID, :secret_access_key => SECRET_ACCESS_KEY, :server => "eu-west-1.monitoring.amazonaws.com" )

Each of the two metrics will be probed with the probe function. This uses a state file based on the metric name to get only readings which have not already been seen by this script. The two metrics use separate state files because cloudwatch doesn't give an atomic read for multiple metrics at once. The state file stores the time of the last seen reading. If there is no state file, we have to invent a time. There is a subtlety here: data does not appear in cloudwatch until around 5 minutes after its time stamp, so using the current time as an initial value results in not seeing any results. Instead, I go back about 15 minutes the first time, which seems to be far enough back to get something.

def probe(metric)

  et = Time.now()

  statusfn="cloudwatch-"+ARGV[3]+"-"+metric+".status"
  if FileTest.exist?(statusfn) then

    f = File.new(statusfn, "r")
    tstring = f.gets
    ts = Time.parse(tstring)
    f.close
  else
    ts = et - 900 # needs to be more than 5 mins because otherwise we never get any data.
  end

  res = @cw.get_metric_statistics(:measure_name => metric,  :statistics => 'Average,Sum', :namespace => ARGV[2], :period => 300, :start_time => ts, :end_time => et, :dimensions=> ARGV[3])


Now we're going to look at the rows that come back. Usually only one row will come back, if we're running this at about the same rate that cloudwatch is adding readings, but sometimes there will be more, or fewer.

In the case of network traffic, I want to return the sum of all readings for this metric. In other cases, such as disk usage, I would want to return the mean. This distinction is the same as default vs gauge measurements in MRTG.

  samples = 0
  sum = 0
  avgsum = 0

  datapoints = res["GetMetricStatisticsResult"]["Datapoints"]

  lt = ts
  if datapoints.nil? then
   # nop
  else
    rows = datapoints["member"]

    rows.each { |r|
      nlt = Time.parse(r["Timestamp"])
      if(nlt < ts) then
        # nop - time was before requested start
      else
        samples += Float(r["Samples"])
        avgsum += Float(r["Average"])
        sum += Float(r["Sum"])
        nlt += 1
        if(nlt > lt) then
          lt = nlt
        end
      end
    } 

Now we can write out the new state file:

    f = File.new(statusfn, "w")
    f.puts(lt)
    f.close
  end
  return sum
end


and finally output the MRTG format information:

sumo=probe(metrico)
sumi=probe(metrici)

# output mrtg format
puts sumo
puts sumi
puts 0
puts "cloudwatch: "+metrico+" and "+metrici


The end.

14 May, 2011

UK electric sockets vs chopsticks



My father taught me this trick when I was far too young to be taught this trick - how to connect a Europlug to a UK (actually HK in this case) electrical socket.

07 May, 2011

PAM python module for out-of-band one-time tokens

I previously wrote about integrating openid with unix shell logins for my public shell server barwen.ch. This post talks about the out-of-band PAM token module that I wrote as part of that - where I'm generating the tokens when the user is authenticated by OpenID. But I guess it could also be useful when there's some other out-of-band mechanism (such as sending something by SMS?).

Subject to some other non-PAM web-based authentication (openid in this case, but it could be anything really), I want to issue a token value to the user in-band with respect to that other authentication (i.e. in a web page shown to the user), which is out-of-band with respect to PAM. That token should then be usable for a short period of time to make a single PAM authorisation to sshd.

That is, if you can log into the web-based authentication, you should then be able to log into the ssh system.

So on one side I need a PAM module (which I will write in Python) to check the tokens, and on the other side I need something (a command line tool) to issue the tokens. To complete the loop, I need some database (the filesystem) to store the tokens on the server side.

So here's the code. The PAM module comes complete with documented security vulnerability which allows anyone to delete certain files on your file system. ho ho.


First the token creator:

#!/usr/bin/python

import base64
import os
import pickle
import sys
import time

VALIDTIME = 60  # seconds for which a token remains valid

# generate an unguessable token; "+-" as altchars keeps it filename-safe
tokenbits = os.urandom(8)
token = base64.b64encode(tokenbits, "+-")

print token

# store (username, expiry time) in a file named after the token
fn = token + ".token"
fh = open(fn, 'w')

obj = (sys.argv[1], time.time() + VALIDTIME)

pickle.dump(obj, fh)
fh.close()

and secondly the PAM module, which is based on the PAM module in my previous post:

import os
import syslog
import pickle
import time

def pam_sm_authenticate(pamh, flags, argv):
  syslog.syslog("start benc")
  pamh.authtok
  if pamh.authtok == None:
    syslog.syslog("got no password in authtok - trying through conversation")
    passmsg = pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, "Monkeyballs?")
    rsp = pamh.conversation(passmsg)
    syslog.syslog("response is "+rsp.resp)
    pamh.authtok = rsp.resp
  # so we should at this point have the password either through the
  # prompt or from previous module
  syslog.syslog("got password: "+pamh.authtok)

  # now look for token
  # SECURITY BUG TODO: we're using this to make a path so we need to make sure
  # we're not being directed out into some other directory. We know the
  # range of characters that can be used in a token so we can reject if
  # anything other than those exists.
  # Especially as we delete the token file on use - otherwise could delete
  # arbitrary files on the system...
  tfn = "/root/" + pamh.authtok + ".token"

  if os.path.exists(tfn):
    fh = open(tfn)
    tokendata = pickle.load(fh)
    tokenuser = tokendata[0]
    tokentime = tokendata[1]
    fh.close()
    os.remove(tfn);

    # will remove the token even if it was for the wrong user
    # not sure if there's any security difference wrt leaving it there if
    # it's the wrong user?

    if tokentime < time.time():
      syslog.syslog("token time expired")
      return pamh.PAM_AUTH_ERR

    if tokenuser != pamh.user:
      syslog.syslog("token user "+tokenuser+" does not match requested user "+pamh.user)
      return pamh.PAM_AUTH_ERR
    

    return pamh.PAM_SUCCESS

  return pamh.PAM_AUTH_ERR

def pam_sm_setcred(pamh, flags, argv):
  return pamh.PAM_SUCCESS

26 April, 2011

ssh-like login with openid

I rigged together shellinabox and mod_auth_openid with some custom PAM glue so people can log into my hobby server s0.barwen.ch with an in-browser terminal window and openid.

shellinabox is not ssh (although web-based ssh is a good approximation). Instead it seems to be AJAX-over-https (which is kinda wtf for terminal access, but hey it seems to work).

The way I've glued it together is: First you visit the login page. That is an openid protected CGI script. The script runs with your openid in $REMOTE_USER, and does three things: it maps your openid to a local username; it generates (via sudo) an authentication token for you; and it HTTP-meta-redirects you to a hacked version of shellinabox.

shellinabox gives you a login prompt, asking for username and password. My hacked version stuffs the username and authentication token from the previous step into the keyboard.

The token ends up at a custom PAM module which verifies that the token is valid (for that user, and within a small time window after issue) and lets you in.

Then you get your shell prompt.

This seems like an interesting addition to barwen.ch's collection of login methods.

If you want a play, you can sign up at s0.barwen.ch

Also, if you break this, let me know rather than deleting / ...

23 April, 2011

pam_python

I came across pam_python, a PAM module that lets you write PAM modules in Python. I've come across things scripted by python a couple of times in the last few weeks at work so it seems interesting to play in this direction.

The first module I got sort-of working is this, which lets anyone log in with the password poop53.

import syslog

def pam_sm_authenticate(pamh, flags, argv):
  syslog.syslog("start benc")
  at = pamh.authtok
  syslog.syslog("got password: "+at)
  if at == "poop53" : 
    return pamh.PAM_SUCCESS
  else:
    return pamh.PAM_AUTH_ERR

def pam_sm_setcred(pamh, flags, argv):
  return pamh.PAM_SUCCESS

Now this cheats a bit - it assumes that some other module has read in the password from the user. I used pam_unix to do that, configured as below, so that first a check against the unix password happens, and then if that fails, a check against poop53.

auth sufficient pam_unix.so
auth sufficient pam_python.so /root/auth.py

For the specific use I am thinking of, I don't want unix passwords to work. So in that case, I need to read in the password myself if it isn't already set.

Here's how I made that work:

def pam_sm_authenticate(pamh, flags, argv):
  syslog.syslog("start benc")
  pamh.authtok
  if pamh.authtok == None:
    syslog.syslog("got no password in authtok - trying through conversation")
    passmsg = pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, "Monkeyballs?")
    rsp = pamh.conversation(passmsg)
    syslog.syslog("response is "+rsp.resp)
    pamh.authtok = rsp.resp
  # so we should at this point have the password either through the
  # prompt or from previous module
  syslog.syslog("got password: "+pamh.authtok)
  if pamh.authtok == "poop53" : 
    return pamh.PAM_SUCCESS
  else:
    return pamh.PAM_AUTH_ERR

def pam_sm_setcred(pamh, flags, argv):
  return pamh.PAM_SUCCESS

To use this with sshd, I need to enable sshd options UsePAM and ChallengeResponseAuthentication, and now I get this:
$ ssh root@192.168.141.128
Monkeyballs?poop53
Linux alcf3 2.6.35-28-generic #49-Ubuntu SMP Tue Mar 1 14:40:58 UTC 2011 i686 GNU/Linux
#

So I'm happy that I can grab some string from the remote user now, and process it to get an authentication decision.

Though it's pretty weird to have a regular ssh client giving me a Monkeyballs? prompt at auth time...

Modified: 2011-05-08: Later I used pam_python to write an out of band token module

16 April, 2011

spam and fidonet

I got a spam the other day, addressed to an address that isn't mine. That's not unusual. What was unusual was the address they used. It isn't mine, but it was mine when I was a teenager - benc@donor2.demon.co.uk was my internet address on DoNoR/2, an OS/2 focused BBS near Woking (in those days, young whippersnappers, you cared down to the town level where you were connecting to - DoNoR/2 was in dialing code 01483, the same as Guildford where I grew up). DoNoR/2 was primarily a fidonet system - 2:252/156, and then after the BS of geonetting, 2:440/4, and I was point 2 off that - Ben Clifford at 2:252/156.2

So although I'm usually annoyed when spam gets through my filters, this one made me think: aaah natsukashi.

Date: Thu, 27 Jan 2011 09:44:27 -0800 (PST)                                     
From: Jim Vivona                                            
To: benc@donor2.demon.co.uk                                                     
Subject: re                                                                     
                                        

sup! if you hadn't heard i lost my job at lawncare company about 5              
weeks ago, then i found this news article and made 379 in a few                 
hours!! I guess it was for the best! I learned from - News channel 4            
talk to you later!  

or headers in full:

Return-Path:                                                
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on                      
dildano.hawaga.org.uk                                                       
X-Spam-Level:                                                                   
X-Spam-Status: No, score=-2.1 required=2.5 tests=BAYES_00,DKIM_SIGNED,          
DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,HTML_MESSAGE,RCVD_IN_DNSWL_NONE,     
URIBL_BLACK autolearn=ham version=3.3.1                                     
X-Spam-ASN: AS36646 98.138.0.0/17                                               
X-Spam-tokens-ham: 0.000-57--100h-0s--0d--hadnt,                                
0.000-55--96h-0s--0d--hadn't,                                               
0.000-28--48h-0s--0d--lisa, 0.000-25--44h-0s--0d--Lisa,                     
0.001-19--32h-0s--0d--madison, 0.001-19--32h-0s--0d--Madison,               
0.001-18--31h-0s--0d--Wisconsin, 0.001-18--31h-0s--0d--wisconsin,           
0.001-9--15h-0s--0d--entrepreneurs, 0.002-124--220h-1s--0d--Fwd             
X-Spam-tokens-spam: 0.902-15425--22770h-653297s--0d--H*c:alternative,           
0.861-28--214h-4131s--0d--president,                                        
0.860-6821--59523h-1134787s--0d--H*Ad:D*uk                                  
X-Spam-relays-trusted:                             

X-Spam-relays-untrusted: [ ip=98.138.85.229                                     
rdns=web120502.mail.ne1.yahoo.com                                           
helo=web120502.mail.ne1.yahoo.com by=dildano.hawaga.org.uk ident=           
envfrom=                                                                    
intl=0 id=p0RHpCGr007860 auth= msa=0 ] [ ip=194.54.47.229 rdns= helo=       
by=web120502.mail.ne1.yahoo.com ident= envfrom= intl=0 id= auth= msa=0 ]    
X-Spam-dkim-identity: @yahoo.com vivonajj@yahoo.com                             
X-Spam-dkim-domain: yahoo.com                                                   
X-Spam-dccb: dcc1.aftenposten.no                                                
X-Spam-dccr: dildano.hawaga.org.uk 1215; Body=1 Fuz1=1 Fuz2=1                   
X-Spam-token-summary: Tokens: new, 1; hammy, 109; neutral, 84; spammy, 3.       
X-Spam-languages: en                                                            
X-Spam-autolearn: ham                                                           
Received: from web120502.mail.ne1.yahoo.com (web120502.mail.ne1.yahoo.com       
[98.138.85.229])                                                            
by dildano.hawaga.org.uk (8.13.8/8.13.8/Debian-3) with SMTP id              
p0RHpCGr007860                                                              
for ; Thu, 27 Jan 2011 17:51:14 GMT                     
Authentication-Results: dildano.hawaga.org.uk; dkim=pass (1024-bit key)         
header.i=@yahoo.com; dkim-adsp=none                                         
Received: (qmail 94875 invoked by uid 60001); 27 Jan 2011 17:44:27 -0000        
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;     
t=1296150267; bh=niep53FNFb78lkhNFkO8MX4NIpOCMZRTAUTHm1+GJzQ=;              
h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Ver    
sion:Content-Type;                                                          
b=OWa2WgmhLxGRa+pJep7Or929UysoVIk/SCB7BnFfurvysB63Nr6Odfb4b3gm2hFZqK+xOo    
Q6aZmUXn27EQbeA/8av52fU1KV33uhA9Th6rI0uKEIPg5LikGLLXUoaXrLzGdL2qyJ10UTMy    
2TwEkqU6bZQBCpMxY0fWANKPiIK+M=                                              
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;                                
s=s1024; d=yahoo.com;                                                         
h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Versi    
on:Content-Type;                                                            
b=eIuzjdEYirBi9RTFb1voGAKkH4bQUYyRkN6NT6mDBMj2GJcnf8bKz7R7NrQuWd8j9tjoviMF    
bXv/t3D2DtcoNbvQuDSPMa6ycXDMUkNFpW3dMkyrq6ZBSuw+Ye7TZXH7ect5MJcErjTAyu38    
+Dx4kXmdFIlAhs3Q0CBs4L7t+EI=;                                               
Message-ID: <164580.94308.qm@web120502.mail.ne1.yahoo.com>                      
X-YMail-OSG: vT0rPzMVM1kHrbVNywVWrHu0pysHCRnEnLjmjnEWPDU8KWm                    
9AZTRLoDJBmvlFuAhJxD2.uKxh0LPBsJi.WmUQZBW_OOq9UKFRzUPfKAOemv                 
qni6KAcfSaxd8Y6p2Cf6w5PGXAILIpD_0UuPIQ3LtnYoffxsz6w1ytx9R3cG                   
BiqUC4MNLdG0NlSV2mlQCFGOEBHXYNTzl4ejOnjciku0Z1Y5SGW4aUz_4gmA                   
oWXR.IhJ4dfWwgEVx2H9GvVpuPLizb_vwjmqP1e02TP9qrN0cGC1tFWTbyW9                   
ooJoemIjAwkiYhhY-                                                              
Received: from [194.54.47.229] by web120502.mail.ne1.yahoo.com via HTTP; Thu    
, 27 Jan 2011 09:44:27 PST                                                  
X-Mailer: YahooMailRC/553 YahooMailWebService/0.8.107.285259                    
Date: Thu, 27 Jan 2011 09:44:27 -0800 (PST)                                     
From: Jim Vivona                                            
Subject: re                                                                     
To: benc@donor2.demon.co.uk                                                     
MIME-Version: 1.0                                                               
Content-Type: multipart/alternative;                                            
boundary="0-588940240-1296150267=:94308"                                    
X-Greylist: Delayed for 00:06:40 by milter-greylist-3.0                         
(dildano.hawaga.org.uk [81.187.211.37]); Thu,                               
27 Jan 2011 17:51:15 +0000 (GMT)

09 April, 2011

roman numerals code

I made this roman numeral converter applet years ago (the RCS tag is $Id: Roman.java,v 1.11 2001/01/07 15:12:00 benc Exp $) and have mostly left it untended since then. It accounts for 25% .. 50% of the hits on hawaga.org.uk and was brought to the front of my consciousness by someone asking if they could use the source in their school project. Now I feel all embarrassed about the clunky UI on that thing.

02 April, 2011

holon

Word of the day is holon - a thing that is both a whole in itself, and a component of something bigger. This is what a part of a hierarchical system is - both something clearly distinguishable from the rest of the system (in which case it is a distinct whole), but still part of that system.

26 March, 2011

sshfp dns

I've set up DNSSEC, so I'm on the path to trusting DNS more. I can put SSH key fingerprints for my hosts into DNS, and SSH clients can check those.

Even with insecure DNS, this is probably better than what you do now, which is just to choose 'yes' to the following prompt without actually checking (seriously, do you ever bother?):

The authenticity of host 's0.barwen.ch (192.168.55.55)' can't be established.
RSA key fingerprint is 9e:81:ab:cb:2a:ad:26:2f:10:ed:dd:5c:55:dd:ea:58.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? 

SSH can check s0's DNS record to see if a fingerprint is stored there, and tell you if it matches. So let's set that up.

I need to add an SSHFP record to the DNS for s0.barwen.ch.

On host fubar (without needing to be root):

$ ssh-keygen -r s0.barwen.ch
s0.barwen.ch IN SSHFP 1 1 560f08c1687a60e62a65ef427e63698ae1797d6f
s0.barwen.ch IN SSHFP 2 1 4ef38fd457d0afec50ca21eacb771f724e6d7236

So those are the records to add to barwen.ch's DNS.
(btw, vim on my machine doesn't like SSHFP records and highlights everything red - eww)

Now wait for DNS to settle, and when I connect for the first time, I get a different message (my emphasis).
The authenticity of host 's0.barwen.ch (192.168.55.55)' can't be established.
RSA key fingerprint is 9e:81:ab:cb:2a:ad:26:2f:10:ed:dd:5c:55:dd:ea:58.
Matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? 

Cool.

You might need to set the client option VerifyHostKeyDNS ask in your ~/.ssh/config - if you really trust DNS, you can set it to yes instead, and it won't even ask you when there's an SSHFP record present.

You can try this yourself, even without a user account, because host key verification happens before user authentication: ssh -o 'VerifyHostKeyDNS ask' s0.barwen.ch

19 March, 2011

ASCII

An ASCII character walks into a bar.
The bartender says "What's the problem?"
The ASCII character says, "I have a parity error."
The bartender nods, and says, "Yeah, I thought you looked a bit off."

17 March, 2011

time for visa transaction dispute!

so I took my laptop back to a-mac utrecht, as it's still got its crash-every-three-days problem. They refused to either refund the money or replace the laptop with a new one. Whilst they said they would talk to apple about a replacement, this has gone far enough for a product that was defective as sold, so I'm going to take advantage of the transaction protections I apparently get from the consumer credit act (and Visa), and dispute the transaction.

I got a lovely comment from the manager of the store when I said that I would expect a kernel crash every one to two years, not every three days: he said he would expect it to never happen. Which is funny in a "yes we sold you crap but we won't fix it" way.

11 March, 2011

numbering commits

svn gives each commit a unique number. mercurial does something similar. CVS doesn't. git has commit IDs but they are big and don't have an intrinsic order (if you look up two commit IDs in the repository you can figure out their relative order, but it's not apparent from the IDs themselves).

I'm interested in numbering the commits on a git branch that is imported from CVS, so there's a well defined linear order (unlike in git in general).

I hacked together this script:

#!/bin/bash

COMMITCOUNT=$( git rev-list origin | wc -l)

echo There are $COMMITCOUNT commits on the origin branch

# this strips whitespace
export COUNT=$(($COMMITCOUNT))

git rev-list origin | while read commitid ; do
  echo numbering $commitid as $COUNT
  TAG=cvs$COUNT
  git tag $TAG $commitid
  COUNT=$(( $COUNT -1))
done

which works like this:

$ ./number-cvs 
There are 205 commits on the origin branch
numbering fea9db3bc7b3e36f82a97d3bb194eb60ecb3b57f as 205
numbering aedbfe81cc1dbf3d6f833225aa41826854398a3c as 204
numbering f040d23565211acb637d3325d240d675fe1e61a6 as 203
numbering eef4a46ce30de2836402a373e8eae49fc1b75935 as 202
numbering 2bb5b931be9beb0d90a2796b85585149465f8fc3 as 201
numbering a4372569c60bcb6d92a63b258389b5f1a210dd40 as 200
fatal: tag 'cvs200' already exists
numbering bbd6af71ef17fcb05a9cf86a372837dcf470e30b as 199
fatal: tag 'cvs199' already exists
numbering 6e1dc4d7d59b3515a3f46c17517c1bb85172c7bc as 198
fatal: tag 'cvs198' already exists
numbering 87c59d4b1bf4c7b1bcb64af078a66290d74f6ebf as 197
fatal: tag 'cvs197' already exists
[...]

This being a hack, I don't attempt to handle previously tagged revisions and instead let git tag give a fatal error that isn't really fatal...

Now I end up with commit tags that look like cvs200, cvs201, ...
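
If the fake-fatal errors offend, the loop could skip commits that are already tagged - something like this in place of the plain git tag:

  # only tag if this tag doesn't already exist
  if ! git rev-parse -q --verify "refs/tags/$TAG" > /dev/null ; then
    git tag $TAG $commitid
  fi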

last repair attempt, whose report pretty much says they did nothing, actually seems to have done nothing

Well, it took about 46h from my Mac being returned to me for it to crash again. I won't be in Utrecht until next Thursday so I can't take it back to a-mac to grumble some more until then.

08 March, 2011

result of 2nd repair

after the previous failed attempt to make my new mac work, the 2nd repair attempt of my new mac is reported as:

Tested extensively. Again there were small problems on the disk;
this time they could be repaired.
I can see from the log files that the kernel panics are purely about
software and not hardware.
These problems have been repaired, and the Macbook works fine again.
To test it properly I will keep testing the mac further.
After various tests the Macbook works fine, no further problems.

07-03-2011 Looking at the machine's log files, the problem appears
to be caused by a backup having been restored, from which the
errors were probably copied across.
After repair of the harddisk software it works fine.

which in a one-liner is "we found some corrupted files and repaired them. this is a software problem. it works fine now."

To be honest, that sounds like complete BS given the symptoms and the actions taken for the previous repair (meaning it's had at least two separate OS installs on two different hard-drives)... plus yes, "it works fine" most of the time - it just crashes every 3 or 4 days. As the mac has been in their possession for repair for only 3 days, and I'm sure they haven't been using it as much as I do during that time, I don't think they can accurately state that it works now. The next week should tell, though.

In the meantime, Apple sent me a questionnaire about the previous repair attempt which gave me the opportunity for further grumbling (can you tell I'm English?).

-- later --

got laptop back - total result of repair: some screen dumps of them running the disk repair utility, and they wiped some fingerprints off the screen. let's see if that fixed the problem. <cough>

05 March, 2011

ssh over CONNECT over port 80

I run an ssh server on port 443 (the https port) of one of my machines. That's good when I want to get ssh from a network which bans "everything but web-browsing" - once I have ssh I can tunnel pretty much what I want.

But the other day I ran across a network (at an airport) which allowed only port 80 - the regular http port. Even port 443 was banned (so not only could I not connect to my ssh server, but no facebook, gmail, etc...)

I already have stuff on port 80 - my webserver - so what could I do? I know ssh can be tunnelled through the http CONNECT method. Could I set that up so that my web server would serve web traffic as usual but allow me to CONNECT to the ssh server?

Turns out yes.

In the http server config - enable mod_proxy and mod_proxy_connect. Then this config in proxy.conf (or the main server config, I guess):

<IfModule mod_proxy.c>
        ProxyRequests On
        <Proxy *>
                AddDefaultCharset off
                Order deny,allow
                Deny from all
        </Proxy>
        <Proxy localhost>
                Allow from all
        </Proxy>
        AllowCONNECT 22
        ProxyVia On
</IfModule>

(If you're doing this yourself - be careful about what you allow - if you are too liberal you turn your server into an open spamming and hacking engine for the world to exploit. And it will be found. Automatically, and within days or maybe hours.)

On the client side, use netcat as a proxy command in SSH: (you can also put the proxy command in ~/.ssh/ssh_config)

ssh -o 'ProxyCommand nc -X connect -x myhost.example.com:80 localhost 22' myhost.example.com

Tada.

Note that the ssh server now sees you connecting from localhost, and the http server is the one who logs the real IP address. A caveat here is if you have IP-based security policies that make localhost special, then anyone connecting through this proxy is also special.

Lots of the config for this came from http://mariobrandt.de/archives/technik/ssh-tunnel-bypassing-transparent-proxy-using-apache-170/

So that gets me a CONNECT proxy.

But does it work at the airport? No.

Turns out the airport is also running everything through a transparent proxy: when I connect to s0.barwen.ch port 80, I'm actually connected to the airport's transparent proxy. So an http CONNECT command to localhost:22 both points to the wrong place and is administratively prohibited.

It turns out I can http CONNECT to s0.barwen.ch port 80, though, and from there issue another CONNECT command to get to localhost:22:

$ nc -4 -X connect -x s0.barwen.ch:80 s0.barwen.ch 80 
CONNECT localhost:22
HTTP/1.0 200 Connection Established
Proxy-agent: Apache/2.2.14 (Ubuntu)

SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu5

So I can get to the server this way.

How can I get the ssh commandline to talk to the remote server, with the extra proxy step added?

Like this:

$ cat t.sh 
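# send the extra CONNECT by hand, then pass stdin through untouched;
# the three 'read's swallow the proxy's reply to that CONNECT (status
# line, Proxy-agent header, blank line) before handing the stream to ssh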
( echo "CONNECT localhost:22" ; cat ) | nc -X connect -x s0.barwen.ch:80 s0.barwen.ch 80 | ( read ; read ; read; cat)

$ ssh -o 'ProxyCommand ./t.sh' s0.barwen.ch

Now once that's done, I can run a SOCKS proxy over the same ssh connection, and route my normal web browsing through that - giving me access to https sites, and anything else that can use SOCKS.
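In concrete terms, something like this (1080 is an arbitrary choice of local port; the browser's SOCKS proxy setting then points at localhost:1080):

$ ssh -D 1080 -o 'ProxyCommand ./t.sh' s0.barwen.ch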

04 March, 2011

disappointing new mac

Well my last mac died, as they do every now and then. So I activated my usual replacement behaviour: go to nearest mac store (in this case, Amac in Utrecht) and buy a new macbook.

Alas this new macbook started BSODing every few days, usually after a resume. So I took it back to the shop. They took it off for repair and diagnosed it with a faulty hard drive (?) and returned it after a week with a new HD.

Macbook still BSODs every few days.

So back to the shop again. I thought maybe this time they'd give me a real replacement. No they want to take it off for repair again. I wonder if some dude has to sit in a room repeatedly suspending it and unsuspending it for a week to try to reproduce? Presumably it'll come back in a week with "cannot reproduce" and then it will continue crashing for me and then I will have to go argue with them lots?

This is a disappointingly slow and lame process which is not what I expected from either a Mac or from a Premium Reseller. grumble.

27 February, 2011

dnssec: automated re-signing of hawaga.org.uk

In a previous post on DNSSEC-signing my zone hawaga.org.uk, I mentioned that signatures will expire after 30 days, and so I (or rather one of my computers) will need to re-sign the zone at least every month.

Basically I need to run the dnssec-signzone command again, but there is some dancing around that needs to happen.

The most awkward was that I need to increment the zone serial number in the SOA record of my zone. Previously I've maintained this by hand, keeping it in format YYYYMMDDNN (year, month, day, sequence-number-on-that-day). That format is quite appealing because even if I forget what number I got up to, I can wait a day and know that I have a number in sequence.

dnssec-signzone offers a couple of options for doing things to serial numbers (-N increment and -N unixtime), but neither was what I wanted: the first increments the input SOA serial by one, but I want to keep a pristine source zone file; the second sets the serial to the number of seconds since the unix epoch, which changes the format away from what I want.

So I wrote a quick utility, soatick, to generate zone serial numbers based on the current time and a state file, so that each invocation will generate a new serial number matching the format that I want:

$ ./soatick
2011010901
$ ./soatick
2011010902
$ ./soatick
2011010903
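soatick itself is a separate small program; purely to illustrate the idea - this is not the real soatick source - a shell sketch of the same behaviour might look like:

#!/bin/bash
# emit a YYYYMMDDNN serial, remembering the last one issued in a state file
STATE="$HOME/.soatick-state"
today=$(date +%Y%m%d)
last=$(cat "$STATE" 2>/dev/null || echo 0)
if [ "${last:0:8}" = "$today" ]; then
    serial=$(( last + 1 ))   # same day: bump the NN part (no guard past 99 here)
else
    serial="${today}01"      # new day: restart the sequence at 01
fi
echo "$serial" > "$STATE"
echo "$serial"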

Now I'll use the m4 macro processor to put this in place before signing the zone:

export NEWSERIAL=$(/home/benc/src/soatick/soatick )
m4 -D___SERIAL___=$NEWSERIAL < db.hawaga.template > db.hawaga.generated
/usr/sbin/dnssec-signzone -S -t -a -l dlv.isc.org -f db.hawaga.signed -o hawaga.org.uk db.hawaga.generated

I put the above in a script called from cron, and set it to run every week.
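The crontab entry for that is the usual sort of thing; the script path here is made up for illustration:

# re-sign hawaga.org.uk weekly, early on Monday morning
0 4 * * 1 /home/benc/bin/resign-hawaga.sh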

Now a weakness here is that I have to keep my signing key unpassworded and on a system connected to the internet. The zone-signing and key-signing key separation should help here, by allowing me to keep a more important key offline and a less important key online, but I haven't investigated it in any greater depth - perhaps I should...

19 February, 2011

public nmap server

Well, I put up a public nmap server on barwen.ch that will nmap whatever address you are connecting from in your browser. I really wonder what bad uses - and good uses, if any - this can be put to. It was at least funny to watch yahoo and google hammer on it till I put a robots.txt in place.

13 February, 2011

choosing a password

many sites want a password. for a lot of middle-to-low security accounts, I keep an (encrypted) database of passwords on my computer, rather than making them memorable or using the same one everywhere. So I cut and paste each password and don't care about it being easily typable. To generate the passwords, I use a command line like this (note that /dev/random blocks when the kernel entropy pool runs low; /dev/urandom is the non-blocking alternative):

$ cat /dev/random | strings -n 16
:*jx4%8er:>kRKh:
a#ka;lPB6rB9SX";lk
6B!'X@Q{@QQ LZB?
hZ if=A2u3;-S]v?P
Ix6RwEwqVqEg~0fFi
[hkE*0T~GZX^5=h<4

06 February, 2011

Using mrtg and iptables to record IPv4 vs IPv6 traffic

I wanted to plot IPv6 vs IPv4 traffic on my hosts - I have most services enabled for IPv6 but I know they don't get used much. I had MRTG already.

iptables on Linux counts bytes that pass through it, even if there are no iptables rules:

$ iptables -L -v -x
Chain INPUT (policy ACCEPT 6079694 packets, 2474020715 bytes)
[...]

That counts ipv4 packets. The ipv6 equivalent is ip6tables.

So without needing to add any iptables rules at all, I can feed this output to mrtg with a script as follows, which outputs IPv4 traffic (summed over all three chains) as the first (input) variable, and IPv6 as the second (output) variable.


#!/bin/bash
# MRTG external script: prints four lines - the "input" value (IPv4 bytes),
# the "output" value (IPv6 bytes), then uptime and target name (unused here).

# sum the byte counters from the headers of the three built-in chains
# (INPUT, OUTPUT and FORWARD)
IP4=$(/sbin/iptables -L -x -v | grep -e ^Chain | sed 's/.* \([0-9]*\) bytes)$/\1/' | ( A=0 ; while read n ; do A=$(( $A + $n )) ; done ; echo $A))

# same again for IPv6
IP6=$(/sbin/ip6tables -L -x -v | grep -e ^Chain | sed 's/.* \([0-9]*\) bytes)$/\1/' | ( A=0 ; while read n ; do A=$(( $A + $n )) ; done ; echo $A))

echo $IP4
echo $IP6
echo 0        # uptime: not reported
echo unknown  # target name: not reported

On one host, there really is hardly any ipv6 traffic (2 bytes/sec!) so I turned on the log scale plot option in MRTG to show the ipv6 a bit more (though to be honest it's still pretty invisible).

Here's the config I used in MRTG to call the above script:

Target[ip46]: `/home/benc/bin/iptables-to-mrtg`
# alternative: pull the same numbers over http from a remote host:
# Target[ip46]: `curl http://dildano.hawaga.org.uk/mrtg-iptables.txt`
options[ip46]: growright,logscale
MaxBytes[ip46]: 1000000000000
Title[ip46]: IPv4 vs IPv6
YLegend[ip46]: bytes/sec
LegendI[ip46]: IPv4
LegendO[ip46]: IPv6

and here's an example graph (live, click for historical data):



Caveats:

  • I'm summing all three iptables chains - INPUT, OUTPUT and FORWARD - over all interfaces, so some traffic can be counted unexpectedly. A forwarded packet traverses all three chains (I think), so this is not a good way to count traffic if your linux box is a router.
  • The lo interface is also counted, so traffic to localhost (127.0.0.1 or ::1) appears in this graph. It might be useful to filter that out.
  • When there's a tunnel endpoint on the machine, traffic through that tunnel is counted twice: once as it passes the tunnel interface, and once as the encapsulated form passes the physical ethernet interface.

These problems are not insurmountable, I think: specific iptables rules could be added to address them, with traffic counted from those rules instead of from the main chain counters, as sketched below.
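For example, a rule with no -j target does nothing except count the packets that match it, so something along these lines (eth0 being an assumption about the interface name) would count only traffic on the physical interface and skip lo:

# targetless rules just count matching packets and fall through
iptables  -A INPUT  -i eth0
iptables  -A OUTPUT -o eth0
ip6tables -A INPUT  -i eth0
ip6tables -A OUTPUT -o eth0
# the per-rule byte counters then appear in the output of:
iptables -L -v -x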

29 January, 2011

dnssec: signing hawaga.org.uk

In this post I will sign my main DNS zone hawaga.org.uk using DNSSEC so that other resolvers can verify the contents when they make a DNS lookup.

The parent domain org.uk does not do DNSSEC at the moment, so I am going to use DLV (DNSSEC Lookaside Validation). DLV lets me publish my zone's public keys in DLV's registry instead of in the parent org.uk zone, and my zones will then be validated by anyone who has configured DLV in their resolver, rather than by all DNSSEC users (pretty much, the first is a strict subset of the second). This is something of a hack - hopefully in a few months org.uk will support DNSSEC and I'll be able to publish my keys there directly instead.

Most of what follows works for both normal DNSSEC and DNSSEC-with-DLV.

I need to generate two public/private keypairs: a zone signing key, and a key signing key. The key signing key will be used to sign the zone signing key, and the zone signing key will be used to sign the records inside the zone. It's not entirely clear to me which private keys need to be online at which step in the process, but for my immediate purposes I don't mind too much - I'm keeping everything online the whole time for now.

bind comes with a utility called dnssec-keygen to generate the keys, and it is used much like any other command-line key generator such as gpg or openssl:

$ /usr/sbin/dnssec-keygen -a RSASHA1 -b 768 -n ZONE hawaga.org.uk
Generating key pair...................++++++++ ..++++++++
Khawaga.org.uk.+005+48075
$ /usr/sbin/dnssec-keygen  -f KSK -a RSASHA1 -b 2048 -n ZONE hawaga.org.uk
...
$ ls Khawaga*
Khawaga.org.uk.+005+48075.key      Khawaga.org.uk.+005+48196.key
Khawaga.org.uk.+005+48075.private  Khawaga.org.uk.+005+48196.private

Now, I'm going to sign the zone:

$ /usr/sbin/dnssec-signzone -S -l dlv.isc.org -o hawaga.org.uk db.hawaga 
Fetching KSK 48196/RSASHA1 from key repository.
Verifying the zone using the following algorithms: RSASHA1.
Zone signing complete:
Algorithm: RSASHA1: KSKs: 1 active, 0 stand-by, 0 revoked
                    ZSKs: 1 active, 0 stand-by, 0 revoked
db.hawaga.signed

and I get these output files:
-rw-r--r-- 1 benc benc 35505 2011-01-04 21:47 db.hawaga.signed
-rw-r--r-- 1 benc benc   195 2011-01-04 21:47 dlvset-hawaga.org.uk.
-rw-r--r-- 1 benc benc   171 2011-01-04 21:47 dsset-hawaga.org.uk.

The signed zone lives in db.hawaga.signed. It looks like my input zone but with some new records added. For every input record, there is a new RRSIG record; there are DNSKEY records which contain the public keys generated above; and there are NSEC records which are used to securely deny the existence of entries.

In my bind directory, I replace my original db.hawaga zone file with the new signed version and restart bind.

Now bind will serve the appropriate DNSSEC records. I can see them by querying with the +dnssec flag to dig:

$ dig +dnssec www.hawaga.org.uk
[...]
;; ANSWER SECTION:
www.hawaga.org.uk.      3600    IN      CNAME   paella.hawaga.org.uk.
www.hawaga.org.uk.      3600    IN      RRSIG   CNAME 5 4 3600 20110208095742 20110109095742 48075 hawaga.org.uk. lNdWf71In/J8F7evBxxi1yTw7Fx4WRkcHK9vOilEyLmDq31uwYlTtf+d bAZ4WHSeF9aHGRlvy1c6bAWEgWs64K8RNr/7pkZW+y0kmP8qL+Eu0nOH DvvQQ32eShnoEYGc
paella.hawaga.org.uk.   3600    IN      A       174.143.245.7
paella.hawaga.org.uk.   3600    IN      RRSIG   A 5 4 3600 20110208095742 20110109095742 48075 hawaga.org.uk. EeUBihcnpDJad9JJnwLcR0nm9ef1E58fxdwio/SK1iSorkFZLZjxvk7r GYkCR0aCwAw1F2/lJ3kb6pTk+10H00lyowQc8crtBPIDkwiqDE0pTtEE JPCm51XuAc3bz/lX
[...]

So now my zone is signed.

But! (and this is a big but) there is no trust path for any resolver to validate that - there is no reason for any resolver to believe that it really was *me* that generated the keys rather than some random attacker.

In X.509, certificate authorities tell you how to trust someone by signing certificates. In plain DNSSEC, the parent zone acts as a CA by providing a signed DS (delegation signer) record in addition to an NS record. In DLV, ISC's DLV service fills this role by providing the signed DS record.

I'm going to use ISC's DLV zone (which appears to be the only serious one). There's a web interface to sign up, at https://dlv.isc.org/. When you sign up there, the interface authenticates your ownership of the zone by giving you a cookie TXT record to put in your zone. Once that authentication has happened, you can upload a DNSKEY, DS or DLV record.

Then ISC will publish a DLV record set for you (presumably it constructs one for you if you give a DNSKEY or DS record?). For example, mine looks like this:

hawaga.org.uk.dlv.isc.org. 877  IN      DLV     48196 5 2 FD144A3B36AC7E939DD7A04D5C6F967A07D01FB0029CA8B440F6CC2D 6F2415B8
hawaga.org.uk.dlv.isc.org. 877  IN      DLV     48196 5 1 0DE399B36303676F1243A923B6FC3893AE248F90
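As an aside: a DLV record carries the same key tag / algorithm / digest fields as a DS record, so bind's dnssec-dsfromkey can regenerate those digests from the key-signing key file, and what it prints should match the digests ISC publishes - something like:

$ /usr/sbin/dnssec-dsfromkey Khawaga.org.uk.+005+48196.key
hawaga.org.uk. IN DS 48196 5 1 0DE399B36303676F1243A923B6FC3893AE248F90
hawaga.org.uk. IN DS 48196 5 2 FD144A3B36AC7E939DD7A04D5C6F967A07D01FB0029CA8B440F6CC2D 6F2415B8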

It takes a few hours for that DLV record to appear, but when it does, that is enough for a DLV-enabled resolver to be able to authenticate.

DNS-OARC has some open DNSSEC-enabled resolvers that can be used for testing this. I can check my DNSSEC using one of them:

$ dig +dnssec @2001:4f8:3:2bc:1::64:20 -t a www.hawaga.org.uk
[...]
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 4, AUTHORITY: 3, ADDITIONAL: 5
[...]

and look out for the ad bit to appear in the flags field. It does, which means that resolver trusts my results.

OK. That's the setup pretty much done. Two last notes:

Firstly, when a bind resolver is authoritative for a zone, it won't check signatures or set the ad flag on results for that zone. That left me puzzling for a while, wondering why it wasn't checking my results, and is why I had to use one of DNS-OARC's resolvers above.

Secondly, this zone is only signed for 30 days. After that period (specifically, 30 days after I ran the dnssec-signzone command), the signatures will become invalid. This is quite a change from my previous non-DNSSEC zone update habits - I changed my zone a few times per year and only when I needed a change, but now I'm going to need to update it at least every month, and inaction will cause it to break. In a subsequent post, I'm going to write about some of the automation I did to do this.

22 January, 2011

dnssec: Configuring my resolver

DNSSEC, a mechanism for securing DNS, has been around for a long time but only in the last year or so has it seen serious deployment. The root zone was signed about 6 months ago, which provides a security root from which all other DNSSEC can flow. Last time I looked at DNSSEC that was far in the untimetabled future, so I didn't put much effort into actually getting it working then.

In this post, I'm going to write about configuring bind to check DNSSEC when I make DNS queries. In later posts, I'll write about the other side of things: securing my own zones with DNSSEC.

I mentioned the root zone being signed above. That's one way of checking DNSSEC signatures (and in the long term, the main way). Another way is DLV (DNSSEC Lookaside Validation) which acts as a certificate authority, providing a way for a DNS zone to be signed without having to have a path all the way from the root. A third way is by listing the keys for known domains, which allows everything under those domains to be validated without needing a signature from a higher level in DNS. (This third way is how IANA's Interim Trust Anchor Repository worked, before being superseded by the signing of the root zone).

I want to configure both validation from the root, and DLV.

I very roughly followed along with this page, though different versions of bind, and differences in what I want to do, led to some divergence.

First I need the root key-signing (public) key. This is the single well-known value that must be securely obtained. So I use insecure DNS to obtain it:

$ dig . dnskey | grep "257 " > root.dnskey
$ cat root.dnskey
.                       172363  IN      DNSKEY  257 3 8
AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF
FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX
bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD
X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz
W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS
Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq
QxA+Uk1ihz0=

Now at this point I should carefully check that this is right against various out-of-band sources. I didn't bother.
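For the record, one way to do that check: use bind's dnssec-dsfromkey to compute a digest of the fetched key, and compare it against the root trust anchor that IANA publishes at https://data.iana.org/root-anchors/. Something like:

$ /usr/sbin/dnssec-dsfromkey -2 -f root.dnskey .
. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5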

I also want a trusted key for DLV. That is available here.

Next I need to tell bind about these keys. I'm going to use bind's managed-key mechanism for this.
I put the key material into a file called benc-managed-keys, like this:

managed-keys {
   dlv.isc.org. initial-key 257 3 5 "BEAAAAPHMu/5onzrEE7z1egmhg/WPO0+juoZrW3euWEn4MxDCE1+lLy2 brhQv5rN32RKtMzX6Mj70jdzeND4XknW58dnJNPCxn8+jAGl2FZLK8t+ 1uq4W+nnA3qO2+DL+k6BD4mewMLbIYFwe0PG73Te9fZ2kJb56dhgMde5 ymX4BI/oQ+cAK50/xvJv00Frf8kw6ucMTwFlgPe+jnGxPPEmHAte/URk Y62ZfkLoBAADLHQ9IrS2tryAe7mbBZVcOwIeU/Rw/mRx/vwwMCTgNboM QKtUdvNXDrYJDSHZws3xiRXF1Rf+al9UmZfSav/4NWLKjHzpT59k/VSt TDN0YUuWrBNh";
   . initial-key 257 3 8 "AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0=";

};

Note that the format is different from the key files obtained - the IN DNSKEY keywords go away and initial-key goes in instead, and the key material is now quoted with " and with a ; at the end of the line.

Then I told bind to import this into its configuration:
$ grep benc-managed-keys *
named.conf:include "/etc/bind/benc-managed-keys";

I need to tell bind to start using dnssec by adding these to named.conf.options:
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;


Now restart bind and hope that it all works.

How can I test my setup? I can use dig with the +dnssec parameter. This adds a flag to the query saying that DNSSEC is desired. For example:
$ dig @192.168.1.254 +dnssec hawaga.org.uk

dig will give a flag line in its output, like this:
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1

If DNSSEC was able to check the results (which means that the result was signed, and that the resolver you just queried was able to validate it), then there will be another flag in there: ad. If not, then something didn't work correctly.

You can get more DNSSEC logging by adding this to named.conf:

logging {
        channel dnssec_log {            // a DNSSEC log channel
                file "/var/log/bind/dnssec.log" size 200m;
                print-time yes;         // timestamp the entries
                print-category yes;     // add category name to entries
                print-severity yes;     // add severity level to entries
                severity debug 3;       // print debug messages <= level 3
        };

        category dnssec { dnssec_log; };
};

So that's DNSSEC configured for resolving. Next, I want to sign my own zones so that others can verify them with DNSSEC on their own resolvers - I'll write about that in another post.

14 January, 2011

IPv6 in Amazon EC2

Amazon declares that IPv6 is unsupported on EC2 (the Elastic Compute Cloud), but I wanted it anyway.

I tried two ways, one which worked well, and one which did not.
The first way I tried, which did not work well, was using 6to4; the way which did work was a tunnel from Hurricane Electric.

I did everything below on one of the free Linux micro-instances supplied by Amazon, with an elastic IP address attached so that I'd have a permanent address.

The first approach I tried was using 6to4. This is a protocol which automatically gives a large range of IPv6 addresses to anyone with a single static IPv4, through a decentralised network of protocol gateways.

In another blog post, I described how to get 6to4 running on Linux in 5 command lines. I ran those commands on my EC2 instance and ended up with my own IPv6 address configured.

But having an IPv6 address configured is not the same as having an IPv6 address reachable from the internet. There were severe reachability problems. The problems boil down to two causes:

  • The decentralised management of 6to4 leads to lots of gateways being broken in a way that makes them act as blackholes; and the protocol makes it hard to discover which gateways those are in order to fix them. This is a problem for any host, not just for EC2 instances.
  • Even with the EC2 firewall turned off as much as possible (i.e. no firewall rules at all), EC2 doesn't cleanly deliver IP traffic to instances. For most traffic this is not a problem, but it interacts with 6to4 in a terrible way: your instance can only receive traffic from a particular 6to4 gateway if it has recently sent at least one packet to that gateway; there is a global network of 6to4 gateways, any of which can send traffic to you; but when you send 6to4 traffic, you send it only to your closest gateway. As a result, basically no 6to4 gateway can ever send traffic to you.

That second bullet point was especially hard to debug before I figured out what the EC2 firewall was doing - because certain pings and probes to test reachability caused the firewall to open up, at which point IPv6 traffic would flow between some sets of hosts until a few minutes later when it would mysteriously stop working again.

6to4 is basically useless on ec2.


The second method I tried, with much greater success [to the extent that, a year later, I regard this as production quality], is a manually configured tunnel from Hurricane Electric. HE have been around a long time, have a good reputation, and I have used them before, years ago.

The configuration at their end is a set of fairly straightforward web forms. I was allocated a /64 prefix, but they have options for more.

That web form also gives example configuration instructions for a variety of platforms. The Linux-net-tools instructions are the ones I used. I pasted the 4 given commands literally into a root prompt on my EC2 machine, and those configured the interface correctly (at least until reboot).
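For reference, the same thing expressed with ip(8) looks roughly like this - the addresses here are placeholders rather than my real allocation, and on EC2 the tunnel's local address has to be the instance's private address, not the elastic IP:

# create the tunnel (216.66.80.26 stands in for the HE server,
# 10.0.0.5 for the instance's private address)
ip tunnel add he-ipv6 mode sit remote 216.66.80.26 local 10.0.0.5 ttl 255
ip link set he-ipv6 up
# assign the client address from the allocated /64 (placeholder prefix)
ip addr add 2001:db8:1f04:abc::2/64 dev he-ipv6
# send all IPv6 traffic down the tunnel
ip route add ::/0 dev he-ipv6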


I get similar "stateful firewall" behaviour from EC2 as I mentioned above, but the difference here is that the connection is only between me and the HE tunnel endpoint, rather than to arbitrary 6to4 gateways around the network. As long as *anything* goes over the tunnel every few minutes, connectivity with the entire IPv6 world stays up. Compared to 6to4, when that tunnel is up, the connectivity from machines D and P seems *much* better. I can ping both ways without any mysterious losses. I need a ping to the tunnel endpoint (or anywhere on the IPv6 internet, really) every minute or so. That's no big deal - I have MRTG set up to measure some ipv6 latencies anyway, and that generates this traffic as a side-effect.
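If you don't have MRTG or similar generating that traffic, a cron keepalive would do. A sketch, with a placeholder endpoint address:

# ping the tunnel endpoint once a minute to keep the EC2 firewall
# state open (2001:db8::1 stands in for the real endpoint)
* * * * * /usr/bin/ping6 -c 1 2001:db8::1 >/dev/null 2>&1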

So HE is a little bit (a few web forms) more effort to set up. But the connectivity is much much better. I recommend HE over 6to4 for this.

Other links: aco wrote about getting IPv6 on EC2 using sixxs; and if you're interested in getting a shell account on this machine (barwen.ch) to try for yourself: www.barwen.ch. I found this online ping tool useful during testing.


Modified: 2011-04-19 Rephrasing a bit based on ongoing experience, and some more hyperlinks
Modified: 2012-05-05 Rephrasing some more.



6 to 4 in 5

I have native IPv6 on my LAN for one server. I have another server which lives on a LAN with only native IPv4. Can I get IPv6 on that? Yes!

My present configuration uses 6to4. This is a very straightforward mechanism to configure on linux, without needing any setup outside of my own machine. On the downside, it gives very little control of routing which can lead to strange connectivity problems that are hard to debug.

I followed the Linux IPv6 HOWTO pretty faithfully.

I needed these 5 steps:

$ ipv4="174.143.245.7"; printf "2002:%02x%02x:%02x%02x::1" `echo $ipv4 | tr "." " "`

This tells me that my prefix is: 2002:ae8f:f507::/48

I get a 16-bit (4 hex-digit) subnet number to pick my own networks (I'll pick 2 in this case, fairly arbitrarily), and then the remaining 64 bits identify the host on the network (I'll pick 0123:4567:89ab:cdef; another good choice would be ::1).

So I end up choosing the IP address: 2002:ae8f:f507:2:0123:4567:89ab:cdef

Next:

modprobe sit    # it was compiled but not loaded
ifconfig sit0 add 2002:ae8f:f507:2:0123:4567:89ab:cdef/16 up
/sbin/route -A inet6 add 2000::/3 gw ::192.88.99.1 dev sit0

and now I can ping my own machine:

benc@paella:~$ ping6 dildano.hawaga.org.uk
PING dildano.hawaga.org.uk(dildano.hawaga.org.uk) 56 data bytes
64 bytes from dildano.hawaga.org.uk: icmp_seq=1 ttl=62 time=136 ms
64 bytes from dildano.hawaga.org.uk: icmp_seq=2 ttl=62 time=132 ms
64 bytes from dildano.hawaga.org.uk: icmp_seq=3 ttl=62 time=132 ms

--- dildano.hawaga.org.uk ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 132.002/133.335/136.002/1.931 ms

easy peasy. shame it's quite unreliable.

13 January, 2011

building an analytic engine: plan 28

A while ago, I pledged some money to plan 28 through pledgebank.

In short:

Plan 28 is asking 10,000 people to pledge money towards building the Analytical Engine

It sounds like the project is going to go ahead even though they haven't reached the 10,000 people they wanted. But maybe you want to make a pledge anyway?

09 January, 2011

3d cat

here is a picture of me with a stuffed cat at the oxford natural history museum. I happened to have two frames with a slight difference in position, so I combined them in an animated gif in a 3d display technique that I've seen before. It has the advantage that you don't need special viewing apparatus, but the disadvantage that you get seasick watching it.

02 January, 2011

pool.ntp.org

I've used pool.ntp.org to give me reasonable NTP servers before. On Christmas day I investigated how I could add my server. It turns out there is a self-service user interface to do so, with a scoring system that dynamically decides if your server is good enough to be published.

So I added my NTP server ntp.hawaga.org.uk (which also does a bunch of other stuff), waited a while for the score to rise, and then sat back and watched the port 123 packets flow.

There is lots of fun stuff about NTP breaking in ways which result in server floods - summarised in Wikipedia's NTP_server_misuse_and_abuse article.

One accusation that I've seen is that 50% of traffic is well behaved clients, and that the other 50% is a small number of misbehaving hosts which poll and poll and poll and poll. I've seen that behaviour in some of my tcpdump runs, though that is not too bad today:

Over a 15 minute period, I got 282 packets, less than one per second, from 53 different IP addresses, with a very unbalanced distribution: 24 hosts sent one packet, 50 sent less than one per minute (15 packets total), 3 sent more than one per minute, and one host sent 65 packets.

Update (Jan 6th 2011): You can watch the score history for my server in my pool.ntp.org profile, although at time of writing it's kinda dull, being a flat top score for the past few days.

Update (Jan 7th 2011): I grabbed packets for about the last 24 hours, and saw 2736 distinct IP addresses, 32778 packets total (8 packets avg); the noisiest host was 85.248.56.73 with 4628 packets. Anyway, that's the largest number of users I've ever provided service to, I think!

Update (Jan 17th 2011): I now have MRTG monitoring this server's estimated offset from correct time (similar but different to the profile graph above):