Notice: This article was originally published on the blog ioexception.de (in German).
To get more familiar with Processing and OpenGL, I wrote a graphical frontend for the Unix program traceroute. The output of traceroute is a list of the stations a packet passes on its way through the network. This makes it easy to debug network connections, for example.
Technically this is realized with the "Time To Live" field in the header of IP packets. The TTL entry describes after how many stations a packet should be discarded. Each router the packet passes decrements this field. Once the TTL reaches 0, the packet is discarded and the sender is notified with the ICMP message TIME_EXCEEDED.
traceroute makes use of this and repeatedly sends packets to the destination host, incrementing the TTL with each packet until the destination is reached. The hosts on the route give notice of themselves via ICMP messages. This way we gather information about the hosts and can hopefully identify the individual stations on the route. The route is not necessarily correct: there are several reasons for possible variations, e.g. firewalls often disable ICMP completely.
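To illustrate the TTL trick, here is a minimal, hypothetical Python sketch (not part of the actual frontend) that sends a single UDP probe with a small TTL; actually receiving the TIME_EXCEEDED reply requires a raw ICMP socket and root privileges, which is omitted here:
# Hypothetical illustration of the TTL mechanism, not the frontend code.
# The probe below is discarded by the third router on the path, which then
# answers with an ICMP TIME_EXCEEDED message.
import socket

probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)  # discard after 3 hops
probe.sendto(b"", ("domain.org", 33434))  # traceroute's traditional base port
probe.close()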
For the visualization I have tied traceroute to Processing. For further explanations on how to do this, see my blog post at ioexception.de; though the post is in German, the code will make things clear. It is not really complicated: the frontend reads the output of the command traceroute domain.org until EOF. Each line gets parsed, each individual host is resolved to an IP address, and a coordinate is assigned to this IP.
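As an illustration of the parsing step, here is a hypothetical Python sketch (the actual frontend is written in Java; the hostname in the comment is an example):
# Hypothetical sketch of running traceroute and parsing its output.
import re
import subprocess

def trace(host):
    """Run traceroute and yield (hostname, ip) for every responding hop."""
    proc = subprocess.Popen(["traceroute", host],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        # a typical hop line: " 3  core1.example.net (203.0.113.5)  12.3 ms ..."
        match = re.search(r"(\S+)\s+\((\d{1,3}(?:\.\d{1,3}){3})\)", line)
        if match:
            yield match.group(1), match.group(2)

for name, ip in trace("domain.org"):
    print(name, ip)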
The coordinates can then, with some sin/cos magic, be mapped onto a globe. Resolving IPs to a geolocation is realized using a GeoIP database. GeoIP databases assign a coordinate to an IP with a certain probability and are not 100% exact, but for our purpose this will do. There are some free suppliers and many commercial ones; I decided to give the free GeoLite City by MaxMind a go. This way we can resolve IP addresses to a WGS84 coordinate.
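A sketch of both steps in Python, assuming the freely downloadable GeoLite2 City database and the geoip2 package (the current successors of the GeoLite City database used in the post; the database filename is an assumption):
# Sketch: resolve an IP to WGS84, then map it onto a sphere.
import math
import geoip2.database

reader = geoip2.database.Reader("GeoLite2-City.mmdb")

def locate(ip):
    """Resolve an IP address to a (latitude, longitude) WGS84 coordinate."""
    response = reader.city(ip)
    return response.location.latitude, response.location.longitude

def to_globe(lat, lon, radius=1.0):
    """Map a WGS84 coordinate onto a sphere (the 'sin/cos magic')."""
    phi = math.radians(90.0 - lat)     # polar angle, measured from the pole
    theta = math.radians(lon + 180.0)  # azimuth
    x = radius * math.sin(phi) * math.cos(theta)
    y = radius * math.cos(phi)
    z = radius * math.sin(phi) * math.sin(theta)
    return x, y, z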
For the frontend I wrote a visualization in Java using the Processing API. The texture of the globe is further rendered using a shader written in GLSL. Libraries I used: GLGraphics (an OpenGL rendering engine for Processing), controlP5 (buttons, sliders, text fields), and toxiclibs (interpolation and other numerical methods).
The source code is available under MIT on GitHub: visual-traceroute.
Some eye candy can be found in this video:
vimeo direct link.
There are some very good artists out there releasing music under the Creative Commons Attribution-ShareAlike license. This means that you are free to download, share, and remix the music, as long as you give credit and share derived works under the same or a similar license.
How do these artists make money from their music?
Giving your music away for free does not conflict with making money from it!
I actually think most people are willing to pay, under three conditions:
- It just has to be really easy for them to pay. PayPal donations are not easy; Flattr, for example, is.
- They get to decide how much money they want to pay. This means that you don't attach a price label to digital content. Flattr has the great concept where you always know: no matter how often I flattr this month, I won't spend more than the money I put in at the beginning. There is no way of losing track of how much money you have spent.
- They pay voluntarily. This of course means that you are not restricted to first paying and then downloading a song, but rather get a completely free song and pay whenever you want, and only if you want.
I am pretty sure that there is no way of financing Britney Spears or Lady Gaga this way. But creative, independent music? Probably.
You can't do anything against music being illegally distributed via torrents, Usenet, etc. I think it is a much better concept to accept this as a fact and see the pros in it: it is very easy to share and distribute music today, which makes it very easy to become popular and build a large fanbase.
My utopian vision of the digital content future is that there is a "Pay/Flattr/Donate" button for the artist on last.fm, on YouTube, in your torrent client, etc.
Actually, there are several problems (at least in Germany) right now: the GEMA forbids artists who are members to release any of their work under a free license. It is also forbidden to dual-license work, for example under both a commercial and a free license.
In my opinion, the reason why most artists releasing free music are not very popular is simply that many people just don't know them. So here are five completely free songs which I wanted to share.
If you want to read further on this topic you should check out these sources:
Notice: This article was originally published on the blog ioexception.de (in German).
If you are developing software with a client-server infrastructure, you may need a way to safely update your client systems. In my case I had several devices in ubiquitous environments without any human technical maintenance, and I needed an update mechanism that makes it easy to deploy new software to all machines.
I had several requirements for the system:
- Authentication
The connection has to be authenticated. This was a huge problem in the major operating systems and is still a big problem in many programs. The instant messenger exploit earlier this year, for example, took advantage of a client that did not authenticate its update server. Attack vectors are packet spoofing or manipulating the hosts file.
- Fallbacks
If an update fails, it should always be easy to restore the last working state. It is unacceptable for any data to be lost; everything should be stored.
- Scripting
I want to be able to hook scripts into every part of the update process (before updating, after updating, etc.). You could use this to reboot the device after installing updates, to check whether the update was successful, or to execute database changes after all files are pulled down to the device.
- Authorization
The update server must not be publicly available. Instead, clients fetching updates have to provide authorization data in order to get the software updates. It should be possible to later build a group-based license policy for different software versions on top.
I decided to use git for this purpose because it fulfilled several of these requirements from the start and I am already quite familiar with it. The way I have set up the system looks like this:

On the server side:
- A git repository (bar.git) that the clients pull from, served over HTTPS with basic authentication and the server certificate referenced in the client configuration below.
On the client side:
- Cronjob. Regularly check for updates. check.sh contains:
#!/bin/sh
# your repository folder (note: ~ must not be quoted, or the shell won't expand it)
cd ~/bar.git
# fetch changes, git stores them in FETCH_HEAD
git fetch
# check for remote changes in the origin repository
newUpdatesAvailable=`git diff HEAD FETCH_HEAD`
if [ "$newUpdatesAvailable" != "" ]
then
    # create the fallback branch (or reset it if it already exists)
    # and snapshot the current working state onto it
    git checkout -B fallbacks
    git add .
    git add -u
    git commit -m "$(date '+%Y-%m-%d')"
    echo "fallback created"
    git checkout master
    git merge FETCH_HEAD
    echo "merged updates"
else
    echo "no updates available"
fi
- Hook: use hooks/post-merge to execute database changes, reboot the device, or check whether the updates were successful (a sketch follows below).
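The cronjob can be a crontab entry such as */15 * * * * ~/bar.git/check.sh (interval and path are placeholders). The post-merge hook can be any executable; here is a minimal, hypothetical Python sketch in which every command is a placeholder for your own logic:
#!/usr/bin/env python
# Hypothetical hooks/post-merge sketch: git runs this after a successful merge.
# Replace the placeholder commands with real migrations/checks for your device.
import subprocess
import sys

try:
    # placeholder: apply database changes shipped with the update
    subprocess.check_call(["/bin/sh", "-c", "echo 'applying database changes'"])
    # placeholder: verify the new state, or trigger a reboot if required
    subprocess.check_call(["/bin/sh", "-c", "echo 'update looks good'"])
except subprocess.CalledProcessError:
    sys.exit(1)
Remember to make the hook executable (chmod +x hooks/post-merge); git skips hooks it cannot execute.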
The client has a copy of the server certificate and uses it to authenticate the connection, so you can be sure that you are connected to the right server. To ensure this, the .gitconfig on the client should contain:
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
fetch = +refs/heads/*:refs/remotes/origin/*
url = https://user@updates.foo/bar.git
[http]
sslVerify = true
sslCAInfo = ~/server.crt
[core]
# needed to provide the pw for the http authentication
askpass = ~/echo_your_https_authentication_pw.sh
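The askpass program is simply any executable that prints the password to stdout when git invokes it. A minimal, hypothetical Python version of the script referenced above:
#!/usr/bin/env python
# Hypothetical askpass sketch: git calls this and reads the password from stdout.
# Storing the password in plain text is only acceptable on a locked-down device.
print("your-https-password")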
What are the disadvantages of this setup?
I have deliberately chosen a version control system, because my main requirement was to ensure that no data gets lost.
This might not be ideal for you: if your devices have little disk space or you often have to update a large part of the device system, this solution does not make sense. Since git stores everything internally, updates would eat up disk space over time. In this case you should rather look into a solution using rsync or similar tools.
rdiff-backup for example does incremental backups and offers the possibility of keeping just a fixed number of revisions.
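A hypothetical invocation (paths and the retention count are placeholders):
$ rdiff-backup /source/dir /backup/dir
$ rdiff-backup --remove-older-than 20B /backup/dir
Here 20B tells rdiff-backup to keep only the last 20 backup sessions.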
Do not use SSH.
An easier way to set up such a system would be over SSH with restricted file permissions for the clients. The problem here is that you give the devices real access to your server: an attacker could take the client's private SSH key and connect to the remote server. The past has shown that flaws in operating system kernels or system libraries are not uncommon. A worst case scenario would be an attacker manipulating the repository on the server, so that all clients fetch the manipulated content. Fetching manipulated content is not prevented in my setup either, but it would additionally require a login on the server.
Thanks to nico for his — as always 🙂 — helpful remarks.
A friend of mine, matou, wrote a script that searches a file tree recursively, hashes every file, and outputs duplicate files.
I forked his script and added several functionalities. I added a flag -s, which makes the script keep just one occurrence of each set of duplicate files; the other duplicates are deleted, and hardlinks are set from the locations of the now deleted files to the occurrence that still exists.
$ python duplicatefiles.py -s /foo/
In fact, the file system should look exactly the same to the operating system after execution, apart from using less space. I used the script to clean my media folder and saved nearly 1 GB of disk space.
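The underlying technique is straightforward: hash every file, remember the first path per hash, and replace every further occurrence with a hardlink. A minimal, hypothetical sketch of the idea (not the actual script):
# Minimal sketch of the hash-and-hardlink idea, not the actual script.
import hashlib
import os
import sys

def file_hash(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def link_duplicates(root):
    seen = {}  # hash -> first path seen with that content
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.isfile(path) or os.path.islink(path):
                continue
            key = file_hash(path)
            if key in seen:
                os.remove(path)           # drop the duplicate ...
                os.link(seen[key], path)  # ... and hardlink the original
            else:
                seen[key] = path

if __name__ == "__main__":
    link_duplicates(sys.argv[1])
Note that hardlinks only work within one file system, which is why the script operates on a single directory tree.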
You can also do a dry run:
$ python duplicatefiles.py -l=error -c ./foo
these files are the same:
.//music/foo/bar.mp3
.//music/bar/foo.mp3
-------------------------------
Duplicate files, total: 15
Estimated space freed after deleting duplicates: ca. 40 MiB
The script is available under a BSD-like license on GitHub.
I tested it under Mac OS X, but it should work on other UNIX systems as well.
I would recommend making a backup before executing the script.