Project: Informative Rights for Journalists

I have followed the NSA leaks very closely from the beginning. When the leaks started and Snowden first revealed himself, I was in Berlin, and some tech-savvy people involved in the Bitcoin scene told me about them. Following the leaks closely, watching Snowden's first video and the subsequent revelations has definitely influenced me. I curiously read Glenn Greenwald's book and watched Laura Poitras's movie “Citizenfour” soon after they became available.

In the summer of 2014 I got the opportunity to create a tool which, in my opinion, was a reasonable and necessary step towards making the supervision of authorities more accessible. The German organization netzwerk recherche is a journalists' association with a focus on investigative journalism. German journalists have certain extended rights to question official institutions (such as the secret services) about data that has been stored about them. This is meant to strengthen their journalistic role within a democracy. These rights, however, are seldom invoked. Concerning the secret services, one reason could be that there are actually 20 (sic!) intelligence agencies/secret services in Germany: one for each of the 16 states and four for the entire country. Each service has different requirements for answering requests; one might, for example, require a copy of your passport while another might demand other documents. Additionally, each service has a different address, which is not always easy to find. To lower these hurdles, the idea came up to develop a simple PDF generator where one could just tick the agencies one wants to inquire with. A PDF containing the necessary inquiry text, attachment information, and the address would then be generated.

A special requirement for such a tool was that the PDF document generation had to happen on the client side; no server should hold any state. It must not be possible for anyone with access to the server to monitor who wants to exercise their informative rights. Also, one should not have to trust the organization; instead, one should be able to download and deploy the generator oneself. Making the source code available (as free software) was a natural conclusion.

I created this tool and it has now been deployed since July or so; I just didn't get around to doing a proper writeup.
The public instance is available at the netzwerkrecherche.org website. The source code is available via GitHub (under the MIT license).

netzwerk recherche calls on journalists to ask the intelligence agencies whether they have stored data about them. To make this easier, we provide a generator for the corresponding requests.

The goal of this project is to show the services, in times of increasing mass surveillance, that their actions are critically observed by the public. Awareness of the problematic role of intelligence agencies within a democratic constitutional state must be raised.

Especially for investigative journalists, surveillance by intelligence agencies, and the accompanying exposure of their informants and contacts, is unacceptable.

The project was prompted by the case of Andrea Röpke. The Lower Saxony Office for the Protection of the Constitution had evidently collected data about her unlawfully. When she filed a request for access to her file, the agency destroyed the file and claimed that no file about her existed. Only a change of government in Hanover revealed the cover-up; the political reckoning continues to this day.

Cross-Site Request Forgery (CSRF)


The Institute of Distributed Systems at Ulm University runs the course “Practical IT Security” this semester. To me this was especially interesting because of the examination modalities: instead of taking an exam, each student has to prepare and give a lecture, with accompanying assignments, on a certain topic. I decided to dive deeper into web security and chose Cross-Site Request Forgery (CSRF) attacks.

The presentation can be found online here. The preparation document for students was distributed one week ahead of the two-hour assignment session (PDF). The assignments were based on the Metasploitable virtual machine, the Damn Vulnerable Web App, and TWiki. Additionally, I wrote some intentionally vulnerable PHP scripts with increasing levels of security.

All of this material (*.tex, *.html, *.pdf, etc.) and solutions for the assignments can be found in my talks GitHub repository as well.
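A standard defense against CSRF is the synchronizer token pattern: the server hands the browser an unpredictable token with each form and rejects any request that does not echo it back, which a cross-site attacker cannot do. Here is a minimal sketch of that comparison logic in shell (my own illustration, not taken from the course material; real implementations live server-side, e.g. in PHP):

```shell
#!/bin/sh
# Synchronizer token pattern in miniature: the server generates an
# unpredictable per-session token and only accepts requests that echo it back.

# server side: 16 random bytes, hex-encoded -> 32 characters
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')

# validation as the server would perform it on a POST request
check_token() {
    if [ -n "$1" ] && [ "$1" = "$token" ]; then
        echo "accepted"
    else
        echo "rejected"
    fi
}

# the legitimate form echoes the hidden token field back unchanged
user_result=$(check_token "$token")
# a forged cross-site request cannot read the token, so it sends none
attacker_result=$(check_token "")

echo "user: $user_result / attacker: $attacker_result"
```

The crucial property is unpredictability: the attacker's page can trigger a request with the victim's cookies, but it cannot read the token out of the victim's session.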


Using git as an autoupdate mechanism

Notice: This article was originally published on the blog ioexception.de (in german).

If you are developing software with a client-server infrastructure, you may need to update your client systems safely. In my case I had several devices deployed in ubiquitous environments without any human technical maintenance, and I needed an update mechanism that makes it easy to deploy new software to all machines.

I had several requirements for the system:

  • Authentication
    The connection has to be authenticated. This used to be a huge problem in mainstream operating systems and is still a big problem in many programs. The instant messenger exploit earlier this year, for example, took advantage of the fact that the client did not authenticate the update server. Attack vectors include packet spoofing and manipulating the hosts file.
  • Fallbacks
    If an update fails, one should always be able to easily roll back to the last working state. It is unacceptable for any data to be lost; everything should be kept.
  • Scripting
    I want to be able to hook scripts into every part of the update process (before updating, after updating, etc.). This could be used to reboot the device after installing updates, to check whether the update was successful, or to apply database changes after all files have been pulled down to the device.
  • Authorization
    The update server must not be publicly available. Instead, clients fetching updates have to provide authorization data in order to get the software updates. It should be possible to later build a group-based license policy for different software versions on top.

I decided to use git for this purpose because it fulfilled several of my needs from the start and I am already quite familiar with it. The way I have set up the system looks like this:

On the server side:

  • Web server, accessible only via HTTPS, using a self-signed X.509 certificate.
  • A bare git repository, protected with basic authentication: https://updates.foo.net/bar.git.
    This is needed to ensure that only devices with correct authorization data have access to the repository.
    If you have new updates for the clients you push them to this repository.
  • hooks/post-update: Gets executed if you push new updates to the repository.
    git update-server-info  # required by git for HTTP repositories
    find ~/bar.git -exec chmod o+rx '{}' \;  # clients must have access to the repository
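The basic-auth protection in front of the repository is ordinary web server configuration. Assuming the web server is Apache (the setup above does not name one, and the paths here are placeholders), the relevant fragment could look like this:

```apache
# protect the repository with HTTP basic authentication
<Location /bar.git>
    AuthType Basic
    AuthName "software updates"
    # user database created with: htpasswd -c /etc/apache2/htpasswd.updates deviceuser
    AuthUserFile /etc/apache2/htpasswd.updates
    Require valid-user
</Location>
```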

On the client side:

  • Cronjob. Regularly check for updates. check.sh contains:
    # your repository folder
    cd ~/bar.git

    # fetch changes; git stores them in FETCH_HEAD
    git fetch

    # check for remote changes in the origin repository
    newUpdatesAvailable=`git diff HEAD FETCH_HEAD`
    if [ "$newUpdatesAvailable" != "" ]
    then
        # create the fallback
        git branch fallbacks
        git checkout fallbacks

        git add .
        git add -u
        git commit -m `date "+%Y-%m-%d"`
        echo "fallback created"

        git checkout master
        git merge FETCH_HEAD
        echo "merged updates"
    else
        echo "no updates available"
    fi
  • Hook hooks/post-merge to apply database changes, reboot, or check whether the update was successful.
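As a concrete illustration of that hook, here is a minimal post-merge sketch; the file names and follow-up actions are hypothetical placeholders, not the actual deployment code:

```shell
#!/bin/sh
# Sketch of a client-side .git/hooks/post-merge hook, executed after
# "git merge FETCH_HEAD" succeeded. Adapt paths and actions to your deployment.

# apply database changes that were shipped with the update, if any
if [ -f db/changes.sql ]; then
    echo "applying db/changes.sql"        # e.g. mysql app < db/changes.sql
fi

# verify the update; here we simulate it with a marker file
touch deployed.marker                      # stands in for a file the update ships
if [ -f deployed.marker ]; then
    result="update successful"             # a real hook might reboot here
else
    result="update failed"                 # ...or roll back to the fallbacks branch
fi
rm -f deployed.marker

echo "$result"
```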

The client has a copy of the server certificate and uses it to authenticate the connection, so you can be sure that you are connected to the right server. To ensure this, the repository's .git/config on the client should contain:

[core]
        repositoryformatversion = 0
        filemode = true
        bare = false
        logallrefupdates = true
        # needed to provide the password for the HTTP authentication
        askpass = ~/echo_your_https_authentication_pw.sh
[remote "origin"]
        fetch = +refs/heads/*:refs/remotes/origin/*
        url = https://user@updates.foo.net/bar.git
[http]
        sslVerify = true
        sslCAInfo = ~/server.crt

What are the disadvantages of this setup?
I chose a version control system deliberately, because my main requirement was to ensure that no data gets lost.

This might not be ideal for you: if your devices have little disk space, or you often have to update a large part of the device system, this solution does not make sense. Since git stores everything internally, this would waste disk space. In that case you should rather look into a solution based on rsync or similar tools.
rdiff-backup, for example, does incremental backups and offers the possibility of keeping just a fixed number of revisions.

Do not use SSH.
An easier way to set up such a system would be over SSH with restricted file permissions for the clients. The problem here is that you give the devices real access to your server. An attacker could take the client's private SSH key and connect to the remote server. The past has shown that flaws in operating system kernels and system libraries are not uncommon. A worst case scenario would be an attacker manipulating the repository on the server, so that all clients then fetch the manipulated content. Fetching manipulated content is not prevented in my setup either, but there an attacker would first need a login on the server.
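For completeness: the usual way to restrict an SSH-based variant is a forced command in authorized_keys, which at least limits the client key to fetch operations (a sketch with placeholder key material; this does not remove the stolen-key risk described above):

```
# ~/.ssh/authorized_keys on the update server: the device key may only
# run git upload-pack (i.e. serve fetches) for this one repository
command="git upload-pack ~/bar.git",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA...client-key... client@device
```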

Thanks to nico for his — as always 🙂 — helpful remarks.

iCTF 2010

A week ago a team from Ulm University participated in the iCTF contest, held by the University of California, Santa Barbara.
The iCTF (international Capture The Flag) contest is held every year. This year 72 universities (900 students!) participated in the worldwide contest.

All universities are connected in a closed network where a special contest scenario is set up. This year the scenario was that a dictator of the fictional country ‘Litya’ had set up a worldwide botnet. The goal was to score points by attacking Litya's services while at the same time keeping your own bot running. If your bot went down you were locked out of the botnet, which meant no access to the botnet servers.

A team could lose points by attacking the wrong services at the wrong time or if its bot was not active. The bot itself was a virtual machine image; every team got the same one.

Since it was legitimate to attack other teams, there were several suspicious incidents, such as our server machine with the bot suddenly shutting down. Since all teams had the same image (same SSH keys…), this made the contest pretty interesting.
I also noticed a suspicious Twitter account claiming to give hints; well, the hints didn't make any sense at all :).
Probably another team spreading wrong information.

Besides this, there were so-called side-challenges in which you could earn money. The scenario implied that you could get locked out of the botnet if you attacked the wrong services, so with money you could, for example, buy yourself back in.

There were challenges on generating valid credit card numbers for a given owner or on calculating private RSA keys, for example. While solving some challenges I once more noticed how incredibly helpful it is to be proficient with UNIX, shell scripting, and scripting in general (Python, Ruby, etc.). It makes your life a whole lot easier!
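The credit card challenge hinges on the Luhn checksum, which every valid card number satisfies. A small shell sketch of the validation step (my own illustration, not the contest code; the actual task additionally involved the owner name):

```shell
#!/bin/sh
# Luhn checksum check -- the algorithm behind "valid" credit card numbers.

luhn_valid() {
    num=$1
    len=${#num}
    sum=0
    i=0
    # walk the digits right to left, doubling every second one
    while [ "$i" -lt "$len" ]; do
        d=$(printf '%s' "$num" | cut -c $((len - i)))
        if [ $((i % 2)) -eq 1 ]; then
            d=$((d * 2))
            if [ "$d" -gt 9 ]; then d=$((d - 9)); fi
        fi
        sum=$((sum + d))
        i=$((i + 1))
    done
    # valid if the digit sum is a multiple of 10
    [ $((sum % 10)) -eq 0 ]
}

luhn_valid 4111111111111111 && echo "4111111111111111 passes the Luhn check"
```

Generating a number then simply means choosing the first digits and picking the final check digit so that the sum becomes a multiple of 10.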

In the end we managed to finish in 9th place out of 73, which is a great success!
I had a whole lot of fun and hope to participate again next year!

Thanks to Prof. Giovanni Vigna and his team; they did a great job setting up the environment. There were many great details (for example, a site in the closed network, ‘LityaLeaks’, that leaked information about the botnet, or a site ‘LityaBook’ enabling XSS attacks).


About Me

I am a 28-year-old techno-creative enthusiast who lives and works in Berlin. In a previous life I studied computer science (more specifically, Media Informatics) at Ulm University in Germany.

I care about exploring ideas and developing new things. I like creating great stuff that I am passionate about.


All content is licensed under CC-BY 4.0 International (if not explicitly noted otherwise).
I would be happy to hear if my work gets used! Just drop me a mail.
The CC license above applies to all content on this site created by me. It does not apply to linked and sourced material.