Mar 21, 2011
Using git as an autoupdate mechanism
Notice: This article was originally published on the blog ioexception.de (in German).
If you are developing software with a client–server infrastructure, you may need a way to safely update your client systems. In my case I had several devices deployed in ubiquitous environments, without any on-site technical maintenance. I needed an update mechanism that makes it easy to deploy new software to all machines.
I had several requirements for the system:
- Authentication: The connection has to be authenticated. This was a huge problem in the major operating systems and is still a big problem in many programs. The instant-messenger exploit earlier this year, for example, took advantage of the fact that the system did not authenticate the update server. Attack vectors are packet spoofing or manipulating the hosts file.
- Fallbacks: If an update fails, one should always be able to easily restore the last working state. It is unacceptable for any data to get lost; everything should be stored.
- Scripting: I want to be able to hook scripts into every part of the update process (before updating, after updating, etc.). You could use this to reboot the device after installing updates, to check whether the update was successful, or to execute database changes after all files have been pulled down to the device.
- Authorization: The update server must not be publicly available. Instead, clients fetching updates have to provide authorization data in order to get the software updates. It should be possible to later build a group-based license policy for different software versions on top.
I decided to use git for this purpose because it fulfilled several of my needs from the start and I am already quite familiar with it. The way I have set up the system looks like this:
On the server side:
- Web server, accessible only through HTTPS, using a self-signed X.509 certificate.
- A bare git repository, protected with basic authentication: https://updates.foo.net/bar.git. This is needed to ensure that only devices with correct authorization data have access to the repository. If you have new updates for the clients, you push them to this repository.
- hooks/post-update: gets executed when you push new updates to the repository.

```sh
git update-server-info                     # required by git for HTTP repositories
find ~/bar.git -exec chmod o+rx '{}' \;    # clients must have read access to the repository
```
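To make the push side concrete, here is a sketch of the publishing workflow, run entirely against local directories for illustration; the directory names, file, and commit message are made up, and in production the bare repository would sit behind the HTTPS basic auth described above.

```shell
#!/bin/sh
set -e

# Local stand-in for the server-side repository (behind HTTPS in production).
rm -rf update-demo
mkdir update-demo
git init --bare update-demo/bar.git

# Development checkout: commit a change and push it to the "server".
git clone update-demo/bar.git update-demo/dev 2>/dev/null
cd update-demo/dev
git config user.email "dev@example.com"   # identity needed in a fresh environment
git config user.name "Dev"
echo "v2" > version.txt
git add version.txt
git commit -m "ship version 2"
git push origin HEAD                      # this push triggers hooks/post-update
cd ../..
```

Every `git push` to the bare repository fires the post-update hook shown above, which refreshes the metadata that git's dumb HTTP transport needs.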
On the client side:
- A cron job that regularly checks for updates. check.sh contains:

```sh
#!/bin/sh
cd "$HOME/bar.git"   # your repository folder

# fetch changes; git stores them in FETCH_HEAD
git fetch

# check for remote changes in the origin repository
newUpdatesAvailable=`git diff HEAD FETCH_HEAD`
if [ "$newUpdatesAvailable" != "" ]
then
	# create the fallback
	git branch fallbacks
	git checkout fallbacks

	git add .
	git add -u
	git commit -m `date "+%Y-%m-%d"`
	echo "fallback created"

	git checkout master
	git merge FETCH_HEAD
	echo "merged updates"
else
	echo "no updates available"
fi
```

- A hook in hooks/post-merge to execute database changes, reboot the device, or check whether the update was successful.
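The post-merge hook is an ordinary shell script as well; here is a minimal sketch, where the database and reboot steps are placeholders I made up:

```shell
#!/bin/sh
# hooks/post-merge -- git executes this after `git merge` has updated
# the working tree, i.e. after new updates have been applied.

LOG="$HOME/update.log"
echo "`date '+%Y-%m-%d %H:%M'` updates merged" >> "$LOG"

# placeholders for device-specific follow-up steps:
# ./apply_db_migrations.sh   # run database changes shipped with the update
# /sbin/reboot               # restart so services pick up the new code
```

The cron entry driving check.sh could look like `*/15 * * * * /home/device/check.sh` (interval and path are assumptions).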
The client has a copy of the server certificate and uses it to authenticate the connection, so it can be sure it is talking to the right server. To ensure this, the git config of the repository on the client should contain:
```ini
[core]
	repositoryformatversion = 0
	filemode = true
	bare = false
	logallrefupdates = true
	# needed to provide the password for the HTTP authentication
	askpass = ~/echo_your_https_authentication_pw.sh
[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	url = https://user@updates.foo.net/bar.git
[http]
	sslVerify = true
	sslCAInfo = ~/server.crt
```
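The askpass helper referenced in that config is simply a script that prints the password on stdout; a minimal sketch that creates it, with a placeholder password:

```shell
#!/bin/sh
# Create the askpass helper in the location the config above points to.
# git invokes this helper and reads the password from its standard output.
cat > "$HOME/echo_your_https_authentication_pw.sh" <<'EOF'
#!/bin/sh
echo "your-https-password"   # placeholder: replace with the real credential
EOF
chmod +x "$HOME/echo_your_https_authentication_pw.sh"
```

Make sure the helper is only readable by the user running the update job, since it contains the credential in plain text.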
What are the disadvantages of this setup?
I chose a version control system intentionally, because my main requirement was to ensure that no data gets lost.
This might not be ideal for you: if your devices have little disk space, or if you often have to update a large part of the device's system, this solution does not make sense. Since git keeps every revision internally, disk usage grows with every update. In that case you should rather look into a solution based on rsync or similar tools.
rdiff-backup, for example, does incremental backups and offers the possibility of keeping just a fixed number of revisions.
Do not use SSH.
An easier way to set up such a system would be over SSH with restricted file permissions for the clients. The problem here is that you give the devices real access to your server: an attacker could take the private SSH key from a client and connect to the server. The past has shown that flaws in operating system kernels and system libraries are not uncommon. In the worst case, an attacker could manipulate the repository on the server, and all clients would then fetch the manipulated content. My setup does not prevent fetching manipulated content either, but there an attacker first needs a login on the server.
Thanks to nico for his — as always 🙂 — helpful remarks.
Hi Micha,
first of all, great article! Given that this was written in 2011, what is your opinion: is this still the best solution now, 2 years later? Maybe you would create this system differently now?
I am trying to find a solution for a similar problem. I have one server and a couple of clients. The clients have a desktop application and a web application. The desktop application is going to be Java, and the web application is written in PHP. I am thinking about having another application that keeps track of updates, monitoring and reporting for the system. When new updates are available, it would use git to get the new version to the client.
On the server I would have a git repository with a branch for each client. This would allow me to customize the application for one client without making changes for the other clients. This way I could also have features that are available only to some clients, something like premium users.
What do you think about a system like this? I am also wondering whether git is the right way to go if I want to keep .jar files synced on the client side.
Thanks,
Veljko