My name is Marko Mrdjenovič. I’m a web developer, manager, and entrepreneur from Ljubljana, Slovenia.


I like solving problems. I do that by writing code and by managing projects and people. I like creating good experiences. And going to conferences.


I work full time on CubeSensors so I'm currently not available for freelance work (UX, frontend, backend).



Benchmarking web servers

We recently bought a server at CubeSensors and are now putting it through its paces.

One of the things I looked at was our HTTP server / load-balancing setup. We currently run an Nginx server in front of Tornado instances, and I wondered whether something more specialized, and thus likely faster, would also allow for real-time upstream configuration changes. That search led me to HAProxy.

I did a simple setup on the server (Ubuntu 14.04) to test things:

Tornado runs a very simple app that only returns OK at /. There are two instances, each started with two forks (server.start(2)). HAProxy is set up with option http-keep-alive and a 60 s timeout, pointing at the two Tornados. Nginx has two virtual hosts: one proxies directly to the Tornados with keepalive 4, the other to HAProxy with the same keepalive setting; both use proxy_http_version 1.1 and clear the Connection header. The server timeout is set to 30 s, the client timeout to 10 s.
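A minimal sketch of the Nginx side of that setup might look like this; the ports and server names are assumptions for illustration, since the post doesn't give the real addresses:

```nginx
# Hypothetical ports: 8001/8002 for the Tornados, 8080 for HAProxy.
upstream tornados {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    keepalive 4;                      # keep up to 4 idle upstream connections
}

upstream haproxy {
    server 127.0.0.1:8080;
    keepalive 4;
}

server {
    listen 80;
    server_name direct.example;       # virtual host straight to the Tornados
    location / {
        proxy_pass http://tornados;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # clear Connection so keep-alive works upstream
    }
}

server {
    listen 80;
    server_name via-haproxy.example;  # virtual host through HAProxy
    location / {
        proxy_pass http://haproxy;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Clearing the Connection header together with proxy_http_version 1.1 is what lets Nginx actually reuse the upstream connections in the keepalive pool.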

The first idea was to test with ab, using something like ab -n 10000 -c 100 URL. Even on a machine running nothing but the tests, the numbers varied far too much from one run to the next. I also noticed that ab makes HTTP/1.0 requests.
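One way to put a number on "varied far too much" is the coefficient of variation across repeated runs. A small sketch, with hypothetical requests-per-second figures standing in for real ab output:

```python
import statistics

def coefficient_of_variation(samples):
    """Relative spread of benchmark results: sample stdev divided by mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical requests/sec from five repeated ab runs (not real measurements)
runs = [8200.0, 9900.0, 7100.0, 10400.0, 8800.0]
cv = coefficient_of_variation(runs)

# A spread above roughly 10% suggests single runs can't be compared meaningfully
print(f"CV: {cv:.1%}")
```

If the CV stays high across runs, the bottleneck is probably noise in the test setup rather than the servers being compared.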

The second tool I tried was siege, which has a few options ab lacks and also makes HTTP/1.1 requests. With siege -c 1000 -r 100 -b URL, the Nginx-Tornado combination manages a 50% higher transaction rate than any combination involving HAProxy. But I think this says more about my ability to configure HAProxy than about HAProxy itself: I keep getting a bunch of stale connections (in the TIME_WAIT state) hanging around even with option forceclose.
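For reference, the HAProxy configuration matching the description above could look something like this. The ports, names, and connect timeout are my assumptions, not taken from the actual setup:

```haproxy
# HAProxy 1.5-era syntax; addresses and the connect timeout are hypothetical.
defaults
    mode http
    option http-keep-alive         # reuse client connections
    timeout http-keep-alive 60s
    timeout server 30s
    timeout client 10s
    timeout connect 5s

frontend web
    bind *:8080
    default_backend tornados

backend tornados
    server t1 127.0.0.1:8001
    server t2 127.0.0.1:8002
```

Swapping option http-keep-alive for option forceclose makes HAProxy actively close connections after each response, which is the variant mentioned above that still left TIME_WAIT sockets behind.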

Resolution: sticking with Nginx. It can log the POST body, and upstream configuration changes will be handled by moving the upstream configuration into an included file that can be edited and reloaded.
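The include-and-reload approach can be sketched like this; the file path and addresses are hypothetical:

```nginx
# /etc/nginx/conf.d/upstreams.conf (hypothetical path), pulled into the main
# config with an "include" directive and edited whenever backends change
upstream tornados {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    keepalive 4;
}
```

After editing the file, nginx -s reload makes the master process re-read the configuration and spawn new workers, so upstream changes take effect without dropping in-flight connections.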
