Note: I'm no longer using Raspberry Pi + WordPress. The results in this post don't reflect the current performance of my site.
In this post I will put my Raspberry Pi to the test. I'm essentially going to DDoS my own site and see what kind of results I get. To do this, I have to lay siege to it.
Siege is an open source tool for testing your site's HTTP performance under load. As the name suggests, it lays siege to the site by bombarding it with HTTP requests, and the tester can change the parameters of the siege to suit their needs. The tests here are run without the help of caching, but I will revisit this topic in another post once I have implemented caching properly on this site.
My server setup is as follows: Nginx proxies all PHP requests to PHP-FPM, using a Unix socket instead of a TCP socket with an IP and port. The processor of the Raspberry Pi is an ARMv6-based chip that runs between 700MHz and 1GHz, and it has 512MB of RAM at its disposal.
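For readers who want to replicate the setup, the Nginx side of the Unix-socket handoff looks roughly like the fragment below. This is a hypothetical sketch, not my actual configuration; the socket path and PHP-FPM version are assumptions.

```nginx
# Hypothetical excerpt from the server block: hand .php requests to
# PHP-FPM over a Unix socket instead of 127.0.0.1:9000.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed socket path
}
```

The matching PHP-FPM pool then needs `listen = /var/run/php5-fpm.sock` (with the same path) so both ends agree on the socket.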
First I tested the most likely scenario: one user who requests my site twice, once for the home page and once for a particular post. Even though I'm sure you are enjoying this post immensely, I do not expect any Reddit- or Slashdot-type barrages on my humble site.
siege -c1 -d10 -r2 -v http://hkionline.net

Transactions:                  2 hits
Availability:             100.00 %
Elapsed time:               3.61 secs
Data transferred:           0.00 MB
Response time:              0.80 secs
Transaction rate:           0.55 trans/sec
Throughput:                 0.00 MB/sec
Concurrency:                0.44
Successful transactions:       2
Failed transactions:           0
Longest transaction:        0.93
Shortest transaction:       0.67
Everything looks quite good. My site seems responsive to one concurrent visitor requesting two pages, and each transaction finished in under a second. OK, let's ramp up the siege, since one user making two requests doesn't really tell us anything about our limits.
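As a quick note on the flags: -c sets the number of concurrent simulated users, -r the number of repetitions per user, and -d the upper bound of the random delay between requests. So the expected hit count of a run is simply the product of the first two, which is easy to sanity-check:

```shell
# Expected total hits for a siege run: concurrent users times repetitions.
# Values below match the next test (-c25 -r2).
c=25
r=2
echo "$(( c * r )) hits expected"
```

That matches the 50 transactions reported in the next run.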
siege -c25 -d10 -r2 -v http://hkionline.net

Transactions:                 50 hits
Availability:             100.00 %
Elapsed time:              31.95 secs
Data transferred:           0.11 MB
Response time:              9.74 secs
Transaction rate:           1.56 trans/sec
Throughput:                 0.00 MB/sec
Concurrency:               15.24
Successful transactions:      50
Failed transactions:           0
Longest transaction:       14.55
Shortest transaction:       1.65
The results still look fairly good considering that we are using a low-powered device with little memory and processing power. My Raspberry Pi was able to complete all transactions successfully, and on average a transaction was over in under 10 seconds. It took around 32 seconds to serve 25 concurrent visitors, each asking for two pages with a random delay of 0 to 10 seconds between requests. The longest request took a bit too long, but it too got served eventually.
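The figures Siege prints are internally consistent; for instance, the transaction rate is just transactions divided by elapsed time. A quick check with the numbers from the run above:

```shell
# Recompute the reported transaction rate from Transactions / Elapsed time
# using the figures from the -c25 run (50 hits in 31.95 secs).
awk -v t=50 -v e=31.95 'BEGIN { printf "%.2f trans/sec\n", t / e }'
```

This prints 1.56 trans/sec, matching the report.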
Now what if, for some reason, a single post gained some sudden fame or notoriety? Let's say a hundred people tried to reach my site at once, each requesting one page.
siege -c100 -d10 -r1 -v http://hkionline.net

Transactions:                 49 hits
Availability:              49.00 %
Elapsed time:              40.04 secs
Data transferred:           0.11 MB
Response time:             16.63 secs
Transaction rate:           1.22 trans/sec
Throughput:                 0.00 MB/sec
Concurrency:               20.35
Successful transactions:      49
Failed transactions:          51
Longest transaction:       29.81
Shortest transaction:       4.48
Oh my. That did not go so well. The main reason for this abysmal success rate was that I ran out of sockets on the machine I used to launch the siege, after 49 concurrent requests, so let's adjust the test to a more modest rush. There is a way to increase the available sockets on my test machine, and I will do that when I test my server's performance with caching on.
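For reference, the limit that capped my test client is the per-process cap on open file descriptors, which sockets count against. It can be inspected, and raised for the current shell up to the hard limit, with the `ulimit` shell builtin:

```shell
# Show the current soft limit on open file descriptors (sockets count
# against this). Raise it for this shell with e.g. `ulimit -n 4096`,
# up to the hard limit shown by `ulimit -Hn`.
ulimit -n
```

On many default Linux installs this prints 1024, which lines up with a client choking well before 100 truly concurrent connections once siege's own files and overhead are accounted for.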
siege -c45 -d10 -r1 -v http://hkionline.net

Transactions:                 45 hits
Availability:             100.00 %
Elapsed time:              28.86 secs
Data transferred:           0.10 MB
Response time:             17.18 secs
Transaction rate:           1.56 trans/sec
Throughput:                 0.00 MB/sec
Concurrency:               26.79
Successful transactions:      45
Failed transactions:           0
Longest transaction:       27.85
Shortest transaction:       5.64
OK. With 45 concurrent users my site still stayed up. The average response time has climbed quite high, though, and there is a good chance that anyone landing on a page that takes 17 seconds to load will just close the browser tab and move on to a more responsive site. I'm still impressed that my little server could handle the traffic and not go belly up.
Obviously these numbers would not cut it for a major high-traffic site that wants to give its visitors a pleasant user experience. However, the test shows me that for a blog with a decent readership (one whose readers don't all try to access the site at once in a frenzy), the Raspberry Pi is a viable option as a server. The major benefit of running my site on a Raspberry Pi is the radically lower electricity usage compared to a tower server with more resources that would go unused most of the time.
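To put a rough number on that electricity claim: assuming something like 3.5 W for a Raspberry Pi under light load and 100 W for a small tower server (both figures are illustrative assumptions, not measurements of my hardware), the yearly energy use works out as follows:

```shell
# Back-of-the-envelope yearly energy use for two illustrative wattages:
# ~3.5 W (Raspberry Pi, assumed) vs ~100 W (small tower server, assumed).
for w in 3.5 100; do
  awk -v w="$w" 'BEGIN { printf "%.1f W -> %.0f kWh/year\n", w, w * 24 * 365 / 1000 }'
done
```

That is roughly 31 kWh/year versus 876 kWh/year, an order-of-magnitude difference on the electricity bill.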
If you are running your site on a Raspberry Pi, I would be interested to hear about your experiences. I'm also interested in hearing what kind of performance testing you do. I have very little experience in this area of testing, so it would be nice to hear about alternative methods, and even critique of my tests. My tests are far from exhaustive, aimed merely at probing the limits of the Raspberry Pi against my needs as a low-traffic site administrator. Even though I don't have many visitors, I still want to respect them with speedy page loads.
In the next post I will explore what bats and caching have in common and how those pesky flying rodents affect the performance of my site.