Simulate Parallel Connections to a Website

3 minute read

In this post I’ll show you how to measure a website’s responsiveness when multiple users request the site at the same time. We’ll write a simple shell script to simulate those users and display how long the requests take.

Recently, I’ve been working on an online voting system to be used by delegates attending a model UN conference (bratmun.sk), where I needed to make sure that when all those people voted at the same time, the server wouldn’t freeze. To test how fast the site would load, I wrote a simple shell script which simulates a given number of parallel connections to the website and displays the time it takes for the last user to finish loading it. It looks like this:

     1  #!/bin/bash
     2
     3  # Parallel Connection Simulator
     4  # ($1) 1st parameter = website
     5  # ($2) 2nd parameter = number of simultaneous requests
     6
     7  initial_time=$(date +%s)
     8
     9  for ((i=1; i<=$2; i++))
    10  do
    11      curl -s -o /dev/null "$1" && echo "done $i" &
    12  done
    13
    14  wait
    15
    16  final_time=$(date +%s)
    17  time_elapsed=$((final_time - initial_time))
    18
    19  echo "$2 requests processed in $time_elapsed s"

Now the explanation:

  • On line 7, we store the initial time, which we get in bash with the expression $(date +%s) (the current Unix time in seconds).
  • On line 9, we set up a for loop that repeats as many times as the script’s 2nd argument (the number of parallel connections).
  • On line 11, we initiate a request to the site specified as the script’s 1st argument (curl "$1"), hide curl’s progress output using the silent option (-s), and discard the downloaded page by sending it to /dev/null (-o /dev/null). Once the request succeeds, we print a confirmation message (echo "done $i") that includes the number of the completed request. Finally, the & at the end runs the command in the background, which is what allows multiple requests to the website to execute at the same time. (If you’d also like to see how long each individual request took, see the curl variant sketched after this list.)
  • Line 14 (wait) waits for all the parallel connections to finish before it proceeds to the next lines.
  • On line 16, we store the final time after all the requests have been completed.
  • On line 17, we calculate the time needed for all the requests to finish by subtracting the initial time from the final time.
  • Finally, on line 19 we display how long it took (in seconds) to finish all the parallel requests to the given website.
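As mentioned above, curl can also report how long each individual request took, via its -w (write-out) option and the documented %{time_total} variable. Here’s a minimal sketch of what line 11 would look like with per-request timing (note that, unlike the && echo version, the write-out is printed even if a request fails):

    # like line 11, but also prints each request's total time in seconds
    curl -s -o /dev/null -w "done $i in %{time_total}s\n" "$1" &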

You can try it yourself by creating a file (e.g. script.sh) and placing the above code into it. Then, open a terminal, navigate to the location of the script, and make the file executable (chmod +x script.sh). You can now create 10 parallel connections to http://example.com by executing the script as follows: ./script.sh http://example.com 10 (include the http:// part, otherwise the script may not work).
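For reference, here’s the full sequence of commands, assuming you saved the script as script.sh in the current directory:

    chmod +x script.sh                    # make the script executable (needed once)
    ./script.sh http://example.com 10     # 10 parallel requests to example.com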

I ran it on my own website and this is what I got:

./script.sh http://marianlonga.com/ 10
done 4
done 8
done 3
done 2
done 9
done 7
done 5
done 10
done 6
done 1
10 requests processed in 5 s

Notice that the requests aren’t necessarily processed in order (1, 2, 3, …), since all of them were initiated at essentially the same time, so there’s no fixed order in which they complete. You can also see that my website is quite slow to load when many people are viewing it at once; just for comparison, when I tried the script on Google.com, it processed 100 requests in 2 seconds.
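A caveat about the measurement itself: date +%s only has one-second resolution, so quick runs like the Google example above get rounded quite coarsely. If you’re on a system with GNU date, you can swap in millisecond timestamps instead; a minimal sketch (the %N format is a GNU coreutils extension and isn’t supported by the BSD date shipped with macOS):

    # millisecond-resolution timing (GNU date only; %3N = milliseconds)
    initial_time=$(date +%s%3N)
    # ... launch the requests and wait, exactly as in the original script ...
    final_time=$(date +%s%3N)
    echo "elapsed: $((final_time - initial_time)) ms"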

So, if you know you’ll have multiple visitors at the same time and you want each of them to be able to load the website reasonably fast, you can use this script to test the site’s responsiveness. Enjoy!
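If you outgrow this quick check, dedicated load-testing tools cover the same ground with more detailed statistics. For example, ApacheBench (the ab command that ships with Apache’s httpd utilities) can do a roughly equivalent run, assuming you have it installed:

    # 10 requests total, all 10 concurrent (ab requires a path, hence the trailing slash)
    ab -n 10 -c 10 http://example.com/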
