Practice 6 - Load balancing
In this lab we will take a look at load balancing web applications running on top of IaaS cloud instances. You will set up two instances: one with a simple PHP application and a second one with a load balancer. You will define a load balancing group which contains your own application server together with other students' servers, and you will generate synthetic user requests against your load balancer using JMeter. The goal is to learn how load balancing works in the cloud, how to set up load generation for simple performance testing, and to take a look at different load balancing strategies (bonus task).
We will use the Apache web server with PHP for the application and Nginx for the load balancer.
In this lab we are again working on the OpenStack cloud platform, located at: https://stack.cloud.hpc.ut.ee/
- To access the local university cloud resources your computer has to be inside the Institute network, so you should set up a VPN connection to the university network.
- VPN (English on the right side column) - https://wiki.ut.ee/pages/viewpage.action?pageId=17105590
NB! Practical session communication!
There will be no physical lab sessions for the foreseeable future; the labs should be completed online.
- Lab supervisors will provide support through online channels (Slack, email).
- When asking questions or requesting support from the lab assistants, please make sure to also provide all needed information, including screenshots, configurations, and even code if needed.
- We have set up a Slack workspace for lab and course related discussions. An invitation link will be sent separately to all students through email.
- Use the #practice6-loadbalancing Slack channel for lab related questions and discussion.
- If you have not received a Slack invitation, please contact the lab assistants through email.
In case of issues check:
- Pinned messages in the #practice6-loadbalancing Slack channel.
- The possible solutions to common issues section at the end of the guide.
- Ask in the #practice6-loadbalancing Slack channel.
Exercise 6.1. Setting up an Application Server instance
We will create a new instance for an Application server that uses the Apache 2 web server and PHP. We will also download a simple PHP page that keeps track of how many users have visited the page and from which IPs.
- Create a new clean instance from the Ubuntu 18.04 image.
  - This time you will run multiple machines at the same time, so for flavor (instance type) please use m1.xsmall
  - For network you can use provider_64_net
  - Make sure to select the Delete Volume on Instance Delete option in the Source tab
- Log into the instance through SSH
- Install the required software packages on the instance:
  - Refresh the list of software packages: sudo apt update
  - Install the following Ubuntu packages: apache2 php libapache2-mod-php (see the example command below)
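If you prefer, the listed packages can be installed with a single command:

    sudo apt install -y apache2 php libapache2-mod-php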
- Restart the apache web server:
sudo service apache2 restart
- Delete the default Apache HTTP web page file located at: /var/www/html/index.html
  - NB! The normal user (ubuntu) does not have permissions to delete files in this system folder. You will have to use the sudo command in front of the file removal command to elevate its permissions (see the example below).
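For example, removing the default page could look like this (assuming the default Apache document root):

    sudo rm /var/www/html/index.html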
- Download index.zip
  - Unpack it and upload the contained index.php file to the instance using scp (or just copy its content)
  - Alternatively you can use wget to download the zip directly to the instance and unzip to unpack it (see the example below)
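As an illustration, the two options could look roughly like this; <index.zip URL> stands for the actual download link from the lab page and <INSTANCE_PUBLIC_IP> for your instance's public IP:

    # Option 1: copy the unpacked index.php from your own computer to the instance
    scp index.php ubuntu@<INSTANCE_PUBLIC_IP>:~/

    # Option 2: download and unpack the zip directly on the instance
    wget <index.zip URL>
    sudo apt install -y unzip
    unzip index.zip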
- Copy the index.php file into the Apache HTTP application folder: /var/www/html/index.php (You will have to use sudo, as your user does not have the required permissions in that folder)
- Create an empty data file: /var/www/html/data.txt
  - This file is used to store the list and count of incoming user requests (a sketch of how the page uses this file is shown below).
  - Change the owner of the created file to 'www-data': sudo chown www-data /var/www/html/data.txt (the Apache web server runs under this user)
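To illustrate why data.txt must exist and be writable by www-data, here is a minimal sketch of a visitor tracker in PHP. This is only an illustration of the idea; the actual index.php from index.zip may look different.

    <?php
    // Illustrative sketch only - not the actual index.php from index.zip.
    $file = '/var/www/html/data.txt';
    // Read previously recorded visits (one line per visit).
    $visits = file_exists($file) ? file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) : array();
    // Record the current visit: timestamp and visitor IP.
    $visits[] = date('c') . ' ' . $_SERVER['REMOTE_ADDR'];
    // Requires data.txt to be writable by the www-data user.
    file_put_contents($file, implode("\n", $visits) . "\n");
    echo '<p>Total visits: ' . count($visits) . '</p>';
    foreach ($visits as $visit) {
        echo '<p>' . htmlspecialchars($visit) . '</p>';
    }
    ?>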
- Access your instance through a web browser using its Public IP to check that the website is working as required
- Modify /var/www/html/index.php to make the web page more personal to you, so that others will recognize that this application server was set up by you when they visit the page.
  - How you decide to modify it is up to you, but at least your full name should be present.
  - You may want to make it visually very recognizable to differentiate your server from other students' servers more easily.
  - nano is a suitable command line editor for modifying text files.
- Take a screenshot of your application server web page
Exercise 6.2. Setting up load balancer instance and balancing multiple application servers
In this exercise, we set up another instance and install an Nginx load balancer on it.
A load balancer distributes incoming user requests for the web page across multiple application servers. Load balancers are used to increase the capacity (concurrent users) and reliability of applications and are a required component for auto-scaling, meaning changing the number of application servers dynamically based on the number of active users.
(image taken from http://www.netdigix.com/linux-loadbalancing.php)
- We are using Nginx (http://nginx.org/en/) as a load balancer.
- Create a new instance (should also be Ubuntu 18.04) for the load balancer
  - This instance should also have port 80 open, just like the first instance
  - Make sure to select the "Delete Volume on Instance Delete" option in the Source tab
  - Use the m1.xsmall flavor
- Install Nginx on the instance
  - Refresh the list of software packages: sudo apt update
  - Install the following Ubuntu package: nginx-full
- Modify the Nginx configuration to add your application server IP into the list of managed services
  - Download the example Nginx load balancer configuration: wget http://kodu.ut.ee/~jakovits/default
  - Modify the downloaded default configuration file
    - Find the upstream lab_load_balancer block
    - Modify the server 172.17.64.109; line in the configuration file and replace the existing IP (172.17.64.109) with the actual IP of your application server.
- Overwrite the default Nginx site configuration on the instance with the modified configuration: sudo cp default /etc/nginx/sites-enabled/default
  - Remember to run this command again every time you modify the local default file
- Reload the Nginx service: sudo service nginx reload
  - Remember to also reload Nginx every time you overwrite the configuration
- Visit the IP address of the load balancer using a web browser and verify that it displays your application server
- Now also add the lab supervisors' and a few other students' application servers under your load balancer to distribute user requests between multiple servers.
  - Create a new server IP_ADDRESS; line inside the upstream lab_load_balancer block for each additional server you wish to add (see the example below)
  - The lab supervisors' Application Server IP is: 172.17.66.49
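For reference, the upstream block could end up looking roughly like the sketch below. The placeholder IPs are examples to be replaced with real addresses; the downloaded default file already contains a server block similar to the one shown, which forwards requests to the upstream group, so only the upstream block needs editing.

    upstream lab_load_balancer {
        server <YOUR_APP_SERVER_IP>;   # your application server from Exercise 6.1
        server 172.17.66.49;           # lab supervisors' application server
        server <OTHER_STUDENT_IP>;     # another student's application server (optional)
    }

    server {
        listen 80;
        location / {
            proxy_pass http://lab_load_balancer;
        }
    }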
- Take a screenshot of visiting your application server web page through the load balancer (Browser is accessing load balancer IP address, but your application server content is shown)
Exercise 6.3. Verifying that the load balancing is working properly
Let's test whether the previous steps have been completed successfully by verifying that the load balancing is working and user traffic is being routed to your application server.
- Visit the load balancer multiple times. Do you recognize your own application server?
- Ask other students to visit your load balancer or to add your application server into their load balancer
- Wait until you start seeing requests on your application server tracker from a number of other locations.
- You can check current incoming HTTP connections to your Application Server using the following command:
netstat | grep http
- Take a screenshot of the output of the netstat command. Try to have it display more than 3 connections while other students' requests are being redirected to your server.
- If you have issues getting a sufficient number of connections to show up, you may try generating the connections yourself (see the example below). It may also be best to finish the next exercise first before taking this screenshot, as it will allow you to generate a large number of synthetic user requests.
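If you just need some extra parallel connections for the netstat screenshot before JMeter is set up, one quick option is to fire off a batch of requests from your own machine, for example (replace <LOAD_BALANCER_IP> with the IP of your load balancer):

    for i in $(seq 1 50); do curl -s http://<LOAD_BALANCER_IP>/ > /dev/null & done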
Exercise 6.4. Generating additional user traffic for benchmarking
Let's check whether the load balancer can now handle a higher number of simultaneous user requests.
- Your task is to generate a large number of user requests to the load balancer and verify how many of those requests are sent to your application server by the load balancer.
- To generate a large number of user requests, we will use Apache JMeter, which is a Java-based framework for simulating users and generating web traffic.
- Investigate how to set up JMeter load generation as web requests
- A nice tutorial for creating web testing configuration in JMeter is available here: https://jmeter.apache.org/usermanual/build-web-test-plan.html
- Download JMeter.
- Open JMeter.
- It is a Java application; you can start it by opening the bin/ApacheJMeter.jar file with Java (see the note below).
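If double-clicking the jar does not work, JMeter can also be started from a terminal inside the unpacked JMeter folder, either through Java directly or with the provided start scripts:

    java -jar bin/ApacheJMeter.jar
    # or: bin/jmeter (Linux/macOS), bin\jmeter.bat (Windows)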
- Set up the web testing configuration
  - Right click on the Test Plan and add the HTTP Request Defaults configuration element
    - Server Name or IP: use the IP of your LOAD BALANCER instance
  - Right click on the Test Plan and add a Thread Group
    - Some of the suggested parameters:
      - Number of threads/users: 10 (but you can use a larger number if you wish)
      - Loop count: how many requests you want each user to perform
  - Right click on the Thread Group and add an HTTP Request sampler
    - Set the path to /
  - Right click on the Thread Group and add a Response Time Graph listener
    - Configure the graph interval to 1000 ms
- Also add the lab supervisors' application server (e.g. 172.17.66.49) to your load balancer so that there is more than one server, or add another student's application server.
- Generate at least 2000 requests against your load balancer using JMeter (see the note at the end of this exercise for running JMeter from the command line).
- Note down how many of those requests end up on your application server. How large a percentage of your generated user requests ended up visiting your server?
- Take a screenshot of the JMeter tool configuration (HTTP request element) which you used for load generation.
- Take a screenshot of the JMeter result visualization view (Response Time graph element) which displays the results.
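Once the test plan has been saved as a .jmx file, JMeter can also generate the load in non-GUI mode, which can be handy if the GUI is slow (testplan.jmx and results.jtl below are placeholder file names); the screenshots above are still taken from the GUI elements:

    bin/jmeter -n -t testplan.jmx -l results.jtl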
Bonus task
Your task is to create another Application server, this time a bit more capable than the first one, add both application servers to your load balancer and investigate the performance of these two instance types and also the effect of different load balancing strategies.
- Create another Application server instance of the m2.tiny flavor, just like in Exercise 6.1.
- Add the new Application server into your load balancer configuration, and remove other users' application servers if you added any in the previous exercises.
- Your load balancer should now manage only two application servers:
  - the m1.xsmall instance, your first application server made in 6.1
  - the m2.tiny instance, your second application server
- Our initial PHP script is very fast, which makes it difficult to gauge the respective capabilities of these two instance types. Let's add a more computation-heavy task to our scripts.
- Add a Pi calculation PHP script at the end of your PHP application on BOTH application servers
  - You can find a PHP script for computing Pi here: http://www.codecodex.com/wiki/Calculate_digits_of_pi#PHP
  - Copy the whole content of the PHP section to the end of your script, just before the </body> tag.
  - As a result, your application should now also print out a very long number which represents the first X digits of Pi.
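If the linked page is unavailable, any sufficiently CPU-heavy PHP snippet serves the same purpose. Below is a minimal illustrative alternative (not the codecodex script); note that it prints an approximation of Pi rather than its first X digits:

    <?php
    // Illustrative CPU-heavy stand-in: approximate Pi with the Leibniz series.
    $sum = 0.0;
    for ($i = 0; $i < 5000000; $i++) {
        $sum += (($i % 2) ? -1 : 1) / (2 * $i + 1);
    }
    echo '<p>Pi is approximately: ' . (4 * $sum) . '</p>';
    ?>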
- Configure Nginx to use the Least connections load balancing strategy (see the example below), generate 2500 user requests against your load balancer using JMeter and track how many of these 2500 user requests end up on application server 1 vs application server 2
  - You will find a description of the available load balancing strategies here: http://nginx.org/en/docs/http/load_balancing.html
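Switching the strategy only requires one extra directive at the top of the upstream block; assuming the same upstream name as before and placeholder IPs, it could look like this (remember to copy the file into /etc/nginx/sites-enabled/ and reload Nginx afterwards, as in Exercise 6.2):

    upstream lab_load_balancer {
        least_conn;                        # Least connections strategy
        server <M1_XSMALL_APP_SERVER_IP>;  # first application server (6.1)
        server <M2_TINY_APP_SERVER_IP>;    # second application server (bonus)
    }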
- Deliverables:
  - Answer: When using the least_conn; load balancing strategy, what percentage of user requests is sent to the first and to the second application server?
    - Also provide screenshots from both JMeter and your application tracker to display the state after your experiments.
  - Answer: Which of these instances is more capable in terms of responding to user requests?
  - Take a screenshot of the top command output on both application server instances while the requests are being sent to the instances.
    - Answer: What is the CPU load value on both instances? What does this indicate in regards to the performance of these cloud instance types?
Deliverables
- DO NOT leave your instances running after submitting the solution. This lab requires quite a few VM CPU cores per student and may mean some students can not start lab exercises until previous students shut down their instances.
- Submit screenshots from the exercise:
- Exercise 6.1: Take a screenshot of your application server web page
- Exercise 6.2: Take a screenshot of visiting your application server web page through the load balancer
- Exercise 6.3: Take a screenshot of the output of the netstat command.
- Exercise 6.4:
- Take a screenshot of the JMeter tool configuration (HTTP request element) which you used for load generation.
- Take a screenshot of the JMeter result visualization view (Response Time graph element) which displays the results.
In case of issues, check these potential solutions to common issues:
- If you cannot access your cloud instances over SSH:
  - To access the local university cloud resources your computer has to be inside the Institute network, so you should set up a VPN connection to the university network.
  - VPN (English on the right side column) - https://wiki.ut.ee/pages/viewpage.action?pageId=17105590
- You might encounter an error stating that something is locked; this means Ubuntu is running some updates in the background, so please give it a few minutes to complete and try again later. If you still have no luck, ask the lab instructor for help.
- If you get errors about permissions when running command line commands inside the instance:
  - In several exercises you are asked to modify, delete or edit files outside your home folder.
  - You will have to use the sudo command to elevate the permissions of file manipulation commands to be able to complete such operations.
  - For example: sudo nano /etc/nginx/sites-enabled/default will run the nano command under the root user with elevated permissions.
  - NB! But be careful, not everything should be run through sudo!