Practice 10 - Cloud service deployment using xOpera and TOSCA
In this practice session you will learn about cloud service deployment using TOSCA and xOpera. You will learn how to create service templates using TOSCA together with Ansible, and how to deploy them automatically using an orchestration engine known as xOpera. Finally, you will create and deploy a simple scenario on an OpenStack cloud.
The lab content is related to the RADON Horizon 2020 EU project (http://radon-h2020.eu/), whose goal is to unlock the benefits of serverless FaaS for the European software industry using the TOSCA language and the tools developed in the project.
- G. Casale, M. Artac, W.-J. van den Heuvel, A. van Hoorn, P. Jakovits, F. Leymann, M. Long, V. Papanikolaou, D. Presenza, A. Russo, S. N. Srirama, D. A. Tamburri, M. Wurster, L. Zhu. RADON: rational decomposition and orchestration for serverless computing. SICS Software-Intensive Cyber-Physical Systems, 2019, pp. 1-11. https://doi.org/10.1007/s00450-019-00413-w
References
- TOSCA: http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.3/TOSCA-Simple-Profile-YAML-v1.3.html
- TOSCA: Topology and Orchestration Specification of Cloud Applications
- It is a standard created by a group of industry experts.
- The first version (1.0) of the standard was released on 16 January 2014.
- It is used to describe the topology of cloud-based web services: the components of the services, their relationships, and the processes that manage them.
- Cloudify is an open-source cloud service orchestration framework based on TOSCA.
- Similarly, Alien4Cloud is an open-source cloud application lifecycle management platform based on TOSCA.
- Ubicity provides tooling and a cloud orchestration environment based on TOSCA.
- SeaClouds and DICE are some related research projects based on this standard.
- TOSCA Example-1: Template for deploying a single server
```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

description: Template for deploying a single server with predefined properties.

topology_template:
  node_templates:
    db_server:
      type: tosca.nodes.Compute
      capabilities:
        # Host container properties
        host:
          properties:
            num_cpus: 1
            disk_size: 10 GB
            mem_size: 4096 MB
        # Guest Operating System properties
        os:
          properties:
            # host Operating System image properties
            architecture: x86_64
            type: linux
            distribution: rhel
            version: 6.5
```
Exercise 10.1. Working with xOpera orchestrator
opera aims to be a lightweight orchestrator compliant with OASIS TOSCA; the initial compliance is with the TOSCA Simple Profile in YAML v1.2. In simple words, this orchestrator takes a TOSCA service template (similar to the TOSCA example above) and manages the life cycle of the nodes (services). Life cycle refers to creating, configuring, starting, stopping, and deleting the services.
- Task 1: Create an Ubuntu VM in https://stack.cloud.hpc.ut.ee/ with size m1.xsmall
- Task 2: Install the xOpera orchestrator:
- Connect to the VM and update apt:
sudo apt update
- Install the Python virtual environment packages:
sudo apt install -y python3-venv python3-wheel python-wheel-common
- Create a directory
mkdir opera && cd opera
- Download opera:
wget https://github.com/xlab-si/xopera-opera/archive/0.5.1.tar.gz
and untar it:
tar -xvf 0.5.1.tar.gz
- Create a python virtual environment
python3 -m venv .venv
- Activate it:
. .venv/bin/activate
Now you should see (.venv) $ in your prompt.
- Change the directory:
cd xopera-opera-0.5.1
- Install opera together with the OpenStack libraries needed to connect to OpenStack:
pip install -U opera[openstack]
and
sudo apt install python3-openstackclient
Exercise 10.2. Deploying hello world service template
Make sure that you have finished Exercise 10.1 and that you are inside the virtual environment. Now follow the steps below:
- Make sure you are inside the directory
xopera-opera-0.5.1
- Move to examples
cd examples/hello
- Deploy the hello service:
opera deploy service.yaml
Now you should see the deployment output.
- In the current directory, you will find a directory .opera that contains all the necessary files:
ls -la
- Now, let’s see the service.yaml file.
We have two nodes: my-workstation and hello. my-workstation is a Compute-type node, which has the capability to host a software component. hello is a node of type hello_type, derived from the SoftwareComponent type. hello requires a Compute node (my-workstation in this case). The create.yml file provides the instructions to create this node/service.
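Shortened, the hello service template looks roughly like this (a sketch reconstructed from the description above, not a verbatim copy of the file in your download; the marker default value is an assumption):

```yaml
tosca_definitions_version: tosca_simple_yaml_1_2

node_types:
  hello_type:
    derived_from: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:
        create:
          implementation: playbooks/create.yml
          inputs:
            marker: { type: string, default: hello-from-opera }  # assumed default value

topology_template:
  node_templates:
    my-workstation:
      type: tosca.nodes.Compute
      attributes:
        private_address: localhost   # deploy on the local machine
        public_address: localhost

    hello:
      type: hello_type
      requirements:
        - host: my-workstation
```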
Now let's look at the create.yml Ansible file.
When the software component is created, it performs two tasks: it makes a directory /tmp/playing-opera/hello and creates a file hello.txt in the same directory containing the value of the marker variable. Internally, xOpera invokes the ansible command to execute create.yml, so create.yml should contain only Ansible tasks. For more on Ansible, see: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html
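As a sketch, such a create.yml could look like the following (module parameters follow standard Ansible; the actual file in the examples directory may differ slightly):

```yaml
---
- hosts: all
  tasks:
    - name: Create the working directory
      file:
        path: /tmp/playing-opera/hello
        state: directory

    - name: Write the marker value into hello.txt
      copy:
        dest: /tmp/playing-opera/hello/hello.txt
        content: "{{ marker }}"
```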
- Now let's check the tmp directory:
ls /tmp
This directory should contain a sub-directory playing-opera, with the hello.txt file inside its hello sub-directory.
Exercise 10.3. Deploying complex services using xOpera and TOSCA
Prerequisites: Make sure that all the steps in Exercise 10.1 have been followed.
- You need to obtain/download the OpenStack credentials. For this, go to API Access -> Download OpenStack RC file -> OpenStack RC file (ldpc-openrc.sh). A sample screenshot is given in the figure below.
Copying ldpc-openrc.sh to the OpenStack VM:
For Windows users: you need to copy the downloaded ldpc-openrc.sh into your OpenStack VM using Git Bash. Open Git Bash on your Windows machine and go to the directory where you downloaded ldpc-openrc.sh. Now use the following command to copy the file to the VM, modifying the values as needed.
Note: if you do not have Git Bash, you can download it from https://gitforwindows.org/.
scp -i path-to-private-key-of-vm/yourkey-name.pem ldpc-openrc.sh username@your-ip:
For example:
scp -i C:/Users/poojara/Desktop/shivupoojar.pem ldpc-openrc.sh ubuntu@172.17.67.23:
For Linux users:
In a Linux terminal, use the following command to copy the file to the VM:
scp -i path-to-private-key-of-vm/yourkey-name.pem ldpc-openrc.sh username@your-ip:
For example:
scp -i ~/Desktop/shivupoojar.pem ldpc-openrc.sh ubuntu@172.17.67.23:
Now run:
source ldpc-openrc.sh
It will ask for a password; provide your OpenStack login password. Then test it using:
nova list
to list all VMs from OpenStack.
NB!!! The following instructions are only for understanding the TOSCA template; the actual deployment is performed in Task 10.3.1.
Let's extend the service.yaml template to implement the scenario given in the architecture. To implement the scenario, we will require two VMs:
- Nginx Load Balancer (VM size m1.xsmall)
- WebApp1 (VM size m1.xsmall)
The web server application is a simple PHP page that displays some data. Clients access the web application, and one of the servers provides the service through the load balancer. Let's start with preparing the TOSCA service template.
Warning: the code in blue color is very specific to the current environment. It is strongly advised to update that content if the environment changes.
- We need a TOSCA node type for creating the virtual machines. The basic information needed before creating a virtual machine is: the name of the VM, the image, the flavor, the network, and the key_name used for access.
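Sketched in TOSCA, such a node type could look like the following (the type name, exact property list, and playbook paths are illustrative assumptions; the real definitions are in the repository cloned in Task 10.3.1):

```yaml
node_types:
  radon.nodes.VM.OpenStack:          # hypothetical type name
    derived_from: tosca.nodes.Compute
    properties:
      name:     { type: string }     # VM name shown in OpenStack
      image:    { type: string }     # e.g. an Ubuntu image name or ID
      flavor:   { type: string }     # e.g. m1.xsmall
      network:  { type: string }     # network the VM attaches to
      key_name: { type: string }     # OpenStack key pair used for SSH access
    interfaces:
      Standard:
        create: playbooks/vm/create.yml   # assumed playbook path
        delete: playbooks/vm/delete.yml
```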
- Now we need to define the TOSCA node type for installing and configuring Nginx on each VM in the same service.yaml file.
- Define the TOSCA node type for the load balancer in the same service.yaml file.
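For illustration, the Nginx and load balancer node types could be sketched as follows (the type names and the backends property are assumptions; only the playbooks/nginx/install.yml path comes from the steps in Task 10.3.1):

```yaml
node_types:
  radon.nodes.Nginx:                 # hypothetical: installs Nginx on its host VM
    derived_from: tosca.nodes.SoftwareComponent
    interfaces:
      Standard:
        create: playbooks/nginx/install.yml

  radon.nodes.LoadBalancer:          # hypothetical: configures Nginx as a load balancer
    derived_from: radon.nodes.Nginx
    properties:
      backends: { type: list }       # addresses of the web application servers
    interfaces:
      Standard:
        create: playbooks/loadbalancer/create.yml   # assumed playbook path
```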
- Now define the TOSCA node type for deploying the webApp. Also provide the database information needed to access the remote data.
Based on the above node type definitions, it is time to create the actual nodes using those types. For this, the topology_template section (in the same service.yaml template file) contains a node_templates section within which all the nodes must be defined.
- TOSCA node definitions to create the four virtual machines/instances:
- The node_templates section also contains the nodes to install Nginx on the virtual machines, i.e. on the webApp1 and webApp2 servers.
- Node definition for Nginx load balancer
- Node definition for deploying the web server application. The underlined Ansible script deploys the WebApp that displays the web page of your application.
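Putting the pieces together, the node_templates section wires each software node to its host VM roughly like this (all names and property values here are illustrative; use the template from the repository in Task 10.3.1 for the actual deployment):

```yaml
topology_template:
  node_templates:
    lb-vm:
      type: radon.nodes.VM.OpenStack   # hypothetical VM type
      properties:
        name: nginx-lb
        image: ubuntu-18.04            # placeholder image name
        flavor: m1.xsmall
        network: your-network          # placeholder network name
        key_name: your-keypair

    webapp1-vm:
      type: radon.nodes.VM.OpenStack
      properties:
        name: webapp1
        image: ubuntu-18.04
        flavor: m1.xsmall
        network: your-network
        key_name: your-keypair

    nginx-webapp1:
      type: radon.nodes.Nginx          # installs Nginx on the web server VM
      requirements:
        - host: webapp1-vm

    loadbalancer:
      type: radon.nodes.LoadBalancer   # forwards client requests to the web servers
      requirements:
        - host: lb-vm
```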
The final visualization of the service template structure/relationships: create and delete are the life-cycle interface operations that xOpera uses for each of these nodes, internally implemented as Ansible playbooks.
Task 10.3.1
- Step 1: Follow the instructions below to modify and deploy the service template using opera. Make sure that you are inside the directory xopera-opera-0.5.1 and in the virtual environment.
Instructions in case you lose the connection or disconnect from the virtual environment:
- Activate the virtual environment:
. .venv/bin/activate
- Add your SSH key:
eval `ssh-agent`
ssh-add PATH_TO_YOUR_SSH_KEY
- Step 2: Now clone the git repository of the project containing the Ansible scripts and the TOSCA service template:
git clone https://github.com/shivupoojar/webApp-loadbalancer-DB-TOSCA.git
Move into the project:
cd webApp-loadbalancer-DB-TOSCA
Do:
ls -l
Now you should see the service template service.yaml.
- Step 3: Now modify the following in:
vi service.yaml
Be careful when modifying the values in service.yaml mentioned in the figure below.
- Step 4: Update the path of nginx.conf in the playbooks/nginx/install.yml file, as shown in the figure below:
vi playbooks/nginx/install.yml
- Step 5: Here you need to modify the web page of the web server application to include your name. You can modify the page in:
vi playbooks/site/create.yml
Try modifying the section shown in the figure below.
- Step 6: Copy your OpenStack private key from your host machine to the VM, using Git Bash on Windows or directly from the terminal on Ubuntu. Please edit the paths in the command:
scp -i C:/Users/xxx/Desktop/xxx.pem C:/Users/xxx/Desktop/xxx.pem ubuntu@your-ip:
- Add your SSH key to the SSH agent. First activate the SSH agent service:
eval `ssh-agent`
then add the key:
ssh-add PATH_TO_YOUR_SSH_KEY
- Step 7: After the modifications, it is time to deploy the final service template:
opera deploy service.yaml
NB!!! It will take some time to complete the deployment. After the command completes, please take a screenshot from the OpenStack portal (https://stack.cloud.hpc.ut.ee/project/instances/) showing your created instances.
- Step 8: On successful execution, type the load balancer IP into your browser. You should see your modified web page served by the web server, as shown in the figure below.
NB!!! Take screenshot of web application running in browser
- Step 9: Please destroy the environment after completing the experiment using the command:
opera undeploy
NB!!! Take screenshot showing output of undeploy command.
Deliverables:
- A copy of service.yaml file
- Screenshot of final web server application running in browser.
- Screenshot showing instances running in openstack
- Screenshot showing output of the command in step 9 of 10.3.1
Potential issues:
- If you run into issues, please check the following things:
- SSH key is added to the SSH agent:
ssh-add -l
(should show an SSH key). If not, add the key again using the ssh-add command.
- OpenStack API has been configured:
nova list
(should return a list of instances). If not, run:
source ldpc-openrc.sh
again.
- You have activated the Python environment
- These things need to be done again if you have exited the instance in the meantime.
- If you get an error, please also check the JSON error text above the Python error.
- Error:
MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url
refers to OpenStack access not being configured. In that case, run:
source ldpc-openrc.sh
again.
- If you get the error:
BadRequestException: 400: Client Error for url: https://cloud.hpc.ut.ee:8774/v2.1/os-volumes_boot, Invalid key_name provided
this means the OpenStack access key in your service.yaml file is not correct.
- Make sure you did not include .pem.
- It should be the same as the one you see in https://stack.cloud.hpc.ut.ee/project/instances/ under your instance's Key Pair column.