Cloud Labs - Amazon

Introduction to Amazon Cloud

https://aws.amazon.com/fr/about-aws/global-infrastructure/

https://aws.amazon.com/fr/documentation/


Lab 1: AWS Free Account


https://aws.amazon.com/fr/free/

https://aws.amazon.com/fr/free/#software

http://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html


Lab 2: Linux VM


Introductory tutorial

Read https://aws.amazon.com/fr/getting-started/tutorials/launch-a-virtual-machine/

The appropriate user names are as follows:

For Amazon Linux 2 or the Amazon Linux AMI, the username is ec2-user.

For a CentOS AMI, the username is centos.

For a Debian AMI, the username is admin or root.

For a Fedora AMI, the username is ec2-user or fedora.

For a RHEL AMI, the username is ec2-user or root.

For a SUSE AMI, the username is ec2-user or root.

For an Ubuntu AMI, the username is ubuntu.

If ec2-user and root do not work, check with your AMI provider.

SSH Access

$ chmod 400 MyfirstKey.pem
$ ssh -i MyfirstKey.pem ubuntu@ec2-176-34-128-40.eu-west-1.compute.amazonaws.com

Installing an Apache server

$ sudo apt install apache2 -y

Host a web site in your web server

$ sudo apt install git
$ git clone https://github.com/xxxx/xxxxx
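
If the repository contains a static site, one way to publish it (a minimal sketch; the repository URL above is a placeholder and the directory name below is hypothetical) is to copy its files into Apache's default document root:

$ cd xxxxx                        # directory created by git clone (placeholder name)
$ sudo cp -r * /var/www/html/     # /var/www/html is the default document root on Ubuntu/Debian
$ sudo systemctl reload apache2   # optional: confirms the service reloads without errors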

Create an EBS (Amazon Elastic Block Store) volume for a Linux instance

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 2G 0 disk
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
$ sudo file -s /dev/xvdf
/dev/xvdf: data

-->Create the file system for the EBS volume /dev/xvdf:

$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 0110556b-71b8-46e0-a5af-d4d48bef0d80
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

-->Create a mount point so that /data is seen as a storage space
   (possibly change the permissions via chmod to allow writes to this volume)

$ sudo mkdir /data
$ sudo mount /dev/xvdf /data/

-->Testing. Did your volume mount correctly?
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            488M     0  488M   0% /dev
tmpfs           100M  3.3M   96M   4% /run
/dev/xvda1      7.7G  4.0G  3.8G  52% /
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
/dev/loop0       88M   88M     0 100% /snap/core/5742
/dev/loop1       17M   17M     0 100% /snap/amazon-ssm-agent/784
tmpfs           100M     0  100M   0% /run/user/1000
/dev/xvdf       2.0G  3.0M  1.8G   1% /data

Test I/O performance.
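
A rough way to measure write throughput is with dd, for example (a sketch only: treat the numbers as an order of magnitude and compare the EBS volume with the root volume):

$ sudo dd if=/dev/zero of=/data/testfile bs=1M count=512 oflag=direct   # write 512 MB to the EBS volume
$ sudo dd if=/dev/zero of=/tmp/testfile bs=1M count=512 oflag=direct    # same test on the root volume
$ sudo rm /data/testfile /tmp/testfile                                  # clean up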


Lab 3: Windows Server VM


https://aws.amazon.com/fr/getting-started/tutorials/launch-windows-vm/

https://cdn.devolutions.net/download/Setup.RemoteDesktopManagerFree.5.1.2.0.exe


Lab 4: Amazon S3


Create an S3 object storage space

Tutorial 1:

<!DOCTYPE html>
<html>
<head>
<style>
div.polaroid {
  width: 350px;
  box-shadow: 0 4px 8px 0 grey;
  text-align: center;
}
div.container {
  padding: 10px;
}
</style>
</head>
<body>
<h2>Cards Image</h2>
<div class="polaroid">
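  <!-- The src URL below points to a sample object; replace it with the public URL of an image in your own bucket -->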
  <img src="https://test-gk-3384.s3.us-east-2.amazonaws.com/image.jpg" alt="Norway" style="width:100%">
  <div class="container">
    <p>Hardanger, Norway</p>
  </div>
</div>
</body>
</html>
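
The image referenced in the page must exist in your bucket and be publicly readable. One way to upload it with the AWS CLI (the bucket name is just the example used above; ACL-based public access also requires that the bucket's "Block Public Access" settings allow it):

$ aws s3 cp image.jpg s3://test-gk-3384/image.jpg --acl public-read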

Tutorial 2: Hosting a static website in S3
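
A possible command-line outline, assuming a bucket name of your own and that public access is allowed on it (the console workflow from the AWS documentation achieves the same result):

$ aws s3 mb s3://my-static-site-example                                   # create the bucket (example name)
$ aws s3 website s3://my-static-site-example --index-document index.html --error-document error.html
$ aws s3 sync ./site s3://my-static-site-example --acl public-read        # upload the local ./site folder

The site is then served from the bucket's S3 website endpoint, shown in the bucket's properties in the console.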


Lab 5: Amazon RDS


Learn about the technologies surrounding RDS

WordPress as a Service

Benchmark RDS (performance analysis)

drop table if exists LINEITEM;
CREATE TABLE LINEITEM (
    L_ORDERKEY INTEGER NOT NULL,
    L_PARTKEY INTEGER NOT NULL,
    L_SUPPKEY INTEGER NOT NULL,
    L_LINENUMBER INTEGER NOT NULL,
    L_QUANTITY DECIMAL(15,2) NOT NULL,
    L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
    L_DISCOUNT DECIMAL(15,2) NOT NULL,
    L_TAX DECIMAL(15,2) NOT NULL,
    L_RETURNFLAG CHAR(1) NOT NULL,
    L_LINESTATUS CHAR(1) NOT NULL,
    L_SHIPDATE DATE NOT NULL,
    L_COMMITDATE DATE NOT NULL,
    L_RECEIPTDATE DATE NOT NULL,
    L_SHIPINSTRUCT CHAR(25) NOT NULL,
    L_SHIPMODE CHAR(10) NOT NULL,
    L_COMMENT VARCHAR(44) NOT NULL);
load data local infile '/home/ec2-user/2.18.0_rc2/dbgen/lineitem.tbl'
     into table LINEITEM fields terminated by '|'
     lines terminated by '\n';
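
These statements target MySQL/MariaDB on RDS. A hedged example of running them from the EC2 instance, assuming they are saved in a file lineitem.sql (the endpoint, user and database names are placeholders, and LOAD DATA LOCAL INFILE must be enabled on both client and server):

$ mysql --local-infile=1 -h mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com -u admin -p tpch < lineitem.sql
$ mysql -h mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com -u admin -p tpch \
      -e "SELECT L_RETURNFLAG, COUNT(*) FROM LINEITEM GROUP BY L_RETURNFLAG;"   # quick sanity check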

Lab 6: Amazon Command Line Interface (CLI)


Install the AWS CLI
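
One way to install it on Ubuntu (the packaged version may lag behind AWS's official installer documented at https://docs.aws.amazon.com/cli/):

$ sudo apt install awscli -y
$ aws --version                   # check that the CLI is on the PATH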

$ aws configure
$ aws ec2 describe-key-pairs --key-name myfirstkey
$ aws ec2 run-instances --image-id ami-08660f1c6fb6b01e7 --count 1 --instance-type t2.micro --key-name myfirstkey
....
$ ssh -i myfirstkey.pem ubuntu@ec2-34-243-23-80.eu-west-1.compute.amazonaws.com
$ aws ec2 terminate-instances --instance-ids i-02f2f4efd1218ad23
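
To check the state and public DNS name of a launched instance before connecting, something like (the instance ID is the one from the example above):

$ aws ec2 describe-instances --instance-ids i-02f2f4efd1218ad23 \
      --query "Reservations[].Instances[].[State.Name,PublicDnsName]" --output table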

Lab 7: Amazon DynamoDB


Read https://en.wikipedia.org/wiki/Amazon_DynamoDB

https://aws.amazon.com/fr/dynamodb/getting-started/

https://aws.amazon.com/fr/getting-started/hands-on/create-nosql-table/

Install and test DynamoDB with the YCSB benchmark (https://en.wikipedia.org/wiki/YCSB), whose sources are available at https://github.com/brianfrankcooper/YCSB.
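
Before moving on to YCSB, a table can also be created and queried directly from the CLI; a minimal sketch (the table and attribute names are arbitrary examples):

$ aws dynamodb create-table --table-name Music \
      --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
      --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
      --billing-mode PAY_PER_REQUEST
$ aws dynamodb put-item --table-name Music \
      --item '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}}'
$ aws dynamodb get-item --table-name Music \
      --key '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}}'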


Lab 8: AWS Rekognition


Introduction

Work to be done

Additional information


Lab 9: Advanced AWS Operations


Networking with Virtual Private Cloud (VPC)

https://docs.aws.amazon.com/fr_fr/vpc/latest/userguide/what-is-amazon-vpc.html and https://www.qwiklabs.com/focuses/15673?parent=catalog

Configuration Management with CloudFormation

https://www.qwiklabs.com/focuses/15515?parent=catalog

Scalable Deployment with Elastic Load Balancing (ELB)

https://docs.aws.amazon.com/fr_fr/elasticloadbalancing/latest/classic/introduction.html and https://www.qwiklabs.com/focuses/14125?parent=catalog

Auto Scaling

https://docs.aws.amazon.com/fr_fr/autoscaling/ec2/userguide/GettingStartedTutorial.html

https://www.qwiklabs.com/focuses/14834?parent=catalog

AWS Lambda, Cognito & DynamoDB

https://aws.amazon.com/fr/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/

AWS Batch & Step Functions

https://aws.amazon.com/fr/getting-started/hands-on/process-video-jobs-with-aws-batch-on-aws-step-functions/

AWS Hands-on Labs

https://aws.amazon.com/fr/getting-started/hands-on/


Hints (various aids)


Launch an EC2 instance (AMI).

We did this with the EC2 service GUI. We can choose an Ubuntu, CentOS, SUSE or Amazon Linux distribution, or even a Windows Server 2019 instance. We need to choose an instance type eligible for the free tier. To connect to the VM, we either use the browser-based interface offered by Amazon (no security keys needed), or we generate and download a key pair (.pem file) and then connect from the command line with ssh -i. Be careful: on a Unix system you have to change the permissions of the .pem file. It is Amazon that generates this .pem file, and it allows Amazon to ensure that the person using it really is who they claim to be. In other words, the .pem file provides secure authentication to Amazon, i.e. the authorization to access the VM (see https://fr.wikipedia.org/wiki/Authentification_forte).

Installing an Apache server in an Amazon instance

You need to configure the Amazon firewall (the instance's security group) to declare that you (your Apache server) want to be reachable. In other words, you need to expose ports 80 (HTTP) and 443 (HTTPS) to the Internet, because these are the ports on which Apache servers communicate. This manipulation is independent of any command-line configuration of the instance: it is done with the Amazon graphical interface. You should also check that the Apache service is running (via sudo systemctl status apache2 or httpd, depending on whether you deployed Ubuntu, CentOS or another distribution). After this check you can type http:// followed by the instance's public IP address in your browser to get the default Apache page (the index.html file). Depending on the distribution, this file is found under /var/www/html or /usr/share… We can modify this index.html and check, by refreshing the browser, that the change is effective.
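
For example, after opening port 80 in the security group (the commands below assume an Ubuntu instance; use httpd instead of apache2 on Amazon Linux/CentOS/RHEL):

$ sudo systemctl status apache2                                        # the service should be active (running)
$ curl -I http://localhost                                             # should answer HTTP/1.1 200 OK from inside the VM
$ echo "<h1>Hello from my EC2 instance</h1>" | sudo tee /var/www/html/index.html   # replace the default page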

EBS

An EBS volume is just a sequence of bytes (a block device). We have to give it a file system type (ext4 in our labs, but we could have chosen other types: https://fr.wikipedia.org/wiki/Système_de_fichiers) with the mkfs command. Then we have to tell the system where users will be able to read/write this storage; this is done with the mount command. Be careful: when creating the instance (AMI) you have to declare that you want to attach an EBS volume (and give its size). We also installed the bonnie++ benchmark (which required installing make and g++ to compile the sources) to test the performance difference between the SSD and the remote EBS disk.
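
On Ubuntu, bonnie++ is also available as a package, which avoids compiling it from source; a possible test run (remember the chmod mentioned above, or point it at a directory the current user can write to):

$ sudo apt install bonnie++ -y
$ bonnie++ -d /data -u ubuntu     # benchmark the mounted EBS volume
$ bonnie++ -d /tmp -u ubuntu      # compare with the root volume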

S3

It's a bit like Dropbox. This lab is done entirely in graphical mode… and no longer on the command line (phew). If you want someone else to be able to access your bucket, you have to make it public, as well as the objects inside the bucket. If you are on a VM inside Amazon and the application under development does not need to be exposed outside Amazon, there is no need to make the bucket public.