https://aws.amazon.com/fr/about-aws/global-infrastructure/
https://aws.amazon.com/fr/documentation/
https://aws.amazon.com/fr/free/
https://aws.amazon.com/fr/free/#software
http://docs.aws.amazon.com/fr_fr/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Read https://aws.amazon.com/fr/getting-started/tutorials/launch-a-virtual-machine/
The appropriate user names are as follows:
- Amazon Linux 2 or the Amazon Linux AMI: ec2-user
- CentOS AMI: centos
- Debian AMI: admin or root
- Fedora AMI: ec2-user or fedora
- RHEL AMI: ec2-user or root
- SUSE AMI: ec2-user or root
- Ubuntu AMI: ubuntu
If ec2-user and root do not work, check with your AMI provider.
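The list above can be condensed into a small helper; a sketch (the AMI labels used here are informal names for this exercise, not official identifiers):

```shell
# Print the default SSH user for a given AMI flavor.
# Falls back to ec2-user, the most common default.
default_user() {
  case "$1" in
    amazon-linux|fedora|rhel|suse) echo "ec2-user" ;;
    centos)                        echo "centos"   ;;
    debian)                        echo "admin"    ;;
    ubuntu)                        echo "ubuntu"   ;;
    *)                             echo "ec2-user" ;;
  esac
}

default_user ubuntu   # prints "ubuntu"
```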
$ chmod 400 MyFirstKey.pem
$ ssh -i MyFirstKey.pem ubuntu@ec2-176-34-128-40.eu-west-1.compute.amazonaws.com
$ sudo apt install apache2 -y
$ sudo apt install git
$ git clone https://github.com/xxxx/xxxxx
Note: you can, for example, use the following static web site: https://github.com/cloudacademy/static-website-example or this one, https://github.com/daviddias/static-webpage-example/tree/master/src, which uses the mini CSS style.
Access to your website
Subsidiary question: do you know how to make sure that access to your website is password protected?
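One classic answer, sketched here for Apache on Ubuntu (the user name, password, and paths below are examples): create an htpasswd file, then require a valid login in the site configuration.

```
$ sudo apt install -y apache2-utils
$ sudo htpasswd -cb /etc/apache2/.htpasswd student s3cret

# In /etc/apache2/sites-available/000-default.conf (or in a .htaccess file,
# if AllowOverride permits it):
<Directory /var/www/html>
    AuthType Basic
    AuthName "Restricted area"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>

$ sudo systemctl reload apache2
```

After the reload, the browser prompts for the login/password before serving any page of the site.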
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
xvda1   202:1    0    8G  0 part /
xvdf    202:80   0    2G  0 disk
loop0     7:0    0 87.9M  1 loop /snap/core/5742
loop1     7:1    0 16.5M  1 loop /snap/amazon-ssm-agent/784
$ sudo file -s /dev/xvdf
/dev/xvdf: data

--> Create the file system for the EBS /dev/xvdf:

$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 0110556b-71b8-46e0-a5af-d4d48bef0d80
Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

--> Create a mount point so that /data is seen as a storage space (possibly change the permissions with chmod to allow writes to this volume):

$ sudo mkdir /data
$ sudo mount /dev/xvdf /data/

--> Testing. Did your volume mount correctly?

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            488M     0  488M   0% /dev
tmpfs           100M  3.3M   96M   4% /run
/dev/xvda1      7.7G  4.0G  3.8G  52% /
tmpfs           496M     0  496M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
/dev/loop0       88M   88M     0 100% /snap/core/5742
/dev/loop1       17M   17M     0 100% /snap/amazon-ssm-agent/784
tmpfs           100M     0  100M   0% /run/user/1000
/dev/xvdf       2.0G  3.0M  1.8G   1% /data
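A crude first comparison between the root disk and the newly mounted EBS volume can be made with dd (a sketch: on the VM you would point DIR1 at /home/ubuntu and DIR2 at /data; a real benchmark such as bonnie++ gives much more detail):

```shell
# Time a 32 MB synchronous write in each directory and compare the
# throughput reported on dd's last output line.
DIR1=${DIR1:-/tmp}   # on the VM: DIR1=/home/ubuntu
DIR2=${DIR2:-/tmp}   # on the VM: DIR2=/data

dd if=/dev/zero of="$DIR1/ddtest" bs=1M count=32 conv=fsync 2>&1 | tail -n 1
dd if=/dev/zero of="$DIR2/ddtest" bs=1M count=32 conv=fsync 2>&1 | tail -n 1
rm -f "$DIR1/ddtest" "$DIR2/ddtest"
```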
Run an I/O benchmark (bonnie++, for example) first on /home/ubuntu and then on the /data directory. We assume here that we are using an Ubuntu AMI. And now the question: is the read/write performance the same?

Read https://aws.amazon.com/fr/getting-started/tutorials/launch-windows-vm/
https://fr.wikipedia.org/wiki/Remote_Desktop_Protocol
You will need an RDP client, for example: https://cdn.devolutions.net/download/Setup.RemoteDesktopManagerFree.5.1.2.0.exe
Questions: a) do you have to install the RDP client on your premises or in the cloud? b) does your Windows server need to have the server that provides the RDP service enabled?
Extra work in case you have time. Read the comments in the middle of the page https://lipn.univ-paris13.fr/~cerin/LPASUR/ to understand how to export a screen from AWS to your browser with VNC technology (https://fr.wikipedia.org/wiki/Virtual_Network_Computing).
Read https://aws.amazon.com/fr/getting-started/tutorials/backup-files-to-amazon-s3/
Create a new bucket
Upload an image in JPG format into the bucket
Get the URL of the image object
Create the following web page testS.html and replace "bla bla" in src="bla bla" with the URL of your image
<!DOCTYPE html>
<html>
<head>
<style>
div.polaroid {
  width: 350px;
  box-shadow: 0 4px 8px 0 grey;
  text-align: center;
}
div.container {
  padding: 10px;
}
</style>
</head>
<body>
<h2>Cards Image</h2>
<div class="polaroid">
  <img src="https://test-gk-3384.s3.us-east-2.amazonaws.com/image.jpg" alt="Norway" style="width:100%">
  <div class="container">
    <p>Hardanger, Norway</p>
  </div>
</div>
</body>
</html>
Install and run the TPC-H benchmark (http://tpc.org/tpch/default5.asp), which estimates the performance of a SQL database management system.
After getting the sources, you have to compile them. You may need to install make and gcc with 'sudo apt install make gcc binutils' because the sources are written in C.
The generation of the lineitem.tbl table is done by the command './dbgen -v -T L -s 1'.
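Before loading, it is worth checking the shape of the generated file; a sketch (the sample row below is fabricated in the lineitem layout, including dbgen's trailing '|' delimiter):

```shell
# dbgen writes pipe-delimited rows that end with a trailing '|',
# so a valid lineitem row yields 16 fields (NF-1 in awk terms).
count_fields() { awk -F'|' '{ print NF - 1; exit }' "$1"; }

printf '1|155190|7706|1|17|21168.23|0.04|0.02|N|O|1996-03-13|1996-02-12|1996-03-22|DELIVER IN PERSON|TRUCK|some comment text|\n' > sample.tbl
count_fields sample.tbl   # prints 16

# On the real output: count_fields lineitem.tbl
# and check the size:  wc -l lineitem.tbl   (about 6 million rows at -s 1)
```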
The SQL script for creating the table is:
drop table if exists LINEITEM;
CREATE TABLE LINEITEM (
  L_ORDERKEY      INTEGER NOT NULL,
  L_PARTKEY       INTEGER NOT NULL,
  L_SUPPKEY       INTEGER NOT NULL,
  L_LINENUMBER    INTEGER NOT NULL,
  L_QUANTITY      DECIMAL(15,2) NOT NULL,
  L_EXTENDEDPRICE DECIMAL(15,2) NOT NULL,
  L_DISCOUNT      DECIMAL(15,2) NOT NULL,
  L_TAX           DECIMAL(15,2) NOT NULL,
  L_RETURNFLAG    CHAR(1) NOT NULL,
  L_LINESTATUS    CHAR(1) NOT NULL,
  L_SHIPDATE      DATE NOT NULL,
  L_COMMITDATE    DATE NOT NULL,
  L_RECEIPTDATE   DATE NOT NULL,
  L_SHIPINSTRUCT  CHAR(25) NOT NULL,
  L_SHIPMODE      CHAR(10) NOT NULL,
  L_COMMENT       VARCHAR(44) NOT NULL);
load data local infile '/home/ec2-user/2.18.0_rc2/dbgen/lineitem.tbl'
  into table LINEITEM fields terminated by '|' lines terminated by '\n';
This script can be executed with the command 'mysql -u username -p databasename -h hostamazon < file.sql'. Of course, the mysql client must be installed.
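After the load finishes, a quick sanity check (a sketch; the expected order of magnitude assumes scale factor 1):

```sql
-- Row count of LINEITEM: about 6 million rows at scale factor 1
SELECT COUNT(*) FROM LINEITEM;
```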
Once connected to the database, we can execute the 1.sql query found on the page https://github.com/catarinaribeir0/queries-tpch-dbgen-mysql
Tutorials to read: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html and https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2.html and https://docs.aws.amazon.com/fr_fr/cli/latest/userguide/install-cliv2-linux.html
Configure your CLI by first reading https://docs.aws.amazon.com/fr_fr/cli/latest/userguide/cli-chap-configure.html and then type the command:
$ aws configure
As a prerequisite to this command, you must retrieve the access key and the secret key associated with your AWS account.
Handling the CLI
$ aws ec2 describe-key-pairs --key-name myfirstkey
$ aws ec2 run-instances --image-id ami-08660f1c6fb6b01e7 --count 1 --instance-type t2.micro --key-name myfirstkey
....
$ ssh -i myfirstkey.pem ubuntu@ec2-34-243-23-80.eu-west-1.compute.amazonaws.com
$ aws ec2 terminate-instances --instance-ids i-02f2f4efd1218ad23
Read https://en.wikipedia.org/wiki/Amazon_DynamoDB
https://aws.amazon.com/fr/dynamodb/getting-started/
https://aws.amazon.com/fr/getting-started/hands-on/create-nosql-table/
Install and test DynamoDB with the benchmark https://en.wikipedia.org/wiki/YCSB whose sources are located here https://github.com/brianfrankcooper/YCSB.
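For reference, running YCSB against DynamoDB takes a properties file plus a load/run pair of commands; a sketch, with property names as used by the YCSB DynamoDB binding (check its README, and adapt the key name, region, and paths to your own table):

```
# dynamodb.properties (example values)
dynamodb.awsCredentialsFile = /home/ubuntu/credentials.properties
dynamodb.primaryKey = firstname
dynamodb.endpoint = https://dynamodb.us-east-1.amazonaws.com
```

Then:

```
$ ./bin/ycsb load dynamodb -P workloads/workloada -P dynamodb.properties
$ ./bin/ycsb run  dynamodb -P workloads/workloada -P dynamodb.properties
```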
Rekognition is a service that automatically recognizes objects in a bitmap image. Under the hood, it relies on artificial neural networks.
Read the page https://aws.amazon.com/fr/rekognition/getting-started/ (follow the 3 steps).
Write a program that sends a bitmap image to Rekognition, runs the analysis, and finally displays the results (object names and confidence scores) in an HTML page. You have at least two ways to do it:
https://chalice-workshop.readthedocs.io/en/latest/media-query/01-intro-rekognition.html with the CLI
https://docs.aws.amazon.com/code-samples/latest/catalog/code-catalog-python-example_code-rekognition.html with the Python SDK
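Whichever route you take, the answer comes back as JSON; a sketch of the CLI call and of extracting what the HTML page needs (the bucket and key names are placeholders, and the labels.json content below is fabricated to show the response shape):

```shell
# Real call (needs credentials configured and the image already in S3):
#   aws rekognition detect-labels \
#       --image '{"S3Object":{"Bucket":"my-bucket","Name":"image.jpg"}}' \
#       --max-labels 5 > labels.json

# Fabricated extract of the response shape:
cat > labels.json <<'EOF'
{"Labels": [{"Name": "Dog", "Confidence": 98.7},
            {"Name": "Pet", "Confidence": 98.7}]}
EOF

# Pull out the object names and confidence scores:
python3 - <<'EOF'
import json
for label in json.load(open("labels.json"))["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
EOF
```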
https://docs.aws.amazon.com/fr_fr/vpc/latest/userguide/what-is-amazon-vpc.html and https://www.qwiklabs.com/focuses/15673?parent=catalog
According to Wikipedia, AWS CloudFormation lets you create AWS resources (called stacks in CloudFormation terminology) using YAML or JSON documents, with AWS taking care of using these definitions to provision and configure the required resources. CloudFormation is an early example of a declarative Infrastructure-as-Code tool. You can create AWS CloudFormation documents with the Amazon CloudFormer tools or manually with your favorite text editor; once your documents are created, you can execute them using the AWS Web Console or the AWS CLI.
Creating an Amazon Virtual Private Cloud (VPC) with AWS CloudFormation
https://www.qwiklabs.com/focuses/15515?parent=catalog
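As a taste of what the lab manipulates, here is a minimal CloudFormation template for a VPC (a sketch; the CIDR block and logical names are arbitrary examples):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal VPC example
Resources:
  LabVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref LabVpc
```

Such a template can be executed from the Web Console or with a command like 'aws cloudformation deploy --template-file vpc.yaml --stack-name lab-vpc'.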
The problem is now to ensure a good balance of the load, and to control it automatically (no server should sit idle while others carry a heavy workload). You can also read the following section.
Working with Elastic Load Balancing
https://docs.aws.amazon.com/fr_fr/elasticloadbalancing/latest/classic/introduction.html and https://www.qwiklabs.com/focuses/14125?parent=catalog
https://docs.aws.amazon.com/fr_fr/autoscaling/ec2/userguide/GettingStartedTutorial.html
https://www.qwiklabs.com/focuses/14834?parent=catalog
https://aws.amazon.com/fr/getting-started/hands-on/
We did this with the EC2 service GUI. We can choose Ubuntu, CentOS, SUSE or Amazon distributions, or even a Windows Server 2019 instance. We need to choose an instance type eligible for the free tier. To connect inside the VM, either we use the browser-based connection proposed by the console (no need for security keys), or we generate and download the keys (.pem file) and then connect, on the command line, with ssh -i as shown above.
You need to configure the Amazon firewall to say that you (your Apache server) want to be contacted. In other words, you need to expose ports 80 (http) and 443 (https) on the Internet, because these are the ports on which Apache servers communicate. This manipulation is independent of any command-line configuration of the instance; it is done with the Amazon graphical interface. It is also necessary to check that the Apache service is operational (via the command line: sudo systemctl status apache2, or httpd, depending on whether you deployed Ubuntu, CentOS, …). After this check you can type http://<public-DNS-of-your-instance> in your browser.
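The same firewall opening can also be done from the CLI instead of the graphical interface; a sketch (the security group ID is a placeholder for the one attached to your instance):

```
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 443 --cidr 0.0.0.0/0
```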
An “EBS” volume is a raw sequence of bytes. We have to format it with a file system type (ext4 in our labs, but we could have chosen other types: https://fr.wikipedia.org/wiki/Système_de_fichiers) using the mkfs command. Then we have to declare where users will be able to read/write this storage; this is done with the mount command. Be careful: when creating the instance (AMI), you have to declare that you want to use an EBS volume (and give its size). We also installed the bonnie++ benchmark to test the performance difference between the local SSD and the remote EBS disk (installing make and g++ to compile the sources).
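One caveat worth adding: a mount done by hand disappears at the next reboot. To make it permanent, the volume must be declared in /etc/fstab; a sketch, reusing the UUID printed by mkfs in the lab above (check yours with blkid):

```
$ sudo blkid /dev/xvdf
/dev/xvdf: UUID="0110556b-71b8-46e0-a5af-d4d48bef0d80" TYPE="ext4"
$ echo 'UUID=0110556b-71b8-46e0-a5af-d4d48bef0d80 /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
$ sudo mount -a    # reports nothing if the entry is correct
```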
It's a bit like Dropbox. The Lab is done entirely in graphical mode… no more command line (phew). If you want someone to be able to access your bucket, you have to make it public, as well as the objects inside it. If you are on a VM inside Amazon and the application under development does not require exposure outside Amazon, there is no need to make the bucket public.