Amazon Web Services (AWS) houses a collection of cloud compute services. The idea behind compute in AWS is to give users complete control of their computing resources, running on Amazon's proven computing environment. Let's take a high-level look at the compute services AWS has to offer!
EC2 - Elastic Compute Cloud
EC2 is one of the most common compute services used within AWS, allowing you to deploy virtual servers within your AWS environment. Setting up an Elastic Compute Cloud instance involves the following options:
- Amazon machine images (AMIs)
- Instance types
- User data
- Storage options
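The four options above map directly onto flags of the AWS CLI's `run-instances` command. A minimal sketch, where the AMI ID, user-data script, and volume settings are all placeholders:

```shell
# Each flag corresponds to one of the set-up options listed above.
# All IDs and file names here are illustrative placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --user-data file://bootstrap.sh \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":20,"VolumeType":"gp3"}}]'
```

The `--image-id` selects the AMI, `--instance-type` the hardware profile, `--user-data` a script run at first boot, and `--block-device-mappings` the storage attached to the instance.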
ECS - Elastic Container Service
An application container holds everything an application needs to run in one place. This keeps the application independent of the operating system, making container applications extremely portable. The application will always run as expected regardless of the deployment location.
AWS Fargate is a serverless compute engine for containers. It removes the need to provision and manage servers by allocating only the right amount of compute. There is no longer a need to choose instances and scale cluster capacity, so customers only pay for the resources required to run their containers.
Amazon Elastic Container Service (Amazon ECS) uses AWS Fargate to remove the need for you to manage your own cluster management system. No additional cluster management or monitoring software is required.
When launching an ECS cluster, there are two deployment modes: Fargate and EC2. The Fargate launch requires less configuration; however, with an EC2 launch you have a far greater scope of customisation and configurable parameters.
The clusters themselves work together to collate resources, such as CPU and memory. They are dynamically scalable, but only within a single region.
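The Fargate launch mode described above can be sketched with the AWS CLI; the cluster name, task definition, and subnet ID below are placeholders, and the task definition is assumed to already be registered:

```shell
# Create a cluster, then run a task on it using the Fargate launch type.
# Names and IDs are illustrative placeholders.
aws ecs create-cluster --cluster-name demo-cluster
aws ecs run-task \
  --cluster demo-cluster \
  --launch-type FARGATE \
  --task-definition demo-task:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'
```

Note that with `--launch-type FARGATE` no instance types or cluster capacity are specified anywhere; AWS allocates the compute for the task.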
ECR - Elastic Container Registry
Relating back to ECS, Docker is a piece of software that allows you to automate the installation and distribution of applications inside containers. A Docker image is a file that is used to execute code in a Docker container.
The Amazon Elastic Container Registry (Amazon ECR) provides a secure location to store and manage your Docker images, which can be distributed and deployed across your applications. ECR has the five following components: the registry, authorisation token, repository, repository policy, and image.
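The authorisation token, registry, and repository components come together in a typical push workflow. A sketch, where the account ID, region, and repository name are placeholders:

```shell
# Fetch an authorisation token and log Docker in to the registry.
# Account ID (123456789012), region, and repo name are placeholders.
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Build the image, tag it for the ECR repository, and push it.
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
```

Whether the push succeeds is then governed by the repository policy attached to `my-app`.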
EKS - ECS for Kubernetes
Kubernetes is an open-source container orchestration tool designed for the automation, deployment, scaling, and operation of containerised applications. The Elastic Container Service for Kubernetes (EKS) therefore allows Kubernetes to run across AWS infrastructure.
Traditionally, you would operate the Kubernetes control plane yourself, dictating how Kubernetes and your clusters communicate with each other. Under EKS, however, AWS is responsible for the control plane duties.
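One common way to stand up an EKS cluster is `eksctl`, a widely used command-line tool for EKS (not part of the AWS CLI itself). A sketch, with the cluster name, region, and node count as placeholders:

```shell
# Create an EKS cluster; AWS provisions and manages the control plane,
# while eksctl creates the worker nodes. Name, region, and node count
# are illustrative placeholders.
eksctl create cluster --name demo-cluster --region eu-west-1 --nodes 2

# Standard Kubernetes tooling then talks to the AWS-managed control plane.
kubectl get nodes
```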
AWS Elastic Beanstalk
AWS Elastic Beanstalk analyses an uploaded web application and automatically provisions the AWS resources it requires to operate. The deployment, provisioning, monitoring, and scaling of the environment become the responsibility of AWS, removing manual infrastructure creation from engineers and providing a simple, effective, and quick solution for deploying web applications.
Once a web application is created, engineers upload the version to AWS, which then launches the appropriate environment. From there, the environment can be edited and managed by the engineers.
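That upload-and-launch workflow is usually driven by the Elastic Beanstalk CLI (`eb`). A sketch, where the application name, environment name, and platform string are assumptions:

```shell
# Initialise a Beanstalk application in the current project directory.
# Application/environment names and the platform string are placeholders.
eb init my-web-app --platform python-3.11 --region eu-west-1

# Launch an environment; Beanstalk provisions the underlying resources.
eb create my-web-env

# Upload and deploy a new version of the application.
eb deploy
```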
AWS Lambda
AWS Lambda is another method by which AWS removes responsibility from application engineers, in this case providing a serverless compute service that allows application code to run without an engineer having to manage and provision EC2 instances. AWS manages and provisions the compute resources required for an application to run.
There are four steps for AWS Lambda operations:
- Uploading your code
- Configuring your Lambda function's execution
- Upon a configured event, Lambda will run your code (using the defined compute power)
- Lambda records compute time and resources for the functions for cost calculation
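The code you upload in the first step is simply a handler function that Lambda invokes on each configured event. A minimal Python sketch, where the event fields and return shape are illustrative rather than required by Lambda:

```python
# A minimal Lambda handler. AWS invokes this on each configured event,
# passing the event payload and a runtime context object, and records
# the compute time used for billing. The "name" field is a placeholder.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

When packaged and uploaded, the function would be registered with a handler reference such as `module_name.lambda_handler` in its configuration.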
AWS Batch
Batch processing refers to the execution of a series of jobs or tasks using a vast amount of compute power across a cluster. AWS Batch removes the constraints of non-cloud batch processing, with the provisioning, monitoring, maintenance, and management of the clusters themselves again taken care of by AWS.
Within AWS Batch, a job is a unit of work to be run, defined by how the job will run and with what configuration. Jobs are typically scheduled on a first-in, first-out basis, but also by evaluating the priorities of the current job queues.
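Submitting a job brings the queue and job definition together. A sketch, assuming a queue and job definition already exist; all names are placeholders:

```shell
# Submit a unit of work to an existing queue, using a registered
# job definition. Names and revision numbers are placeholders.
aws batch submit-job \
  --job-name nightly-report \
  --job-queue demo-queue \
  --job-definition demo-job-def:1
```

AWS Batch then schedules the job against the queue's priority and provisions the compute needed to run it.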
Amazon Lightsail
Amazon Lightsail provides a simple virtual private server service, designed for easy configuration and set-up by small businesses or single users. You can run multiple Lightsail instances together, connect them to other AWS resources, and peer with an existing virtual private cloud (VPC).
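Lightsail's simplicity shows in how little is needed to create an instance: a blueprint (the OS or application image) and a bundle (the size). A sketch, where the instance name, zone, blueprint, and bundle IDs are placeholders:

```shell
# Create a Lightsail instance; the blueprint picks the image and the
# bundle picks the CPU/RAM/storage size. All values are placeholders.
aws lightsail create-instances \
  --instance-names demo-instance \
  --availability-zone eu-west-1a \
  --blueprint-id ubuntu_22_04 \
  --bundle-id nano_2_0
```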