AWS Compute

What is Compute in AWS?

Amazon Web Services (AWS) houses a collection of cloud compute services. The idea behind compute in AWS is to give users complete control of their computing resources, running on Amazon’s proven computing environment. Let’s take a high-level look at the compute services AWS has to offer!

EC2 - Elastic Compute Cloud

EC2 is one of the most common compute services used within AWS, allowing you to deploy virtual servers within your AWS environment. Setting up an Elastic Compute Cloud instance involves the following options:

        • Amazon Machine Images (AMIs)
        • Instance types
        • Tenancy
        • User data
        • Storage options

Firstly, Amazon Machine Images are essentially pre-configured EC2 instance templates: an AMI is an image baseline that includes an operating system and applications, along with any custom configurations. Amazon has its own AMIs for general setups and a collection of AMIs shared by the community, as well as the option to purchase AMIs from the AWS Marketplace. You can also create custom AMIs for quick deployment.
 
The instance type refers to the size of an instance, which is determined by the required number of EC2 compute units, the number of virtual CPUs, and other factors affecting memory, storage and network performance. Amazon offers a range of instance types for different uses and applications, so you can select the most appropriate size or power of an instance for optimal performance.
 
EC2 tenancy could be considered rather self-explanatory. Like a housing tenancy, EC2 tenancy refers to the physical server where your instance is located in an AWS Data Center. So, you can have a shared tenancy (with other AWS customers), dedicated instances (where the hardware is not shared) and dedicated hosts (where you have additional control over the physical host).
 
Configuration of an EC2 instance also includes user data and storage options. The former allows you to enter commands that will run during the first boot cycle of the instance, letting you automate actions upon boot. Storage for EC2 falls into one of two categories: persistent storage and ephemeral storage. Persistent, or permanent, storage is made available by attaching Elastic Block Store (EBS) volumes, whereas ephemeral, or temporary, storage is provided by some EC2 instance types using local storage on the physical server.
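
To make these options concrete, here is a minimal sketch of the parameters you might pass to EC2’s RunInstances API (for example via boto3’s `ec2.run_instances(**params)`), combining an instance type, a user-data boot script and a persistent EBS volume. The AMI ID is a hypothetical placeholder, and the raw base64 encoding shown at the end is only needed when calling the API directly (boto3 encodes user data for you).

```python
import base64

# A shell script run once during the instance's first boot cycle (user data).
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# Sketch of RunInstances parameters; the AMI ID is a hypothetical placeholder.
params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI (image baseline)
    "InstanceType": "t3.micro",          # instance type = size/power of the instance
    "MinCount": 1,
    "MaxCount": 1,
    "UserData": user_data,               # commands automated on first boot
    "BlockDeviceMappings": [{            # persistent storage: an attached EBS volume
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},
    }],
}

# The raw EC2 API expects user data base64-encoded:
encoded = base64.b64encode(user_data.encode()).decode()
```

Building the parameters as a plain dictionary like this keeps the sketch runnable without AWS credentials; the actual call is a one-liner once a boto3 client is configured.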

ECS - Elastic Container Service

An application container holds everything an application needs to run in one place. This keeps the application independent of the operating system, making container applications extremely portable. The application will always run as expected regardless of the deployment location.

AWS Fargate is a serverless compute engine for containers. It removes the need to provision and manage servers by allocating only the right amount of compute. There is no longer a need to choose instances and scale cluster capacity, so customers pay only for the resources required to run their containers.

Amazon Elastic Container Service (Amazon ECS) can use AWS Fargate to remove the need for you to run your own cluster management system. No additional cluster management or monitoring software is required.

When launching an ECS cluster, there are two deployment modes: Fargate and EC2. The Fargate launch type requires less configuration; however, with an EC2 launch you have a far greater scope of customisation and configurable parameters.
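
As a rough illustration of how little configuration the Fargate launch type needs, here is a minimal task definition of the kind you might register via boto3’s `ecs.register_task_definition(**task_def)`. The family name and container image are illustrative placeholders, and the cpu/memory values must come from Fargate’s supported combinations.

```python
# Sketch of a minimal ECS task definition for the Fargate launch type.
# The family name and image are placeholders, not a real deployment.
task_def = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required network mode for Fargate tasks
    "cpu": "256",              # 0.25 vCPU -- you size the task, not the servers
    "memory": "512",           # 512 MiB
    "containerDefinitions": [{
        "name": "web",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
}
```

Note that nothing here mentions instances or cluster capacity: with Fargate you declare per-task CPU and memory and AWS finds the compute, whereas an EC2 launch would also involve choosing and scaling the underlying instances.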

The clusters themselves work together to pool resources, such as CPU and memory. They are dynamically scalable, but only within a single region.

ECR - Elastic Container Registry

Relating back to ECS, Docker is a piece of software that allows you to automate the installation and distribution of applications inside containers. A Docker image is therefore a file that is used to execute code in a Docker container.

The Amazon Elastic Container Registry (Amazon ECR) provides a secure location to store and manage your Docker images, which can be distributed and deployed across your applications. ECR has five components: the registry, authorisation token, repository, repository policy, and image.

EKS - ECS for Kubernetes

Kubernetes is an open-source container orchestration tool designed for the automation, deployment, scaling, and operation of containerised applications. The Elastic Container Service for Kubernetes (EKS) therefore allows Kubernetes to run across AWS infrastructure.

Traditionally, you would run the Kubernetes control plane yourself, and it dictates how Kubernetes and the clusters communicate with each other. Under EKS, however, AWS takes responsibility for these control plane duties.

AWS Elastic Beanstalk

AWS Elastic Beanstalk analyses an uploaded web application and automatically provisions the AWS resources it requires to operate. The deployment, provisioning, monitoring and scaling of the environment become the responsibility of AWS, removing manual infrastructure creation from engineers and providing a simple, effective, and quick solution for deploying web applications.

Once a web application is created, engineers upload the version to AWS, which then launches the appropriate environment. From there, the environment can be edited and managed by the engineers.

AWS Lambda

AWS Lambda is another way in which AWS removes responsibility from application engineers, in this case providing a serverless compute service that allows application code to run without an engineer having to manage and provision EC2 instances. AWS manages and provisions the compute resources required for an application to run.

There are four steps for AWS Lambda operations:

        1. Uploading your code
        2. Configuring your Lambda function's execution
        3. Upon a configured event, Lambda will run your code (using the defined compute power)
        4. Lambda records compute time and resources for the functions for cost calculation
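
The code you upload in step 1 centres on a handler function, which Lambda invokes for each configured event (step 3). Here is a minimal sketch of a Python handler; the event payload and the greeting logic are invented for illustration, and because a handler is just a function, it can be exercised locally by calling it with a sample event.

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda calls for each configured event.

    `event` carries the trigger's payload; `context` carries runtime
    metadata such as the request ID and remaining execution time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, we can invoke the handler directly with a sample event:
response = lambda_handler({"name": "AWS"}, None)
```

Step 4’s billing then follows from this model: Lambda meters the compute time and memory each such invocation consumes.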

AWS Batch

Batch processing refers to the execution of a series of jobs or tasks using a vast amount of compute power across a cluster. AWS Batch removes the constraints of non-cloud batch processing, with the provisioning, monitoring, maintenance and management of the clusters themselves again taken care of by AWS.

Within AWS Batch, a job is a unit of work to be run, defined by how the job will run and with what configuration. Jobs are typically scheduled on a first-in-first-out basis, but also by evaluating current job queue priorities.
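
This scheduling behaviour can be sketched with a toy queue: higher-priority jobs run first, and jobs of equal priority run in submission order. This is an illustrative simulation of the ordering rule only, not AWS Batch’s actual scheduler; the job names are made up.

```python
import heapq
import itertools

# Submission counter: breaks priority ties first-in-first-out.
_counter = itertools.count()

def submit(queue, name, priority=0):
    # heapq is a min-heap, so negate priority to pop high priority first.
    heapq.heappush(queue, (-priority, next(_counter), name))

def next_job(queue):
    _, _, name = heapq.heappop(queue)
    return name

queue = []
submit(queue, "render-frames")                 # default priority, submitted first
submit(queue, "nightly-report")                # default priority, submitted second
submit(queue, "urgent-reprocess", priority=10) # jumps the FIFO queue

order = [next_job(queue) for _ in range(3)]
# order == ["urgent-reprocess", "render-frames", "nightly-report"]
```

The tuple ordering does all the work here: priority dominates, and the monotonically increasing counter preserves FIFO order among equal-priority jobs.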

Amazon Lightsail

Amazon Lightsail provides a simple virtual private server service, designed for easy configuration and set-up for small businesses or single users. You can run multiple Lightsail instances together, connect to other AWS resources and peer connect to an existing virtual private cloud.

 

Given AWS cloud compute and the growing popularity of infrastructure as code, where do you think the future of virtualisation will take us? Leave a comment below.
