1. Cloud computing#

What is a client-server model?#

In computing, a client is an application, such as a web browser or desktop app, that a person interacts with to make requests to servers. A server is a remote system (hardware or software) that processes those requests and returns responses, for example an Amazon EC2 instance or a Google Cloud virtual machine.
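The request/response cycle can be sketched in-process; the "server" below is just a function standing in for a remote machine, and the paths and bodies are made up for illustration:

```python
def server(request: dict) -> dict:
    """Process a request and return a response, as a web server would."""
    if request.get("path") == "/coffee":
        return {"status": 200, "body": "Here is your coffee"}
    return {"status": 404, "body": "Not found"}

# The client (e.g., a browser) sends a request and receives a response.
response = server({"method": "GET", "path": "/coffee"})
```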


What are the deployment models?#

Cloud-Based

Run, migrate, or build applications in the cloud.

  • Use case: An application built with virtual servers, databases, and networking components, fully based in the cloud.
On-Premises

Resources are deployed in on-premises data centers.

  • Use case: Applications running on technology kept fully in an on-premises data center, often using virtualization to improve resource utilization.
Hybrid

Combines cloud-based and on-premises resources.

  • Use case: Legacy applications remain on premises, while batch data processing and analytics services run in the cloud. This model also suits applications that must stay on premises for regulatory reasons but still integrate with cloud services.

What are the benefits of cloud computing?#

Trade Upfront Expense for Variable Expense
  • Upfront Expense: Traditional IT requires heavy initial investment in data centers, servers, and infrastructure before usage.
  • Variable Expense: Cloud computing allows businesses to pay only for the resources they consume, avoiding large capital expenditures.
Stop Spending Money on Data Center Maintenance
  • Traditional Challenge: Running your own data centers requires spending significant money and time managing infrastructure and servers.
  • Cloud Advantage: Cloud computing eliminates the burden of maintaining physical hardware.
  • Key Benefit: You can focus less on these tasks and more on your applications and customers.
Stop Guessing Capacity
  • Traditional Challenge: You have to predict how much infrastructure capacity you will need before deploying an application.
  • Cloud Advantage: Cloud providers offer auto-scaling features that adjust resources based on demand.
Benefit from massive economies of scale

By using cloud computing, you can achieve a lower variable cost than you can get on your own.

Increase speed and agility

The flexibility of cloud computing makes it easier for you to develop and deploy applications.

Go global in minutes

The global footprint of the AWS Cloud enables you to deploy applications to customers around the world quickly, while providing them with low latency. This means that even if you are located in a different part of the world than your customers, customers are able to access your applications with minimal delays.

2. EC2#

  • Amazon Elastic Compute Cloud (EC2) provides secure and resizable compute capacity in the cloud.
  • EC2 instances are isolated from each other, ensuring secure multitenant environments.
  • Multitenancy: Instances share physical hardware securely using a hypervisor.
  • Configuration Control: Customize the operating system (Windows or Linux), software, and networking settings.
  • Resizable Instances: Adjust instance size based on resource needs.
  • Networking Control: Define public or private access to instances.

What types of EC2 instances exist?#

General purpose instances

Provide a balance of compute, memory, and networking resources. Suitable for a wide variety of workloads, including applications built on open source software, such as:

  • Backend servers for enterprise applications
  • Microservices
  • Caching fleets
  • Gaming servers
  • Small and medium databases
Compute optimized instances

Compute-bound applications that benefit from high-performance processors.

  • Compute-intensive application servers
  • High performance web servers
  • High performance computing (HPC)
  • Dedicated gaming servers
  • Media transcoding and video encoding
  • Scientific modeling
  • Machine learning inference
  • Batch processing workloads that require processing many transactions in a single group
  • Distributed analytics
Memory optimized instances

Designed to deliver fast performance for workloads that process large datasets in memory.

  • High-performance database, in-memory caches
  • Real-time processing of a large amount of unstructured data
  • Real-time big data analytics.
Accelerated computing instances

Use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs.

  • Generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.
  • HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling.
  • Functions include floating-point number calculations
  • Graphics applications processing
  • Data pattern matching
  • Game streaming
Storage optimized instances

Are designed for workloads that require high, sequential read and write access to very large datasets on local storage. Storage optimized instances are designed to deliver tens of thousands of low-latency, random IOPS (input/output operations per second).

  • I/O intensive workloads that require real-time latency access to data such as relational databases (MySQL, PostgreSQL)
  • Real-time databases, NoSQL databases (Aerospike, Apache Druid, Clickhouse, MongoDB)
  • Real-time analytics such as Apache Spark
  • Distributed file systems
  • Data warehousing applications
  • High-frequency online transaction processing (OLTP) systems

What types of EC2 pricing exist?#

On-Demand Instances

Ideal for short-term, irregular workloads that cannot be interrupted.

Reserved Instances

You can purchase Standard Reserved and Convertible Reserved Instances for a 1-year or 3-year term. You realize greater cost savings with the 3-year option. Types include:

  • Standard Reserved Instances: Best when you know the exact EC2 instance type, size, and AWS Region for steady-state applications
  • Convertible Reserved Instances: Ideal when you need flexibility across Availability Zones or instance types
EC2 Instance Savings Plans

Reduce costs by committing to a consistent hourly spend for a 1-year or 3-year term, offering savings of up to 72% compared to On-Demand rates.

Spot Instances

Ideal for flexible, interruptible workloads, offering up to 90% savings by using unused EC2 capacity.

Dedicated Hosts

Physical servers fully dedicated to your use (most expensive option).
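The pricing options above can be compared with a back-of-the-envelope calculation. The hourly rate below is an assumption, and the discounts use the maximum percentages mentioned above; real prices vary by Region and instance type:

```python
# Rough yearly cost for one instance under three pricing models.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10                              # assumed $/hour
savings_plan_rate = on_demand_rate * (1 - 0.72)    # up to 72% off On-Demand
spot_rate = on_demand_rate * (1 - 0.90)            # up to 90% off On-Demand

costs = {
    "on_demand": on_demand_rate * HOURS_PER_YEAR,
    "savings_plan": savings_plan_rate * HOURS_PER_YEAR,
    "spot": spot_rate * HOURS_PER_YEAR,
}
```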

Amazon EC2 Auto Scaling#

Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demands.

  • Dynamic scaling responds to changing demand.
  • Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.

What capacity settings does an Auto Scaling group have?#

Minimum Capacity

The number of Amazon EC2 instances that launch immediately after creating the Auto Scaling group.

Desired Capacity

The current number of instances running in your Auto Scaling group, which may be set higher than your minimum requirement (e.g., 2 instances when only 1 is needed).

Maximum Capacity

The upper limit that your Auto Scaling group can scale to in response to increased demand.
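The three capacity settings can be sketched as a clamp on the desired instance count: scaling can never go below the minimum or above the maximum.

```python
def scale_to(demand: int, minimum: int, maximum: int) -> int:
    """Clamp the desired instance count between the group's min and max."""
    return max(minimum, min(demand, maximum))
```

For example, with a minimum of 2 and a maximum of 4, a demand of 1 still keeps 2 instances running, and a spike demanding 10 is capped at 4.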

Elastic Load Balancing#

  • Elastic Load Balancing is the AWS service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances.
  • A load balancer is an application that takes in requests and routes them to the instances to be processed.
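A load balancer's routing can be sketched as a simple round-robin over registered targets; the instance IDs here are made up, and real ELB routing is more sophisticated:

```python
import itertools

class LoadBalancer:
    """Distribute incoming requests across targets in round-robin order."""
    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)

    def route(self, request):
        target = next(self._cycle)     # pick the next instance in rotation
        return target, request

lb = LoadBalancer(["i-01", "i-02", "i-03"])
```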

3. Messaging and Queuing#

3.1 Amazon Simple Queue Service or SQS#

SQS allows you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

3.2 Amazon Simple Notification Service or SNS#

Amazon SNS is similar in that it is used to send out messages to services, but it can also send out notifications to end users. It does this in a different way called a publish/subscribe or pub/sub model. This means that you can create something called an SNS topic which is just a channel for messages to be delivered.
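The pub/sub model can be sketched in a few lines: subscribers register with a topic, and every published message fans out to all of them. The subscriber callbacks below are illustrative stand-ins for real endpoints such as an SQS queue or an email notification:

```python
class Topic:
    """Minimal pub/sub topic: publish once, every subscriber receives it."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

received = []
orders = Topic()
orders.subscribe(received.append)                        # e.g. an SQS queue
orders.subscribe(lambda m: received.append(m.upper()))   # e.g. an email notification
orders.publish("order placed")
```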

4. Serverless computing#

AWS Lambda#

AWS Lambda is a service that lets you run code without needing to provision or manage servers.


While using AWS Lambda, you pay only for the compute time that you consume. Charges apply only when your code is running. You can also run code for virtually any type of application or backend service, all with zero administration.

What is the difference between Serverless and Serverful computing?#

Serverless
  1. You upload your code to Lambda.
  2. You set your code to trigger from an event source, such as AWS services, mobile applications, or HTTP endpoints.
  3. Lambda runs your code only when triggered.
Serverful
  1. Provision instances (virtual servers).
  2. Upload your code.
  3. Continue to manage the instances while your application is running.
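A minimal sketch of what you upload in the serverless flow: a Python Lambda handler, which Lambda invokes with the triggering event. The event shape here is made up; real event payloads depend on the trigger (API Gateway, S3, etc.):

```python
def lambda_handler(event, context):
    """Entry point Lambda calls when the function is triggered."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, you can invoke the handler directly to test it.
result = lambda_handler({"name": "AWS"}, None)
```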

5. Other Services#

Amazon Elastic Container Service (Amazon ECS)#

A highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS.

Amazon Elastic Kubernetes Service (Amazon EKS)#

A fully managed service that you can use to run Kubernetes on AWS.

AWS Fargate#

AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS.

AWS Global Infrastructure#

How to select the right AWS region?
  1. Compliance: If your company requires data to stay within a specific country, choose a Region that meets those requirements (e.g., London for UK data residency).
  2. Proximity: Running infrastructure close to customers improves content delivery (e.g., a Washington, DC company might use the Singapore Region for customers there).
  3. Availability: The closest Region may lack certain features, as AWS expands services gradually by building hardware in each Region.
  4. Pricing: Costs vary by Region due to local tax structures (e.g., running workloads in São Paulo may be 50% more expensive than in Oregon).
What are the AWS Global Infrastructure scopes?
  1. AWS Region: A physically isolated cluster of data centers in a specific location (e.g., US-East-1, Frankfurt). (36 Regions)
  2. Availability Zone (AZ): One or more discrete data centers in a Region, separated to minimize failure risk. (114 AZ)
  3. AWS Data Center: A single facility within an AZ, containing physical servers, networking, and storage.
  4. EC2 Instance: A virtual machine running inside a specific AZ.
Ways to Interact with AWS Services
  1. AWS Management Console
  • A web-based graphical interface for managing AWS services.
  • Best for visual management, monitoring, and quick configurations.
  • Provides dashboards and interactive controls for easy service navigation.
  2. AWS Command Line Interface (CLI)
  • A command-line tool for interacting with AWS services programmatically.
  • Best for automation, scripting, and DevOps workflows.
  • Allows bulk operations and remote management of AWS resources.
  3. AWS Software Development Kits (SDKs)
  • SDKs allow developers to integrate AWS services within applications using programming languages.
  • Available for Python (Boto3), JavaScript, Java, Go, .NET, and more.
  • Best for developers building cloud-native applications with AWS APIs.
AWS Infrastructure as Code (IaC) Options
  1. AWS Elastic Beanstalk
  • A Platform as a Service (PaaS) that simplifies application deployment.
  • Manages infrastructure automatically, handling scaling, monitoring, and maintenance.
  • Best for developers deploying applications without worrying about infrastructure setup.
  2. AWS CloudFormation
  • A fully-managed Infrastructure as Code (IaC) service.
  • Uses YAML or JSON templates to define and provision AWS resources.
  • Best for DevOps teams managing AWS infrastructure through automation.

Edge locations#

An edge location is a site that Amazon CloudFront uses to store cached copies of your content closer to your customers for faster delivery. (There are more than 700 edge locations.)


6. Networking#

AWS Direct Connect#

Is a service that lets you establish a dedicated private connection between your data center and a VPC. AWS Direct Connect provides a physical line that connects your network to your AWS VPC.

Internet Gateway (IGW)#

To allow public traffic from the internet to access your VPC, you attach an internet gateway to the VPC.

Virtual Private Gateway#

A virtual private gateway enables you to establish a virtual private network (VPN) connection between your VPC and a private network, such as an on-premises data center or internal corporate network. A virtual private gateway allows traffic into the VPC only if it is coming from an approved network.

Virtual Private Cloud (VPC)#


Amazon VPC enables you to provision an isolated section of the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you define.

Public subnets#

Public subnets contain resources that need to be accessible by the public, such as an online store’s website.

Private subnets#

Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information and order histories.

A network ACL#

Is a virtual firewall that controls inbound and outbound traffic at the subnet level. By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. Network ACLs perform stateless packet filtering: they remember nothing and check every packet that crosses the subnet border, in both directions.

Security groups SG#

A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance. By default, a security group denies all inbound traffic and allows all outbound traffic. Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.
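The stateless/stateful distinction can be sketched as follows. These toy classes only model the "remembering" behavior, not real packet filtering; the port numbers are illustrative:

```python
class NetworkACL:
    """Stateless: every packet is checked against the rules, both directions."""
    def __init__(self, allowed_ports):
        self.allowed_ports = set(allowed_ports)

    def allows(self, port):
        return port in self.allowed_ports

class SecurityGroup:
    """Stateful: replies to remembered outbound connections are allowed in."""
    def __init__(self, inbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.established = set()

    def outbound(self, port):
        self.established.add(port)   # all outbound traffic allowed; remember it
        return True

    def allows_inbound(self, port):
        return port in self.inbound_ports or port in self.established

acl = NetworkACL({443})
sg = SecurityGroup(inbound_ports=set())   # no inbound rules at all
sg.outbound(443)                          # instance initiates a request
```

Even with no inbound rules, the security group lets the reply on port 443 back in, because it remembers the outbound connection; the network ACL would check that reply against its rules from scratch.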


Amazon Route 53#

How does it work as a DNS service?
  1. When you enter the domain name into your browser, this request is sent to a customer DNS resolver.
  2. The customer DNS resolver asks the company DNS server for the IP address that corresponds to AnyCompany’s website.
  3. The company DNS server responds by providing the IP address for AnyCompany’s website, 192.0.2.0.
How does it work with CloudFront?
  1. A customer requests data from the application by going to AnyCompany’s website.
  2. Amazon Route 53 uses DNS resolution to identify AnyCompany.com’s corresponding IP address, 192.0.2.0. This information is sent back to the customer.
  3. The customer’s request is sent to the nearest edge location through Amazon CloudFront.
  4. Amazon CloudFront connects to the Application Load Balancer, which sends the incoming packet to an Amazon EC2 instance.

7. Storage and Databases#

What is an instance store?
  • An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance.
  • When the instance is terminated, you lose any data in the instance store.
What is object storage?
  • In object storage, each object consists of data, metadata, and a key.
  • The data might be an image, video, text document, or any other type of file.
  • Metadata contains information about what the data is, how it is used, the object size, and so on.
  • An object’s key is its unique identifier.
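The data/metadata/key structure can be sketched with a dictionary-backed bucket; the key, file bytes, and metadata fields are all illustrative:

```python
bucket = {}

def put_object(key, data, metadata):
    """Store an object: its data, metadata, and unique key."""
    bucket[key] = {"data": data, "metadata": metadata}

def get_object(key):
    """Retrieve an object by its unique key."""
    return bucket[key]

put_object(
    key="photos/cat.jpg",                    # unique identifier
    data=b"\xff\xd8...",                     # the file bytes themselves
    metadata={"content-type": "image/jpeg"}, # information about the data
)
```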

Amazon Elastic Block Store (Amazon EBS)#

Block-level storage volumes behave like physical hard drives.

EBS is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available.

When you create an EBS volume, you define the configuration (such as volume size and type) and provision it. After you create an EBS volume, it can attach to an Amazon EC2 instance.

Because EBS volumes are for data that needs to persist, it’s important to back up the data. You can take incremental backups of EBS volumes by creating Amazon EBS snapshots.

What is an EBS snapshot?

An EBS snapshot is an incremental backup. The first snapshot of an EBS volume copies all the data, while subsequent snapshots only save the blocks of data that have changed since the last snapshot. This approach differs from full backups, where all data is copied every time, including unchanged data. Incremental backups are more efficient, saving storage space and reducing backup time compared to full backups.
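Incremental snapshots can be sketched as a diff of block contents against the previous snapshot; block IDs and contents below are made up:

```python
def incremental_snapshot(volume_blocks, last_snapshot):
    """Save only the blocks that changed since the last snapshot."""
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if last_snapshot.get(block_id) != data
    }

volume = {0: "boot", 1: "app-v1", 2: "logs"}
first = incremental_snapshot(volume, {})      # first snapshot copies everything
volume[1] = "app-v2"                          # one block changes
second = incremental_snapshot(volume, first)  # only the changed block is saved
```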

S3 Bucket#

Amazon S3 (Simple Storage Service) is an object-level storage service that stores data as objects within buckets. It supports uploading any file type (e.g., images, videos, documents) and is commonly used for backups, media storage, and archiving. Amazon S3 provides unlimited storage space, with a maximum object size of 5 TB. Users can set permissions to control access and visibility of files and enable versioning to track changes to objects over time.

There are two questions you have to answer about what kind of data you want to store in S3:

  1. How often do you plan to retrieve your data?
  2. How available do you need your data to be?
What is the S3 Standard?
  • Designed for frequently accessed data
  • Stores data in a minimum of three Availability Zones
  • Has a higher cost than storage classes designed for infrequently accessed data

This makes it a good choice for a wide range of use cases, such as websites, content distribution, and data analytics.

What is the S3 Standard-IA (Infrequently Accessed)?
  • Ideal for infrequently accessed data
  • Stores data in a minimum of three Availability Zones
  • Has a lower storage price and higher retrieval price

This makes it a good choice for use cases such as backup storage for disaster recovery, and for video, images, or other media files that are not part of a frequently viewed library but must be accessible without long retrieval delays.

What is the S3 One Zone-IA?
  • Stores data in a single Availability Zone
  • Has a lower storage price than Amazon S3 Standard-IA
  • A good choice if you want to save costs on storage and can easily reproduce your data in the event of an Availability Zone failure
What is the S3 Intelligent-Tiering?
  • Ideal for data with unknown or changing access patterns
  • Requires a small monthly monitoring and automation fee per object

If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.
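The tiering rule described above can be sketched as a single threshold check; the tier names are illustrative labels, not the service's internal identifiers:

```python
def tier_for(days_since_last_access: int) -> str:
    """S3 Intelligent-Tiering rule sketch: 30 idle days moves an object to
    the infrequent access tier; any access moves it back to frequent."""
    if days_since_last_access >= 30:
        return "infrequent_access"   # billed like S3 Standard-IA
    return "frequent_access"         # billed like S3 Standard
```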

What is the S3 Glacier Instant Retrieval?
  • Works well for archived data that requires immediate access
  • Can retrieve objects within a few milliseconds

You can retrieve objects stored in the S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as S3 Standard.

What is the S3 Glacier Flexible Retrieval?
  • Low-cost storage designed for data archiving
  • Able to retrieve objects within a few minutes to hours

You can retrieve objects stored in the S3 Glacier Flexible Retrieval storage class within a few hours, with a lower retrieval price than S3 Glacier Instant Retrieval.

What is the S3 Glacier Deep Archive?
  • Lowest-cost object storage class ideal for archiving
  • Able to retrieve objects within 12 hours
  • All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.

This storage class is the lowest-cost storage in the AWS Cloud, with data retrieval from 12 to 48 hours.

What is the S3 Outpost?

Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts.

| EBS | S3 |
| --- | --- |
| Up to 16 TiB per volume | Unlimited storage |
| Survives termination of an EC2 instance | Individual objects up to 5 TB |
| SSD by default | Write once, read many |
| HDD options | 99.999999999% durability |

Amazon Elastic File System (Amazon EFS)#

In file storage, multiple clients (such as users, applications, servers, and so on) can access data that is stored in shared file folders.

As you add and remove files, Amazon EFS grows and shrinks automatically. It can scale on demand to petabytes without disrupting applications.

EFS vs EBS?
  • An Amazon EBS volume stores data in a single Availability Zone and attaches to a single EC2 instance.
  • To attach an EBS volume, the EC2 instance must be in the same Availability Zone.
  • If you provision a two terabyte EBS volume and fill it up, it doesn’t automatically scale to give you more storage.
  • Amazon EFS can have multiple instances reading and writing from it at the same time. It is a true file system for Linux; any EC2 instance in the Region can write to the EFS file system, and it scales automatically.

Amazon Relational Database Service (Amazon RDS)#

Relational databases use structured query language (SQL) to store and query data. This approach allows data to be stored in an easily understandable, consistent, and scalable way.

Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups.

You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application.

  • Amazon Aurora: an enterprise-class relational database engine compatible with MySQL and PostgreSQL

Nonrelational databases#

Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures other than rows and columns to organize data. One type of structural approach for nonrelational databases is key-value pairs.

With key-value pairs, data is organized into items (keys), and items have attributes (values). You can think of attributes as being different features of your data.
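Key-value items can be sketched with a dictionary. Unlike rows in a relational table, items need not share the same attributes; the keys and attributes below are made up:

```python
# A DynamoDB-style table: each item is addressed by a key and holds its
# own set of attributes, with no fixed column schema.
table = {
    "customer#1": {"name": "Ana", "city": "Lima"},
    "customer#2": {"name": "Ben", "orders": 7},   # different attributes are fine
}
```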

Amazon DynamoDB is a key-value database service. It is serverless, which means that you do not have to provision, patch, or manage servers.

As the size of your database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity while maintaining consistent performance.

Amazon Redshift#

Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.

AWS Database Migration Service (AWS DMS)#

With AWS DMS, you move data between a source database and a target database. The source and target databases can be of the same type or different types.

What are the common use cases for AWS DMS?
  1. Development and test database migrations: Enabling developers to test applications against production data without affecting production users.
  2. Database consolidation: Combining several databases into a single database.
  3. Continuous replication: Sending ongoing copies of your data to other target sources instead of doing a one-time migration

Amazon DocumentDB#

Is a document database service that supports MongoDB workloads. (MongoDB is a document database program.)

Amazon Neptune#

Is a graph database service. You can use Amazon Neptune to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.

Amazon Quantum Ledger Database (Amazon QLDB)#

Is a ledger database service. You can use Amazon QLDB to review a complete history of all the changes that have been made to your application data.

Amazon Managed Blockchain#

Is a service that you can use to create and manage blockchain networks with open-source frameworks.

Blockchain is a distributed ledger system that lets multiple parties run transactions and share data without a central authority.

Amazon ElastiCache#

Is a service that adds caching layers on top of your databases to help improve the read times of common requests.

It supports two types of data stores: Redis and Memcached.

Amazon DynamoDB Accelerator (DAX)#

Is an in-memory cache for DynamoDB.

It helps improve response times from single-digit milliseconds to microseconds.
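A read-through cache like DAX can be sketched as a dictionary in front of a slower backing store; the table contents are illustrative:

```python
class ReadThroughCache:
    """DAX-style read-through cache: serve repeated reads from memory."""
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self.cache:
            self.misses += 1                              # slow path: hit the database
            self.cache[key] = self.backing_store[key]
        return self.cache[key]                            # fast path: in-memory

dax = ReadThroughCache({"user#1": {"name": "Ana"}})
```

The first read of a key goes to the backing table; every repeated read is served from memory, which is what turns millisecond responses into microsecond ones.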

8. Security#

The shared responsibility model divides into customer responsibilities (commonly referred to as “security in the cloud”) and AWS responsibilities (commonly referred to as “security of the cloud”).


IAM users#

It represents the identity of the person or application that interacts with AWS services and resources. It consists of a name and credentials.

IAM policies#

An IAM policy is a document that allows or denies permissions to AWS services and resources.
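IAM's evaluation logic can be sketched under two simplifying assumptions (exact-match resources, no wildcards or conditions): an explicit Deny always wins, and everything else is denied unless explicitly allowed. The statements below are made-up examples:

```python
def is_allowed(statements, action, resource):
    """Simplified IAM evaluation: explicit Deny wins; default is implicit deny."""
    allowed = False
    for stmt in statements:
        if action in stmt["Action"] and resource == stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return False          # explicit deny overrides any allow
            allowed = True
    return allowed                    # no matching allow -> implicit deny

allow_only = [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "photos"},
]
with_deny = allow_only + [
    {"Effect": "Deny", "Action": ["s3:GetObject"], "Resource": "photos"},
]
```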

IAM groups#

An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.

IAM roles#

An IAM role is an identity that you can assume to gain temporary access to permissions.

Before an IAM user, application, or service can assume an IAM role, they must be granted permissions to switch to the role. When someone assumes an IAM role, they abandon all previous permissions that they had under a previous role and assume the permissions of the new role.

AWS Organizations#

In AWS Organizations, you can centrally control permissions for the accounts in your organization by using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services, resources, and individual API actions that users and roles in each account can access.

By organizing separate accounts into OUs, you can more easily isolate workloads or applications that have specific security requirements.

Compliance#

AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections:

  1. AWS Artifact Agreements: In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations. Different types of agreements are offered to address the needs of customers who are subject to specific regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
  2. AWS Artifact Reports: AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested and verified that AWS is compliant with a variety of global, regional, and industry-specific security standards and regulations.
What is the Customer Compliance Center?

In the Customer Compliance Center, you can read customer compliance stories to discover how companies in regulated industries have solved various compliance, governance, and audit challenges. You can find:

  • AWS answers to key compliance questions
  • An overview of AWS risk and compliance
  • An auditing security checklist

Types of attacks:#

What is a DDoS attack?

In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that aims to make a website or application unavailable.

This can come from a group of attackers, or even a single attacker. The single attacker can use multiple infected computers (also known as “bots”) to send excessive traffic to a website or application.

To help minimize the effect of DoS and DDoS attacks on your applications, you can use AWS Shield.

What is a UDP flood attack?

A UDP flood attack exploits internet services, such as a weather service, that return large responses to small requests.

The attacker sends a simple request (e.g., “Give me weather”) but spoofs the return address to the victim’s server. As a result, the service floods the victim with massive amounts of data, overwhelming its capacity.

To help minimize the effect of DoS and DDoS attacks on your applications, you can use AWS Security Groups and AWS Shield.

What is an HTTP-level attack?

HTTP-level attacks mimic legitimate user behavior, such as repeatedly performing complex product searches. These attacks come from a network of compromised bots, overwhelming the server with excessive requests.

As a result, real customers are unable to access the service.

To help minimize the effect of DoS and DDoS attacks on your applications, you can use AWS WAF to filter malicious traffic and rate-limit abusive requests.

What is a Slowloris attack?

The Slowloris attack exploits server connection limits by making slow, incomplete requests, forcing the server to keep connections open indefinitely.

This is like someone taking excessively long to order at a coffee shop, blocking others from being served. A few attackers can exhaust server resources, severely impacting availability.

To help minimize the effect of DoS and DDoS attacks on your applications, you can use Elastic Load Balancing (ELB) to handle HTTP requests efficiently.

What is a SYN flood attack?

A SYN flood attack exploits the TCP handshake process by sending a large number of SYN requests without completing the connection.

The victim’s server keeps resources allocated for half-open connections, eventually exhausting its capacity and preventing legitimate users from connecting.

To help minimize the effect of SYN flood attacks on your applications, you can use AWS Shield Advanced and Elastic Load Balancing (ELB) to detect and mitigate abnormal connection attempts.

What is a DNS amplification attack?

A DNS amplification attack abuses open DNS resolvers by sending small queries with a spoofed source IP (the victim’s IP). These queries trigger much larger responses, which flood the victim with data.

This attack amplifies the attacker’s bandwidth, making it highly efficient and destructive.

To help minimize the effect of DNS amplification attacks on your applications, you can use Amazon Route 53 with AWS Shield Advanced to filter out malicious DNS traffic.

What is an ICMP flood attack?

An ICMP flood (ping flood) attack overwhelms a network by sending excessive ICMP (ping) requests. This consumes both bandwidth and processing power, leading to service disruption.

To help minimize the effect of ICMP flood attacks on your applications, you can use AWS Security Groups and Network ACLs to block unnecessary ICMP traffic.

What is a Smurf attack?

A Smurf attack abuses ICMP by sending spoofed requests to a network’s broadcast address. Every device on that network responds to the victim’s spoofed IP, causing a massive traffic overload.

To help minimize the effect of Smurf attacks on your applications, you can use AWS Security Groups and AWS WAF to filter malicious ICMP traffic.

AWS Shield#

  1. AWS Shield Standard: automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks.

  2. AWS Shield Advanced: is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks.

AWS Key Management Service (AWS KMS)#

AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data.

AWS WAF#

AWS WAF is a web application firewall that lets you monitor network requests that come into your web applications.

AWS WAF works together with Amazon CloudFront and an Application Load Balancer.

AWS WAF works in a similar way to block or allow traffic. However, it does this by using a web access control list (ACL) to protect your AWS resources.

Amazon Inspector#

Amazon Inspector helps to improve the security and compliance of applications by running automated security assessments. It checks applications for security vulnerabilities and deviations from security best practices, such as open access to Amazon EC2 instances and installations of vulnerable software versions.

Amazon GuardDuty#

Amazon GuardDuty is a service that provides intelligent threat detection for your AWS infrastructure and resources.

After you have enabled GuardDuty for your AWS account, GuardDuty begins monitoring your network and account activity. You do not have to deploy or manage any additional security software. GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs and DNS logs.

9. Monitoring#

Amazon CloudWatch#

Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics.

CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch.

What are CloudWatch alarms?

With CloudWatch, you can create alarms that automatically perform actions if the value of your metric has gone above or below a predefined threshold.

You could create a CloudWatch alarm that automatically stops an Amazon EC2 instance when the CPU utilization percentage has remained below a certain threshold for a specified period.
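The alarm logic described above can be sketched as a small function (illustrative only, not the CloudWatch API; the thresholds and metric values are made up):

```python
def alarm_should_fire(datapoints, threshold, periods):
    """Fire when the metric stays below the threshold for `periods`
    consecutive datapoints (e.g. CPU < 5% for 3 five-minute periods)."""
    if len(datapoints) < periods:
        return False  # not enough data yet to evaluate the alarm
    return all(value < threshold for value in datapoints[-periods:])

cpu_utilization = [3.1, 2.8, 4.0, 3.5]               # percent, most recent last
print(alarm_should_fire(cpu_utilization, 5.0, 3))    # True  -> stop the instance
print(alarm_should_fire([3.1, 9.2, 4.0], 5.0, 3))    # False -> keep running
```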

What is a CloudWatch dashboard?

A CloudWatch dashboard enables you to access all the metrics for your resources from a single location.

You can use a CloudWatch dashboard to monitor the CPU utilization of an Amazon EC2 instance, the total number of requests made to an Amazon S3 bucket, and more.

CloudTrail#

AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more.

What is CloudTrail Insights?

You can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.

For example, CloudTrail Insights might detect that a higher number of Amazon EC2 instances than usual have recently launched in your account.

AWS Trusted Advisor#

AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices.

The AWS Trusted Advisor dashboard presents automated checks that evaluate your AWS account against five pillars:

  1. Cost Optimization
  2. Performance
  3. Security
  4. Fault Tolerance
  5. Service Limits

10. Pricing and Support#

AWS Free Tier#

The AWS Free Tier enables you to begin using certain services without having to worry about incurring costs for the specified period.

Three types of offers are available:

  1. Always Free: AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute time per month.
  2. 12 Months Free: includes amounts of Amazon S3 Standard storage and monthly hours of Amazon EC2 compute time, free up to specified thresholds.
  3. Trials: Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables you to run virtual private servers) offers 750 free hours of usage over a 30-day period.

How does AWS pricing work?

AWS offers a range of cloud computing services with pay-as-you-go pricing.

  • You pay for exactly the amount of resources that you actually use, without requiring long-term contracts or complex licensing.
  • Some services offer reservation options that provide a significant discount compared to On-Demand Instance pricing.
  • Some services offer tiered pricing, so the per-unit cost is incrementally lower with increased usage.
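As a sketch of how tiered pricing lowers the per-unit cost as usage grows, the calculator below walks through each tier in order (the tier sizes and prices are hypothetical, not real AWS rates):

```python
def tiered_cost(units, tiers):
    """tiers: list of (tier_size_in_units or None for unlimited, price_per_unit).
    Bills each tier in order until all units are accounted for."""
    total, remaining = 0.0, units
    for size, price in tiers:
        if remaining <= 0:
            break
        billed = remaining if size is None else min(remaining, size)
        total += billed * price
        remaining -= billed
    return total

# Hypothetical storage tiers: first 50 units at $0.023, next 450 at $0.022,
# everything beyond that at $0.021 per unit.
tiers = [(50, 0.023), (450, 0.022), (None, 0.021)]
print(round(tiered_cost(600, tiers), 2))  # 13.15
```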

AWS Pricing Calculator#

The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can organize your AWS estimates by groups that you define.

Suppose that your company is interested in using Amazon EC2. However, you are not yet sure which AWS Region or instance type would be the most cost-efficient for your use case. In the AWS Pricing Calculator, you can enter details, such as the kind of operating system you need, memory requirements, and input/output (I/O) requirements. By using the AWS Pricing Calculator, you can review an estimated comparison of different EC2 instance types across AWS Regions.

Billing Dashboard#

Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze and control your costs.

  1. Compare your current month-to-date balance with the previous month, and get a forecast of the next month based on current usage.
  2. View month-to-date spend by service.
  3. View Free Tier usage by service.
  4. Access Cost Explorer and create budgets.
  5. Purchase and manage Savings Plans.
  6. Publish AWS Cost and Usage Reports.

Consolidated Billing#

The consolidated billing feature of AWS Organizations enables you to receive a single bill for all AWS accounts in your organization. By consolidating, you can easily track the combined costs of all the linked accounts in your organization.
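A minimal sketch of the roll-up (the account IDs and charges below are made up):

```python
# Sketch: consolidated billing rolls the charges of every linked account in
# the organization into one bill on the management (payer) account.
linked_account_charges = {
    "111111111111": 120.50,  # production account
    "222222222222": 43.25,   # development account
    "333333333333": 9.99,    # sandbox account
}

consolidated_total = sum(linked_account_charges.values())
print(f"One bill for the organization: ${consolidated_total:.2f}")  # $173.74
```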

AWS Budgets#

In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance reservations.

The information in AWS Budgets updates three times a day. This helps you to accurately determine how close your usage is to your budgeted amounts or to the AWS Free Tier limits.

In AWS Budgets, you can also set custom alerts when your usage exceeds (or is forecasted to exceed) the budgeted amount.
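The alerting behavior can be sketched as follows (illustrative logic only, not the AWS Budgets API):

```python
def budget_alerts(budget, actual, forecast):
    """Return the alert messages a budget like this might raise:
    one when actual spend exceeds the budget, another when spend
    is only forecasted to exceed it."""
    alerts = []
    if actual > budget:
        alerts.append("actual spend exceeds the budgeted amount")
    elif forecast > budget:
        alerts.append("spend is forecasted to exceed the budgeted amount")
    return alerts

print(budget_alerts(budget=100.0, actual=62.0, forecast=115.0))
# -> ['spend is forecasted to exceed the budgeted amount']
```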

AWS Cost Explorer#

AWS Cost Explorer is a tool that lets you visualize, understand, and manage your AWS costs and usage over time.

AWS Cost Explorer includes a default report of the costs and usage for your top five cost-accruing AWS services. You can apply custom filters and groups to analyze your data.
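A sketch of that default top-five ranking (the service costs below are invented for illustration):

```python
# Rank services by month-to-date cost and keep the top five,
# mirroring Cost Explorer's default report.
costs_by_service = {
    "Amazon EC2": 512.30, "Amazon S3": 87.10, "AWS Lambda": 4.20,
    "Amazon RDS": 203.75, "Amazon CloudFront": 55.00, "Amazon DynamoDB": 31.60,
}

top_five = sorted(costs_by_service.items(), key=lambda kv: kv[1], reverse=True)[:5]
for service, cost in top_five:
    print(f"{service}: ${cost:.2f}")
```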

AWS Support#

AWS offers several Support plans (Basic, Developer, Business, Enterprise On-Ramp, and Enterprise) to help you troubleshoot issues, lower costs, and efficiently use AWS services.


Technical Account Manager (TAM)#

The Enterprise On-Ramp and Enterprise Support plans include access to a Technical Account Manager (TAM).

The TAM is your primary point of contact at AWS. If your company subscribes to Enterprise Support or Enterprise On-Ramp, your TAM educates, empowers, and evolves your cloud journey across the full range of AWS services. TAMs provide expert engineering guidance, help you design solutions that efficiently integrate AWS services, assist with cost-effective and resilient architectures, and provide direct access to AWS programs and a broad community of experts.

AWS Marketplace#

AWS Marketplace is a digital catalog that includes thousands of software listings from independent software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS.

11. Migration and Innovation#

AWS Cloud Adoption Framework (AWS CAF)#

At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The planning process helps the right people across the organization prepare for the changes ahead.

The Business Perspective

The Business Perspective ensures that IT aligns with business needs and that IT investments link to key business results.

Common roles in the Business Perspective include:

  • Business managers
  • Finance managers
  • Budget owners
  • Strategy stakeholders

The People Perspective

The People Perspective supports development of an organization-wide change management strategy for successful cloud adoption.

Common roles in the People Perspective include:

  • Human resources
  • Staffing
  • People managers

The Governance Perspective

The Governance Perspective focuses on the skills and processes to align IT strategy with business strategy. This ensures that you maximize the business value and minimize risks.

Common roles in the Governance Perspective include:

  • Chief Information Officer (CIO)
  • Program managers
  • Enterprise architects
  • Business analysts
  • Portfolio managers

The Platform Perspective

The Platform Perspective includes principles and patterns for implementing new solutions on the cloud, and migrating on-premises workloads to the cloud.

Common roles in the Platform Perspective include:

  • Chief Technology Officer (CTO)
  • IT managers
  • Solutions architects

The Security Perspective

The Security Perspective ensures that the organization meets security objectives for visibility, auditability, control, and agility.

Common roles in the Security Perspective include:

  • Chief Information Security Officer (CISO)
  • IT security managers
  • IT security analysts

The Operations Perspective

The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders.

Common roles in the Operations Perspective include:

  • IT operations managers
  • IT support managers

6 strategies for migration#

Rehosting

Rehosting, also known as “lift and shift,” involves moving applications to the cloud without changes.

In the scenario of a large legacy migration, in which the company is looking to implement its migration and scale quickly to meet a business case, the majority of applications are rehosted.

Replatforming

Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud optimizations to realize a tangible benefit. Optimization is achieved without changing the core architecture of the application.

Refactoring

Refactoring (also known as re-architecting) involves reimagining how an application is architected and developed by using cloud-native features. Refactoring is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.

Repurchasing

Repurchasing involves moving from a traditional license to a software-as-a-service model.

For example, a business might choose to implement the repurchasing strategy by migrating from a customer relationship management (CRM) system to Salesforce.com.

Retaining

Retaining consists of keeping applications that are critical for the business in the source environment. This might include applications that require major refactoring before they can be migrated, or work that can be postponed until a later time.

Retiring

Retiring is the process of removing applications that are no longer needed.

AWS Snow Family#

The AWS Snow Family is a collection of physical devices that help to physically transport up to exabytes of data into and out of AWS.

Security Features Across All Snow Family Devices

  • Designed to be secure and tamper-resistant both on-site and during transit
  • Hardware and software are cryptographically signed
  • Data is automatically encrypted using 256-bit encryption keys, which are owned and managed by the customer (integrated with AWS Key Management Service)

AWS Snowcone
  • Smallest device with up to 8 TB capacity
  • Includes edge computing options (Amazon EC2 instances and AWS IoT Greengrass)
  • Workflow: Order → Ship → Plug in & copy data → Ship back → Data transferred to your AWS account (typically an S3 bucket)
  • Ideal for terabytes of data like analytics, video libraries, image collections, backups, etc.

AWS Snowball Edge
  • Available in two versions:
    1. Storage Optimized: Storage: 80 TB of hard disk drive (HDD) capacity for block volumes and Amazon S3 compatible object storage, and 1 TB of SATA solid state drive (SSD) for block volumes.
    2. Compute Optimized: Storage: 80 TB of usable HDD capacity for Amazon S3 compatible object storage or Amazon EBS compatible block volumes, and 28 TB of usable NVMe SSD capacity for Amazon EBS compatible block volumes.
  • Designed for situations where 8 TB is not enough
  • Rack-mountable and clusterable for greater computing needs
  • Can run AWS Lambda functions, EC2-compatible AMIs, or AWS IoT Greengrass for on-site data processing
  • Use cases include IoT data streams, image compression, video transcoding, and industrial signaling

AWS Snowmobile
  • Largest offering, housed in a 45-foot rugged shipping container
  • Capable of storing up to 100 petabytes
  • Ideal for massive migrations and data center shutdowns
  • Delivered by truck; appears as a network-attached storage device upon installation
  • Features robust physical security: tamper-resistant, waterproof, temperature controlled, fire suppression, GPS tracking, 24/7 video surveillance, and escort security during transit
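A rough sizing guide for choosing among the three devices, based on the capacities listed above (illustrative only; a real migration would also weigh network transfer time, clustering, and edge-compute needs):

```python
TB = 1
PB = 1_000 * TB  # using decimal petabytes for this rough guide

def pick_snow_device(data_size_tb):
    """Pick a Snow Family device by data size: Snowcone up to 8 TB,
    Snowball Edge up to ~80 TB per device, Snowmobile up to 100 PB."""
    if data_size_tb <= 8 * TB:
        return "AWS Snowcone"
    if data_size_tb <= 80 * TB:
        return "AWS Snowball Edge"
    if data_size_tb <= 100 * PB:
        return "AWS Snowmobile"
    raise ValueError("larger than any single Snow Family device")

print(pick_snow_device(5))       # AWS Snowcone
print(pick_snow_device(60))      # AWS Snowball Edge
print(pick_snow_device(10_000))  # AWS Snowmobile (10 PB)
```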

12. The Cloud Journey#

The AWS Well-Architected Framework#

The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement.

The Well-Architected Framework is based on six pillars:

Operational Excellence

Operational excellence is the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Design principles for operational excellence in the cloud include performing operations as code, annotating documentation, anticipating failure, and frequently making small, reversible changes.

Security

The Security pillar is the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

When considering the security of your architecture, apply these best practices:

  • Automate security best practices when possible.
  • Apply security at all layers.
  • Protect data in transit and at rest.

Reliability

Reliability is the ability of a system to do the following:

  • Recover from infrastructure or service disruptions
  • Dynamically acquire computing resources to meet demand
  • Mitigate disruptions such as misconfigurations or transient network issues

Reliability includes testing recovery procedures, scaling horizontally to increase aggregate system availability, and automatically recovering from failure.

Performance Efficiency

Performance efficiency is the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.

Evaluating the performance efficiency of your architecture includes experimenting more often, using serverless architectures, and designing systems to be able to go global in minutes.

Cost Optimization

Cost optimization is the ability to run systems to deliver business value at the lowest price point.

Cost optimization includes adopting a consumption model, analyzing and attributing expenditure, and using managed services to reduce the cost of ownership.

Sustainability

Sustainability is the ability to continually improve sustainability impacts by reducing energy consumption and increasing efficiency across all components of a workload, maximizing the benefits from the provisioned resources and minimizing the total resources required.

To facilitate good design for sustainability:

  • Understand your impact
  • Establish sustainability goals
  • Maximize utilization
  • Anticipate and adopt new, more efficient hardware and software offerings
  • Use managed services
  • Reduce the downstream impact of your cloud workloads

Advantages of cloud computing#

Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid environments.

In this section, you will learn about six advantages of cloud computing:

Trade upfront expenses for variable expenses

Upfront expenses include data centers, physical servers, and other resources that you would need to invest in before using computing resources.

Instead of investing heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources.
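A back-of-the-envelope comparison makes the trade concrete (all figures below are hypothetical):

```python
# Hypothetical year-one comparison of upfront vs variable expense.
upfront_hardware = 50_000.0          # servers purchased before launch
monthly_on_prem_maintenance = 800.0  # power, space, and upkeep

cloud_rate_per_hour = 0.10           # pay only while instances run
hours_used_per_month = 2_000         # actual consumption

months = 12
on_prem_total = upfront_hardware + monthly_on_prem_maintenance * months
cloud_total = cloud_rate_per_hour * hours_used_per_month * months

print(f"On-premises, year one: ${on_prem_total:,.0f}")     # $59,600
print(f"Cloud (variable), year one: ${cloud_total:,.0f}")  # $2,400
```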

Benefit from massive economies of scale

By using cloud computing, you can achieve a lower variable cost than you can get on your own.

Because usage from hundreds of thousands of customers aggregates in the cloud, providers such as AWS can achieve higher economies of scale. Economies of scale translate into lower pay-as-you-go prices.

Stop guessing capacity and scale in or out

With cloud computing, you don’t have to predict how much infrastructure capacity you will need before deploying an application.

For example, you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances when needed and pay only for the compute time you use. Instead of paying for resources that are unused or dealing with limited capacity, you can access only the capacity that you need, and scale in or out in response to demand.
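The scale-in/scale-out behavior can be modeled as choosing the smallest instance count that covers demand (a toy model, not the EC2 Auto Scaling API; the per-instance capacity is an assumption):

```python
import math

def desired_capacity(demand, per_instance_capacity, min_instances=1):
    """Smallest instance count that covers demand, never dropping
    below a configured minimum."""
    return max(min_instances, math.ceil(demand / per_instance_capacity))

# Assume each instance handles 100 requests per second.
print(desired_capacity(450, 100))  # 5 -> scale out during a spike
print(desired_capacity(120, 100))  # 2 -> scale in when demand falls
print(desired_capacity(0, 100))    # 1 -> never below the minimum
```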

Increase speed and agility

The flexibility of cloud computing makes it easier for you to develop and deploy applications.

This flexibility also provides your development teams with more time to experiment and innovate.

Stop spending money running and maintaining data centers

Computing in data centers often requires you to spend more money and time managing infrastructure and servers.

A benefit of cloud computing is the ability to focus less on these tasks and more on your applications and customers.

Go global in minutes

The AWS Cloud global footprint enables you to quickly deploy applications to customers around the world, while providing them with low latency.