AWS

Question 1:
Your company has an application that consists of ELB, EC2 instances, and an RDS database. Recently, the number of read requests to the RDS database has been increasing, resulting in poor performance.
Select the changes you should make to your architecture to improve RDS performance.
Options:
A. Install CloudFront before accessing the DB
B. Improve processing by making RDS a multi-AZ configuration
C. Increase read replicas of RDS
D. Place DynamoDB as a cache layer in front of the RDS DB
Answer: C
Explanation
Adding a Read Replica to Amazon RDS improves the performance and durability of the database (DB) instance for read workloads. This feature lets you scale out beyond the capacity of a single DB instance to ease the overall workload of frequently read databases. You can create up to 5 Read Replicas for your RDS DB instance. It can support high-volume read traffic for your application and improve overall read throughput. Therefore, option C is the correct answer.
Option A is incorrect because CloudFront is used to speed up global content delivery, not to improve database read performance.
Option B is incorrect. A Multi-AZ configuration improves the availability of your DB instance, but it will not improve read performance.
Option D is incorrect. Placing ElastiCache (rather than DynamoDB) in front of RDS can improve read performance through caching; DynamoDB is not a suitable caching layer for RDS.

 

Domain 1 — Design Resilient Architectures — 30%
Domain 2 — Design High-Performing Architectures — 28%
Domain 3 — Design Secure Applications and Architectures — 24%
Domain 4 — Design Cost-Optimized Architectures — 18%

Index
1. IAM
2. Billing Alarm
3. S3
4. Creation of S3 Bucket
5. S3 Pricing Tiers
6. S3 Security and Encryption
7. S3 Version Control
8. S3 Life Cycle Management
9. S3 Lock Policies and Glacier Vault Lock
10. S3 Performance
11. S3 Select and Glacier Select
12. AWS Organizations & Consolidated Billing
13. Sharing S3 Buckets between Accounts
14. Cross Region Replication
15. Transfer Acceleration
16. DataSync Overview
17. CloudFront Overview
18. CloudFront Signed URL’s and Cookies
19. Snowball
20. Storage Gateway
21. Athena versus Macie
22. EC2
23. Security Groups
24. EBS
25. Volumes & Snapshots
26. AMI Types (EBS vs Instance Store)
27. ENI vs ENA vs EFA
28. Encrypted Root Device Volumes & Snapshots
29. Spot Instances & Spot Fleets
30. EC2 Hibernate
31. Cloud Watch
32. AWS Command Line
33. IAM Roles with EC2
34. Boot Strap Scripts
35. EC2 Instance Meta Data
36. EFS
37. FSX for Windows & FSX for Lustre
38. EC2 Placement Groups
39. HPC
40. WAF
41. Databases
42. Create an RDS Instance
43. RDS Backups, Multi-AZ & Read Replicas
44. Dynamo DB
45. Advanced Dynamo DB
46. Redshift
47. Aurora
48. Elasticache
49. Database Migration Services (DMS)
50. Caching Strategies
51. EMR
52. Directory Service
53. IAM Policies
54. Resource Access Manager (RAM)
55. Single Sign-On
56. Route 53 – Domain Name Server (DNS)
57. Route 53 – Register a Domain Name Lab
58. Route 53 Routing Policies
59. Route 53 Simple Routing Policy
60. Route 53 Weighted Routing Policy
61. Route 53 Latency Routing Policy
62. Route 53 Failover Routing Policy
63. Route 53 Geolocation Routing Policy
64. Route 53 Geoproximity Routing Policy (Traffic Flow Only)
65. Route 53 Multivalue Answer
66. VPCs
67. Build a Custom VPC
68. Network Address Translation (NAT)
69. Access Control List (ACL)
70. Custom VPCs and ELBs
71. VPC Flow Logs
72. Bastions
73. Direct Connect
74. Setting Up a VPN Over a Direct Connect Connection
75. Global Accelerator
76. VPC End Points
77. VPC Private Link
78. Transit Gateway
79. VPN Hub
80. Networking Costs
81. ELB
82. ELBs and Health Checks – LAB
83. Advanced ELB
84. ASG
85. Launch Configurations & Autoscaling Groups Lab
86. HA Architecture
87. Building a fault tolerant WordPress site – Lab 1
88. Building a fault tolerant WordPress site – Lab 2
89. Building a fault tolerant WordPress site – Lab 3 : Adding Resilience & Autoscaling
90. Building a fault tolerant WordPress site – Lab 4 : Cleaning Up
91. Building a fault tolerant WordPress site – Lab 5 : Cloud Formation
92. Elastic Beanstalk Lab
93. Highly Available Bastions
94. On Premise Strategies
95. SQS
96. SWF
97. SNS
98. Elastic Transcoder
99. API Gateway
100. Kinesis
101. Web Identity Federation – Cognito
102. Reducing Security Threats
103. Key Management Service (KMS)
104. Cloud HSM
105. Parameter Store
106. Lambda
107. Build a Serverless Webpage with API Gateway and Lambda
108. Build an Alexa Skill
109. Serverless Application Model (SAM)
110. Elastic Container Service (ECS)

1. IAM
IAM stands for Identity and Access Management and is a global service.
The root account is created by default and shouldn't be used or shared. Instead we create users. Users are people within the organization and can be grouped, for example into developers, operations, etc. These groups only contain users, not other groups. A user can belong to multiple groups. For example, a user 'A' in the developers group can also be part of the audit group; similarly, a user 'B' in the operations group can also be part of the audit group. JSON (JavaScript Object Notation) documents called Policies are assigned to users or groups. The policies define the permissions of the users and should apply the least privilege principle, meaning don't give a user more permissions than they need.

IAM allows you to manage users and their level of access to the AWS console. IAM lets you set up users, groups, policies (permissions) and roles, and grant access to different parts of the AWS platform.

AWS Root Account Security Best Practices:
1) Use a strong password to help protect account-level access to the AWS Management Console.
2) Never share your AWS account root user password or access keys with anyone.
3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3.
4) If you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to.
5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.

Key features:
i) Centralized control of AWS account
ii) Shared access to AWS account
iii) Granular permissions — restricting access only to few services
iv) Identity Federation — Including Active Directory, Facebook or LinkedIn, etc. Users can log in to the AWS console using the same username and password that they use to log in to their Windows PCs.
v) Multifactor authentication
vi) Provides temporary access for users/devices and services where necessary.
vii) Allows you to set up your own password rotation policy.
viii) Integrates with many different AWS services.
ix) Supports PCI DSS compliance.

Key Terminology:
i) Users: End users such as people, employees in an organization etc.
ii) Groups: A collection of users. Each user in the group will inherit the permissions of the group.
iii) Policies: Policies are made up of documents called policy documents. These documents are in JSON (JavaScript Object Notation) format and they give permissions as to what a user/group/role is able to do.
iv) Roles: Create roles and then assign them to AWS resources.
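
To make the users/groups/policies/roles terminology concrete, here is a minimal boto3 sketch (the group name, user name and the AWS-managed ReadOnlyAccess policy are illustrative assumptions, not part of these notes):

import boto3

iam = boto3.client("iam")  # IAM is a global service, so no region is required

# Create a group and a user, then add the user to the group
iam.create_group(GroupName="developers")
iam.create_user(UserName="user-a")
iam.add_user_to_group(GroupName="developers", UserName="user-a")

# Attach a managed policy to the group; every user in the group inherits it
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)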

US East (N. Virginia) is typically the region where new products and services are launched first.

In console: Security, Identity & Compliance >> IAM

The Access Key ID can be thought of as the username we use for programmatic access; the Secret Access Key is the password for programmatic access.

Tips:
i) IAM is global/universal and does not apply to regions, so users, groups and roles are created globally.
ii) The ‘root account’ is simply the account created when we first set up our AWS account. It has complete admin access. It uses an email address to create the account.
iii) New users have no permissions/policies when first created.
iv) New users are assigned an Access Key ID & Secret Access Key when first created. These are not the same as a password. We cannot use the Access Key ID & Secret Access Key to log in to the console; we use them to access AWS programmatically via the APIs and the command line. Save the Access Key ID & Secret Access Key in a secure location, as they are shown only once; if lost, they must be regenerated.
v) We can access AWS in two ways: console access & programmatic access.
vi) Always set up MFA (Multi-Factor Authentication) on your root account.
vii) We can create and customize our own password rotation policies.

Questions:
i. What is an Availability Zone?
A. data center
B. multiple VPCs
C. multiple regions
D. single region
E. multiple EC2 server instances
Answer (A)

ii. What are two features that correctly describe Availability Zone (AZ)
architecture?
A. multiple regions per AZ
B. interconnected with private WAN links
C. multiple AZ per region
D. interconnected with public WAN links
E. data auto-replicated between zones in different regions
F. Direct Connect supports Layer 2 connectivity to region
Answer (B,C)

iii. An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account. As a solutions architect, which of the following steps would you recommend?
Answer: Create a new IAM role with the required permissions to access the resources in the PROD environment. The users can then assume this IAM role while accessing the resources from the PROD env.
Explanation: IAM roles allow you to delegate access to users or services that normally don’t have access to your organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don’t have to share long-term credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
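
As a rough illustration of the delegation flow described above, the following boto3/STS sketch assumes a hypothetical role ARN in the PROD account whose trust policy allows the development-account users to assume it:

import boto3

sts = boto3.client("sts")

# Assume the cross-account role and receive temporary security credentials
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/prod-access-role",  # hypothetical ARN
    RoleSessionName="dev-user-session",
)["Credentials"]

# Use the temporary credentials to call services in the PROD account
s3_prod = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3_prod.list_buckets()["Buckets"])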

iv. An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
Answer: a. Create a strong password for the AWS account root user
b. Enable MFA for the AWS account root user

v. A development team requires permissions to list an S3 bucket and delete objects from that bucket. A systems administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows the principle of least privilege.

"Version": "2021-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket"
            ],
            "Effect": "Allow"
        }
    ]

Which statement should a solutions architect add to the policy to address this issue?
Answer:

"Action": [
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket/*"
            ],
            "Effect": "Allow"

The main elements of a policy statement are:
Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect defined here).
Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service (DeleteObject is the action defined here).
Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format ( example-bucket/* is the resource defined here).
This policy provides the necessary delete permissions on the resources of the S3 bucket to the group.
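
For reference, a minimal boto3 sketch of the full corrected policy is below (the policy name is hypothetical). It keeps s3:ListBucket on the bucket ARN and grants s3:DeleteObject on the object ARN pattern, in line with least privilege:

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-bucket-access",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)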

vi. A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management. As a solutions architect, which best practices would you recommend (Select two)?
Answer: a. Configure AWS CloudTrail to log all IAM actions
b. Enable MFA for privileged users

vii. A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected. Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?
Answer: Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once.
Explanation: If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.

viii. A large IT company wants to federate its workforce into AWS accounts and business applications. Which of the following AWS services can help build a solution for this requirement? (Select two)
Answer: a. Use AWS Single Sign-On(SSO)
b. Use AWS Identity and Access Management (IAM)

ix. A financial services company uses Amazon GuardDuty for analyzing its AWS account metadata to meet the compliance guidelines. However, the company has now decided to stop using GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud. Which of the following techniques will help the company meet this requirement?
Answer: Disable the service in the general settings
Explanation: Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately. Disabling the service will delete all remaining data, including your findings and configurations before relinquishing the service permissions and resetting the service. So, this is the correct option for our use case.

x. An IT security consultancy is working on a solution to protect data stored in S3 from any malicious activity as well as check for vulnerabilities on EC2 instances.
Answer: Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on the EC2 instances.

xi. A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service. Which of the following would you identify as data sources supported by GuardDuty?
Answer: VPC Flow Logs, DNS logs, CloudTrail events
Explanation: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.

2. Billing Alarm
Set up a minimum threshold amount. An email (alarm) gets triggered when the charges cross the threshold limit.
Management & Governance >> CloudWatch >> Billing >> Click on ‘Create Alarm’ which is at bottom (below to Name, State, Conditions, Actions)

Tips:
*i) How can you get automatic notifications if your account goes over a threshold limit (say a thousand dollars or so)? — Go to CloudWatch and create a billing alarm. The billing alarm uses an SNS topic, a way of sending an email whenever the estimated charges go over the threshold (see the sketch below).
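
A hedged boto3 sketch of such a billing alarm follows (the 1,000 USD threshold and the SNS topic ARN are assumptions; billing metrics are published only in us-east-1 under the AWS/Billing namespace and require billing alerts to be enabled):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="billing-over-1000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # 6 hours; billing data is not per-minute
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # assumed topic
)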

3. S3
S3 = Simple Storage Service
S3 is used to store objects and one of the main building blocks of AWS.
It's advertised as ‘infinitely scaling’ storage, meaning we can store as many objects as we want. Many websites use S3 as a backbone, and many AWS services use S3 as an integration as well. For example, EBS snapshots are actually stored in S3, but we don't see them.

S3 provides developers and IT teams with secure, durable, highly-scalable object storage. S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. S3 is a safe place to store files and it is object-based storage. The data is spread across multiple devices and facilities.

Basics of S3:
i) S3 is object based – i.e. it allows you to upload files
ii) Files can be from 0 bytes to 5 TB
iii) There is unlimited storage
iv) Files are stored in Buckets.
v) S3 is a universal namespace, i.e. the names must be unique globally. It's actually creating a web address, so bucket names have to be unique. Suppose we create a bucket in Northern Virginia (the default AWS region) named ‘testbucket’; then the web address will be https://testbucket.s3.amazonaws.com/ If we create the bucket in Ireland, the web address would be https://testbucket.s3.eu-west-1.amazonaws.com/
*vi) When we upload a file into S3, we receive an HTTP 200 code in the browser if the upload was successful.

Use cases:
i) We can do backup and storage
ii) We can use it for disaster recovery and capture data on S3 across different regions.
iii) We can also archive data at low cost (for example, in the Glacier storage classes).
iv) We can have hybrid cloud storage
v) Application and Media hosting
vi) Data lakes and big data analytics

S3 Buckets Overview:
i) S3 allows people to store objects (files) in buckets (directories).
ii) Buckets must have a globally unique name (across all regions and all accounts)
iii) Buckets are defined at the region level
iv) S3 is a global service but buckets are created in a region
v) Follow the naming conventions: no uppercase, no underscores, 3-63 characters long, not an IP address, must start with a lowercase letter or number.

S3 Objects Overview:
Objects consist the following:
i) Key – This is simply the name of the object (file).
ii) Value – This is simply the data and is made up of a sequence of bytes.
iii) Version ID – If versioning is enabled; important for versioning.
iv) Metadata (data about the data you are storing) – e.g. whether this object belongs to the Finance department or the HR department.
v) Subresources – ACL (permissions of the object; we can lock each object individually or lock the entire bucket) and Torrent.
vi) The key represents the full path to the objects. Ex: s3://my-bucket/my_object.txt
vii) The key is composed of prefix + object name. Ex: s3://my-bucket/folder1/folder2/my_object.txt
Here folder1/folder2 is prefix and my_object.txt is an object.
viii) There is no concept of directories within buckets
ix) Just keys with very long names that contain slashes (/)
x) Object values are the content of the body
xi) Max object size is 5TB (5000GB). If uploading more than 5GB, you must use ‘multipart upload’.
xii) Tags (unicode key-value pair up to 10) – useful for security/ lifecycle.

S3 Data Consistency:
i) Read after write consistency for PUTS of new objects — Able to read immediately once we upload the file. If we write a new file and read it immediately afterwards, we will be able to view that data.
ii) Eventual consistency for overwrite PUTS and DELETES (can take some time to propagate) — We have a version 1 file in S3; we upload version 2 and immediately try to read the object, then we might get either version 1 or version 2, but if we wait a couple of seconds we will always get version 2. So it is only when we overwrite or delete a file that consistency is eventual. If we update an existing file or delete a file and read it immediately, we may get the older version or the latest version. Basically, changes to objects can take a little bit of time to propagate.

S3 guarantees:
i) S3 is built for 99.99% availability. However Amazon guarantees 99.9% availability.
ii) Amazon guarantees 99.999999999% durability (11 9's)

S3 Features:
i) Tiered storage — We have different tiered storages available
ii) Lifecycle Management — We can move objects between tiers based on their age, e.g. move a file to one tier when it is 30 days old and to another tier when it is 90 days old.
iii) Versioning — We can have multiple versions of objects in S3 buckets. We can also encrypt these objects.
iv) MFA Delete — We use MFA for deletion of objects.
v) Secure data using ACLs and Bucket Policies

S3 Storage Classes:

S3 Standard: 99.99% availability and 99.999999999% durability. Stored redundantly across multiple devices in multiple facilities and designed to sustain the loss of 2 facilities concurrently.
S3 – IA (Infrequently Accessed): For data that is accessed less frequently, but requires rapid access when needed. Lower fee than S3 Standard, but you are charged a retrieval fee.
S3 One Zone – IA: For where you want a lower-cost option for IA data, but do not require the multiple-AZ data resilience. Sometimes referred to as RRS (Reduced Redundancy Storage), which is deprecated.
S3 – Intelligent Tiering: Designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.
S3 Glacier: Secure, durable and low-cost storage class for data archiving. We can reliably store any amount of data at costs that are cheaper than on-premises solutions. Retrieval times configurable from minutes to hours.
S3 Glacier Deep Archive: Lowest-cost storage class, where a retrieval time of 12 hours is acceptable.

S3 Comparison:

S3 Standard – Durability: 99.999999999% (11 9's); Availability: 99.99%; Availability SLA: 99.9%; Availability Zones: >=3; Min capacity charge per object: N/A; Min storage duration charge: N/A; Retrieval fee: N/A; First byte latency: milliseconds
S3 Intelligent Tiering – Durability: 11 9's; Availability: 99.9%; Availability SLA: 99%; Availability Zones: >=3; Min capacity charge per object: N/A; Min storage duration charge: 30 days; Retrieval fee: N/A; First byte latency: milliseconds
S3 Standard IA – Durability: 11 9's; Availability: 99.9%; Availability SLA: 99%; Availability Zones: >=3; Min capacity charge per object: 128KB; Min storage duration charge: 30 days; Retrieval fee: per GB retrieved; First byte latency: milliseconds
S3 One Zone IA – Durability: 11 9's; Availability: 99.5%; Availability SLA: 99%; Availability Zones: 1; Min capacity charge per object: 128KB; Min storage duration charge: 30 days; Retrieval fee: per GB retrieved; First byte latency: milliseconds
S3 Glacier – Durability: 11 9's; Availability: N/A; Availability SLA: N/A; Availability Zones: >=3; Min capacity charge per object: 40KB; Min storage duration charge: 90 days; Retrieval fee: per GB retrieved; First byte latency: minutes or hours
S3 Glacier Deep Archive – Durability: 11 9's; Availability: N/A; Availability SLA: N/A; Availability Zones: >=3; Min capacity charge per object: 40KB; Min storage duration charge: 180 days; Retrieval fee: per GB retrieved; First byte latency: hours
Example: Public Access – Use Bucket Policy
We have an S3 bucket and a user who is not in your account.
Anonymous www website visitor — Trying to read files from — S3 Bucket
By default, when the visitor tries to read a file using a web browser, access will be denied. To solve this we attach an S3 bucket policy to the S3 bucket which allows public access.

Example: User Access to S3 – IAM Permissions
We have an S3 bucket and a user who is within our account. We attach an IAM policy to the user saying that the user can access the S3 bucket. Here we don't need an extra bucket policy as we did for public access.

Example: EC2 instance access – Use IAM Roles
An EC2 instance wants to access S3 buckets == Create an EC2 instance role and attach IAM permissions to that role; the EC2 instance will then be able to access S3 buckets.

Advanced: Cross Account Access – Use Bucket Policy
Use an extra bucket policy for cross-account access. Suppose we have an IAM user in another account (it is common to have multiple accounts) and we want to give this user access to the contents of the S3 bucket. Then create an S3 bucket policy which allows cross-account access, and the user from the other account will be able to access the S3 bucket.

S3 Bucket Policies:
i) They are JSON based policies, just like IAM policies, and they look very similar. We define the Resource – the buckets and objects the policy applies to; the Action – the set of API calls to allow or deny; the Effect – Allow or Deny; and the Principal – the account or user to apply the policy to.
ii) Use an S3 bucket policy to grant public access to the bucket, to force objects to be encrypted at upload, or to grant access to another account, also called cross-account access (see the sketch below).
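
Below is a minimal sketch of a bucket policy that grants public read access, showing the Effect/Principal/Action/Resource elements described above (the bucket name is a placeholder), applied with boto3:

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",                      # anyone (public access)
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-bucket/*"],
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(bucket_policy))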

S3 Websites:
S3 can host static websites and have them accessible on the www
The website URL will be: <bucket-name>.s3-website-<aws-region>.amazonaws.com or <bucket-name>.s3-website.<aws-region>.amazonaws.com
If we do not make the S3 bucket public in the first place, we get a 403 (Forbidden) error.
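
A minimal boto3 sketch of enabling static website hosting on a bucket (the bucket and document names are placeholders):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="my-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)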

Tips:
i) S3 is object based and allows you to upload files. Objects (files) are stored in buckets.
ii) File size can be from 0 bytes to 5 TB
iii) There is unlimited storage
iv) S3 is a universal namespace. That is, names must be unique globally.
v) In the default region US East (N. Virginia) the domain name will be https://testbucket.s3.amazonaws.com/ In other regions (Ireland here) it will be https://testbucket.s3.eu-west-1.amazonaws.com/
vi) Not suitable for installing an OS or DB on, as S3 is object based; we need block-based storage for that.
vii) Successful uploads will generate an HTTP 200 status code
viii) We can turn on MFA Delete
ix) Control access to buckets either using a bucket ACL or using bucket policies.

Questions:
i. What two statements correctly describe versioning for protecting data at rest on
S3 buckets?
A. enabled by default
B. overwrites most current file version
C. restores deleted files
D. saves multiple versions of a single file
E. disabled by default
Answer (C,E)

ii. You have a requirement to create an index to search customer objects stored in
S3 buckets. The solution should enable you to create a metadata search index for
each object stored to an S3 bucket. Select the most scalable and cost effective
solution?
A. RDS, ElastiCache
B. DynamoDB, Lambda
C. RDS, EMR, ALB
D. RedShift
Answer (B)

iii. A junior scientist working with the Deep Space Research Laboratory at NASA is trying to upload a high-resolution image of a nebula into Amazon S3. The image size is approximately 3GB. The junior scientist is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer. Given this scenario, which of the following is correct regarding the charges for this image transfer?
Answer: Does not need to pay any transfer charges for the image upload.
Explanation: No S3 data transfer charges when data is transferred in from the internet. Also with S3TA, pay only for transfers that are accelerated. Since S3TA did not result in an accelerated transfer so no transfer charges.

iv. An audit department generates and accesses the audit reports only twice in a financial year. The department uses AWS Step Functions to orchestrate the report creating process that has failover and retry scenarios built into the solution. The underlying data to create these audit reports is stored on S3, runs into hundreds of Terabytes and should be available with millisecond latency. As a solutions architect, which is the MOST cost-effective storage class that you would recommend to be used for this use-case?
Answer: Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Explanation: Since the data is accessed only twice in a financial year but needs rapid access when required, the most cost-effective storage class for this use-case is S3 Standard-IA

v. The IT department at a consulting firm is conducting a training workshop for new developers. As part of an evaluation exercise on Amazon S3, the new developers were asked to identify the invalid storage class lifecycle transitions for objects stored on S3. Can you spot the INVALID lifecycle transitions from the options below? (Select two)
Answer: S3 Intelligent Tiering => S3 Standard and S3 One Zone IA => S3 Standard IA
Explanation:

Unsupported lifecycle transitions for S3 storage classes:
- Any storage class to the S3 Standard storage class
- Any storage class to the Reduced Redundancy storage class
- The S3 Intelligent-Tiering storage class to the S3 Standard-IA storage class
- The S3 One Zone-IA storage class to the S3 Standard-IA or S3 Intelligent-Tiering storage classes
Supported lifecycle transitions for S3 storage classes:
- The S3 Standard storage class to any other storage class
- Any storage class to the S3 Glacier or S3 Glacier Deep Archive storage classes
- The S3 Standard-IA storage class to the S3 Intelligent-Tiering or S3 One Zone-IA storage classes
- The S3 Intelligent-Tiering storage class to the S3 One Zone-IA storage class
- The S3 Glacier storage class to the S3 Glacier Deep Archive storage class

vi. A leading video streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its big data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline. Which of the following is the MOST cost-effective strategy for storing this intermediary query data?
Answer: Store the intermediary query results in S3 Standard storage class.
Explanation:

S3 Standard storage class: S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. As there is no minimum storage duration charge and no retrieval fee (remember that intermediary query results are heavily referenced by other parts of the analytics pipeline), this is the MOST cost-effective storage class amongst the given options.
S3 Intelligent-Tiering storage class: The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
S3 Standard-Infrequent Access storage: S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
S3 One Zone-Infrequent Access storage: S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
To summarize again, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA have a minimum storage duration charge of 30 days (so instead of 24 hours, you end up paying for 30 days). S3 Standard-IA and S3 One Zone-IA also have retrieval charges (as the results are heavily referenced by other parts of the analytics pipeline, so the retrieval costs would be pretty high). Therefore, these 3 storage classes are not cost optimal for the given use-case.

vii. A media agency stores its re-creatable assets on Amazon S3 buckets. The assets are accessed by a large number of users for the first few days and the frequency of access falls down drastically after a week. Although the assets would be accessed occasionally after the first week, but they must continue to be immediately accessible when required. The cost of maintaining all the assets on S3 storage is turning out to be very expensive and the agency is looking at reducing costs as much as possible. As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?
Answer: Configure a lifecycle policy to transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Explanation: S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of S3 Standard or S3 Standard-IA. The minimum storage duration is 30 days before you can transition objects from S3 Standard to S3 One Zone-IA. S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.

viii. A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket. Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)
Answer: a. Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket.
b. Use multipart uploads for faster file uploads into the destination S3 buckets.
Explanation: Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput, therefore it facilitates faster file uploads.

4. Creation of S3 Bucket
Console >> US East (N. Virginia) >> Storage >> S3 [region changed to global] >> Create Bucket >> Enter unique bucket name >> Select region (US East (N. Virginia))
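
The same bucket creation can be done programmatically; a minimal boto3 sketch is below (the bucket name and region are placeholders; outside us-east-1 a LocationConstraint must be supplied):

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="my-unique-bucket-name-12345",
    # In us-east-1, omit CreateBucketConfiguration entirely
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)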

5. S3 Pricing Tiers
S3 Charges: What makes up the cost of S3?
i) Storage: The more we store in S3, the more we get billed
ii) Requests and data retrievals: The more requests made to S3, the more we get billed
iii) Storage Management Pricing: Objects placed in different tiers.
iv) Data transfer pricing
v) Cross region replication pricing
vi) Transfer acceleration – Enables fast, easy and secure transfer of files over long distances between end users and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, the data is routed to S3 over an optimized network path.

Different Tiers in terms of cost savings in descending order:
i) S3 Standard – Most expensive
ii) S3 – IA
iii) S3 – Intelligent Tiering
iv) S3 One Zone – IA
v) S3 Glacier
vi) S3 Glacier Deep Archive – Least expensive

6. S3 Security and Encryption
By default all newly created buckets are PRIVATE. We can set up access control to our buckets using:
i) Bucket Policies — Work at bucket level
ii) Access Control Lists — Work at individual objects level
S3 buckets can be configured to create access logs which log all requests made to S3 bucket. This can be sent to another bucket or even another bucket in another account.

If we type https in the browser, then that traffic is encrypted. So basically the traffic between your computer and the server is encrypted in transit, and no one will be able to intercept it to understand what we are looking at.

We have two types of encryption:
i) Encryption in Transit is achieved by SSL/TLS
ii) Encryption at rest (server side) is achieved by:
a) S3 Managed Keys – SSE-S3
b) AWS Key Management Service, Managed Keys – SSE-KMS
c) Server side encryption with customer provided keys – SSE – C

In the console we have two types of encryption (see the sketch after this list):
i) AES-256
ii) AWS-KMS
a) aws/s3
b) Custom KMS ARN
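
A hedged boto3 sketch of uploading objects with the two server-side encryption options above (the bucket, keys and KMS key alias are assumptions):

import boto3

s3 = boto3.client("s3")

# SSE-S3: AES-256 with S3-managed keys
s3.put_object(Bucket="my-bucket", Key="report.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: encryption with an AWS KMS key
s3.put_object(Bucket="my-bucket", Key="secret.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-kms-key")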

The different components involved in S3 Security are:
i) User based: IAM policies – We are going to attach a policy to Users to allow them to get access to S3 buckets.
ii) Resource based:
Bucket Policies – bucket wide rules from the S3 console – allows cross account. A rule attached directly to S3 bucket to allow or deny request coming from other accounts or public requests.
Object Access Control List – finer grain. We can define at object level who can do what.
Bucket Access Control List – less common
Note: An IAM principal can access an S3 object if (the user's IAM permissions allow it OR the resource policy allows it) AND there is no explicit deny.
iii) Encryption: Encrypt objects in S3 using encryption keys. Encrypt the object and ensure no one but you can receive and decrypt these objects.

7. S3 Version Control
i) We can version files in S3.
ii) Versioning is enabled at bucket level
iii) We get a new version anytime we update a file like Version 1, Version 2, Version 3…
iv) Its best practice to version the buckets as it protects against unintended deletes (ability to restore a version). So if someone deletes a file, we will be able to restore the file back to a previous version. Or if wrong file is uploaded, we can easily roll back to previous version.
v) Any file that is not versioned prior to enabling versioning will have version ‘NULL’
vi) Suspend versioning does not delete previous versions.
vii) Stores all versions of an object (including all writes and even if we delete an object)
viii) Great backup tool
ix) Once enabled, versioning cannot be disabled, only suspended
x) Also integrates with Lifecycle rules
xi) Versioning also comes with MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.
xii) When we upload a new version, it is automatically private; the permissions of older versions do not change.
xiii) When versions are hidden and we delete a file, a delete marker is created; to restore the file, we delete the delete marker. To permanently delete a specific version (without a delete marker being created), select ‘show versions’ and delete that version directly.

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
For example: If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can always restore the previous version.
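
A minimal boto3 sketch of enabling versioning on a bucket (the bucket name is a placeholder); remember that once enabled, versioning can only be suspended, not disabled:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},   # or "Suspended"
)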

S3 Server Access Logging:
i) For audit purpose, we may want to log all access to S3 buckets.
ii) Any request made to S3, from any account, authorized or denied will be logged into another S3 bucket.
iii) That data can be analyzed using data analysis tool.
iv) Very helpful to come down to the root cause of an issue or audit usage, view suspicious patterns etc…

S3 Replication Overview:
CRR = Cross Region Replication
SRR = Same Region Replication
We have our bucket in eu-west-1 and want to replicate all of its contents continuously into another bucket in us-east-1. For this we can set up S3 Replication; behind the scenes an asynchronous process copies all the files from one bucket to the other. To achieve this (see the sketch after this list):
i) We must enable versioning in the source and the destination buckets.
ii) Enable CRR or SRR, depending on whether you are replicating in the same region or in a different region
iii) Buckets can be in different accounts
iv) Copying is asynchronous
v) Must give proper IAM permissions to S3.
CRR – Use case: Compliance, lower latency access, replication across accounts.
SRR – Use case: Log aggregation, live replication between production and test accounts
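
A hedged boto3 sketch of a cross-region replication rule (the bucket names and the replication IAM role ARN are assumptions; versioning must already be enabled on both buckets):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="source-bucket-eu-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},                       # empty filter = whole bucket
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-us-east-1"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
            }
        ],
    },
)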

Questions:
i. A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects. As a solutions architect, what are your recommendations to address these guidelines? (Select two)
Answer: a. Enable versioning on the bucket
b. Enable MFA delete on the bucket

ii. Which of the following features of an Amazon S3 bucket can only be suspended once they have been enabled?
Answer: Versioning
Explanation: Server Access Logging, Static Website Hosting and Requester Pays features can be disabled even after they have been enabled.

8. S3 Life Cycle Management
i) Life Cycle Management automates moving objects from one storage tier to another storage tier and eventually archiving them off to Glacier (see the sketch after this list).
ii) We can also use it to delete the objects permanently.
iii) Can be used in conjunction with versioning.
iv) Can be applied to current versions and previous versions.
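
A minimal boto3 sketch of such a lifecycle configuration (the bucket name, rule ID and day counts are assumptions): transition to Standard-IA after 30 days, to Glacier after 90 days, and expire after 365 days:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},           # apply to the whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},        # permanently delete after a year
            }
        ]
    },
)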

9. S3 Lock Policies and Glacier Vault Lock
S3 Object Lock: We can use S3 Object Lock to store objects using a Write Once Read Many (WORM) model. It can help you prevent objects from being deleted or modified for a fixed amount of time or indefinitely. So if you have an object and you don't want anybody to be able to modify it, change the data inside it, or delete it, you can use S3 Object Lock. We can use S3 Object Lock to meet regulatory requirements that require WORM storage, or to add an extra layer of protection against object changes and deletion.
Like all other Object Lock settings, retention periods apply to individual object versions. Different versions of a single object can have different retention modes and periods.

S3 Object Lock Modes:
i) Governance Mode: Users can't overwrite or delete an object version or alter its lock settings unless they have special permissions. With governance mode, we protect objects against being deleted by most users, but we can still grant some users permission to alter the retention settings or delete the object if necessary.
ii) Compliance Mode: A protected object version can't be overwritten or deleted by any user, including the root user of the AWS account. When an object is locked in Compliance mode, its retention mode can't be changed and its retention period can't be shortened. Compliance mode ensures an object version can't be overwritten or deleted for the duration of the retention period.

Retention Periods:
A retention period protects an object version for a fixed amount of time. When we place a retention period on an object version, S3 stores a timestamp in the object version's metadata to indicate when the retention period expires. After the retention period expires, the object version can be overwritten or deleted unless a legal hold has also been placed on the object version.
You can place a retention period on an object version either explicitly or through a bucket default setting. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores the Retain Until Date setting in the object version’s metadata and protects the object version until the retention period expires.

Legal Holds:
S3 Object Lock also enables you to place a legal hold on an object version. Like a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed. Legal holds can be freely placed and removed by any user who has the s3:PutObjectLegalHold permission.
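
A hedged boto3 sketch of placing a governance-mode retention period and a legal hold on an object version (names and dates are placeholders; the bucket must have Object Lock enabled):

from datetime import datetime
import boto3

s3 = boto3.client("s3")

# Governance-mode retention until a fixed date
s3.put_object_retention(
    Bucket="my-locked-bucket",
    Key="records/statement.pdf",
    Retention={"Mode": "GOVERNANCE", "RetainUntilDate": datetime(2025, 1, 1)},
)

# Legal hold: stays in effect until explicitly removed
s3.put_object_legal_hold(
    Bucket="my-locked-bucket",
    Key="records/statement.pdf",
    LegalHold={"Status": "ON"},
)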

Glacier Vault Lock:
S3 Glacier Vault Lock allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a Vault Lock policy. We can specify controls such as WORM in a Vault Lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed.

Recap:
i) Use S3 Objects Lock to store objects using a write once, read many (WORM) model.
ii) Object locks can be on individual objects or applied across the bucket as a whole.
iii) Object locks come into two modes: Governance mode and Compliance mode
iv) With governance mode, users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.
v) With compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account.
vi) S3 Glacier Vault Lock: Allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a Vault Lock Policy. You can specify controls such as WORM in a Vault Lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed.

Questions:
i. A company uses Amazon S3 buckets for storing sensitive customer data. The company has defined different retention periods for different objects present in the Amazon S3 buckets, based on the compliance requirements. But, the retention rules do not seem to work as expected. Which of the following options represent a valid configuration for setting up retention periods for objects in Amazon S3 buckets? (Select two)
Answer: a. When you apply a retention period to an object version explicitly, you specify a ‘Retain Until Date’ for the object version
b. Different versions of a single object can have different retention modes and periods

10. S3 Performance
Prefix within S3: Prefix is simply the middle portion between the bucket name and the object.
mybucketname/folder1/subfolder1/myobject.jpg — Here /folder1/subfolder1 is Prefix.
mybucketname/folder2/subfolder1/myobject.jpg — Here /folder2/subfolder1 is Prefix.
mybucketname/folder3/myobject.jpg — Here /folder3 is Prefix.
mybucketname/folder4/subfolder4/myobject.jpg — Here /folder4/subfolder4 is Prefix.

S3 has extremely low latency. We can get the first byte out of S3 within 100-200 milliseconds. We can also achieve a high number of requests: 3500 PUT/COPY/POST/DELETE and 5500 GET/HEAD requests per second per prefix. We get better performance by spreading our reads across different prefixes. If we are using two prefixes then we achieve 11000 requests per second. If we are using four prefixes then we achieve 22000 requests per second. The more prefixes we have the better performance we can achieve.

S3 Limitations when using Server Side Encryption – KMS
i) If we are using SSE-KMS to encrypt our objects in S3, we must keep in mind the KMS limits.
ii) When we upload a file, we will call GenerateDataKey in the KMS API.
iii) When we download a file, we will call Decrypt in the KMS API.
iv) Uploading/ downloading will count toward the KMS quota.
v) Currently we cannot request a quota increase for KMS.
vi) Region-specific; however, it's either 5500, 10000 or 30000 requests per second.

Multipart Uploads:
i) Recommended for files over 100MB
ii) Required for files over 5GB
iii) Allows parallelized uploads (essentially increases efficiency)
iv) For downloads we use S3 Byte-Range fetches
v) Parallelize downloads by specifying byte ranges
vi) If there's a failure in the download, it's only for a specific byte range
vii) S3 byte range fetches can be used to speed up downloads
viii) S3 byte range fetches can be used to download partial amounts of the file (eg: header information); see the sketch after this list
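
A hedged boto3 sketch of a multipart upload via the high-level transfer API, plus a byte-range fetch (file, bucket and key names are placeholders):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above the threshold are automatically uploaded in parallel parts
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,   # 100 MB
                        multipart_chunksize=100 * 1024 * 1024)
s3.upload_file("large-video.mp4", "my-bucket", "videos/large-video.mp4",
               Config=config)

# Byte-range fetch: download only the first 1 KB (e.g. header information)
header = s3.get_object(Bucket="my-bucket", Key="videos/large-video.mp4",
                       Range="bytes=0-1023")["Body"].read()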

Questions:
i. A file-hosting service uses Amazon S3 under the hood to power its storage offerings. Currently all the customer files are uploaded directly under a single S3 bucket. The engineering team has started seeing scalability issues where customer file uploads have started failing during the peak access hours with more than 5000 requests per second. Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?
Answer: Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations.

11. S3 Select and Glacier Select
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. We can get data by rows or columns using simple SQL expressions. S3 Select retrieves only the data that is needed by the application, and we can achieve drastic performance increases; in many cases as much as a 400% improvement.

Assume that your data is stored in S3 in zip files that contain csv files. Without S3 Select, we need to download, decompress and process the entire csv to get the data we needed.

With S3 Select, we can use a simple SQL expression to return only the data we intend to retrieve, instead of retrieving the entire object. This means we are dealing with an order of magnitude less data, which improves the performance of the underlying applications.
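
A hedged boto3 sketch of an S3 Select query against a gzipped CSV object (the bucket, key, column names and filter value are assumptions):

import boto3

s3 = boto3.client("s3")
resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="data/store-sales.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.item, s.price FROM s3object s WHERE s.store = 'store42'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"},
                        "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The response payload is an event stream; print the matching records
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())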

Glacier Select:
Some companies in highly regulated industries like financial services, healthcare and others write data directly to Amazon Glacier to satisfy compliance needs like SEC Rule 17a-4 or HIPAA. Many S3 users have lifecycle policies designed to save on storage costs by moving their data into Glacier when they no longer need to access it on a regular basis.

Glacier Select allows you to run SQL queries against data in Glacier directly.

12. AWS Organizations & Consolidated Billing
AWS Organizations: AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.

Advantages of Consolidated Billing:
i) One bill covering multiple AWS accounts
ii) Very easy to track charges and allocate costs
iii) Volume pricing discount

Recap:
i) Paying (root) account should be used for billing purposes only. Do not deploy resources into the paying account.
ii) Enable/ Disable AWS services using Service Control Policies (SCP) either on OU or on individual accounts.

13. Sharing S3 Buckets between Accounts
We have three ways to share S3 buckets across accounts.
i) Using Bucket Policies & IAM (applies across the entire bucket) – Programmatic access only.
ii) Using Bucket ACLs & IAM (applies on individual objects) – Programmatic access only.
iii) Cross-account IAM roles – Programmatic and Console access

14. Cross Region Replication
i) Replicating objects from a bucket in one region to a bucket in another region.
ii) Versioning must be enabled on both the source and destination buckets.
iii) Files in an existing bucket are not replicated automatically.
iv) All subsequent updated files will be replicated automatically.
v) Delete markers are not replicated.
vi) Deleting individual versions or delete markers will not be replicated.

15. Transfer Acceleration
S3 Transfer Acceleration utilises the CloudFront edge network to accelerate the uploads to S3. Instead of uploading directly to S3 bucket, we can use a distinct URL to upload directly to an edge location which will then transfer that file to S3. We will get a distinct URL to upload to: abc.s3-accelerate.amazonaws.com
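
A hedged boto3 sketch of enabling Transfer Acceleration and then uploading through the accelerate endpoint (bucket and file names are placeholders):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Subsequent uploads go through <bucket>.s3-accelerate.amazonaws.com
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-file.zip", "my-bucket", "uploads/big-file.zip")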

16. DataSync Overview
DataSync basically allows you to move large amounts of data into AWS; you typically use it with your on-premises data center.
On-premises Data Center [The DataSync agent is deployed as an agent on a server and connects to NAS or file system to copy data to AWS and write data from AWS] >> Network Transfers [DataSync automatically encrypts data and accelerates transfer over the WAN. DataSync performs automatic data integrity checks in-transit and at-rest] >> AWS Region [DataSync seamlessly and securely connects to Amazon S3, Amazon EFS or Amazon FSx for Windows file server to copy data and metadata to and from AWS]

Recap:
i) Used to move large amounts of data from on-premises to AWS
ii) Used with NFS and SMB compatible file systems
iii) Replication can be done hourly, daily or weekly
iv) Install the DataSync agent to start the replication
v) Can be used to replicate EFS to EFS

17. CloudFront Overview
A content delivery network (CDN) is a system of distributed servers (network) that deliver webpages and other web content to a user based on the geographic locations of the user, the origin of the webpage, and a content delivery server.

Edge location: This is the location where content will be cached. This is separate to an AWS Region/ AZ. Edge locations are not just READ only, we can write to them too (i.e put an object on to them). Objects are cached for the life of the TTL. We can clear cached objects but we will be charged.

Origin: This is the origin of all the files that the CDN will distribute. This can be an S3 bucket, an EC2 instance, an ELB or Route53.

Distribution: This is the name given to the CDN, which consists of a collection of Edge locations.

CloudFront: CloudFront is Amazon's CDN. It's a way of caching large files at locations close to end users. It can be used to deliver your entire website, including dynamic, static, streaming, and interactive content, using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance. CloudFront is a global service.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery)

CloudFront Distributions:

Web Distribution – typically used for websites
RTMP – used for media streaming
Questions:
i. Where does Amazon retrieve web content when it is not in the nearest
CloudFront edge location?
A. secondary location
B. file server
C. EBS
D. S3 bucket
Answer (D)

ii. How are origin servers located within CloudFront (Select two)
A. DNS request
B. distribution list
C. web distribution
D. RTMP protocol
E. source mapping
Answer (A,C)

iii. Where are HTML files sourced from when they are not cached at a CloudFront
edge location?
A. S3 object
B. origin HTTP server
C. S3 bucket
D. nearest edge location
E. RTMP server
F. failover edge location
Answer (B)

iv. What feature permits tenants to use a private domain name instead of the domain name that CloudFront assigns to a distribution?
A. Route 53
B. CNAME record
C. MX record
D. RTMP
E. Signed URL
Answer (B)

v. Your company is deploying a web site with dynamic content to
customers in US, EU and APAC regions of the world. Content will include live
streaming videos to customers. SSL certificates are required for security
purposes. Select the AWS service that delivers all requirements and provides the
lowest latency?
A. DynamoDB
B. CloudFront
C. S3
D. Redis
Answer (B)

vi. CloudFront offers a multi-tier cache in the form of regional edge caches that improve latency. However, there are certain content types that bypass the regional edge cache, and go directly to the origin. Which of the following content types skip the regional edge cache? (Select two)
Answer: a. Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
b. Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin

18. CloudFront Signed URL’s and Cookies
A signed URL is for individual files. 1 file = 1 URL
A signed Cookie is for multiple files. 1 cookie = multiple files

When we create a signed URL or signed cookie, we attach a policy and the policy can include:
i. URL expiration
ii. IP ranges
iii. Trusted signers

If the origin is EC2, use CloudFront signed URLs; S3 pre-signed URLs are an option only when the origin is S3 (a signed-URL sketch follows).
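As a rough sketch of generating a signed URL programmatically (assuming the third-party rsa package, a CloudFront key pair whose public key is registered with the distribution, and placeholder domain/key IDs), botocore ships a CloudFrontSigner helper:

from datetime import datetime, timedelta
import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign the policy with the private key of the CloudFront key pair (path is a placeholder).
    with open('cloudfront_private_key.pem', 'rb') as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, 'SHA-1')

key_id = 'K2JCJMDEHXQW5F'  # hypothetical public key ID / key pair ID
signer = CloudFrontSigner(key_id, rsa_signer)

# Signed URL for a single private file, expiring in one hour (1 file = 1 URL).
url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/private/report.pdf',
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)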

Question:
i. What two attributes are only associated with CloudFront private content?
A. Amazon S3 URL
B. signed cookies
C. web distribution
D. signed URL
E. object
Answer (B,D)

19. Snowball
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball addresses common challenges with large-scale data transfers including high n/w costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure and can be as little as one-fifth the cost of high-speed internet.
Snowball comes in either a 50TB or 80TB size. Snowball uses multiple layers of security designed to protect your data including tamper-resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody of your data. Once the data transfer job has been processed and verified, AWS performs a s/w erasure of the Snowball appliance.

Snowball Edge: Is a 100TB data transfer device with on-board storage and compute capabilities. We can use Snowball edge to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations. Snowball edge connects to your existing applications and infrastructure using standard storage interfaces, streamlining the data transfer process and minimizing setup and integration. Snowball Edge can cluster together to form a local storage tier and process your data on-premises, helping ensure your applications continue to run even when they are not able to access the cloud.
Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.
As each Snowball Edge Storage Optimized device can handle 80TB of data, you can order 10 such devices to take care of the data transfer for all applications.

Snowmobile: Is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. We can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast and cost effective.

20. Storage Gateway
Storage Gateway is a service that connects an on-premises s/w appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service enables you to securely store data in the AWS cloud for scalable and cost-effective storage.
Storage Gateway's s/w appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Storage Gateway supports either VMware ESXi or Microsoft Hyper-V. Once you have installed your gateway and associated it with your AWS account through the activation process, you can use the AWS management console to create the storage gateway option that is right for you.

Three different types of Storage Gateway are as follows:

File Gateway (NFS & SMB): For flat files, stored directly on S3.
Volume Gateway (iSCSI): Stored volumes & Cached volumes.
Tape Gateway (VTL – Virtual Tape Library): Tape backups in the cloud.
Stored volumes: Entire dataset is stored on site and is asynchronously backed up to S3.

Cached volumes: Entire dataset is stored on S3 and the most frequently accessed data is cached on site.

Tape Gateway allows moving tape backups to the cloud.
It provides a backup solution that seamlessly connects to the AWS cloud, offering virtual tape storage and VTL management so that backup data and images can be stored on S3 and archived to Glacier. (The Volume Gateway, by contrast, provides cloud-backed iSCSI block storage volumes for on-premises applications, using either cached volumes or stored volumes.)
Questions:
i. As part of a pilot program, a biotechnology company wants to integrate data files from its on-premises analytical application with AWS Cloud via an NFS interface. Which of the following AWS service is the MOST efficient solution for the given use-case?
Answer: AWS Storage Gateway – File Gateway

21. Athena versus Macie
Athena: Interactive query service which enables you to analyze and query data located in S3 using standard SQL.
i. Serverless, nothing to provision, pay per query/ per TB scanned
ii. No need to set up complex ETL process
iii. Works directly with data stored in S3.

Athena can be used for:
i. Can be used to query log files stored in S3, e.g. ELB logs, S3 access logs, etc. (see the query sketch after this list)
ii. Generate business reports on data stored in S3
iii. Analyse AWS cost and usage reports
iv. Run queries on click-stream data
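A minimal boto3 sketch of item (i) above, assuming an Athena database named sampledb, a table of ELB logs and a results bucket that you own (all names hypothetical):

import boto3

athena = boto3.client('athena', region_name='us-east-1')

# Run a standard SQL query directly against data in S3; results land in the output bucket.
response = athena.start_query_execution(
    QueryString="SELECT elb_name, count(*) AS requests FROM elb_logs GROUP BY elb_name",
    QueryExecutionContext={'Database': 'sampledb'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results-bucket/'},
)
print(response['QueryExecutionId'])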

PII (Personally Identifiable Information)
i. Personal data used to establish an individual's identity
ii. This data could be exploited by criminals, used in identity theft and financial fraud
iii. Home address, email address, SSN
iv. Passport number, driver's licence number
v. DOB, phone number, bank account, credit card number

Macie: Security service which uses ML & NLP to discover, classify and protect sensitive data stored in S3.
i. Uses AI to recognise if S3 objects contain sensitive data such as PII
ii. Dashboards, reporting & alerts
iii. Works directly with data stored in S3
iv. Can also analyze CloudTrail logs
v. Great for PCI-DSS and preventing ID theft.

22. EC2
EC2 is a virtual machine in the cloud. Acts like a web server in the cloud. Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. EC2 reduces the time required to obtain and boot new server instances to minutes, allowing us to quickly scale capacity, both up and down, as the computing requirements change.

EC2 Pricing Model:

On Demand: Allows you to pay a fixed rate by the hour (or by the second) with no commitment.
Reserved: Provides a capacity reservation and offers a significant discount on the hourly charge for an instance. Contract terms are 1 year or 3 years.
Spot: Enables you to bid whatever price you want for instance capacity, providing even greater savings if your applications have flexible start and end times.
Dedicated Hosts: A physical EC2 server dedicated for your use. Dedicated Hosts can help to reduce costs by allowing you to use your existing server-bound software licenses.
On demand pricing is useful for:
i. Users that want the low cost and flexibility of EC2 w/o any up-front payment or long-term commitment.
ii. Applications with short term, spiky or unpredictable workloads that cannot be interrupted.
iii. Applications being developed or tested on EC2 for the first time.
Reserved pricing is useful for:
i. Applications with steady state or predictable usage
ii. Applications that require reserved capacity
iii. Users able to make upfront payments to reduce their total computing costs even further.
Reserved Pricing Types:
Standard Reserved Instances: Offer up to 75% off on-demand. The more we pay up front and the longer the contract, the greater the discount.
Convertible Reserved Instances: Offer up to 54% off on-demand, with the capability to change the attributes of the RI as long as the exchange results in the creation of reserved instances of equal or greater value.
Scheduled Reserved Instances: Available to launch within the time windows we reserve. This option allows us to match our capacity reservation to a predictable recurring schedule that only requires a fraction of a day, week or month.
Spot pricing is useful for:
i. Applications that have flexible start and end times.
ii. Applications that are only feasible at very low compute prices.
iii. Users with urgent computing needs for large amounts of additional capacity.
Dedicated hosts pricing is useful for:
i. Useful for regulatory requirements that may not support multi-tenant virtualization.
ii. Great for licensing which does not support multi-tenancy or cloud deployments.
iii. Can be purchased on-demand (hourly)
iv. Can be purchased as a reservation for up to 70% off the on-demand price.
If the spot instance is terminated by EC2, you will not be charged for a partial hour of usage. However, if you terminate the instance yourself, you will be charged for any hour in which the instance ran.
EC2 Instance Types: (not required for SAA)

Family Speciality Use Case
F1 Field programmable gate array Genomics research, financial analytics, real time video processing, big data etc
I3 High speed storage NoSQL DBs, data warehousing etc
G3 Graphics intensive Video encoding/ 3D Application streaming
H1 High disk throughput Map reduce based workloads, distributed file systems such as HDFS and MapR-FS
T3 Lowest cost, general purpose Web servers/ Small DBs
D2 Dense storage Fileservers/ data warehousing/ hadoop
R5 Memory optimized Memory intensive apps/ dbs
M5 General purpose Application servers
C5 Compute optimized CPU intensive apps/ dbs
P3 Graphics/ general purpose GPU Machine learning, Bitcoin mining etc
X1 Memory optimized SAP HANA/ Apache Spark etc
Z1D High compute capacity and a high memory footprint Ideal for electronic design automation (EDA) and certain relational DB workloads with high per-core licensing costs.
A1 Arm-based workloads Scale-out workloads such as web servers
U-6tb1 Bare metal Bare metal capabilities that eliminate virtualization overhead.
EC2 Instance Types – Mnemonic (FIGHT DR MC PXZ AU)
F – FPGA
I – IOPS = Input/Output Operations Per Second. Determines how fast the hard drive can read and write.
G – Graphics
H – High disk throughput
T – Cheap general purpose (think T2 micro)
D – Density
R – RAM
M – Main choice for general purpose apps
C – Compute
P – Graphics (think pics)
X – Extreme memory
Z – Extreme memory and CPU
A – Arm based workloads
U – Bare Metal

Shared responsibility model for EC2 storage

AWS responsibility: Infrastructure; replication of data for EBS volumes and EFS drives; replacing faulty hardware; ensuring AWS employees cannot access our data.
User responsibility: Setting up backup/ snapshot procedures; setting up data encryption; responsibility for any data on the drives; understanding the risk of using EC2 instance store.
EC2 Creation Steps (see the boto3 sketch after the steps):
Step 1: Choose an Amazon Machine Image (AMI) — Amazon Linux 2 AMI (HVM), SSD Volume Type
Step 2: Choose an Instance Type — t2.micro
Step 3: Configure Instance Details
Step 4: Add Storage — Root device volume = Virtual disk on the cloud. This is where operating system is going to be installed.
Step 5: Add Tags
Step 6: Configure Security Group — Security group is just a virtual firewall.
HTTP – Port 80
SSH – Port 22
Step 7: Review Instance Launch
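A boto3 sketch that mirrors the console steps above (AMI, instance type, storage, tags, security group); the AMI and security group IDs are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # Step 1: hypothetical Amazon Linux 2 AMI ID
    InstanceType='t2.micro',              # Step 2: instance type
    MinCount=1, MaxCount=1,
    BlockDeviceMappings=[{                # Step 4: Add Storage (root device volume)
        'DeviceName': '/dev/xvda',
        'Ebs': {'VolumeSize': 8, 'VolumeType': 'gp2'},
    }],
    TagSpecifications=[{                  # Step 5: Add Tags
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': 'web-server'}],
    }],
    SecurityGroupIds=['sg-0123456789abcdef0'],  # Step 6: Security Group (hypothetical ID)
)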

Recap:
i. Termination protection is turned off by default; we must turn it on.
ii. On an EBS backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.
iii. EBS root volumes of default AMIs can be encrypted. We can also use a third-party tool (such as BitLocker) to encrypt the root volume, or this can be done when creating AMIs in the AWS console or using the API.
iv. Additional volumes can also be encrypted.

Questions:
i. Select the stateless protocol from the following?
A. FTP
B. TCP
C. HTTP
D. SSH
Answer (C)

 

23. Security Groups
Security groups are virtual firewalls that control traffic to our EC2 instances.

Inbound rules:
Type: HTTP – Protocol: TCP – Port Range: 80 – Source: 0.0.0.0/0 (IPv4)
Type: HTTP – Protocol: TCP – Port Range: 80 – Source: ::/0 (IPv6)
Type: SSH – Protocol: TCP – Port Range: 22 – Source: 0.0.0.0/0 (IPv4)

Outbound rules:
Type: All traffic – Protocol: All – Port Range: All – Destination: 0.0.0.0/0 (IPv4)

Security groups only support allow rules, so we cannot use them to block (deny) individual IP addresses or individual ports. In a SG everything is blocked by default; we have to allow traffic explicitly (a boto3 sketch of these rules appears at the end of this section).

We can attach more than one SG to EC2 instance.

Commonly used ports are:
HTTP – Port 80
HTTPS – 443
FTP – 21
SFTP / SSH – 22
POP3 – 110
POP3 SSL – 995
IMAP – 143
IMAP SSL – 993
SQL Server – 1433
MySQL – 3306

Tips:
i. All inbound traffic is blocked by default
ii. All outbound traffic is allowed
iii. Changes to security group take effect immediately
iv. We can have any number of EC2 instances within a security group
v. We can have multiple SGs attached to EC2 instances.
vi. SGs are stateful
vii. If we create an inbound rule allowing traffic in, that traffic is automatically allowed back out again.
viii. We cannot block specific IP addresses using SGs, instead we use NACLs (Network Access Control Lists)
ix. We can specify allow rules, but not deny rules.
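A boto3 sketch of creating the security group and the inbound HTTP/SSH rules shown above (the VPC ID is a placeholder):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Hypothetical VPC ID; replace with your own.
sg = ec2.create_security_group(
    GroupName='web-sg',
    Description='Allow HTTP and SSH',
    VpcId='vpc-0123456789abcdef0',
)

# Allow inbound HTTP (80) and SSH (22) from anywhere (IPv4), as in the rules above.
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 80, 'ToPort': 80,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
        {'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
    ],
)
# Outbound traffic is allowed by default; no explicit egress rule is needed here.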

24. EBS
EBS = Elastic Block Store
i. One of the storage options for an EC2 instance.
ii. EBS provides persistent block storage volumes for use with EC2 instances in the AWS cloud. Each EBS volume is automatically replicated within its AZ.
iii. Block storage volume to use with EC2
iv. EBS volume is automatically replicated within its AZ to protect from component failure.
v. Offers high availability and durability
vi. Virtual hard disk in the cloud

EC2 Instance >> EBS >> Volumes, Snapshots and Life cycle manager

EBS Types

General Purpose SSD (gp2): Balances price & performance for a wide variety of transactional workloads. Use case: most workloads. API name: gp2. Volume size: 1 GB – 16 TB. Max IOPS/Volume: 16,000.
Provisioned IOPS SSD (io1): Highest-performance SSD volume designed for mission-critical applications; for fast inputs & outputs per second. Use case: databases. API name: io1. Volume size: 4 GB – 16 TB. Max IOPS/Volume: 64,000.
Throughput Optimized HDD (st1): Physical HDD (not SSD, it's magnetic). Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads. Use case: big data & data warehouses. API name: st1. Volume size: 500 GB – 16 TB. Max IOPS/Volume: 500.
Cold HDD (sc1): Magnetic HDD. Lowest-cost HDD volume designed for less frequently accessed workloads. Use case: file servers. API name: sc1. Volume size: 500 GB – 16 TB. Max IOPS/Volume: 250.
Magnetic (standard): Previous-generation HDD. Use case: workloads where data is infrequently accessed. API name: standard. Volume size: 1 GB – 1 TB. Max IOPS/Volume: 40-200.
i. The EBS volume will always be in the same AZ as the EC2 instance. Terminating the EC2 instance will delete the root EBS volume automatically. While creating an EC2 instance, Step 4: Add Storage is where we create EBS volumes. The default volume is the root volume with device /dev/xvda and EBS volume type General Purpose SSD (gp2). Delete on termination is checked automatically. The OS is installed on the root device volume.
ii. Increasing the size means increasing the HDD. We may also need to extend the OS file system on the volume to use any newly allocated space. We can change the volume type or size without shutting down or terminating the instance. Ideally the EBS volume and EC2 instance remain in the same AZ; however, we can move an EBS root device volume to another AZ via a snapshot.
iii. Snapshot = photograph of the disk. Create a snapshot of the EBS root device volume. We can create a volume or create an image from an EBS snapshot. In the example, we created an image and used that image to deploy into another AZ. We can see these images under Images >> AMIs & Bundle Tasks.
iv. Select AMI & click on launch >> rest steps follows as creation of EC2 instance. Starts from step 2: Choose an instance type. In step 3: configure instance details, choose different subnet [east-1a]
v. Terminating an EC2 instance will delete the root EBS volume, but any additional volumes attached to the EC2 instance will still be available. We can delete these additional volumes manually. We can also delete snapshots and deregister AMIs manually.
vi. Volumes exist on EBS. EBS is like a virtual HDD in the cloud. Snapshots exist on S3. Think of snapshots as a photograph of the disk. Snapshots are point-in-time copies of volumes. Snapshots are incremental: only the blocks that have changed since your last snapshot are moved to S3. The first snapshot will take time to create. To create a snapshot for EBS volumes that serve as root devices, the best practice is to stop the instance before taking the snapshot; however, we can take a snapshot while the instance is running. We can create AMIs from snapshots and volumes.
vii. We can change EBS volume sizes on the fly, including changing the storage type (see the sketch at the end of this list). Volumes will always be in the same AZ as the EC2 instance. To move an EC2 volume from one AZ to another, take a snapshot of it, create an AMI from the snapshot and then use the AMI to launch the EC2 instance in a new AZ. To move an EC2 volume from one region to another, take a snapshot of it, create an AMI from the snapshot, copy the AMI from one region to another, then use the AMI to launch the EC2 instance in the new region.
viii. Termination protection is turned off by default and we must turn it on explicitly. On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated. So if we terminate an EC2 instance, the root device volume will be deleted automatically. But if we attach additional volumes to the EC2 instance, those volumes won't be deleted automatically unless we check that checkbox.
ix. EBS root volumes of default AMIs can be encrypted. We can also use a third-party tool like BitLocker to encrypt the root volume, or this can be done when creating AMIs in the AWS console or using the API. Additional volumes can also be encrypted. Snapshots of encrypted volumes are encrypted automatically. Volumes restored from encrypted snapshots are encrypted automatically.
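A boto3 sketch of item vii (changing volume size/type on the fly); the volume ID is a placeholder:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Grow the volume to 200 GB and keep (or change) its type without detaching it.
ec2.modify_volume(VolumeId='vol-0123456789abcdef0', Size=200, VolumeType='gp2')

# Afterwards, extend the OS file system on the instance so it can use the new space.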

25. Volumes & Snapshots
Network drive attached to one EC2 instance at a time.
An EBS Volume is a network drive (not a physical drive) you can attach to EC2 instances while they run. This means communication between the EC2 instance and the EBS volume goes over the network, so there will be a bit of latency between one server and the other. Because EBS volumes are network drives, they can be detached from one EC2 instance and attached to another quickly.
It allows EC2 instances to persist (continue to exist) data, even after the instance is terminated.
The EBS volumes can be mounted to one instance at a time (at the CCP level).
The EBS volumes are bound/ linked/ tied to a specific AZ. An EBS volume in us-east-1a cannot be attached to us-east-1b. But if we take a snapshot, we are able to move an EBS volume across different availability zones.
We can think of EBS Volumes as a ‘Network USB Stick’: we don't physically move it from one computer to another, but it is attached through the network.
Free tier: 30GB of free EBS storage of type General Purpose (SSD) or Magnetic per month.
We can provision capacity in advance (size in GBs and IOPS). We get billed for all the provisioned capacity and we can increase the capacity of the drive over time, if we want to have a better performance or more size.

AZ 1 >> EC2 Instance 1 >> EBS Volume 1 attached
AZ 1 >> EC2 Instance 2 >> EBS Volume 2 and EBS Volume 3 are attached but EBS Volume 1 cannot be attached to EC2 Instance 2.
Similarly EBS Volume 1, 2 & 3 cannot be attached to another EC2 instance in AZ 2. However we can detach EBS Volume  from an EC2 instance and leave them unattached.

We have an option ‘Delete on Termination’ to delete EBS volume when we create an EC2 instance. By default root volume will be deleted (attribute enabled) and any other attached EBS volume will not be deleted (attribute disabled). Use case: Preserve root volume when the instance is terminated.

EBS Multi-Attach
While we say that EBS volumes cannot be attached to multiple instances, it is not true for io1 and io2 volume types: this is called the EBS Multi-Attach feature.

We can take a snapshot of an EBS Volume, which is also called a backup. Make a backup (snapshot) of an EBS volume at any point in time. Even if the EBS volume is terminated, we can later restore it from the backup.
It is not necessary to detach the volume to take a snapshot, but it is recommended, just to make sure everything is clean on the EBS volume.
We can copy snapshots across AZs or Regions.

AZ 1 >> EC2 Instance 1 >> EBS Volume 1 attached >> Stop (recommended) EC2 Instance 1 and create a snapshot >> Restore the snapshot as EBS Volume 2 in AZ 2 and attach it to an EC2 instance in AZ 2

Questions:
i. How is a volume selected (identified) when making an EBS Snapshot?
A. account id
B. volume id
C. tag
D. ARN
Answer (B)

ii. What two methods are recommended by AWS for protecting EBS data at rest?
A. replication
B. snapshots
C. encryption
D. VPN
Answer (B,C)

26. AMI Types (EBS vs Instance Store)
AMI = Amazon Machine Image
AMIs are ready-to-use EC2 instance templates with customizations. An AMI represents a customization of an EC2 instance.
Within custom AMI we can have our own software configuration, OS and monitoring tool…
Faster boot/ configuration time because all the software is pre-packaged through AMI.
AMIs are built for a specific region and can be copied across regions.
We can launch EC2 instances from:
i) Public AMI: AWS provided. Most popular is ‘Amazon Linux 2 AMI’
ii) Custom AMI: Created and maintained by the user
iii) AWS Marketplace: An AMI created and sold by someone else

AMI Process (from an EC2 instance)
i) Start an EC2 instance and customize it.
ii) Stop the instance for data integrity
iii) Build an AMI – This will also create EBS snapshots
iv) Launch instances from other AMIs

us-east-1a >> EC2 instance >> AMI >> Create custom AMI >> use this custom AMI in another EC2 instance in us-east-1b
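A boto3 sketch of the AMI process above (stop the instance, build the AMI, launch from it); the instance and AMI IDs are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Build an AMI from a (preferably stopped) instance; this also creates EBS snapshots.
image = ec2.create_image(
    InstanceId='i-0123456789abcdef0',
    Name='my-custom-ami',
    Description='Web server baseline',
    NoReboot=True,   # do not reboot the instance while imaging
)

# Launch a new instance from the custom AMI (same region; use copy_image for other regions).
ec2.run_instances(ImageId=image['ImageId'], InstanceType='t2.micro',
                  MinCount=1, MaxCount=1)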

EC2 Image Builder
Used to automate the creation of virtual machine or container images (AMIs). It automates the creation, maintenance, validation and testing of EC2 AMIs.
When EC2 Image Builder runs, it creates an EC2 instance called a ‘Builder EC2 Instance’. This instance builds components and applies customized s/w installs. A new AMI is then created from that instance. EC2 Image Builder automatically creates a ‘Test EC2 Instance’ from the newly created AMI and runs a bunch of tests that are defined in advance. We can skip the tests if we do not want to run them, but the tests validate whether the AMI is working properly and is secure, and whether the application runs correctly. Once the AMI is tested, it is distributed. Image Builder is a regional service, and it lets you distribute the AMI across regions.
Image Builder can run on a schedule (e.g. a weekly schedule, or whenever packages are updated) or it can be run manually, and it is a free service (you only pay for the underlying resources).

Instance Store
High performance hardware disk attached to EC2 instance.
EBS volumes are network drives with good but limited performance. If we want high performance, we can attach a hardware disk to the EC2 instance. An EC2 instance is a virtual machine, but it runs on a real hardware server, and some of those servers have disk space physically attached to the server. This gives better I/O performance and good throughput.
If we stop or terminate an EC2 instance that has an instance store, the storage is lost; it is called ephemeral storage. Use case: good for buffer/ cache/ scratch data/ temporary content, but not for long-term storage. For long-term storage, EBS is the better choice. If the underlying server of the EC2 instance fails, we risk data loss because the hardware attached to the instance also fails. So if we decide to use an instance store, it is our responsibility to maintain backups and replication.

Questions:
i. A company wants some EBS volumes with maximum possible Provisioned IOPS (PIOPS) to support high-performance database workloads on EC2 instances. The company also wants some EBS volumes that can be attached to multiple EC2 instances in the same Availability Zone. As an AWS Certified Solutions Architect Associate, which of the following options would you identify as correct for the given requirements? (Select two)
Answer: a. Use io2 Block express volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000
b. Use io1/ io2 volumes to enable Multi-Attach on Nitro-based EC2 instances
Explanation: EBS io2 Block Express is the next generation of Amazon EBS storage server architecture. It has been built for the purpose of meeting the performance requirements of the most demanding I/O intensive applications that run on Nitro-based Amazon EC2 instances. With io2 Block Express volumes, you can provision volumes with Provisioned IOPS (PIOPS) up to 256,000, with an IOPS:GiB ratio of 1,000:1.
Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone. You can attach multiple Multi-Attach enabled volumes to an instance or set of instances. Each instance to which the volume is attached has full read and write permission to the shared volume. Multi-Attach makes it easier for you to achieve higher application availability in clustered Linux applications that manage concurrent write operations.

ii. The solo founder at a tech startup has just created a brand new AWS account. The founder has provisioned an EC2 instance 1A which is running in region A. Later, he takes a snapshot of the instance 1A and then creates a new AMI in region A from this snapshot. This AMI is then copied into another region B. The founder provisions an instance 1B in region B using this new AMI in region B. At this point in time, what entities exist in region B?
Answer: 1 EC2 instance, 1 AMI and 1 snapshot exist in region B
Explanation: An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. When the new AMI is copied from region A into region B, it automatically creates a snapshot in region B because AMIs are based on the underlying snapshots. Further, an instance is created from this AMI in region B. Hence, we have 1 EC2 instance, 1 AMI and 1 snapshot in region B.

27. ENI vs ENA vs EFA

ENI – Elastic Network Interface:
A virtual network card on an EC2 instance. When we provision an EC2 instance it has an ENI attached to it automatically, and we can add additional ones.
An ENI allows: a primary private IPv4 address from the IPv4 address range of the VPC; one or more secondary private IPv4 addresses from the IPv4 address range of the VPC; one Elastic IP address (IPv4) per private IPv4 address; one public IPv4 address; one or more IPv6 addresses; one or more security groups; a MAC address; a source/ destination check flag; a description.

EN – Enhanced Networking:
Uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance & lower CPU utilization when compared to traditional virtualized network interfaces.
Provides higher bandwidth, higher packet per second (PPS) performance & consistently lower inter-instance latencies. There is no additional cost for using EN. Use it when we want good performance and an ENI alone doesn't deliver it.
Depending on instance type, enhanced networking can be enabled using:
Elastic Network Adapter (ENA): supports n/w speeds up to 100 Gbps for supported instance types.
Intel 82599 Virtual Function (VF): supports n/w speeds up to 10 Gbps for supported instance types; typically used on older instances.
Choose EN when we need speeds between 10 Gbps & 100 Gbps, or anywhere we need reliable, high throughput.

EFA – Elastic Fabric Adapter:
A network device that you can attach to an EC2 instance to accelerate HPC and machine learning applications.
Provides lower & more consistent latency & higher throughput than the traditional TCP transport used in cloud-based HPC systems.
EFA can use OS-bypass, which enables HPC & ML applications to bypass the OS kernel and communicate directly with the EFA device; this makes it a lot faster, with much lower latency. OS-bypass is not supported on Windows, currently only Linux.
Choose EFA when we need to accelerate HPC & ML applications or when we need to do an OS bypass.

Usage scenario:
We might have multiple ENIs if we want to create a management network
We need additional ENIs when:
i. We create a management n/w
ii. Use n/w & security appliances in VPC
iii. Create dual-homed instances with workloads/ roles on distinct subnets.
iv. Create a low budget, high availability solution.
For basic networking: perhaps we need a management n/w separate from the production n/w, or a separate logging n/w, and we need to do this at low cost. In this scenario, use multiple ENIs, one for each n/w.
28. Encrypted Root Device Volumes & Snapshots
i. A root device volume is basically just the hard disk that has the OS on it.
ii. When we first provision an EC2 instance, the EBS volume that has the OS on it is not encrypted by default.

provision EC2 instance with an unencrypted root device volume >> Snapshot [Create a snapshot of unencrypted root device volume] >> Copy of Snapshot [While copying we can encrypt root device volume] >> provision AMI from copied snapshot [create image] >> Launch EC2 instance as encrypted root device volume.
i. Create a snapshot of unencrypted root device volume.
ii. Create a copy of snapshot & select encrypt option
iii. Create an AMI from encrypted snapshot
iv. Use that AMI to launch new encrypted instances.
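A boto3 sketch of steps i–ii above (snapshot the unencrypted root volume, then copy the snapshot with encryption enabled); the volume ID is a placeholder:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Step i: snapshot the unencrypted root device volume.
snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0')
ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# Step ii: copy the snapshot and select the encrypt option on the copy.
encrypted = ec2.copy_snapshot(
    SourceRegion='us-east-1',
    SourceSnapshotId=snap['SnapshotId'],
    Encrypted=True,   # uses the default aws/ebs KMS key unless KmsKeyId is supplied
)
print(encrypted['SnapshotId'])
# Steps iii-iv (create an AMI from the encrypted snapshot and launch from it) can be
# done in the console or with register_image / run_instances.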

i. While creating an EC2 instance, in Step 4: Add Storage we find an encryption option, so we can also enable encryption at creation time.
ii. Snapshots of encrypted volumes are encrypted automatically
iii. Volumes restored from encrypted snapshots are encrypted automatically
iv. We can share snapshots only if they are unencrypted
v. These snapshots can be shared with other AWS accounts or made public but they have to be unencrypted
vi. We can encrypt root device volumes upon creation of EC2 instances.

29. Spot Instances & Spot Fleets
i. EC2 spot instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot instances are available at up to a 90% discount compared to on-demand prices. We can use spot instances for various stateless, fault-tolerant or flexible applications such as big data, containerized workloads, CI/CD, web services, HPC and other test & development workloads.
ii. To use spot instances, we must first decide on our max. spot price. The instance will be provisioned as long as the spot price is below the max. spot price.
iii. The hourly spot price varies depending on capacity & region
iv. If the spot price goes above our maximum, we have 2 minutes to choose whether to stop or terminate the instance.
v. Use a spot block to stop our spot instances from being terminated even if the spot price goes over our max. spot price; we can currently set spot blocks for between one and six hours.
vi. Spot instances are useful for the following tasks: Big data & analytics, containerized workloads, CI/CD & testing, web services, image & media rendering & HPC
vii. Spot instances are not good for: Persistent workloads, critical jobs & databases.

Spot Fleets:
i. A collection of spot instances and optionally on demand instances.
ii. Spot fleet attempts to launch the number of spot instances & on-demand instances to meet the target capacity we specified in the spot fleet request.
iii. The request for spot instances is fulfilled if there is available capacity and the max price we specified in the request exceeds the current spot price.
iv. The spot fleet also attempts to maintain its target capacity if your spot instances are interrupted.
v. Spot fleets will try & match the target capacity within price constraints:
a. Set up different launch pools. Define things like EC2 instance type, OS & AZ
b. We can have multiple pools & the fleet will choose the best way to implement depending on the strategy we define.
c. Spot fleet will stop launching instances once you reach your price threshold or capacity desires.

We can have following strategies with spot fleets:
i. Capacity optimized: The spot instances come from the pool with optimal capacity for the number of instances launching.
ii. Diversified: The spot instances are distributed across all pools
iii. Lowest price: The spot instances come from the pool with the lowest price. This is the default strategy.
iv. Instance pools to use count: The spot instances are distributed across the number of spot instance pools you specify. This parameter is valid only when used in combination with lowest price.

i. Spot instances save up to 90% of the cost of on-demand instances (a launch sketch follows this list)
ii. Useful for any type of computing where we don't need persistent storage
iii. We can block spot instances from terminating using spot block
iv. A spot fleet is a collection of spot instances, optionally on demand instances.
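A boto3 sketch of launching a single Spot instance with a maximum price (AMI ID and price are placeholders); a Spot Fleet request works similarly via request_spot_fleet:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# A normal run_instances call becomes a Spot request by adding InstanceMarketOptions.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='t3.micro',
    MinCount=1, MaxCount=1,
    InstanceMarketOptions={
        'MarketType': 'spot',
        'SpotOptions': {
            'MaxPrice': '0.005',                        # our max. spot price (USD/hour)
            'SpotInstanceType': 'one-time',
            'InstanceInterruptionBehavior': 'terminate',
        },
    },
)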

30. EC2 Hibernate
EBS behavior: We can stop & terminate EC2 instances. If we stop the instance, the data is kept on the EBS disk and will remain there until the EC2 instance is started again. If the instance is terminated, then by default the root device volume will also be deleted.

Following things happen when we start EC2 instance:
i. OS boots up. Load windows or linux
ii. Run user data script (bootstrap scripts)
iii. Start applications (can take sometime) like mysql

EC2 Hibernate: When we hibernate an EC2 instance, the OS is told to perform hibernation (suspend-to-disk). Hibernation saves the contents of the instance memory (RAM) to the EBS root volume. We persist the instance's root volume and any attached EBS data volumes.

Starting an EC2 instance with EC2 hibernate:
When we start an instance out of hibernation:
i. The EBS root volume is restored to its previous state
ii. The RAM contents are reloaded
iii. The processes that were previously running on the instance are resumed.
iv. Previously attached data volumes are reattached & the instance retains its instance ID, so it keeps the same instance ID just as it would if we stopped and restarted it.

With EC2 Hibernate, the instance boots much faster. The OS doesn't need to reboot because the in-memory state (RAM) is preserved. This is useful for i. long-running processes ii. services that take a lot of time to initialize.

i. If we are going to use hibernation, the root device volume must be encrypted (see the launch sketch after this list).
ii. EC2 hibernate preserves the in-memory RAM on persistent storage (EBS)
iii. Much faster to boot up because we do not need to reload the OS
iv. Instance RAM must be less than 150 GB
v. Instance families include C3, C4, C5, M3, M4, M5, R3, R4 & R5
vi. Available for windows, Amazon Linux 2 AMI & Ubuntu
vii. Instances can't be hibernated for more than 60 days
viii. Available for on-demand instances & reserved instances.
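A boto3 sketch of launching a hibernation-enabled instance (encrypted root volume, hibernation configured at launch) and later hibernating it; the AMI and instance IDs are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Hibernation must be enabled at launch and the root volume must be encrypted
# (and large enough to hold the instance RAM).
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='m5.large',
    MinCount=1, MaxCount=1,
    HibernationOptions={'Configured': True},
    BlockDeviceMappings=[{
        'DeviceName': '/dev/xvda',
        'Ebs': {'VolumeSize': 30, 'VolumeType': 'gp2', 'Encrypted': True},
    }],
)

# Later, hibernate instead of a normal stop:
# ec2.stop_instances(InstanceIds=['i-0123456789abcdef0'], Hibernate=True)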

31. Cloud Watch
i. Monitoring service to monitor AWS resources as well as applications that run on AWS {like a gym trainer who watches the performance}. Monitors performance.

Cloud Watch can monitor things like:

Compute: EC2 instances (CloudWatch monitors EC2 events every 5 minutes by default), Auto Scaling groups, ELB (Elastic Load Balancers), Route 53 health checks.
Storage & Content Delivery: EBS volumes, Storage Gateways, CloudFront.
We can have a one-minute interval by turning on detailed monitoring. We can create CloudWatch alarms which trigger notifications.

i. Monitor at a host level & host level metrics consist of CPU, n/w, disk, status check
ii. Cloud Trail {think of CC TV or auditing} increases visibility into user & resource activity by recording AWS management console actions and API calls. For example when we create a S3 bucket or an EC2 instance, we are making an API call to AWS and this is all recorded using Cloud Trail. We can identify which users & accounts called AWS, what is the source IP address from which these calls were made & when the calls were made.
iii. Cloud Watch monitors performance
iv. Cloud Trail monitors API calls in the AWS platform
v. We can see the monitoring option in Step 3: Configure Instance Details
vi. Standard monitoring = 5 min and detailed monitoring = 1 min

What we can do with Cloud Watch:
i. Dashboards: Creates awesome dashboards to see what is happening in AWS environment
ii. Alarms: Allows you to set alarms that notify you when particular thresholds are hit (see the sketch after this list).
iii. Events: Cloud Watch events helps you to respond to state changes in AWS resources
iv. Logs: CloudWatch Logs help to aggregate, monitor & store logs
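A boto3 sketch of an alarm on an EC2 metric (instance ID and SNS topic ARN are placeholders):

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Alarm when average CPU of one instance exceeds 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
)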

Questions:
i. Note: CloudWatch log data is stored indefinitely (unless a retention period is set), while alarm history is kept for 14 days and then deleted. CloudWatch supports ELB.

ii. How is CloudWatch integrated with Lambda? (Select two)
A. tenant must enable CloudWatch monitoring
B. network metrics such as latency are not monitored
C. Lambda functions are automatically monitored through Lambda service
D. log group is created for each event source
E. log group is created for each function
Answer (C,E)

iii. What two statements correctly describe AWS monitoring and audit operations?
A. CloudTrail captures API calls, stores them in an S3 bucket and generates
a Cloudwatch event
B. CloudWatch alarm can send a message to a Lambda function
C. CloudWatch alarm can send a message to an SNS Topic that triggers an
event for a Lambda function
D. CloudTrail captures all AWS events and stores them in a log file
E. VPC logs do not support events for security groups
Answer (A,C)

iv. What are two features of CloudWatch operation?
A. CloudWatch does not support custom metrics
B. CloudWatch permissions are granted per feature and not AWS resource
C. Collect and monitor operating system and application generated log files
D. AWS services automatically create logs for CloudWatch
E. CloudTrail generates logs automatically when AWS account is activated
Answer (B,C)

v. You are asked to select an AWS solution that will create a log entry anytime someone
takes a snapshot of an RDS database instance and deletes the original instance. Select
the AWS service that would provide that feature?
A. VPC Flow Logs
B. RDS Access Logs
C. CloudWatch
D. CloudTrail
Answer (D)

vi. What is required to enable application and operating system generated logs and publish to CloudWatch Logs?
A. Syslog
B. enable access logs
C. IAM cross-account enabled
D. CloudWatch Log Agent
Answer (D)

vii. What two statements correctly describe CloudWatch monitoring of database
instances?
A. Metrics are sent automatically from DynamoDB and RDS to CloudWatch
B. alarms must be configured for DynamoDB and RDS within CloudWatch
C. metrics are not enabled automatically for DynamoDB and RDS
D. RDS does not support monitoring of operating system metrics
Answer (A,B)

viii. What Amazon AWS service provides account transaction monitoring and security audit?
A. CloudFront
B. CloudTrail
C. CloudWatch
D. security group
Answer (B)

ix. What AWS service is used to monitor tenant remote access and various security errors including authentication retries?
A. SSH
B. Telnet
C. CloudFront
D. CloudWatch
Answer (D)

x. What feature enables CloudWatch to manage capacity dynamically for EC2
instances?
A. replication lag
B. Auto-Scaling
C. Elastic Load Balancer
D. vertical scaling
Answer (B)

xi. Select two cloud infrastructure services and/or components included with default CloudWatch monitoring?
A. SQS queues
B. operating system metrics
C. hypervisor metrics
D. virtual appliances
E. application level metrics
Answer (A,C)

32. AWS Command Line
i. Control multiple AWS services from the command line and automate them through scripts.
ii. The CLI lets you interact with AWS from anywhere by simply using a command line
iii. Following actions can be performed using CLI:
a. List buckets, upload data to S3
b. Launch, stop, start and terminate EC2 instances
c. Update security groups, create subnets
iv. Important CLI flags: easily switch between AWS accounts using --profile; change the --output between JSON, table & text (see the sketch after this list)
v. CLI is installed using a Python script.
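The same actions can also be scripted with the SDK; a boto3 sketch using a named profile (the SDK equivalent of --profile), with a hypothetical profile name and instance ID:

import boto3

# Use a named profile from ~/.aws/credentials, like the CLI's --profile flag.
session = boto3.Session(profile_name='dev')

# List buckets (like `aws s3 ls`).
s3 = session.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])

# Stop an EC2 instance (like `aws ec2 stop-instances`).
ec2 = session.client('ec2', region_name='us-east-1')
ec2.stop_instances(InstanceIds=['i-0123456789abcdef0'])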

33. IAM Roles with EC2

34. Boot Strap Scripts

35. EC2 Instance Meta Data

36. EFS
EFS = Elastic File System
One of the storage options for an EC2 instance. EFS is a ‘Managed Network File System’ and can be mounted on 100s of EC2 instances at a time. An EBS volume is attached to only one EC2 instance, but EFS can be mounted by up to 100s of EC2 instances. EFS works only with Linux EC2 instances and works across multiple AZs. EFS is highly available, scalable, expensive (3x gp2), pay per use, and requires no capacity planning.

Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access using AWS Direct Connect or AWS VPN.

AZ >> Security Group >> EFS

EBS: AZ >> EC2 Instance >> EBS Volume. EBS Volumes are bound to a specific AZ. Create a snapshot and restore it to move an EBS Volume from one AZ to another.
EFS: Many EC2 instances in AZ1 and many instances in AZ2 can all mount the same file system at the same time via mount targets.
i. File storage service for EC2 instances.
ii. EFS is easy to use & provides a simple interface that allows to create & configure file systems quickly and easily.
iii. EFS storage capacity is elastic. It grows and shrinks automatically when we add or remove files, so the applications have the storage they need, when they need it.
iv. EFS supports NFS v4 protocol
v. With EFS we only pay for the storage we use (no pre-provisioning required)
vi. Can scale upto the petabytes
vii. Can support thousands of concurrent NFS connections
viii. Data is stored across multiple AZs within a region
ix. Read after write consistency

Questions:
i. The sourcing team at the US headquarters of a global e-commerce company is preparing a spreadsheet of the new product catalog. The spreadsheet is saved on an EFS file system created in us-east-1 region. The sourcing team counterparts from other AWS regions such as Asia Pacific and Europe also want to collaborate on this spreadsheet. As a solutions architect, what is your recommendation to enable this collaboration with the LEAST amount of operational overhead?
Answer: The spreadsheet on the EFS file system can be accessed from EC2 instances running in other AWS regions by using an inter-region VPC peering connection. Copying the spreadsheet into S3 or RDS database is not the correct solution as it involves a lot of operational overhead. For RDS, one would need to write custom code to replicate the spreadsheet functionality. S3 does not allow in-place edit of an object. Additionally, it’s also not POSIX compliant. So one would need to develop a custom application to “simulate in-place edits” to support collaboration as per the use-case. Creating copies of the spreadsheet into EFS file systems of other AWS regions would mean no collaboration would be possible between the teams. In this case, each team would work on “its own file” instead of a single file accessed and updated by all teams.

ii. A company has moved its business critical data to Amazon EFS file system which will be accessed by multiple EC2 instances. As an AWS Certified Solutions Architect Associate, which of the following would you recommend to exercise access control such that only the permitted EC2 instances can read from the EFS file system? (Select three)
Answer: a. Use VPC security group to control the n/w traffic to and from your file system
b. Use EFS access points to manage application access
c. Attach an IAM policy to your file system to control clients who can mount your file system with the required permissions.

37. FSX for Windows & FSX for Lustre
i. FSx for Windows File Server provides a fully managed native Windows file system so we can easily move Windows-based applications that require file storage to AWS. Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. FSx for Windows does not allow you to present S3 objects as files and does not allow you to write changed data back to S3. Therefore you cannot reference the “cold data” with quick access for reads and updates at low cost.
ii. FSx is built on windows server.
iii. Difference between windows FSx and EFS

Windows FSx: A managed Windows Server that runs Windows Server Message Block (SMB)-based file services. Designed for Windows and Windows applications. Supports AD users, ACLs, groups & security policies, DFS namespaces & replication.
EFS: A managed NAS filer for EC2 instances based on NFSv4, one of the first network file sharing protocols native to Unix & Linux.
iv. FSx for lustre is a fully managed file system that is optimized for compute intensive workloads, such as high performance computing, machine learning, media data processing workflows & electronic design automation (EDA)
v. With FSx, we can launch and run a Lustre file system that can process massive data sets at up to 100s of GB/sec of throughput, millions of IOPS and sub-millisecond latencies.
vi. Difference between Lustre FSx and EFS

Lustre FSx: Designed specifically for fast processing of workloads such as ML, HPC, video processing, financial modelling and EDA. Lets us launch & run a file system that provides sub-millisecond access to our data and allows reads and writes at speeds of up to hundreds of GB/sec of throughput & millions of IOPS.
EFS: A managed NAS filer for EC2 instances based on NFSv4, one of the first network file sharing protocols native to Unix & Linux.
vii. EFS: When we need distributed, highly resilient storage for linux instances and linux based applications
viii. FSx for Windows: When we need centralized storage for windows based applications such as sharepoint, Microsoft SQL Server, workspaces, IIS web server or any other native Microsoft application, when we need SMB (server message block) storage.
ix. FSx for Lustre: When we need high speed, high capacity distributed storage. For high speed application like big data, HPC, ML. FSx for Lustre can store data directly on S3.

Questions:
i. What is required for remote management access to your Linux-based instance?
A. ACL
B. Telnet
C. SSH
D. RDP
Answer (C)

ii. How does Amazon AWS isolate metrics from different applications for monitoring, store and reporting purposes?
A. EC2 instances
B. Beanstalk
C. CloudTrail
D. namespaces
E. Docker
Answer (D)

iii. An Electronic Design Automation (EDA) application produces massive volumes of data that can be divided into two categories. The ‘hot data’ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data’ needs to be kept for reference with quick access for reads and updates at a low cost. Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?
Answer: Amazon FSx for Lustre
Explanation: Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute. FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3. FSx for Lustre provides the ability to both process the ‘hot data’ in a parallel and distributed fashion as well as easily store the ‘cold data’ on Amazon S3

iv. A large financial institution operates an on-premises data center with hundreds of PB of data managed on Microsoft’s Distributed File System (DFS). The CTO wants the organization to transition into a hybrid cloud environment and run data-intensive analytics workloads that support DFS. Which of the following AWS services can facilitate the migration of these workloads?
Answer: Amazon FSx for Windows File Server
Explanation: Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a single folder structure up to hundreds of PB in size.

38. EC2 Placement Groups
Three types of placement groups

Clustered Placement Group: A grouping of instances within a single AZ, putting EC2 instances very close to each other. CPGs are recommended for applications that need low network latency, high network throughput or both. Only certain instances can be launched into a CPG.
Spread Placement Group: A group of instances that are each placed on distinct underlying hardware (separate racks, separate n/w inputs, separate power requirements), so if one rack fails it only affects that one EC2 instance. Think of individual critical EC2 instances that need to be on separate pieces of hardware; the opposite of a CPG. SPGs are recommended for applications that have a small number of critical instances that should be kept separate from each other.
Partitioned Placement Group: When using a PPG, EC2 divides each group into logical segments called partitions. EC2 ensures that each partition within a placement group has its own set of racks; each rack has its own n/w & power source. No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within the application. Involves multiple EC2 instances. Use cases: HDFS, HBase, Cassandra.
Cluster placement groups pack instances close together inside an Availability Zone. These are recommended for applications that benefit from low network latency, high network throughput, or both.
i. A CPG can't span across multiple AZs.
ii. A Spread & Partitioned placement group can span across multiple AZs but they have to be within the same region.
iii. The name we specify for a placement group must be unique within AWS account
iv. Only certain types of instances can be launched in a placement group (compute optimized, CPU, memory optimized & storage optimized)
v. AWS recommend homogenous instances within clustered placement groups
vi. We can't merge placement groups
vii. We can move an existing instance into a placement group. Before you move the instance, it must be in the stopped state. We can move or remove an instance using the AWS CLI or an AWS SDK; we can't do it via the console yet (see the sketch below).
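A boto3 sketch of creating a cluster placement group and launching an instance into it (the AMI ID is a placeholder):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Strategy can be 'cluster', 'spread' or 'partition'.
ec2.create_placement_group(GroupName='hpc-cluster', Strategy='cluster')

# Launch a supported instance type into the placement group.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='c5.large',
    MinCount=1, MaxCount=1,
    Placement={'GroupName': 'hpc-cluster'},
)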

39. HPC
HPC = High Performance Compute
Different services we can use to achieve HPC are:
i. Data Transfer
ii.  Compute & Networking
iii. Storage
iv. Orchestration & Automation

Ways to get our data into AWS (Data Transfer):
i. Snowball, snowmobile (terabytes/ petabytes worth of data) (large amounts of data)
ii. AWS DataSync to store on S3, EFS, FSx for Windows etc
iii. Direct Connect: A cloud service solution that makes it easy to establish a dedicated n/w connection from your premises to AWS. We can establish private connectivity between AWS and your data center, office or co-location environment. In many cases we can reduce n/w costs, increase bandwidth throughput & provide a more consistent n/w experience than internet based connections.

Compute & networking services that allow to achieve HPC:

Compute services: EC2 instances that are GPU or CPU optimized; EC2 fleets (spot instances or spot fleets); placement groups (cluster placement groups).
Networking services: Enhanced networking (EN); Elastic Network Adapters (ENA); Elastic Fabric Adapters (EFA).
Storage services that allow to achieve HPC:
Instance attached storage:
i. EBS: Scale up to 64000 IOPS with provisioned IOPS (PIOPS)
ii. Instance store: Scale up to millions of IOPS with low latency
Network storage:
i. S3: Distributed object based storage. Not a file system
ii. EFS: File system. Scale IOPS based on total size, or use PIOPS
iii. FSx for lustre: HPC optimized distributed file system, millions of IOPS, also backed by S3.

Orchestration & automation services that allow to achieve HPC:
AWS Batch:
i. Enables developers, scientists & engineers to easily & efficiently run hundreds of thousands of batch computing jobs on AWS.
ii. AWS batch supports multinode parallel jobs, which allows us to run a single job that spans multiple EC2 instances.
iii. We can easily schedule jobs & launch EC2 instances according to needs.
AWS Parallel cluster:
i. Open source cluster management tool that makes it easy for you to deploy & manage HPC clusters on AWS.
ii. Parallel cluster uses a simple text file to model & provision all the resources needed for your HPC application in an automated & secure manner
iii. Automate creation of VPC, subnet, cluster type & instance type

Questions:
i. An ivy-league university is assisting NASA to find potential landing sites for exploration vehicles of unmanned missions to our neighboring planets. The university uses High Performance Computing (HPC) driven application architecture to identify these landing sites. Which of the following EC2 instance topologies should this application be deployed on?
Answer: The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low n/w latency and high n/w throughput

40. WAF
WAF = Web Application Firewall
i. WAF allows you to monitor HTTP & HTTPS requests that are forwarded to CloudFront, ALB or API Gateway.
ii. Lets you to control access to your content.
iii. HTTP & HTTPS happens at application layer (layer 7)
iv. WAF is layer 7 web firewall
v. We can configure conditions such as what IP addresses are allowed to make this request or what query string parameters need to be passed for the request to be allowed.
vi. Then the ALB, CloudFront or API Gateway will either allow the content to be served or return an HTTP 403 status code.

AWS WAF is a web application firewall service that lets you monitor web requests and protect your web applications from malicious requests. Use AWS WAF to block or allow requests based on conditions that you specify, such as the IP addresses. You can also use AWS WAF preconfigured protections to block common attacks like SQL injection or cross-site scripting.

WAF allows three different behaviours:
i. Allow all requests except the ones you specify
ii. Block all requests except the ones you specify
iii. Count the requests that match the properties you specify

Extra protection against web attacks using conditions you specify. You can define conditions by using characteristics of web requests such as:
i. IP addresses that requests originate from
ii. Country that requests originate from
iii. Values in request headers
iv. Strings that appear in requests, either specific strings or strings that match regular expression patterns
v. Length of requests
vi. Presence of SQL code that is likely to be malicious (known as SQL injection)
vii. Presence of script that is likely to be malicious (known as cross site scripting)

Questions:
i. A media company runs a photo-sharing web application that is accessed across three different countries. The application is deployed on several Amazon EC2 instances running behind an Application Load Balancer. With new government regulations, the company has been asked to block access from two countries and allow access only from the home country of the company. Which configuration should be used to meet this changed requirement?
Answer: Configure AWS WAF on the ALB in a VPC
Explanation: You can use AWS WAF with your Application Load Balancer to allow or block requests based on the rules in a web access control list (web ACL). Geographic (Geo) Match Conditions in AWS WAF allows you to use AWS WAF to restrict application access based on the geographic location of your viewers. With geo match conditions you can choose the countries from which AWS WAF should allow access.
Geo match conditions are important for many customers. For example, legal and licensing requirements restrict some customers from delivering their applications outside certain countries. These customers can configure a whitelist that allows only viewers in those countries. Other customers need to prevent the downloading of their encrypted software by users in certain countries. These customers can configure a blacklist so that end-users from those countries are blocked from downloading their software.

41. Databases
Relational Database Service (RDS) is not serverless.
We have 6 relational database engines available on RDS (used for OLTP):
i) Microsoft SQL Server
ii) Oracle
iii) MySQL Server
iv) PostgreSQL
v) Amazon Aurora
vi) Maria DB

Relational Database Service (RDS) has two key features:
i) Multi AZ – For disaster recovery
ii) Read Replicas – For performance

Non-relational databases consist of:
i) Collection: A collection is just a table.
ii) Document: Inside a collection we have documents. A document is simply a row.
iii) Key-Value pairs: These are basically the fields/columns.

Sample JSON/NoSQL/non-relational DB document:

{
 "_id" : "123abcDEF",
 "firstname" : "John",
 "lastname" : "Smith"
}

Data Warehousing:
Used for Business Intelligence (BI). Tools like Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, SAP NetWeaver etc. Used to pull in very large and complex data sets. Usually used by management to run queries on data (such as current performance vs targets etc).

Online Transaction Processing (OLTP):
OLTP differs from Online Analytics Processing (OLAP) in terms of types of queries we will run.
OLTP Example: Pulls up a row of data such as name, date, ship to, deliver to, phone number etc.
OLAP Example: Net profit for EMEA and Asia Pacific for the digital radio product. Pulls in a large number of records.
Data warehousing databases use a different type of architecture, both from a DB perspective and at the infrastructure layer.
Amazon's data warehouse solution is called Redshift. Redshift is used for OLAP.
DynamoDB is Amazon's NoSQL solution.

Read Replica:
Read Replicas allow you to have a read-only copy of your production DB.

Questions:
i. What two configuration features are required to create a private database
instance?
A. security group
B. network ACL
C. CloudWatch
D. Elastic IP (EIP)
E. Nondefault VPC
F. DNS
Answer (A,F)

42. Create an RDS Instance
RDS runs on virtual machines, but we cannot log in to the underlying OS. Patching of the RDS OS and DB is Amazon's responsibility. RDS is not serverless; Aurora Serverless is the serverless option.
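
A minimal boto3 sketch of creating an RDS instance with the options discussed in the next section (Multi-AZ, encryption at rest, automated backups). All identifiers and values here are illustrative.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # hypothetical instance name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",      # use Secrets Manager in a real setup
    MultiAZ=True,                           # synchronous standby in another AZ (DR)
    StorageEncrypted=True,                  # encryption at rest via KMS
    BackupRetentionPeriod=7,                # automated backups, 1-35 days
)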

Questions:
i. What two fault tolerant features does Amazon RDS support?
A. copy snapshot to a different region
B. create read replica to a different region
C. copy unencrypted read-replica only
D. copy read/write replica and snapshot
Answer (A,B)

ii. What managed services are included with Amazon RDS? (select four)
A. assign network capacity to database instances
B. install database software
C. perform regular backups
D. data replication across multiple availability zones
E. data replication across single availability zone only
F. configure database
G. performance tuning
Answer (A,B,C,D)

iii. What features are supported with Amazon RDS? (Select three)
A. horizontal scaling with multiple read replicas
B. elastic load balancing RDS read replicas
C. replicate read replicas cross-region
D. automatic failover to master database instance
E. application load balancer (ALB)
Answer (A,C,E)

iv. What are three advantages of standby replica in a Multi-AZ RDS deployment?
A. fault tolerance
B. eliminate I/O freezes
C. horizontal scaling
D. vertical scaling
E. data redundancy
Answer (A,B,E)

v. What does RDS use for database and log storage?
A. EBS
B. S3
C. instance store
D. local store
E. SSD
Answer (A)

vi. Select two features available with Amazon RDS for MySQL?
A. Auto-Scaling
B. read requests to standby replicas
C. real-time database replication
D. active read requests only
Answer (B,C)

vii. What are two characteristics of Amazon RDS?
A. database managed service
B. NoSQL queries
C. native load balancer
D. database write replicas
E. automatic failover of read replica
Answer (A,C)

viii. What is the maximum volume size of a MySQL RDS database?
A. 6 TB
B. 3 TB
C. 16 TB
D. unlimited
Answer (C)

43. RDS Backups, Multi-AZ & Read Replicas
Two different types of backups for RDS:

Automated Backups:
i) Allow you to recover the DB to any point in time within a retention period.
ii) The retention period can be between 1 and 35 days.
iii) Automated backups take a full daily snapshot and also store transaction logs throughout the day.
iv) When we do a recovery, AWS first chooses the most recent daily backup and then applies the transaction logs relevant to that day. This allows point-in-time recovery down to a second, within the retention period.
v) Automated backups are enabled by default.
vi) The backup data is stored in S3 and we get free storage space equal to the size of the DB. So if we have a 10 GB RDS instance, we get 10 GB worth of backup storage free.
vii) Backups are taken within a defined window. During the backup window, storage I/O may be suspended and you may experience elevated latency.

Database Snapshots:
i) Done manually (i.e. they are user initiated).
ii) Stored even after we delete the original RDS instance, unlike automated backups.
Restoring Backups:
Whenever we restore either an automated backup or a DB snapshot, the restored version of the DB will be a new RDS instance with a new DNS endpoint.

Encryption at Rest:
Encryption at Rest is supported for MySQL, Oracle, SQL Server, PostgreSQL, MariaDB & Aurora. Encryption is done using AWS KMS. Once the RDS instance is encrypted, the data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas and snapshots. In other words, as soon as you turn encryption on, anything derived from that instance, such as backups, read replicas and snapshots, is encrypted as well.

Multi-AZ:
Allows you to have an exact copy of your production database in another AZ, kept up to date through synchronous replication. AWS handles the replication, so when the DB is written to, the write is automatically synchronized to the standby DB.
In the event of planned DB maintenance, a DB instance failure or an AZ failure, RDS will automatically fail over to the standby so that DB operations can resume quickly without administrative intervention.
Multi-AZ is for disaster recovery only. It is not primarily used for improving performance. For performance improvement, we need Read Replicas.
Multi AZ is available for following DBs:
i) Microsoft SQL Server
ii) Oracle
iii) MySQL
iv) PostgreSQL
v) Maria DB
Multi AZs are used for DR. We can force a failover from one AZ to another by rebooting the RDS instance.
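
A small boto3 sketch of forcing that failover by rebooting with failover, as described above. The instance identifier is illustrative.

import boto3

rds = boto3.client("rds")

# Force a Multi-AZ failover to the standby by rebooting with failover.
rds.reboot_db_instance(
    DBInstanceIdentifier="app-db",   # hypothetical Multi-AZ instance
    ForceFailover=True,              # only valid for Multi-AZ instances
)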

Read Replicas:
Read Replicas allow you to have a read-only copy of your production DB. This is achieved by using asynchronous replication from the primary RDS instance to the read replica. We use read replicas primarily for read-heavy DB workloads.
We can improve the performance of a DB by using Read Replicas and ElastiCache.
Read Replicas are available for following DBs:
i. MySQL Server
ii. Oracle
iii. PostgreSQL
iv. Maria DB
v. Aurora
Used for scaling, not for DR (Disaster Recovery).
Must have automated backups turned on in order to deploy a Read Replica.
We can have up to 5 read replica copies of any DB.
We can have read replicas of read replicas (but watch out for replication latency).
Each read replica has its own DNS endpoint.
We can have read replicas that are Multi-AZ.
We can create read replicas of Multi-AZ source DBs.
Read replicas can be promoted to be their own DBs. This breaks replication.
We can have a read replica in a second region.
Read Replicas:
i. Can be multi-AZ
ii. Used to increase performance
iii. Must have backups turned on
iv. Can be in different regions
v. Can be promoted to master; this breaks replication
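
A boto3 sketch of creating a cross-region read replica, tying the points above together (automated backups must already be enabled on the source). The identifiers and source ARN are illustrative; for a cross-region replica the source is referenced by its ARN.

import boto3

# Client in the region where the replica will live.
rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-eu",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:app-db",
    DBInstanceClass="db.t3.micro",
)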

Questions:
i. What are the first two conditions used by Amazon AWS default
termination policy for Multi-AZ architecture?
A. unprotected instance with oldest launch configuration
B. Availability Zone (AZ) with the most instances
C. at least one instance that is not protected from scale in
D. unprotected instance closest to the next billing hour
E. random selection of any unprotected instance
Answer (B,C)

ii. What storage type is recommended for an online transaction processing (OLTP) application deployed to Multi-AZ RDS with significant workloads?
A. General Purpose SSD
B. Magnetic
C. EBS volumes
D. Provisioned IOPS
Answer (D)

44. Dynamo DB
DynamoDB is Amazon's NoSQL DB solution, the counterpart to RDS. DynamoDB is a fast and flexible NoSQL DB service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed DB and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT and many other applications.
The basics of dynamo are as follows:
i. It is stored on SSD storage, which is why it is so fast.
ii. It is spread across 3 geographically distinct data centers.
iii. Eventually consistent reads (default)
iv. Strongly consistent reads

Eventually Consistent Reads:
Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data (best read performance). If the application can tolerate not seeing data that was written within the last second, use eventually consistent reads.

Strongly Consistent Reads:
A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read. If the application needs to read an update within a second or less of it being written, use strongly consistent reads.
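
A small boto3 sketch of the same GetItem issued first with the default eventually consistent read and then as a strongly consistent read. The table and key names are illustrative.

import boto3

table = boto3.resource("dynamodb").Table("orders")

eventually_consistent = table.get_item(Key={"order_id": "123"})
strongly_consistent = table.get_item(Key={"order_id": "123"}, ConsistentRead=True)
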
Questions:
i. What encryption support is available for tenants that are deploying AWS
DynamoDB?
A. server-side encryption
B. client-side encryption
C. client-side and server-side encryption
D. encryption not supported
E. block level encryption
Answer (B)

ii. What are two primary advantages of DynamoDB?
A. SQL support
B. managed service
C. performance
D. CloudFront integration
Answer (B,C)

iii. What consistency model is the default used by DynamoDB?
A. strongly consistent
B. eventually consistent
C. no default model
D. casual consistency
E. sequential consistency
Answer (B)

iv. What are three primary characteristics of DynamoDB?
A. less scalable than RDS
B. static content
C. store metadata for S3 objects
D. replication to three Availability Zones
E. high read/write throughput
Answer (C,D,E)

v. What are three advantages of using DynamoDB over S3 for
storing IoT sensor data where there are 100,000 datapoint samples sent per
minute?
A. S3 must create a single file for each event
B. IoT can write data directly to DynamoDB
C. DynamoDB provides fast read/writes to a structured table for queries
D. DynamoDB is designed for frequent access and fast lookup of small
records
E. S3 is designed for frequent access and fast lookup of smaller records
F. IoT can write data directly to S3
Answer (B,C,D)

vi. What happens when read or write requests exceed capacity units (throughput
capacity) for a DynamoDB table or index? (Select two)
A. DynamoDB automatically increases read/write units
B. DynamoDB can throttle requests so that requests are not exceeded
C. HTTP 400 code is returned (Bad Request)
D. HTTP 500 code is returned (Server Error)
E. DynamoDB automatically increases read/write units if provisioned
throughput is enabled
Answer (B,C)

vii. What read consistency method provides lower latency for GetItem requests?
A. strongly persistent
B. eventually consistent
C. strongly consistent
D. write consistent
Answer (B)

viii. You must specify strongly consistent read and write capacity for your
DynamoDB database. You have determined read capacity of 128 Kbps and write
capacity of 25 Kbps is required for your application. What is the read and write
capacity units required for DynamoDB table?
A. 32 read units, 25 write units
B. 1 read unit, 1 write unit
C. 16 read units, 2.5 write units
D. 64 read units, 10 write units
Answer (A)
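
The arithmetic behind answer A, assuming the question means 128 KB of strongly consistent reads per second and 25 KB of writes per second: one read capacity unit covers one strongly consistent read of up to 4 KB per second, and one write capacity unit covers one write of up to 1 KB per second.

read_kb_per_sec = 128
write_kb_per_sec = 25

rcu = read_kb_per_sec / 4   # 1 RCU = one strongly consistent 4 KB read/sec -> 32
wcu = write_kb_per_sec / 1  # 1 WCU = one 1 KB write/sec                    -> 25
print(rcu, wcu)             # 32.0 25.0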

ix. What DynamoDB capacity management technique is based on the tenant
specifying an upper and lower range for read/write capacity units?
A. demand
B. provisioned throughput
C. reserved capacity
D. auto scaling
E. general purpose
Answer (D)

x. What is the maximum size of a DynamoDB record (item)?
A. 400 KB
B. 64 KB
C. 1 KB
D. 10 KB
Answer (A)

45. Advanced Dynamo DB
Dynamo DB Accelerator (DAX):
i. This is fully managed, highly available, in-memory cache
ii. Gives 10 times performance improvement
iii. Also reduces request time from milliseconds to microseconds, even under load
iv) No need of developers to manage caching logic
v) Compatible with dynamo DB API calls

Transactions:
i) Multiple ‘all or nothing’ operations
ii) Financial transactions
iii) Fulfilling orders
iv) Two underlying reads or writes – prepare/ commit
v) Up to 25 items or 4 MB of data
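
A boto3 sketch of an "all or nothing" write across two tables using a DynamoDB transaction, for example placing an order while decrementing stock. Table names, keys and attribute values are illustrative.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.transact_write_items(
    TransactItems=[
        {"Put": {"TableName": "orders",
                 "Item": {"order_id": {"S": "123"}, "status": {"S": "PLACED"}}}},
        {"Update": {"TableName": "inventory",
                    "Key": {"sku": {"S": "ABC-1"}},
                    "UpdateExpression": "SET stock = stock - :one",
                    "ConditionExpression": "stock >= :one",   # fail the whole transaction if out of stock
                    "ExpressionAttributeValues": {":one": {"N": "1"}}}},
    ]
)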

On-demand capacity:
i) Pay-per-request pricing
ii) Balance cost & performance
iii) No minimum capacity
iv) No charge for read/ write – only storage & backups
v) Pay more per request than with provisioned capacity
vi) Use for new product launches

On-demand backup & restore:
i) Full backups at any time
ii) Zero impact on table performance or availability
iii) Consistent within seconds & retained until deleted
iv) Operates within same region as the source table

Point-in-time recovery (PITR):
i) Complements on-demand backup & restore
ii) Protects against accidental writes or deletes
iii) Restore data to any point in the last 35 days
iv) Maintains incremental backups
v) PITR is not enabled by default. We have to turn it on manually
vi) Latest restorable timestamp is typically five minutes in the past.

Streams:
i) Streams are a time-ordered sequence of item-level changes in a DynamoDB table.
ii) Stream records appear in the same sequence as the actual modifications to the items.
iii) Information is stored for a period of 24 hours.
iv) This provides a stream of inserts, updates & deletes to your DynamoDB table items.
v) Structure of a Stream: A stream consists of stream records. Each stream record represents a single data modification in the DynamoDB table to which the stream belongs. Each stream record is assigned a sequence number, reflecting the order in which the record was published to the stream, and stream records are organized into groups, or shards.
vi) A Shard acts as a container for multiple stream records and the Shard contains info required for accessing & iterating through these records.
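
A boto3 sketch of enabling a stream on an existing table; the table name and view type are illustrative choices.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="orders",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # record both the old and new item images
    },
)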

Questions:
i. An IT company wants to review its security best-practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature. Which is the MOST effective way to address this issue so that such incidents do not recur?
Answer: Use permissions boundary to control the maximum permissions employees can grant to the IAM principals.
Explanation: As an IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined.

ii. A retail company has developed a REST API which is deployed in an Auto Scaling group behind an Application Load Balancer. The API stores the user data in DynamoDB and any static content, such as images, are served via S3. On analyzing the usage trends, it is found that 90% of the read requests are for commonly accessed data across all users. As a Solutions Architect, which of the following would you suggest as the MOST efficient solution to improve the application performance?
Answer: Enable DynamoDB Accelerator (DAX) for Dynamo DB and CloudFront for S3

iii. A company uses DynamoDB as a data store for various kinds of customer data, such as user profiles, user events, clicks, and visited links. Some of these use-cases require a high request rate (millions of requests per second), low predictable latency, and reliability. The company now wants to add a caching layer to support high read volumes. As a solutions architect, which of the following AWS services would you recommend as a caching layer for this use-case? (Select two)
Answer: a. DynamoDB Accelerator (DAX)
b. ElastiCache

46. Redshift
i. Way of doing BI or data warehousing in the cloud.
ii. Fast & powerful, fully managed, petabyte scale data warehouse service in the cloud.
iii. Customers can start small for $0.25 (25 cents) per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, which is less than one-tenth the cost of most other data warehousing solutions.
iv) Data warehousing DBs use different type of architecture, both from a DB perspective & infrastructure layer.
v) Amazons data warehouse solution is called Redshift
vi) Redshift can be configured as follows:
a) Single node: 160 GB
b) Multi node
bi) Leader node: Manages client connections & receives queries
bii) Compute node: Stores data & performs queries & computations. Can have up to 128 compute nodes.
vii) Advanced compression: Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on disk. Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores. Redshift also does not require indexes or materialized views, so it uses less space than traditional relational DB systems. When loading data into an empty table, Redshift automatically samples the data & selects the most appropriate compression scheme.

Massively Parallel Processing (MPP):
Redshift automatically distributes data and query load across all nodes. Redshift makes it easy to add nodes to data warehouse and enables to maintain fast query performance as the data warehouse grows.

Backups:
i) Enabled by default with 1 day retention period.
ii) Max. retention period is 35 days
iii) Redshift always attempts to maintain at least three copies of data (the original, replica on the compute nodes and backup in S3)
iv) Redshift can also asynchronously replicate snapshots to S3 in another region for DR.

Redshift is priced as follows:
i) Compute node hours: The total number of hours you run across all your compute nodes for the billing period. We are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running persistently for an entire month would incur 2,160 instance hours (3 nodes x 24 hours x 30 days). You are not charged for leader node hours; only compute nodes incur charges.
ii) Backup
iii) Data transfer (only within VPC, not outside it)

Security considerations:
i) Encrypted in transit using SSL
ii) Encrypted at rest using AES-256 encryption
iii) By default, Redshift takes care of key management, but you can instead:
a) Manage your own keys through HSM
b) Use AWS KMS

Availability:
i) Currently available only in one AZ
ii) Can restore snapshots to new AZs in the event of an outage.

Questions:
i. Your company has developed an IoT application that sends Telemetry data from
100,000 sensors. The sensors send a datapoint of 1 KB at one-minute intervals to
a DynamoDB collector for monitoring purposes. What AWS stack would enable
you to store data for real-time processing and analytics using BI tools?
A. Sensors -> Kinesis Stream -> Firehose -> DynamoDB
B. Sensors -> Kinesis Stream -> Firehose -> DynamoDB -> S3
C. Sensors -> AWS IoT -> Firehose -> RedShift
D. Sensors -> Kinesis Data Streams -> Firehose -> RDS
Answer (C)

ii. Your company is a provider of online gaming that customers access with various network access devices including mobile phones. What is a data warehousing solutions for large amounts of information on player behavior, statistics and events for analysis using SQL tools?
A. RedShift
B. DynamoDB
C. RDS
D. DynamoDB
E. Elasticsearch
Answer (A)

iii. What two statements are correct when comparing Elasticsearch
and RedShift as analytical tools?
A. Elasticsearch is a text search engine and document indexing tool
B. RedShift supports complex SQL-based queries with Petabyte sized data
store
C. Elasticsearch supports SQL queries
D. RedShift provides only basic analytical services
E. Elasticsearch does not support JSON data type
Answer (A,B)
Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

47. Aurora
i. Aurora is Amazon's own proprietary DB.
ii. Aurora is a MySQL- and PostgreSQL-compatible relational DB engine that combines the speed and availability of high-end commercial DBs with the simplicity and cost-effectiveness of open-source databases.
iii. Aurora provides up to five times better performance than MySQL and three times better performance than PostgreSQL at a much lower price point, while delivering similar performance and availability to high-end commercial DBs.
iv. Starts with 10 GB and scales in 10 GB increments up to 64 TB (storage auto scaling).
v. Compute resources can scale up to 32 vCPUs & 244 GB of memory.
vi. Two copies of your data are contained in each AZ, with a minimum of 3 AZs, so a total of 6 copies of your data exist.

Scaling Aurora:
i. Aurora is designed to transparently handle the loss of up to two copies of data without affecting DB write availability & up to three copies of data without affecting DB read availability.
ii. We can lose a couple of AZs and still not have any issues.
iii. Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.

Aurora Replicas:
i. Aurora replicas – currently 15
ii. MySql read replicas – currently 5
iii. Postgresql – currently 1

Feature comparison – Amazon Aurora Replica vs MySQL Read Replica:
Number of replicas: up to 15 vs up to 5
Replication type: asynchronous (milliseconds) vs asynchronous (seconds)
Performance impact on primary: low vs high
Replica location: in-region vs cross-region
Act as failover target: yes (no data loss) vs yes (min. data loss)
Automated failover: yes vs no
Support for user-defined replication delay: no vs yes
Support for different data or schema vs primary: no vs yes

Backups with Aurora:
i. Automated backups are always enabled on Aurora DB instances. Backups do not impact DB performance.
ii. We can also take snapshots with Aurora. This doesn't impact performance either.
iii. We can share Aurora snapshots with other AWS accounts.

Exam question:
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for the MySQL-compatible & PostgreSQL-compatible editions of Amazon Aurora. An Aurora Serverless DB cluster automatically starts up, shuts down & scales capacity up or down based on your application's needs.

Aurora serverless provides a relatively simple, cost-effective option for infrequent, intermittent or unpredictable workloads.

Questions:
1. A gaming company uses Amazon Aurora as its primary database service. The company has now deployed 5 multi-AZ read replicas to increase the read throughput and for use as failover target. The replicas have been assigned the following failover priority tiers and corresponding sizes are given in parentheses: tier-1 (16TB), tier-1 (32TB), tier-10 (16TB), tier-15 (16TB), tier-15 (32TB). In the event of a failover, Amazon RDS will promote which of the following read replicas?
Answer: Tier-1 (32TB)

2. Your company is currently managing data using the Amazon Aurora MySQL database. The data stored in this database is very important to your business, so you need to make the data available in another region in the event of a disaster. The operating ratio of this database is an SLA of 99% or more. Choose the method that achieves this requirement and is the method that is the fastest to recover.
Options:

Answer:

48. Elasticache
i. ElastiCache is a web service that makes it easy to deploy, operate and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based DBs.
ii. Supports two open-source in-memory caching engines: Memcached and Redis.
iii. ElastiCache is used to speed up the performance of existing DBs.
iv. A way of caching frequent, identical queries.

Requirement comparison (Memcached vs Redis):
Simple cache to offload DB: Memcached yes, Redis yes
Ability to scale horizontally: Memcached yes, Redis yes
Multi-threaded performance: Memcached yes, Redis no
Advanced data types: Memcached no, Redis yes
Ranking/sorting data sets: Memcached no, Redis yes
Pub (publish)/Sub (subscribe) capabilities: Memcached no, Redis yes
Persistence: Memcached no, Redis yes
Multi-AZ: Memcached no, Redis yes
Backup & restore capabilities: Memcached no, Redis yes
v. Use Elasticache to increase DB & web application performance.
vi. When the DB is overloaded, what two steps could you take to make the DB perform better – Read Replica and ElastiCache.
vii. Amazon ElastiCache for Memcached is an ideal front-end for data stores like Amazon RDS or Amazon DynamoDB, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements.
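
A sketch of the cache-aside pattern against an ElastiCache Redis endpoint, using the redis-py client. The endpoint, key names and the query_database function are illustrative placeholders, not real values.

import json
import redis  # pip install redis

cache = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: skip the database entirely
    user = query_database(user_id)           # hypothetical slow DB call (cache miss)
    cache.setex(key, 300, json.dumps(user))  # populate the cache for 5 minutes
    return user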

Questions:
i. What are three primary reasons for deploying ElastiCache?
A. data security
B. managed service
C. replication with Redis
D. durability
E. low latency
Answer (B,C,E)

ii. What service does not support a persistent session data store for web-based stateful applications?
A. RDS
B. Memcached
C. DynamoDB
D. Redis
E. RedShift
Answer (B)

iii. How does Memcached implement horizontal scaling?
A. Auto-Scaling
B. database store
C. partitioning
D. EC2 instances
E. S3 bucket
Answer (C)

iv. What two options are available for tenants to access ElastiCache?
A. VPC peering link
B. EC2 instances
C. EFS mount
D. cross-region VPC
Answer (A,B)

v. What two statements correctly describe in-transit encryption support on
ElastiCache platform ?
A. not supported for ElastiCache platform
B. supported on Redis replication group
C. encrypts cached data at rest
D. not supported on Memcached cluster
E. IPsec must be enabled first
Answer (B,D)

vi. What caching engines are supported with Amazon ElastiCache? (Select two)
A. HAProxy
B. Route 53
C. RedShift
D. Redis
E. Memcached
F. CloudFront
Answer (D,E)

49. Database Migration Services (DMS)
i. AWS DMS is a cloud service that makes it easy to migrate relational DBs, data warehouses, NoSQL DBs and other types of data stores. We can use AWS DMS to migrate our data into the AWS cloud, between on-premises instances (through an AWS cloud setup), or between combinations of cloud & on-premises setups.
ii. How DMS works: AWS DMS is a server in the AWS cloud that runs replication software. We create a source and a target connection to tell DMS where to extract from and load to. Then we schedule a task that runs on this server to move the data. AWS DMS creates the tables and associated primary keys if they do not exist on the target. We can pre-create the target tables manually, or we can use the AWS Schema Conversion Tool (SCT) to create some or all of the target tables, indexes, views, triggers etc.

Types of DMS:
i. Homogeneous migrations – Oracle (on-premise) –> Oracle (AWS Cloud)
ii. Heterogeneous migrations – Microsoft SQL Server (on-premise) –> Amazon Aurora (AWS Cloud)

Sources and Targets:

Sources:
On-premises & EC2 instance DBs (Oracle, Microsoft SQL Server, MySQL, MariaDB, PostgreSQL, SAP, MongoDB, DB2), Azure SQL DB, Amazon RDS (including Aurora), Amazon S3

Targets:
On-premises & EC2 instance DBs (Oracle, Microsoft SQL Server, MySQL, MariaDB, PostgreSQL, SAP), RDS, Redshift, DynamoDB, S3, Elasticsearch, Kinesis Data Streams, DocumentDB

i. Homogeneous Migrations: on-premises DB —> EC2 instance running DMS —> RDS
ii. Heterogeneous Migrations: on-premises DB —> EC2 instance running DMS + SCT —> RDS
iii. We do not need SCT (Schema Conversion Tool) in homogeneous migrations (identical DBs). We need SCT for heterogeneous migrations.
iv. DMS allows you to migrate DBs from one source to AWS.
v. The source can either be on-premises, inside AWS itself, or another cloud provider such as Microsoft Azure.
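
A boto3 sketch of kicking off a DMS migration task, assuming the replication instance and the source/target endpoints have already been created. All ARNs and identifiers are illustrative.

import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-oracle-to-rds",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # full load plus ongoing change data capture
    # Table mappings are passed as a JSON string; this rule includes every schema and table.
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)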

50. Caching Strategies
The following services have caching capabilities:
i. Cloud Front: Caches media files, videos, pictures at edge locations near end user.
ii. API Gateway
iii. ElastiCache: Consists of Memcached & Redis
iv. Dynamo DB Accelerator (DAX)

51. EMR
EMR = Elastic Map Reduce
Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi and Presto. With EMR, we can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and over three times faster than standard Apache Spark.

The central component of EMR is the cluster. A cluster is a collection of EC2 instances. Each instance in the cluster is called a node. Each node has a role within the cluster, referred to as the node type.

Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances.

EMR also installs different s/w components on each node type, giving each node a role in a distributed application like Apache Hadoop.

EMR Node types:
i) Master node: A node that manages the cluster. The master node tracks the status of tasks and monitors the health of the cluster. Every cluster has a master node.
ii) Core node: A node with s/w components that runs tasks and stores data in HDFS (Hadoop Distributed File System) on cluster. Multi-node clusters have at least one core node.
iii) Task node: A node with s/w components that only runs tasks and does not store data in HDFS. Task nodes are optional.

We can configure a cluster to periodically archive the log files stored on the master node to S3. This ensures the log files are available after the cluster terminates, whether this is through normal shutdown or due to an error. EMR archives the log files to S3 at five-minute intervals.

AWS Glue – AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing.

Recap:
i. EMR is used for big data processing
ii. Consists of a master node, a core node and optionally a task node.
iii. By default, log data is stored on the master node.
iv. We can configure replication to S3 on five-minute intervals for all log data from the master node. However this can only be configured when creating the cluster for the first time.
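
A boto3 sketch of launching a small EMR cluster with Spark, with log archiving to S3 set at creation time (as noted above, this can only be configured when the cluster is created). The cluster name, release label, instance types and S3 URI are illustrative; the two role names are the EMR default roles.

import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="analytics-cluster",
    ReleaseLabel="emr-6.10.0",               # illustrative EMR release
    Applications=[{"Name": "Spark"}],
    LogUri="s3://my-emr-logs/",              # master-node logs archived here at ~5-minute intervals
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",    # core/task nodes
        "InstanceCount": 3,                  # 1 master + 2 core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)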

Questions:
i. What Amazon AWS platform is designed for complex analytics of a variety of
large data sets based on custom code. The applications include machine learning
and data transformation?
A. EC2
B. Beanstalk
C. Redshift
D. EMR
Answer (D)

52. Directory Service
AWS Directory Service:
i. It's not a single service; it's a family of managed services.
ii. These allow you to connect AWS resources to an existing on-premises Active Directory.
iii. This is a standalone directory in the cloud
iv. It allows users to access AWS resources & applications with existing corporate credentials.
v. Enables single sign on (SSO) to any domain joined EC2 instance.

Active Directory:
i. On premise directory service.
ii. It is a hierarchical DB of users, groups & computers organized in trees and forests.
iii. We can apply group policies to manage users & devices on a network.
iv. Active Directory is based on two protocols – LDAP (Lightweight Directory Access Protocol) and DNS (Domain Name System)
v. It supports Kerberos, LDAP & NTLM authentication
vi. An AD is intended to be configured in a highly available configuration requiring multiple servers.

AWS Managed Microsoft AD
i. This provides AD domain controllers (DCs) running on windows server. By default we get two DCs for high availability, each of those in its own AZ.
ii. DCs are reachable by applications in VPC
iii. We can add additional DCs for HA, performance, to increase availability or transaction rates.
iv. We have exclusive access to DCs
v. We can extend existing AD to on-premises using AD trust

AWS Managed:
i. Multi-AZ deployment
ii. Patch, monitor, recover
iii. Instance rotation
iv. Snapshot & restore

Customer Managed:
i. Users, groups, GPOs
ii. Standard AD tools
iii. Scale out DCs
iv. Trusts (resource forest)
v. Certificate authorities using LDAPS
vi. Federation

Simple AD:
i. We use simple AD as a standalone directory in the cloud to support windows workload that need basic AD features.
ii. Two sizes. Small <= 500; Large <= 5000 users
iii. Easier to manage EC2 instances
iv. Linux workloads that need LDAP
v. Does not support trusts (can't join an on-premises AD)

AD Connector:
i. Directory gateway (proxy) for on premises AD
ii. Avoids caching information in the cloud
iii. Allow on-premises users to log into AWS cloud using AD
iv. Join EC2 instances to existing AD domain
v. Scale across multiple AD connectors

The three Microsoft compatible services are:
i. Microsoft managed AD
ii. Simple AD
iii. AD Connectors

Cloud Directory:
i. Directory based store for developers
ii. Multiple hierarchies with hundreds of millions of objects.
iii. Use cases: Org charts, Course catalogs, device registries
iv. Fully managed service

Amazon Cognito User Pools:
i. Managed user directory for SAAS applications
ii. Sign up & Sign in for web & mobile
iii. Works with social media identities

AD Compatible:
i. Managed Microsoft AD
ii. AD Connector
iii. Simple AD

Non-AD Compatible:
i. Cloud Directory
ii. Cognito User Pools

53. IAM Policies
Amazon Resource Name (ARN).
ARNs begin with:
arn:partition:service:region:account_id

where:
partition = aws | aws-cn (China regions)
service = s3 | ec2 | rds | ...
region = us-east-1 | eu-central-1 | ...
account_id = the 12-digit AWS account ID

ARNs end with one of the following resource formats:
resource
resource_type/resource
resource_type/resource/qualifier
resource_type/resource:qualifier
resource_type:resource
resource_type:resource:qualifier

Examples:
arn:aws:iam::123456789012:user/mark
arn:aws:s3:::my_bucket/image.png
arn:aws:dynamodb:us-east-1:123456789012:table/orders
arn:aws:ec2:us-east-1:123456789012:instance/*

IAM Policies:
A JSON document that defines permissions.

Identity Policy:
Attached to an IAM user, group or role. These policies let you specify what that identity can do.

Resource Policy:
Attached to a resource. These let you specify who has access to the resource and what actions they can perform.
i. No effect until attached
ii. Simply structured as a list of statements. A policy document is a list of statements. Each statement matches an API request.
iii. Not explicitly allowed, means implicitly denied
iv. Explicit deny is greater than everything else
v. Used to delegate administration to other users
vi. Prevent privilege escalation or unnecessarily broad permissions.
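
A boto3 sketch of an identity policy that allows read-only access to a single bucket, created and attached to a user. The bucket and user names reuse the illustrative values from the ARN examples above; everything here is an example, not a recommended production policy.

import boto3
import json

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::my_bucket", "arn:aws:s3:::my_bucket/*"],
    }],
}

# A policy has no effect until it is attached to an identity.
policy = iam.create_policy(
    PolicyName="ReadOnlyMyBucket",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(UserName="mark", PolicyArn=policy["Policy"]["Arn"])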

54. Resource Access Manager (RAM)
RAM allows resource sharing between accounts.
Resources that can be shared using RAM:
i. App Mesh
ii. Aurora
iii. Code build
iv. EC2
v. EC2 image builder
vi. License manager
vii. Resource groups
viii. Route 53

55. Single Sign-On

56. Route 53 – Domain Name Server (DNS)
In AWS, Route53 is DNS (Domain Name System) & DNS is a collection of rules and records which helps clients to understand how to reach a server through URLs. DNS operates on port 53. Amazon decided to call it route 53 so that’s where the name comes from. It’s a global service.

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

We can compare DNS to a telephone directory: name == contact number, similarly domain name == IP address (Internet Protocol).
DNS is used to convert a domain name (e.g. https://www.google.com/) into an IP address (e.g. 82.124.53.1). IP addresses are used by computers to identify each other on the network. We have two forms of IP addresses (IPv4 and IPv6).

IPv4 Addresses are running out. The IPv4 space is a 32 bit field and has over 4 billion different addresses. IPv6 was created to solve this depletion issue and has an address space of 128 bits (340 undecillion addresses)

Top Level Domains: If we look at common domain names like google.com, bbc.co.uk etc, we notice a string of characters separated by dots (periods). The last word in the domain name represents ‘Top Level’ domain. The second last word in the domain name represents ‘Second Level’ domain name (this is optional and depends on domain name).
Ex: .com, .edu, .gov,  == Top level domain names
.co.uk, gov.uk, .com.au == .uk and .au are top level domain names and .co, .gov and .com are second level domain names.
The top level domain names are controlled by IANA (Internet Assigned Numbers Authority) in a root zone DB which is essentially a DB of all available top level domains.

Domain Registrars: To maintain the uniqueness of domain names we use Domain Registrars. A registrar is an authority that can assign domain names directly under one or more top-level domains. These domains are registered with InterNIC, a service of ICANN, which enforces uniqueness of domain names across the internet. Each domain name gets registered in a central DB known as the WHOIS database. The most popular domain registrars are Amazon, GoDaddy.com, 123-reg.co.uk, etc.

Start of Authority Record (SOA):
The SOA record stores information about:
i) The name of the server that supplied the data for the zone.
ii) The administrator of the zone.
iii) The current version of the data file
iv) The default number of seconds for the time-to-live file on resource records.

Name Server Record (NS): They are used by top level domain servers to direct traffic to the Content DNS server which contains the authoritative DNS records.

A user enters google.com (a domain name) into their browser. The browser doesn't know the IP address for that domain, so the query goes to a top level domain server, which does not contain the IP address either. Instead, it returns something like '172800 IN NS ns.awsdns.com', pointing at the name servers. The NS records are then queried, and they lead to the Start of Authority (SOA). Inside the SOA we have all of our DNS records.
User >> Top level domain >> NS records >> SOA

‘A’ record: This is the fundamental type of DNS record. ‘A’ stands for ‘Address’. The A record is used by the computer to translate a domain name to an IP address. Ex: www.google.com might resolve to the IP address 123.10.10.80.

Time To Live (TTL): The length of time that a DNS record is cached on either the resolving server or the user's local machine is the TTL value, in seconds. The lower the TTL, the faster changes to DNS records propagate throughout the internet. The default TTL is 48 hours, so if we make a DNS change, that change can take up to 48 hours to propagate throughout the entire internet.

Canonical Name (CNAME): Can be used to resolve one domain name to another. For example, we may have a mobile website with a domain name like m.abc.com which is used by users on mobile devices. We may also want a domain name like mobile.abc.com to resolve to the same address. In other words, instead of having two separate IP addresses, we just map one name to another.

Alias Records: Used to map resource record sets in your hosted zone to an ELB, CloudFront distribution or S3 bucket that is configured as a website. Alias records work like a CNAME record in that we map one DNS name (www.example.com) to another target DNS name (elb1234.elb.amazonaws.com). The key difference between Alias records and CNAMEs is that a CNAME can't be used for a naked domain name (also called the zone apex record). We cannot have a CNAME for a naked domain such as example.com; it must be either an ‘A’ record or an Alias.
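
A boto3 sketch of creating an Alias A record at the zone apex pointing at an ELB. The hosted zone ID, domain and ELB values are illustrative; note that the AliasTarget HostedZoneId is the ELB's own hosted zone ID, not your domain's.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                # your hosted zone (illustrative)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",      # naked domain / zone apex
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",   # illustrative ELB zone ID
                    "DNSName": "elb1234.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)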

Recap:
ELBs do not have pre-defined IPv4 addresses. We need to resolve them using a DNS name.
Understand the difference between an Alias record (like a telephone directory entry that maps a name straight to a number) and a CNAME (like a directory entry that refers one name to another name, which you then look up to get the number).
Given the choice, always choose an Alias record over a CNAME.
Common DNS Types: SOA Records, NS Records, A Records, CNAMES, MX Records (use for mail), PTR Records (reverse of A records – looking up a domain name against an IP address)

Questions:
i. What DNS records can be used for pointing a zone apex to an Elastic Load
Balancer or CloudFront distribution? (Select two)
A. Alias
B. CNAME
C. MX
D. A
E. Name Server
Answer (A,D)

ii. What services are primarily provided by DNS Route 53? (Select
three)
A. load balancing web servers within a private subnet
B. resolve hostnames and IP addresses
C. load balancing web servers within a public subnet
D. load balancing data replication requests between ECS containers
E. resolve queries and route internet traffic to AWS resources
F. automated health checks to EC2 instances
Answer (B,E,F)

iii. How is Route 53 configured for Warm Standby fault tolerance? (Select two)
A. automated health checks
B. path-based routing
C. failover records
D. Alias records
Answer (A,C)

iv. How is DNS Route 53 configured for Multi-Site fault tolerance? (Select two)
A. IP address
B. weighted records (non-zero)
C. health checks
D. Alias records
E. zero weighted records
Answer (B,C)

v. How are DNS records managed with Amazon AWS to enable high availability?
A. Auto-Scaling
B. server health checks
C. reverse proxy
D. elastic load balancing
Answer (C)

vi. What is the difference between Warm Standby and Multi-Site fault tolerance?
(Select two)
A. Multi-Site enables lower RTO and most recent RPO
B. Warm Standby enables lower RTO and most recent RPO
C. Multi-Site provides active/active load balancing
D. Multi-Site provides active/standby load balancing
E. DNS Route 53 is not required for Warm Standby
Answer (A,C)

57. Route 53 – Register a Domain Name Lab
Domain registration is not free.
AWS Management Console >> Services >> Route 53 (Under Networking & Content Delivery) >> Click on ‘Get Started Now’ under Domain Registration >> Register Domain >> Enter your desired domain name like testwebsite and click on Check. If its available, add to cart >> Continue >> Add details >> Continue & Register
Create three EC2 instances in three different regions.

Tips:
You can buy domain names directly with AWS.
It can take up to 3 days to register depending on the circumstances.

58. Route 53 Routing Policies
Following routing policies are available with Route 53:
i) Simple Routing
ii) Weighted Routing
iii) Latency Based Routing
iv) Failover Routing
v) Geolocation Routing
vi) Geoproximity Routing (Traffic Flow Only)
vii) Multivalue Answer Routing

59. Route 53 Simple Routing Policy
If we choose the simple routing policy we can only have one record with multiple IP addresses, and we can't have any health checks. If we specify multiple values in a record, Route 53 returns all values to the user in a random order. For example, a user makes a DNS request to Route 53 and we have two IP addresses (30.0.0.1 & 30.0.0.2); Route 53 just returns them in a random order.

60. Route 53 Weighted Routing Policy
Allows to split traffic based on different weights assigned. For example, we can set 10% traffic to go to US-EAST-1 and 90% traffic to go to EU-WEST-1. User types domain name into browser >> Navigate to Route 53 >> Route 53 distributes 10% traffic to US-EAST-1 and 90% traffic to EU-WEST-1.

Health Checks:
i) We can set health checks on individual record sets.
ii) If a record set fails a health check it will be removed from Route 53 until it passes the health check.
iii) We can set up SNS notifications to alert us if a health check fails.

61. Route 53 Latency Routing Policy
Allows you to route traffic based on the lowest network latency for the end user (i.e. which region will give them the fastest response time). To use latency-based routing, we create a latency resource record set for the EC2 (or ELB) resource in each region that hosts our website. When Route 53 receives a query for our site, it selects the latency resource record set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resource record set.
The user makes a request to Route 53, and Route 53 looks at the different response times from the different regions. Suppose the user gets a 55 millisecond response time from EU-WEST-2 and a 300 millisecond response time from AP-SOUTHEAST-2: the traffic is sent to EU-WEST-2 as it has the much lower latency.

62. Route 53 Failover Routing Policy
Failover routing policies are used when we want to create an active/ passive set up. For example, you may want your primary site to be in EU-WEST-2 and secondary DR site in AP-SOUTHEAST-2. Route 53 will monitor the health of primary site using a health check. A health check monitors the health of your end points.
Users connect to Route 53 by doing a DNS request. The active site is in EU-WEST-2 and the passive site is in AP-SOUTHEAST-2. If there is a failure in the EU-WEST-2 region, the traffic will automatically be sent to AP-SOUTHEAST-2.

63. Route 53 Geolocation Routing Policy
Geolocation routing lets you choose where your traffic will be sent based on the geographic location of your users (i.e. the location from which the DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for European customers. These servers may use the local languages of European customers and display all prices in Euros. In simple terms, it allows you to send European customers to European servers and US customers to US servers: it routes traffic based on the user's location. Notice the difference between the Geolocation Routing Policy and the Latency Routing Policy.

Questions:
i. One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the US to a silicon valley based streaming services company. As per the terms of distribution, the company must make sure that only users from the US are able to live stream the matches on their platform. Users from other countries in the world must be denied access to these live-streamed matches. Which of the following options would allow the company to enforce these streaming restrictions? (Select two)
Answer: a. Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights.
b. Use georestriction to prevent users in specific geographic locations from accessing content that you are distributing through a CloudFront web distribution.

64. Route 53 Geoproximity Routing Policy (Traffic Flow Only)
Geoproximity routing lets Route 53 route traffic to your resources based on the geographic location of your users as well as of your resources. You can also optionally choose to route more or less traffic to a given resource by specifying a value known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource. To use geoproximity routing, you must use Route 53 Traffic Flow.

65. Route 53 Multivalue Answer
Multivalue answer routing lets you configure Route 53 to return multiple values, such as IP addresses for web servers, in response to DNS queries. We can specify multiple values for almost any record, but multivalue answer routing also lets you check the health of each resource, so Route 53 returns only the values for healthy resources.
This is similar to simple routing; however, it allows you to put health checks on each record set.

66. VPCs
We may consider a VPC (Virtual Private Cloud) as a virtual or logical data center in the cloud. A VPC lets us provision a logically isolated section of the AWS cloud where we can launch AWS resources in a virtual network that we define. We have complete control over the virtual networking environment, including selection of our own IP address range, creation of subnets and configuration of route tables and network gateways. We can easily customize the network configuration of an Amazon VPC. For example, we can create a public-facing subnet for web servers that has access to the internet, and place backend systems such as databases or application servers in a private-facing subnet with no internet access. We can leverage multiple layers of security, including security groups and NACLs, to help control access to the EC2 instances in each subnet. We can also create a hardware VPN connection between our corporate datacenter and the VPC and leverage the AWS cloud as an extension of the corporate datacenter.

Using VPC we can
i) Launch instances into a subnet of our choice
ii) Assign custom IP address ranges in each subnet
iii) Configure route tables between subnets
iv) Create internet gateway and attach it to our VPC
v) Much better security control over your AWS resources
vi) Instance security groups
vii) Subnet network ACLs

Default VPC vs Custom VPC:
i) Default VPC is user friendly, allowing us to immediately deploy the instances.
ii) All subnets in default VPC have a route out to the internet
iii) Each EC2 instance has both a public and private IP address.

VPC Peering:
i) Allows to connect one VPC with another via a direct network route using private IP addresses
ii) Instances behave as if they were on the same private network
iii) We can peer VPCs with other AWS accounts as well as with other VPCs in the same account
iv) Peering is in a star configuration. i.e 1 central VPC peers with 4 others. No transitive peering.
v) We can peer between regions.

Tips:
i) Consists of Internet gateways (or virtual private gateways), route tables, NACLs, subnets and security groups.
ii) 1 subnet = 1 AZ. We can have multiple subnets in 1 AZ
iii) Security groups are stateful. NACLs are stateless

Questions:
i. What AWS services work in concert to integrate security monitoring and
audit within a VPC? (Select three)
A. Syslog
B. CloudWatch
C. WAF
D. CloudTrail
E. VPC Flow Log
Answer: B, D, E

ii. What statements correctly describe support for Microsoft SQL Server within
Amazon VPC? (Select three)
A. read/write replica
B. read replica only
C. vertical scaling
D. native load balancing
E. EBS storage only
F. S3 storage only
Answer (B,C,D)

iii. You have enabled Amazon RDS database services in VPC1 for an
application with public web servers in VPC2. How do you connect the web
servers to the RDS database instance so they can communicate considering the
VPC’s are in different regions?
A. VPC endpoints
B. VPN gateway
C. path-based routing
D. publicly accessible database
E. VPC peering
Answer (D)

67. Build a Custom VPC

68. Network Address Translation (NAT)
NAT instances are individual EC2 instances. NAT gateways are highly available, managed gateways (redundant within an AZ, typically deployed one per AZ) that allow instances in private subnets to communicate out to the internet without becoming public. NAT gateways are not dependent on a single instance.

Tips on NAT instances:
i) When creating a NAT instance, disable source/ destination check on the instance
ii) NAT instances must be in a public subnet
iii) There must be route out of the private subnet to the NAT instance, in order for this to work
iv) The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.
v) We can create high availability using Autoscaling groups, multiple subnets in different AZs and a script to automate failover.
vi) Behind a Security Group

Tips on NAT Gateways:
i) Redundant inside the AZ
ii) Preferred by the enterprise
iii) Starts at 5Gbps and scales currently to 45Gbps
iv) No need to patch
v) Not associated with security groups
vi) Automatically assigned a public IP address
vii) Remember to update route tables
viii) No need to disable source/ destination checks
ix) If you have resources in multiple AZs and they share one NAT gateway, in the event that the NAT gateways AZ is down, resources in the other AZ lose internet access. To create an AZ independent architecture, create a NAT gateway in each AZ and configure your routing to ensure that resources use the NAT gateway in the same AZ.
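
A boto3 sketch of creating a NAT gateway in a public subnet and pointing a private subnet's default route at it (the "remember to update route tables" tip above). All IDs are illustrative.

import boto3

ec2 = boto3.client("ec2")

# NAT gateways need an Elastic IP and must live in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicaaaa",
    AllocationId=eip["AllocationId"],
)

# Route table associated with the private subnet: default route via the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0privatebbbb",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)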

69. Access Control List (ACL)
A network ACL is created by default when we create a VPC; it is called the default network ACL. Every time we add a subnet to the VPC, it is associated with the default NACL. We can then associate the subnet with a different NACL, but a subnet can only be associated with one NACL at any given time. A NACL, however, can have multiple subnets associated with it.

NACLs always act first, before security groups.

Recap:
i. VPC automatically comes with a default NACL and by default it allows all outbound and inbound traffic.
ii. We can create custom NACLs. By default, each custom n/w ACL denies all inbound and outbound traffic until we add rules.
iii. Each subnet in VPC must be associated with a NACL. If we dont explicitly associate a subnet with a NACL, the subnet is automatically associated with the default NACL.
iv. We can block IP addresses using NACLs but not via security groups.
v. We can associate a NACL with multiple subnets. However a subnet can be associated with only one NACL at a time. When we associate a NACL with a subnet, the previous association is removed.
vi. NACLs contain a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.
vii. NACLs have separate inbound and outbound rules and each rule can either allow or deny traffic.
viii. NACLs are stateless, responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa)

70. Custom VPCs and ELBs
We need at least two public subnets in order to create a load balancer.

71. VPC Flow Logs
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in a VPC. Flow log data is stored using CloudWatch Logs. After we create a flow log, we can view and retrieve its data in CloudWatch Logs.

Flow logs can be created at 3 levels:
i. VPC
ii. Subnet
iii. Network interface level
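
A boto3 sketch of enabling a flow log at the VPC level, delivered to CloudWatch Logs. The VPC ID, log group name and IAM role ARN are illustrative.

import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234"],
    ResourceType="VPC",                      # could also be "Subnet" or "NetworkInterface"
    TrafficType="ALL",                       # ACCEPT, REJECT or ALL
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)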

i. We cannot enable flow logs for VPCs that are peered with our VPC unless the peer VPC is in our account.
ii. We can tag flow logs.
iii. After we have created a flow log, we cannot change its configuration. For example, we cant associate a different IAM role with the flow log.

Not all IP traffic is monitored. The following traffic is not captured:
i. Traffic generated by instances when they contact the DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.
ii. Traffic generated by a windows instance for Amazon windows license activation.
iii. Traffic to and from 169.254.169.254 for instance metadata
iv. DHCP traffic
v. Traffic to the reserved IP address for the default VPC router.

Questions:
i. What is the purpose of VPC Flow Logs?
A. capture VPC error messages
B. capture IP traffic on network interfaces
C. monitor network performance
D. monitor netflow data from subnets
E. enable Syslog services for VPC
Answer (B)

72. Bastions
A Bastion Host:
A bastion host is a special purpose computer on a n/w specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. It is hardened in this manner primarily due to its location and purpose, which is either on the outside of a firewall or in a DMZ and usually involves access from untrusted n/ws or computers.

Tips:
i. A NAT Gateway or NAT instance is used to provide internet traffic to EC2 instances in a private subnets
ii. A Bastion is used to securely administer EC2 instances (Using SSH or RDP). Bastions are called Jump Boxes in Australia
iii. We cannot use a NAT Gateway as a Bastion Host.

73. Direct Connect
Direct connect is a cloud service solution that makes it easy to establish a dedicated n/w connection from your premises to AWS. Using Direct Connect, we can establish private connectivity between AWS and your datacenter, office or colocation environment, which in many cases can reduce your n/w costs, increase bandwidth throughput, and provide a more consistent n/w experience than internet-based connections. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations.

Tips:
i. Direct Connect directly connects your data center to AWS
ii. Useful for high throughput workloads (i.e lots of n/w traffic)
iii. Or if you need a stable and reliable secure connection

Questions:
i. The engineering team at an e-commerce company wants to establish a dedicated, encrypted, low latency, and high throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection. As a solutions architect, which of the following solutions would you recommend to the company?
Answer: Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS cloud
Explanation: With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.

74. Setting Up a VPN Over a Direct Connect Connection
Steps to setting up Direct Connect:
i. Create a virtual interface in the Direct Connect console. This is a Public Virtual Interface
ii. Go to the VPC console and then to VPN connections. Create a Customer Gateway
iii. Create a Virtual Private Gateway
iv. Attach the Virtual Private Gateway to the desired VPC
v. Select VPN connections and create a new VPN connection
vi. Select the Virtual Private Gateway and the Customer Gateway
vii. Once the VPN is available, set up the VPN on the customer gateway or firewall.
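
The VPN side of these steps can also be scripted. A rough boto3 sketch, assuming the Public Virtual Interface already exists (the customer gateway IP, ASN and VPC ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway = your on-premises router/firewall (placeholder IP/ASN).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]

# Virtual private gateway, attached to the target VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0"
)

# Site-to-Site VPN connection between the two gateways.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
)["VpnConnection"]

# vpn["CustomerGatewayConfiguration"] holds the tunnel details used to
# configure the VPN on the customer gateway or firewall (step vii above).
```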

Questions:
i. A video analytics organization has been acquired by a leading media company. The analytics organization has 10 independent applications with an on-premises data footprint of about 70TB for each application. The CTO of the media company has set a timeline of two weeks to carry out the data migration from on-premises data center to AWS Cloud and establish connectivity. Which of the following are the MOST cost-effective options for completing the data transfer and establishing connectivity? (Select two)
Answer: a. Set up a Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud.
b. Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer.
Explanation: AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC. Direct Connect involves significant monetary investment and takes at least a month to set up, therefore it’s not the correct fit for this use-case.

75. Global Accelerator
AWS Global Accelerator is a service in which we create accelerators to improve availability and performance of our applications for local and global users. Global Accelerator directs traffic to optimal endpoints over the AWS global n/w. This improves the availability and performance of internet applications that are used by a global audience.

AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of your applications by lowering first-byte latency (the round trip time for a packet to go from a client to your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred in a given time) as compared to the public internet.

AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances.

User >> Edge Locations >> AWS Global Accelerator >> Endpoint group >> Endpoints

Global Accelerator includes the following components:
i. Static IP addresses
ii. Accelerator
iii. DNS Name
iv. Network Zone
v. Listener
vi. Endpoint Group
vii. Endpoint

i. Static IP addresses:
By default, Global Accelerator provides two static IP addresses that are associated with your accelerator (for example, 1.2.3.4 and 5.6.7.8). You can also bring your own IP addresses.

ii. Accelerator:
An Accelerator directs traffic to optimal endpoints over the AWS global n/w to improve the availability and performance of internet applications. Each accelerator includes one or more listeners.

iii. DNS Name
Global Accelerator assigns each accelerator a default DNS name, similar to a1234567890abc.awsglobalaccelerator.com, that points to the static IP addresses that Global Accelerator assigns to you. Depending on the use case, we can use our accelerator's static IP addresses or DNS name to route traffic to the accelerator, or set up DNS records to route traffic using our own custom domain name.

iv. Network Zone:
A n/w zone services the static IP addresses for your accelerator from a unique IP subnet. Similar to an AWS AZ, a n/w zone is an isolated unit with its own set of physical infrastructure. When we configure an accelerator, by default Global Accelerator allocates two IPv4 addresses for it. If one IP address from a n/w zone becomes unavailable due to IP address blocking by certain client n/ws or n/w disruptions, client applications can retry on the healthy static IP address from the other isolated n/w zone.

v. Listener:
A listener processes inbound connections from clients to Global Accelerator, based on the port (or port range) and protocol that we configure. Global Accelerator supports both TCP and UDP protocols.
Each listener has one or more endpoint groups associated with it, and traffic is forwarded to endpoints in one of the groups.
We associate endpoint groups with listeners by specifying the regions that we want to distribute traffic to. Traffic is distributed to optimal endpoints within the endpoint groups associated with a listener.

vi. Endpoint Group
Each endpoint group is associated with a specific AWS region. Endpoint groups include one or more endpoints in the region. We can increase or reduce the percentage of traffic that would otherwise be directed to an endpoint group by adjusting a setting called a traffic dial. The traffic dial lets you easily do performance testing or blue/green deployment testing for new releases across different AWS regions.

vii. Endpoint:
Endpoints can be n/w load balancers, application load balancers, EC2 instances or Elastic IP addresses. An Application Load Balancer endpoint can be internet-facing or internal. Traffic is routed to endpoints based on configuration options that you choose, such as endpoint weights. For each endpoint, we can configure a weight, which is a number used to specify the proportion of traffic to route to each one. This can be useful, for example, to do performance testing within a Region.
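
Tying the components together, a hedged boto3 sketch that creates an accelerator, a TCP listener and an endpoint group pointing at an ALB (the ALB ARN is a placeholder; note that the Global Accelerator API is called against the us-west-2 Region):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Accelerator: provides two static anycast IPs and a default DNS name.
acc = ga.create_accelerator(
    Name="demo-accelerator", IpAddressType="IPV4", Enabled=True
)["Accelerator"]

# Listener: accept TCP traffic on port 80.
listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
)["Listener"]

# Endpoint group in one Region, routing to an ALB endpoint with a weight.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    TrafficDialPercentage=100.0,                        # the traffic dial
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/demo/abc123",  # placeholder ALB ARN
        "Weight": 128,
    }],
)
```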

Questions:
i. A gaming company is looking at improving the availability and performance of its global flagship application which utilizes UDP protocol and needs to support fast regional failover in case an AWS Region goes down. Which of the following AWS services represents the best solution for this use-case?
Answer: AWS Global Accelerator
Explanation: Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

76. VPC End Points
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink w/o requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in the VPC do not require public IP addresses to communicate with resources in the service. Traffic between the VPC and the other service doesn't leave the Amazon n/w.
Endpoints are virtual devices. They are horizontally scaled, redundant and highly available VPC components that allow communication between instances in your VPC and services w/o imposing availability risks or bandwidth constraints on n/w traffic.

There are two types of VPC endpoints:
i. Interface Endpoints – An interface endpoint is an elastic n/w interface with a private IP address that serves as an entry point for traffic destined to a supported service.
ii. Gateway Endpoints – A gateway endpoint is a target for a route in your route table, used for traffic destined to a supported AWS service (Amazon S3 or DynamoDB).
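
A minimal boto3 sketch (VPC and route table IDs are placeholders) of creating a Gateway endpoint for S3, so instances in private subnets can reach S3 without an internet gateway or NAT:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: adds a route to the given route tables so that
# S3-bound traffic stays on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                      # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],            # placeholder
)
```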

77. VPC Private Link
To open our applications up to other VPCs, we can either:
i. Open the VPC up to the internet. Disadvantages: security considerations, everything in the public subnet is public, and there is a lot more to manage.
ii. Use VPC peering. Disadvantage: we have to create and manage many different peering relationships, and the whole n/w is accessible. This isn't good if we have multiple applications within the VPC.

Opening services in a VPC to another VPC using PrivateLink:
i. The best way to expose a service VPC to tens, hundreds or thousands of customer VPCs
ii. Doesn't require VPC peering; no route tables, NAT, IGWs, etc.
iii. Requires a n/w load balancer on the service VPC and an ENI on the customer VPC.
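
A hedged boto3 sketch of both sides of PrivateLink: the service VPC publishes its n/w load balancer as an endpoint service, and a customer VPC creates an Interface endpoint to it (the NLB ARN, VPC ID and subnet ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Service VPC side: publish the NLB as an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/demo/abc123"  # placeholder
    ],
    AcceptanceRequired=True,            # service owner approves each request
)["ServiceConfiguration"]

# Customer VPC side: create an Interface endpoint (an ENI) to that service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0fedcba9876543210",                      # placeholder customer VPC
    ServiceName=svc["ServiceName"],
    SubnetIds=["subnet-0123456789abcdef0"],             # placeholder
)
```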

78. Transit Gateway
i. Allows us to have transitive peering between thousands of VPCs and on-premises data centers.
ii. Works on a hub and spoke model
iii. Works on a regional basis, but we can have it across multiple regions.
iv. We can use it across multiple AWS accounts using RAM
v. We can use route tables to limit how VPCs talk to one another.
vi. Works with Direct Connect as well as VPN connections.
vii. Supports IP multicast (not supported by any other AWS service)
viii. A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks. A transit gateway by itself cannot establish a low latency and high throughput connection between a data center and AWS Cloud.
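
A rough boto3 sketch (IDs are placeholders) of creating a transit gateway and attaching one VPC to it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hub: the transit gateway itself.
tgw = ec2.create_transit_gateway(
    Description="hub for VPCs and on-premises networks"
)["TransitGateway"]

# Spoke: attach a VPC via one subnet per AZ that should be reachable.
# In practice, wait until the transit gateway state is 'available' first.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",                      # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],             # placeholder
)
```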

79. VPN Hub
i. If you have multiple sites, each with its own VPN connection, you can use AWS VPN CloudHub to connect those sites together
ii. Hub-and-spoke model
iii. Low cost and easy to manage
iv. It operates over the public internet, but all traffic between the customer gateway and the AWS VPN CloudHub is encrypted.

80. Networking Costs
i. Use private IP addresses over public IP addresses to save on costs. This then utilizes the AWS backbone n/w.
ii. If you want to cut all n/w costs, group EC2 instances in the same AZ and use private IP addresses. This will be cost-free, but make sure to keep in mind single point of failure issues.

81. ELB
ELB = Elastic Load Balancers
Balances load across multiple servers. Load balancers are servers that forward internet traffic to multiple servers (EC2 instances) downstream; these downstream servers are also called backend EC2 instances. The more users we have, the more the load is spread across multiple downstream instances. An ELB exposes a single point of access (DNS name/hostname) for our application and seamlessly handles failures of downstream instances. It performs regular health checks on the EC2 instances; if one of them is failing, the load balancer stops directing traffic to that instance, so we can hide the failure of any instance behind the load balancer. It provides SSL termination (HTTPS) for your websites. A load balancer can be used across multiple AZs, which makes an application highly available.

ELB is a managed load balancer, so we do not need to provision servers; AWS does it for us and guarantees that it will be working. AWS takes care of upgrades, maintenance and high availability.
Setting up your own load balancer on an EC2 instance is less expensive, but it involves a lot of effort: maintenance, integration, taking care of the OS, upgrades, etc.

Three types of Load Balancers:
i) Application Load Balancer: Load balances HTTP and HTTPS traffic. It operates at layer 7 and is application aware. It is intelligent and can perform advanced request routing, sending specified requests to specific web servers.
ii) Network Load Balancer: Load balances TCP, TLS and UDP traffic where extreme (ultra-high) performance is required. It operates at layer 4 and is capable of handling millions of requests per second while maintaining ultra-low latencies.
iii) Classic Load Balancer: The legacy ELB. Simple routing and basic load balancing at the most cost-effective rate. We can load balance HTTP/HTTPS applications and use layer 7 specific features such as X-Forwarded-For and sticky sessions. We can also use strict layer 4 load balancing for applications that rely purely on the TCP protocol. If an application stops responding, the Classic Load Balancer responds with a 504 error. This means the application is having an issue, either at the web server or at the DB server; identify where the application is failing and scale it up or out where possible. A 504 error means the gateway has timed out, i.e. the application is not responding within the idle timeout period.
On Nov 10th 2020, AWS released the Gateway Load Balancer.

User >> ELB >> Multiple EC2 instances

If we need the IPv4 address of the end user, look at the X-Forwarded-For header.
Instances monitored by the ELB are reported as InService or OutOfService.
Load Balancers have their own DNS name. We are never given an IP address.

Load Balancers cannot help with back-end autoscaling. You should use Auto Scaling Groups.
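
As a sketch only (subnet, security group, VPC and instance IDs are placeholders), the boto3 calls below create an internet-facing Application Load Balancer across two public subnets, a target group with a health check, an HTTP listener and the target registrations:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# An ALB needs at least two subnets in different AZs.
alb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],                            # placeholder
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group with a simple HTTP health check.
tg = elbv2.create_target_group(
    Name="demo-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                      # placeholder
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/",
    TargetType="instance",
)["TargetGroups"][0]

# Listener forwards HTTP traffic on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# Register the backend EC2 instances (placeholder IDs).
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)
```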

Questions:
i. Which two of the following can be used as custom origin servers? (Select two)
A. S3 bucket
B. S3 object
C. EC2 instance
D. Elastic Load Balancer
E. API gateway
Answer (C,D)

ii. What two features describe an Application Load Balancer (ALB)?
A. dynamic port mapping
B. SSL listener
C. layer 7 load balancer
D. backend server authentication
E. multi-region forwarding
Answer (A,C)

iii. What three features are characteristic of Classic Load Balancer?
A. dynamic port mapping
B. path-based routing
C. SSL listener
D. backend server authentication
E. ECS
F. Layer 4 based load balancer
Answer (C,D,F)

iv. What security feature is only available with Classic Load Balancer?
A. IAM role
B. SAML
C. back-end server authentication
D. security groups
E. LDAP
Answer (C)

v. What is a primary difference between Classic and Network Load Balancer?
A. IP address target
B. Auto-Scaling
C. protocol target
D. cross-zone load balancing
E. listener
Answer (A)

vi. What DNS records can be used for pointing a zone apex to an Elastic Load Balancer or CloudFront distribution? (Select two)
A. Alias
B. CNAME
C. MX
D. A
E. Name Server
Answer (A,D)

vii. You have an Elastic Load Balancer assigned to a VPC with public and private subnets. ELB is configured to load balance traffic to a group of EC2 instances assigned to an Auto-Scaling group. What three statements are correct?
A. Elastic Load Balancer is assigned to a public subnet
B. network ACL is assigned to Elastic Load Balancer
C. security group is assigned to Elastic Load Balancer
D. cross-zone load balancing is not supported
E. Elastic Load Balancer forwards traffic to primary private IP address (eth0 interface) on each instance
Answer (A,C,E)

viii. How is load balancing enabled for multiple tasks to the same container instance?
A. path-based routing
B. reverse proxy
C. NAT
D. dynamic port mapping
E. dynamic listeners
Answer (D)

ix. A solutions architect has created a new Application Load Balancer and has configured a target group with IP address as a target type. Which of the following types of IP addresses are allowed as a valid value for this target type?
Answer: Private IP address
Explanation: When you create a target group, you specify its target type, which can be an Instance, IP or a Lambda function. For IP address target type, you can route traffic using any private IP address from one or more network interfaces.

x. The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances. As a solutions architect, what would you recommend so that the application runs near its peak performance state?
Answer: Configure the ASG to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
Explanation: With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. With step scaling and simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Neither step scaling nor simple scaling can be configured to use a target metric for CPU utilization. Also an Auto Scaling group cannot directly use a Cloudwatch alarm as the source for a scale-in or scale-out event

xi. An e-commerce company is looking for a solution with high availability, as it plans to migrate its flagship application to a fleet of Amazon EC2 instances. The solution should allow for content-based routing as part of the architecture. As a Solutions Architect, which of the following will you suggest for the company?
Answer: Use an ALB for distributing traffic to the EC2 instances spread across different AZs. Configure ASG to mask any failure of an instance.
Explanation: The Application Load Balancer (ALB) is best suited for load balancing HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), the Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
This is the correct option since the question has a specific requirement for content-based routing which can be configured via the Application Load Balancer. Different AZs provide high availability to the overall architecture and Auto Scaling group will help mask any instance failures.

82. ELBs and Health Checks – LAB
Launch two EC2 instances in different AZs >>

83. Advanced ELB
Sticky Sessions: A Classic Load Balancer routes each request independently to the registered EC2 instance with the smallest load. Sticky sessions allow a user's session to bind to a specific EC2 instance. This ensures that all requests from the user during the session are sent to the same EC2 instance. We can enable sticky sessions for Application Load Balancers as well, but the traffic will be sent at the target group level rather than to an individual EC2 instance.
If a user is visiting your website via a Classic Load Balancer and we have two EC2 instances but all the traffic is being sent to only one of them, we likely need to disable sticky sessions.

No Cross Zone Load Balancing: the ELB (suppose Classic) won't be able to send traffic to another AZ.
Cross Zone Load Balancing: the ELB (suppose Classic) will be able to send traffic to another AZ.
User >> Route 53 >> 100% of traffic is sent via the ELB to EC2 instances in one AZ (US-EAST-1A), and 0% of traffic is sent via the ELB to the EC2 instances in another AZ (US-EAST-1B). We should enable Cross Zone Load Balancing for traffic to flow to the second set of EC2 instances.

Path Patterns: We can create a listener with rules to forward requests based on the URL path. This is known as path-based routing. If we are running microservices, we can route traffic to multiple back-end services using path-based routing. For example, we can route general requests to one target group and requests to render images to another target group.
User >> Route 53 >> 100% of traffic is sent via the ELB (an Application Load Balancer, as it reads the URL path) to EC2 instances in one AZ (US-EAST-1A) to access www.myurl.com. In order to route requests for images (www.myurl.com/images/) to instances in another AZ (US-EAST-1B), we need to enable Path Patterns.

Tips:
i) Sticky Sessions enable users to stick to the same EC2 instance. Can be useful if you are storing information locally on that instance.
ii) Cross Zone Load Balancing enables us to load balance across multiple AZs.
iii) Path Patterns allow us to direct traffic to different EC2 instances based on the URL contained in the request.
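
For illustration, a hedged boto3 sketch (all ARNs are placeholders) that enables sticky sessions on a target group and adds a path-pattern rule so that /images/* requests are forwarded to a separate target group:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Sticky sessions on an ALB are configured at the target group level.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)

# Path pattern rule: requests to /images/* go to a different target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/demo/abc123/def456",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/images/xyz789",  # placeholder
    }],
)
```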

Questions:
i. What enables load balancing between multiple applications per load balancer?
A. listeners
B. sticky sessions
C. path-based routing
D. backend server authentication
Answer (C)

84. ASG
ASG = Auto Scaling Groups
Scalability means an application/system can handle greater loads by adapting to new conditions. In real life the load on websites and applications can change. In the cloud, we can create and get rid of servers very quickly. The goals of an ASG are to:
i) Scale out (add EC2 instances) to match an increased load
ii) Scale in (remove EC2 instances) to match a decreased load
iii) Ensure we have a minimum and maximum number of machines running
iv) Automatically register new instances to a load balancer
v) Replace unhealthy instances
Cost savings: only run at optimal capacity.
An ASG offers the capacity to scale out and scale in by adding or removing instances based on demand.

Minimum size >> Actual size/ Desired capacity >> Maximum size

Two types of scalability: vertical scalability and horizontal scalability (also called elasticity). Scalability is linked to, but different from, high availability. High availability means running the application in at least two AZs to survive a data center loss.

Vertical Scalability:
i. Increase the size of an instance (scale up/down).
ii. Example: a junior developer being promoted to senior developer can handle more workload.
iii. In AWS: if your application runs on a t2.micro, upgrade to a t2.large.
iv. Very common for non-distributed systems such as databases.
v. There is usually a limit to how much we can vertically scale (hardware limit).

Horizontal Scalability:
i. Increase the number of instances/systems for the application (scale out/in). Scale out = add instances.
ii. Example: hiring multiple junior developers in a team.
iii. Implies distributed systems; very common for web/modern applications.
iv. Easy to scale horizontally.
v. ASG and ELB support horizontal scaling.
Auto Scaling has three components:
i. Groups: A logical component in which to place EC2 instances, e.g. a webserver group, an application group or a DB group.
ii. Configuration Templates: A group uses a launch template or a launch configuration as a configuration template for its EC2 instances. We can specify information such as the AMI ID, instance type, key pair, security groups and block device mapping for the EC2 instances.
iii. Scaling Options: Scaling options provide several ways to scale Auto Scaling groups. For example, we can configure a group to scale based on the occurrence of specified conditions (dynamic scaling) or on a schedule.
Five types of Scaling Options:
i. Maintain current instance levels at all times: We can configure an Auto Scaling group to maintain a specified number of running instances at all times. To maintain current instance levels, EC2 Auto Scaling performs periodic health checks on running instances within the Auto Scaling group. When EC2 Auto Scaling finds an unhealthy instance, it terminates that instance and launches a new one.
ii. Scale manually: Manual scaling is the most basic way to scale your resources; you specify only the change in the maximum, minimum or desired capacity of your Auto Scaling group. EC2 Auto Scaling manages the process of creating or terminating instances to maintain the updated capacity.
iii. Scale based on a schedule (scheduled scaling): Scaling actions are performed automatically as a function of date and time, e.g. scaling to 10 instances every Monday morning at 9 AM. This is useful when we know exactly when to increase or decrease the number of instances in a group, simply because the need arises on a predictable schedule.
iv. Scale based on demand (dynamic scaling): A more advanced way to scale your resources, using scaling policies that let you define parameters that control the scaling process. For example, say you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 50% when the load on the application changes. This is useful for scaling in response to changing conditions, when we don't know when those conditions will change.
v. Predictive scaling: Use Amazon EC2 Auto Scaling and AWS Auto Scaling to scale resources across multiple services. AWS Auto Scaling helps maintain optimal availability and performance by combining predictive scaling and dynamic scaling (proactive and reactive approaches) to scale EC2 capacity faster. Predictive scaling forecasts future capacity needs based on previous performance.

Scalability: the ability to accommodate a larger load by making the hardware stronger (scale up) or by adding nodes (scale out).
Elasticity: once a system is scalable, elasticity means there is some auto-scaling so that the system can scale based on the load. This is cloud friendly: pay per use, match demand, optimize costs.
Agility: not related to scalability or elasticity. New IT resources are only a click away, which means you reduce the time to make those resources available to developers from weeks to just minutes.
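
As an illustration of dynamic scaling (the group and policy names are placeholders), a hedged boto3 sketch of a target tracking policy that keeps the group's average CPU utilization near 50%:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: EC2 Auto Scaling creates and manages the CloudWatch
# alarms and adds/removes capacity to keep average CPU close to 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",                    # placeholder ASG name
    PolicyName="keep-cpu-at-50",                        # placeholder policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```
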
Questions:
1. What AWS best practice is recommended for creating fault tolerant systems?
A. vertical scaling
B. Elastic IP (EIP)
C. security groups
D. horizontal scaling
E. RedShift
Answer (D)