AWS Dumps

1. IAM
2. Billing Alarm
3. S3
4. Creation of S3 Bucket
5. S3 Pricing Tiers
6. S3 Security and Encryption
7. S3 Version Control
8. S3 Life Cycle Management
9. S3 Lock Policies and Glacier Vault Lock
10. S3 Performance
11. S3 Select and Glacier Select
12. AWS Organizations & Consolidated Billing
13. Sharing S3 Buckets between Accounts
14. Cross Region Replication
15. Transfer Acceleration
16. DataSync Overview
17. CloudFront Overview
18. CloudFront Signed URLs and Cookies
19. Snowball
20. Storage Gateway
21. Athena versus Macie
22. EC2
23. Security Groups
24. EBS
25. Volumes & Snapshots
26. AMI Types (EBS vs Instance Store)
27. ENI vs ENA vs EFA
28. Encrypted Root Device Volumes & Snapshots
29. Spot Instances & Spot Fleets
30. EC2 Hibernate
31. CloudWatch
32. AWS Command Line
33. IAM Roles with EC2
34. Bootstrap Scripts
35. EC2 Instance Metadata
36. EFS
37. FSx for Windows & FSx for Lustre
38. EC2 Placement Groups
39. HPC
40. WAF
41. Databases
42. Create an RDS Instance
43. RDS Backups, Multi-AZ & Read Replicas
44. DynamoDB
45. Advanced DynamoDB
46. Redshift
47. Aurora
48. ElastiCache
49. Database Migration Services (DMS)
50. Caching Strategies
51. EMR
52. Directory Service
53. IAM Policies
54. Resource Access Manager (RAM)
55. Single Sign-On
56. Route 53 – Domain Name System (DNS)
57. Route 53 – Register a Domain Name Lab
58. Route 53 Routing Policies
59. Route 53 Simple Routing Policy
60. Route 53 Weighted Routing Policy
61. Route 53 Latency Routing Policy
62. Route 53 Failover Routing Policy
63. Route 53 Geolocation Routing Policy
64. Route 53 Geoproximity Routing Policy (Traffic Flow Only)
65. Route 53 Multivalue Answer
66. VPCs
67. Build a Custom VPC
68. Network Address Translation (NAT)
69. Access Control List (ACL)
70. Custom VPCs and ELBs
71. VPC Flow Logs
72. Bastions
73. Direct Connect
74. Setting Up a VPN Over a Direct Connect Connection
75. Global Accelerator
76. VPC Endpoints
77. VPC PrivateLink
78. Transit Gateway
79. VPN Hub
80. Networking Costs
81. ELB
82. ELBs and Health Checks – LAB
83. Advanced ELB
84. ASG
85. Launch Configurations & Autoscaling Groups Lab
86. HA Architecture
87. Building a fault tolerant WordPress site – Lab 1
88. Building a fault tolerant WordPress site – Lab 2
89. Building a fault tolerant WordPress site – Lab 3 : Adding Resilience & Autoscaling
90. Building a fault tolerant WordPress site – Lab 4 : Cleaning Up
91. Building a fault tolerant WordPress site – Lab 5 : CloudFormation
92. Elastic Beanstalk Lab
93. Highly Available Bastions
94. On Premise Strategies
95. SQS
96. SWF
97. SNS
98. Elastic Transcoder
99. API Gateway
100. Kinesis
101. Web Identity Federation – Cognito
102. Reducing Security Threats
103. Key Management Service (KMS)
104. CloudHSM
105. Parameter Store
106. Lambda
107. Build a Serverless Webpage with API Gateway and Lambda
108. Build an Alexa Skill
109. Serverless Application Model (SAM)
110. Elastic Container Service (ECS)
111. Miscellaneous


1. IAM

Question 1:
As an operations administrator, you run a set of applications hosted on AWS. Your company has decided to introduce Amazon API Gateway for inter-application communication. You need to implement permission management for API Gateway and give developers, IT administrators, and users the appropriate level of access to manage the APIs.
Select the most appropriate method to implement this permission management.
Options:
A. Use STS to set access rights to individual users
B. Use IAM policy to set access rights to individual users
C. Use AWS Config to set access permissions for individual users
D. Use the IAM access key to set access privileges for individual users
Answer: B
Explanation:
This scenario asks how to give developers, IT administrators, and users the appropriate level of access to API Gateway. Access to Amazon API Gateway is controlled with IAM policies: to allow callers to manage or invoke APIs, you create the appropriate IAM policy and attach it to their IAM users, groups, or roles.
Option A is incorrect. STS issues temporary security credentials and is not suitable for granting standing, medium- to long-term access.
Option C is incorrect. AWS Config records and evaluates resource configurations; it does not grant permissions.
Option D is incorrect. IAM access keys are credentials for signing API requests, not a mechanism for defining permissions. Permissions must be managed through IAM users or IAM roles and their policies.
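As a minimal sketch, an identity-based IAM policy that lets users invoke an API Gateway API could look like the following (the region, account ID, and API ID "a1b2c3d4e5" are placeholders, not values from the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/GET/*"
    }
  ]
}
Developers and administrators who manage the APIs themselves would instead be granted apigateway actions scoped to the relevant resources.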

Question 2:
The following IAM policy sets permissions for EC2 instances.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "ec2:*ReservedInstances*",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    }
  ]
}
Select the correct description for these settings.
Options:
A. Operations on Reserved Instances are allowed
B. All operations on EC2 instances are allowed
C. Termination of any EC2 instance is denied
D. Only termination of Reserved Instances is denied
Answer: C
Explanation
This IAM policy allows all EC2 actions but denies all Reserved Instance operations and the termination of any EC2 instance.
The first statement grants full access to EC2:
{
  "Action": "ec2:*",
  "Effect": "Allow",
  "Resource": "*"
},
The second statement explicitly denies all Reserved Instance actions and the action that terminates EC2 instances:
{
  "Effect": "Deny",
  "Action": [
    "ec2:*ReservedInstances*",
    "ec2:TerminateInstances"
  ],
  "Resource": "*"
}
Because an explicit Deny always overrides an Allow, no Reserved Instance operations can be performed and no EC2 instance can be terminated. Therefore, option C is the correct answer.

Question 3:
As a Solutions Architect, you plan to use SQS queues and Lambda to build a serverless configuration in the AWS cloud. In this configuration, messages in the SQS queue trigger Lambda functions that run in parallel and store the data in DynamoDB.
Select the setting required for Lambda to process messages from the SQS queue.
Options:
A. Need to use FIFO queue
B. Integrate Lambda functions with API Gateway
C. Set the IAM role to a Lambda function
D. Set security group to Lambda function
Answer: C
Explanation
If your Lambda function needs to access other AWS resources, it must have an IAM role (the execution role) that grants access to those services. In this case the Lambda function needs access to SQS, so option C is the correct answer.
Use the IAM execution role to grant the function the permissions it needs on other AWS resources.
The IAM identity that configures the event source mapping needs the following permissions:
lambda:CreateEventSourceMapping
lambda:ListEventSourceMappings
lambda:ListFunctions
The Lambda execution role must include the following permissions:
sqs:DeleteMessage
sqs:GetQueueAttributes
sqs:ReceiveMessage
If you want to associate an encrypted queue with your Lambda function, also add the kms:Decrypt permission to the execution role.
Option A is incorrect. Standard queues can also be used as Lambda event sources; a FIFO queue is not required.
Option B is incorrect. You don't need to integrate your Lambda function with API Gateway, because the function is triggered by the SQS queue.
Option D is incorrect. You don't need to set a security group on your Lambda function.
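A minimal sketch of the permissions portion of such an execution role (the region, account ID, and queue name in the ARN are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111122223333:example-queue"
    }
  ]
}
In this scenario the role would also need permissions to write to the DynamoDB table and to CloudWatch Logs.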

Question 4:
A developer created an application that uses Amazon EC2 and an Amazon RDS MySQL database instance. The developer stored the database user name and password in a configuration file on the root EBS volume of the EC2 application instance. A Solutions Architect has been asked to design a more secure solution.
What should the Solutions Architect do to achieve this requirement?
Options:
A. Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume
B. Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database
C. Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance
D. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance
Answer: D
Explanation
The key problem here is having plain-text credentials stored in a file. Even if you encrypt the volume, there is still a security risk because the credentials are loaded by the application and passed to RDS.
The best way to secure this solution is to get rid of the credentials completely by using an IAM role instead. The IAM role can be assigned permissions to the database instance and can be attached to the EC2 instance. The instance will then obtain temporary security credentials from AWS STS which is much more secure.
CORRECT: “Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance” is the correct answer.
INCORRECT: “Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance” is incorrect. This just relocates the file; the contents are still unsecured and must be loaded by the application and passed to RDS. This is an insecure process.
INCORRECT: “Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume” is incorrect. This will only encrypt the file at rest, it still must be read, and the contents passed to RDS which is insecure.
INCORRECT: “Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database” is incorrect. The file is still unsecured on the EBS volume so encrypting the credentials in an encrypted channel between the EC2 instance and RDS does not solve all security issues.
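One way to implement this is IAM database authentication for RDS MySQL. A sketch of the policy attached to the instance's IAM role, assuming the region, account ID, DB resource ID, and database user name below are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:111122223333:dbuser:db-ABCDEFGHIJKL01234/app_user"
    }
  ]
}
The application then requests a short-lived authentication token instead of storing a password.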

Question 5:
A company requires that all AWS IAM user accounts have specific complexity requirements and minimum password length.
How should a Solutions Architect accomplish this?
Options:
A. Set a password policy for each IAM user in the AWS account
B. Create an IAM policy that enforces the requirements and apply it to all users
C. Set a password policy for the entire AWS account
D. Use an AWS Config rule to enforce the requirements when creating user accounts.
Answer: C
Explanation
The easiest way to enforce this requirement is to update the password policy that applies to the entire AWS account. When you create or change a password policy, most of the password policy settings are enforced the next time your users change their passwords. However, some of the settings are enforced immediately such as the password expiration period.
CORRECT: “Set a password policy for the entire AWS account” is the correct answer.
INCORRECT: “Set a password policy for each IAM user in the AWS account” is incorrect. There’s no need to set an individual password policy for each user, it will be easier to set the policy for everyone.
INCORRECT: “Create an IAM policy that enforces the requirements and apply it to all users” is incorrect. As there is no specific targeting required it is easier to update the account password policy.
INCORRECT: “Use an AWS Config rule to enforce the requirements when creating user accounts” is incorrect. You cannot use AWS Config to enforce the password requirements at the time of creating a user account.
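As a hedged sketch, the account password policy can be set through the UpdateAccountPasswordPolicy API (for example, passed to the CLI with --cli-input-json); the specific values below are illustrative:
{
  "MinimumPasswordLength": 14,
  "RequireSymbols": true,
  "RequireNumbers": true,
  "RequireUppercaseCharacters": true,
  "RequireLowercaseCharacters": true,
  "AllowUsersToChangePassword": true,
  "MaxPasswordAge": 90,
  "PasswordReusePrevention": 5
}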

Question 6:
An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account.
As a solutions architect, which of the following steps would you recommend?
Options:
A. Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment
B. Both IAM roles and IAM users can be used interchangeably for cross-account access
C. It is not possible to access cross-account resources
D. Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
Answer: D
Explanation
Correct option:
Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
IAM roles allow you to delegate access to users or services that normally don’t have access to your organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don’t have to share long-term credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
Incorrect options:
Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment – There is no need to create new IAM user credentials for the production environment, as you can use IAM roles to access cross-account resources.
It is not possible to access cross-account resources – You can use IAM roles to access cross-account resources.
Both IAM roles and IAM users can be used interchangeably for cross-account access – IAM roles and IAM users are separate IAM entities and should not be mixed. Only IAM roles can be used to access cross-account resources.
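A minimal sketch of the trust policy on the role in the production account, assuming 111122223333 is the development account ID (a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
The development-account users additionally need an identity policy that allows sts:AssumeRole on the production role's ARN.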

Question 7:
A large IT company wants to federate its workforce into AWS accounts and business applications.
Which of the following AWS services can help build a solution for this requirement? (Select two)
Options:
A. Use AWS Organizations
B. Use Multi-Factor Authentication
C. Use AWS Identity and Access Management (IAM)
D. Use AWS Security Token Service (AWS STS) to get temporary security credentials
E. Use AWS Single Sign-On (SSO)
Answer: C & E
Explanation
Correct options:
Use AWS Single Sign-On (SSO)
Use AWS Identity and Access Management (IAM)
Identity federation is a system of trust between two parties for the purpose of authenticating users and conveying the information needed to authorize their access to resources. In this system, an identity provider (IdP) is responsible for user authentication, and a service provider (SP), such as a service or an application, controls access to resources. By administrative agreement and configuration, the SP trusts the IdP to authenticate users and relies on the information provided by the IdP about them. After authenticating a user, the IdP sends the SP a message, called an assertion, containing the user’s sign-in name and other attributes that the SP needs to establish a session with the user and to determine the scope of resource access that the SP should grant. Federation is a common approach to building access control systems that manage users centrally within a central IdP and govern their access to multiple applications and services acting as SPs.
You can use two AWS services to federate your workforce into AWS accounts and business applications: AWS Single Sign-On (SSO) or AWS Identity and Access Management (IAM). AWS SSO is a great choice to help you define federated access permissions for your users based on their group memberships in a single centralized directory. If you use multiple directories or want to manage the permissions based on user attributes, consider AWS IAM as your design alternative.
Incorrect options:
Use Multi-Factor Authentication – AWS multi-factor authentication (AWS MFA) provides an extra level of security that you can apply to your AWS environment. You can enable AWS MFA for your AWS account and for individual AWS Identity and Access Management (IAM) users you create under your account. MFA adds another layer of security to IAM but is not a stand-alone federation service.
Use AWS Security Token Service (AWS STS) to get temporary security credentials – Temporary security credentials consist of the AWS access key ID, secret access key, and security token. Temporary security credentials are valid for a specified duration and for a specific set of permissions. If you’re making direct HTTPS API requests to AWS, you can sign those requests with the temporary security credentials that you get from AWS Security Token Service (AWS STS). STS is not a federation service.
Use AWS Organizations – AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. It does not offer federation capability, as is needed in the use case.
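For IAM-based federation with a SAML identity provider, the federated role's trust policy typically looks like the sketch below; the account ID and provider name "ExampleIdP" are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "arn:aws:iam::111122223333:saml-provider/ExampleIdP" },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" }
      }
    }
  ]
}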

Question 8:
An IT company wants to review its security best-practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature.
Which is the MOST effective way to address this issue so that such incidents do not recur?
Options:
A. The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur
B. Only root user should have full database access in the organization
C. Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
D. Remove full database access for all IAM users in the organization
Answer: C
Explanation
Correct option:
Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
A permissions boundary can be used to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. Therefore, using the permissions boundary offers the right solution for this use-case.
Incorrect options:
Remove full database access for all IAM users in the organization – It is not practical to remove full access for all IAM users in the organization because a select set of users need this access for database administration. So this option is not correct.
The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur – Likewise the CTO is not expected to review the permissions for each new developer’s IAM user, as this is best done via an automated procedure. This option has been added as a distractor.
Only root user should have full database access in the organization – As a best practice, the root user should not access the AWS account to carry out any administrative procedures. So this option is not correct.
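A sketch of a managed policy that could be used as a permissions boundary for new developers; the actions allowed here are illustrative. Even if a developer's identity policy grants full DynamoDB access, the effective permissions are the intersection of that policy and this boundary, so destructive actions such as deleting tables remain blocked:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DescribeTable"
      ],
      "Resource": "*"
    }
  ]
}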

Question 9:
A development team requires permissions to list an S3 bucket and delete objects from that bucket. A systems administrator has created the following IAM policy to provide access to the bucket and attached it to the development group. The group is not able to delete objects in the bucket. The company follows the principle of least privilege.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket"
      ],
      "Effect": "Allow"
    }
  ]
}
Which statement should a solutions architect add to the policy to address this issue?
Answer:
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
The main elements of a policy statement are:
Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect defined here).
Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service (DeleteObject is the action defined here).
Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format ( example-bucket/* is the resource defined here).
This policy provides the necessary delete permissions on the resources of the S3 bucket to the group.
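Putting the original statement and the added statement together, the complete corrected policy would be similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example-bucket"],
      "Effect": "Allow"
    },
    {
      "Action": ["s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"],
      "Effect": "Allow"
    }
  ]
}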

Question 10:
An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
Options:
A. Encrypt the access keys and save them on Amazon S3
B. Create AWS account root user access keys and share those keys only with the business owner
C. Enable Multi Factor Authentication (MFA) for the AWS account root user account
D. Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future
E. Create a strong password for the AWS account root user
Answer: C & E
Explanation
Correct options:
Create a strong password for the AWS account root user
Enable Multi Factor Authentication (MFA) for the AWS account root user account
Here are some of the best practices while creating an AWS account root user:
1) Use a strong password to help protect account-level access to the AWS Management Console. 2) Never share your AWS account root user password or access keys with anyone. 3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3. 4) If you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. 5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.
Incorrect options:
Encrypt the access keys and save them on Amazon S3 – AWS recommends that if you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. Even an encrypted access key for the root user poses a significant security risk. Therefore, this option is incorrect.
Create AWS account root user access keys and share those keys only with the business owner – AWS recommends that if you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. Hence, this option is incorrect.
Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future – AWS recommends that you should never share your AWS account root user password or access keys with anyone. Sending an email with AWS account root user credentials creates a security risk as it can be misused by anyone reading the email. Hence, this option is incorrect.

Question 11:
A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management.
As a solutions architect, which best practices would you recommend (Select two)?
Options:
A. Create a minimum number of accounts and share these account credentials among employees
B. Configure AWS CloudTrail to record all account activity
C. Enable MFA for privileged users
D. Grant maximum privileges to avoid assigning privileges again
E. Use user credentials to provide specific access permissions for Amazon EC2 instances
Answer: B & C
Explanation
Correct options:
Enable MFA for privileged users – As per the AWS best practices, it is better to enable Multi Factor Authentication (MFA) for privileged users via an MFA-enabled mobile device or hardware MFA token.
Configure AWS CloudTrail to record all account activity – AWS recommends turning on CloudTrail to log all IAM actions for monitoring and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account credentials among employees – AWS recommends that user account credentials should not be shared between users. So, this option is incorrect.
Grant maximum privileges to avoid assigning privileges again – AWS recommends granting the least privileges required to complete a certain job and avoid giving excessive privileges which can be misused. So, this option is incorrect.
Use user credentials to provide specific access permissions for Amazon EC2 instances – It is highly recommended to use IAM roles to grant access permissions for EC2 instances working with different AWS services. So, this option is incorrect.

Question 17:
What does this IAM policy do?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Mystery Policy",
      "Action": [
        "ec2:RunInstances"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "34.50.31.0/24"
        }
      }
    }
  ]
}
Options:
A. It allows starting EC2 instances only when they have a Public IP within the 34.50.31.0/24 CIDR block
B. It allows starting EC2 instances only when the IP where the call originates is within the 34.50.31.0/24 CIDR block
C. It allows starting EC2 instances only when they have a Private IP within the 34.50.31.0/24 CIDR block
D. It allows starting EC2 instances only when they have an Elastic IP within the 34.50.31.0/24 CIDR block
Answer: B
Explanation
Correct option:
It allows starting EC2 instances only when the IP where the call originates is within the 34.50.31.0/24 CIDR block
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
Consider the following snippet from the given policy document:
"Condition": {
  "IpAddress": {
    "aws:SourceIp": "34.50.31.0/24"
  }
}
The aws:SourceIp key in this condition always represents the IP address of the caller of the API. That is very helpful if you want to restrict access to certain AWS APIs, for example, to calls made from the public IP range of your on-premises infrastructure.
Please see this overview of Elastic vs Public vs Private IP addresses:
Elastic IP address – An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Private IP address – A private IPv4 address is an IP address that’s not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same VPC.
Public IP address – A public IP address is an IPv4 address that’s reachable from the Internet. You can use public addresses for communication between your instances and the Internet.
Please note that 34.50.31.0/24 is a public IP range, not a private IP range. The private IP ranges are:
192.168.0.0 – 192.168.255.255 (65,536 IP addresses)
172.16.0.0 – 172.31.255.255 (1,048,576 IP addresses)
10.0.0.0 – 10.255.255.255 (16,777,216 IP addresses)
Incorrect options:
It allows starting EC2 instances only when they have a Public IP within the 34.50.31.0/24 CIDR block
It allows starting EC2 instances only when they have an Elastic IP within the 34.50.31.0/24 CIDR block
It allows starting EC2 instances only when they have a Private IP within the 34.50.31.0/24 CIDR block
Each of these three options suggests that the IP addresses of the EC2 instances must belong to the 34.50.31.0/24 CIDR block for the EC2 instances to start. Actually, the policy states that the EC2 instance should start only when the IP where the call originates is within the 34.50.31.0/24 CIDR block. Hence these options are incorrect.

Question 32:
What does this IAM policy do?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Mystery Policy",
      "Action": [
        "ec2:RunInstances"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": "eu-west-1"
        }
      }
    }
  ]
}
A. It allows running EC2 instances in any region when the API call originates from the eu-west-1 region
B. It allows running EC2 instances anywhere but in the eu-west-1 region
C. It allows running EC2 instances in the eu-west-1 region only when the API call is made from the eu-west-1 region
D. It allows running EC2 instances only in the eu-west-1 region, and the API call can be made from anywhere in the world
Answer: D
Explanation
Correct option:
It allows running EC2 instances only in the eu-west-1 region, and the API call can be made from anywhere in the world
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
You can use the aws:RequestedRegion key to compare the AWS Region that was called in the request with the Region that you specify in the policy. You can use this global condition key to control which Regions can be requested.
aws:RequestedRegion represents the target of the API call. So in this example, we can only launch EC2 instances in eu-west-1, and we can do this API call from anywhere.
Incorrect options:
It allows running EC2 instances anywhere but in the eu-west-1 region
It allows running EC2 instances in any region when the API call is originating from the eu-west-1 region
It allows running EC2 instances in the eu-west-1 region when the API call is made from the eu-west-1 region
These three options contradict the earlier details provided in the explanation. To summarize, aws:RequestedRegion represents the target of the API call. So, we can only launch EC2 instances in eu-west-1 region and we can do this API call from anywhere. Hence these options are incorrect.

Question 35:
You have a team of developers in your company, and you would like to ensure they can quickly experiment with AWS managed policies by attaching them to their own accounts, while preventing them from escalating their privileges by granting themselves the AdministratorAccess managed policy. How should you proceed?
A. Attach an IAM policy to your developers that prevents them from attaching the AdministratorAccess policy
B. Create a Service Control Policy (SCP) on your AWS account that restricts developers from attaching themselves the AdministratorAccess policy
C. For each developer, define an IAM permission boundary that will restrict the managed policies they can attach to themselves
D. Put the developers into an IAM group, and then define an IAM permission boundary on the group that will restrict the managed policies they can attach to themselves
Answer: C
Explanation
Correct option:
For each developer, define an IAM permission boundary that will restrict the managed policies they can attach to themselves
AWS supports permissions boundaries for IAM entities (users or roles). A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries. Here we have to use an IAM permission boundary. They can only be applied to roles or users, not IAM groups.
Incorrect options:
Create a Service Control Policy (SCP) on your AWS account that restricts developers from attaching themselves the AdministratorAccess policy – Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines. SCPs are available only in an organization that has all features enabled; they aren't available if your organization has enabled only the consolidated billing features. Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform. Since AWS Organizations is not mentioned in this question, an SCP cannot be applied here.
Attach an IAM policy to your developers that prevents them from attaching the AdministratorAccess policy – This option is incorrect, as the developers could remove this policy from themselves and escalate their privileges.
Put the developers into an IAM group, and then define an IAM permission boundary on the group that will restrict the managed policies they can attach to themselves – IAM permission boundary can only be applied to roles or users, not IAM groups. Hence this option is incorrect.
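A hedged sketch of such a permissions boundary; the allowed services below are illustrative. Even if a developer attaches AdministratorAccess to their own user, their effective permissions cannot exceed what this boundary allows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeveloperBoundary",
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "lambda:*",
        "dynamodb:*",
        "cloudwatch:*",
        "logs:*"
      ],
      "Resource": "*"
    }
  ]
}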

Question 38:
You would like to store a database password in a secure place, and enable automatic rotation of that password every 90 days. What do you recommend?
A. Key Management Service (KMS)
B. CloudHSM
C. Secrets Manager
D. SSM Parameter Store
Answer: C
Explanation
Correct option:
“Secrets Manager”
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB, which makes it the correct answer here.
Incorrect options:
“KMS” – AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify a customer-managed CMK that you have already created. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. KMS is an encryption service, it’s not a secrets store. So this option is incorrect.
“CloudHSM” – AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM is standards-compliant and enables you to export all of your keys to most other commercially-available HSMs, subject to your configurations. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.
CloudHSM is also an encryption service, not a secrets store. So this option is incorrect.
“SSM Parameter Store” – AWS Systems Manager Parameter Store (aka SSM Parameter Store) provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, EC2 instance IDs, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter.
SSM Parameter Store can serve as a secrets store, but you must rotate the secrets yourself, it doesn’t have an automatic capability for this. So this option is incorrect.
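A hedged sketch of the JSON input for the Secrets Manager RotateSecret API call that turns on automatic rotation every 90 days; the secret name and rotation Lambda ARN are placeholders:
{
  "SecretId": "prod/app/db-password",
  "RotationLambdaARN": "arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
  "RotationRules": {
    "AutomaticallyAfterDays": 90
  }
}
For supported RDS engines, Secrets Manager can provision the rotation Lambda function for you when rotation is enabled from the console.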


2. Billing Alarm


3. S3

Question 1:
Your company is designing a web application that stores static content in an Amazon S3 bucket. As a non-functional requirement, this bucket must reliably handle more than 150 PUT requests per second.
What should you do to ensure optimal performance?
Options:
A. Use a random prefix for object key names
B. Use a prefix such as a date for object key names
C. Use multipart upload
D. Enable an S3 lifecycle rule
Answer: B
Explanation:
Option B is the correct answer. Amazon S3 automatically scales to support at least 3,500 PUT/COPY/POST/DELETE requests per second and 5,500 GET/HEAD requests per second per prefix with its default settings. Previously, randomizing key prefixes was necessary for S3 to deliver this level of performance, but the default settings now support these request rates, so organizing keys with a logical prefix such as a date is sufficient for 150 PUT requests per second.
Option A is incorrect. This was previously the recommended approach, but with the improved S3 request-rate performance, you no longer need to randomize object key prefixes.
Option C is incorrect. Multipart upload is used for uploading large objects to S3 and has no effect on this requirement.
Option D is incorrect. S3 lifecycle rules have nothing to do with improving request performance.

Question 2:
Your company wants to use AWS as a mechanism for managing their documents. Documents stored by your company may be used frequently in the early stages, but after four months they will be used less frequently, so you will need to archive the documents appropriately.
Which AWS service settings do you need to configure to meet this requirement?
Options:
A. Set a life cycle rule to store data in EBS and move to S3 after 4 months
B. Set a life cycle rule to store data in S3 Standard and move to Glacier after 4 months
C. Set a life cycle rule to store data in EFS and move to Glacier after 4 months
D. Set a life cycle rule to store data in S3 RRS and move to Glacier after 4 months
Answer: B
Explanation:
Store the documents in S3 Standard, which provides efficient access while they are used frequently in the early stage, and set a lifecycle rule to transition them to a lower-cost storage class after four months. Glacier (or Glacier Deep Archive) is the common choice for long-term archival, so option B is the correct answer.
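A minimal sketch of the lifecycle configuration, as accepted by the PutBucketLifecycleConfiguration API, that would transition all objects to Glacier after roughly four months; the rule ID and the 120-day figure are illustrative:
{
  "Rules": [
    {
      "ID": "ArchiveAfterFourMonths",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 120, "StorageClass": "GLACIER" }
      ]
    }
  ]
}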

Question 3:
The following bucket policy sets permissions for S3 buckets.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "54.240.143.0/24" }
      }
    }
  ]
}
Select the correct description of this setting.
Options:
A. All actions from the specified IP address range can be performed on this S3 bucket
B. All actions can be performed on this S3 bucket from outside the specified IP address range
C. Access to this S3 bucket from the specified IP address range is denied
D. Access to this S3 bucket from outside of the specified IP address range is denied
Answer: D
Explanation
This bucket policy contains a single Deny statement that applies to all principals and all S3 actions on objects in examplebucket:
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
The Condition block limits when the Deny applies. Because it uses the NotIpAddress condition with aws:SourceIp and the range 54.240.143.0/24, the statement affects only requests that originate from IP addresses outside 54.240.143.0/24:
"Condition": {
  "NotIpAddress": { "aws:SourceIp": "54.240.143.0/24" }
}
As a result, access to objects in this S3 bucket is denied when the request comes from outside the specified IP address range. Therefore, option D is the correct answer.

Question 4:
You have set up S3 for your data management application. This application makes read, write, and update requests on objects in the S3 bucket.
If you overwrite an object using the same key name, how is the update reflected? For example, could a read immediately after the update return stale or inconsistent data?
Options:
A. Since S3 uses an eventual consistency model, a read after the update may return stale data
B. Since S3 uses an eventual consistency model, reads always reflect the update
C. Since S3 uses a strong consistency model, a read after the update may return stale data
D. Since S3 uses a strong consistency model, reads always reflect the update
Answer: D
Explanation
Option 4 is the correct answer. S3 utilizes a strong consistency model, so there are no errors in reflection. Before December 2020, S3 used an eventual consistency model. If an update was made on an object with the same key name as the original object, the read request immediately after might not reflect the updated object. However, S3 now uses a strong consistency model, so these discrepancies no longer occur.
S3 adopted the “strong consistency model” for data registration / update / deletion.
Options 1 and 2 are incorrect. S3 used an eventual consistency model, but recently it as now improved to a strong consistency model.
Option 3 is incorrect. S3 now utilizes a strong consistency model, which eliminates the possibility of reflection errors.

Question 5:
Your company uses a business application hosted on AWS to manage records related to daily business. According to industry regulations, recorded data must be retained for 5 years. Most of these archives are rarely accessed, but data must be provided within 24 hours in response to an audit request.
Which of the following storage should you choose as the most cost-effective storage?
Options:
A. Amazon Glacier (standard retrieval)
B. Amazon S3 Glacier Deep Archive
C. S3 Standard
D. S3 One Zone IA
E. S3 Standard IA
Answer: B
Explanation
Option 2 is the correct answer. In this scenario, storage requirements are cost-effective to store data over the medium to long term and extract data within 24 hours. The Glacier Deep Archive storage class is designed to offer durable, secure, high-volume data storage at the lowest prices on AWS. Data is stored across three or more of his AWS Availability Zones and can be retrieved within 12 hours.
Option 1 is incorrect. Glacier is cheap and suitable for long-term storage of data, but it is a storage that takes several hours to acquire data. Data can be acquired in about 1 to 5 minutes by using quick reading. However, the Glacier Deep Archive storage class is cheaper than Glacier.
Option 3 is incorrect. S3 Standard is the most costly data storage in S3 and does not meet this requirement.
Option 4 is incorrect. S3 One Zone-IA saves money by storing infrequently accessed data in a single, less resilient, Availability Zone. However, the Glacier Deep Archive storage class is cheaper than the S3 One Zone-IA.
Option 5 is incorrect. Standard-IA is for infrequent access, but it can be read quickly, so it can be used suddenly. However, the Glacier Deep Archive storage class is cheaper than Standard-IA.

Question 6:
As a Solutions Architect, you use AWS to build solutions for managing and storing corporate documents. Once the data is saved, it is rarely used, but it is required to be obtained within 10 hours according to the administrator’s instructions if necessary. You have decided to use Amazon Glacier and are considering how to set it up.
How should you set the data acquisition method for Glacier?
Options:
A. Expedited retrievals
B. Standard retrievals
C. Bulk retrievals
D. Vault lock
Answer: B
Explanation
Glacier's standard retrieval is the optimal setting because the requirement is to retrieve the data within 10 hours when instructed by the administrator. With standard retrieval, you can access any archive within 3-5 hours. Therefore, option B is the correct answer.
Option A is incorrect. Glacier's expedited retrievals give you quick access to your data when you need a subset of your archives urgently. For all but the largest archives (250 MB and above), data accessed with expedited retrieval is typically available within 1-5 minutes. However, expedited retrieval is not the cost-optimal choice in this situation.
Option C is incorrect. Bulk retrieval is Glacier's cheapest retrieval option and allows you to retrieve large amounts of data (even petabytes) within a day. Bulk retrievals typically take 5-12 hours, so completion within 10 hours cannot be guaranteed.
Option D is incorrect. Glacier Vault Lock lets you easily deploy and enforce compliance controls on individual Glacier vaults using a vault lock policy. You can specify controls such as write once read many (WORM) and lock the policy against future edits. This feature is irrelevant to this requirement.
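As a hedged illustration, a standard retrieval is requested by initiating an archive-retrieval job. The job parameters passed to the Glacier InitiateJob API might look like the following, where the archive ID and description are placeholders:
{
  "Type": "archive-retrieval",
  "ArchiveId": "EXAMPLE-ARCHIVE-ID",
  "Tier": "Standard",
  "Description": "Audit document retrieval"
}
Setting "Tier" to "Expedited" or "Bulk" would select the other retrieval options discussed above.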

Question 7:
Your company develops and operates a public-facing application that serves image data. The image data is stored in S3, and the application displays it temporarily in response to a user's request. The images must be protected so that they are available only to specific users.
Which mechanism should you use to meet this requirement?
Options:
A. Distribute images with a time-limited pre-signed URL
B. Image distribution by CloudFront distribution
C. Protect your images with an encryption key
D. Limit users by switching to EFS image sharing
Answer: A
Explanation
If you create a pre-signed URL using credentials that have permission to the object, only users who hold that pre-signed URL can access the object, and only until it expires. Using this feature, the application can grant a specific user time-limited access to the target image. Therefore, option A is the correct answer.
Option B is incorrect. You cannot limit which users receive images using CloudFront delivery settings alone; you would need CloudFront signed URLs or signed cookies.
Option C is incorrect. Encrypting the images does not, by itself, restrict which users can view them.
Option D is incorrect. EFS is storage for sharing data between instances and cannot be accessed by third parties over the Internet, so S3 remains the more appropriate service for serving data externally.

Question 8:
A company stores employee user profiles and access logs in S3. Because this data is uploaded and modified on a daily basis, there is a concern that users may accidentally delete objects in the S3 bucket. Preventive measures are needed, but they must not disrupt day-to-day operations.
Options:
A. Enable the versioning feature on S3 bucket
B. Enable encryption in S3 bucket
C. Enable MFA authentication on S3 bucket
D. Set data deletion not possible for S3 bucket
E. Set deletion refusal by IAM role in S3 bucket
Answer: A & C
Explanation
By enabling MFA for delete operations on your S3 bucket, users are required to complete MFA authentication every time they attempt a deletion, which prevents deletions caused by operational mistakes. Furthermore, you can restore deleted files by enabling versioning. Therefore, options A and C are correct.
Option B is incorrect. Enabling encryption in your S3 bucket increases data protection, but it does not prevent data loss.
Option D is incorrect. An S3 bucket can be configured so that objects cannot be deleted, but only when the bucket is first created; this cannot be changed on a bucket that is already in use. In addition, legitimate deletion operations are sometimes required, so this option is inappropriate here.
Option E is incorrect. Access permissions for individual users are controlled with IAM user policies or bucket policies, not by an IAM role attached to the bucket.

Question 9:
A video production company is planning to move some of its workloads to the AWS Cloud. The company will require around 5 TB of storage for video processing with the maximum possible I/O performance. They also require over 400 TB of extremely durable storage for storing video files and 800 TB of storage for long-term archival.
Which combinations of services should a Solutions Architect use to meet these requirements?
Options:
A. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
B. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
C. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
D. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
Answer: B
Explanation
The best I/O performance can be achieved by using instance store volumes for the video processing. Instance store is safe to use here because the data can be recreated from the source files if an instance fails, making this a good use case.
For storing data durably Amazon S3 is a good fit as it provides 99.999999999% of durability. For archival the video files can then be moved to Amazon S3 Glacier which is a low cost storage option that is ideal for long-term archival.
CORRECT: “Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage” is the correct answer.
INCORRECT: “Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage” is incorrect. EBS is not going to provide as much I/O performance as an instance store volume so is not the best choice for this use case.
INCORRECT: “Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage” is incorrect. EFS does not provide as much durability as Amazon S3 and will not be as cost-effective.
INCORRECT: “Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage” is incorrect. EBS and EFS are not the best choices here as described above.

Question 10:
A company has uploaded some highly critical data to an Amazon S3 bucket. Management are concerned about data availability and require that steps are taken to protect the data from accidental deletion. The data should still be accessible, and a user should be able to delete the data intentionally.
Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)
Options:
A. Enable MFA delete on the S3 bucket
B. Create a bucket policy on the S3 bucket
C. Enable default encryption on the S3 bucket
D. Enable versioning on the S3 bucket
E. Create a lifecycle policy for the objects in the S3 bucket
Answer: A & D
Explanation
Multi-factor authentication (MFA) delete adds an additional step before an object can be deleted from a versioning-enabled bucket.
With MFA delete the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket.
CORRECT: “Enable versioning on the S3 bucket” is a correct answer.
CORRECT: “Enable MFA Delete on the S3 bucket” is also a correct answer.
INCORRECT: “Create a bucket policy on the S3 bucket” is incorrect. A bucket policy is not required to enable MFA delete.
INCORRECT: "Enable default encryption on the S3 bucket" is incorrect. Encryption does not protect against deletion.
INCORRECT: “Create a lifecycle policy for the objects in the S3 bucket” is incorrect. A lifecycle policy will move data to another storage class but does not protect against deletion.

Question 11:
A solutions architect is creating a document submission application for a school. The application will use an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to upload and modify the documents.
Which combination of actions should be taken to meet these requirements? (Select TWO.)
Options:
A. Enable MFA delete on the bucket
B. Encrypt the bucket using AWS SSE-S3
C. Set read-only permissions on the bucket
D. Attach an IAM policy to the bucket
E. Enable versioning on the bucket
Answer: A & E
Explanation
None of the options present a good solution for specifying the permissions required to write and modify objects, so that requirement needs to be taken care of separately. The other requirements are to prevent accidental deletion and to ensure that all versions of the documents are available.
The two solutions for these requirements are versioning and MFA delete. Versioning will retain a copy of each version of the document and multi-factor authentication delete (MFA delete) will prevent any accidental deletion as you need to supply a second factor when attempting a delete.
CORRECT: “Enable versioning on the bucket” is a correct answer.
CORRECT: “Enable MFA Delete on the bucket” is also a correct answer.
INCORRECT: “Set read-only permissions on the bucket” is incorrect as this will also prevent any writing to the bucket which is not desired.
INCORRECT: “Attach an IAM policy to the bucket” is incorrect as users need to modify documents which will also allow delete. Therefore, a method must be implemented to just control deletes.
INCORRECT: “Encrypt the bucket using AWS SSE-S3” is incorrect as encryption doesn’t stop you from deleting an object.

Question 12:
A team is planning to run analytics jobs on log files each day and requires a storage solution. The size and number of logs are unknown, and the data will persist for only 24 hours.
What is the MOST cost-effective solution?
Options:
A. Amazon S3 One-Zone Infrequent Access (S3 One Zone-IA)
B. Amazon S3 Standard
C. Amazon S3 Glacier Deep Archive
D. Amazon S3 Intelligent Tiering
Answer: B
Explanation
S3 Standard is the best choice in this scenario for a short-term storage solution. In this case the size and number of logs are unknown and it would be difficult to fully assess the access patterns at this stage. Therefore, using S3 Standard is best as it is cost-effective, provides immediate access, and there are no retrieval fees or minimum capacity charge per object.
CORRECT: “Amazon S3 Standard” is the correct answer.
INCORRECT: “Amazon S3 Intelligent-Tiering” is incorrect as there is an additional fee for using this service and for a short-term requirement it may not be beneficial.
INCORRECT: “Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)” is incorrect as this storage class has a minimum capacity charge per object (128 KB) and a per GB retrieval fee.
INCORRECT: "Amazon S3 Glacier Deep Archive" is incorrect as this storage class is used for archiving data. There are retrieval fees and it takes hours to retrieve data from an archive.

Question 13:
A solutions architect needs to backup some application log files from an online ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or which logs will be accessed the most. The solutions architect must keep costs as low as possible by using the appropriate S3 storage class.
Which S3 storage class should be implemented to meet these requirements?
Options:
A. S3 Intelligent Tiering
B. S3 One Zone Infrequent Access (S3 One Zone-IA)
C. S3 Glacier
D. S3 Standard-Infrequent Access (S3 Standard-IA)
Answer: A
Explanation
The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.
It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. This is an ideal use case for intelligent-tiering as the access patterns for the log files are not known.
CORRECT: “S3 Intelligent-Tiering” is the correct answer.
INCORRECT: “S3 Standard-Infrequent Access (S3 Standard-IA)” is incorrect as if the data is accessed often retrieval fees could become expensive.
INCORRECT: “S3 One Zone-Infrequent Access (S3 One Zone-IA)” is incorrect as if the data is accessed often retrieval fees could become expensive.
INCORRECT: “S3 Glacier” is incorrect as if the data is accessed often retrieval fees could become expensive. Glacier also requires more work in retrieving the data from the archive and quick access requirements can add further costs.

Question 14:
Which of the following features of an Amazon S3 bucket can only be suspended once they have been enabled?
Options:
A. Static Website Hosting
B. Versioning
C. Server Access Logging
D. Requester Pays
Answer: B
Explanation
Correct option:
Versioning
Once you version-enable a bucket, it can never return to an unversioned state. Versioning can only be suspended once it has been enabled.
Incorrect options:
Server Access Logging
Static Website Hosting
Requester Pays
Server Access Logging, Static Website Hosting and Requester Pays features can be disabled even after they have been enabled.
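For reference, suspending versioning is the closest you can get to turning it off. A minimal boto3 (Python) sketch, with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

# Versioning can only be suspended, never fully disabled, once enabled;
# existing object versions are retained after suspension.
s3.put_bucket_versioning(
    Bucket="example-bucket",  # hypothetical
    VersioningConfiguration={"Status": "Suspended"},
)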

Question 15:
A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects.
As a solutions architect, what are your recommendations to address these guidelines? (Select two)
Options:
A. Establish a process to get managerial approval for deleting S3 objects
B. Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager
C. Enable versioning on the bucket
D. Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object
E. Enable MFA delete on the bucket
Answer: C & E
Explanation
Correct options:
Enable versioning on the bucket – Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
For example:
If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can always restore the previous version. Hence, this is the correct option.
Enable MFA delete on the bucket – To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct option.
Incorrect options:
Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager – Sending an event trigger after object deletion does not meet the objective of preventing object deletion by mistake because the object has already been deleted. So, this option is incorrect.
Establish a process to get managerial approval for deleting S3 objects – This option for getting managerial approval is just a distractor.
Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object – There is no provision to set up S3 configuration to ask for additional confirmation before deleting an object. This option is incorrect.
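If it helps to visualize the two recommended controls, the following boto3 (Python) sketch enables versioning together with MFA Delete. The bucket name and MFA device ARN are hypothetical, and this call must be made with the bucket owner's (root) credentials.

import boto3

s3 = boto3.client("s3")

# MFA is passed as "<mfa-device-serial-or-arn> <current-token-code>".
s3.put_bucket_versioning(
    Bucket="example-healthcare-bucket",  # hypothetical
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # hypothetical
)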

Question 34:
An audit department generates and accesses the audit reports only twice in a financial year. The department uses AWS Step Functions to orchestrate the report creating process that has failover and retry scenarios built into the solution. The underlying data to create these audit reports is stored on S3, runs into hundreds of Terabytes and should be available with millisecond latency.
As a solutions architect, which is the MOST cost-effective storage class that you would recommend to be used for this use-case?
Options:
A. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
B. Amazon S3 Glacier (S3 Glacier)
C. Amazon S3 Standard
D. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
Answer: A
Explanation
Correct option:
Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Since the data is accessed only twice in a financial year but needs rapid access when required, the most cost-effective storage class for this use-case is S3 Standard-IA. S3 Standard-IA storage class is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA matches the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. Standard-IA is designed for 99.9% availability compared to 99.99% availability of S3 Standard. However, the report creation process has failover and retry scenarios built into the workflow, so in case the data is not available owing to the 99.9% availability of S3 Standard-IA, the job will be auto re-invoked till data is successfully retrieved. Therefore this is the correct option.
Incorrect options:
Amazon S3 Standard – S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. As described above, S3 Standard-IA storage is a better fit than S3 Standard, hence using S3 standard is ruled out for the given use-case.
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) – The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. S3 Standard-IA matches the high durability, high throughput, and low latency of S3 Intelligent-Tiering, with a low per GB storage price and per GB retrieval fee. Moreover, Standard-IA has the same availability as that of S3 Intelligent-Tiering. So, it’s cost-efficient to use S3 Standard-IA instead of S3 Intelligent-Tiering.
Amazon S3 Glacier (S3 Glacier) – S3 Glacier on the other hand, is a secure, durable, and low-cost storage class for data archiving. S3 Glacier cannot support millisecond latency, so this option is ruled out.
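For illustration, objects can be written straight into S3 Standard-IA at upload time rather than transitioned later. A minimal boto3 (Python) sketch with hypothetical bucket, key, and file names:

import boto3

s3 = boto3.client("s3")

# Store the audit report directly in the Standard-IA storage class.
with open("h1-audit-report.pdf", "rb") as f:        # hypothetical local file
    s3.put_object(
        Bucket="example-audit-reports",             # hypothetical bucket
        Key="fy2024/h1-audit-report.pdf",           # hypothetical key
        Body=f,
        StorageClass="STANDARD_IA",
    )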

Question 35:
The IT department at a consulting firm is conducting a training workshop for new developers. As part of an evaluation exercise on Amazon S3, the new developers were asked to identify the invalid storage class lifecycle transitions for objects stored on S3.
Can you spot the INVALID lifecycle transitions from the options below? (Select two)
Options:
A. S3 Intelligent-Tiering => S3 Standard
B. S3 One Zone-IA => S3 Standard-IA
C. S3 Standard => S3 Intelligent-Tiering
D. S3 Standard-IA => S3 Intelligent-Tiering
E. S3 Standard-IA => S3 One Zone-IA
Answer: A & B
Explanation
Correct options:
As the question wants to know about the INVALID lifecycle transitions, the following options are the correct answers –
S3 Intelligent-Tiering => S3 Standard
S3 One Zone-IA => S3 Standard-IA
Following are the unsupported life cycle transitions for S3 storage classes – Any storage class to the S3 Standard storage class. Any storage class to the Reduced Redundancy storage class. The S3 Intelligent-Tiering storage class to the S3 Standard-IA storage class. The S3 One Zone-IA storage class to the S3 Standard-IA or S3 Intelligent-Tiering storage classes.
Incorrect options:
S3 Standard => S3 Intelligent-Tiering
S3 Standard-IA => S3 Intelligent-Tiering
S3 Standard-IA => S3 One Zone-IA
Here are the supported life cycle transitions for S3 storage classes – The S3 Standard storage class to any other storage class. Any storage class to the S3 Glacier or S3 Glacier Deep Archive storage classes. The S3 Standard-IA storage class to the S3 Intelligent-Tiering or S3 One Zone-IA storage classes. The S3 Intelligent-Tiering storage class to the S3 One Zone-IA storage class. The S3 Glacier storage class to the S3 Glacier Deep Archive storage class.

Question 36:
A media agency stores its re-creatable assets in Amazon S3 buckets. The assets are accessed by a large number of users for the first few days, and the frequency of access drops drastically after a week. Although the assets are accessed only occasionally after the first week, they must remain immediately accessible when required. The cost of maintaining all the assets on S3 storage is turning out to be very expensive and the agency is looking to reduce costs as much as possible.
As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?
A. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
B. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
C. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
D. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days
Answer: B
Explanation
Correct option:
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days – S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of S3 Standard or S3 Standard-IA. The minimum storage duration is 30 days before you can transition objects from S3 Standard to S3 One Zone-IA.
S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
Incorrect options:
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
As mentioned earlier, the minimum storage duration is 30 days before you can transition objects from S3 Standard to S3 One Zone-IA or S3 Standard-IA, so both these options are added as distractors.
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days – S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. But, it costs more than S3 One Zone-IA because of the redundant storage across availability zones. As the data is re-creatable, so you don’t need to incur this additional cost.
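A lifecycle rule implementing the correct option might look like the following boto3 (Python) sketch; the bucket name and prefix are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

# Transition re-creatable media assets to S3 One Zone-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-assets",              # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-one-zone-ia-after-30-days",
                "Filter": {"Prefix": "assets/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)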

Question 37:
A file-hosting service uses Amazon S3 under the hood to power its storage offerings. Currently all the customer files are uploaded directly under a single S3 bucket. The engineering team has started seeing scalability issues where customer file uploads have started failing during the peak access hours with more than 5000 requests per second.
Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?
A. Change the application architecture to create a new S3 bucket for each customer and then upload each customer’s files directly under the respective buckets
B. Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
C. Change the application architecture to create a new S3 bucket for each day’s data and then upload the daily files directly under that day’s bucket
D. Change the application architecture to use EFS instead of Amazon S3 for storing the customers’ uploaded files
Answer: B
Explanation
Correct option:
Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Your applications can easily achieve thousands of transactions per second in request performance when uploading and retrieving storage from Amazon S3. Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket.
There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second. Please see this example for more clarity on prefixes: if you have a file f1 stored in an S3 object path like so s3://your_bucket_name/folder1/sub_folder_1/f1, then /folder1/sub_folder_1/ becomes the prefix for file f1.
Some data lake applications on Amazon S3 scan millions or billions of objects for queries that run over petabytes of data. These data lake applications achieve single-instance transfer rates that maximize the network interface used for their Amazon EC2 instance, which can be up to 100 Gb/s on a single instance. These applications then aggregate throughput across multiple instances to get multiple terabits per second. Therefore creating customer-specific custom prefixes within the single bucket and then uploading the daily files into those prefixed locations is the BEST solution for the given constraints.
Incorrect options:
Change the application architecture to create a new S3 bucket for each customer and then upload each customer’s files directly under the respective buckets – Creating a new S3 bucket for each new customer is an inefficient way of handling resource availability (S3 buckets need to be globally unique) as some customers may use the service sparingly but the bucket name is locked for them forever. Moreover, this is really not required as we can use S3 prefixes to improve the performance.
Change the application architecture to create a new S3 bucket for each day’s data and then upload the daily files directly under that day’s bucket – Creating a new S3 bucket for each new day’s data is also an inefficient way of handling resource availability (S3 buckets need to be globally unique) as some of the bucket names may not be available for daily data processing. Moreover, this is really not required as we can use S3 prefixes to improve the performance.
Change the application architecture to use EFS instead of Amazon S3 for storing the customers’ uploaded files – EFS is a costlier storage option compared to S3, so it is ruled out.
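To make the prefix idea concrete, the sketch below (boto3, Python) builds customer-specific keys so that request volume spreads across prefixes; the bucket name and customer identifier are hypothetical.

import boto3
from datetime import date

s3 = boto3.client("s3")

# Each customer prefix can independently sustain at least 3,500 write and
# 5,500 read requests per second, so uploads no longer contend on one prefix.
customer_id = "customer-042"                                        # hypothetical
key = f"{customer_id}/{date.today().isoformat()}/upload-0001.dat"
s3.put_object(Bucket="example-file-hosting", Key=key, Body=b"file contents")  # hypothetical bucket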

Question 38:
A leading video streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its big data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline.
Which of the following is the MOST cost-effective strategy for storing this intermediary query data?
A. Store the intermediary query results in S3 Intelligent-Tiering storage class
B. Store the intermediary query results in S3 Standard-Infrequent Access storage class
C. Store the intermediary query results in S3 One Zone-Infrequent Access storage class
D. Store the intermediary query results in S3 Standard storage class
Answer: D
Explanation
Correct option:
Store the intermediary query results in S3 Standard storage class
S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. As there is no minimum storage duration charge and no retrieval fee (remember that intermediary query results are heavily referenced by other parts of the analytics pipeline), this is the MOST cost-effective storage class amongst the given options.
Incorrect options:
Store the intermediary query results in S3 Intelligent-Tiering storage class – The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
Store the intermediary query results in S3 Standard-Infrequent Access storage class – S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
Store the intermediary query results in S3 One Zone-Infrequent Access storage class – S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less than S3 Standard-IA. The minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
To summarize again, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA have a minimum storage duration charge of 30 days (so instead of 24 hours, you end up paying for 30 days). S3 Standard-IA and S3 One Zone-IA also have retrieval charges (as the results are heavily referenced by other parts of the analytics pipeline, so the retrieval costs would be pretty high). Therefore, these 3 storage classes are not cost optimal for the given use-case.

Question 39:
A social photo-sharing company uses Amazon S3 to store the images uploaded by the users. These images are kept encrypted in S3 by using AWS-KMS and the company manages its own Customer Master Key (CMK) for encryption. A member of the DevOps team accidentally deleted the CMK a day ago, thereby rendering the user’s photo data unrecoverable. You have been contacted by the company to consult them on possible solutions to this crisis.
As a solutions architect, which of the following steps would you recommend to solve this issue?
Options:
A. Contact AWS support to retrieve the CMK from their backup
B. The CMK can be recovered by the AWS root account user
C. The company should issue a notification on its web application informing the users about the loss of their data
D. As the CMK was deleted a day ago, it must be in the ‘pending deletion’ status and hence you can just cancel the CMK deletion and recover the key
Answer: D
Explanation
Correct option:
As the CMK was deleted a day ago, it must be in the ‘pending deletion’ status and hence you can just cancel the CMK deletion and recover the key
AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2.
Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is destructive and potentially dangerous. Therefore, AWS KMS enforces a waiting period. To delete a CMK in AWS KMS you schedule key deletion. You can set the waiting period from a minimum of 7 days up to a maximum of 30 days. The default waiting period is 30 days. During the waiting period, the CMK status and key state is Pending deletion. To recover the CMK, you can cancel key deletion before the waiting period ends. After the waiting period ends you cannot cancel key deletion, and AWS KMS deletes the CMK.
Incorrect options:
Contact AWS support to retrieve the CMK from their backup
The CMK can be recovered by the AWS root account user
The AWS root account user cannot recover CMK and the AWS support does not have access to CMK via any backups. Both these options just serve as distractors.
The company should issue a notification on its web application informing the users about the loss of their data – This option is not required as the data can be recovered via the cancel key deletion feature.
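A minimal boto3 (Python) sketch of the recovery step, with a hypothetical key ID; after the scheduled deletion is cancelled the key is left in the Disabled state and must be re-enabled.

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical CMK ID

# Cancelling works only while the key state is still "Pending deletion".
kms.cancel_key_deletion(KeyId=key_id)
kms.enable_key(KeyId=key_id)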

Question 40:
A company uses Amazon S3 buckets for storing sensitive customer data. The company has defined different retention periods for different objects present in the Amazon S3 buckets, based on the compliance requirements. But, the retention rules do not seem to work as expected.
Which of the following options represent a valid configuration for setting up retention periods for objects in Amazon S3 buckets? (Select two)
Options:
A. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version
B. You cannot place a retention period on an object version through a bucket default setting
C. When you use bucket default settings, you specify a Retain Until Date for the object version
D. Different versions of a single object can have different retention modes and periods
E. The bucket default settings will override any explicit retention mode or period you request on an object version
Answer: A & D
Explanation
Correct options:
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version – You can place a retention period on an object version either explicitly or through a bucket default setting. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores the Retain Until Date setting in the object version’s metadata and protects the object version until the retention period expires.
Different versions of a single object can have different retention modes and periods – Like all other Object Lock settings, retention periods apply to individual object versions. Different versions of a single object can have different retention modes and periods.
For example, suppose that you have an object that is 15 days into a 30-day retention period, and you PUT an object into Amazon S3 with the same name and a 60-day retention period. In this case, your PUT succeeds, and Amazon S3 creates a new version of the object with a 60-day retention period. The older version maintains its original retention period and becomes deletable in 15 days.
Incorrect options:
You cannot place a retention period on an object version through a bucket default setting – You can place a retention period on an object version either explicitly or through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for the object version – When you use bucket default settings, you don’t specify a Retain Until Date. Instead, you specify a duration, in either days or years, for which every object version placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or period you request on an object version – If your request to place an object version in a bucket contains an explicit retention mode and period, those settings override any bucket default settings for that object version.
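For illustration, an explicit retention period with a Retain Until Date can be applied to a single object version as in this boto3 (Python) sketch; the bucket, key, and version ID are hypothetical, and Object Lock must already be enabled on the bucket.

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Protect one specific object version until 1 January 2026.
s3.put_object_retention(
    Bucket="example-compliance-bucket",          # hypothetical
    Key="records/customer-123.json",             # hypothetical
    VersionId="EXAMPLE-VERSION-ID",              # hypothetical
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)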

Question 41:
A data analytics company measures what the consumers watch and what advertising they’re exposed to. This real-time data is ingested into its on-premises data center and subsequently, the daily data feed is compressed into a single file and uploaded on Amazon S3 for backup. The typical compressed file size is around 2 GB.
Which of the following is the fastest way to upload the daily compressed file into S3?
Options:
A. Upload the compressed file using multipart upload with S3 transfer acceleration
B. Upload the compressed file in a single operation
C. Upload the compressed file using multipart upload
D. FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket
Answer: A
Explanation
Correct option:
Upload the compressed file using multipart upload with S3 transfer acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. If you’re uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance. If you’re uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
Incorrect options:
Upload the compressed file in a single operation – In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput – you can upload parts in parallel to improve throughput. Therefore, this option is not correct.
Upload the compressed file using multipart upload – Although using multipart upload would certainly speed up the process, combining with S3 transfer acceleration would further improve the transfer speed. Therefore just using multipart upload is not the correct option.
FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket – This is a roundabout process of getting the file into S3 and added as a distractor. Although it is technically feasible to follow this process, it would involve a lot of scripting and certainly would not be the fastest way to get the file into S3.
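One way to combine the two techniques from the correct option is shown in this boto3 (Python) sketch; the bucket, key, and file names are hypothetical and Transfer Acceleration must already be enabled on the bucket.

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Route requests through the S3 Transfer Acceleration endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Switch to multipart above 100 MB and upload parts in parallel.
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file(
    "daily-feed.gz",                 # hypothetical local file
    "example-backup-bucket",         # hypothetical bucket
    "feeds/daily-feed.gz",           # hypothetical key
    Config=transfer_config,
)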

Question 43:
A technology blogger wants to write a review on the comparative pricing for various storage types available on AWS Cloud. The blogger has created a test file of size 1GB with some random data. Next he copies this test file into AWS S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 100GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard Storage filesystem. At the end of the month, he analyses the bill for costs incurred on the respective storage types for the test file.
What is the correct order of the storage charges incurred for the test file on these three storage types?
Options:
A. Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test file storage on EFS
B. Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
C. Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test file storage on EBS
D. Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test file storage on EFS
Answer: B
Explanation
Correct option:
Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
With Amazon EFS, you pay only for the resources that you use. The EFS Standard Storage pricing is $0.30 per GB per month. Therefore the cost for storing the test file on EFS is $0.30 for the month.
For EBS General Purpose SSD (gp2) volumes, the charges are $0.10 per GB-month of provisioned storage. Therefore, for a provisioned storage of 100GB for this use-case, the monthly cost on EBS is $0.10*100 = $10. This cost is irrespective of how much storage is actually consumed by the test file.
For S3 Standard storage, the pricing is $0.023 per GB per month. Therefore, the monthly storage cost on S3 for the test file is $0.023.
Therefore this is the correct option.
Incorrect options:
Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test file storage on EFS
Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test file storage on EBS
Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test file storage on EFS
Following the computations shown earlier in the explanation, these three options are incorrect.

Question 37:
An IT company provides S3 bucket access to specific users within the same account for completing project specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user level as well as account-level access permissions for the data stored in S3 buckets.
As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?
A. Use Security Groups
B. Use Amazon S3 Bucket Policies
C. Use Identity and Access Management (IAM) policies
D. Use Access Control Lists (ACLs)
Answer: B
Explanation
Correct option:
Use Amazon S3 Bucket Policies
Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.
You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address Condition), or based on the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
Incorrect options:
Use Identity and Access Management (IAM) policies – AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS Account to access buckets or objects. With IAM policies, you can only grant users within your own AWS account permission to access your Amazon S3 resources. So, this is not the right choice for the current requirement.
Use Access Control Lists (ACLs) – Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources. So, this is not the right choice for the current requirement.
Use Security Groups – A security group acts as a virtual firewall for EC2 instances to control incoming and outgoing traffic. S3 does not support Security Groups, this option just acts as a distractor.
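A bucket policy that grants a specific user in another account read access to a project prefix could look like the following boto3 (Python) sketch; the account ID, user name, bucket, and prefix are hypothetical.

import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountProjectRead",
            "Effect": "Allow",
            # Hypothetical IAM user in another AWS account.
            "Principal": {"AWS": "arn:aws:iam::999988887777:user/project-analyst"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-project-bucket/project-a/*",
        }
    ],
}
s3.put_bucket_policy(Bucket="example-project-bucket", Policy=json.dumps(policy))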


4. Creation of S3 Bucket
5. S3 Pricing Tiers
6. S3 Security and Encryption


7. S3 Version Control

Question 1:
As a Solutions Architect, you are building a sales force automation (SFA) application on AWS. The application has a business requirement for sales reps to upload sales records daily, and those records must be kept for sales reports. Report storage requires durable and highly available storage. Since many salespeople use the SFA application, it is an important requirement to prevent these records from being accidentally deleted through some kind of operational error.
Choose data protection measures to meet these requirements.
Options:
A. Use S3 for storage and enable its versioning function
B. Automatically take snapshots on a regular basis while accumulating data on EBS
C. Take snapshots automatically on a regular basis while accumulating data in S3
D. Automatically take snapshots on a regular basis while accumulating data on RDS
Answer: A
Explanation
Option A is the correct answer. The S3 Standard storage class is best for storing frequently used data. On top of that, you can easily restore previous versions of an object by enabling versioning. Versioning is a way to keep multiple variants of an object in the same bucket. You can use versioning to store, retrieve, and restore any version of any object stored in your Amazon S3 bucket. Versioning makes it easy to recover data from unintended user actions and application failures.
Option B is incorrect. EBS is less durable than S3 and is not suitable for data sharing.
Option C is incorrect. S3 does not have snapshot functionality.
Option D is incorrect. RDS is a relational database and does not meet the requirements for durable and highly available storage.
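To show how versioning protects against accidental deletes, the boto3 (Python) sketch below removes the delete marker that an accidental delete leaves behind, which restores the previous version. The bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")
bucket = "example-sfa-bucket"                       # hypothetical
key = "reports/2024-05-01/rep-017.csv"              # hypothetical

# An accidental delete in a versioned bucket only adds a delete marker;
# deleting that marker by its version ID brings the object back.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get("DeleteMarkers", []):
    if marker["Key"] == key and marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])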


8. S3 Life Cycle Management
9. S3 Lock Policies and Glacier Vault Lock
10. S3 Performance
11. S3 Select and Glacier Select
12. AWS Organizations & Consolidate Billing
13. Sharing S3 Buckets between Accounts
14. Cross Region Replication


15. Transfer Acceleration

Question 1:
A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket.
Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)
Options:
A. Create multiple site-to-site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3
B. Use AWS Global Accelerator for faster file uploads into the destination S3 bucket
C. Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket
D. Create multiple AWS direct connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the direct connect connections for faster file uploads into S3
E. Use multipart uploads for faster file uploads into the destination S3 bucket
Answer: C & E
Explanation
Correct options:
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket – Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
Use multipart uploads for faster file uploads into the destination S3 bucket – Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput, therefore it facilitates faster file uploads.
Incorrect options:
Create multiple AWS direct connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the direct connect connections for faster file uploads into S3 – AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Direct connect takes significant time (several months) to be provisioned and is an overkill for the given use-case.
Create multiple site-to-site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3 – AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections are a good solution if you have low to modest bandwidth requirements and can tolerate the inherent variability in Internet-based connectivity. Site-to-site VPN will not help in accelerating the file transfer speeds into S3 for the given use-case.
Use AWS Global Accelerator for faster file uploads into the destination S3 bucket – AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global Accelerator will not help in accelerating the file transfer speeds into S3 for the given use-case.
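Transfer Acceleration itself is enabled per bucket; a minimal boto3 (Python) sketch with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

# Once enabled, uploads can use the <bucket>.s3-accelerate.amazonaws.com endpoint.
s3.put_bucket_accelerate_configuration(
    Bucket="example-raw-footage",  # hypothetical
    AccelerateConfiguration={"Status": "Enabled"},
)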

Question 2:
A junior scientist working with the Deep Space Research Laboratory at NASA is trying to upload a high-resolution image of a nebula into Amazon S3. The image size is approximately 3GB. The junior scientist is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer.
Given this scenario, which of the following is correct regarding the charges for this image transfer?
Options:
A. The junior scientist only needs to pay S3 transfer charges for the image upload
B. The junior scientist does not need to pay any transfer charges for the image upload
C. The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image upload
D. The junior scientist only needs to pay S3TA transfer charges for the image upload
Answer: B
Explanation
Correct option:
The junior scientist does not need to pay any transfer charges for the image upload
There are no S3 data transfer charges when data is transferred in from the internet. Also with S3TA, you pay only for transfers that are accelerated. Therefore the junior scientist does not need to pay any transfer charges for the image upload because S3TA did not result in an accelerated transfer.
Incorrect options:
The junior scientist only needs to pay S3TA transfer charges for the image upload – Since S3TA did not result in an accelerated transfer, there are no S3TA transfer charges to be paid.
The junior scientist only needs to pay S3 transfer charges for the image upload – There are no S3 data transfer charges when data is transferred in from the internet. So this option is incorrect.
The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image upload – There are no S3 data transfer charges when data is transferred in from the internet. Since S3TA did not result in an accelerated transfer, there are no S3TA transfer charges to be paid.


16. DataSync Overview

Question 1:
An organization has a large amount of data on Windows (SMB) file shares in their on-premises data center. The organization would like to move data into Amazon S3. They would like to automate the migration of data over their AWS Direct Connect link.
Which AWS service can assist them?
Options:
A. AWS Snowball
B. AWS DataSync
C. AWS CloudFormation
D. AWS Database Migration Service (DMS)
Answer: B
Explanation
AWS DataSync can be used to move large amounts of data online between on-premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync eliminates or automatically handles many of these tasks, including scripting copy jobs, scheduling and monitoring transfers, validating data, and optimizing network utilization. The source datastore can be Server Message Block (SMB) file servers.
CORRECT: “AWS DataSync” is the correct answer.
INCORRECT: “AWS Database Migration Service (DMS)” is incorrect. AWS Database Migration Service (DMS) is used for migrating databases, not data on file shares.
INCORRECT: “AWS CloudFormation” is incorrect. AWS CloudFormation can be used for automating infrastructure provisioning. This is not the best use case for CloudFormation as DataSync is designed specifically for this scenario.
INCORRECT: “AWS Snowball” is incorrect. AWS Snowball is a hardware device that is used for migrating data into AWS. The organization plans to use their Direct Connect link for migrating data rather than sending it in via a physical device. Also, Snowball will not automate the migration.
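At a high level, a DataSync transfer is defined by a source location, a destination location, and a task. The boto3 (Python) sketch below outlines that flow for an SMB share going to S3; every ARN, hostname, and credential shown is a hypothetical placeholder, and a DataSync agent must already be deployed on premises.

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises SMB file share, reached through the deployed agent.
smb_location = datasync.create_location_smb(
    ServerHostname="fileserver.corp.example.com",
    Subdirectory="/share/projects",
    User="svc-datasync",
    Password="REPLACE_ME",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
)

# Destination: the S3 bucket, accessed via an IAM role that DataSync can assume.
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-migration-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# Task: ties source and destination together and can be scheduled or run ad hoc.
task = datasync.create_task(
    SourceLocationArn=smb_location["LocationArn"],
    DestinationLocationArn=s3_location["LocationArn"],
    Name="smb-to-s3-migration",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])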

Question 2:
A company runs an application in an on-premises data center that collects environmental data from production machinery. The data consists of JSON files stored on network attached storage (NAS) and around 5 TB of data is collected each day. The company must upload this data to Amazon S3 where it can be processed by an analytics application. The data must be transferred securely.
Which solution offers the MOST reliable and time-efficient data transfer?
Options:
A. AWS Database Migration Service over the internet
B. Multiple AWS Snowcone devices
C. AWS DataSync over AWS Direct Connect
D. Amazon S3 Transfer Acceleration over the Internet
Answer: C
Explanation
The most reliable and time-efficient solution that keeps the data secure is to use AWS DataSync and synchronize the data from the NAS device directly to Amazon S3. This should take place over an AWS Direct Connect connection to ensure reliability, speed, and security.
AWS DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems.
CORRECT: “AWS DataSync over AWS Direct Connect” is the correct answer.
INCORRECT: “AWS Database Migration Service over the Internet” is incorrect. DMS is for migrating databases, not files.
INCORRECT: “Amazon S3 Transfer Acceleration over the Internet” is incorrect. The Internet does not offer the reliability, speed or performance that this company requires.
INCORRECT: “Multiple AWS Snowcone devices” is incorrect. This is not a time-efficient approach as it can take time to ship these devices in both directions.


17. CloudFront Overview

Question 1:
You are hosting a web server on AWS using an EC2 instance. Recently, the number of image requests to the application has increased, and these requests consume most of the CPU, resulting in poor application response performance.
What is the appropriate way to improve the usability of this application?
Options:
A. Increase the number of EC2 instances by configuring an Auto Scaling group
B. Deploy an ELB to enable load balancing
C. Place CloudFront in front of the web server to offload image requests
D. Set up DynamoDB to handle high-speed data processing
Answer: C
Explanation
To improve usability given the increase in image requests, it is best to set up CloudFront rather than Auto Scaling and leave image distribution to AWS. CloudFront is a high-speed content delivery network (CDN) service that delivers content securely to viewers with low latency and high transfer speeds. CloudFront integrates directly with AWS’s global infrastructure and with other AWS services.
Option A is incorrect. Increasing the number of EC2 instances with an Auto Scaling group can improve processing on the web server side, but the recommended first step is to configure CloudFront to improve content distribution for the web application.
Option B is incorrect. Load balancing alone does not address fast image delivery.
Option D is incorrect. DynamoDB cannot be used to speed up image distribution. DynamoDB is suitable for managing session data and metadata, and for key-value (KVS) workloads that require high-speed processing.

Question 2:
Your company operates an image distribution application. The application uses CloudFront to optimize image delivery, but what happens when the requested content is not at the edge location?
Choose the action that CloudFront takes in this situation.
Options:
A. CloudFront will take advantage of another edge location where the content is being stored
B. CloudFront accesses the origin server to retrieve data and then stores it at the edge location
C. Displays a 404 error because the data is not found
D. Hold the request in CloudFront and wait for the requested data to reach the edge location
Answer: B
Explanation
CloudFront optimizes content delivery by caching data at the edge location closest to your users. If the data doesn’t exist at the edge location, CloudFront retrieves it from the origin server before delivering it; from the next request onwards, the content is served from the cache at that edge location. Therefore, option B is the correct answer.
Option A is incorrect. The request is not served from another edge location; CloudFront delivers from the edge location closest to the user. If CloudFront does not have a deliverable copy cached at the appropriate edge location for the user, it retrieves the data from the origin server.
Option C is incorrect. CloudFront does not return a 404 error simply because the data is not cached at the edge location; it goes to the origin server to fetch the data.
Option D is incorrect. CloudFront does not hold requests and wait for data to reach the edge location.

Question 3:
As a Solutions Architect, you plan to use Route 53 as your DNS service. To speed up image distribution and similar content delivery, you need to serve a CloudFront distribution using your company’s own domain name.
Choose the best method to meet this requirement.
Options:
A. Create a CNAME record to specify CloudFront delivery
B. Create an A record and specify CloudFront delivery
C. Create an ALIAS record to specify CloudFront delivery
D. Create an NS record and specify CloudFront delivery
Answer: C
Explanation
You can associate a domain with CloudFront in Route 53 by creating an ALIAS record that points to the CloudFront distribution. Therefore, option C is the correct answer.
Regular Route 53 records use standard DNS record types, but you should use ALIAS records when pointing to AWS resources such as CloudFront. ALIAS records are a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an ALIAS record points to a CloudFront distribution, an Elastic Beanstalk environment, an ELB, an Amazon S3 bucket configured as a static website, or another Route 53 record in the same hosted zone.
Option A is incorrect. A CNAME record is used to associate another domain name with an existing domain.
Option B is incorrect. An A record is used to associate an IPv4 address with a domain.
Option D is incorrect. NS records specify the authoritative name servers for a zone.
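A Route 53 alias record pointing a company domain at a CloudFront distribution can be created roughly as in the boto3 (Python) sketch below; the hosted zone ID, record name, and distribution domain name are hypothetical, while Z2FDTNDATAQYW2 is the fixed hosted zone ID documented for CloudFront alias targets.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                      # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "images.example.com",  # hypothetical record name
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",  # hypothetical
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)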

Question 4:
A company offers an online product brochure that is delivered from a static website running on Amazon S3. The company’s customers are mainly in the United States, Canada, and Europe. The company is looking to cost-effectively reduce the latency for users in these regions.
What is the most cost-effective solution to these requirements?
Options:
A. Create an Amazon CloudFront distribution and use Lambda@Edge to run the website’s data processing closer to the users
B. Create an Amazon CloudFront distribution that uses origins in U.S, Canada and Europe
C. Create an Amazon CloudFront distribution and set the price class to use all Edge Locations for best performance
D. Create an Amazon CloudFront distribution and set the price class to use only U.S, Canada and Europe.
Answer: D
Explanation
With Amazon CloudFront you can set the price class to determine where in the world the content will be cached. One of the price classes is “U.S, Canada and Europe” and this is where the company’s users are located. Choosing this price class will result in lower costs and better performance for the company’s users.
CORRECT: “Create an Amazon CloudFront distribution and set the price class to use only U.S, Canada and Europe.” is the correct answer.
INCORRECT: “Create an Amazon CloudFront distribution and set the price class to use all Edge Locations for best performance” is incorrect. This will be more expensive as it will cache content in Edge Locations all over the world.
INCORRECT: “Create an Amazon CloudFront distribution that uses origins in U.S, Canada and Europe” is incorrect. The origin can be in one place, there’s no need to add origins in different Regions. The price class should be used to limit the caching of the content to reduce cost.
INCORRECT: “Create an Amazon CloudFront distribution and use Lambda@Edge to run the website’s data processing closer to the users” is incorrect. Lambda@Edge will not assist in this situation as there is no data processing required, the content from the static website must simply be cached at an edge location.

Question 5:
A company runs a dynamic website that is hosted on an on-premises server in the United States. The company is expanding to Europe and is investigating how they can optimize the performance of the website for European users. The website’s backend must remain in the United States. The company requires a solution that can be implemented within a few days.
What should a Solutions Architect recommend?
Options:
A. Use Amazon CloudFront with Lambda@Edge to direct traffic to an on-premises origin
B. Use Amazon CloudFront with a custom origin pointing to the on-premises servers
C. Launch an Amazon EC2 instance in an AWS Region in the United States and migrate the website to it
D. Migrate the website to Amazon S3. Use cross-Region replication between Regions and a latency-based Route 53 policy
Answer: B
Explanation
A custom origin can point to an on-premises server and CloudFront is able to cache content for dynamic websites. CloudFront can provide performance optimizations for custom origins even if they are running on on-premises servers. These include persistent TCP connections to the origin, SSL enhancements such as Session tickets and OCSP stapling.
Additionally, connections are routed from the nearest Edge Location to the user across the AWS global network. If the on-premises server is connected via a Direct Connect (DX) link this can further improve performance.
CORRECT: “Use Amazon CloudFront with a custom origin pointing to the on-premises servers” is the correct answer.
INCORRECT: “Use Amazon CloudFront with Lambda@Edge to direct traffic to an on-premises origin” is incorrect. Lambda@Edge is not used to direct traffic to on-premises origins.
INCORRECT: “Launch an Amazon EC2 instance in an AWS Region in the United States and migrate the website to it” is incorrect. This would not necessarily improve performance for European users.
INCORRECT: “Migrate the website to Amazon S3. Use cross-Region replication between Regions and a latency-based Route 53 policy” is incorrect. You cannot host dynamic websites on Amazon S3 (static only).

Question 6:
A company delivers content to subscribers distributed globally from an application running on AWS. The application uses a fleet of Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to an update in copyright restrictions, it is necessary to block access for specific countries.
What is the EASIEST method to meet this requirement?
Options:
A. Modify the security group for EC2 instances to deny incoming traffic from blocked countries
B. Use a Network ACL to block the IP address ranges associated with the specific countries
C. Modify the ALB security group to deny incoming traffic from blocked countries
D. Use Amazon CloudFront to serve the application and deny access to blocked countries
Answer: D
Explanation
When a user requests your content, CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to do one of the following:
Allow your users to access your content only if they’re in one of the countries on a whitelist of approved countries.
Prevent your users from accessing your content if they’re in one of the countries on a blacklist of banned countries.
For example, if a request comes from a country where, for copyright reasons, you are not authorized to distribute your content, you can use CloudFront geo restriction to block the request.
This is the easiest and most effective way to implement a geographic restriction for the delivery of content.
CORRECT: “Use Amazon CloudFront to serve the application and deny access to blocked countries” is the correct answer.
INCORRECT: “Use a Network ACL to block the IP address ranges associated with the specific countries” is incorrect as this would be extremely difficult to manage.
INCORRECT: “Modify the ALB security group to deny incoming traffic from blocked countries” is incorrect as security groups cannot block traffic by country.
INCORRECT: “Modify the security group for EC2 instances to deny incoming traffic from blocked countries” is incorrect as security groups cannot block traffic by country.
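Geo restriction is part of the distribution configuration, so one way to apply it is to pull the current config, add the restriction, and push the update back, as in this boto3 (Python) sketch; the distribution ID is hypothetical and the country codes are placeholders to replace.

import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLE"  # hypothetical distribution ID

resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Blacklist-style geo restriction: block requests from the listed countries.
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "blacklist",
        "Quantity": 2,
        "Items": ["XX", "YY"],  # replace with the ISO 3166 country codes to block
    }
}

cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])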

Question 7:
An organization wants to share regular updates about their charitable work using static webpages. The pages are expected to generate a large amount of views from around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.
Which action should the solutions architect take to accomplish this?
Options:
A. Use cross-region replication to all regions
B. Use Amazon CloudFront with the S3 bucket as its origin
C. Use geoproximity feature of Amazon Route 53
D. Generate presigned URLs for the files
Answer: B
Explanation
Amazon CloudFront can be used to cache the files in edge locations around the world and this will improve the performance of the webpages.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
Using a REST API endpoint as the origin with access restricted by an origin access identity (OAI)
Using a website endpoint as the origin with anonymous (public) access allowed
Using a website endpoint as the origin with access restricted by a Referer header
CORRECT: “Use Amazon CloudFront with the S3 bucket as its origin” is the correct answer.
INCORRECT: “Generate presigned URLs for the files” is incorrect as this is used to restrict access which is not a requirement.
INCORRECT: “Use cross-Region replication to all Regions” is incorrect as this does not provide a mechanism for directing users to the closest copy of the static webpages.
INCORRECT: “Use the geoproximity feature of Amazon Route 53” is incorrect as this does not include a solution for having multiple copies of the data in different geographic locations.

Question 8:
A website runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB) which serves as an origin for an Amazon CloudFront distribution. An AWS WAF is being used to protect against SQL injection attacks. A review of security logs revealed an external malicious IP that needs to be blocked from accessing the website.
What should a solutions architect do to protect the application?
Options:
A. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address
B. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address
C. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
Answer: C
Explanation
A new version of the AWS Web Application Firewall was released in November 2019. With AWS WAF classic you create “IP match conditions”, whereas with AWS WAF (new version) you create “IP set match statements”. Look out for wording on the exam.
The IP match condition / IP set match statement inspects the IP address of a web request’s origin against a set of IP addresses and address ranges. Use this to allow or block web requests based on the IP addresses that the requests originate from.
AWS WAF supports all IPv4 and IPv6 address ranges. An IP set can hold up to 10,000 IP addresses or IP address ranges to check.
CORRECT: “Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address” is the correct answer.
INCORRECT: “Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address” is incorrect as CloudFront does not sit within a subnet so network ACLs do not apply to it.
INCORRECT: “Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address” is incorrect as the source IP addresses of the data in the EC2 instances subnets will be the ELB IP addresses.
INCORRECT: “Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.” is incorrect as you cannot create deny rules with security groups.
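A minimal boto3 sketch of the newer AWS WAF (wafv2) flow, with hypothetical names and a documentation IP range standing in for the malicious address; the resulting IP set ARN would then be referenced from a block rule in the web ACL attached to the CloudFront distribution:
import boto3

# Web ACLs for CloudFront distributions are managed in us-east-1 with Scope='CLOUDFRONT'
waf = boto3.client('wafv2', region_name='us-east-1')

resp = waf.create_ip_set(
    Name='blocked-ips',                 # hypothetical name
    Scope='CLOUDFRONT',
    Description='Addresses blocked after the security log review',
    IPAddressVersion='IPV4',
    Addresses=['203.0.113.10/32'],      # placeholder for the malicious IP
)
print(resp['Summary']['ARN'])           # reference this ARN in an IPSetReferenceStatement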

Question 9:
CloudFront offers a multi-tier cache in the form of regional edge caches that improve latency. However, there are certain content types that bypass the regional edge cache, and go directly to the origin.
Which of the following content types skip the regional edge cache? (Select two)
Options:
A. Static content such as style sheets, JavaScript files
B. E-commerce assets such as product photos
C. Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin
D. User-generated videos
E. Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
Answer: C & E
Explanation
Correct options:
Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.
CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
Dynamic content, as determined at request time (cache-behavior configured to forward all headers), does not flow through regional edge caches, but goes directly to the origin. So this option is correct.
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin from the POPs and do not proxy through the regional edge caches. So this option is also correct.
Incorrect Options:
E-commerce assets such as product photos
User-generated videos
Static content such as style sheets, JavaScript files
The following types of content flow through the regional edge caches: user-generated content such as videos, photos, or artwork; e-commerce assets such as product photos and videos; and static content such as style sheets and JavaScript files. Hence these three options are not correct.


18. CloudFront Signed URL’s and Cookies

Question 1:
Your company shares some HR videos stored in an Amazon S3 bucket via CloudFront. You need to restrict access to the private content so users coming from specific IP addresses can access the videos and ensure direct access via the Amazon S3 bucket is not possible.
How can this be achieved?
Options:
A. Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint
B. Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the OAI
C. Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to login with the OAI
D. Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume
Answer: B
Explanation
A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. You can also specify the IP address or range of IP addresses of the users who can access your content.
If you use CloudFront signed URLs (or signed cookies) to limit access to files in your Amazon S3 bucket, you may also want to prevent users from directly accessing your S3 files by using Amazon S3 URLs. To achieve this you can create an origin access identity (OAI), which is a special CloudFront user, and associate the OAI with your distribution.
You can then change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission).
CORRECT: “Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the OAI” is the correct answer.
INCORRECT: “Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to login with the OAI” is incorrect. Users cannot login with an OAI.
INCORRECT: “Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume” is incorrect. You cannot use CloudFront to pull data directly from an EBS volume.
INCORRECT: “Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint” is incorrect. You cannot use CloudFront and an OAI when your S3 bucket is configured as a website endpoint.
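For reference, botocore ships a CloudFrontSigner helper that can generate signed URLs; the sketch below assumes a hypothetical key pair ID, private key file, and distribution domain, and uses the cryptography library for the RSA signature. Restricting viewers to specific IP addresses additionally requires a custom policy (passed via the policy argument) rather than the canned policy shown here:
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = 'K2EXAMPLE123'            # hypothetical CloudFront public key ID
PRIVATE_KEY_FILE = 'cf_private_key.pem' # hypothetical path to the signing key

def rsa_signer(message):
    # CloudFront signed URLs use an RSA SHA-1 signature with PKCS#1 v1.5 padding
    with open(PRIVATE_KEY_FILE, 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/hr/onboarding.mp4',  # hypothetical object URL
    date_less_than=datetime.datetime(2030, 1, 1),               # expiry of the canned policy
)
print(url)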


19. Snowball

Question 1:
A video analytics organization has been acquired by a leading media company. The analytics organization has 10 independent applications with an on-premises data footprint of about 70TB for each application. The CTO of the media company has set a timeline of two weeks to carry out the data migration from on-premises data center to AWS Cloud and establish connectivity.
Which of the following are the MOST cost-effective options for completing the data transfer and establishing connectivity? (Select two)
A. Order 1 Snowmobile to complete the one-time data transfer
B. Setup AWS direct connect to establish connectivity between the on-premises data center and AWS Cloud
C. Order 70 Snowball Edge Storage Optimized devices to complete the one-time data transfer
D. Setup Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud
E. Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
Answer: D & E
Explanation
Correct options:
Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.
As each Snowball Edge Storage Optimized device can handle 80TB of data, you can order 10 such devices to take care of the data transfer for all applications.
Exam Alert:
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data transfer. You may see the Snowball device on the exam, just remember that the original Snowball device had 80TB of storage space.
Setup Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud
AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
Therefore this option is the right fit for the given use-case as the connectivity can be easily established within the given timeframe.
Incorrect options:
Order 1 Snowmobile to complete the one-time data transfer – Each Snowmobile has a total capacity of up to 100 petabytes. To migrate large datasets of 10PB or more in a single location, you should use Snowmobile. For datasets less than 10PB or distributed in multiple locations, you should use Snowball. So Snowmobile is not the right fit for this use-case.
Setup AWS direct connect to establish connectivity between the on-premises data center and AWS Cloud – AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC. Direct Connect involves significant monetary investment and takes at least a month to set up, therefore it’s not the correct fit for this use-case.
Order 70 Snowball Edge Storage Optimized devices to complete the one-time data transfer – As the data-transfer can be completed with just 10 Snowball Edge Storage Optimized devices, there is no need to order 70 devices.

Question 2:
You would like to use Snowball to move on-premises backups into a long term archival tier on AWS. Which solution provides the MOST cost savings?
• Create a Snowball job and target a Glacier Deep Archive Vault
• Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier
• Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier Deep Archive (Correct)
• Create a Snowball job and target a Glacier Vault
Explanation
Correct option:
Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier Deep Archive
AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.
Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data transfer. You may see the Snowball device on the exam, just remember that the original Snowball device had 80TB of storage space.
You can’t move data directly from Snowball into Glacier, you need to go through S3 first, and then use a lifecycle policy. So this option is correct.
Incorrect options:
Create a Snowball job and target a Glacier Vault
Create a Snowball job and target a Glacier Deep Archive Vault
Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup. They are designed to deliver 99.999999999% durability and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Finally, Glacier Deep Archive provides more cost savings than Glacier.
Both these options are incorrect as you can’t move data directly from Snowball into a Glacier Vault or a Glacier Deep Archive Vault. You need to go through S3 first and then use a lifecycle policy.
Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier – As Glacier Deep Archive provides more cost savings than Glacier, you should use Glacier Deep Archive for long-term archival in this use-case.
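As a rough sketch of the second half of the correct option, the boto3 call below (hypothetical bucket name) attaches a lifecycle rule that transitions objects to Glacier Deep Archive immediately (Days=0) after they land in the S3 bucket targeted by the Snowball job:
import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='onprem-backup-landing',     # hypothetical bucket targeted by the Snowball job
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-immediately',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},   # apply the rule to every object
            'Transitions': [{'Days': 0, 'StorageClass': 'DEEP_ARCHIVE'}],
        }]
    },
)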


20. Storage Gateway
Question 1:
A company wants to host its internal storage on AWS. This storage is required to be connected to an on-premises application server via an iSCSI device. In addition, after the migration is complete, they plan to use the storage on AWS as their primary storage. Choose a configuration method that can meet this requirement.
Options:
A. Create an S3 bucket and use the S3 connector as an iSCSI device
B. Create an EBS and use the EBS connector as an iSCSI device
C. Create a Glacier archive and use Glacier connector as an iSCSI device
D. Use the AWS storage gateway as an iSCSI device
Answer: D
Explanation
Option D is the correct answer. Storage Gateway cached volumes allow you to use Amazon S3 as your primary data storage while keeping frequently accessed data locally. Volumes cached in an on-premises environment provide low-latency access to frequently accessed data. You can create storage volumes of up to 32 TiB in size and attach them to your on-premises application server as iSCSI devices. Cached volumes are the configuration to choose when the AWS side (S3) is the primary storage.
Options A, B and C are incorrect. These services do not have the ability to connect to the on-premises side via an iSCSI device.

Question 2:
Your company has 3 TB of data in its on-premises repository, which stores a large number of files. This repository is growing by 500 GB annually and must be used as a single logical volume. As a Solutions Architect, you have decided to extend this repository to S3 storage to avoid local storage capacity constraints. You also want to maintain optimal response times for frequently accessed data. The plan is to use S3 as the primary storage.
Which of the following AWS Storage Gateway configurations meets this requirement?
Options:
A. Cached volume that uses snapshots scheduled to move to S3
B. Stored volumes that use snapshots scheduled to move to S3
C. Cached volumes that utilize snapshots scheduled to move to Glacier
D. A virtual tape library that utilizes snapshots scheduled to move to S3
Answer: A
Explanation
Cached volumes on the Storage Gateway allow you to use S3 as your primary data storage while keeping frequently accessed data in your local environment. Therefore, option A is the correct answer.
Cached volumes minimize the need to scale your on-premises storage infrastructure. At the same time, applications continue to have low-latency access to frequently accessed data. You can create storage volumes of up to 32 TiB and attach them as iSCSI devices to your on-premises application server. The gateway stores the volume data in Amazon S3, keeps recently accessed data in the on-premises gateway’s cache, and stages writes in upload buffer storage before sending them to S3.
Option B is incorrect. Stored volumes use local storage as the primary and asynchronously back that data up to S3. In this case, cached volumes meet the requirement of using S3 as the primary.
Option C is incorrect. S3 is the appropriate storage for hybrid configurations with Storage Gateway, whereas Glacier is used to retain infrequently accessed files over the medium to long term, so it does not fit this requirement.
Option D is incorrect. A virtual tape library is used for tape-format backups, which is not what this requirement calls for.

Question 3:
A company is investigating methods to reduce the expenses associated with on-premises backup infrastructure. The Solutions Architect wants to reduce costs by eliminating the use of physical backup tapes. It is a requirement that existing backup applications and workflows should continue to function.
What should the Solutions Architect recommend?
Options
A. Create an Amazon EFS file system and connect the backup applications using the iSCSI protocol
B. Connect the backup applications to an AWS Storage Gateway using the NFS protocol
C. Create an Amazon EFS file system and connect the backup applications using the NFS protocol
D. Connect the backup applications to an AWS Storage Gateway using an iSCSI-virtual tape library (VTL)
Answer: D
Explanation
The AWS Storage Gateway Tape Gateway enables you to replace using physical tapes on premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway emulates physical tape libraries, removes the cost and complexity of managing physical tape infrastructure, and provides more durability than physical tapes.
CORRECT: “Connect the backup applications to an AWS Storage Gateway using an iSCSI-virtual tape library (VTL)” is the correct answer.
INCORRECT: “Create an Amazon EFS file system and connect the backup applications using the NFS protocol” is incorrect. The NFS protocol is used by AWS Storage Gateway File Gateways but these do not provide virtual tape functionality that is suitable for replacing the existing backup infrastructure.
INCORRECT: “Create an Amazon EFS file system and connect the backup applications using the iSCSI protocol” is incorrect. Amazon EFS is mounted over NFS, not iSCSI, and it does not provide virtual tape functionality that is suitable for replacing the existing backup infrastructure.
INCORRECT: “Connect the backup applications to an AWS Storage Gateway using the NFS protocol” is incorrect. The iSCSI protocol is used by AWS Storage Gateway Volume Gateways but these do not provide virtual tape functionality that is suitable for replacing the existing backup infrastructure.

Question 4:
Storage capacity has become an issue for a company that runs application servers on-premises. The servers are connected to a combination of block storage and NFS storage solutions. The company requires a solution that supports local caching without re-architecting its existing applications.
Which combination of changes can the company make to meet these requirements? (Select TWO.)
Options:
A. Use AWS Direct Connect and mount an Amazon FSx for Windows File Server using iSCSI
B. Use Amazon Elastic File System (EFS) volumes to replace the block storage
C. Use the mount command on servers to mount Amazon S3 buckets using NFS
D. Use an AWS Storage Gateway volume gateway to replace the block storage
E. Use an AWS Storage Gateway file gateway to replace the NFS storage
Answer: D & E
Explanation
In this scenario the company should use cloud storage to replace the existing storage solutions that are running out of capacity. The on-premises servers mount the existing storage using block protocols (iSCSI) and file protocols (NFS). As there is a requirement to avoid re-architecting existing applications these protocols must be used in the revised solution.
The AWS Storage Gateway volume gateway should be used to replace the block-based storage systems as it is mounted over iSCSI and the file gateway should be used to replace the NFS file systems as it uses NFS.
CORRECT: “Use an AWS Storage Gateway file gateway to replace the NFS storage” is a correct answer.
CORRECT: “Use an AWS Storage Gateway volume gateway to replace the block storage” is a correct answer.
INCORRECT: “Use the mount command on servers to mount Amazon S3 buckets using NFS” is incorrect. You cannot mount S3 buckets using NFS as it is an object-based storage system (not file-based) and uses an HTTP REST API.
INCORRECT: “Use AWS Direct Connect and mount an Amazon FSx for Windows File Server using iSCSI” is incorrect. You cannot mount FSx for Windows File Server file systems using iSCSI, you must use SMB.
INCORRECT: “Use Amazon Elastic File System (EFS) volumes to replace the block storage” is incorrect. You cannot use EFS to replace block storage as it uses NFS rather than iSCSI.

Question 5:
A company runs an application in a factory that has a small rack of physical compute resources. The application stores data on a network attached storage (NAS) device using the NFS protocol. The company requires a daily offsite backup of the application data.
Which solution can a Solutions Architect recommend to meet this requirement?
Options:
A. Create an IPSec VPN to AWS and configure the application to mount the Amazon EFS file system. Run a copy job to backup the data to EFS
B. Use an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3
C. Use an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3
D. Use an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3
Answer: C
Explanation
The AWS Storage Gateway Hardware Appliance is a physical, standalone, validated server configuration for on-premises deployments. It comes pre-loaded with Storage Gateway software, and provides all the required CPU, memory, network, and SSD cache resources for creating and configuring File Gateway, Volume Gateway, or Tape Gateway.
A file gateway is the correct type of appliance to use for this use case as it is suitable for mounting via the NFS and SMB protocols.
CORRECT: “Use an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3” is the correct answer.
INCORRECT: “Use an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3” is incorrect. Volume gateways are used for block-based storage and this solution requires NFS (file-based storage).
INCORRECT: “Use an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3” is incorrect. Volume gateways are used for block-based storage and this solution requires NFS (file-based storage).
INCORRECT: “Create an IPSec VPN to AWS and configure the application to mount the Amazon EFS file system. Run a copy job to backup the data to EFS” is incorrect. It would be better to use a Storage Gateway which will automatically take care of synchronizing a copy of the data to AWS.

Question 6:
As part of a pilot program, a biotechnology company wants to integrate data files from its on-premises analytical application with AWS Cloud via an NFS interface.
Which of the following AWS service is the MOST efficient solution for the given use-case?
Options:
A. AWS Site-to-Site VPN
B. AWS Storage Gateway – Volume Gateway
C. AWS Storage Gateway – Tape Gateway
D. AWS Storage Gateway – File Gateway
Answer: D
Explanation
Correct option:
AWS Storage Gateway – File Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. The service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.
AWS Storage Gateway’s file interface, or file gateway, offers you a seamless way to connect to the cloud in order to store application data files and backup images as durable objects on Amazon S3 cloud storage. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. As the company wants to integrate data files from its analytical instruments into AWS via an NFS interface, therefore AWS Storage Gateway – File Gateway is the correct answer.
Incorrect options:
AWS Storage Gateway – Volume Gateway – You can configure the AWS Storage Gateway service as a Volume Gateway to present cloud-based iSCSI block storage volumes to your on-premises applications. Volume Gateway does not support NFS interface, so this option is not correct.
AWS Storage Gateway – Tape Gateway – AWS Storage Gateway – Tape Gateway allows moving tape backups to the cloud. Tape Gateway does not support NFS interface, so this option is not correct.
AWS Site-to-Site VPN – AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN (Site-to-Site VPN) connection. It uses internet protocol security (IPSec) communications to create encrypted VPN tunnels between two locations. You cannot use AWS Site-to-Site VPN to integrate data files via the NFS interface, so this option is not correct.


21. Athena versus Macie


22. EC2

Question 1:
As a system operator for your company, you manage a set of web servers hosted on EC2 instances with public IP addresses. These IP addresses are associated with specific domain names. Yesterday, the servers were shut down for emergency maintenance. When the servers were started up again, the website could not be displayed on the internet.
Choose an option that may be the root cause of this issue.
Options:
A. It is necessary to reconfigure traffic on Route53 after restarting the EC2 instance
B. Elastic IP was not configured on EC2 instance
C. ELB health check failed for EC2 instance
D. Elastic IP is not set for the IP address of the subnet
Answer: B
Explanation
By default, the EC2 instance’s public IP address is released after the instance is stopped. As a result, the previous IP address that was mapped to the domain name becomes invalid and you cannot access it. By setting an Elastic IP for the EC2 instance, the IP address is retained even after the EC2 instance is restarted, and the domain name mapped to that IP address can continue to be used. Therefore, option B is the correct answer.
Option A is incorrect. The correct overall solution is to prevent the IP address from changing in the first place. Now that the public IP has been lost, Route 53 record changes will indeed be required, but this is only a follow-up remediation; it would not have been needed if an Elastic IP had been configured initially.
Option C is incorrect. If an ELB health check were failing for the EC2 instance, the anomaly would have been visible even before the restart. This is not a restart-related problem.
Option D is incorrect. The subnet’s IP address range is not affected by the restart.
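A minimal boto3 sketch (hypothetical instance ID) of how an Elastic IP could be allocated and associated so the address survives stop/start cycles:
import boto3

ec2 = boto3.client('ec2')

alloc = ec2.allocate_address(Domain='vpc')           # allocate a new Elastic IP
ec2.associate_address(
    InstanceId='i-0123456789abcdef0',                # hypothetical web server instance
    AllocationId=alloc['AllocationId'],
)
print(alloc['PublicIp'])                             # point the DNS record at this stable address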

Question 2:
Your company has told you the requirements for building a database using AWS. This company is required to manage the database environment in-house. As a Solutions Architect, you need to choose the best AWS service from your database requirements.
Select a database construction method that meets this requirement.
Options:
A. Build a DB using RDS
B. Build a DB using DynamoDB
C. Build a DB using Aurora
D. Build a DB using EC2 instances
Answer: D
Explanation
In order to manage the database environment in-house, it is necessary to have full control over the underlying database instance, which means building the database on EC2 instances. Therefore, option D is the correct answer.
Options A, B and C are incorrect. RDS, DynamoDB and Aurora are managed services, so the underlying infrastructure of the database cannot be managed in-house.

Question 3:
The solo founder at a tech startup has just created a brand new AWS account. The founder has provisioned an EC2 instance 1A which is running in region A. Later, he takes a snapshot of the instance 1A and then creates a new AMI in region A from this snapshot. This AMI is then copied into another region B. The founder provisions an instance 1B in region B using this new AMI in region B.
At this point in time, what entities exist in region B?
Options:
A. 1 EC2 instance and 1 snapshot exist in region B
B. 1 EC2 instance, 1 AMI and 1 snapshot exist in region B
C. 1 EC2 instance and 1 AMI exist in region B
D. 1 EC2 instance and 2 AMIs exist in region B
Answer: B
Explanation
Correct option:
1 EC2 instance, 1 AMI and 1 snapshot exist in region B
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. When the new AMI is copied from region A into region B, it automatically creates a snapshot in region B because AMIs are based on the underlying snapshots. Further, an instance is created from this AMI in region B. Hence, we have 1 EC2 instance, 1 AMI and 1 snapshot in region B.
Incorrect options:
1 EC2 instance and 1 AMI exist in region B
1 EC2 instance and 2 AMIs exist in region B
1 EC2 instance and 1 snapshot exist in region B
As mentioned earlier in the explanation, when the new AMI is copied from region A into region B, it also creates a snapshot in region B because AMIs are based on the underlying snapshots. In addition, an instance is created from this AMI in region B. So, we have 1 EC2 instance, 1 AMI and 1 snapshot in region B. Hence all three options are incorrect.
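To make the cross-Region flow concrete, here is a small boto3 sketch (hypothetical Region names and AMI ID); the copy is initiated from the destination Region, and the backing snapshot in region B is created automatically as part of the copy:
import boto3

ec2_b = boto3.client('ec2', region_name='eu-west-1')   # hypothetical region B

copy = ec2_b.copy_image(
    SourceRegion='us-east-1',                          # hypothetical region A
    SourceImageId='ami-0123456789abcdef0',             # AMI created in region A
    Name='founder-app-ami-copy',
)
print(copy['ImageId'])   # new AMI in region B; launch instance 1B from this ID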

Question 4:
A software engineering intern at an e-commerce company is documenting the process flow to provision EC2 instances via the Amazon EC2 API. These instances are to be used for an internal application that processes HR payroll data. He wants to highlight those volume types that cannot be used as a boot volume.
Can you help the intern by identifying those storage volume types that CANNOT be used as boot volumes while creating the instances? (Select two)
Options:
A. Throughput Optimized HDD (st1)
B. Cold HDD (sc1)
C. General Purpose SSD (gp2)
D. Provisioned IOPS SSD (io1)
E. Instance Store
Answer: A & B
Explanation
Correct options:
Throughput Optimized HDD (st1)
Cold HDD (sc1)
The EBS volume types fall into two categories:
SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS.
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used as a boot volume, so these two options are correct.
Incorrect options:
General Purpose SSD (gp2)
Provisioned IOPS SSD (io1)
Instance Store
General Purpose SSD (gp2), Provisioned IOPS SSD (io1), and Instance Store can be used as a boot volume.

Question 9:
An application is currently hosted on four EC2 instances (behind Application Load Balancer) deployed in a single Availability Zone (AZ). To maintain an acceptable level of end-user experience, the application needs at least 4 instances to be always available.
As a solutions architect, which of the following would you recommend so that the application achieves high availability with MINIMUM cost?
• Deploy the instances in one Availability Zone. Launch two instances in the Availability Zone
• Deploy the instances in two Availability Zones. Launch two instances in each Availability Zone
• Deploy the instances in three Availability Zones. Launch two instances in each Availability Zone (Correct)
• Deploy the instances in two Availability Zones. Launch four instances in each Availability Zone
Explanation
Correct option:
Deploy the instances in three Availability Zones. Launch two instances in each Availability Zone
The correct option is to deploy the instances in three Availability Zones and launch two instances in each Availability Zone. Even if one of the AZs goes out of service, still we shall have 4 instances available and the application can maintain an acceptable level of end-user experience. Therefore, we can achieve high availability with just 6 instances in this case.
Incorrect options:
Deploy the instances in two Availability Zones. Launch two instances in each Availability Zone – When we launch two instances in two AZs, we run the risk of falling below the minimum acceptable threshold of 4 instances if one of the AZs fails. So this option is ruled out.
Deploy the instances in two Availability Zones. Launch four instances in each Availability Zone – When we launch four instances in two AZs, we have to bear costs for 8 instances which is NOT cost-optimal. So this option is ruled out.
Deploy the instances in one Availability Zone. Launch two instances in the Availability Zone – We can’t have just two instances in a single AZ as that is below the minimum acceptable threshold of 4 instances.

Question 25:
An application runs big data workloads on EC2 instances. The application needs at least 20 instances to maintain a minimum acceptable performance threshold and the application needs 300 instances to handle spikes in the workload. Based on historical workloads processed by the application, it needs 80 instances 80% of the time.
As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution so that it can meet the workload demand in a steady state?
A• Purchase 80 on-demand instances. Use Auto Scaling Group to provision the remaining instances as spot instances per the workload demand
B• Purchase 80 spot instances. Use Auto Scaling Group to provision the remaining instances as on-demand instances per the workload demand
C• Purchase 80 on-demand instances. Provision additional on-demand and spot instances per the workload demand (Use Auto Scaling Group with launch template to provision the mix of on-demand and spot instances)
D• Purchase 80 reserved instances. Provision additional on-demand and spot instances per the workload demand (Use Auto Scaling Group with launch template to provision the mix of on-demand and spot instances)
Answer: D
Explanation
Correct option:
Purchase 80 reserved instances. Provision additional on-demand and spot instances per the workload demand (Use Auto Scaling Group with launch template to provision the mix of on-demand and spot instances)
As the steady-state workload demand is 80 instances, we can save on costs by purchasing 80 reserved instances. For the additional workload demand, we can specify a mix of on-demand and spot instances using an Auto Scaling group with a launch template to provision the mix of on-demand and spot instances.
Incorrect options:
Purchase 80 on-demand instances. Use Auto Scaling Group to provision the remaining instances as spot instances per the workload demand – Provisioning 80 on-demand instances would end up costlier than the option where we provision 80 reserved instances. So this option is ruled out.
Purchase 80 on-demand instances. Provision additional on-demand and spot instances per the workload demand (Use Auto Scaling Group with launch template to provision the mix of on-demand and spot instances) – Provisioning 80 on-demand instances would end up costlier than the option where we provision 80 reserved instances. So this option is ruled out.
Purchase 80 spot instances. Use Auto Scaling Group to provision the remaining instances as on-demand instances per the workload demand – The option to purchase 80 spot instances is incorrect, as there is no guarantee regarding the availability of the spot instances, which means we may not even meet the steady-state workload.

Question 28:
An engineering team wants to examine the feasibility of the user data feature of Amazon EC2 for an upcoming project.
Which of the following are true about the EC2 user data configuration? (Select two)
A• By default, user data is executed every time an EC2 instance is re-started
B• By default, user data runs only during the boot cycle when you first launch an instance
C• By default, scripts entered as user data do not have root user privileges for executing
D• When an instance is running, you can update user data by using root user credentials
E• By default, scripts entered as user data are executed with root user privileges
Answer: B & E
Explanation
Correct options:
User Data is generally used to perform common automated configuration tasks and even run scripts after the instance starts. When you launch an instance in Amazon EC2, you can pass two types of user data – shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text or as a file.
By default, scripts entered as user data are executed with root user privileges – Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script. Any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script.
By default, user data runs only during the boot cycle when you first launch an instance – By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance. You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance.
Incorrect options:
By default, user data is executed every time an EC2 instance is re-started – As discussed above, this is not a default configuration of the system. But, can be achieved by explicitly configuring the instance.
When an instance is running, you can update user data by using root user credentials – You can’t change the user data if the instance is running (even by using root user credentials), but you can view it.
By default, scripts entered as user data do not have root user privileges for executing – Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script.
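A short boto3 sketch (hypothetical AMI ID) showing a user data script passed at launch; by default it runs once, as root, during the first boot cycle:
import boto3

ec2 = boto3.client('ec2')

user_data = """#!/bin/bash
# Runs as root on the first boot only (default behaviour)
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # hypothetical Amazon Linux AMI
    InstanceType='t3.micro',
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 base64-encodes the script automatically
)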


23. Security Groups

Question 1:
As a Solutions Architect, you plan to build a web application consisting of a web server and a database server. The web server and database server will be hosted on different EC2 instances, each located on a different subnet. The database server should only allow traffic from the web server.
Please choose a response to meet this requirement.
Options:
A. Control traffic with VPC endpoints
B. Control traffic with security groups
C. Control traffic with NACLs
D. Allow access from the web server to the DB server with the IAM role
Answer: B
Explanation
Security groups are a good way to control traffic between instances. You can allow traffic from a particular EC2 instance by referencing that instance’s security group (or its IP address) in the database server’s security group rules. Therefore, option B is the correct answer.
Option A is incorrect. A VPC endpoint is a mechanism that allows AWS resources inside a VPC to access AWS services privately without traversing the internet; it is not used to control traffic between instances.
Option C is incorrect. Network ACLs can also control traffic, but they operate at the subnet level and filter by CIDR range, so they cannot distinguish one EC2 instance from another. Security groups control traffic at the instance level, which makes them the suitable solution for restricting traffic between the web server and the database server.
Option D is incorrect. Traffic from the web server to the database server is controlled with security groups, not with an IAM role. An IAM role can be used for database authentication (for example, RDS IAM database authentication), but it does not restrict network traffic.
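A minimal boto3 sketch (hypothetical security group IDs, MySQL assumed as the database port) of the recommended pattern, where the database tier’s security group allows inbound traffic only when it originates from the web tier’s security group:
import boto3

ec2 = boto3.client('ec2')

WEB_SG = 'sg-0aaa1111bbbb22223'   # hypothetical web server security group
DB_SG = 'sg-0ccc3333dddd44445'    # hypothetical database server security group

ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3306,                            # assuming a MySQL database
        'ToPort': 3306,
        'UserIdGroupPairs': [{'GroupId': WEB_SG}],   # source restricted to the web tier SG
    }],
)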

Question 2:
Your company has set up security groups on multiple EC2 instances. As an operations staff member, you have decided to change the access settings for your EC2 instances. You have updated the security group rules to allow inbound traffic on a new port with a new protocol. You then used this security group to launch a new EC2 instance.
How will the security group settings be reflected?
Options:
A. Security group changes are immediately reflected in all EC2 instances
B. It takes time for the SG to be reflected in the EC2 instances for which the security group has been set
C. Unlike the reflection in the existing EC2 instance, the security group is reflected in the new EC2 instance immediately
D. It takes a few minutes for the security group to be reflected on all EC2 instances
Answer: A
Explanation
Security group changes and new settings are reflected immediately on all EC2 instances associated with the security group.
Therefore, option A is the correct answer.
All other options are incorrect.

Question 3:
A company has moved its business critical data to Amazon EFS file system which will be accessed by multiple EC2 instances.
As an AWS Certified Solutions Architect Associate, which of the following would you recommend to exercise access control such that only the permitted EC2 instances can read from the EFS file system? (Select three)
A. Attach an IAM policy to your file system to control clients who can mount your file system with the required permissions
B. Use VPC security groups to control the network traffic to and from your file system
C. Use Network ACLs to control the network traffic to and from your Amazon EC2 instance
D. Set up the IAM policy root credentials to control and configure the clients accessing the EFS file system
E. Use EFS Access Points to manage application access
F. Use Amazon GuardDuty to curb unwanted access to EFS file system
Answer: A, B & E
Explanation
Correct options:
Use VPC security groups to control the network traffic to and from your file system
Attach an IAM policy to your file system to control clients who can mount your file system with the required permissions
Use EFS Access Points to manage application access
You control which EC2 instances can access your EFS file system by using VPC security group rules and AWS Identity and Access Management (IAM) policies. Use VPC security groups to control the network traffic to and from your file system. Attach an IAM policy to your file system to control which clients can mount your file system and with what permissions, and use EFS Access Points to manage application access. Control access to files and directories with POSIX-compliant user and group-level permissions.
Files and directories in an Amazon EFS file system support standard Unix-style read, write, and execute permissions based on the user ID and group IDs. When an NFS client mounts an EFS file system without using an access point, the user ID and group ID provided by the client are trusted. You can use EFS access points to override the user ID and group IDs used by the NFS client. When users attempt to access files and directories, Amazon EFS checks their user IDs and group IDs to verify that each user has permission to access the objects.
Incorrect options:
Use Network ACLs to control the network traffic to and from your Amazon EC2 instance – Network ACLs operate at the subnet level and not at the instance level.
Set up the IAM policy root credentials to control and configure the clients accessing the EFS file system – There is no such thing as an IAM policy root credentials and this statement has been added as a distractor.
Use Amazon GuardDuty to curb unwanted access to EFS file system – Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. It cannot be used for access control to the EFS file system.
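As an illustration of the access point part of the answer, the boto3 sketch below (hypothetical file system ID, UID/GID, and path) creates an EFS access point that pins clients to a fixed POSIX identity and root directory:
import boto3

efs = boto3.client('efs')

ap = efs.create_access_point(
    FileSystemId='fs-0123456789abcdef0',            # hypothetical EFS file system
    PosixUser={'Uid': 1001, 'Gid': 1001},           # identity enforced for all clients
    RootDirectory={
        'Path': '/app-data',
        'CreationInfo': {'OwnerUid': 1001, 'OwnerGid': 1001, 'Permissions': '750'},
    },
)
print(ap['AccessPointArn'])   # mount via this access point to apply the overrides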

Question 4:
The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.
Which of the following options would allow the engineering team to provision the instances for this use-case?
• You can only use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
• You can use a launch configuration or a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
• You can neither use a launch configuration nor a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
• You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost (Correct)
Explanation
Correct option:
You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost. Hence this is the correct option.
Incorrect options:
You can only use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
You can use a launch configuration or a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost
A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.
You cannot use a launch configuration to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances. Therefore both these options are incorrect.
You can neither use a launch configuration nor a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost – You can use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances. So this option is incorrect.
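A condensed boto3 sketch (hypothetical names, subnets, and launch template ID) of an Auto Scaling group that uses a launch template with a MixedInstancesPolicy to combine On-Demand and Spot capacity across instance types:
import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='mixed-fleet',                    # hypothetical name
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier='subnet-0aaa1111,subnet-0bbb2222',   # hypothetical subnets
    MixedInstancesPolicy={
        'LaunchTemplate': {
            'LaunchTemplateSpecification': {
                'LaunchTemplateId': 'lt-0123456789abcdef0',  # hypothetical launch template
                'Version': '$Latest',
            },
            'Overrides': [
                {'InstanceType': 'm5.large'},
                {'InstanceType': 'm5a.large'},
            ],
        },
        'InstancesDistribution': {
            'OnDemandBaseCapacity': 2,                       # always-On-Demand floor
            'OnDemandPercentageAboveBaseCapacity': 50,       # remainder split between On-Demand and Spot
            'SpotAllocationStrategy': 'capacity-optimized',
        },
    },
)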

Question 13:
A developer has configured inbound traffic for the relevant ports in both the Security Group of the EC2 instance as well as the Network Access Control List (NACL) of the subnet for the EC2 instance. The developer is, however, unable to connect to the service running on the Amazon EC2 instance.
As a solutions architect, how will you fix this issue?
• IAM Role defined in the Security Group is different from the IAM Role that is given access in the Network ACLs
• Security Groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network ACLs are stateless, so you must allow both inbound and outbound traffic (Correct)
• Rules associated with Network ACLs should never be modified from command line. An attempt to modify rules from command line blocks the rule and results in an erratic behavior
• Network ACLs are stateful, so allowing inbound traffic to the necessary ports enables the connection. Security Groups are stateless, so you must allow both inbound and outbound traffic
Explanation
Correct option:
Security Groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network ACLs are stateless, so you must allow both inbound and outbound traffic – Security groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network ACLs are stateless, so you must allow both inbound and outbound traffic.
To enable the connection to a service running on an instance, the associated network ACL must allow both inbound traffic on the port that the service is listening on as well as allow outbound traffic from ephemeral ports. When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client’s source port.
The designated ephemeral port then becomes the destination port for return traffic from the service, so outbound traffic from the ephemeral port must be allowed in the network ACL.
By default, network ACLs allow all inbound and outbound traffic. If your network ACL is more restrictive, then you need to explicitly allow traffic from the ephemeral port range.
If you accept traffic from the internet, then you also must establish a route through an internet gateway. If you accept traffic over VPN or AWS Direct Connect, then you must establish a route through a virtual private gateway.
Incorrect options:
Network ACLs are stateful, so allowing inbound traffic to the necessary ports enables the connection. Security Groups are stateless, so you must allow both inbound and outbound traffic – This is incorrect as already discussed.
IAM Role defined in the Security Group is different from the IAM Role that is given access in the Network ACLs – This is a made-up option and just added as a distractor.
Rules associated with Network ACLs should never be modified from command line. An attempt to modify rules from command line blocks the rule and results in an erratic behavior – This option is a distractor. Network ACL rules can be modified from the command line (for example, with the AWS CLI) without blocking the rule or causing erratic behavior.
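A small boto3 sketch (hypothetical network ACL ID, HTTPS used as the example service port) of the stateless pattern described above: one inbound rule admits the service port and a separate outbound rule admits the ephemeral port range for return traffic:
import boto3

ec2 = boto3.client('ec2')
NACL_ID = 'acl-0123456789abcdef0'   # hypothetical network ACL

# Inbound: allow clients to reach the service port
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol='6',      # protocol 6 = TCP
    RuleAction='allow', Egress=False,
    CidrBlock='0.0.0.0/0', PortRange={'From': 443, 'To': 443},
)

# Outbound: allow return traffic on ephemeral ports, since NACLs are stateless
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol='6',
    RuleAction='allow', Egress=True,
    CidrBlock='0.0.0.0/0', PortRange={'From': 1024, 'To': 65535},
)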


24. EBS

Question 1:
A company runs an application on an Amazon EC2 instance that requires 250 GB of storage space. The application is not used often and has small spikes in usage on weekday mornings and afternoons. The disk I/O can vary, with peaks hitting a maximum of 3,000 IOPS. A Solutions Architect must recommend the most cost-effective storage solution that delivers the performance required.
Which solution should the solutions architect recommend?
Options:
A. Amazon EBS Throughput Optimized HDD (st1)
B. Amazon EBS Provisioned IOPS SSD (io1)
C. Amazon EBS Cold HDD (sc1)
D. Amazon EBS General Purpose SSD (gp2)
Answer: D
Explanation
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time.
Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
In this configuration the volume will provide a baseline performance of 750 IOPS but will always be able to burst to the required 3,000 IOPS during periods of increased traffic.
CORRECT: “Amazon EBS General Purpose SSD (gp2)” is the correct answer.
INCORRECT: “Amazon EBS Provisioned IOPS SSD (io1)” is incorrect. The io1 volume type would be more expensive and is not necessary for the performance levels required.
INCORRECT: “Amazon EBS Cold HDD (sc1)” is incorrect. The sc1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.
INCORRECT: “Amazon EBS Throughput Optimized HDD (st1)” is incorrect. The st1 volume type is not going to deliver the performance requirements as it cannot burst to 3,000 IOPS.
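A one-call boto3 sketch (hypothetical Availability Zone) of the recommended volume; the baseline follows from the 3 IOPS per GiB rule, 3 × 250 = 750 IOPS, with bursts up to 3,000 IOPS:
import boto3

ec2 = boto3.client('ec2')

ec2.create_volume(
    AvailabilityZone='us-east-1a',   # hypothetical AZ of the instance
    Size=250,                        # GiB -> 750 IOPS baseline, 3,000 IOPS burst
    VolumeType='gp2',
)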

Question 2:
A persistent database must be migrated from an on-premises server to an Amazon EC2 instance. The database requires 64,000 IOPS and, if possible, should be stored on a single Amazon EBS volume.
Which solution should a Solutions Architect recommend?
Options:
A. Create an Amazon EC2 instance with four Amazon EBS General Purpose SSD (gp2) volumes attached. Max out the IOPS on each volume and use a RAID 0 stripe set
B. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Provision 64,000 IOPS for the volume
C. Create an Amazon EC2 instance with two Amazon EBS Provisioned IOPS SSD (io1) volumes attached. Provision 32,000 IOPS per volume and create a logical volume using the OS that aggregates the capacity
D. Use an instance from the I3 I/O optimized family and leverage instance store storage to achieve the IOPS requirement
Answer: B
Explanation
A Nitro-based Amazon EC2 instance is used here because io1 volumes deliver their maximum of 64,000 IOPS only on instances built on the Nitro System (other instance families are limited to 32,000 IOPS). For the data storage volume, an io1 volume can support up to 64,000 IOPS, so a single volume with sufficient capacity (at 50 IOPS per GiB) can deliver the requirement.
CORRECT: “Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Provision 64,000 IOPS for the volume” is the correct answer.
INCORRECT: “Use an instance from the I3 I/O optimized family and leverage instance store storage to achieve the IOPS requirement” is incorrect. Instance store volumes are ephemeral, so the data would be lost when the instance stops, which is unsuitable for a persistent database.
INCORRECT: “Create an Amazon EC2 instance with four Amazon EBS General Purpose SSD (gp2) volumes attached. Max out the IOPS on each volume and use a RAID 0 stripe set” is incorrect. This is not a good use case for gp2 volumes. It is much better to use io1, which also meets the requirement of having a single volume with 64,000 IOPS.
INCORRECT: “Create an Amazon EC2 instance with two Amazon EBS Provisioned IOPS SSD (io1) volumes attached. Provision 32,000 IOPS per volume and create a logical volume using the OS that aggregates the capacity” is incorrect. There is no need to create two volumes and aggregate capacity through the OS; the Solutions Architect can simply create a single io1 volume with 64,000 IOPS.
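For comparison, a boto3 sketch (hypothetical Availability Zone) of a single io1 volume provisioned at 64,000 IOPS; at the 50:1 IOPS-to-GiB ratio the volume must be at least 1,280 GiB:
import boto3

ec2 = boto3.client('ec2')

ec2.create_volume(
    AvailabilityZone='us-east-1a',   # hypothetical AZ of the Nitro-based instance
    Size=1280,                       # 64,000 IOPS / 50 IOPS-per-GiB = 1,280 GiB minimum
    VolumeType='io1',
    Iops=64000,
)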

Question 3:
A company plans to make an Amazon EC2 Linux instance unavailable outside of business hours to save costs. The instance is backed by an Amazon EBS volume. There is a requirement that the contents of the instance’s memory must be preserved when it is made unavailable.
How can a solutions architect meet these requirements?
A. Terminate the instance outside business hours. Recover the instance again when required
B. Stop the instance outside business hours. Start the instance again when required
C. Hibernate the instance outside business hours. Start the instance again when required
D. Use Auto Scaling to scale down the instance outside of business hours. Scale up the instance when required.
Answer: C
Explanation
When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance’s EBS root volume and any attached EBS data volumes. When you start your instance:
– The EBS root volume is restored to its previous state
– The RAM contents are reloaded
– The processes that were previously running on the instance are resumed
– Previously attached data volumes are reattached and the instance retains its instance ID
CORRECT: “Hibernate the instance outside business hours. Start the instance again when required” is the correct answer.
INCORRECT: “Stop the instance outside business hours. Start the instance again when required” is incorrect. When an instance is stopped the operating system is shut down and the contents of memory will be lost.
INCORRECT: “Use Auto Scaling to scale down the instance outside of business hours. Scale out the instance when required” is incorrect. Auto Scaling scales does not scale up and down, it scales in by terminating instances and out by launching instances. When scaling out new instances are launched and no state will be available from terminated instances.
INCORRECT: “Terminate the instance outside business hours. Recover the instance again when required” is incorrect. You cannot recover terminated instances, you can recover instances that have become impaired in some circumstances.
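As a hedged sketch of this pattern (not part of the original material), the boto3 calls below launch a hibernation-enabled instance and then hibernate rather than stop it outside business hours; the AMI, instance type, and IDs are assumptions, and hibernation also requires an encrypted EBS root volume and a supported instance type.
```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch (and the EBS root volume must be encrypted).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = response["Instances"][0]["InstanceId"]

# Outside business hours: hibernate instead of a plain stop to preserve RAM contents.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# When needed again, a normal start reloads the RAM contents from the EBS root volume.
ec2.start_instances(InstanceIds=[instance_id])
```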


25. Volumes & Snapshots

Question 1:
One company uses EC2 instances with EBS volumes as server infrastructure. The company’s system operations policy states that all data must be backed up efficiently.
Choose the cost-optimal EBS volume backup method.
Options:
A. Set up periodic snapshot acquisition for EBS
B. Use EBS volume encryption
C. Use the EC2 instance store
D. Configure mirroring for two EBS volumes
Answer: A
Explanation
Option 1 is the correct answer. EBS snapshots allow you to back up data on Amazon EBS volumes to Amazon S3. Snapshots are incremental backups: after the first snapshot, only the blocks on the device that have changed since the most recent snapshot are saved. This minimizes the time required to take a snapshot and saves on storage costs.
Option 2 is incorrect. EBS volume encryption is used for data protection and is not used for data backup.
Option 3 is incorrect. The EC2 instance store is used to store temporary data and is not relevant to this requirement.
Option 4 is incorrect. It is inefficient to have a mirroring configuration for the entire EBS volume. The mirroring configuration is not for backup, but for the redundancy of EBS volumes, allowing you to continue processing on another disk if one volume fails.
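For reference only (not part of the original explanation), a minimal boto3 sketch of taking an EBS snapshot; the volume ID is a placeholder, and in practice periodic snapshots are usually scheduled with Amazon Data Lifecycle Manager rather than ad-hoc calls.
```python
import boto3

ec2 = boto3.client("ec2")

# Take an incremental snapshot of an EBS volume (placeholder volume ID).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup",
)
print(snapshot["SnapshotId"], snapshot["State"])
```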

Question 2:
A company wants some EBS volumes with maximum possible Provisioned IOPS (PIOPS) to support high-performance database workloads on EC2 instances. The company also wants some EBS volumes that can be attached to multiple EC2 instances in the same Availability Zone.
As an AWS Certified Solutions Architect Associate, which of the following options would you identify as correct for the given requirements? (Select two)
Options:
A. Use io1/io2 volumes to enable Multi-Attach on Nitro-based EC2 instances
B. Use io2 volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000
C. Use gp2 volumes to enable Multi-Attach on Nitro-based EC2 instances
D. Use gp3 volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000
E. Use io2 Block Express volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000
Answer: A & E
Explanation
Correct options:
Use io2 Block Express volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000
EBS io2 Block Express is the next generation of Amazon EBS storage server architecture. It has been built for the purpose of meeting the performance requirements of the most demanding I/O intensive applications that run on Nitro-based Amazon EC2 instances. With io2 Block Express volumes, you can provision volumes with Provisioned IOPS (PIOPS) up to 256,000, with an IOPS:GiB ratio of 1,000:1
Use io1/io2 volumes to enable Multi-Attach on Nitro-based EC2 instances
Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same Availability Zone. You can attach multiple Multi-Attach enabled volumes to an instance or set of instances. Each instance to which the volume is attached has full read and write permission to the shared volume. Multi-Attach makes it easier for you to achieve higher application availability in clustered Linux applications that manage concurrent write operations.
Incorrect options:
Use io2 volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000 – For io2, Provisioned IOPS SSD volumes can range in size from 4 GiB to 16 TiB and you can provision from 100 IOPS up to 64,000 IOPS per volume. You can achieve only up to 64,000 IOPS on the instances built on the Nitro System.
Use gp3 volumes on Nitro-based EC2 instances to achieve a maximum Provisioned IOPS of 256,000 – gp3 volumes support a maximum of 16,000 IOPS per volume, far below the required 256,000, so they cannot meet this requirement.
Use gp2 volumes to enable Multi-Attach on Nitro-based EC2 instances – gp2 volumes are not supported for Multi-Attach.
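As an illustrative sketch under placeholder values (not from the original question), the boto3 calls below create an io2 volume with Multi-Attach enabled and attach it to two Nitro-based instances in the same Availability Zone.
```python
import boto3

ec2 = boto3.client("ec2")

# io2 volume with Multi-Attach enabled; it can then be attached to multiple
# Nitro-based instances in the same Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=500,                  # GiB (placeholder)
    Iops=10000,                # placeholder PIOPS
    MultiAttachEnabled=True,
)

# Placeholder instance IDs for two instances in the same AZ.
for instance_id in ["i-0123456789abcdef0", "i-0fedcba9876543210"]:
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=instance_id,
        Device="/dev/sdf",
    )
```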

Question 3:
A junior DevOps engineer wants to change the default configuration for EBS volume termination. By default, the root volume of an EC2 instance for an EBS-backed AMI is deleted when the instance terminates.
Which option below helps change this default behavior to ensure that the volume persists even after the instance terminates?
Options:
A. Set the TerminateOnDelete attribute to false
B. Set the DeleteOnTermination attribute to false
C. Set the TerminateOnDelete attribute to true
D. Set the DeleteOnTermination attribute to true
Answer: B
Explanation
Correct option:
Set the DeleteOnTermination attribute to false
An EC2 instance can be launched from either an instance store-backed AMI or an Amazon EBS-backed AMI. Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached. By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. The default behavior can be changed to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.
Incorrect options:
Set the TerminateOnDelete attribute to true
Set the TerminateOnDelete attribute to false
Both these options are incorrect as there is no such attribute as TerminateOnDelete. These options have been added as distractors.
Set the DeleteOnTermination attribute to true – If you set the DeleteOnTermination attribute to true, then the root volume for an AMI backed by Amazon EBS would be deleted when the instance terminates. Therefore, this option is incorrect.
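A minimal boto3 sketch of this attribute (illustrative only; the AMI, device name, and instance ID are assumptions):
```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root volume after termination by setting DeleteOnTermination to
# false in the block device mapping at launch time.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # root device name depends on the AMI
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)

# The same attribute can also be changed on a running instance.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": False}}
    ],
)
```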


26. AMI Types (EBS vs Instance Store)

Question 1:
A company’s application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this? (Select TWO.)
Options:
A. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination
B. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume
C. Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance
D. Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region
E. Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
Answer: A & E
Explanation
You can copy an Amazon Machine Image (AMI) within or across AWS Regions using the AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2 API, all of which support the CopyImage action.
Using the copied AMI, the solutions architect can then launch an EC2 instance in the second Region.
Note: the AMIs are stored on Amazon S3, however you cannot view them in the S3 management console or work with them programmatically using the S3 API.
CORRECT: “Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region for the destination” is a correct answer.
CORRECT: “Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region” is also a correct answer.
INCORRECT: “Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second Region” is incorrect. You cannot copy EBS volumes directly from EBS to Amazon S3.
INCORRECT: “Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the new instance” is incorrect. You cannot create an EBS volume directly from Amazon S3.
INCORRECT: “Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the second Region using that EBS volume” is incorrect. You cannot create an EBS volume directly from Amazon S3.
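As an illustration of the two correct actions (not part of the original explanation), a boto3 sketch that copies an AMI to a second Region and later launches an instance from the copy; the Regions, AMI ID, and instance type are placeholder assumptions, and the copy must finish before the launch.
```python
import boto3

# Copy the AMI from the source Region to the DR Region (call is made in the destination Region).
ec2_dr = boto3.client("ec2", region_name="us-west-2")   # DR Region is an assumption
copy = ec2_dr.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",              # placeholder source AMI
    SourceRegion="us-east-1",
)

# In a disaster (and once the copied AMI is available), launch instances from it.
ec2_dr.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)
```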

Question 2:
A research group needs a fleet of EC2 instances for a specialized task that must deliver high random I/O performance. Each instance in the fleet would have access to a dataset that is replicated across the instances. Because of the resilient application architecture, the specialized task would continue to be processed even if any instance goes down, as the underlying application architecture would ensure the replacement instance has access to the required dataset.
Which of the following options is the MOST cost-optimal and resource-efficient solution to build this fleet of EC2 instances?
Options
A. Use EBS based EC2 instances
B. Use EC2 instances with EFS mount points
C. Use EC2 instances with access to S3 based storage
D. Use Instance Store based EC2 instances
Answer: D
Explanation
Correct option:
Use Instance Store based EC2 instances
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance store volumes are included as part of the instance’s usage cost.
Because Instance Store based volumes provide high random I/O performance at low cost (the storage is included in the instance’s usage cost) and the resilient architecture can adjust for the loss of any instance, you should use Instance Store based EC2 instances for this use case.
Incorrect options:
Use EBS based EC2 instances – EBS based volumes would need to use Provisioned IOPS (io1) as the storage type and that would incur additional costs. As we are looking for the most cost-optimal solution, this option is ruled out.
Use EC2 instances with EFS mount points – Using EFS implies that extra resources would have to be provisioned. As we are looking for the most resource-efficient solution, this option is also ruled out.
Use EC2 instances with access to S3 based storage – Using EC2 instances with access to S3 based storage does not deliver high random I/O performance, this option is just added as a distractor.


27. ENI vs ENA vs EFA

Question 1:
A legacy tightly-coupled High Performance Computing (HPC) application will be migrated to AWS. Which network adapter type should be used?
Options:
A. Elastic Network Adapter (ENA)
B. Elastic IP Address
C. Elastic Network Interface (ENI)
D. Elastic Fabric Adapter (EFA)
Answer: D
Explanation
An Elastic Fabric Adapter (EFA) is an AWS Elastic Network Adapter (ENA) with added capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS Cloud to tightly-coupled HPC applications. It is ideal for tightly coupled applications because it supports the Message Passing Interface (MPI).
CORRECT: “Elastic Fabric Adapter (EFA)” is the correct answer.
INCORRECT: “Elastic Network Interface (ENI)” is incorrect. The ENI is a basic type of adapter and is not the best choice for this use case.
INCORRECT: “Elastic Network Adapter (ENA)” is incorrect. The ENA, which provides Enhanced Networking, does provide high bandwidth and low inter-instance latency but it does not support the features for a tightly-coupled app that the EFA does.
INCORRECT: “Elastic IP Address” is incorrect. An Elastic IP address is just a static public IP address, it is not a type of network adapter.


28. Encrypted Root Device Volumes & Snapshots


29. Spot Instances & Spot Fleets

Question 1:
Your company uses EC2 instances to develop video distribution services. The EC2 instances poll a queue, receive transcoding requests, and use Amazon Elastic Transcoder to run the transcoding process. If processing is interrupted on one EC2 instance, transcoding is restarted by another instance according to the queue. A large backlog of video processing builds up during transcoding. When this occurs, you should work through the backlog by adding EC2 instances to increase processing capacity. These instances will only be needed until the backlog is reduced.
Choose the cost-optimal instance type for backlog processing that meets this requirement.
Options:
A. Reserved instance
B. On-demand instance
C. Spot instance
D. Dedicated instance
Answer: C
Explanation
In Amazon Elastic Transcoder, video transcoding jobs start in the order in which the pipeline receives the requests. During the job process, many variables such as input file size, resolution, and bit rate affect the conversion speed. For example, a 10-minute video transcoding job with an iPhone 4 preset takes about 5 minutes. When Amazon Elastic Transcoder accepts a large number of jobs, the jobs queue up as a backlog.
In this scenario, it is necessary to temporarily increase the number of instances for transcoding processing in order to work through the backlog. The best instance type is Spot Instances because this additional processing need is only temporary. Spot Instances are typically used for interruptible, temporary processing such as batch jobs. Therefore, option 3 is the correct answer for this situation.
Option 1 is incorrect. Reserved Instances are a purchasing option for EC2 instances that is discounted because the capacity is committed for a long-term period of 1 or 3 years. It is not suitable for temporary processing like this scenario.
Option 2 is incorrect. On-demand instances are also an option, but they do not meet the cost-optimal requirement.
Option 4 is incorrect. Dedicated Instances are used when you want to occupy a physical host server. They do not meet the cost optimization requirement due to their high cost.
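A hedged boto3 sketch of launching the temporary transcoding workers as Spot Instances (illustrative only; the AMI, instance type, and counts are assumptions):
```python
import boto3

ec2 = boto3.client("ec2")

# Launch temporary transcoding workers as Spot Instances; the queue-driven
# design already tolerates interruption, so Spot is the cheapest fit.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder worker AMI
    InstanceType="c5.large",
    MinCount=5,
    MaxCount=5,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```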

Question 2:
A company runs a large batch processing job at the end of every quarter. The processing job runs for 5 days and uses 15 Amazon EC2 instances. The processing must run uninterrupted for 5 hours per day. The company is investigating ways to reduce the cost of the batch processing job.
Which pricing model should the company choose?
Options:
A. Scheduled reserved instances
B. Reserved instances
C. Spot block instances
D. On-demand instances
Answer: C
Explanation
Spot Instances with a defined duration (also known as Spot blocks) are designed not to be interrupted and will run continuously for the duration you select. This makes them ideal for jobs that take a finite time to complete, such as batch processing, encoding and rendering, modeling and analysis, and continuous integration.
Spot Block is the best solution for this job as it only runs once a quarter for 5 days and therefore reserved instances would not be beneficial. Note that the maximum duration of a Spot Block is 6 hours.
CORRECT: “Spot Block Instances” is the correct answer.
INCORRECT: “Reserved Instances” is incorrect. Reserved instances are good for continuously running workloads that run for a period of 1 or 3 years.
INCORRECT: “On-Demand Instances” is incorrect. There is no cost benefit to using on-demand instances.
INCORRECT: “Scheduled Reserved Instances” is incorrect. These reserved instances are ideal for workloads that run for a certain number of hours each day, but not for just 5 days per quarter.

Question 3:
Amazon EC2 instances in a development environment run between 9am and 5pm Monday-Friday. Production instances run 24/7. Which pricing models should be used? (choose 2)
Options:
A. Use On-Demand instances for the production environment
B. Use scheduled reserved instances for the development environment
C. Use Spot instances for the development environment
D. Use Reserved instances for the production environment
E. Use Reserved instances for the development environment
Answer: B & D
Explanation
Scheduled Instances are a good choice for workloads that do not run continuously but do run on a regular schedule. This is ideal for the development environment.
Reserved instances are a good choice for workloads that run continuously. This is a good option for the production environment.
CORRECT: “Use scheduled reserved instances for the development environment” is a correct answer.
CORRECT: “Use Reserved instances for the production environment” is also a correct answer.
INCORRECT: “Use Spot instances for the development environment” is incorrect. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. Spot instances are not suitable for the development environment as important work may be interrupted.
INCORRECT: “Use Reserved instances for the development environment” is incorrect as they should be used for the production environment.
INCORRECT: “Use On-Demand instances for the production environment” is incorrect. There is no long-term commitment required when you purchase On-Demand Instances. However, you do not get any discount and therefore this is the most expensive option.

Question 4:
A solutions architect is creating a system that will run analytics on financial data for 4 hours a night, 5 days a week. The analysis is expected to run for the same duration and cannot be interrupted once it is started. The system will be required for a minimum of 1 year.
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
Options:
A. Spot instances
B. Standard reserved instances
C. On-demand instances
D. Scheduled reserved instances
Answer: D
Explanation
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance, so that you know it is available when you need it. You pay for the time that the instances are scheduled, even if you do not use them.
Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week.
CORRECT: “Scheduled Reserved Instances” is the correct answer.
INCORRECT: “Standard Reserved Instances” is incorrect. As the workload only runs for 4 hours a day, this option would be more expensive.
INCORRECT: “On-Demand Instances” is incorrect as this would be much more expensive as there is no discount applied.
INCORRECT: “Spot Instances” is incorrect as the workload cannot be interrupted once started. With Spot instances workloads can be terminated if the Spot price changes or capacity is required.


30. EC2 Hibernate

Question 1:
You have an in-memory database launched on an EC2 instance and you would like to be able to stop and start the EC2 instance without losing the in-memory state of your database. What do you recommend?
A• Create an AMI from the instance
B• Mount an in-memory EBS Volume
C• Use EC2 Instance Hibernate
D• Use an EC2 Instance Store
Answer: C
Explanation
Correct option:
Use EC2 Instance Hibernate
When you hibernate an instance, AWS signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon EBS root volume. AWS then persists the instance’s Amazon EBS root volume and any attached Amazon EBS data volumes. When you start your instance:
– The Amazon EBS root volume is restored to its previous state
– The RAM contents are reloaded
– The processes that were previously running on the instance are resumed
– Previously attached data volumes are reattached and the instance retains its instance ID
For the given use-case, we must use EC2 Instance Hibernate, which preserves the in-memory state of our EC2 instance upon hibernating it.
Incorrect options:
Create an AMI from the instance – An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
Creating an AMI won’t help, because it is a snapshot of an EBS volume, which represents all the files written on disk, not the state of the memory.
Use an EC2 Instance Store – An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
Using an EC2 Instance Store won’t help either, and we can’t stop an instance that has an instance store anyway.
Mount an in-memory EBS Volume – Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. In-memory EBS volumes don’t exist. This option has been added as a distractor.

Question 2:
A Machine Learning research group uses a proprietary computer vision application hosted on an EC2 instance. Every time the instance needs to be stopped and started again, the application takes about 3 minutes to start as some auxiliary software programs need to be executed so that the application can function. The research group would like to minimize the application bootstrap time whenever the system needs to be stopped and then started at a later point in time.
As a solutions architect, which of the following solutions would you recommend for this use-case?
A• Use EC2 User-Data
B• Use EC2 Instance Hibernate
C• Use EC2 Meta-Data
D• Create an AMI and launch your EC2 instances from that
Answer: B
Explanation
Correct option:
Use EC2 Instance Hibernate
When you hibernate an instance, AWS signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon EBS root volume. AWS then persists the instance’s Amazon EBS root volume and any attached Amazon EBS data volumes.
When you start your instance:
The Amazon EBS root volume is restored to its previous state
The RAM contents are reloaded
The processes that were previously running on the instance are resumed
Previously attached data volumes are reattached and the instance retains its instance ID
By using EC2 Hibernate, we have the capability to resume the instance at any point in time with the application already launched, thus helping us cut the 3-minute start time.
Incorrect options:
Use EC2 User-Data – EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance. Here, the problem is that the application takes 3 minutes to launch, no matter what. EC2 user data won’t help us because it’s just here to help us execute a list of commands, not speed them up.
Use EC2 Meta-Data – EC2 instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups. The EC2 meta-data is a distractor and can only help us determine some metadata attributes on our EC2 instances.
Create an AMI and launch your EC2 instances from that – An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
Creating an AMI may help with all the system dependencies, but it won’t help us with speeding up the application start time.


31. Cloud Watch
32. AWS Command Line
33. IAM Roles with EC2
34. Boot Strap Scripts
35. EC2 Instance Meta Data


36. EFS

Question 1:
One company is building file storage using AWS. This storage requirement requires the use of data transfer over the NFSv4 protocol.
Choose a storage type that meets this requirement.
Options:
A. Amazon FSx
B. EBS
C. EFS
D. S3 Standard
Answer: C
Explanation
Amazon EFS provides a file system that is accessed over the NFSv4 protocol, with file locking and a hierarchical directory structure, enabling secure access from thousands of EC2 instances and on-premises servers. Therefore, option 3 is the correct answer.
Option 1 is incorrect. Amazon FSx for Windows File Server provides an NTFS file system that is accessible to up to thousands of compute instances over the SMB protocol, not NFSv4.
Option 2 is incorrect. Amazon Elastic Block Store (EBS) is block storage and does not use the NFSv4 protocol.
Option 4 is incorrect. S3 is object storage accessed over an HTTP API; it does not support the NFSv4 protocol.

Question 2:
Your company operates a set of EC2 instances hosted on AWS. These are all Linux-based instances and require access to shared data via a standard file interface. Since it is used by multiple instances, the storage where the data is stored requires strong integrity and file locking. So, as a Solutions Architect, you are looking for the best storage option.
Choose the best storage option that meets this requirement.
Options:
A. EFS
B. S3
C. EBS
D. Glacier
Answer: A
Explanation
Option 1 is the correct answer. EFS allows multiple EC2 instances to access the EFS file system and share data at the same time. EFS provides a file system interface and file system access semantics (such as strong consistency and file locks) that allow simultaneous access from up to thousands of Amazon EC2 instances.
Option 2 is incorrect. S3 is an object storage service. Objects stored in S3 can be accessed from anywhere over its HTTP API. It can be used from multiple instances, but it cannot meet all of the requirements, such as file locking.
Option 3 is incorrect. Amazon EBS is a block-level storage service dedicated to Amazon EC2. With limited exceptions (such as EBS Multi-Attach), data cannot be shared between EC2 instances, so it does not meet the requirements.
Option 4 is incorrect. Glacier is archival storage for long-term retention and cannot be used for frequently accessed data.

Question 3:
A company is deploying a fleet of Amazon EC2 instances running Linux across multiple Availability Zones within an AWS Region. The application requires a data storage solution that can be accessed by all of the EC2 instances simultaneously. The solution must be highly scalable and easy to implement. The storage must be mounted using the NFS protocol.
Which solution meets these requirements?
Options:
A. Create an Amazon RDS database and store the data in a BLOB format. Point the application instances to the RDS endpoint
B. Create an Amazon EFS file system with mount targets in each Availability Zone. Configure the application instances to mount the file system
C. Create an Amazon EBS volume and use EBS Multi-Attach to mount the volume to all EC2 instances across each Availability Zone
D. Create an Amazon S3 bucket and create an S3 gateway endpoint to allow access to the file system using the NFS protocol
Answer: B
Explanation
Amazon EFS provides scalable file storage for use with Amazon EC2. You can use an EFS file system as a common data source for workloads and applications running on multiple instances. The EC2 instances can run in multiple AZs within a Region and the NFS protocol is used to mount the file system.
With EFS you can create mount targets in each AZ for lower latency. The application instances in each AZ will mount the file system using the local mount target.
CORRECT: “Create an Amazon EFS file system with mount targets in each Availability Zone. Configure the application instances to mount the file system” is the correct answer.
INCORRECT: “Create an Amazon S3 bucket and create an S3 gateway endpoint to allow access to the file system using the NFS protocol” is incorrect. You cannot use NFS with S3 or with gateway endpoints.
INCORRECT: “Create an Amazon EBS volume and use EBS Multi-Attach to mount the volume to all EC2 instances across each Availability Zone” is incorrect. You cannot use Amazon EBS Multi-Attach across multiple AZs.
INCORRECT: “Create an Amazon RDS database and store the data in a BLOB format. Point the application instances to the RDS endpoint” is incorrect. This is not a suitable storage solution for a file system that is mounted over NFS.
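To illustrate the correct option (a sketch only, with placeholder subnet and security group IDs), the boto3 calls below create an EFS file system and one mount target per Availability Zone; each Linux instance then mounts it over NFS.
```python
import boto3

efs = boto3.client("efs")

# Create the file system, then add one mount target per Availability Zone.
fs = efs.create_file_system(PerformanceMode="generalPurpose")

# Wait until the file system is 'available' before creating mount targets.
for subnet_id in ["subnet-aaa11111", "subnet-bbb22222", "subnet-ccc33333"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# On each Linux instance the file system is then mounted over NFS, e.g.:
#   sudo mount -t nfs4 <file-system-id>.efs.<region>.amazonaws.com:/ /mnt/efs
```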

Question 4:
An application is being created that will use Amazon EC2 instances to generate and store data. Another set of EC2 instances will then analyze and modify the data. Storage requirements will be significant and will continue to grow over time. The application architects require a storage solution.
Which actions would meet these needs?
Options:
A. Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances
B. Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances
C. Store the data in an Amazon EFS filesystem. Mount the file system on the application instances
D. Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances
Answer: C
Explanation
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance or server.
For this scenario, EFS is a great choice as it will provide a scalable file system that can be mounted by multiple EC2 instances and accessed simultaneously.
CORRECT: “Store the data in an Amazon EFS filesystem. Mount the file system on the application instances” is the correct answer.
INCORRECT: “Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances” is incorrect. Although EBS Multi-Attach allows a volume to be attached to multiple Nitro-based instances, it has specific constraints (including being limited to a single Availability Zone) and is not a scalable shared file system for this use case.
INCORRECT: “Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances” is incorrect as S3 Glacier is not a suitable storage location for live access to data, it is used for archival.
INCORRECT: “Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway appliance and the EC2 instances” is incorrect. There is no reason to store the data on-premises in a Storage Gateway, using EFS is a much better solution.

Question 5:
You would like to mount a network file system on Linux instances, where files will be stored and accessed frequently at first, and then infrequently. What solution is the MOST cost-effective?
Options:
A. S3 Intelligent Tiering
B. Glacier Deep Archive
C. EFS IA
D. FSx for Lustre
Answer: C
Explanation
Correct option:
EFS IA
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability.
Amazon EFS Infrequent Access (EFS IA) is a storage class that provides price/performance that is cost-optimized for files that are not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard. Therefore, this is the correct option.
Incorrect options:
S3 Intelligent Tiering – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access.
You can’t mount a network file system on S3 Intelligent Tiering as it’s an object storage service, so this option is incorrect.
Glacier Deep Archive – Amazon S3 Glacier and S3 Glacier Deep Archive are a secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup. They are designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
You can’t mount a network file system on Glacier Deep Archive as it’s an object storage/archival service, so this option is incorrect.
FSx for Lustre – Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. Amazon FSx enables you to use Lustre file systems for any workload where storage speed matters.
FSx for Lustre is a file system better suited to distributed computing for HPC (high-performance computing) and is comparatively expensive, so it is not the most cost-effective option here.
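As a sketch of how EFS IA is typically enabled (illustrative only; the file system ID is a placeholder), a lifecycle policy moves files that have not been accessed for 30 days into the Infrequent Access storage class:
```python
import boto3

efs = boto3.client("efs")

# Transition files not accessed for 30 days to the lower-cost EFS IA storage class.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",   # placeholder file system ID
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```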

Question 6:
A startup has just developed a video backup service hosted on a fleet of EC2 instances. The EC2 instances are behind an Application Load Balancer and the instances are using EBS volumes for storage. The service provides authenticated users the ability to upload videos that are then saved on the EBS volume attached to a given instance. On the first day of the beta launch, users start complaining that they can see only some of the videos in their uploaded videos backup. Every time the users log into the website, they claim to see a different subset of their uploaded videos.
Which of the following is the MOST optimal solution to make sure that users can view all the uploaded videos? (Select two)
A• Mount EFS on all EC2 instances. Write a one time job to copy the videos from all EBS volumes to EFS. Modify the application to use EFS for storing the videos
B• Write a one time job to copy the videos from all EBS volumes to S3 Glacier Deep Archive and then modify the application to use S3 Glacier Deep Archive for storing the videos
C• Write a one time job to copy the videos from all EBS volumes to S3 and then modify the application to use Amazon S3 standard for storing the videos
D• Write a one time job to copy the videos from all EBS volumes to DynamoDB and then modify the application to use DynamoDB for storing the videos
E• Write a one time job to copy the videos from all EBS volumes to RDS and then modify the application to use RDS for storing the videos
Answer: A & C
Explanation
Correct options:
Write a one time job to copy the videos from all EBS volumes to S3 and then modify the application to use Amazon S3 standard for storing the videos
Mount EFS on all EC2 instances. Write a one time job to copy the videos from all EBS volumes to EFS. Modify the application to use EFS for storing the videos
Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
As EBS volumes are attached locally to the EC2 instances, therefore the uploaded videos are tied to specific EC2 instances. Every time the user logs in, they are directed to a different instance and therefore their videos get dispersed across multiple EBS volumes. The correct solution is to use either S3 or EFS to store the user videos.
Incorrect options:
Write a one time job to copy the videos from all EBS volumes to S3 Glacier Deep Archive and then modify the application to use S3 Glacier Deep Archive for storing the videos – Glacier Deep Archive is meant to be used for long term data archival. It cannot be used to serve static content such as videos or images via a web application. So this option is incorrect.
Write a one time job to copy the videos from all EBS volumes to RDS and then modify the application to use RDS for storing the videos – RDS is a relational database and not the right candidate for storing videos.
Write a one time job to copy the videos from all EBS volumes to DynamoDB and then modify the application to use DynamoDB for storing the videos – DynamoDB is a NoSQL database and not the right candidate for storing videos.


37. FSX for Windows & FSX for Lustre

Question 1:
You are considering storage that allows you to share data between multiple EC2 instances. This storage requires the Windows File Server mechanism.
Choose a storage service that can meet this requirement.
Options:
A. Amazon FSx for Windows
B. EFS
C. Amazon S3
D. EBS
Answer: A
Explanation
Option 1 is the correct answer. Amazon FSx for Windows File Server is the AWS service that provides a fully managed native Microsoft Windows file system. Built on Windows Server, Amazon FSx provides the compatibility and functionality that Microsoft applications depend on. Amazon FSx uses the SMB protocol to provide an NTFS file system accessible to up to thousands of compute instances.
Option 2 is incorrect. EFS is a NAS-style file storage service on AWS. EFS provides a file system interface and file system access semantics (such as strong consistency and file locking) that allow simultaneous access from up to thousands of Amazon EC2 instances. It is not compatible with Windows File Server.
Option 3 is incorrect. Amazon S3 is object storage, not file storage.
Option 4 is incorrect. EBS is block storage, not file storage.

Question 2:
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of the company’s applications stores files on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data in sync. A solutions architect needs to replace the file server farm.
Which service should the solutions architect use?
Options:
A. AWS Storage Gateway
B. Amazon FSx
C. Amazon S3
D. Amazon EFS
Answer: B
Explanation
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol.
Amazon FSx is built on Windows Server and provides a rich set of administrative features that include end-user file restore, user quotas, and Access Control Lists (ACLs).
Additionally, Amazon FSx for Windows File Server supports Distributed File System Replication (DFSR) in both Single-AZ and Multi-AZ deployments.
CORRECT: “Amazon FSx” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect as EFS only supports Linux systems.
INCORRECT: “Amazon S3” is incorrect as this is not a suitable replacement for a Microsoft filesystem.
INCORRECT: “AWS Storage Gateway” is incorrect as this service is primarily used for connecting on-premises storage to cloud storage. It consists of a software appliance installed on-premises that can expose SMB shares, but it actually stores the data on S3. It is also used for migration. However, in this case the company needs to replace the file server farm and Amazon FSx is the best choice for this job.
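For illustration only (all IDs, sizes, and the deployment type are assumptions), a boto3 sketch of creating a Multi-AZ Amazon FSx for Windows File Server file system joined to a managed Active Directory:
```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server joined to a managed Active Directory.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                  # GiB (placeholder)
    SubnetIds=["subnet-aaa11111", "subnet-bbb22222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaa11111",
        "ThroughputCapacity": 32,              # MB/s (placeholder)
        "ActiveDirectoryId": "d-0123456789",   # placeholder directory ID
    },
)
```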

Question 3:
A Microsoft Windows file server farm uses Distributed File System Replication (DFSR) to synchronize data in an on-premises environment. The infrastructure is being migrated to the AWS Cloud.
Which service should the solutions architect use to replace the file server farm?
Options:
A. Amazon EBS
B. Amazon FSx
C. AWS Storage Gateway
D. Amazon EFS
Answer: B
Explanation
Amazon FSx for Windows file server supports DFS namespaces and DFS replication. This is the best solution for replacing the on-premises infrastructure.
CORRECT: “Amazon FSx” is the correct answer.
INCORRECT: “Amazon EFS” is incorrect. You cannot replace a Windows file server farm with EFS as it uses a completely different protocol.
INCORRECT: “Amazon EBS” is incorrect. Amazon EBS provides block-based volumes that are attached to EC2 instances. It cannot be used for replacing a shared Windows file server farm using DFSR.
INCORRECT: “AWS Storage Gateway” is incorrect. This service is used for providing cloud storage solutions for on-premises servers. In this case the infrastructure is being migrated into the AWS Cloud.

Question 4:
An Electronic Design Automation (EDA) application produces massive volumes of data that can be divided into two categories. The ‘hot data’ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data’ needs to be kept for reference with quick access for reads and updates at a low cost.
Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?
Options:
A. AWS Glue
B. Amazon EMR
C. Amazon FSx for Windows File Server
D. Amazon FSx for Lustre
Answer: D
Explanation
Correct option:
Amazon FSx for Lustre
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute. FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3.
FSx for Lustre provides the ability to both process the ‘hot data’ in a parallel and distributed fashion as well as easily store the ‘cold data’ on Amazon S3. Therefore this option is the BEST fit for the given problem statement.
Incorrect options:
Amazon FSx for Windows File Server – Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. FSx for Windows does not allow you to present S3 objects as files and does not allow you to write changed data back to S3. Therefore you cannot reference the “cold data” with quick access for reads and updates at low cost. Hence this option is not correct.
Amazon EMR – Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances. EMR does not offer the same storage and processing speed as FSx for Lustre. So it is not the right fit for the given high-performance workflow scenario.
AWS Glue – AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. AWS Glue does not offer the same storage and processing speed as FSx for Lustre. So it is not the right fit for the given high-performance workflow scenario.
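As a hedged sketch (the bucket name, subnet ID, capacity, and deployment type are assumptions), the boto3 call below creates an FSx for Lustre file system linked to an S3 bucket so the ‘hot’ data is processed on Lustre while the ‘cold’ data stays in S3:
```python
import boto3

fsx = boto3.client("fsx")

# FSx for Lustre file system linked to an S3 bucket: 'hot' data is processed on
# the fast Lustre file system, while 'cold' data lives cheaply in S3.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                  # GiB (placeholder)
    SubnetIds=["subnet-aaa11111"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://chip-design-data",          # placeholder bucket
        "ExportPath": "s3://chip-design-data/results",  # placeholder prefix
    },
)
```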

Question 5:
A large financial institution operates an on-premises data center with hundreds of PB of data managed on Microsoft’s Distributed File System (DFS). The CTO wants the organization to transition into a hybrid cloud environment and run data-intensive analytics workloads that support DFS.
Which of the following AWS services can facilitate the migration of these workloads?
Options:
A. AWS Managed Microsoft AD
B. Amazon FSx for Windows File Server
C. Amazon FSx for Lustre
D. Microsoft SQL Server on Amazon
Answer: B
Explanation
Correct option:
Amazon FSx for Windows File Server
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a single folder structure up to hundreds of PB in size. So this option is correct.
Incorrect options:
Amazon FSx for Lustre
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. Amazon FSx enables you to use Lustre file systems for any workload where storage speed matters. FSx for Lustre does not support Microsoft’s Distributed File System (DFS), so this option is incorrect.
AWS Managed Microsoft AD
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Managed Microsoft AD is built on the actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. AWS Managed Microsoft AD does not support Microsoft’s Distributed File System (DFS), so this option is incorrect.
Microsoft SQL Server on Amazon
Microsoft SQL Server on AWS offers you the flexibility to run Microsoft SQL Server database on AWS Cloud. Microsoft SQL Server on AWS does not support Microsoft’s Distributed File System (DFS), so this option is incorrect.

Question 6:
Your company has an on-premises Distributed File System Replication (DFSR) service to keep files synchronized on multiple Windows servers, and would like to migrate to AWS cloud.
What do you recommend as a replacement for the DFSR?
Options:
A. Amazon S3
B. EFS
C. FSx for Windows
D. FSx for Lustre
Answer: C
Explanation
Correct option:
FSx for Windows
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. The Distributed File System Replication (DFSR) service is a multi-master replication engine that is used to keep folders synchronized on multiple servers. Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize shares into a single folder structure up to hundreds of PB in size.
FSx for Windows is a perfect distributed file system, with replication capability, and can be mounted on Windows.
Incorrect options:
FSx for Lustre – Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute. Amazon FSx enables you to use Lustre file systems for any workload where storage speed matters. FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system.
FSx for Lustre is for Linux only, so this option is incorrect.
EFS – Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
EFS is a network file system but for Linux only, so this option is incorrect.
Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Amazon S3 cannot be mounted as a file system on Windows, so this option is incorrect.


38. EC2 Placement Groups


39. HPC

Question 1:
An ivy-league university is assisting NASA to find potential landing sites for exploration vehicles of unmanned missions to our neighboring planets. The university uses High Performance Computing (HPC) driven application architecture to identify these landing sites.
Which of the following EC2 instance topologies should this application be deployed on?
Options:
A. The EC2 instances should be deployed in a partition placement group so that distributed workloads can be handled effectively
B. The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput
C. The EC2 instances should be deployed in a spread placement group so that there are no correlated failures
D. The EC2 instances should be deployed in an Auto Scaling group so that application meets high availability requirements
Answer: B
Explanation
Correct option:
The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput
The key thing to understand in this question is that HPC workloads need to achieve low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications. Cluster placement groups pack instances close together inside an Availability Zone. These are recommended for applications that benefit from low network latency, high network throughput, or both. Therefore this option is the correct answer.
Incorrect options:
The EC2 instances should be deployed in a partition placement group so that distributed workloads can be handled effectively – A partition placement group spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka. A partition placement group can have a maximum of seven partitions per Availability Zone. Because a partition placement group can have partitions in multiple Availability Zones in the same Region, its instances will not necessarily have low-latency network connectivity to each other. Hence the partition placement group is not the right fit for HPC applications.
The EC2 instances should be deployed in a spread placement group so that there are no correlated failures – A spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source. The instances are placed across distinct underlying hardware to reduce correlated failures. You can have a maximum of seven running instances per Availability Zone per group. Because a spread placement group can span multiple Availability Zones in the same Region, its instances will not necessarily have low-latency network connectivity to each other. Hence a spread placement group is not the right fit for HPC applications.
The EC2 instances should be deployed in an Auto Scaling group so that application meets high availability requirements – An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling. You do not use Auto Scaling groups per se to meet HPC requirements.
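A minimal boto3 sketch of the correct topology (illustrative only; the AMI, instance type, and counts are assumptions): create a cluster placement group and launch the HPC nodes into it.
```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement group for the tightly coupled HPC workload.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the nodes into the group; an EFA-capable instance type also benefits MPI traffic.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)
```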


40. WAF

Question 1:
Your company uses S3 as storage for data and runs an application that provides S3 objects to users. As an application administrator, you recently discovered that the URL links for the data provided by this application are being used without permission. You need to address this issue by making external links permanently unavailable.
Select the service you need for this requirement.
Options:
A. Deliver data as an object with a pre-signed URL
B. Apply Referrer restrictions for links provided by AWS WAF
C. Deliver data as an object with signed cookies
D. Restrict delivery by encrypting access processing to S3
Answer: B
Explanation
You can configure content delivery with CloudFront in front of S3 and leverage AWS WAF to implement Referrer restrictions. AWS WAF is a web application firewall that monitors the HTTP and HTTPS requests forwarded to CloudFront and allows you to control access to your content. You can block direct referencing of URL links (hotlinking) with an AWS WAF rule that restricts requests based on the HTTP referrer header. Therefore, option 2 is the correct answer.
Options 1 and 3 are incorrect. CloudFront signed URLs and signed cookies provide much the same functionality and give you control over who can access your content, but they are time-limited and therefore cannot permanently block direct links.
Option 4 is incorrect. Encrypting access to S3 does not restrict distribution of the content.

Question 2:
A media company runs a photo-sharing web application that is accessed across three different countries. The application is deployed on several Amazon EC2 instances running behind an Application Load Balancer. With new government regulations, the company has been asked to block access from two countries and allow access only from the home country of the company.
Which configuration should be used to meet this changed requirement?
Options:
A. Use Geo Restriction feature of Amazon CloudFront in a VPC
B. Configure the security group for the EC2 instances
C. Configure the security group on the Application Load Balancer
D. Configure AWS WAF on the Application Load Balancer in a VPC
Answer: D
Explanation
Correct option:
AWS WAF is a web application firewall service that lets you monitor web requests and protect your web applications from malicious requests. Use AWS WAF to block or allow requests based on conditions that you specify, such as the IP addresses. You can also use AWS WAF preconfigured protections to block common attacks like SQL injection or cross-site scripting.
Configure AWS WAF on the Application Load Balancer in a VPC
You can use AWS WAF with your Application Load Balancer to allow or block requests based on the rules in a web access control list (web ACL). Geographic (Geo) Match Conditions in AWS WAF allows you to use AWS WAF to restrict application access based on the geographic location of your viewers. With geo match conditions you can choose the countries from which AWS WAF should allow access.
Geo match conditions are important for many customers. For example, legal and licensing requirements restrict some customers from delivering their applications outside certain countries. These customers can configure a whitelist that allows only viewers in those countries. Other customers need to prevent the downloading of their encrypted software by users in certain countries. These customers can configure a blacklist so that end-users from those countries are blocked from downloading their software.
Incorrect options:
Use Geo Restriction feature of Amazon CloudFront in a VPC – Geo Restriction feature of CloudFront helps in restricting traffic based on the user’s geographic location. But, CloudFront works from edge locations and doesn’t belong to a VPC. Hence, this option itself is incorrect and given only as a distractor.
Configure the security group on the Application Load Balancer
Configure the security group for the EC2 instances
Security Groups cannot restrict access based on the user’s geographic location.

Question 3:
To improve the performance and security of the application, the engineering team at a company has created a CloudFront distribution with an Application Load Balancer as the custom origin. The team has also set up a Web Application Firewall (WAF) with CloudFront distribution. The security team at the company has noticed a surge in malicious attacks from a specific IP address to steal sensitive data stored on the EC2 instances.
As a solutions architect, which of the following actions would you recommend to stop the attacks?
A• Create a ticket with AWS support to take action against the malicious IP
B• Create a deny rule for the malicious IP in the NACL associated with each of the instances
C• Create a deny rule for the malicious IP in the Security Groups associated with each of the instances
D• Create an IP match condition in the WAF to block the malicious IP address
Answer: D
Explanation
Correct option:
Create an IP match condition in the WAF to block the malicious IP address
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
If you want to allow or block web requests based on the IP addresses that the requests originate from, create one or more IP match conditions. An IP match condition lists up to 10,000 IP addresses or IP address ranges that your requests originate from. So, this option is correct.
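For illustration, a minimal boto3 sketch (again using the current WAFv2 API, which replaced the classic IP match conditions) of an IP set for the malicious address and the rule entry that would block it in the CloudFront-scoped web ACL; the address and names are placeholders.

import boto3

# CLOUDFRONT-scoped WAF resources must be created in us-east-1
wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="blocked-attackers",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.25/32"],         # placeholder malicious IP
)

# Rule entry to add to the web ACL's Rules list, with a Block action
block_rule = {
    "Name": "block-malicious-ip",
    "Priority": 0,
    "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
    "Action": {"Block": {}},
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "BlockMaliciousIP"},
}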
Incorrect options:
Create a deny rule for the malicious IP in the NACL associated with each of the instances – NACLs are not associated with instances. So this option is also ruled out.
Create a deny rule for the malicious IP in the Security Groups associated with each of the instances – You cannot deny rules in Security Groups. So this option is ruled out.
Create a ticket with AWS support to take action against the malicious IP – Managing the security of your application is your responsibility, not that of AWS, so you cannot raise a ticket for this issue.


41. Databases


42. Create an RDS Instance

Question 1:
Your company uses an Amazon RDS MySQL database. As a Solutions Architect, you have created a read replica, and it appears to be handling the heavy read load of the database. However, reports occasionally display old (stale) data.
What is the most likely root cause of this problem?
Options:
A. Since it is a multi-AZ configuration of RDS, the RDS data in another AZ is still old
B. Old data may be displayed due to replication lag
C. The read replica is not set up properly
D. The backup of the original DB has not been set up properly
Answer: B
Explanation
Because read replicas are separate database instances that are replicated asynchronously, some of the latest transactions may not yet be visible due to delays in replicating the data. This is called replication lag. Therefore, option 2 is the correct answer.
Option 1 is incorrect. The RDS multi-AZ configuration does not utilize the secondary database unless a failover is performed, so the multi-AZ configuration does not affect data processing.
Option 3 is incorrect. A misconfigured read replica would not silently serve stale data; the stale reads are explained by replication lag, not by the replica's setup.
Option 4 is incorrect. Even if backups of the source DB instance fail, normal data processing is not affected.

Question 2:
As a Solutions Architect, you plan to use your RDS instance as a database for your applications. To meet your security requirements, you need to ensure that the data stored in your database is encrypted.
What should you do to achieve this requirement?
Options:
A. Enable server-side encryption when configuring RDS
B. Choose a volume that is automatically encrypted when you are to select an EBS volume
C. Enable encryption by choosing an appropriate cluster configuration
D. Enable encryption by setting the security group
Answer: A
Explanation
Encryption at rest can only be enabled when the database is created. To encrypt your Amazon RDS DB instance and snapshots, enable the encryption option in the Amazon RDS DB instance settings. Encryption then covers the DB instance, automated backups, read replicas, and snapshots. Therefore, option 1 is the correct answer.
Option 2 is incorrect. EBS volumes are independent of RDS and are not used to encrypt RDS data.
Option 3 is incorrect. Cluster configuration has nothing to do with encryption; it is a setting for making read processing highly available.
Option 4 is incorrect. Security groups are used for traffic control and are not related to encryption.
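As a rough sketch of how this looks in code, the boto3 call below creates an encrypted instance; all identifiers, the password, and the key alias are placeholders.

import boto3

rds = boto3.client("rds")

# Encryption at rest can only be chosen at creation time
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    StorageEncrypted=True,                 # encrypts the instance, backups, read replicas, and snapshots
    KmsKeyId="alias/aws/rds",              # optional; defaults to the AWS managed RDS key
)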

Question 3:
A company uses an Amazon RDS MySQL database instance to store customer order data. The security team have requested that SSL/TLS encryption in transit must be used for encrypting connections to the database from application servers. The data in the database is currently encrypted at rest using an AWS KMS key.
How can a Solutions Architect enable encryption in transit?
Options:
A. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption in transit enabled
B. Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance
C. Add a self-signed certificate to the RDS DB instance. Use the certificates in all connections to the RDS DB instance
D. Enable encryption in transit using the RDS Management console and obtain a key using AWS KMS
Answer: B
Explanation
Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when Amazon RDS provisions the instance. These certificates are signed by a certificate authority. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks.
You can download a root certificate from AWS that works for all Regions or you can download Region-specific intermediate certificates.
CORRECT: “Download the AWS-provided root certificates. Use the certificates when connecting to the RDS DB instance” is the correct answer.
INCORRECT: “Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption in transit enabled” is incorrect. There is no need to do this because a certificate is created when the DB instance is launched.
INCORRECT: “Enable encryption in transit using the RDS Management console and obtain a key using AWS KMS” is incorrect. You cannot enable/disable encryption in transit using the RDS management console or use a KMS key.
INCORRECT: “Add a self-signed certificate to the RDS DB instance. Use the certificates in all connections to the RDS DB instance” is incorrect. You cannot use self-signed certificates with RDS.
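A minimal sketch of such a connection, assuming the PyMySQL client and that the AWS-provided root certificate bundle has already been downloaded to the application server; the host, credentials, and file path are placeholders.

import pymysql

# Supplying the downloaded root certificate bundle makes the client verify the
# server certificate, so the connection is encrypted in transit with TLS.
conn = pymysql.connect(
    host="orders-db.abc123xyz.eu-west-1.rds.amazonaws.com",
    user="app_user",
    password="REPLACE_ME",
    database="orders",
    ssl={"ca": "/opt/certs/global-bundle.pem"},
)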

Question 4:
A company runs an application that uses an Amazon RDS PostgreSQL database. The database is currently not encrypted. A Solutions Architect has been instructed that due to new compliance requirements all existing and new data in the database must be encrypted. The database experiences high volumes of changes and no data can be lost.
How can the Solutions Architect enable encryption for the database without incurring any data loss?
Options:
A. Create an RDS read replica and specify an encryption key. Promote the encrypted read replica to primary. Update the application to point to the new RDS DB endpoint
B. Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot and update the application. Use AWS DMS to synchronize data between the source and destination RDS DBs
C. Update the RDS DB to Multi-AZ mode and enable encryption for the standby replica. Perform a failover to the standby instance and then delete the unencrypted RDS DB instance
D. Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot. Configure the application to use the new DB endpoint
Answer: B
Explanation
You cannot change the encryption status of an existing RDS DB instance. Encryption must be specified when creating the RDS DB instance. The best way to encrypt an existing database is to take a snapshot, encrypt a copy of the snapshot and restore the snapshot to a new RDS DB instance. This results in an encrypted database that is a new instance. Applications must be updated to use the new RDS DB endpoint.
In this scenario as there is a high rate of change, the databases will be out of sync by the time the new copy is created and is functional. The best way to capture the changes between the source (unencrypted) and destination (encrypted) DB is to use AWS Database Migration Service (DMS) to synchronize the data.
CORRECT: “Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot and update the application. Use AWS DMS to synchronize data between the source and destination RDS DBs” is the correct answer.
INCORRECT: “Create a snapshot of the existing RDS DB instance. Create an encrypted copy of the snapshot. Create a new RDS DB instance from the encrypted snapshot. Configure the application to use the new DB endpoint” is incorrect. This answer creates an encrypted DB instance but does not synchronize the data.
INCORRECT: “Create an RDS read replica and specify an encryption key. Promote the encrypted read replica to primary. Update the application to point to the new RDS DB endpoint” is incorrect. You cannot create an encrypted read replica of an unencrypted RDS DB. The read replica will always have the same encryption status as the RDS DB it is created from.
INCORRECT: “Update the RDS DB to Multi-AZ mode and enable encryption for the standby replica. Perform a failover to the standby instance and then delete the unencrypted RDS DB instance” is incorrect. You also cannot have an encrypted Multi-AZ standby instance of an unencrypted RDS DB.
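A rough boto3 sketch of the snapshot-copy-restore part of this approach (waiters and the DMS replication task itself are omitted, and all identifiers are placeholders):

import boto3

rds = boto3.client("rds")

# 1. Snapshot the existing (unencrypted) instance; wait for it to become available before copying
rds.create_db_snapshot(DBInstanceIdentifier="legacy-pg", DBSnapshotIdentifier="legacy-pg-snap")

# 2. Copy the snapshot with encryption enabled (a KMS key must be supplied)
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-pg-snap",
    TargetDBSnapshotIdentifier="legacy-pg-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)

# 3. Restore a new, encrypted instance from the encrypted snapshot copy
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-pg-encrypted",
    DBSnapshotIdentifier="legacy-pg-snap-encrypted",
)

# 4. AWS DMS (not shown) then replicates the changes made after the snapshot from the
#    old instance to the new one before the application is cut over.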

Question 29:
A retail company wants to share sensitive accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and needs its own copy of the database.
Which of the following would you recommend to securely share the database with the auditor?
A• Create a snapshot of the database in Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket
B• Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket
C• Set up a read replica of the database and configure IAM standard database authentication to grant the auditor access
D• Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key
Answer: D
Explanation
Correct option:
Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key
You can share the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used to encrypt the snapshot with any accounts that you want to be able to access the snapshot. You can share AWS KMS CMKs with another AWS account by adding the other account to the AWS KMS key policy.
Making an encrypted snapshot of the database will give the auditor a copy of the database, as required for the given use case.
Incorrect options:
Create a snapshot of the database in Amazon S3 and assign an IAM role to the auditor to grant access to the object in that bucket – RDS stores the DB snapshots in the Amazon S3 bucket belonging to the same AWS region where the RDS instance is located. RDS stores these on your behalf and you do not have direct access to these snapshots in S3, so it’s not possible to grant access to the snapshot objects in S3.
Export the database contents to text files, store the files in Amazon S3, and create a new IAM user for the auditor with access to that bucket – This solution is feasible though not optimal. It requires a lot of unnecessary work and is difficult to audit when such bulk data is exported into text files.
Set up a read replica of the database and configure IAM standard database authentication to grant the auditor access – Read Replicas make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Creating Read Replicas for audit purposes is overkill. Also, the question mentions that the auditor needs to have their own copy of the database, which is not possible with replicas.
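A minimal boto3 sketch of the sharing steps, assuming a manual snapshot encrypted with a customer managed KMS key (snapshots encrypted with the default aws/rds key cannot be shared); the account ID, key ARN, and identifiers are placeholders.

import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

AUDITOR_ACCOUNT = "999999999999"   # placeholder auditor account ID

# Share the manual, encrypted snapshot with the auditor's account
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-snap",
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# Grant the auditor's account use of the customer managed KMS key that encrypted the snapshot
kms.create_grant(
    KeyId="arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)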

Question 33:
An IT company is working on a client project to build a Supply Chain Management application. The web-tier of the application runs on an EC2 instance and the database tier is on Amazon RDS MySQL. For beta testing, all the resources are currently deployed in a single Availability Zone. The development team wants to improve application availability before the go-live.
Given that all end users of the web application would be located in the US, which of the following would be the MOST resource-efficient solution?
A• Deploy the web-tier EC2 instances in two Availability Zones, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in read replica configuration
B• Deploy the web-tier EC2 instances in two Availability Zones, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration
C• Deploy the web-tier EC2 instances in two regions, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in read replica configuration
D• Deploy the web-tier EC2 instances in two regions, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration
Answer: B
Explanation
Correct option:
Deploy the web-tier EC2 instances in two Availability Zones, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Therefore, deploying the web-tier EC2 instances in two Availability Zones, behind an Elastic Load Balancer would improve the availability of the application.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Deploying the Amazon RDS MySQL database in Multi-AZ configuration would improve availability and hence this is the correct option.
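As a small illustration, converting the existing single-AZ instance to Multi-AZ is a single boto3 call; the identifier below is a placeholder.

import boto3

rds = boto3.client("rds")

# RDS provisions a synchronous standby in another AZ and fails over to it automatically
rds.modify_db_instance(
    DBInstanceIdentifier="scm-mysql",
    MultiAZ=True,
    ApplyImmediately=True,
)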
Incorrect options:
Deploy the web-tier EC2 instances in two Availability Zones, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in read replica configuration
Deploy the web-tier EC2 instances in two regions, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in read replica configuration
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Read replicas are meant to address scalability issues. You cannot use read replicas for improving availability, so both these options are incorrect.
Deploy the web-tier EC2 instances in two regions, behind an Elastic Load Balancer. Deploy the Amazon RDS MySQL database in Multi-AZ configuration – As Elastic Load Balancing does not work across regions, so this option is incorrect.


43. RDS Backups, Multi-AZ & Read Replicas

Question 1:
Your company has an application that consists of ELB, EC2 instances, and an RDS database. Recently, the number of read requests to the RDS database has been increasing, resulting in poor performance.
Select the changes you should make to your architecture to improve RDS performance.
Options:
A. Install CloudFront before accessing the DB
B. Improve processing by making RDS a multi-AZ configuration
C. Increase read replicas of RDS
D. Place DynamoDB as a cache layer in front of the RDS DB
Answer: C
Explanation
Adding a read replica to Amazon RDS improves read performance by offloading read traffic from the source DB instance. Read replicas let you elastically scale out beyond the capacity constraints of a single DB instance for read-heavy workloads; you can create up to 5 read replicas for an RDS DB instance, supporting high-volume read traffic and improving overall read throughput. Therefore, option 3 is the correct answer.
Option 1 is incorrect because CloudFront speeds up global content delivery; it does not improve database read performance.
Option 2 is incorrect. Configuring RDS in a Multi-AZ deployment improves the availability of your DB instance, but it will not improve read performance.
Option 4 is incorrect. Placing ElastiCache (rather than DynamoDB) in front of RDS as a caching layer can improve read performance; DynamoDB is not a suitable caching solution here.
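For illustration, a minimal boto3 sketch of adding a read replica; the identifiers and instance class are placeholders, and the application's read-only queries would then be pointed at the replica's endpoint.

import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
    DBInstanceClass="db.r5.large",
)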

Question 2:
An Amazon RDS Read Replica is being deployed in a separate region. The master database is not encrypted but all data in the new region must be encrypted. How can this be achieved?
Options:
A. Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica
B. Enable encryption using Key Management Service (KMS) when creating the cross-region Read Replica
C. Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read Replica from the snapshot
D. Enabled encryption on the master DB instance, then create an encrypted cross-region Read Replica
Answer: A
Explanation
You cannot create an encrypted Read Replica from an unencrypted master DB instance. You also cannot enable encryption after launch time for the master DB instance. Therefore, you must create a new master DB by taking a snapshot of the existing DB, encrypting it, and then creating the new DB from the snapshot. You can then create the encrypted cross-region Read Replica of the master DB.
CORRECT: “Encrypt a snapshot from the master DB instance, create a new encrypted master DB instance, and then create an encrypted cross-region Read Replica” is the correct answer.
INCORRECT: “Enable encryption using Key Management Service (KMS) when creating the cross-region Read Replica” is incorrect. All other options will not work due to the limitations explained above.
INCORRECT: “Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read Replica from the snapshot” is incorrect. All other options will not work due to the limitations explained above.
INCORRECT: “Enabled encryption on the master DB instance, then create an encrypted cross-region Read Replica” is incorrect. All other options will not work due to the limitations explained above.

Question 3:
A new DevOps engineer has just joined a development team and wants to understand the replication capabilities for RDS Multi-AZ as well as RDS Read-replicas.
Which of the following correctly summarizes these capabilities for the given database?
Options:
A. Multi-AZ follows asynchronous replication and spans one Availability Zone within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
B. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
C. Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
D. Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
Answer: C
Explanation
Correct option:
Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Multi-AZ spans at least two Availability Zones within a single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can be within an Availability Zone, Cross-AZ, or Cross-Region.
Incorrect Options:
Multi-AZ follows asynchronous replication and spans one Availability Zone within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
These three options contradict the earlier details provided in the explanation. To summarize, Multi-AZ follows synchronous replication for RDS. Hence these options are incorrect.

Question 6:
The engineering manager for a content management application wants to set up RDS read replicas to provide enhanced performance and read scalability. The manager wants to understand the data transfer charges while setting up RDS read replicas.
Which of the following would you identify as correct regarding the data transfer charges for RDS read replicas?
Options:
A. There are data transfer charges for replicating data across AWS Regions
B. There are data transfer charges for replicating data within the same AWS Region
C. There are no data transfer charges for replicating data across AWS Regions
D. There are data transfer charges for replicating data within the same Availability Zone
Answer: A
Explanation
Correct option:
There are data transfer charges for replicating data across AWS Regions
RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
A read replica is billed as a standard DB Instance and at the same rates. You are not charged for the data transfer incurred in replicating data between your source DB instance and read replica within the same AWS Region.
Incorrect options:
There are data transfer charges for replicating data within the same Availability Zone
There are data transfer charges for replicating data within the same AWS Region
There are no data transfer charges for replicating data across AWS Regions
These three options contradict the explanation provided above, so these options are incorrect.

Question 21:
What is true about RDS Read Replicas encryption?
A• If the master database is encrypted, the read replicas can be either encrypted or unencrypted
B• If the master database is unencrypted, the read replicas are encrypted
C• If the master database is unencrypted, the read replicas can be either encrypted or unencrypted
D• If the master database is encrypted, the read replicas are encrypted
Answer: D
Explanation
Correct option:
If the master database is encrypted, the read replicas are encrypted
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance. Read replicas can be within an Availability Zone, Cross-AZ, or Cross-Region.
On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots. Therefore, this option is correct.
Incorrect options:
If the master database is encrypted, the read replicas can be either encrypted or unencrypted – If the master database is encrypted, the read replicas are necessarily encrypted, so this option is incorrect.
If the master database is unencrypted, the read replicas can be either encrypted or unencrypted
If the master database is unencrypted, the read replicas are encrypted
If the master database is not encrypted, the read replicas cannot be encrypted, so both these options are incorrect.


44. Dynamo DB

Question 1:
Your company needs to use a fully managed NoSQL database on the AWS cloud. The database must support backups and provide high availability.
Which database meets this requirement?
Options:
A. Amazon Aurora
B. RDS
C. Dynamo DB
D. Redshift
Answer: C
Explanation
Amazon DynamoDB is a fully managed NoSQL database service that provides seamless, scalable, fast and predictable performance. Therefore, option 3 is the correct answer to meet the requirements.
Option 1 is incorrect. Amazon Aurora is a relational database built for the cloud that is compatible with MySQL and PostgreSQL, so it does not meet the NoSQL requirement.
Option 2 is incorrect. Amazon RDS is a managed relational database service, so it also does not meet the NoSQL requirement.
Option 4 is incorrect. Amazon Redshift is a fast, simple and cost-effective data warehouse service that doesn’t meet your requirements.

Question 2:
Your company is developing a new mobile application on AWS. Currently, as a Solutions Architect, you are considering how to save your user settings. The size of the individual custom data will be approximately 10KB. It is estimated that tens of thousands of customers will use this mobile application during the release phase. High-speed processing using this user setting data is required. The datastore that stores user settings should be cost-effective, highly available, scalable, and secure.
Choose the best database to meet this requirement.
Options:
A. Accumulate user settings using RDS
B. Accumulate user settings using S3
C. Accumulate user setting using Redshift cluster
D. Accumulate user settings using DynamoDB
Answer: D
Explanation
In this scenario, each item of user-settings data is only about 10 KB, which is well suited to a NoSQL store. On AWS, DynamoDB is an ideal database service for storing session data, user settings, metadata, and similar small items. DynamoDB is a highly scalable managed service that can serve the tens of thousands of customers expected at release and deliver the required high-speed access to the user-settings data. Therefore, option 4 is the correct answer.
Option 1 is incorrect. Although user-settings data could be stored in RDS, DynamoDB is the better choice for high-volume, low-latency access from a mobile application.
Option 2 is incorrect. S3 is object storage intended for storing data, not for low-latency per-user reads and writes, so it is not suitable here.
Option 3 is incorrect. Redshift is a relational data warehouse used for data analysis; the NoSQL DynamoDB is more suitable for storing user settings and fast processing.

Question 3:
An Amazon VPC contains several Amazon EC2 instances. The instances need to make API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API calls do not traverse the internet.
How can this be accomplished? (Select TWO.)
Options:
A. Create a new DynamoDB table that uses the endpoint
B. Create a VPC peering connection between the VPC and DynamoDB
C. Create an ENI for the endpoint in each of the subnets of the VPC
D. Create a gateway endpoint for DynamoDB
E. Create a route table entry for the endpoint
Answer: D & E
Explanation
Amazon DynamoDB and Amazon S3 support gateway endpoints, not interface endpoints. With a gateway endpoint you create the endpoint in the VPC, attach a policy allowing access to the service, and then specify the route table to create a route table entry in.
CORRECT: “Create a route table entry for the endpoint” is a correct answer.
CORRECT: “Create a gateway endpoint for DynamoDB” is also a correct answer.
INCORRECT: “Create a new DynamoDB table that uses the endpoint” is incorrect as it is not necessary to create a new DynamoDB table.
INCORRECT: “Create an ENI for the endpoint in each of the subnets of the VPC” is incorrect as an ENI is used by an interface endpoint, not a gateway endpoint.
INCORRECT: “Create a VPC peering connection between the VPC and DynamoDB” is incorrect as you cannot create a VPC peering connection between a VPC and a public AWS service as public services are outside of VPCs.
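A minimal boto3 sketch of these two steps; the Region, VPC ID, and route table IDs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Creating a gateway endpoint for DynamoDB and listing the route tables adds the
# required route entries, so instances in the associated subnets reach DynamoDB
# privately without traversing the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc1234", "rtb-0def5678"],
)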

Question 18:
A social photo-sharing web application is hosted on EC2 instances behind an Elastic Load Balancer. The app gives the users the ability to upload their photos and also shows a leaderboard on the homepage of the app. The uploaded photos are stored in S3 and the leaderboard data is maintained in DynamoDB. The EC2 instances need to access both S3 and DynamoDB for these features.
As a solutions architect, which of the following solutions would you recommend as the MOST secure option?
Options:
A. Save the AWS credentials (access key Id and secret access token) in a configuration file within the application code on the EC2 instances. EC2 instances can use these credentials to access S3 and DynamoDB
B. Attach the appropriate IAM role to the EC2 instance profile so that the instance can access S3 and DynamoDB
C. Configure AWS CLI on the EC2 instances using a valid IAM user’s credentials. The application code can then invoke shell scripts to access S3 and DynamoDB via AWS CLI
D. Encrypt the AWS credentials via a custom encryption library and save it in a secret directory on the EC2 instances. The application code can then safely decrypt the AWS credentials to make the API calls to S3 and DynamoDB
Answer: B
Explanation
Correct option:
Attach the appropriate IAM role to the EC2 instance profile so that the instance can access S3 and DynamoDB
Applications that run on an EC2 instance must include AWS credentials in their AWS API requests. You could have your developers store AWS credentials directly within the EC2 instance and allow applications in that instance to use those credentials. But developers would then have to manage the credentials and ensure that they securely pass the credentials to each instance and update each EC2 instance when it’s time to rotate the credentials.
Instead, you should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don’t have to distribute long-term credentials (such as a username and password or access keys) to an EC2 instance. The role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests. Therefore, this option is correct.
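To illustrate why this is the most secure option: with a role attached to the instance profile, the application code contains no credentials at all. A minimal sketch, with placeholder bucket, table, and key names:

import boto3

# On an EC2 instance with an IAM role attached via its instance profile, no keys are
# configured anywhere; the SDK fetches temporary credentials from the instance
# metadata service automatically and handles their rotation.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

photos = s3.list_objects_v2(Bucket="photo-uploads-bucket")
leaderboard = dynamodb.Table("leaderboard").get_item(Key={"user_id": "alice"})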
Incorrect options:
Save the AWS credentials (access key Id and secret access token) in a configuration file within the application code on the EC2 instances. EC2 instances can use these credentials to access S3 and DynamoDB
Configure AWS CLI on the EC2 instances using a valid IAM user’s credentials. The application code can then invoke shell scripts to access S3 and DynamoDB via AWS CLI
Encrypt the AWS credentials via a custom encryption library and save it in a secret directory on the EC2 instances. The application code can then safely decrypt the AWS credentials to make the API calls to S3 and DynamoDB
Keeping the AWS credentials (encrypted or plain text) on the EC2 instance is a bad security practice, therefore these three options using the AWS credentials are incorrect.


45. Advanced Dynamo DB

Question 1:
A retail company has developed a REST API which is deployed in an Auto Scaling group behind an Application Load Balancer. The API stores the user data in DynamoDB and any static content, such as images, are served via S3. On analyzing the usage trends, it is found that 90% of the read requests are for commonly accessed data across all users.
As a Solutions Architect, which of the following would you suggest as the MOST efficient solution to improve the application performance?
Options:
A. Enable ElastiCache Redis for DynamoDB and CloudFront for S3
B. Enable DAX for DynamoDB and ElastiCache Memcached for S3
C. Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
D. Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
Answer: C
Explanation
Correct option:
Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.
DAX is tightly integrated with DynamoDB—you simply provision a DAX cluster, use the DAX client SDK to point your existing DynamoDB API calls at the DAX cluster, and let DAX handle the rest. Because DAX is API-compatible with DynamoDB, you don’t have to make any functional application code changes. DAX is used to natively cache DynamoDB reads.
CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.
When a user requests content that you serve with CloudFront, their request is routed to a nearby Edge Location. If CloudFront has a cached copy of the requested file, CloudFront delivers it to the user, providing a fast (low-latency) response. If the file they’ve requested isn’t yet cached, CloudFront retrieves it from your origin – for example, the S3 bucket where you’ve stored your content.
So, you can use CloudFront to improve application performance to serve static content from S3.
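To make the "API-compatible" point concrete, here is a minimal sketch of the DynamoDB read path. With DAX you would build the table handle from the DAX client SDK (the amazon-dax-client package) pointed at the DAX cluster endpoint instead of plain boto3; the get_item call itself would not change. Table and key names are placeholders.

import boto3

# Plain DynamoDB read. With DAX, only the construction of the client/table handle
# changes (it comes from the DAX SDK and targets the DAX cluster endpoint); this
# get_item call stays exactly the same, which is what API-compatible means here.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-profiles")

response = table.get_item(Key={"user_id": "alice"})
item = response.get("Item")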
Incorrect options:
Enable ElastiCache Redis for DynamoDB and CloudFront for S3
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.
Although you can integrate Redis with DynamoDB, it’s much more involved than using DAX which is a much better fit.
Enable DAX for DynamoDB and ElastiCache Memcached for S3
Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database.
ElastiCache cannot be used as a cache to serve static content from S3, so both these options are incorrect.

Question 2:
The engineering team at an in-home fitness company is evaluating multiple in-memory data stores with the ability to power its on-demand, live leaderboard. The company’s leaderboard requires high availability, low latency, and real-time processing to deliver customizable user data for the community of users working out together virtually from the comfort of their home.
As a solutions architect, which of the following solutions would you recommend? (Select two)
Options:
A. Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements
B. Power the on-demand, live leaderboard using AWS Neptune as it meets the in-memory, high availability, low latency requirements
C. Power the on-demand, live leaderboard using DynamoDB as it meets the in-memory, high availability, low latency requirements
D. Power the on-demand, live leaderboard using RDS Aurora as it meets the in-memory, high availability, low latency requirements
E. Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-memory, high availability, low latency requirements
Answer: A & E
Explanation
Correct options:
Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-memory, high availability, low latency requirements
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store. ElastiCache for Redis can be used to power the live leaderboard, so this option is correct.
Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multiregion, multimaster, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. So DynamoDB with DAX can be used to power the live leaderboard.
Incorrect options:
Power the on-demand, live leaderboard using AWS Neptune as it meets the in-memory, high availability, low latency requirements – Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune is not an in-memory database, so this option is not correct.
Power the on-demand, live leaderboard using DynamoDB as it meets the in-memory, high availability, low latency requirements – DynamoDB is not an in-memory database, so this option is not correct.
Power the on-demand, live leaderboard using RDS Aurora as it meets the in-memory, high availability, low latency requirements – Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. Aurora is not an in-memory database, so this option is not correct.
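To illustrate the leaderboard pattern on ElastiCache Redis, a minimal sketch using the redis-py client and a sorted set; the endpoint, key, and member names are placeholders.

import redis

# Connect to the ElastiCache Redis endpoint (placeholder host)
r = redis.Redis(host="leaderboard.abc123.use1.cache.amazonaws.com", port=6379)

# Record scores: a sorted set keeps members ordered by score automatically
r.zadd("leaderboard", {"alice": 1520, "bob": 1480, "carol": 1610})

# Top 10 players, highest score first, with their scores
top10 = r.zrevrange("leaderboard", 0, 9, withscores=True)

# Increment a player's score after a workout session
r.zincrby("leaderboard", 25, "alice")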

Question 03:
A company uses DynamoDB as a data store for various kinds of customer data, such as user profiles, user events, clicks, and visited links. Some of these use-cases require a high request rate (millions of requests per second), low predictable latency, and reliability. The company now wants to add a caching layer to support high read volumes.
As a solutions architect, which of the following AWS services would you recommend as a caching layer for this use-case? (Select two)
Options:
A. DynamoDB Accelerator (DAX)
B. ElastiCache
C. Elasticsearch
D. RDS
E. Redshift
Answer: A & B
Explanation
Correct options:
DynamoDB Accelerator (DAX) – Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. Therefore, this is a correct option.
ElastiCache – Amazon ElastiCache for Memcached is an ideal front-end for data stores like Amazon RDS or Amazon DynamoDB, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements. Therefore, this is also a correct option.
Incorrect options:
RDS – Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. RDS cannot be used as a caching layer for DynamoDB.
Elasticsearch – Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It cannot be used as a caching layer for DynamoDB.
Redshift – Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis. It cannot be used as a caching layer for DynamoDB.


46. Redshift

Question 1:
As a solutions architect, you are building a business analysis system. This system requires a highly available relational database with an initial storage capacity of 8 TB. You predict that the amount of data will increase by 10 GB daily. In addition, parallel processing of the data is required to handle the expected traffic volume.
Choose the best service that meets this requirement.
Options:
A. Dynamo DB
B. RDS
C. Aurora
D. Redshift
Answer: D
Explanation
Option 4 is the correct answer. Redshift is the best database for business analysis systems: it provides big data storage and parallel query processing to meet the requirements. Redshift is a petabyte-scale, fully managed relational data warehouse service in the cloud. Redshift distributes table rows across compute nodes so data can be processed in parallel. By choosing an appropriate distribution key for each table, you can optimize how data is distributed, balance the workload, and minimize data movement between nodes.
Option 1 is incorrect. DynamoDB is a NoSQL database and does not meet the requirements of a highly available relational database.
Option 2 is incorrect. RDS is a relational database, but it cannot be used as a business analysis system. RDS can parallelize the read process by configuring a read replica. However, data analysis itself cannot be processed in parallel.
Option 3 is incorrect. Aurora MySQL can parallelize some data-intensive query and computation processing. However, for a business analysis system, Redshift is more suitable than Aurora, so Redshift is the better choice in this scenario.
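To illustrate the distribution-key point, a rough sketch that issues the DDL through the Redshift Data API; the cluster, database, table, and column names are placeholders.

import boto3

rsd = boto3.client("redshift-data")

# Distribute rows by customer_id so joins and aggregations on that key stay node-local,
# and sort by sale_date to speed up range-restricted scans.
rsd.execute_statement(
    ClusterIdentifier="bi-cluster",
    Database="analytics",
    DbUser="admin",
    Sql="""
        CREATE TABLE sales (
            sale_id     BIGINT,
            customer_id BIGINT,
            sale_date   DATE,
            amount      DECIMAL(12,2)
        )
        DISTSTYLE KEY
        DISTKEY (customer_id)
        SORTKEY (sale_date);
    """,
)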

Question 2:
Your company is trying to build a BI (business intelligence) system using AWS Redshift. As the Solutions Architect, you are required to use Redshift clusters in a cost-effective manner.
Which of the options will help to meet this requirement?
Options:
A. Removing unnecessary snapshot settings
B. Not using enhanced VPC routing
C. Using Spot Instances in your cluster
D. Removing unnecessary CloudWatch metric settings
Answer: A
Explanation
Amazon Redshift provides free snapshot storage up to a limit; once your snapshot storage exceeds that free allowance, you are charged for the additional storage. To avoid this, keep the automated snapshot retention period short and delete manual snapshots that you no longer need. Therefore, option 1 is the correct answer.
Option 2 is incorrect. With enhanced VPC routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories to go through your Amazon VPC. Whether or not this setting is enabled does not affect cost.
Option 3 is incorrect. Spot capacity can be interrupted at any time, and Redshift clusters cannot run on Spot Instances; cluster nodes are billed on-demand or as Reserved Nodes.
Reserved Nodes are available for Amazon Redshift. By committing to a one-year or three-year term you can save up to 75% compared with on-demand pricing, without any interruption risk.
Option 4 is incorrect. The CloudWatch metrics that Redshift publishes are free (only custom metrics incur charges), so deleting metric settings does not reduce any charges.
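A minimal boto3 sketch of the two cost-control actions described above; identifiers are placeholders.

import boto3

redshift = boto3.client("redshift")

# Shorten the automated snapshot retention period so older automated snapshots age out sooner
redshift.modify_cluster(
    ClusterIdentifier="bi-cluster",
    AutomatedSnapshotRetentionPeriod=1,    # days
)

# Delete manual snapshots that are no longer needed
redshift.delete_cluster_snapshot(SnapshotIdentifier="bi-cluster-manual-2023-01-01")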

Question 22:
An IT company has built a solution wherein a Redshift cluster writes data to an Amazon S3 bucket belonging to a different AWS account. However, it is found that the files created in the S3 bucket using the UNLOAD command from the Redshift cluster are not even accessible to the S3 bucket owner.
What could be the reason for this denial of permission for the bucket owner?
A• When objects are uploaded to S3 bucket from a different AWS account, the S3 bucket owner will get implicit permissions to access these objects. This issue seems to be due to an upload error that can be fixed by providing manual access from AWS console
B• The owner of an S3 bucket has implicit access to all objects in his bucket. Permissions are set on objects after they are completely copied to the target location. Since the owner is unable to access the uploaded files, the write operation may be still in progress
C• When two different AWS accounts are accessing an S3 bucket, both the accounts must share the bucket policies. An erroneous policy can lead to such permission failures
D• By default, an S3 object is owned by the AWS account that uploaded it. So the S3 bucket owner will not implicitly have access to the objects written by the Redshift cluster
Answer: D
Explanation
Correct option:
By default, an S3 object is owned by the AWS account that uploaded it. So the S3 bucket owner will not implicitly have access to the objects written by Redshift cluster – By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. Because the Amazon Redshift data files from the UNLOAD command were put into your bucket by another account, you (the bucket owner) don’t have default permission to access those files.
To get access to the data files, an AWS Identity and Access Management (IAM) role with cross-account permissions must run the UNLOAD command again. Follow these steps to set up the Amazon Redshift cluster with cross-account permissions to the bucket:
1. From the account of the S3 bucket, create an IAM role (Bucket Role) with permissions to the bucket.
2. From the account of the Amazon Redshift cluster, create another IAM role (Cluster Role) with permissions to assume the Bucket Role.
3. Update the Bucket Role to grant bucket access and create a trust relationship with the Cluster Role.
4. From the Amazon Redshift cluster, run the UNLOAD command using the Cluster Role and Bucket Role.
This solution doesn’t apply to Amazon Redshift clusters or S3 buckets that use server-side encryption with AWS Key Management Service (AWS KMS).
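A rough sketch of step 4, running UNLOAD with role chaining through the Redshift Data API; the role ARNs, cluster, database, and bucket names are placeholders.

import boto3

rsd = boto3.client("redshift-data")

# Role chaining: the cluster assumes the Cluster Role, which is trusted to assume the
# Bucket Role in the bucket owner's account, so the written objects are accessible there.
rsd.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="warehouse",
    DbUser="admin",
    Sql=(
        "UNLOAD ('SELECT * FROM sales') "
        "TO 's3://partner-bucket/exports/sales_' "
        "IAM_ROLE 'arn:aws:iam::111122223333:role/ClusterRole,"
        "arn:aws:iam::444455556666:role/BucketRole';"
    ),
)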
Incorrect options:
When objects are uploaded to S3 bucket from a different AWS account, the S3 bucket owner will get implicit permissions to access these objects. This issue seems to be due to an upload error that can be fixed by providing manual access from AWS console – By default, an S3 object is owned by the AWS account that uploaded it. So, the bucket owner will not have any default permissions on the objects. Therefore, this option is incorrect.
The owner of an S3 bucket has implicit access to all objects in his bucket. Permissions are set on objects after they are completely copied to the target location. Since the owner is unable to access the uploaded files, the write operation may be still in progress – This is an incorrect statement, given only as a distractor.
When two different AWS accounts are accessing an S3 bucket, both the accounts must share the bucket policies. An erroneous policy can lead to such permission failures – This is an incorrect statement, given only as a distractor.


47. Aurora

Question 1:
A company plans to migrate its PostgreSQL database to AWS. As a Solutions Architect, you have been entrusted with selecting the best database. The requirements are:
The database needs to be a standard database that performs SQL processing for business purposes.
The amount of data exceeds 15 TB, and the workload exceeds 10,000 accesses per day, which requires substantial processing capacity.
You are also required to configure replicas for automatic backup to increase availability.
Choose a service that meets this requirement.
Options:
A. PostgreSQL RDS
B. DynamoDB
C. Configure PostgreSQL on EC2 instance
D. Aurora
Answer: D
Explanation
Option 4 is the correct answer. Amazon Aurora is a MySQL and PostgreSQL compatible relational database for the cloud that combines the performance and availability of traditional databases with the simplicity and cost efficiency of open source databases.
In this scenario, the amount of data is over 15TB and the daily transaction volume is over 10,000 accesses.
Options 1 and 3 are incorrect. RDS and EC2 instance-based PostgreSQL are not enough to meet this transaction volume, and the correct answer is to choose Aurora, which has higher performance than RDS PostgreSQL.
Option 2 is incorrect. Since DynamoDB is a NoSQL database, it does not meet the requirements of this case. Amazon Aurora is up to 5 times faster than a standard MySQL database and up to 3 times faster than a standard PostgreSQL database. It also offers the same security, availability, and reliability as a commercial database at one-tenth the cost. Amazon Aurora is a fully managed service with RDS that automates time-consuming administrative tasks such as hardware provisioning, database setup, patching, and backup.

Question 2:
An insurance company has a web application that serves users in the United Kingdom and Australia. The application includes a database tier using a MySQL database hosted in eu-west-2. The web tier runs from eu-west-2 and ap-southeast-2. Amazon Route 53 geoproximity routing is used to direct users to the closest web tier. It has been noted that Australian users receive slow response times to queries.
Which changes should be made to the database tier to improve performance?
Options:
A. Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions
B. Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance
C. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the Australian Region
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in ap-southeast-2
Answer: D
Explanation
The issue here is the latency of read queries being directed from Australia to the UK, which is a great physical distance away. A solution is required to improve read performance in Australia.
An Aurora global database consists of one primary AWS Region where your data is mastered, and up to five read-only, secondary AWS Regions. Aurora replicates data to the secondary AWS Regions with typical latency of under a second. You issue write operations directly to the primary DB instance in the primary AWS Region.
This solution will provide better performance for users in the Australia Region for queries. Writes must still take place in the UK Region but read performance will be greatly improved.
CORRECT: “Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure read replicas in ap-southeast-2” is the correct answer.
INCORRECT: “Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the Australian Region” is incorrect. The database is located in UK. If the database is migrated to Australia then the reverse problem will occur. Multi-AZ does not assist with improving query performance across Regions.
INCORRECT: “Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable replication to additional Regions” is incorrect as a relational database running on MySQL is unlikely to be compatible with DynamoDB.
INCORRECT: “Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of MySQL to reduce the load on the primary instance” is incorrect as you can only put ALBs in front of the web tier, not the DB tier.
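A rough boto3 sketch of setting up such a global database from the existing regional cluster; all identifiers are placeholders, and creation of DB instances in the secondary Region is omitted for brevity.

import boto3

# Promote the existing regional Aurora cluster into a global database
rds_primary = boto3.client("rds", region_name="eu-west-2")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="insurance-global",
    SourceDBClusterIdentifier="arn:aws:rds:eu-west-2:111122223333:cluster:insurance-aurora",
)

# Add a read-only secondary cluster in ap-southeast-2 for low-latency local reads
rds_secondary = boto3.client("rds", region_name="ap-southeast-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="insurance-aurora-apse2",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="insurance-global",
)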

Question 3:
A company runs a web application that serves weather updates. The application runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto scaling group behind an Application Load Balancer (ALB). The instances store data in an Amazon Aurora database. A solutions architect needs to make the application more resilient to sporadic increases in request rates.
Which architecture should the solutions architect implement? (Select TWO.)
Options:
A. Add an Amazon CloudFront distribution in front of ALB
B. Add an AWS WAF in front of ALB
C. Add an AWS Global Accelerator endpoint
D. Add Amazon Aurora Replicas
E. Add an AWS Transit Gateway to the AZs
Answer: A & D
Explanation
The architecture is already highly resilient, but it may be subject to performance degradation if there are sudden increases in request rates. To resolve this, Amazon Aurora Read Replicas can be used to serve read traffic, which offloads requests from the main database. On the frontend, an Amazon CloudFront distribution can be placed in front of the ALB; it will cache content for better performance and also offload requests from the backend.
CORRECT: “Add Amazon Aurora Replicas” is the correct answer.
CORRECT: “Add an Amazon CloudFront distribution in front of the ALB” is the correct answer.
INCORRECT: “Add an AWS WAF in front of the ALB” is incorrect. A web application firewall protects applications from malicious attacks. It does not improve performance.
INCORRECT: “Add an AWS Transit Gateway to the Availability Zones” is incorrect as Transit Gateway is used to interconnect VPCs and on-premises networks, not to absorb request spikes.
INCORRECT: “Add an AWS Global Accelerator endpoint” is incorrect as this service is used for directing users to different instances of the application in different regions based on latency.

Question 4:
A financial services company has a web application with an application tier running in the U.S and Europe. The database tier consists of a MySQL database running on Amazon EC2 in us-west-1. Users are directed to the closest application tier using Route 53 latency-based routing. The users in Europe have reported poor performance when running queries.
Which changes should a Solutions Architect make to the database tier to improve performance?
Options:
A. Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions
B. Create an Amazon RDS Read Replica in one of the European regions. Configure the application tier in Europe to use the read replica for queries
C. Migrate the database to Amazon RedShift. Use AWS DMS to synchronize data. Configure applications to use the RedShift data warehouse for queries
D. Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure the application tier in Europe to use the local reader endpoint
Answer: D
Explanation
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.
A global database can be configured in the European region and then the application tier in Europe will need to be configured to use the local database for reads/queries.
CORRECT: “Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure the application tier in Europe to use the local reader endpoint” is the correct answer.
INCORRECT: “Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions” is incorrect. You cannot configure a multi-AZ DB instance to run in another Region, it must be in the same Region but in a different Availability Zone.
INCORRECT: “Migrate the database to Amazon RedShift. Use AWS DMS to synchronize data. Configure applications to use the RedShift data warehouse for queries” is incorrect. RedShift is a data warehouse and used for running analytics queries on data that is exported from transactional database systems. It should not be used to reduce latency for users of a database, and is not a live copy of the data.
INCORRECT: “Create an Amazon RDS Read Replica in one of the European regions. Configure the application tier in Europe to use the read replica for queries” is incorrect. You cannot create an RDS Read Replica of a database that is running on Amazon EC2. You can only create read replicas of databases running on Amazon RDS.
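For illustration, here is a minimal boto3 sketch of how an existing Aurora cluster could be promoted to a global database with a secondary, read-only cluster in Europe. All identifiers, Regions, and instance classes below are placeholders, not values from the question.

import boto3

rds_us = boto3.client("rds", region_name="us-west-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Create the global database from the existing primary cluster
rds_us.create_global_cluster(
    GlobalClusterIdentifier="app-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-west-1:123456789012:cluster:app-primary",
)

# Add a secondary cluster in Europe; its reader endpoint serves local queries
rds_eu.create_db_cluster(
    DBClusterIdentifier="app-secondary-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global-db",
)
rds_eu.create_db_instance(
    DBInstanceIdentifier="app-secondary-eu-1",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="app-secondary-eu",
)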

Question 5:
The flagship application for a gaming company connects to an Amazon Aurora database and the entire technology stack is currently deployed in the United States. Now, the company has plans to expand to Europe and Asia for its operations. It needs the games table to be accessible globally but needs the users and games_played tables to be regional only.
How would you implement this with minimal application refactoring?
Options:
A. Use a DynamoDB global table for the games table and use Amazon Aurora for the users and games_played tables
B. Use an Amazon Aurora Global Database for the games table and use DynamoDB tables for the users and games_played tables
C. Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
D. Use a DynamoDB global table for the games table and use DynamoDB tables for the users and games_played tables
Answer: C
Explanation
Correct option:
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. Aurora is not an in-memory database.
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Amazon Aurora Global Database is the correct choice for the given use-case.
For the given use-case, we, therefore, need to have two Aurora clusters, one for the global table (games table) and the other one for the local tables (users and games_played tables).
Incorrect options:
Use an Amazon Aurora Global Database for the games table and use DynamoDB tables for the users and games_played tables
Use a DynamoDB global table for the games table and use Amazon Aurora for the users and games_played tables
Use a DynamoDB global table for the games table and use DynamoDB tables for the users and games_played tables
Here, we want minimal application refactoring. DynamoDB and Aurora have completely different APIs, because Aurora is a SQL database and DynamoDB is NoSQL. So all three options are incorrect, as each of them includes DynamoDB as a component.

Question 6:
A gaming company uses Amazon Aurora as its primary database service. The company has now deployed 5 multi-AZ read replicas to increase read throughput and for use as failover targets. The replicas have been assigned the following failover priority tiers, with the corresponding sizes given in parentheses: tier-1 (16TB), tier-1 (32TB), tier-10 (16TB), tier-15 (16TB), tier-15 (32TB).
In the event of a failover, Amazon RDS will promote which of the following read replicas?
Options:
A. Tier-1 (32TB)
B. Tier-1 (16TB)
C. Tier-15 (32TB)
D. Tier-10 (16TB)
Answer: A
Explanation
Correct option:
Tier-1 (32TB)
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).
For Amazon Aurora, each Read Replica is associated with a priority tier (0-15). In the event of a failover, Amazon Aurora will promote the Read Replica that has the highest priority (the lowest numbered tier). If two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon Aurora promotes an arbitrary replica in the same promotion tier.
Therefore, for this problem statement, the Tier-1 (32TB) replica will be promoted.
Incorrect options:
Tier-15 (32TB)
Tier-1 (16TB)
Tier-10 (16TB)
Given the failover rules discussed earlier in the explanation, these three options are incorrect.
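As a rough illustration, the promotion tier is simply a property of each Aurora reader. The boto3 sketch below adds a reader to an assumed cluster with tier 1; the identifiers and instance class are placeholders, not values from the question.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="game-db-reader-2",
    DBInstanceClass="db.r5.2xlarge",
    Engine="aurora-mysql",
    DBClusterIdentifier="game-db-cluster",
    PromotionTier=1,  # 0-15; among replicas in the same tier, the largest is promoted first
)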

Question 7:
A company manages a multi-tier social media application that runs on EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones and use an Amazon Aurora database. As a solutions architect, you have been tasked to make the application more resilient to periodic spikes in request rates.
Which of the following solutions would you recommend for the given use-case? (Select two)
Options:
A. Use AWS Global Accelerator
B. Use AWS Shield
C. Use AWS Direct Connect
D. Use Aurora Replica
E. Use CloudFront distribution in front of the Application Load Balancer
Answer: D & E
Explanation
Correct options:
You can use Aurora Replicas and a CloudFront distribution to make the application more resilient to spikes in request rates.
Use Aurora Replica
Aurora Replicas have two main purposes. You can issue queries to them to scale the read operations for your application. You typically do so by connecting to the reader endpoint of the cluster. That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster. Aurora Replicas also help to increase availability. If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
Use CloudFront distribution in front of the Application Load Balancer
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
CloudFront offers an origin failover feature to help support your data resiliency needs. CloudFront is a global service that delivers your content through a worldwide network of data centers called edge locations or points of presence (POPs). If your content is not already cached in an edge location, CloudFront retrieves it from an origin that you’ve identified as the source for the definitive version of the content.
Incorrect options:
Use AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency. There are two tiers of AWS Shield – Standard and Advanced. Shield cannot be used to improve application resiliency to handle spikes in traffic.
Use AWS Global Accelerator – AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Since CloudFront is better for improving application resiliency to handle spikes in traffic, so this option is ruled out.
Use AWS Direct Connect – AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC. Direct Connect cannot be used to improve application resiliency to handle spikes in traffic.
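To make the reader/writer endpoint split concrete, here is a hypothetical Python sketch that sends reads to the Aurora cluster reader endpoint and writes to the cluster (writer) endpoint. It assumes the PyMySQL driver is installed, and the endpoints, credentials, and table names are placeholders.

import pymysql  # assumed third-party MySQL driver

WRITER = "social-db.cluster-abc123.us-east-1.rds.amazonaws.com"      # cluster (writer) endpoint
READER = "social-db.cluster-ro-abc123.us-east-1.rds.amazonaws.com"   # cluster reader endpoint

def run_query(host, sql):
    # autocommit keeps the sketch short; real code would manage transactions explicitly
    conn = pymysql.connect(host=host, user="app", password="secret",
                           database="social", autocommit=True)
    try:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()
    finally:
        conn.close()

# Reads are spread across all Aurora Replicas behind the reader endpoint
timeline = run_query(READER, "SELECT id, body FROM posts ORDER BY created_at DESC LIMIT 50")

# Writes always go through the writer endpoint
run_query(WRITER, "INSERT INTO posts (user_id, body) VALUES (42, 'hello')")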

Question 8:
A company is developing a healthcare application that cannot afford any downtime for database write operations. The company has hired you as an AWS Certified Solutions Architect Associate to build a solution using Amazon Aurora.
Which of the following options would you recommend?
• Set up an Aurora multi-master DB cluster (Correct)
• Set up an Aurora provisioned DB cluster
• Set up an Aurora Global Database cluster
• Set up an Aurora serverless DB cluster
Explanation
Correct option:
Set up an Aurora multi-master DB cluster
In a multi-master cluster, all DB instances can perform write operations. There isn’t any failover when a writer DB instance becomes unavailable, because another writer DB instance is immediately available to take over the work of the failed instance. AWS refers to this type of availability as continuous availability, to distinguish it from the high availability (with brief downtime during failover) offered by a single-master cluster. For applications where you can’t afford even brief downtime for database write operations, a multi-master cluster can help to avoid an outage when a writer instance becomes unavailable. The multi-master cluster doesn’t use the failover mechanism, because it doesn’t need to promote another DB instance to have read/write capability.
Incorrect options:
Set up an Aurora serverless DB cluster
Set up an Aurora provisioned DB cluster
Set up an Aurora Global Database cluster
These three options represent Aurora single-master clusters. In a single-master cluster, a single DB instance performs all write operations and any other DB instances are read-only. If the writer DB instance becomes unavailable, a failover mechanism promotes one of the read-only instances to be the new writer. Because there is a brief downtime during this failover, these three options are incorrect for the given use case.


48. Elasticache


49. Database Migration Services (DMS)

Question 1:
The database tier of a web application is running on a Windows server on-premises. The database is a Microsoft SQL Server database. The application owner would like to migrate the database to an Amazon RDS instance.
How can the migration be executed with minimal administrative effort and downtime?
Options:
A. Use the AWS Server Migration Service (SMS) to migrate the server to Amazon EC2. Use AWS Database Migration Service (DMS) to migrate the database to RDS
B. Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS
C. Use AWS DataSync to migrate the data from the database to Amazon S3. Use AWS Database Migration Service (DMS) to migrate the database to RDS
D. Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS. Use the Schema Conversion Tool (SCT) to enable conversion from Microsoft SQL Server to Amazon RDS
Answer: B
Explanation
You can directly migrate Microsoft SQL Server from an on-premises server into Amazon RDS using the Microsoft SQL Server database engine. This can be achieved using the native Microsoft SQL Server tools, or using AWS DMS.
CORRECT: “Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS” is the correct answer.
INCORRECT: “Use the AWS Server Migration Service (SMS) to migrate the server to Amazon EC2. Use AWS Database Migration Service (DMS) to migrate the database to RDS” is incorrect. You do not need to use the AWS SMS service to migrate the server into EC2 first. You can directly migrate the database online with minimal downtime.
INCORRECT: “Use AWS DataSync to migrate the data from the database to Amazon S3. Use AWS Database Migration Service (DMS) to migrate the database to RDS” is incorrect. AWS DataSync is used for migrating data, not databases.
INCORRECT: “Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS. Use the Schema Conversion Tool (SCT) to enable conversion from Microsoft SQL Server to Amazon RDS” is incorrect. You do not need to use the SCT as you are migrating into the same destination database engine (RDS is just the platform).
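As an illustration of a “direct” DMS migration, the boto3 sketch below creates a replication instance, source and target endpoints, and a full-load-plus-CDC task. Every identifier, hostname, and credential is a placeholder; a real migration would also need network connectivity to the source and endpoint connection tests.

import boto3

dms = boto3.client("dms")

instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="sqlserver-migration",
    ReplicationInstanceClass="dms.c5.large",
)["ReplicationInstance"]
# (wait for the replication instance to become available before creating the task)

source = dms.create_endpoint(
    EndpointIdentifier="onprem-sqlserver",
    EndpointType="source",
    EngineName="sqlserver",
    ServerName="sql01.corp.example.com",
    Port=1433,
    Username="dms_user",
    Password="example-password",
    DatabaseName="appdb",
)["Endpoint"]

target = dms.create_endpoint(
    EndpointIdentifier="rds-sqlserver",
    EndpointType="target",
    EngineName="sqlserver",
    ServerName="appdb.abc123.us-east-1.rds.amazonaws.com",
    Port=1433,
    Username="admin",
    Password="example-password",
    DatabaseName="appdb",
)["Endpoint"]

dms.create_replication_task(
    ReplicationTaskIdentifier="appdb-full-load-and-cdc",
    SourceEndpointArn=source["EndpointArn"],
    TargetEndpointArn=target["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",  # ongoing replication keeps downtime to a short cutover window
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)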


50. Caching Strategies
51. EMR
52. Directory Service
53. IAM Policies
54. Resource Access Manager (RAM)
55. Single Sign-On
56. Route 53 – Domain Name Server (DNS)
57. Route 53 – Register a Domain Name Lab


58. Route 53 Routing Policies

Question 1:
The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Application Load Balancer. The team wants to route traffic to multiple back-end services based on the URL path of the request, so that requests for www.example.com/orders go to one microservice and requests for www.example.com/products go to another.
Which of the following features of Application Load Balancers can be used for this use-case?
Options:
A. Path-based Routing
B. HTTP header-based routing
C. Query string parameter-based routing
D. Host-based routing
Answer: A
Explanation
Correct option:
Path-based Routing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request. Here are the different types –
Host-based Routing:
You can route a client request based on the Host field of the HTTP header allowing you to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the request.
HTTP header-based routing:
You can route a client request based on the value of any standard or custom HTTP header.
HTTP method-based routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based routing:
You can route a client request based on source IP address CIDR from where the request originates.
Path-based Routing Overview:
You can use path conditions to define rules that route requests based on the URL in the request (also known as path-based routing).
The path pattern is applied only to the path of the URL, not to its query parameters.
Incorrect options:
Query string parameter-based routing
HTTP header-based routing
Host-based Routing
As described above, none of these three routing types route requests based on the URL path, so these three options are incorrect.
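For illustration, path-based routing is configured as listener rules on the ALB. The boto3 sketch below adds two such rules; the listener and target group ARNs are placeholders, not real resources.

import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/shop-alb/abc/def"    # placeholder
ORDERS_TG    = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/orders-svc/abc123"    # placeholder
PRODUCTS_TG  = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/products-svc/def456"  # placeholder

# Requests for /orders* go to the orders microservice
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": ORDERS_TG}],
)

# Requests for /products* go to the products microservice
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/products*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": PRODUCTS_TG}],
)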


59. Route 53 Simple Routing Policy

Question 1:
As a Solutions Architect, you are building a web application that runs on two EC2 instances. You would like to configure Route 53 to route requests randomly across the two servers.
Choose a routing policy that meets this requirement.
Options:
A. Simple routing policy
B. Weighted routing policy
C. Latency routing policy
D. Failover routing policy
Answer: A
Explanation
Option 1 is the correct answer. Simple routing is normally used when a domain has a single resource that performs a specific function, but a simple record can contain multiple values; Route 53 returns all of the values to the client in random order, so traffic is spread roughly randomly across the instances. Therefore, simple routing is sufficient for random routing.
Option 2 is incorrect. Weighted routing allows you to associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and set the routing weight for each resource. It is used to route traffic to multiple resources in proportions that you specify.
Option 3 is incorrect. Latency routing can improve user performance by routing requests to the AWS Region with the lowest network latency when you are hosting your application in multiple AWS Regions.
Option 4 is incorrect. Failover routing allows you to stop routing to anomalous resources and route traffic to healthy resources.
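As a concrete illustration, a single “simple” record can hold both instance IP addresses. The boto3 sketch below creates such a record; the hosted zone ID, domain name, and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 60,
                # Route 53 returns both values in random order, so clients spread across the two servers
                "ResourceRecords": [
                    {"Value": "198.51.100.10"},
                    {"Value": "198.51.100.11"},
                ],
            },
        }]
    },
)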


60. Route 53 Weighted Routing Policy
61. Route 53 Latency Routing Policy


62. Route 53 Failover Routing Policy

Question 1:
As a Solutions Architect, you are building an application that uses AWS. The application has primary and secondary configurations across two regions, each utilizing ELB, Auto Scaling, and EC2 instances.
Choose the Route 53 routing policy best suited to redirecting traffic if your primary infrastructure goes down.
Options:
A. Weighted routing
B. Simple routing
C. Multi-value answer routing
D. Failover routing
Answer: D
Explanation
You can use the Failover routing policy to create an active-passive failover configuration. You can create primary and secondary failover records of the same name and type and associate health checks with each to achieve a cross-region failover configuration. Option 4 is the correct answer.
Amazon Route 53 Health Check monitors the health and performance of web applications, web servers, and other resources. Route 53 monitors performance in one of the following ways:
1. Health check for specified resources such as web servers
2. Status of other health checks such as ELB
3. Amazon CloudWatch Alarm Status
Option 1 is incorrect. Weighted routing allows you to route traffic to multiple resources with a custom ratio.
Option 2 is incorrect. Simple routing is used when a domain has a single resource that performs a particular function. For example, a single web server that serves content to a website.
Option 3 is incorrect. Multi-value answer routing is used when Route 53 responds to DNS queries with up to eight randomly selected healthy records.

Question 2:
A company has deployed a new website on Amazon EC2 instances behind an Application Load Balancer (ALB). Amazon Route 53 is used for the DNS service. The company has asked a Solutions Architect to create a backup website with support contact details that users will be directed to automatically if the primary website is down.
How should the Solutions Architect deploy this solution cost-effectively?
Options:
A. Deploy the backup website on EC2 and ALB in another Region and use Route 53 health checks for failover routing
B. Configure a static website using Amazon S3 and create a Route 53 weighted routing policy
C. Configure a static website using Amazon S3 and create a Route 53 failover routing policy
D. Create the backup website on EC2 and ALB in another Region and create an AWS Global Accelerator endpoint
Answer: C
Explanation
The most cost-effective solution is to create a static website using an Amazon S3 bucket and then use a failover routing policy in Amazon Route 53. With a failover routing policy users will be directed to the main website as long as it is responding to health checks successfully.
If the main website fails to respond to health checks (it is down), Route 53 will begin to direct users to the backup website running on the Amazon S3 bucket. It’s important to set the TTL on the Route 53 records appropriately to ensure that users resolve the failover address within a short time.
CORRECT: “Configure a static website using Amazon S3 and create a Route 53 failover routing policy” is the correct answer.
INCORRECT: “Configure a static website using Amazon S3 and create a Route 53 weighted routing policy” is incorrect. Weighted routing is used when you want to send a percentage of traffic between multiple endpoints. In this case all traffic should go to the primary until it fails, then all should go to the backup.
INCORRECT: “Deploy the backup website on EC2 and ALB in another Region and use Route 53 health checks for failover routing” is incorrect. This is not a cost-effective solution for the backup website. It can be implemented using Route 53 failover routing which uses health checks but would be an expensive option.
INCORRECT: “Create the backup website on EC2 and ALB in another Region and create an AWS Global Accelerator endpoint” is incorrect. Global Accelerator is used for performance as it directs traffic to the nearest healthy endpoint. It is not useful for failover in this scenario and is also a very expensive solution.
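For illustration, the boto3 sketch below creates the health check and the primary/secondary failover alias records described above. The hosted zone ID, domain names, and the alias hosted zone IDs are placeholders, not values from the question.

import boto3

route53 = boto3.client("route53")

# Health check against the primary site (domain name is a placeholder)
health_check_id = route53.create_health_check(
    CallerReference="primary-site-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-123.us-east-1.elb.amazonaws.com",
        "ResourcePath": "/health",
    },
)["HealthCheck"]["Id"]

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": health_check_id,
            "AliasTarget": {
                "HostedZoneId": "ZALBZONEID",  # the ALB's canonical hosted zone ID (placeholder)
                "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "ZS3WEBSITEZONE",  # the S3 website endpoint's hosted zone ID (placeholder)
                "DNSName": "s3-website-us-east-1.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        }},
    ]},
)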

Question 3:
A manufacturing company receives unreliable service from its data center provider because the company is located in an area prone to natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on-premises must be uniform.
Which of the following solutions would have the LEAST amount of downtime?
• Set up a Route 53 failover record. Execute an AWS CloudFormation template from a script to provision EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to S3
• Set up a Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to S3. Set up an AWS Direct Connect connection between a VPC and the data center
• Set up a Route 53 failover record. Run application servers on EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to S3 (Correct)
• Set up a Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer
Explanation
Correct option:
Set up a Route 53 failover record. Run application servers on EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to S3
If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource.
Elastic Load Balancing is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed. Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group.
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. It provides low-latency performance by caching frequently accessed data on-premises while storing data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data. Storage Gateway also integrates natively with Amazon S3 cloud storage which makes your data available for in-cloud processing.
Incorrect options:
Set up a Route 53 failover record. Execute an AWS CloudFormation template from a script to provision EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to S3
Set up a Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to S3. Set up an AWS Direct Connect connection between a VPC and the data center
Set up a Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer
AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS and third-party resources. It supports the infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources, and container-based solutions.
These three options involve CloudFormation as part of the solution. CloudFormation takes time to provision resources, so it is not the right choice when the LEAST amount of downtime is required. Therefore, these options are not the right fit for the given requirement.


63. Route 53 Geolocation Routing Policy

Question 1:
Your company hosts most of its infrastructure in the Tokyo region. As a Solutions Architect, you are replicating this infrastructure in the Singapore and Sydney regions to extend your application. Users should be served content in the appropriate language and routed to the ELB closest to them.
What do you need to do to achieve language selection appropriate to your users and control routing to the correct ELB?
Options:
A. Set up geo-location routing on Route 53
B. Perform load balancing for all regions using NLB
C. Configure low latency routing on Route 53
D. Perform load balancing for all regions using ALB
Answer: A
Explanation
Option 1 is the correct answer. With geo-location routing, the resources that handle traffic are selected based on the user’s geographic location. As a result, serving the appropriate language and controlling routing to the nearest ELB are easy to implement.
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
Failover routing policy – Use this when you want to configure active-passive failover.
Geo-location routing policy – Use when you want to route traffic based on the location of your users.
Geo-proximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
Multi-value answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you choose.

Question 2:
A company hosts an application on Amazon EC2 instances behind Application Load Balancers in several AWS Regions. Distribution rights for the content require that users in different geographies must be served content from specific regions.
Which configuration meets these requirements?
Options:
A. Configure Amazon CloudFront with multiple origins and AWS WAF
B. Create Amazon Route 53 records with a geoproximity routing policy
C. Create Amazon Route 53 records with a geolocation routing policy
D. Configure Application Load Balancers with multi-Region routing
Answer: C
Explanation
To protect the distribution rights of the content and ensure that users are directed to the appropriate AWS Region based on the location of the user, the geolocation routing policy can be used with Amazon Route 53.
Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict distribution of content to only the locations in which you have distribution rights.
CORRECT: “Create Amazon Route 53 records with a geolocation routing policy” is the correct answer.
INCORRECT: “Create Amazon Route 53 records with a geoproximity routing policy” is incorrect. Use this routing policy when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
INCORRECT: “Configure Amazon CloudFront with multiple origins and AWS WAF” is incorrect. AWS WAF protects against web exploits but will not assist with directing users to different content (from different origins).
INCORRECT: “Configure Application Load Balancers with multi-Region routing” is incorrect. There is no such thing as multi-Region routing for ALBs.
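As an illustration, geolocation records are regular Route 53 records with a GeoLocation block and a set identifier. The boto3 sketch below routes European users to an assumed ALB in eu-central-1 and everyone else to a default location; the hosted zone ID, domain, and DNS names are placeholders.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "media.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "europe",
            "GeoLocation": {"ContinentCode": "EU"},
            "ResourceRecords": [{"Value": "eu-alb-123.eu-central-1.elb.amazonaws.com"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "media.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "default",
            "GeoLocation": {"CountryCode": "*"},  # catch-all for locations with no specific record
            "ResourceRecords": [{"Value": "us-alb-123.us-east-1.elb.amazonaws.com"}],
        }},
    ]},
)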

Question 3:
One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the US to a Silicon Valley-based streaming services company. As per the terms of distribution, the company must make sure that only users from the US are able to live stream the matches on its platform. Users from other countries in the world must be denied access to these live-streamed matches.
Which of the following options would allow the company to enforce these streaming restrictions? (Select two)
A. Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights
B. Use Route 53 based latency routing policy to restrict distribution of content to only the locations in which you have distribution rights
C. Use Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights
D. Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution
E. Use Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights
Answer: A & D
Explanation
Correct options:
Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights
Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from. For example, you might want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region. You can also use geolocation routing to restrict the distribution of content to only the locations in which you have distribution rights.
Use georestriction to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution
You can use georestriction, also known as geo-blocking, to prevent users in specific geographic locations from accessing content that you’re distributing through a CloudFront web distribution. When a user requests your content, CloudFront typically serves the requested content regardless of where the user is located. If you need to prevent users in specific countries from accessing your content, you can use the CloudFront geo restriction feature to either allow your users to access your content only if they’re in one of the countries on a whitelist of approved countries, or prevent your users from accessing your content if they’re in one of the countries on a blacklist of banned countries. So this option is also correct.
Incorrect options:
Use Route 53 based latency routing policy to restrict distribution of content to only the locations in which you have distribution rights – Use latency based routing when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the lowest latency. To use latency-based routing, you create latency records for your resources in multiple AWS Regions. When Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you’ve created latency records for, determines which region gives the user the lowest latency, and then selects a latency record for that region. Route 53 responds with the value from the selected record, such as the IP address for a web server.
Use Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights – Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of the software.
Use Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights – Failover routing lets you route traffic to a resource when the resource is healthy or to a different resource when the first resource is unhealthy. The primary and secondary records can route traffic to anything from an Amazon S3 bucket that is configured as a website to a complex tree of records.
Weighted, failover, and latency routing cannot be used to restrict the distribution of content to only the locations in which you have distribution rights, so all three of these options are incorrect.
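For illustration, the CloudFront side of this is a geo restriction on the distribution. The boto3 sketch below whitelists the US on an assumed existing distribution; the distribution ID is a placeholder, and the update flow re-uses the current config and its ETag as the CloudFront API requires.

import boto3

cloudfront = boto3.client("cloudfront")
DIST_ID = "E1A2B3C4D5E6F7"  # placeholder distribution ID

current = cloudfront.get_distribution_config(Id=DIST_ID)
config = current["DistributionConfig"]

config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",  # allow-list: only the listed countries may view
        "Quantity": 1,
        "Items": ["US"],
    }
}

cloudfront.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=current["ETag"])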


64. Route 53 Geoproximity Routing Policy (Traffic Flow Only)
65. Route 53 Multivalue Answer


66. VPCs

Question 1:
Company-A operates a business system that uses AWS resources such as a VPC. Recently, the management of Company-A acquired Company-B, and you, as a Solutions Architect, have been put in charge of IT integration between the two companies. Company-B also has its own set of resources hosted on AWS. The requirement is to allow AWS resources in Company-A’s VPC to access AWS resources in Company-B’s VPC. What action do you need to take to meet this requirement?
Options:
A. Install a NAT instance in each VPC and connect between VPCs
B. Install a NAT gateway in each VPC and connect between VPCs
C. Connect VPC’s through the organization settings of AWS Organizations
D. Connect VPCs by VPC peering
Answer: D
Explanation:
A VPC peering connection is a network connection between two VPCs that lets you route traffic between them using private IPv4 or IPv6 addresses, so instances in the two VPCs can communicate with each other as if they were in the same network. Therefore, option 4 is the correct answer. VPC peering works between VPCs in the same AWS account or in different AWS accounts, and the VPCs can even be in different Regions (inter-region VPC peering).
Options 1 and 2 are incorrect. A NAT instance or NAT gateway allows instances in a private subnet to initiate outbound connections to the Internet by translating their private IP addresses to a public IP address. This has nothing to do with connectivity between VPCs.
Option 3 is incorrect. AWS Organizations enables centralized management of multiple AWS accounts. Together with Resource Access Manager it can be used to share VPC subnets between accounts, but it is not used to connect separate VPCs.
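For illustration, here is a hypothetical boto3 sketch of setting up the peering connection and routes between the two companies’ VPCs. The account IDs, VPC and route table IDs, and CIDR ranges are placeholders, and the two clients are assumed to run under each company’s own credentials.

import boto3

ec2_a = boto3.client("ec2")  # assumed to run with company-A's credentials
peering = ec2_a.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",        # company-A's VPC (placeholder)
    PeerVpcId="vpc-bbbb2222",    # company-B's VPC (placeholder)
    PeerOwnerId="222222222222",  # company-B's account ID (placeholder)
)["VpcPeeringConnection"]

# Company-B accepts the request from its own account
ec2_b = boto3.client("ec2")  # assumed to run with company-B's credentials
ec2_b.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Each side routes the other VPC's CIDR through the peering connection
ec2_a.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.20.0.0/16",
                   VpcPeeringConnectionId=peering["VpcPeeringConnectionId"])
ec2_b.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.10.0.0/16",
                   VpcPeeringConnectionId=peering["VpcPeeringConnectionId"])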

Question 2:
As a Solutions Architect, you are building an application on AWS. The application runs on an EC2 instance that has a public IP address in a VPC subnet, but you cannot connect to the instance over the internet. The security group appears to be set up correctly.
What should you do to be able to connect to the EC2 instance from the internet?
Options:
A. Set the correct route in the route table
B. Set Elastic IP to your EC2 instance
C. Set the secondary IP address to your EC2 instance
D. Set up a NAT gateway
Answer: A
Explanation
In order for this EC2 instance to be accessible from the Internet, the security group and network ACLs must be properly configured and the subnet’s route table must have a route to an Internet gateway. Therefore, option 1 is the correct answer.
Option 2 is incorrect. An Elastic IP is not required for internet access; the instance already has a public IP address.
Option 3 is incorrect. A secondary IP address is not required for internet access.
Option 4 is incorrect. A NAT gateway provides outbound internet access for instances in private subnets; it does not provide inbound access to an instance in a public subnet.
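As a small illustration, the missing piece is usually just a default route to the Internet gateway in the subnet’s route table. The boto3 sketch below adds one; the route table and gateway IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # the subnet's route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",      # the VPC's Internet gateway (placeholder)
)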

Question 3:
A company has two accounts used for testing, and each account has a single VPC: VPC-TEST1 and VPC-TEST2. The operations team requires a method of securely copying files between Amazon EC2 instances in these VPCs. The connectivity should not have any single points of failure or bandwidth constraints.
Which solution should a Solutions Architect recommend?
Options:
A. Attach a virtual private gateway to VPC-TEST1 and VPC-TEST2 and enable routing
B. Create a VPC peering connection between VPC-TEST1 and VPC-TEST2
C. Create a VPC gateway endpoint for each EC2 instance and update route tables
D. Attach a Direct Connect gateway to VPC-TEST1 and VPC-TEST2 and enable routing
Answer: B
Explanation
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.
You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).
CORRECT: “Create a VPC peering connection between VPC-TEST1 and VPC-TEST2” is the correct answer.
INCORRECT: “Create a VPC gateway endpoint for each EC2 instance and update route tables” is incorrect. You cannot create VPC gateway endpoints for Amazon EC2 instances. These are used with DynamoDB and S3 only.
INCORRECT: “Attach a virtual private gateway to VPC-TEST1 and VPC-TEST2 and enable routing” is incorrect. You cannot create an AWS Managed VPN connection between two VPCs.
INCORRECT: “Attach a Direct Connect gateway to VPC-TEST1 and VPC-TEST2 and enable routing” is incorrect. Direct Connect gateway is used to connect a Direct Connect connection to multiple VPCs, it is not useful in this scenario as there is no Direct Connect connection.

Question 4:
The sourcing team at the US headquarters of a global e-commerce company is preparing a spreadsheet of the new product catalog. The spreadsheet is saved on an EFS file system created in the us-east-1 region. The sourcing team’s counterparts from other AWS regions, such as Asia Pacific and Europe, also want to collaborate on this spreadsheet.
As a solutions architect, what is your recommendation to enable this collaboration with the LEAST amount of operational overhead?
Options:
A. The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS region
B. The spreadsheet data will have to be moved into an RDS MySQL database which can then be accessed from any AWS region
C. The spreadsheet on the EFS file system can be accessed in other AWS regions by using an inter-region VPC peering connection
D. The spreadsheet will have to be copied into EFS file systems of other AWS regions as EFS is a regional service and it does not allow access from other AWS regions
Answer: C
Explanation
Correct option:
The spreadsheet on the EFS file system can be accessed in other AWS regions by using an inter-region VPC peering connection
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access using AWS Direct Connect or AWS VPN.
You can connect to Amazon EFS file systems from EC2 instances in other AWS regions using an inter-region VPC peering connection, and from on-premises servers using an AWS VPN connection. So this is the correct option.
Incorrect options:
The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS region
The spreadsheet data will have to be moved into an RDS MySQL database which can then be accessed from any AWS region
Copying the spreadsheet into S3 or an RDS database is not the correct solution as it involves a lot of operational overhead. For RDS, one would need to write custom code to replicate the spreadsheet functionality on top of the database. S3 does not allow in-place edits of an object and is not POSIX compliant, so one would need to develop a custom application to simulate in-place edits to support collaboration for this use-case. Both of these options are therefore ruled out.
The spreadsheet will have to be copied into EFS file systems of other AWS regions as EFS is a regional service and it does not allow access from other AWS regions – Creating copies of the spreadsheet into EFS file systems of other AWS regions would mean no collaboration would be possible between the teams. In this case, each team would work on “its own file” instead of a single file accessed and updated by all teams. Hence this option is incorrect.

Question 5:
A systems administrator has created a private hosted zone and associated it with a Virtual Private Cloud (VPC). However, the DNS queries for the private hosted zone remain unresolved.
As a Solutions Architect, can you identify the Amazon VPC options to be configured in order to get the private hosted zone to work?
• Enable DNS hostnames and DNS resolution for private hosted zones (Correct)
• Fix the Name server (NS) record and Start Of Authority (SOA) records that may have been created with wrong configurations
• Remove any overlapping namespaces for the private and public hosted zones
• Fix conflicts between your private hosted zone and any Resolver rule that routes traffic to your network for the same domain name, as it results in ambiguity over the route to be taken
Explanation
Correct option:
Enable DNS hostnames and DNS resolution for private hosted zones – DNS hostnames and DNS resolution are required settings for private hosted zones. DNS queries for private hosted zones can be resolved by the Amazon-provided VPC DNS server only. As a result, these options must be enabled for your private hosted zone to work.
DNS hostnames: For non-default virtual private clouds that aren’t created using the Amazon VPC wizard, this option is disabled by default. If you create a private hosted zone for a domain and create records in the zone without enabling DNS hostnames, private hosted zones aren’t enabled. To use a private hosted zone, this option must be enabled.
DNS resolution: Private hosted zones accept DNS queries only from a VPC DNS server. The IP address of the VPC DNS server is the reserved IP address at the base of the VPC IPv4 network range plus two. Enabling DNS resolution allows you to use the VPC DNS server as a Resolver for performing DNS resolution. Keep this option disabled if you’re using a custom DNS server in the DHCP Options set, and you’re not using a private hosted zone.
Incorrect options:
Remove any overlapping namespaces for the private and public hosted zones – If you have private and public hosted zones that have overlapping namespaces, such as example.com and accounting.example.com, then the Resolver routes traffic based on the most specific match. It won’t result in unresolved queries, hence this option is wrong.
Fix the Name server (NS) record and Start Of Authority (SOA) records that may have been created with wrong configurations – When you create a hosted zone, Amazon Route 53 automatically creates a name server (NS) record and a start of authority (SOA) record for the zone for public hosted zone. However, this issue is about the private hosted zone, hence this is an incorrect option.
Fix conflicts between your private hosted zone and any Resolver rule that routes traffic to your network for the same domain name, as it results in ambiguity over the route to be taken – If you have a private hosted zone (example.com) and a Resolver rule that routes traffic to your network for the same domain name, the Resolver rule takes precedence. It won’t result in unresolved queries.
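For illustration, both attributes can be switched on with boto3; the EC2 API only accepts one attribute per call. The VPC ID below is a placeholder.

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC ID

ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})    # DNS resolution
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})  # DNS hostnames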

Question 6:
You have multiple AWS accounts within a single AWS Region managed by AWS Organizations and you would like to ensure all EC2 instances in all these accounts can communicate privately. Which of the following solutions provides the capability at the CHEAPEST cost?
A. Create a VPC peering connection between all VPCs
B. Create a VPC in an account and share one or more of its subnets with the other accounts using Resource Access Manager
C. Create a Private Link between all the EC2 instances
D. Create a Transit Gateway and link all the VPC in all the accounts together
Answer: B
Explanation
Correct option:
Create a VPC in an account and share one or more of its subnets with the other accounts using Resource Access Manager
AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules resources with RAM. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own. You can create resources centrally in a multi-account environment, and use RAM to share those resources across accounts in three simple steps: create a Resource Share, specify resources, and specify accounts. RAM is available to you at no additional charge.
The correct solution is to share the subnet(s) within a VPC using RAM. This will allow all EC2 instances to be deployed in the same VPC (although from different accounts) and easily communicate with one another.
Incorrect options:
Create a Private Link between all the EC2 instances – AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. Private Link is a distractor in this question. Private Link is leveraged to create a private connection between an application that is fronted by an NLB in an account, and an Elastic Network Interface (ENI) in another account, without the need of VPC peering and allowing the connections between the two to remain within the AWS network.
Create a VPC peering connection between all VPCs – A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection). VPC peering connections will work, but won’t efficiently scale if you add more accounts (you’ll have to create many connections).
Create a Transit Gateway and link all the VPC in all the accounts together – AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. A Transit Gateway will work but will be an expensive solution. Here we want to minimize cost.
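As an illustration, here is a hypothetical boto3 sketch of sharing subnets of a central VPC through RAM with another account in the same AWS Organization. The subnet ARNs and account IDs are placeholders.

import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0aaa1111",  # placeholder subnet ARNs
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0bbb2222",
    ],
    principals=["222222222222"],    # the other account (or an OU/organization ARN)
    allowExternalPrincipals=False,  # keep sharing inside the organization
)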


67. Build a Custom VPC


68. Network Address Translation (NAT)

Question 1:
As a Solutions Architect, you use AWS to host a database server for your company. This server should not be reachable from the Internet, but it must be able to connect outbound to download the required database patches.
Choose an AWS service configuration that meets this requirement.
Options:
A. Build the DB in a public subnet and allow only inbound traffic with network ACLs
B. Build the DB in the public subnet and allow only inbound traffic in the security group
C. Build the DB in a private subnet and allow only outbound traffic in the security group
D. Build the DB in a private subnet and set the NAT instance in the route table
Answer: D
Explanation
To restrict internet access to your database, you need to place the DB instance in a private subnet. In addition, the DB should reach the Internet only through a NAT device. Therefore, option 4 is the correct answer.
EC2 instances in public subnets can send outbound traffic directly to the Internet, but EC2 instances in private subnets cannot. Instead, instances in a private subnet can use a Network Address Translation (NAT) device in the public subnet to send outbound traffic to the Internet. This allows the database server to connect to the Internet through the NAT instance for software updates, while connections to the database server cannot be initiated from the Internet.
Options 1 and 2 are incorrect. If you build the database in a public subnet, it can be reached directly from the Internet, so it should be placed in a private subnet instead.
Option 3 is incorrect. After building the database in a private subnet, the security group controls inbound traffic and can restrict access to the database to specific EC2 instances; however, allowing only outbound traffic in the security group does not by itself give the instance a path to the Internet for downloading patches.

Question 2:
Your company operates infrastructure in both private and public subnets on AWS. A database server is installed in the private subnet, and a NAT instance is installed in the public subnet so that instances in the private subnet can send outbound traffic to the Internet. Recently, you have discovered that the NAT instance has become a bottleneck.
What should you do to solve this issue?
Options:
A. Use VPC connection for a wider bandwidth
B. Set access settings using VPC endpoints
C. Change the NAT instance to a NAT gateway
D. Scale-up the NAT instance
Answer: C
Explanation
Option 3 is the correct answer. A NAT gateway is a managed service that you can use instead of a NAT instance. Because AWS manages its availability and scaling, replacing the NAT instance with a NAT gateway removes the current bottleneck. Scaling up the NAT instance type can help, but it does not guarantee that the problem will not recur. Therefore, the easiest way to improve performance and eliminate the bottleneck is to change the NAT instance to a NAT gateway.
Option 1 is incorrect. There is no such feature as a VPC connection.
Option 2 is incorrect. A VPC endpoint provides private connectivity from inside a VPC to supported AWS services; it does not address NAT throughput.
Option 4 is incorrect. Scaling up the NAT instance can mitigate the issue, but AWS provides the NAT gateway as a managed service, so it is more effective to use that instead.
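For illustration, here is a hypothetical boto3 sketch of the replacement: allocate an Elastic IP, create the NAT gateway in the public subnet, wait for it to become available, then point the private route table’s default route at it. All IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",          # the public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)["NatGateway"]

# Wait until the gateway is available before switching the route over
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

# Replace the private subnet's default route (previously the NAT instance) with the gateway
ec2.replace_route(
    RouteTableId="rtb-0bbb2222",         # the private subnet's route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)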


69. Access Control List (ACL)


70. Custom VPCs and ELBs

Question 1:
A solutions architect is designing the infrastructure to run an application on Amazon EC2 instances. The application requires high availability and must dynamically scale based on demand to be cost efficient.
What should the solutions architect do to meet these requirements?
Options:
A. Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions
B. Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones
C. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones
D. Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions
Answer: C
Explanation
The Amazon EC2-based application must be highly available and elastically scalable. Auto Scaling can provide the elasticity by dynamically launching and terminating instances based on demand. This can take place across availability zones for high availability.
Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB).
CORRECT: “Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Availability Zones” is the correct answer.
INCORRECT: “Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances to multiple Availability Zones” is incorrect as API gateway is not used for load balancing connections to Amazon EC2 instances.
INCORRECT: “Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances to multiple Regions” is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
INCORRECT: “Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy instances to multiple Regions” is incorrect as you cannot launch instances in multiple Regions from a single Auto Scaling group.
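For illustration, here is a hypothetical boto3 sketch of an Auto Scaling group spread across three AZ subnets, registered with an ALB target group, plus a target tracking policy for demand-based scaling. All names, ARNs, and subnet IDs are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},  # placeholder template
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222,subnet-0ccc3333",  # one subnet per AZ (placeholders)
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web-tg/abc123"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)

# Scale dynamically on demand with a target tracking policy (around 50% average CPU)
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)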

Question 2:
A Solutions Architect has deployed an application on several Amazon EC2 instances across three private subnets. The application must be made accessible to internet-based clients with the least amount of administrative effort.
How can the Solutions Architect make the application available on the internet?
Options:
A. Create an Amazon Machine Image (AMI) of the instances in the private subnet and launch new instances from the AMI in public subnets. Create an Application Load Balancer and add the public instances to the ALB
B. Create an Application Load Balancer and associate three private subnets from the same Availability Zones as the private instances. Add the private instances to the ALB
C. Create a NAT gateway in a public subnet. Add a route to the NAT gateway to the route tables of the three private subnets
D. Create an Application Load Balancer and associate three public subnets from the same Availability Zones as the private instances. Add the private instances to the ALB
Answer: D
Explanation
To make the application instances accessible on the internet the Solutions Architect needs to place them behind an internet-facing Elastic Load Balancer. The way you add instances in private subnets to a public facing ELB is to add public subnets in the same AZs as the private subnets to the ELB. You can then add the instances to the ELB and they will become targets for load balancing.
CORRECT: “Create an Application Load Balancer and associate three public subnets from the same Availability Zones as the private instances. Add the private instances to the ALB” is the correct answer.
INCORRECT: “Create an Application Load Balancer and associate three private subnets from the same Availability Zones as the private instances. Add the private instances to the ALB” is incorrect. Public subnets in the same AZs as the private subnets must be added to make this configuration work.
INCORRECT: “Create an Amazon Machine Image (AMI) of the instances in the private subnet and launch new instances from the AMI in public subnets. Create an Application Load Balancer and add the public instances to the ALB” is incorrect. There is no need to use an AMI to create new instances in a public subnet. You can add instances in private subnets to a public-facing ELB.
INCORRECT: “Create a NAT gateway in a public subnet. Add a route to the NAT gateway to the route tables of the three private subnets” is incorrect. A NAT gateway is used for outbound traffic not inbound traffic and cannot make the application available to internet-based clients.
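As a rough illustration of the correct option (not part of the original question), the boto3 sketch below creates an internet-facing ALB associated with public subnets and registers the existing instances from the private subnets as targets. All subnet, security group, VPC, and instance IDs are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing ALB associated with PUBLIC subnets, one per AZ used by the app.
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-public-a", "subnet-public-b", "subnet-public-c"],  # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],                            # placeholder
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group for the instances; the targets can live in private subnets of the same VPC.
tg = elbv2.create_target_group(
    Name="web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder
    TargetType="instance",
    HealthCheckPath="/health",
)["TargetGroups"][0]

# Register the existing instances that run in the private subnets.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaa"}, {"Id": "i-0bbb"}, {"Id": "i-0ccc"}],      # placeholders
)

# Listener forwards internet traffic to the private instances.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)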

Question 3:
A company’s web application is using multiple Amazon EC2 Linux instances and storing data on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the application in case of a failure.
What should a solutions architect do to meet these requirements?
Options:
A. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance
B. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: C
Explanation
To increase the resiliency of the application the solutions architect can use Auto Scaling groups to launch and terminate instances across multiple availability zones based on demand. An application load balancer (ALB) can be used to direct traffic to the web application running on the EC2 instances.
Lastly, the Amazon Elastic File System (EFS) can assist with increasing the resilience of the application by providing a shared file system that can be mounted by multiple EC2 instances from multiple availability zones.
CORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data on Amazon EFS and mount a target on each instance” is the correct answer.
INCORRECT: “Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2 instance” is incorrect as the EBS volumes are single points of failure which are not shared with other instances.
INCORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Mount an instance store on each EC2 instance” is incorrect as instance stores are ephemeral data stores which means data is lost when powered down. Also, instance stores cannot be shared between instances.
INCORRECT: “Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)” is incorrect. S3 One Zone-IA is an object storage class that keeps data in a single Availability Zone and carries data retrieval charges, so it neither improves resilience nor provides a suitable storage tier for the application’s files.
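To make the correct option concrete, here is a minimal boto3 sketch (not part of the original question) that creates an EFS file system and one mount target per Availability Zone so that instances in any AZ can mount the same shared file system over NFS. The creation token, subnet IDs, and security group ID are placeholders.

import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Create the shared file system (encrypted at rest).
fs = efs.create_file_system(
    CreationToken="app-shared-data",      # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone used by the Auto Scaling group,
# so instances in any AZ can mount the same file system.
for subnet_id in ["subnet-aaa", "subnet-bbb", "subnet-ccc"]:   # placeholders
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-efs-nfs"],    # must allow NFS (TCP 2049) from the instances
    )

Each instance then mounts the file system (for example with the EFS mount helper or an NFS client) at boot, giving all web servers a common view of the application data.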

Question 4:
A developer has created a new Application Load Balancer but has not registered any targets with the target groups. Which of the following errors would be generated by the Load Balancer?
A. HTTP 504: Gateway timeout
B. HTTP 502: Bad gateway
C. HTTP 503: Service unavailable
D. HTTP 500: Internal server error
Answer: C
Explanation
Correct option:
HTTP 503: Service unavailable
The Load Balancer generates the HTTP 503: Service unavailable error when the target groups for the load balancer have no registered targets.
Incorrect options:
HTTP 500: Internal server error – returned when the load balancer itself encounters an error, not when a target group has no registered targets.
HTTP 502: Bad gateway – returned when a target closes the connection unexpectedly or sends a malformed response.
HTTP 504: Gateway timeout – returned when a target does not respond before the configured timeout expires.

Question 5:
An e-commerce company is looking for a solution with high availability, as it plans to migrate its flagship application to a fleet of Amazon EC2 instances. The solution should allow for content-based routing as part of the architecture.
As a Solutions Architect, which of the following will you suggest for the company?
Options:
A. Use a Network Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Private IP address to mask any failure of an instance
B. Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure Auto Scaling group to mask any failure of an instance
C. Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Elastic IP address to mask any failure of an instance
D. Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Public IP address to mask any failure of an instance
Answer: B
Explanation
Correct option:
Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure Auto Scaling group to mask any failure of an instance
The Application Load Balancer (ALB) is best suited for load balancing HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), the Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
This is the correct option since the question has a specific requirement for content-based routing which can be configured via the Application Load Balancer. Different AZs provide high availability to the overall architecture and Auto Scaling group will help mask any instance failures.
Incorrect options:
Use a Network Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Private IP address to mask any failure of an instance – Network Load Balancer cannot facilitate content-based routing so this option is incorrect.
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Elastic IP address to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Public IP address to mask any failure of an instance
Both these options are incorrect as you cannot use the Auto Scaling group to distribute traffic to the EC2 instances.
An Elastic IP address is a static, public IPv4 address allocated to your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Elastic IPs do not change and remain allocated to your account until you release them.


71. VPC Flow Logs


72. Bastions

Question 1:
Your company runs an application hosted on AWS. The application utilizes two EC2 instances in two public subnets. Only specific users in the company access the WEB server via the Internet. The other instance is set up as a database server. As a security officer, you have begun to consider improving the security of this current architecture.
Which of the following is the most secure configuration?
Options:
A. Create a new private subnet and place a NAT instance on it
B. Move the web server to a private subnet
C. Move the DB server to a private subnet
D. Migrate both servers to the new private subnet and set up a bastion server on the public subnet
Answer: D
Explanation
Option 4 is the correct answer. The most secure configuration is to move both the web server and the database server to private subnets and place a bastion host in the public subnet (a NAT gateway can also be placed in the public subnet to give the private instances outbound internet access). Specific users then reach the web server through the bastion host, or through an ELB in the public subnet.
If the web server had to accept requests from unspecified users on the internet, it would need to be in a public subnet. In this scenario, however, only specific company users access the web server, so access can be tightly restricted and it is more secure to place the web server in a private subnet.
Option 1 is incorrect. Creating a new private subnet and relocating the servers is necessary, but adding a NAT instance by itself does not meet the security requirements of this scenario.
Option 2 is incorrect. Not only the web server but also the database server should be moved to a private subnet.
Option 3 is incorrect. Not only the database server but also the web server should be moved to a private subnet.


73. Direct Connect

Question 1:
As a Solutions Architect, you are considering migrating from your on-premises environment to AWS. The company currently holds large amounts of data in its data centers, and this on-premises environment will continue to be used. Therefore, a high performance, leased line connection of 50 Mbps is required to connect this data center to AWS.
Choose the best connection method to meet this requirement.
Options:
A. Make a connection to your on-premises environment through VPC peering
B. Connect to on-premises environment via VPN
C. Make a connection to on-premises environment with AWS Direct Connect
D. Make a connection to on-premises environment through an internet gateway
Answer: C
Explanation
AWS Direct Connect makes it easy to establish a dedicated network connection to AWS from an on-premises environment such as a data center. This can often reduce network costs, increase bandwidth throughput, and provide a stable network experience. Therefore, option 3 is the correct answer.
The other options are inappropriate because they do not provide a high-performance, dedicated (leased-line) connection.
Option 1 is incorrect. VPC peering is used to connect two VPCs. It does not provide a dedicated-line connection to an on-premises environment.
Option 2 is incorrect. A VPN is not a dedicated-line connection; it is a network connection over the internet.
Option 4 is incorrect. The Internet gateway is a gateway used for communication between the VPC and the Internet.

Question 2:
The engineering team at an e-commerce company wants to establish a dedicated, encrypted, low latency, and high throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection.
As a solutions architect, which of the following solutions would you recommend to the company?
Options:
A. Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud
B. Use site-to-site VPN to establish a connection between the data center and AWS Cloud
C. Use VPC transit gateway to establish a connection between the data center and AWS Cloud
D. Use AWS Direct Connect to establish a connection between the data center and AWS Cloud
Answer: A
Explanation
Correct option:
Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations.
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.
This solution combines the managed, encrypted connectivity of the VPN solution with the low latency, increased bandwidth, and more consistent network experience of AWS Direct Connect, giving an end-to-end, secure IPsec connection. Therefore, AWS Direct Connect plus VPN is the correct solution for this use-case.

Incorrect options:

Use site-to-site VPN to establish a connection between the data center and AWS Cloud – AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. However, Site-to-site VPN cannot provide low latency and high throughput connection, therefore this option is ruled out.

Use VPC transit gateway to establish a connection between the data center and AWS Cloud – A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPC) and on-premises networks. A transit gateway by itself cannot establish a low latency and high throughput connection between a data center and AWS Cloud. Hence this option is incorrect.

Use AWS Direct Connect to establish a connection between the data center and AWS Cloud – AWS Direct Connect by itself cannot provide an encrypted connection between a data center and AWS Cloud, so this option is ruled out.

 


74. Setting Up a VPN Over a Direct Connect Connection


75. Global Accelerator

Question 1:
A new application is to be published in multiple regions around the world. The Architect needs to ensure only 2 IP addresses need to be whitelisted. The solution should intelligently route traffic for lowest latency and provide fast regional failover.
How can this be achieved?
Options:
A. Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator
B. Launch EC2 instances into multiple regions behind an NLB with a static IP address
C. Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover routing policy
D. Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with a pair of static IP addresses
Answer: A
Explanation
AWS Global Accelerator uses the vast, congestion-free AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user.
This means it will intelligently route traffic to the closest point of presence (reducing latency). Seamless failover is ensured because AWS Global Accelerator uses static anycast IP addresses, which means the IPs do not change when failing over between Regions, so there are no issues with client caches holding stale entries that need to expire.
This is the only solution that provides deterministic failover.
CORRECT: “Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator” is the correct answer.
INCORRECT: “Launch EC2 instances into multiple regions behind an NLB with a static IP address” is incorrect. An NLB with a static IP is a workable solution as you could configure a primary and secondary address in applications. However, this solution does not intelligently route traffic for lowest latency.
INCORRECT: “Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover routing policy” is incorrect. A Route 53 failover routing policy uses a primary and standby configuration. Therefore, it sends all traffic to the primary until it fails a health check at which time it sends traffic to the secondary. This solution does not intelligently route traffic for lowest latency.
INCORRECT: “Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with a pair of static IP addresses” is incorrect. Amazon CloudFront cannot be configured with “a pair of static IP addresses”.
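As a rough boto3 sketch of the correct option (not part of the original question), the snippet below creates an accelerator, a TCP listener, and one endpoint group per Region pointing at that Region’s NLB. The accelerator name, port, Regions, and NLB ARNs are placeholders; the comment about the us-west-2 API endpoint reflects how the Global Accelerator control plane is normally addressed.

import boto3

# The Global Accelerator control-plane API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# The accelerator provides two static anycast IP addresses, which are the only
# addresses clients need to whitelist.
accelerator = ga.create_accelerator(
    Name="game-accelerator",     # placeholder
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region, each pointing at that Region's NLB (ARNs are placeholders).
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game/a1"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/game/b2"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        HealthCheckIntervalSeconds=10,   # fast health checks support quick regional failover
    )

print(accelerator["IpSets"])   # the two static IP addresses to whitelist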

Question 2:
A gaming company is looking at improving the availability and performance of its global flagship application which utilizes UDP protocol and needs to support fast regional failover in case an AWS Region goes down.
Which of the following AWS services represents the best solution for this use-case?
Options:
A. Amazon CloudFront
B. AWS Elastic Load Balancing (ELB)
C. Amazon Route 53
D. AWS Global Accelerator
Answer: D
Explanation
Correct option:
AWS Global Accelerator – AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of your applications by lowering first-byte latency (the round trip time for a packet to go from a client to your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred per unit of time) as compared to the public internet.
Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.
Incorrect options:
Amazon CloudFront – Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery), while Global Accelerator improves performance for a wide range of applications over TCP or UDP.
AWS Elastic Load Balancing (ELB) – Both of the services, ELB and Global Accelerator solve the challenge of routing user requests to healthy application endpoints. AWS Global Accelerator relies on ELB to provide the traditional load balancing features such as support for internal and non-AWS endpoints, pre-warming, and Layer 7 routing. However, while ELB provides load balancing within one Region, AWS Global Accelerator provides traffic management across multiple Regions.
A regional ELB load balancer is an ideal target for AWS Global Accelerator. By using a regional ELB load balancer, you can precisely distribute incoming application traffic across backends, such as Amazon EC2 instances or Amazon ECS tasks, within an AWS Region.
If you have workloads that cater to a global client base, AWS recommends that you use AWS Global Accelerator. If you have workloads hosted in a single AWS Region and used by clients in and around the same Region, you can use an Application Load Balancer or Network Load Balancer to manage your resources.
Amazon Route 53 – Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.


76. VPC End Points

Question 1:
A company wishes to restrict access to their Amazon DynamoDB table to specific, private source IP addresses from their VPC. What should be done to secure access to the table?
Options:
A. Create the Amazon DynamoDB table in the VPC
B. Create a gateway VPC endpoint and add an entry to the route table
C. Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
D. Create an AWS VPN connection to the Amazon DynamoDB endpoint
Answer: B
Explanation
There are two different types of VPC endpoint: interface endpoint, and gateway endpoint. With an interface endpoint you use an ENI in the VPC. With a gateway endpoint you configure your route table to point to the endpoint. Amazon S3 and DynamoDB use gateway endpoints. This solution means that all traffic will go through the VPC endpoint straight to DynamoDB using private IP addresses.
CORRECT: “Create a gateway VPC endpoint and add an entry to the route table” is the correct answer.
INCORRECT: “Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)” is incorrect. As mentioned above, an interface endpoint is not used for DynamoDB, you must use a gateway endpoint.
INCORRECT: “Create the Amazon DynamoDB table in the VPC” is incorrect. You cannot create a DynamoDB table in a VPC, to connect securely using private addresses you should use a gateway endpoint instead.
INCORRECT: “Create an AWS VPN connection to the Amazon DynamoDB endpoint” is incorrect. You cannot create an AWS VPN connection to the Amazon DynamoDB endpoint.
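For illustration (not part of the original question), a minimal boto3 sketch of the correct option is shown below: it creates a gateway VPC endpoint for DynamoDB and attaches it to the route tables of the subnets that should reach the table privately. The VPC ID, route table IDs, Region in the service name, and the endpoint policy are placeholders/assumptions.

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB; the listed route tables receive a managed route
# that sends DynamoDB traffic through the endpoint over private IP space.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0aaa", "rtb-0bbb"],        # placeholders
    PolicyDocument=json.dumps({                    # optional: restrict what the endpoint allows
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Principal": "*",
                       "Action": "dynamodb:*", "Resource": "*"}],
    }),
)

The table’s access can then be further restricted in IAM or resource policies using a condition on the endpoint (for example aws:sourceVpce), so only traffic arriving via this endpoint, and therefore from the VPC’s private source IPs, is allowed.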


77. VPC Private Link


78. Transit Gateway

Question 1:
A company has many VPCs in various accounts that need to be connected to one another in a star network and connected to on-premises networks through Direct Connect.
What do you recommend?
• VPC Peering
• VPN Gateway
• Transit Gateway (Correct)
• Private Link
Explanation
Correct option:
Transit Gateway
AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway into each Amazon VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes. So, this is a perfect use-case for the Transit Gateway.
Incorrect options:
VPC Peering – A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection). VPC Peering connects only two VPCs and is not transitive, so it would require creating many peering connections between all the VPCs to connect them to one another. This alone would not work anyway, because the on-premises data center would still need to be connected through Direct Connect and a Direct Connect gateway, which is not mentioned in this answer.
VPN Gateway – A virtual private gateway (also known as a VPN Gateway) is the endpoint on the VPC side of your VPN connection. You can create a virtual private gateway before creating the VPC itself. VPN Gateway is a distractor here because we haven’t mentioned a VPN.
Private Link – AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. Private Link is utilized to create a private connection between an application that is fronted by an NLB in an account, and an Elastic Network Interface (ENI) in another account, without the need of VPC peering, and allowing the connections between the two to remain within the AWS network.
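As a rough boto3 sketch of the recommended option (not part of the original question), the snippet below creates a transit gateway as the hub and attaches two spoke VPCs. The VPC and subnet IDs are placeholders; VPCs in other accounts can be attached after the transit gateway is shared with those accounts via AWS RAM, and on-premises connectivity over Direct Connect is added through a Direct Connect gateway association.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hub of the star network.
tgw = ec2.create_transit_gateway(
    Description="hub for all VPCs and on-premises networks",
    Options={"AutoAcceptSharedAttachments": "enable"},  # convenient when spoke VPCs sit in other accounts
)["TransitGateway"]

# One attachment per spoke VPC (IDs are placeholders).
for vpc_id, subnet_ids in [("vpc-0aaa", ["subnet-1"]), ("vpc-0bbb", ["subnet-2"])]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw["TransitGatewayId"],
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )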


79. VPN Hub

Question 1:
Your company has decided to move from an on-premises environment to AWS. It’s Tuesday now, and the data migration should be completed in 72 hours, starting from Friday night and finishing by Monday morning. This is done so that the migration doesn’t affect business operations. The data capacity for migration is 10TB, and it is necessary to protect the migration data with secure communication.
Select a migration method that meets this condition.
Options:
A. Data migration with Snowball
B. Data transfer via Direct Connect connection
C. Data transfer via VPN connection using AWS site-to-site VPN
D. Data transfer by Storage Gateway
Answer: C
Explanation
Option 3 is the correct answer. Instances launched in an Amazon VPC cannot communicate with your on-premises network by default. Therefore, you need to connect your data center or office network to AWS through an AWS Site-to-Site VPN connection. You can then use Internet Protocol Security (IPsec) to create an encrypted VPN tunnel between the two points.
In this scenario, you need to choose a migration method based on the amount of data and the migration schedule. Since it is Tuesday and the transfer must happen over the coming weekend, there is not enough lead time to order and provision Direct Connect or Snowball; both require coordination with AWS that takes longer than this scenario allows. The only option that can be implemented immediately is a VPN connection. Transferring 10 TB in 72 hours corresponds to a sustained throughput of roughly 310 Mbps, which is achievable over a VPN connection provided sufficient internet bandwidth is available.
Option 1 is incorrect. Snowball uses physical appliances shipped from AWS. It is convenient when the amount of data is very large, but it is not suitable for short-notice migrations or for a relatively small data set such as 10 TB.
Option 2 is incorrect. Direct Connect requires a physical dedicated line to be provisioned, which involves an application process and lead time with AWS, so it cannot be ready in time.
Option 4 is incorrect. Storage Gateway is used for data transfer and backup between S3 and on-premises storage. It can be used to migrate storage data, but it is inappropriate here because the data being migrated is not limited to storage that a Storage Gateway would serve.
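For illustration (not part of the original question), the boto3 sketch below sets up the Site-to-Site VPN pieces: a customer gateway for the on-premises VPN device, a virtual private gateway attached to the VPC, the VPN connection itself, and a static route for the on-premises CIDR. The public IP, VPC ID, ASN, and CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway represents the on-premises VPN device (public IP is a placeholder).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]

# Virtual private gateway attached to the VPC that receives the migrated data.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

# The Site-to-Site VPN connection itself: two IPsec tunnels over the internet.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]

# Static route for the on-premises network range (placeholder CIDR).
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn["VpnConnectionId"], DestinationCidrBlock="10.10.0.0/16"
)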


80. Networking Costs


81. ELB

Question 1:
You are building a two-tier web application that delivers content while processing transactions on AWS. The data layer utilizes an online transaction processing (OLTP) database. At the WEB layer, it is necessary to create a flexible and scalable architectural configuration.
Choose the best way to meet this requirement.
Options:
A. Set up ELB and Auto Scaling groups on your EC2 instance
B. Set up a multi-AZ configuration for RDS
C. Deploy EC2 instances in multi-AZ to configure failover routing with Route53
D. Launch more EC2 instances than expected capacity
Answer: A
Explanation
Option 1 is the correct answer. A flexible and scalable web tier can be achieved by configuring an ELB and an Auto Scaling group for your EC2 instances. The ELB distributes traffic across multiple instances for increased redundancy, and Auto Scaling automatically scales the fleet out under heavy load and back in when demand drops.
Option 2 is incorrect. The requirement is a flexible and scalable architecture at the web layer, so configuring a Multi-AZ deployment at the database layer (RDS) does not address it.
Option 3 is incorrect. Failover routing with Route 53 improves fault tolerance, but it does not provide the elasticity or scalability required here.
Option 4 is incorrect. Launching more EC2 instances than the expected capacity is static over-provisioning; it does not provide a flexible, scalable configuration.

Question 2:
An e-commerce application is hosted in AWS. The last time a new product was launched, the application experienced a performance issue due to an enormous spike in traffic. Management decided that capacity must be doubled this week after the product is launched.
What is the MOST efficient way for management to ensure that capacity requirements are met?
Options:
A. Add a Step Scaling Policy
B. Add a Scheduled Scaling Action
C. Add a Simple Scaling Policy
D. Add Amazon EC2 Spot instances
Answer: B
Explanation
Scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. This is ideal for situations where you know when and for how long you are going to need the additional capacity.
CORRECT: “Add a Scheduled Scaling action” is the correct answer.
INCORRECT: “Add a Step Scaling policy” is incorrect. Step scaling policies increase or decrease the current capacity of your Auto Scaling group based on a set of scaling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm breach. This is more suitable for situations where the load is unpredictable.
INCORRECT: “Add a Simple Scaling policy” is incorrect. AWS recommends using step scaling over simple scaling in most cases. With simple scaling, after a scaling activity is started, the policy must wait for the scaling activity or health check replacement to complete and the cooldown period to expire before responding to additional alarms (in contrast to step scaling). Again, this is more suitable for unpredictable workloads.
INCORRECT: “Add Amazon EC2 Spot instances” is incorrect. Adding spot instances may decrease EC2 costs but you still need to ensure they are available. The main requirement of the question is that the performance issues are resolved rather than the cost being minimized.
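As a rough boto3 sketch of the correct option (not part of the original question), the snippet below creates a one-time scheduled scaling action that doubles capacity just before the launch and a second action that returns the group to normal afterwards. The group name, sizes, and timestamps are placeholders for this scenario.

import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# One-off scheduled action that doubles capacity just before the product launch.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-web-asg",              # placeholder
    ScheduledActionName="double-capacity-for-launch",
    StartTime=datetime(2024, 6, 7, 8, 0, tzinfo=timezone.utc),  # placeholder launch time
    MinSize=8,
    MaxSize=16,
    DesiredCapacity=16,   # double the usual desired capacity of 8
)

# A second action scales back down once the launch week is over.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-web-asg",
    ScheduledActionName="return-to-normal",
    StartTime=datetime(2024, 6, 14, 8, 0, tzinfo=timezone.utc),
    MinSize=4,
    MaxSize=8,
    DesiredCapacity=8,
)

For regularly repeating patterns, the Recurrence parameter (cron syntax) can be used instead of a one-time StartTime.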

Question 3:
A solutions architect has created a new Application Load Balancer and has configured a target group with IP address as a target type.
Which of the following types of IP addresses are allowed as a valid value for this target type?
Options:
A. Elastic IP address
B. Public IP address
C. Dynamic IP address
D. Private IP address
Answer: D
Explanation
Correct option:
Private IP address
When you create a target group, you specify its target type, which can be an Instance, IP or a Lambda function.
For IP address target type, you can route traffic using any private IP address from one or more network interfaces.
Incorrect options:
Public IP address
Elastic IP address
You can’t specify publicly routable IP addresses as values for IP target type, so both these options are incorrect.
Dynamic IP address – “Dynamic IP address” is not a valid value for the IP target type. This option has been added as a distractor.
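To make this concrete, here is a minimal boto3 sketch (not part of the original question) that creates a target group with the IP target type and registers private IP addresses as targets. The target group name, VPC ID, and addresses are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group whose targets are private IP addresses (e.g. ENIs, ECS tasks,
# or on-premises servers reachable over Direct Connect/VPN).
tg = elbv2.create_target_group(
    Name="ip-target-group",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder
    TargetType="ip",
)["TargetGroups"][0]

# Only private, non-publicly-routable addresses are accepted for the "ip" target type.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[
        {"Id": "10.0.1.15", "Port": 80},
        {"Id": "10.0.2.23", "Port": 80},
    ],
)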

Question 4:
You would like to deploy an application behind an Application Load Balancer, that will have some Auto Scaling capability and efficiently leverage a mix of Spot Instances and On-Demand instances to meet demand.
What do you recommend to manage the instances?
A. Create a Spot Instance Request
B. Create an ASG with a launch template
C. Create a Spot Fleet Request
D. Create an ASG with a launch configuration
Answer: B
Explanation
Correct option:
Create an ASG with a launch template
A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping.
A launch template is similar to a launch configuration, in that it specifies instance configuration information. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
Launch Templates do support a mix of On-Demand and Spot instances, and thanks to the ASG, we get auto-scaling capabilities. Hence this is the correct option.
Incorrect options:
Create a Spot Instance Request – A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price.
Spot Instance Requests only help to launch spot instances so we have to rule that out.
Create a Spot Fleet Request – Spot Fleet requests will help launch a mix of On-Demand and Spot, but won’t have the auto-scaling capability we need. So this option is incorrect.
Create an ASG with a launch configuration – ASG Launch Configurations do not support a mix of On-Demand and Spot instances. So this option is incorrect as well.
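As a rough illustration of the recommended option (not part of the original question), the boto3 sketch below creates an Auto Scaling group driven by a launch template with a mixed instances policy, attached to an ALB target group. The group name, subnet IDs, target group ARN, launch template ID, instance types, and distribution percentages are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# ASG driven by a launch template, mixing On-Demand and Spot capacity behind an ALB.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-mixed-asg",                  # placeholder
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",           # placeholders
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc"],
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",  # placeholder
                "Version": "$Latest",
            },
            # Optional overrides let the group pick from several instance types.
            "Overrides": [{"InstanceType": "m5.large"}, {"InstanceType": "m5a.large"}],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                   # always keep 2 On-Demand instances
            "OnDemandPercentageAboveBaseCapacity": 50,   # then split remaining capacity 50/50
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)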

Question 5:
The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Elastic Load Balancer. The team wants to route traffic to multiple back-end services based on the content of the request.
Which of the following types of load balancers would allow routing based on the content of the request?
A. Classic Load Balancer
B. Both Application Load Balancer and Network Load Balancer
C. Application Load Balancer
D. Network Load Balancer
Answer: C
Explanation
Correct option:
Application Load Balancer
An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action. You can configure listener rules to route requests to different target groups based on the content of the application traffic. Each target group can be an independent microservice, therefore this option is correct.
Incorrect options:
Network Load Balancer – Network Load Balancer operates at the connection level (Layer 4), routing connections to targets – Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private Cloud (Amazon VPC) based on IP protocol data.
Classic Load Balancer – Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.
Network Load Balancer or Classic Load Balancer cannot be used to route traffic based on the content of the request. So both these options are incorrect.
Both Application Load Balancer and Network Load Balancer – Network Load Balancer cannot be used to route traffic based on the content of the request. So this option is also incorrect.
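To illustrate content-based routing (not part of the original question), the boto3 sketch below adds two listener rules that forward requests to different microservice target groups based on the URL path. The listener and target group ARNs are placeholders; host-header or HTTP-header conditions can be used the same way.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARNs for the ALB listener and the per-microservice target groups.
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc/def"
orders_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/111"
users_tg = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/users/222"

# Requests under /orders/* go to the orders microservice.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "PathPatternConfig": {"Values": ["/orders/*"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": orders_tg}],
)

# Requests under /users/* go to the users microservice.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "PathPatternConfig": {"Values": ["/users/*"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": users_tg}],
)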


82. ELBs and Health Checks – LAB
83. Advanced ELB


84. ASG

Question 1:
You have set up an Auto Scaling group to increase the availability of your infrastructure. However, due to a configuration issue, the group has been unable to launch an instance for more than 30 hours, even though the Auto Scaling criteria for launching were met.
Given this situation, what action would Auto Scaling take?
Options:
A. Auto Scaling will continue to try launching an instance for up to 48 hours
B. Auto Scaling stops the startup process
C. Auto Scaling begins the startup process in another AZ
D. Auto Scaling notifies CloudWatch when it fails to start
Answer: B
Explanation
If Auto Scaling repeatedly encounters problems when launching instances, it suspends the scaling processes for that group; these processes can be resumed at any time later. You would use this time to analyze and fix the configuration issue. Therefore, option 2 is the correct answer.
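As a small, hedged boto3 sketch (not part of the original question), the snippet below shows how to inspect the group’s recent scaling activities to find the launch failure cause and then resume the suspended processes once the configuration is fixed. The group name is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Inspect recent scaling activities to find why launches failed.
activities = autoscaling.describe_scaling_activities(
    AutoScalingGroupName="my-asg", MaxRecords=10       # placeholder group name
)["Activities"]

for activity in activities:
    print(activity["StatusCode"], "-", activity.get("StatusMessage", activity["Description"]))

# Once the configuration issue is fixed, resume any administratively suspended processes.
autoscaling.resume_processes(AutoScalingGroupName="my-asg")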

Question 2:
A company runs an application on six web application servers in an Amazon EC2 Auto Scaling group in a single Availability Zone. The application is fronted by an Application Load Balancer (ALB). A Solutions Architect needs to modify the infrastructure to be highly available without making any modifications to the application.
Which architecture should the Solutions Architect choose to enable high availability?
Options:
A. Create an Auto Scaling group to launch three instances across each of two Regions
B. Modify the Auto Scaling group to use two instances across each of three Availability Zones
C. Create an Amazon CloudFront distribution with a custom origin across multiple Regions
D. Create a launch template that can be used to quickly create more instances in another Region
Answer: B
Explanation
The only thing that needs to be changed in this scenario to enable HA is to split the instances across multiple Availability Zones. The architecture already uses Auto Scaling and Elastic Load Balancing so there is plenty of resilience to failure. Once the instances are running across multiple AZs there will be AZ-level fault tolerance as well.
CORRECT: “Modify the Auto Scaling group to use two instances across each of three Availability Zones” is the correct answer.
INCORRECT: “Create an Amazon CloudFront distribution with a custom origin across multiple Regions” is incorrect. CloudFront is a content delivery network used to accelerate delivery of content to users; it does not provide high availability for the application tier itself.
INCORRECT: “Create a launch template that can be used to quickly create more instances in another Region” is incorrect. Multi-AZ should be enabled rather than multi-Region.
INCORRECT: “Create an Auto Scaling group to launch three instances across each of two Regions” is incorrect. HA can be achieved within a Region by simply enabling more AZs in the ASG. An ASG cannot launch instances in multiple Regions.
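For illustration (not part of the original question), the boto3 sketch below modifies an existing Auto Scaling group so that its six instances are spread as two per AZ across three Availability Zones. The group name and subnet IDs are placeholders; the ALB must also be associated with subnets in the same three AZs.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Spread the existing six instances across three AZs by listing one subnet per AZ.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",                          # placeholder
    VPCZoneIdentifier="subnet-az-a,subnet-az-b,subnet-az-c",  # placeholders, one subnet per AZ
    MinSize=6,
    MaxSize=6,
    DesiredCapacity=6,
)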

Question 3:
A company hosts a multiplayer game on AWS. The application uses Amazon EC2 instances in a single Availability Zone and users connect over Layer 4. Solutions Architect has been tasked with making the architecture highly available and also more cost-effective.
How can the solutions architect best meet these requirements? (Select TWO.)
A. Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically
B. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically
C. Configure a Network Load Balancer in front of the EC2 instances
D. Increase the number of instances and use smaller EC2 instance types
E. Configure an Application Load Balancer in front of the EC2 instances
Answer: B & C
Explanation
The solutions architect must enable high availability for the architecture and ensure it is cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be created to add and remove instances across multiple availability zones.
In order to distribute the traffic to the instances the architecture should use a Network Load Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto Scaling group will ensure the right number of instances are running based on demand.
CORRECT: “Configure a Network Load Balancer in front of the EC2 instances” is a correct answer.
CORRECT: “Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically” is also a correct answer.
INCORRECT: “Increase the number of instances and use smaller EC2 instance types” is incorrect as this is not the most cost-effective option. Auto Scaling should be used to maintain the right number of active instances.
INCORRECT: “Configure an Auto Scaling group to add or remove instances in the Availability Zone automatically” is incorrect as this is not highly available as it’s a single AZ.
INCORRECT: “Configure an Application Load Balancer in front of the EC2 instances” is incorrect as an ALB operates at Layer 7 rather than Layer 4.

Question 4:
The payroll department at a company initiates several computationally intensive workloads on EC2 instances at a designated hour on the last day of every month. The payroll department has noticed a trend of severe performance lag during this hour. The engineering team has figured out a solution by using Auto Scaling Group for these EC2 instances and making sure that 10 EC2 instances are available during this peak usage hour. For normal operations only 2 EC2 instances are enough to cater to the workload.
As a solutions architect, which of the following steps would you recommend to implement the solution?
Options:
A. Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
B. Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the min count as well as the max count of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
C. Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
D. Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
Answer: A
Explanation
Correct option:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.
A scheduled action sets the minimum, maximum, and desired sizes to what is specified by the scheduled action at the time specified by the scheduled action. For the given use case, the correct solution is to set the desired capacity to 10. When we want to specify a range of instances, then we must use min and max values.
Incorrect options:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the min count as well as the max count of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour – As mentioned earlier in the explanation, only when we want to specify a range of instances, then we must use min and max values. As the given use-case requires exactly 10 instances to be available during the peak hour, so we must set the desired capacity to 10. Hence this option is incorrect.
Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
Neither a target tracking policy nor a simple scaling policy can be used to trigger a scaling action at a specific designated hour. Both these options have been added as distractors.

Question 5:
A social gaming startup has its flagship application hosted on a fleet of EC2 servers running behind an Elastic Load Balancer. These servers are part of an Auto Scaling Group. 90% of the users start logging into the system at 6 pm every day and continue till midnight. The engineering team at the startup has observed that there is a significant performance lag during the initial hour from 6 pm to 7 pm. The application is able to function normally thereafter.
As a solutions architect, which of the following steps would you recommend addressing the performance bottleneck during that initial hour of traffic spike?
A. Configure your Auto Scaling group by creating a scheduled action that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
B. Configure your Auto Scaling group by creating a lifecycle hook that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
C. Configure your Auto Scaling group by creating a target tracking policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
D. Configure your Auto Scaling group by creating a step scaling policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
Answer: A
Explanation
Correct option:
Configure your Auto Scaling group by creating a scheduled action that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
The scheduled action tells the Amazon EC2 Auto Scaling group to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect, and the new minimum, maximum, and desired sizes for the scaling action. For the given use-case, the engineering team can create a daily scheduled action to kick-off before 6 pm which would cause the scale-out to happen even before peak traffic kicks in at 6 pm. Hence this is the correct option.
Incorrect options:
Configure your Auto Scaling group by creating a lifecycle hook that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm – Auto Scaling group lifecycle hooks enable you to perform custom actions as the Auto Scaling group launches or terminates instances. For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates. Therefore, lifecycle hooks cannot cause a scale-out to happen at a specified time. Hence this option is incorrect.
Configure your Auto Scaling group by creating a target tracking policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm – With target tracking scaling policies, you choose a scaling metric and set a target value. Application Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. Target tracking policy cannot cause a scale-out to happen at a specified time. Hence this option is incorrect.
Configure your Auto Scaling group by creating a step scaling policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm – With step scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process as well as define how your scalable target should be scaled when a threshold is in breach for a specified number of evaluation periods. Step scaling policy cannot cause a scale-out to happen at a specified time. Hence this option is incorrect.
In addition, both the target tracking as well as step scaling policies entail a lag wherein the instances will be provisioned only when the underlying CloudWatch alarms go off. Therefore we would still see performance lag during some part of the initial hour.

Question 6:
The DevOps team at an e-commerce company wants to perform some maintenance work on a specific EC2 instance that is part of an Auto Scaling group using a step scaling policy. The team is facing a maintenance challenge – every time the team deploys a maintenance patch, the instance health check status shows as out of service for a few minutes. This causes the Auto Scaling group to provision another replacement instance immediately.
As a solutions architect, which are the MOST time/resource efficient steps that you would recommend so that the maintenance work can be completed at the earliest? (Select two)
Options:
A. Take a snapshot of the instance, create a new AMI and then launch a new instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the Auto Scaling Group by using the manual scaling policy. Terminate the earlier instance that had the maintenance issue
B. Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service
C. Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto Scaling group and add all the instances again using the manual scaling policy
D. Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance’s health status back to healthy and activate the ReplaceUnhealthy process type again
E. Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance’s health status back to healthy and activate the ScheduledActions process type again
Answer: B & D
Explanation
Correct options:
Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service – You can put an instance that is in the InService state into the Standby state, update some software or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle application traffic.
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance’s health status back to healthy and activate the ReplaceUnhealthy process type again – The ReplaceUnhealthy process terminates instances that are marked as unhealthy and then creates new instances to replace them. Amazon EC2 Auto Scaling stops replacing instances that are marked as unhealthy. Instances that fail EC2 or Elastic Load Balancing health checks are still marked as unhealthy. As soon as you resume the ReplaceUnhealthly process, Amazon EC2 Auto Scaling replaces instances that were marked unhealthy while this process was suspended.
Incorrect options:
Take a snapshot of the instance, create a new AMI and then launch a new instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the Auto Scaling Group by using the manual scaling policy. Terminate the earlier instance that had the maintenance issue – Taking the snapshot of the existing instance to create a new AMI and then creating a new instance in order to apply the maintenance patch is not time/resource optimal, hence this option is ruled out.
Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto Scaling group and add all the instances again using the manual scaling policy – It’s not recommended to delete the Auto Scaling group just to apply a maintenance patch on a specific instance.
Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can manually set the instance’s health status back to healthy and activate the ScheduledActions process type again – Amazon EC2 Auto Scaling does not execute scaling actions that are scheduled to run during the suspension period. This option is not relevant to the given use-case.
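As a hedged boto3 sketch of the two correct options (not part of the original question), the snippet below shows the Standby workflow and the ReplaceUnhealthy suspension workflow. The group name and instance ID are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

GROUP = "ecommerce-asg"             # placeholder
INSTANCE = "i-0123456789abcdef0"    # placeholder

# Option 1: move the instance to Standby while patching, then return it to service.
autoscaling.enter_standby(
    AutoScalingGroupName=GROUP,
    InstanceIds=[INSTANCE],
    ShouldDecrementDesiredCapacity=True,   # avoid launching a replacement while in Standby
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(AutoScalingGroupName=GROUP, InstanceIds=[INSTANCE])

# Option 2: suspend ReplaceUnhealthy so the failing health check does not trigger a replacement.
autoscaling.suspend_processes(AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=INSTANCE, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=GROUP, ScalingProcesses=["ReplaceUnhealthy"])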

Question 07:
The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances.
As a solutions architect, what would you recommend so that the application runs near its peak performance state?
Options:
A. Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%
B. Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%
C. Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
D. Configure the Auto Scaling group to use a Cloudwatch alarm triggered on a CPU utilization threshold of 50%
Answer: C
Explanation
Correct option:
Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value.
For example, you can use target tracking scaling to:
Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 50 percent. This meets the requirements specified in the given use-case and therefore, this is the correct option.
Incorrect options:
Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%
With step scaling and simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Neither step scaling nor simple scaling can be configured to use a target metric for CPU utilization, hence both these options are incorrect.
Configure the Auto Scaling group to use a Cloudwatch alarm triggered on a CPU utilization threshold of 50% – An Auto Scaling group cannot directly use a Cloudwatch alarm as the source for a scale-in or scale-out event, hence this option is incorrect.
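To make the correct option concrete, here is a minimal boto3 sketch (not part of the original question) of a target tracking policy that keeps the group’s average CPU utilization at about 50%. The group and policy names are placeholders; the underlying CloudWatch alarms are created and managed by Auto Scaling.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking policy: keep average CPU utilization of the group at ~50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-asg",   # placeholder
    PolicyName="keep-cpu-at-50",            # placeholder
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)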

Question 08:
A tax computation software runs on Amazon EC2 instances behind a Classic Load Balancer. The instances are managed by an Auto Scaling Group. The tax computation software has an optimization module, which can take up to 10 minutes to find the optimal answer.
How do you ensure that when the Auto Scaling Group initiates a scale-in event, the users do not see their current requests interrupted?
• Increase the deregistration delay to more than 10 minutes (Correct)
• Enable Stickiness on the CLB
• Enable ELB health checks on the ASG
• Create an ASG Scheduled Action
Explanation
Correct option:
Increase the deregistration delay to more than 10 minutes
Elastic Load Balancing stops sending requests to targets that are deregistering. By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which can help in-flight requests to the target to complete. We need to update this value to more than 10 minutes to allow our tax software to complete in-flight requests. Therefore this is the correct option.
Incorrect options:
Create an ASG Scheduled Action – Scheduled scaling allows you to set your scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.
You cannot use a scheduled action to stop the current request from being interrupted in case of a scale-in event. Hence this option is incorrect.
Enable Stickiness on the CLB – By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user’s session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance. You cannot use sticky sessions to stop the current request from being interrupted in case of a scale-in event. Hence this option is incorrect.
Enable ELB health checks on the ASG – The default health checks for an Auto Scaling group are EC2 status checks only. If an instance fails these status checks, the Auto Scaling group considers the instance unhealthy and replaces it. To ensure that the group can determine an instance’s health based on additional tests provided by the load balancer, you can optionally configure the Auto Scaling group to use Elastic Load Balancing health checks.
ELB health checks, when enabled on an ASG, let the ASG know when instances are unhealthy so that it can replace them. You cannot use ELB health checks on the ASG to stop the current request from being interrupted in case of a scale-in event. Hence this option is incorrect.
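For reference, on a Classic Load Balancer this setting is called connection draining. A minimal sketch (Python/boto3) of raising the drain timeout above the 10-minute job duration follows; the load balancer name is a placeholder.

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# 660 seconds = 11 minutes, above the 10-minute optimization job,
# so in-flight requests can finish before the instance is removed.
elb.modify_load_balancer_attributes(
    LoadBalancerName="tax-app-clb",  # placeholder CLB name
    LoadBalancerAttributes={
        "ConnectionDraining": {"Enabled": True, "Timeout": 660}
    },
)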


85. Launch Configurations & Autoscaling Groups Lab
86. HA Architecture
87. Building a fault tolerant WordPress site – Lab 1
88. Building a fault tolerant WordPress site – Lab 2
89. Building a fault tolerant WordPress site – Lab 3 : Adding Resilience & Autoscaling
90. Building a fault tolerant WordPress site – Lab 4 : Cleaning Up
91. Building a fault tolerant WordPress site – Lab 5 : Cloud Formation
92. Elastic Beanstalk Lab
93. Highly Available Bastions
94. On Premise Strategies


95. SQS

Question 1:
Your company operates an application for uploading, processing and publishing user-submitted videos. The application is hosted on EC2 instances, with EC2 worker processes that process and publish each uploaded video, and an Auto Scaling group is already set up.
Select the services you should use to increase the reliability of your worker processes.
Options:
A. SQS
B. SNS
C. SES
D. CloudFront
Answer: A
Explanation:
Amazon SQS is used to decouple components, as with these worker processes. Each video processing request is stored in the queue, enabling reliable execution through asynchronous processing. Multiple worker processes can run in parallel across EC2 instances, each picking up requests from the queue, and each message is consumed only once.
Distributed parallel processing through SQS queues therefore increases the reliability of the worker processes, so option A is the correct answer.
Option 2 is incorrect. Messaging is the primary role of Amazon SNS and is used to configure worker processes to be triggered by specific events. SQS must be used to enable distributed processing of worker processes by queuing.
Option 3 is incorrect. You can implement the email function by using Amazon SES. SQS is used for distributed processing of worker processes.
Option 4 is incorrect. CloudFront is a service used for content distribution.
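As a rough sketch of this pattern (Python/boto3), a worker could poll the queue, process each message, and delete it only after successful processing; the queue URL and processing function below are placeholders.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"  # placeholder

def process_video(body):
    # Placeholder for the real transcoding and publishing logic.
    print("processing", body)

while True:
    # Long polling (WaitTimeSeconds) reduces empty responses and API calls.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        process_video(message["Body"])
        # Delete only after successful processing; if a worker fails, the
        # message becomes visible again and another worker can pick it up.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])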

Question 2:
As a Solutions Architect, you are trying to add messaging processing using AWS messaging services to the application you are currently building. The most important requirement is to maintain the order of the messages and not send duplicate messages.
Which of the following services will help you meet this requirement?
Options:
A. SQS
B. SNS
C. SES
D. Lambda
Answer: A
Explanation
Option 1 is the correct answer. SQS is a managed message queuing service that lets application components exchange messages through queues. FIFO queues guarantee message order and provide exactly-once processing, so the order of messages is preserved and duplicates are not introduced.
Duplicate messages can also be prevented explicitly by using the message deduplication ID in an SQS FIFO queue. The message deduplication ID is a token used to deduplicate sent messages. If a message with a particular message deduplication ID is sent successfully, any further messages sent with the same ID are accepted but not delivered during the 5-minute deduplication interval.
Option 2 is incorrect. SNS is a push-type messaging service. The order of the messages sent is not guaranteed.
Option 3 is incorrect. SES is a service that can implement the email function. The order of the messages is not guaranteed as it only performs email notifications.
Option 4 is incorrect. Lambda does not have a message notification feature.
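A minimal sketch (Python/boto3) of sending to a FIFO queue with a deduplication ID follows; the queue URL, group ID, and deduplication ID are placeholders.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"order_id": "1234", "action": "create"}',
    MessageGroupId="order-1234",                 # messages in the same group keep their order
    MessageDeduplicationId="order-1234-create",  # duplicates with this ID are dropped for 5 minutes
)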

Question 3:
As a Solutions Architect, you are building a web application on AWS. This application provides data conversion services to users. The files to be converted are first uploaded to S3, and then a Spot Fleet processes the data conversion. Users are divided into free users and paid users. Files submitted by paid users should be prioritized for processing.
Choose a solution that meets these requirements.
Options:
A. Use Route53 to configure traffic routing according to customer type
B. Use SQS to set a specific queue that preferentially processes paid users, and then use a regular queue for free users
C. Use a Lambda function to send messages that mark paid users' requests for preferential processing, and leave the rest at the default setting
D. Use SNS to send messages that mark paid users' requests for preferential processing, and leave the rest at the default setting
Answer: B
Explanation
SQS does not assign priorities to individual messages, but you can implement prioritization by using separate queues: one queue for requests that must be processed preferentially and another for everything else. When each queue is polled separately, the higher-priority queue is polled first. With this setup, you can dedicate a queue to paid users and use the default queue for free users. Therefore, option 2 is the correct answer.
Follow these steps to implement prioritization (a polling sketch follows the list):
1. Prepare multiple queues, one per priority level, using SQS.
2. Place prioritized requests in the high-priority queue.
3. Provision the number of servers processing each queue according to its priority.
4. Optionally, delay the processing start time by using the queue's delayed message delivery (delay queue) feature.
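A minimal polling sketch for this two-queue pattern (Python/boto3), with placeholder queue URLs, might look like this:

import boto3

sqs = boto3.client("sqs")
PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/convert-paid"  # placeholder
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/convert-free"  # placeholder

def next_job():
    # Always drain the paid queue first; fall back to the free queue only
    # when no paid-user jobs are waiting.
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
        messages = response.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None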

Question 4:
Your company has a database system that uses DynamoDB. Recently, due to the increase in the number of write processes, many processing delays and failures have occurred in the database. As a Solutions Architect, you are required to take action to ensure that write operations are not lost under any circumstances.
Choose the best way to meet this requirement.
Options:
A. Use IOPS volume for DynamoDB
B. Set up a distributed processing using SQS queues for DynamoDB write processing
C. Set up a distributed processing using SQS queue and set the Lambda function for the write process of DynamoDB
D. Perform DynamoDB data processing with an EC2 instance
Answer: C
Explanation
Pending write requests to the database can be stored in an SQS queue for asynchronous processing. For DynamoDB, the queued requests can then be applied by a Lambda function that is triggered by the queue. Because each write request is held in the queue until it is processed, the request messages are not lost, which meets the requirement. Therefore, option 3 is the correct answer.
Option 1 is incorrect. An IOPS volume configuration cannot be chosen for DynamoDB.
Option 2 is incorrect. Distributed processing of DynamoDB writes cannot be implemented with an SQS queue alone; an SQS-triggered Lambda function (or another consumer) is needed to apply the queued writes.
Option 4 is incorrect. Performing DynamoDB write processing from an EC2 instance is inefficient. Linking the queue to a Lambda function for serverless processing yields a more efficient architecture.
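As a rough sketch (Python) of the Lambda side of this pattern, assuming a hypothetical DynamoDB table named Orders and SQS messages whose bodies are JSON items:

import json
import boto3

table = boto3.resource("dynamodb").Table("Orders")  # placeholder table name

def handler(event, context):
    # Lambda is invoked with a batch of SQS records; each message body
    # carries one buffered write request as a JSON item.
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)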

Question 5:
As a Solutions Architect, you are developing a workflow that sends video data from your system to AWS for transcoding. You plan to build this mechanism using EC2 worker instances that pull transcode jobs from SQS.
Choose the correct feature of SQS that helps you complete this.
Options:
A. SQS provides health checks for worker instances
B. SQS can achieve horizontal scaling
C. SQS is best suited for this type of process because it maintains the order of operations
D. Processing according to a set schedule can be executed by SQS
Answer: B
Explanation
Option 2 is the correct answer. SQS distributes system processing through queues, which lets you scale your AWS resources horizontally. SQS queues enable parallel processing across multiple EC2 instances, achieving load distribution and more efficient processing.
Option 1 is incorrect. SQS does not health check the status of worker instances.
Option 3 is incorrect. The order of the queues is not particularly important in this video processing, so it is not included as a requirement.
Option 4 is incorrect. SQS does not perform scheduled queuing.

Question 6:
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components.
Which solution should the Architect use?
Options:
A. Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
B. Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
C. Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
D. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
Answer: D
Explanation
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling) not push-based (use SNS for push-based).
CORRECT: “Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages” is the correct answer.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream” is incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data is immediately loaded into a destination that can be Amazon S3, RedShift, Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not streaming data and there is no need to load data into an additional AWS service.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3” is incorrect as per the previous explanation.
INCORRECT: “Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue ” is incorrect as SQS is pull-based, not push-based. EC2 instances must poll the queue to find jobs to process.

Question 7:
An eCommerce application consists of three tiers. The web tier includes EC2 instances behind an Application Load balancer, the middle tier uses EC2 instances and an Amazon SQS queue to process orders, and the database tier consists of an Auto Scaling DynamoDB table. During busy periods customers have complained about delays in the processing of orders. A Solutions Architect has been tasked with reducing processing times.
Which action will be MOST effective in accomplishing this requirement?
Options:
A. Replace the Amazon SQS queue with Amazon Kinesis Data Firehose
B. Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier
C. Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier
D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth
Answer: D
Explanation
The most likely cause of the processing delays is insufficient instances in the middle tier where the order processing takes place. The most effective solution to reduce processing times in this case is to scale based on the backlog per instance (number of messages in the SQS queue) as this reflects the amount of work that needs to be done.
CORRECT: “Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth” is the correct answer.
INCORRECT: “Replace the Amazon SQS queue with Amazon Kinesis Data Firehose” is incorrect. The issue is not the efficiency of queuing messages but the processing of the messages. In this case scaling the EC2 instances to reflect the workload is a better solution.
INCORRECT: “Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier” is incorrect. The DynamoDB table is configured with Auto Scaling so this is not likely to be the bottleneck in order processing.
INCORRECT: “Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier” is incorrect. This will cache media files to speed up web response times but not order processing times as they take place in the middle tier.
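One possible way to wire this up, shown only as a sketch (Python/boto3) with placeholder names and thresholds, is a simple scaling policy triggered by a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric:

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scale-out policy; the returned policy ARN becomes the alarm action.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-workers",  # placeholder ASG name
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)

# Alarm on the number of visible messages in the order queue.
cloudwatch.put_metric_alarm(
    AlarmName="order-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "orders"}],  # placeholder queue name
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)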

Question 8:
A web application allows users to upload photos and add graphical elements to them. The application offers two tiers of service: free and paid. Photos uploaded by paid users should be processed before those submitted using the free tier. The photos are uploaded to an Amazon S3 bucket which uses an event notification to send the job information to Amazon SQS.
How should a Solutions Architect configure the Amazon SQS deployment to meet these requirements?
Options:
A. Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos
B. Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling
C. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first
D. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue
Answer: D
Explanation
AWS recommend using separate queues when you need to provide prioritization of work. The logic can then be implemented at the application layer to prioritize the queue for the paid photos over the queue for the free photos.
CORRECT: “Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue” is the correct answer.
INCORRECT: “Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first” is incorrect. FIFO queues preserve the order of messages but they do not prioritize messages within the queue. The orders would need to be placed into the queue in a priority order and there’s no way of doing this as the messages are sent automatically through event notifications as they are received by Amazon S3.
INCORRECT: “Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos” is incorrect. Batching adds efficiency but it has nothing to do with ordering or priority.
INCORRECT: “Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling” is incorrect. Short polling and long polling are used to control the amount of time the consumer process waits before closing the API call and trying again. Polling should be configured for efficiency of API calls and processing of messages but does not help with message prioritization.

Question 9:
A company is working with a strategic partner that has an application that must be able to send messages to one of the company’s Amazon SQS queues. The partner company has its own AWS account.
How can a Solutions Architect provide least privilege access to the partner?
Options:
A. Create a user account and grant it the sqs:SendMessage permission for Amazon SQS. Share the credentials with the partner company
B. Update the permission policy on the SQS queue to grant all permissions to the partner’s AWS account
C. Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account
D. Create a cross-account role with access to all SQS queues and use the partner’s AWS account in the trust document for the role
Answer: C
Explanation
Amazon SQS supports resource-based policies. The best way to grant the permissions using the principle of least privilege is to use a resource-based policy attached to the SQS queue that grants the partner company’s AWS account the sqs:SendMessage privilege.
CORRECT: “Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account” is the correct answer.
INCORRECT: “Create a user account and grant it the sqs:SendMessage permission for Amazon SQS. Share the credentials with the partner company” is incorrect. Sharing IAM user credentials with a third party is poor practice, and granting sqs:SendMessage for Amazon SQS without scoping it to a specific queue would provide the permissions for all SQS queues, not just the queue the partner company should be able to access.
INCORRECT: “Create a cross-account role with access to all SQS queues and use the partner’s AWS account in the trust document for the role” is incorrect. This would provide access to all SQS queues and the partner company should only be able to access one SQS queue.
INCORRECT: “Update the permission policy on the SQS queue to grant all permissions to the partner’s AWS account” is incorrect. This provides too many permissions; the partner company only needs to send messages to the queue.
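A minimal sketch of such a resource-based queue policy (Python/boto3), with placeholder account IDs and queue names, could be:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/partner-intake"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerSendMessage",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # placeholder partner account
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111111111111:partner-intake",
    }],
}

# Attach the resource-based policy to the queue.
sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(policy)})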

Question 10:
An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.
Which service can be used to decouple the compute services?
Options:
A. Amazon SNS
B. AWS Step Functions
C. Amazon MQ
D. AWS Config
Answer: A
Explanation
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
CORRECT: “Amazon SNS” is the correct answer.
INCORRECT: “AWS Config” is incorrect. AWS Config is a service that is used for continuous compliance, not application decoupling.
INCORRECT: “Amazon MQ” is incorrect. Amazon MQ is similar to SQS but is used for existing applications that are being migrated into AWS. SQS should be used for new applications being created in the cloud.
INCORRECT: “AWS Step Functions” is incorrect. AWS Step Functions is a workflow service. It is not the best solution for this scenario.

Question 11:
A major bank is using SQS to migrate several core banking applications to the cloud to ensure high availability and cost efficiency while simplifying administrative complexity and overhead. The development team at the bank expects a peak rate of about 1000 messages per second to be processed via SQS. It is important that the messages are processed in order.
Which of the following options can be used to implement this system?
Options:
A. Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate
B. Use Amazon SQS FIFO queue to process the messages
C. Use Amazon SQS standard queue to process the messages
D. Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate
Answer: A
Explanation
Correct option:
Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues – Standard queues vs FIFO queues.
For FIFO queues, the order in which messages are sent and received is strictly preserved (i.e. First-In-First-Out). On the other hand, the standard SQS queues offer best-effort ordering. This means that occasionally, messages might be delivered in an order different from which they were sent.
By default, FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second). When you batch 10 messages per operation (the maximum), FIFO queues can support up to 3,000 messages per second. Here, batching 4 messages per operation lets the FIFO queue support up to 1,200 messages per second, which comfortably covers the peak rate of 1,000 messages per second.
Incorrect options:
Use Amazon SQS standard queue to process the messages – As messages need to be processed in order, therefore standard queues are ruled out.
Use Amazon SQS FIFO queue to process the messages – By default, FIFO queues support up to 300 messages per second and this is not sufficient to meet the message processing throughput per the given use-case. Hence this option is incorrect.
Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate – As mentioned earlier in the explanation, you need to use FIFO queues in batch mode and process 4 messages per operation, so that the FIFO queue can support up to 1200 messages per second. With 2 messages per operation, you can only support up to 600 messages per second.
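For illustration, a batch of 4 messages per SendMessageBatch call (Python/boto3, placeholder queue URL and group ID) might be sent like this:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/transactions.fifo"  # placeholder

# 4 messages per SendMessageBatch call: 300 operations/second x 4 = 1,200 messages/second.
entries = [
    {
        "Id": str(i),
        "MessageBody": json.dumps({"transaction": i}),
        "MessageGroupId": "core-banking",      # preserves ordering within the group
        "MessageDeduplicationId": f"txn-{i}",
    }
    for i in range(4)
]

sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)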

Question 12:
You are establishing a monitoring solution for desktop systems that will be sending telemetry data into AWS every minute. Data for each system must be processed in order and independently, and you would like to be able to scale the number of consumers up to the number of desktop systems being monitored.
What do you recommend?
• Use an SQS FIFO queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID (Correct)
• Use an SQS FIFO queue, and send the telemetry data as is
• Use a Kinesis Data Stream, and send the telemetry data with a Partition ID that uses the value of the Desktop ID
• Use an SQS standard queue, and send the telemetry data as is
Explanation
Correct option:
Use an SQS FIFO queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
We, therefore, need to use an SQS FIFO queue. If we don’t specify a GroupID, then all the messages are in absolute order, but we can only have 1 consumer at most. To allow for multiple consumers to read data for each Desktop application, and to scale the number of consumers, we should use the “Group ID” attribute. So this is the correct option.
Incorrect options:
Use an SQS FIFO queue, and send the telemetry data as is – This is incorrect because if we send the telemetry data as is, the messages cannot be split into per-desktop groups and we will not be able to scale the number of consumers to match the number of desktop systems. We should use the “Group ID” attribute so that the data for each desktop can be read by its own consumer.
Use an SQS standard queue, and send the telemetry data as is – An SQS standard queue has no ordering capability so that’s ruled out.
Use a Kinesis Data Stream, and send the telemetry data with a Partition ID that uses the value of the Desktop ID – Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. A Kinesis Data Stream would work and would give us the data for each desktop application within shards, but we can only have as many consumers as shards in Kinesis (which is in practice, much less than the number of producers).


96. SWF


97. SNS

Question 1:
The engineering team at a Spanish professional football club has built a notification system for its website using Amazon SNS notifications which are then handled by a Lambda function for end-user delivery. During the off-season, the notification system needs to handle about 100 requests per second. During the peak football season, the rate reaches about 5000 requests per second, and a significant number of the notifications are not being delivered to the end-users on the website.
As a solutions architect, which of the following would you suggest as the BEST possible solution to this issue?
Options:
A. Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit
B. Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account limit
C. The engineering team needs to provision more servers running the Lambda service
D. The engineering team needs to provision more servers running the SNS service
Answer: A
Explanation
Correct option:
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there’s no charge when your code isn’t running.
AWS Lambda currently supports 1000 concurrent executions per AWS account per region. If your Amazon SNS message deliveries to AWS Lambda contribute to crossing these concurrency quotas, your Amazon SNS message deliveries will be throttled. You need to contact AWS support to raise the account limit. Therefore this option is correct.
Incorrect options:
Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account limit – Amazon SNS leverages the proven AWS cloud to dynamically scale with your application. You don’t need to contact AWS support, as SNS is a fully managed service, taking care of the heavy lifting related to capacity planning, provisioning, monitoring, and patching. Therefore, this option is incorrect.
The engineering team needs to provision more servers running the SNS service
The engineering team needs to provision more servers running the Lambda service
As both Lambda and SNS are serverless and fully managed services, the engineering team cannot provision more servers. Both of these options are incorrect.

Question 2:
A cybersecurity company uses a fleet of EC2 instances to run a proprietary application. The infrastructure maintenance group at the company wants to be notified via an email whenever the CPU utilization for any of the EC2 instances breaches a certain threshold.
Which of the following services would you use for building a solution with the LEAST amount of development effort? (Select two)
• Amazon SNS (Correct)
• AWS Lambda
• Amazon SQS
• Amazon CloudWatch (Correct)
• AWS Step Functions
Explanation
Correct options:
Amazon SNS – Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Amazon CloudWatch allows you to monitor AWS cloud resources and the applications you run on AWS.
You can use CloudWatch Alarms to send an email via SNS whenever any of the EC2 instances breaches a certain threshold. Hence both these options are correct.
Incorrect options:
AWS Lambda – With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service—all with zero administration. You cannot use AWS Lambda to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.
Amazon SQS – Amazon SQS Standard offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. You cannot use SQS to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.
AWS Step Functions – AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications. You cannot use Step Functions to monitor CPU utilization of EC2 instances or send notification emails, hence this option is incorrect.
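A minimal sketch of this wiring (Python/boto3), with a placeholder email address and instance ID, could look like this:

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Topic plus email subscription (the recipient must confirm the subscription).
topic = sns.create_topic(Name="ec2-cpu-alerts")
sns.subscribe(TopicArn=topic["TopicArn"], Protocol="email", Endpoint="ops@example.com")  # placeholder address

# Alarm that notifies the topic when average CPU exceeds the threshold.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0abc123def456",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc123def456"}],  # placeholder instance ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic["TopicArn"]],
)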


98. Elastic Transcoder


99. API Gateway

Question 1:
As a Solutions Architect, you are building business applications on a serverless architecture. In this application, the process of acquiring, registering, and changing DynamoDB data is performed by a Lambda function. You need to be able to call this application over HTTP.
How can this requirement be achieved?
Options:
A. Use API Gateway and integrate it with the Lambda functions
B. Set the IAM role to a Lambda function to allow HTTP access
C. Set HTTP permissions in the Lambda function settings
D. Configure NACLs to allow HTTP access to Lambda function
Answer: A
Explanation
Option 1 is the correct answer. API Gateway exposes back-end services over HTTP through APIs. API Gateway integrates natively with Lambda, so by configuring an API Gateway integration for the Lambda function you can invoke the function over HTTP.
Option 2 is incorrect. An IAM role allows one AWS resource to access another AWS resource; it is not a mechanism for allowing HTTP access.
Option 3 is incorrect. There is no setting in the Lambda function configuration that allows HTTP access.
Option 4 is incorrect. Network ACLs cannot be configured to allow HTTP access to Lambda functions.
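As an illustration of the Lambda side of such an HTTP integration (Python), a handler for an API Gateway Lambda proxy integration simply returns a status code, headers, and a body; the message content below is a placeholder.

import json

def handler(event, context):
    # With Lambda proxy integration, API Gateway passes the HTTP request in
    # "event" and expects a response with statusCode, headers, and body.
    body = {"message": "hello over HTTP", "path": event.get("path")}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }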

Question 2:
The product team at a startup has figured out a market need to support both stateful and stateless client-server communications via the APIs developed using its platform. You have been hired by the startup as a solutions architect to build a solution to fulfill this market need using AWS API Gateway.
Which of the following would you identify as correct?
Options:
A. API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
B. API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server
C. API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server
D. API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
Answer: D
Explanation
Correct option:
API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the front door for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.
API Gateway creates RESTful APIs that:
Are HTTP-based.
Enable stateless client-server communication.
Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.
API Gateway creates WebSocket APIs that:
Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server.
Route incoming messages based on message content.
So API Gateway supports stateless RESTful APIs as well as stateful WebSocket APIs. Therefore this option is correct.
Incorrect options:
API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server
API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server.
These three options contradict the earlier details provided in the explanation. To summarize, API Gateway supports stateless RESTful APIs and stateful WebSocket APIs. Hence these options are incorrect.

Question 3:
A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. The company is looking for AWS services that can be used for buffering or throttling to handle such traffic variations.
Which of the following services can be used to support this requirement?
Options:
A. Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis
B. Amazon API Gateway, Amazon SQS and Amazon Kinesis
C. Elastic Load Balancer, Amazon SQS, AWS Lambda
D. Amazon SQS, Amazon SNS and AWS Lambda
Answer: B
Explanation
Correct option:
Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.
Amazon API Gateway, Amazon SQS and Amazon Kinesis – To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account. In the token bucket algorithm, the burst is the maximum bucket size.
Amazon SQS – Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.
Amazon Kinesis – Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
Incorrect options:
Amazon SQS, Amazon SNS and AWS Lambda – Amazon SQS has the ability to buffer its messages. Amazon Simple Notification Service (SNS) cannot buffer messages and is generally used with SQS to provide the buffering facility. AWS Lambda is a compute service and does not provide any buffering capability. So, this combination of services is incorrect.
Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis – A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. This cannot help in throttling or buffering of requests. Amazon SQS and Kinesis can buffer incoming data. Since Gateway Endpoint is an incorrect service for throttling or buffering, this option is incorrect.
Elastic Load Balancer, Amazon SQS, AWS Lambda – Elastic Load Balancer cannot throttle requests. Amazon SQS can be used to buffer messages. AWS Lambda cannot be used for buffering. So, this combination is also incorrect.
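For illustration, API Gateway throttling can be configured through a usage plan; the sketch below (Python/boto3) uses a placeholder API ID, stage, and limits.

import boto3

apigateway = boto3.client("apigateway")

# Throttle the attached API stage using the token bucket algorithm:
# rateLimit is the steady-state rate, burstLimit is the bucket size.
apigateway.create_usage_plan(
    name="partner-plan",                                   # placeholder plan name
    throttle={"rateLimit": 100.0, "burstLimit": 200},      # placeholder limits
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API ID and stage
)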


100. Kinesis

Question 1:
A company provides a REST-based interface to an application that allows a partner company to send data in near-real time. The application then processes the data that is received and stores it for later analysis. The application runs on Amazon EC2 instances.
The partner company has received many 503 Service Unavailable errors when sending data to the application; when spikes in data volume occur, the compute capacity reaches its limits and is unable to process requests.
Which design should a Solutions Architect implement to improve scalability?
Options:
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions
B. Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company
C. Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time
D. Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue
Answer: A
Explanation
Amazon Kinesis enables you to ingest, buffer, and process streaming data in real-time. Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latencies. This is an ideal solution for data ingestion.
To ensure the compute layer can scale to process increasing workloads, the EC2 instances should be replaced by AWS Lambda functions. Lambda can scale seamlessly by running multiple executions in parallel.
CORRECT: “Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions” is the correct answer.
INCORRECT: “Use Amazon API Gateway in front of the existing application. Create a usage plan with a quota limit for the partner company” is incorrect. A usage plan will limit the amount of data that is received and cause more errors to be received by the partner company.
INCORRECT: “Use Amazon SQS to ingest the data. Configure the EC2 instances to process messages from the SQS queue” is incorrect. Amazon Kinesis Data Streams should be used for near-real time or real-time use cases instead of Amazon SQS.
INCORRECT: “Use Amazon SNS to ingest the data and trigger AWS Lambda functions to process the data in near-real time” is incorrect. SNS is not a near-real time solution for data ingestion. SNS is used for sending notifications.

Question 2:
A manufacturing company captures data from machines running at customer sites. Currently, thousands of machines send data every 5 minutes, and this is expected to grow to hundreds of thousands of machines in the near future. The data is logged with the intent to be analyzed in the future as needed.
What is the SIMPLEST method to store this streaming data at scale?
Options:
A. Create an Amazon SQS queue, and have the machines write to the queue
B. Create an Amazon EC2 instance farm behind an ELB to store the data in Amazon EBS Cold HDD volumes
C. Create an Auto Scaling Group of Amazon EC2 instances behind ELBs to write data into Amazon RDS
D. Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3
Answer: D
Explanation
Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It captures, transforms, and loads streaming data and you can deliver the data to “destinations” including Amazon S3 buckets for later analysis
CORRECT: “Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3” is the correct answer.
INCORRECT: “Create an Amazon EC2 instance farm behind an ELB to store the data in Amazon EBS Cold HDD volumes” is incorrect. Storing the data in EBS would be expensive, and as EBS volumes cannot be shared by multiple instances you would have a bottleneck of a single EC2 instance writing the data.
INCORRECT: “Create an Amazon SQS queue, and have the machines write to the queue” is incorrect. Using an SQS queue to store the data is not possible as the data needs to be stored long-term and SQS queues have a maximum retention time of 14 days.
INCORRECT: “Create an Auto Scaling Group of Amazon EC2 instances behind ELBs to write data into Amazon RDS” is incorrect. Writing data into RDS via a series of EC2 instances and a load balancer is more complex and more expensive. RDS is also not an ideal data store for this data.

Question 3:
A retail company with many stores and warehouses is implementing IoT sensors to gather monitoring data from devices in each location. The data will be sent to AWS in real time. A solutions architect must provide a solution for ensuring events are received in order for each device and ensure that data is saved for future processing.
Which solution would be MOST efficient?
Options:
A. Use an Amazon SQS standard queue for real-time events with one queue for each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3
B. Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS
C. Use an Amazon SQS FIFO queue for real-time events with one queue for each device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS
D. Use Amazon Kinesis Data Streams for real-time events with a partition key for each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3
Answer: D
Explanation
Amazon Kinesis Data Streams collect and process data in real time. A Kinesis data stream is a set of shards. Each shard has a sequence of data records. Each data record has a sequence number that is assigned by Kinesis Data Streams. A shard is a uniquely identified sequence of data records in a stream.
A partition key is used to group data by shard within a stream. Kinesis Data Streams segregates the data records belonging to a stream into multiple shards. It uses the partition key that is associated with each data record to determine which shard a given data record belongs to.
CORRECT: “Use Amazon Kinesis Data Streams for real-time events with a partition key for each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3” is the correct answer.
INCORRECT: “Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS” is incorrect as you cannot save data to EBS from Kinesis.
INCORRECT: “Use an Amazon SQS FIFO queue for real-time events with one queue for each device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS” is incorrect as SQS is not the most efficient service for streaming, real time data.
INCORRECT: “Use an Amazon SQS standard queue for real-time events with one queue for each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3” is incorrect as SQS is not the most efficient service for streaming, real time data.
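A minimal producer sketch (Python/boto3) that uses the device ID as the partition key, with a placeholder stream name, could look like this:

import json
import boto3

kinesis = boto3.client("kinesis")

def publish_reading(device_id, reading):
    # Using the device ID as the partition key sends all of a device's events
    # to the same shard, so they are consumed in order per device.
    kinesis.put_record(
        StreamName="iot-sensor-stream",  # placeholder stream name
        PartitionKey=device_id,
        Data=json.dumps(reading).encode("utf-8"),
    )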

Question 4:
A geological research agency maintains the seismological data for the last 100 years. The data has a velocity of 1GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for earthquakes.
What AWS services would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?
Options:
A. Ingest the data in a Spark Streaming Cluster on EMR and use Spark Streaming transformations before writing to S3
B. Ingest the data in AWS Glue job and use Spark transformations before writing to S3
C. Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
D. Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform the data before writing to S3
Answer: C
Explanation
Correct option:
Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
The correct choice is to ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming data before the output is dumped on S3. This way you only store a sliced version of the data with only the relevant data attributes required for your model. Also it should be noted that this solution is entirely serverless and requires no infrastructure maintenance.
Incorrect options:
Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform the data before writing to S3 – Amazon Kinesis Data Analytics is the easiest way to analyze streaming data in real-time. Kinesis Data Analytics enables you to easily and quickly build queries and sophisticated streaming applications in three simple steps: setup your streaming data sources, write your queries or streaming applications, and set up your destination for processed data. Kinesis Data Analytics cannot directly ingest data from the source as it ingests data either from Kinesis Data Streams or Kinesis Data Firehose, so this option is ruled out.
Ingest the data in AWS Glue job and use Spark transformations before writing to S3 – AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing and it’s not the right fit for a near real-time data processing use-case.
Ingest the data in a Spark Streaming Cluster on EMR and use Spark Streaming transformations before writing to S3 – Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an open-source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances. Using an EMR cluster would imply managing the underlying infrastructure, so it is ruled out because the correct solution for the given use-case should require the least amount of infrastructure maintenance.
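For reference, a Firehose transformation Lambda receives a batch of base64-encoded records and must return each record with the same recordId, a result status, and base64-encoded data. The sketch below (Python) keeps only two hypothetical attributes and is an illustration, not the actual filtering logic.

import base64
import json

def handler(event, context):
    # Firehose passes a batch of base64-encoded records; each must be returned
    # with the same recordId, a result status, and base64-encoded data.
    output = []
    for record in event["records"]:
        raw = json.loads(base64.b64decode(record["data"]))
        slimmed = {"station": raw.get("station"), "magnitude": raw.get("magnitude")}  # hypothetical attributes
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(slimmed).encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}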

Question 5:
A gaming company is developing a mobile game that streams score updates to a backend processor and then publishes results on a leaderboard. The company has hired you as an AWS Certified Solutions Architect Associate to design a solution that can handle major traffic spikes, process the mobile game updates in the order of receipt, and store the processed updates in a highly available database. The company wants to minimize the management overhead required to maintain the solution.
Which of the following will you recommend to meet these requirements?
Options:
A. Push score updates to Kinesis Data Streams which uses a fleet of EC2 instances (with Auto Scaling) to process the updates in Kinesis Data Streams and then store these processed updates in DynamoDB
B. Push score updates to an SNS topic, subscribe a Lambda function to this SNS topic to process the updates and then store these processed updates in a SQL database running on Amazon EC2
C. Push score updates to Kinesis Data Streams which uses a Lambda function to process these updates and then store these processed updates in DynamoDB
D. Push score updates to an SQS queue which uses a fleet of EC2 instances (with Auto Scaling) to process these updates in the SQS queue and then store these processed updates in an RDS MySQL database
Answer: C
Explanation
Correct option:
Push score updates to Kinesis Data Streams which uses a Lambda function to process these updates and then store these processed updates in DynamoDB
To help ingest real-time data or streaming data at large scales, you can use Amazon Kinesis Data Streams (KDS). KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources. The data collected is available in milliseconds, enabling real-time analytics. KDS provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications.
Lambda integrates natively with Kinesis Data Streams. The polling, checkpointing, and error handling complexities are abstracted when you use this native integration. The processed data can then be configured to be saved in DynamoDB.
Incorrect options:
Push score updates to an SQS queue which uses a fleet of EC2 instances (with Auto Scaling) to process these updates in the SQS queue and then store these processed updates in an RDS MySQL database
Push score updates to Kinesis Data Streams which uses a fleet of EC2 instances (with Auto Scaling) to process the updates in Kinesis Data Streams and then store these processed updates in DynamoDB
Push score updates to an SNS topic, subscribe a Lambda function to this SNS topic to process the updates, and then store these processed updates in a SQL database running on Amazon EC2
These three options use EC2 instances as part of the solution architecture. The use-case seeks to minimize the management overhead required to maintain the solution. However, EC2 instances involve several maintenance activities such as managing the guest operating system and software deployed to the guest operating system, including updates and security patches, etc. Hence these options are incorrect.

Question 6:
A telecom company operates thousands of hardware devices like switches, routers, cables, etc. The real-time status data for these devices must be fed into a communications application for notifications. Simultaneously, another analytics application needs to read the same real-time status data and analyze all the connecting lines that may go down because of any device failures.
As a Solutions Architect, which of the following solutions would you suggest, so that both the applications can consume the real-time status data concurrently?
Options:
A. Amazon Kinesis Data Streams
B. Amazon Simple Notification Service (SNS)
C. Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS)
D. Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES)
Answer: A
Explanation
Correct option:
Amazon Kinesis Data Streams – Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
AWS recommends Amazon Kinesis Data Streams for use cases with requirements that are similar to the following:
Routing related records to the same record processor (as in streaming MapReduce). For example, counting and aggregation are simpler when all records for a given key are routed to the same record processor.
Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.
Ability for multiple applications to consume the same stream concurrently. For example, you have one application that updates a real-time dashboard and another that archives data to Amazon Redshift. You want both applications to consume data from the same stream concurrently and independently.
Ability to consume records in the same order a few hours later. For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis Data Streams stores data for up to 7 days, you can run the audit application up to 7 days behind the billing application.
Incorrect options:
Amazon Simple Notification Service (SNS) – Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. SNS is a notification service and cannot be used for real-time processing of data.
Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) – Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. Since multiple applications need to consume the same data stream concurrently, Kinesis is a better choice when compared to the combination of SQS with SNS.
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES) – As discussed above, Kinesis is a better option for this use case in comparison to SQS. Also, SES does not fit this use-case. Hence, this option is an incorrect answer.

Question 7:
An IT company is working on client engagement to build a real-time data analytics tool for the Internet of Things (IoT) data. The IoT data is funneled into Kinesis Data Streams which further acts as the source of a delivery stream for Kinesis Firehose. The engineering team has now configured a Kinesis Agent to send IoT data from another set of devices to the same Firehose delivery stream. They noticed that data is not reaching Firehose as expected.
As a solutions architect, which of the following options would you attribute as the MOST plausible root cause behind this issue?
A. Kinesis Firehose delivery stream has reached its limit and needs to be scaled manually
B. The data sent by Kinesis Agent is lost because of a configuration error
C. Kinesis Agent can only write to Kinesis Data Streams, not to Kinesis Firehose
D. Kinesis Agent cannot write to a Kinesis Firehose for which the delivery stream source is already set as Kinesis Data Streams
Answer: D
Explanation
Correct option:
Kinesis Agent cannot write to a Kinesis Firehose for which the delivery stream source is already set as Kinesis Data Streams
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
When a Kinesis data stream is configured as the source of a Firehose delivery stream, Firehose’s PutRecord and PutRecordBatch operations are disabled and Kinesis Agent cannot write to Firehose delivery stream directly. Data needs to be added to the Kinesis data stream through the Kinesis Data Streams PutRecord and PutRecords operations instead. Therefore, this option is correct.
Incorrect options:
Kinesis Agent can only write to Kinesis Data Streams, not to Kinesis Firehose – Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send data to Kinesis Data Streams or Kinesis Firehose. So this option is incorrect.
Kinesis Firehose delivery stream has reached its limit and needs to be scaled manually – Kinesis Firehose is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. Therefore this option is not correct.
The data sent by Kinesis Agent is lost because of a configuration error – This is a made-up option and has been added as a distractor.


101. Web Identity Federation – Cognito

Question 1:
You are building a mobile application. The security requirement for this application is that each user access it with MFA authentication.
Choose a method that meets this requirement.
Options:
A. Set up IAM policies for customer accounts to enable MFA authentication
B. Implement MFA functionality by integrating API Gateway, Lambda functions and SNS
C. Implement mobile authentication using AWS Cognito
D. Implement MFA authentication function by CloudHSM
Answer: C
Explanation
Option C is the correct answer. You can use Amazon Cognito to implement the authentication function of your application. With Amazon Cognito, you can add multi-factor authentication and encryption of data at rest and in transit to your mobile application. You can also implement sign-in capabilities using social identity providers such as Google, Facebook, and Amazon, and enterprise identity providers such as Microsoft Active Directory via SAML.
Option A is incorrect. IAM policies control access to AWS resources for IAM users and roles; they cannot be used to manage your application's end-user accounts or enforce MFA for them.
Option B is incorrect. Building MFA yourself with API Gateway, Lambda functions, and SNS would require substantial custom development, whereas Cognito provides this capability out of the box.
Option D is incorrect. CloudHSM is a cloud-based hardware security module (HSM) that makes it easy to generate and use your own encryption keys on AWS. It is not an MFA or user authentication service.
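To illustrate how the correct option (C) could be wired up, here is a minimal sketch (the user pool ID is a placeholder) that enforces MFA on a Cognito user pool through the Cognito Identity Provider API in boto3:

    import boto3

    cognito = boto3.client("cognito-idp")

    # Require MFA for every user in the pool and enable software-token (TOTP) MFA.
    cognito.set_user_pool_mfa_config(
        UserPoolId="us-east-1_EXAMPLE",          # placeholder user pool ID
        SoftwareTokenMfaConfiguration={"Enabled": True},
        MfaConfiguration="ON",
    )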

Question 12:
You have been hired as a Solutions Architect to advise a company on the various authentication/authorization mechanisms that AWS offers to authorize an API call within the API Gateway. The company would prefer a solution that offers built-in user management.
Which of the following solutions would you suggest as the best fit for the given use-case?
• Use Amazon Cognito User Pools (Correct)
• Use Amazon Cognito Identity Pools
• Use AWS_IAM authorization
• Use API Gateway Lambda authorizer
Explanation
Correct option:
Use Amazon Cognito User Pools – A user pool is a user directory in Amazon Cognito. You can leverage Amazon Cognito User Pools to either provide built-in user management or integrate with external identity providers, such as Facebook, Twitter, Google+, and Amazon. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK).
User pools provide:
1. Sign-up and sign-in services.
2. A built-in, customizable web UI to sign in users.
3. Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers from your user pool.
4. User directory management and user profiles.
5. Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
6. Customized workflows and user migration through AWS Lambda triggers.
After creating an Amazon Cognito user pool, in API Gateway, you must then create a COGNITO_USER_POOLS authorizer that uses the user pool.
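A minimal sketch of that last step (the REST API ID and user pool ARN are placeholders) using boto3:

    import boto3

    apigw = boto3.client("apigateway")

    # Create a COGNITO_USER_POOLS authorizer that validates the token sent in
    # the Authorization header against the user pool.
    apigw.create_authorizer(
        restApiId="a1b2c3d4e5",                  # placeholder REST API ID
        name="cognito-user-pool-authorizer",
        type="COGNITO_USER_POOLS",
        providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
        identitySource="method.request.header.Authorization",
    )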
Incorrect options:
Use AWS_IAM authorization – For consumers who currently are located within your AWS environment or have the means to retrieve AWS Identity and Access Management (IAM) temporary credentials to access your environment, you can use AWS_IAM authorization and add least-privileged permissions to the respective IAM role to securely invoke your API. API Gateway API Keys is not a security mechanism and should not be used for authorization unless it’s a public API. It should be used primarily to track a consumer’s usage across your API.
Use API Gateway Lambda authorizer – If you have an existing Identity Provider (IdP), you can use an API Gateway Lambda authorizer to invoke a Lambda function to authenticate/validate a given user against your IdP. You can use a Lambda authorizer for custom validation logic based on identity metadata.
A Lambda authorizer can send additional information derived from a bearer token or request context values to your backend service. For example, the authorizer can return a map containing user IDs, user names, and scope. By using Lambda authorizers, your backend does not need to map authorization tokens to user-centric data, allowing you to limit the exposure of such information to just the authorization function.
When using Lambda authorizers, AWS strictly advises against passing credentials or any sort of sensitive data via query string parameters or headers, so this is not as secure as using Cognito User Pools.
In addition, both these options do not offer built-in user management.
Use Amazon Cognito Identity Pools – The two main components of Amazon Cognito are user pools and identity pools. Identity pools provide AWS credentials to grant your users access to other AWS services. To enable users in your user pool to access AWS resources, you can configure an identity pool to exchange user pool tokens for AWS credentials. So, identity pools aren’t an authentication mechanism in themselves and hence aren’t a choice for this use case.

Question 31:
A social media application is hosted on an EC2 server fleet running behind an Application Load Balancer. The application traffic is fronted by a CloudFront distribution. The engineering team wants to decouple the user authentication process for the application, so that the application servers can just focus on the business logic.
As a Solutions Architect, which of the following solutions would you recommend to the development team so that it requires minimal development effort?
A. Use Cognito Authentication via Cognito Identity Pools for your CloudFront distribution
B. Use Cognito Authentication via Cognito User Pools for your CloudFront distribution
C. Use Cognito Authentication via Cognito Identity Pools for your Application Load Balancer
D. Use Cognito Authentication via Cognito User Pools for your Application Load Balancer
Answer: D
Explanation
Correct option:
Use Cognito Authentication via Cognito User Pools for your Application Load Balancer
Application Load Balancer can be used to securely authenticate users for accessing your applications. This enables you to offload the work of authenticating users to your load balancer so that your applications can focus on their business logic. Through the user pools supported by Amazon Cognito, you can authenticate users via well-known social IdPs such as Amazon, Facebook, or Google, or via corporate identities using SAML, LDAP, or Microsoft AD. You configure user authentication by creating an authenticate action for one or more listener rules.
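A rough sketch of that listener configuration (all ARNs, the client ID, and the domain are placeholders) using boto3:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Authenticate users against a Cognito user pool before forwarding
    # requests to the application target group.
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
        DefaultActions=[
            {
                "Type": "authenticate-cognito",
                "Order": 1,
                "AuthenticateCognitoConfig": {
                    "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE",
                    "UserPoolClientId": "example-app-client-id",
                    "UserPoolDomain": "example-auth-domain",
                },
            },
            {
                "Type": "forward",
                "Order": 2,
                "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/123abc",
            },
        ],
    )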
Incorrect options:
Use Cognito Authentication via Cognito Identity Pools for your Application Load Balancer – There is no such thing as using Cognito Authentication via Cognito Identity Pools for managing user authentication for the application. Application-specific user authentication can be provided via Cognito User Pools. Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token.
Use Cognito Authentication via Cognito User Pools for your CloudFront distribution – You cannot directly integrate Cognito User Pools with CloudFront distribution as you have to create a separate Lambda@Edge function to accomplish the authentication via Cognito User Pools. This involves additional development effort, so this option is not the best fit for the given use-case.
Use Cognito Authentication via Cognito Identity Pools for your CloudFront distribution – You cannot use Cognito Identity Pools for managing user authentication, so this option is not correct.


102. Reducing Security Threats


103. Key Management Service (KMS)

Question 1:
A US-based healthcare startup is building an interactive diagnostic tool for COVID-19 related assessments. The users would be required to capture their personal health records via this tool. As this is sensitive health information, the backup of the user data must be kept encrypted in S3. The startup does not want to provide its own encryption keys but still wants to maintain an audit trail of when an encryption key was used and by whom.
Which of the following is the BEST solution for this use-case?
Options:
A. Use SSE-KMS to encrypt the user data on S3
B. Use SSE-S3 to encrypt the user data on S3
C. Use SSE-C to encrypt the user data on S3
D. Use client-side encryption with client provided keys and then upload the encrypted user data to S3
Answer: A
Explanation
Correct option:
Use SSE-KMS to encrypt the user data on S3
AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify a customer-managed CMK that you have already created. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. Therefore SSE-KMS is the correct solution for this use-case.
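A minimal upload sketch (the bucket, key, and KMS key ARN are placeholders) showing SSE-KMS with a customer-managed CMK via boto3:

    import boto3

    s3 = boto3.client("s3")

    # Server-side encrypt the object with a customer-managed KMS key;
    # each use of the key is recorded in CloudTrail for auditing.
    s3.put_object(
        Bucket="health-records-backup",          # placeholder bucket
        Key="assessments/user-123.json",
        Body=b'{"assessment": "sample"}',
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    )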
Incorrect options:
Use SSE-S3 to encrypt the user data on S3 – When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. However, this option does not provide an audit trail showing when the encryption keys were used and by whom.
Use SSE-C to encrypt the user data on S3 – With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys and Amazon S3 manages the encryption as it writes to disks, and the decryption when you access your objects. However, this option also does not provide an audit trail of key usage.
Use client-side encryption with client provided keys and then upload the encrypted user data to S3 – Using client-side encryption is ruled out as the startup does not want to provide the encryption keys.

Question 14:
A financial services company has developed its flagship application on AWS Cloud with data security requirements such that the encryption key must be stored in a custom application running on-premises. The company wants to offload the data storage as well as the encryption process to Amazon S3 but continue to use the existing encryption key.
Which of the following S3 encryption options allows the company to leverage Amazon S3 for storing data with given constraints?
• Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
• Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
• Client-Side Encryption with data encryption is done on the client-side before sending it to Amazon S3
• Server-Side Encryption with Customer-Provided Keys (SSE-C)(Correct)
Explanation
Correct option:
Server-Side Encryption with Customer-Provided Keys (SSE-C)
You have the following options for protecting data at rest in Amazon S3:
Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.
Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
For the given use-case, the company wants to manage the encryption keys via its custom application and let S3 manage the encryption, therefore you must use Server-Side Encryption with Customer-Provided Keys (SSE-C).
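A minimal sketch (the bucket and key names are placeholders) of an SSE-C upload with boto3, where the 256-bit key would come from the company's own on-premises key management application:

    import os
    import boto3

    s3 = boto3.client("s3")

    # In practice this key would be retrieved from the on-premises key
    # management application; os.urandom merely stands in for it here.
    customer_key = os.urandom(32)

    # S3 uses the supplied key to encrypt the object and does not store the key;
    # boto3 adds the required key MD5 header automatically.
    s3.put_object(
        Bucket="flagship-app-data",              # placeholder bucket
        Key="records/account-42.json",
        Body=b'{"balance": 100}',
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )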
Incorrect options:
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. So this option is incorrect.
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS) – Server-Side Encryption with Customer Master Keys (CMKs) stored in AWS Key Management Service (SSE-KMS) is similar to SSE-S3. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. Additionally, you can create and manage customer-managed CMKs or use AWS managed CMKs that are unique to you, your service, and your Region.
Client-Side Encryption with data encryption is done on the client-side before sending it to Amazon S3 – You can encrypt the data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.


104. Cloud HSM
105. Parameter Store


106. Lambda

Question 1:
Currently, as a Solutions Architect, you are designing the architecture of your application using AWS services. This application is virtually stateless and you want to build it cost-optimally. You also want it to scale automatically with processing demand.
Which AWS service should you choose to meet this requirement?
Options:
A. Lambda
B. DynamoDB
C. Kinesis
D. EC2
Answer: A
Explanation
A stateless application does not retain client session information between requests; given the same input, it returns the same response to every end user and treats each request as if it were the client's first session. Lambda functions can run this kind of stateless processing cost-optimally, so option A is the correct answer.
Option B is incorrect. DynamoDB is a NoSQL database used for fast, simple data access. It can support serverless processing alongside Lambda functions, but it is a data store rather than a compute service for stateless applications; in fact, DynamoDB is often used to store session data.
Option C is incorrect. Kinesis is used for ingesting, processing, and analyzing streaming data, not for running stateless application code.
Option D is incorrect. You can build stateless applications on EC2 instances, but this is not cost-optimal compared to a serverless design using Lambda functions.

Question 2:
A solutions architect is designing a new service that will use an Amazon API Gateway API on the frontend. The service will need to persist data in a backend database using key-value requests. Initially, the data requirements will be around 1 GB and future growth is unknown. Requests can range from 0 to over 800 requests per second.
Which combination of AWS services would meet these requirements? (Select TWO.)
Options:
A. Fargate
B. Lambda
C. RDS
D. EC2 Auto Scaling
E. Dynamo DB
Answer: B & E
Explanation
In this case AWS Lambda can perform the computation and store the data in an Amazon DynamoDB table. Lambda can scale concurrent executions to meet demand easily and DynamoDB is built for key-value data storage requirements and is also serverless and easily scalable. This is therefore a cost effective solution for unpredictable workloads.
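As a minimal sketch of the compute/storage pair (the table and field names are hypothetical): an API Gateway proxy integration invokes this Lambda handler, which writes the key-value payload to DynamoDB:

    import json
    import boto3

    # Hypothetical on-demand DynamoDB table keyed on "id".
    table = boto3.resource("dynamodb").Table("ItemsTable")

    def lambda_handler(event, context):
        # API Gateway (proxy integration) passes the request body as a JSON string.
        item = json.loads(event["body"])
        table.put_item(Item=item)                # key-value write; the table scales on demand
        return {"statusCode": 200, "body": json.dumps({"stored": item["id"]})}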
CORRECT: “AWS Lambda” is a correct answer.
CORRECT: “Amazon DynamoDB” is also a correct answer.
INCORRECT: “AWS Fargate” is incorrect as containers run constantly and therefore incur costs even when no requests are being made.
INCORRECT: “Amazon EC2 Auto Scaling” is incorrect as this uses EC2 instances which will incur costs even when no requests are being made.
INCORRECT: “Amazon RDS” is incorrect as this is a relational database, not a NoSQL database. It is therefore not suitable for key-value data storage requirements.

Question 3:
An IT Company wants to move all the compute components of its AWS Cloud infrastructure into serverless architecture. Their development stack comprises a mix of backend programming languages, and the company would like to know which of those languages the AWS Lambda runtimes support.
Can you identify the programming languages supported by the Lambda runtime? (Select two)
Options:
A. C
B. C#/ .NET
C. PHP
D. Go
E. R
Answer: B & D
Explanation
Correct options:
C#/.NET
Go
A runtime is a version of a programming language or framework that you can use to write Lambda functions. AWS Lambda supports runtimes for the following languages:
C#/.NET
Go
Java
Node.js
Python
Ruby
Incorrect options:
C
PHP
R
Given the list of supported runtimes above, these three options are incorrect.


107. Build a Serverless Webpage with API Gateway and Lambda
108. Build an Alexa Skill
109. Serverless Application Model (SAM)


110. Elastic Container Service (ECS)

Question 1:
Your company has decided to use Amazon ECS to set up a Docker container-based CI/CD environment on AWS. You are in charge of building this environment as a solutions architect. The requirement from your boss is to spend minimal effort on configuration when launching containers.
Choose how to set up ECS to achieve this requirement.
Options:
A. Select the auto scaling launch type in ECS
B. Select the Fargate launch type in ECS
C. Select the EC2 launch type in ECS
D. Select the Chef launch type in ECS
Answer: B
Explanation
The launch type of Amazon ECS determines the type of infrastructure on which tasks and services are hosted. You can choose between two launch types: Fargate and EC2.
The Fargate launch type allows you to run containerized applications without having to provision and manage backend infrastructure. Simply register the task definition and Fargate will launch the container. This eliminates the tedious instance setup otherwise needed to launch a container. Therefore, option B is the correct answer.
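A minimal sketch of launching a task with the Fargate launch type (the cluster, task definition, and subnet IDs are placeholders); no EC2 instances need to be provisioned first:

    import boto3

    ecs = boto3.client("ecs")

    # Run a registered task definition on Fargate; AWS provisions the
    # underlying compute, so only networking details are supplied.
    ecs.run_task(
        cluster="ci-cd-cluster",                 # placeholder cluster
        taskDefinition="build-task:1",           # placeholder task definition
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0abc1234def567890"],
                "assignPublicIp": "ENABLED",
            }
        },
    )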
Option C is incorrect. The EC2 launch type allows you to run containerized applications on a cluster of Amazon EC2 instances that you manage. Because it requires you to set up and manage EC2 instances, it does not meet the requirement for minimal configuration.
Options A and D are incorrect because no such launch types exist.

Question 2:
An application running on an Amazon ECS container instance using the EC2 launch type needs permissions to write data to Amazon DynamoDB.
How can you assign these permissions only to the specific ECS task that is running the application?
Options:
A. Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB
B. Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter
C. Use a security group to allow outbound connections to DynamoDB and assign it to the container instance
D. Create an IAM policy with permissions to DynamoDB and attach it to the container instance
Answer: B
Explanation
To specify permissions for a specific task on Amazon ECS you should use IAM Roles for Tasks. The permissions policy can be applied to tasks when creating the task definition, or by using an IAM task role override using the AWS CLI or SDKs. The taskRoleArn parameter of the task definition specifies the IAM role that the task assumes.
CORRECT: “Create an IAM policy with permissions to DynamoDB and assign it to a task using the taskRoleArn parameter” is the correct answer.
INCORRECT: “Create an IAM policy with permissions to DynamoDB and attach it to the container instance” is incorrect. You should not apply the permissions to the container instance as they will then apply to all tasks running on the instance as well as the instance itself.
INCORRECT: “Use a security group to allow outbound connections to DynamoDB and assign it to the container instance” is incorrect. Though you will need a security group to allow outbound connections to DynamoDB, the question is asking how to assign permissions to write data to DynamoDB and a security group cannot provide those permissions.
INCORRECT: “Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB” is incorrect. The AmazonECSTaskExecutionRolePolicy policy is attached to the Task Execution IAM Role, which is used by the container agent to pull container images, write log files, etc.
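As an illustration only (the role ARNs, image, and names are placeholders), the task role is attached when registering the task definition, which keeps the DynamoDB permissions scoped to tasks from that definition:

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="orders-app",
        # Role assumed by the task itself: grants DynamoDB write access.
        taskRoleArn="arn:aws:iam::123456789012:role/OrdersAppDynamoDbWriteRole",
        # Role used by the container agent to pull images and push logs.
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions=[
            {
                "name": "orders",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
                "memory": 512,
            }
        ],
    )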

Question 3:
A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type.
Which of the following is correct regarding the pricing for these two services?
Options:
A. Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests
B. Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used
C. ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
D. Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour
Answer: C
Explanation
Correct option:
ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. ECS allows you to easily run, scale, and secure Docker container applications on AWS.
With the Fargate launch type, you pay for the amount of vCPU and memory resources that your containerized application requests. vCPU and memory resources are calculated from the time your container images are pulled until the Amazon ECS task terminates, rounded up to the nearest second. With the EC2 launch type, there is no additional charge for ECS itself; you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.
Incorrect options:
Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests
Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used
As mentioned above – with the Fargate launch type, you pay for the amount of vCPU and memory resources. With EC2 launch type, you pay for AWS resources (e.g. EC2 instances or EBS volumes). Hence both these options are incorrect.
Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour
This is a made-up option and has been added as a distractor.


111. Miscellaneous

Question 1:
Your company has a development setup in which a production environment and a test environment are prepared separately on AWS. As a Solutions Architect, you are working on a stack-based deployment model of AWS resources, with separate layers needed for the application server and the database.
Choose the appropriate course of action that meets this requirement.
Options:
A. Use OpsWorks to define a stack for each layer of your application
B. Use CloudFormation to define a stack for each layer of your application
C. Use CodePipeline to define a stack for each layer of your application
D. Use Elastic Beanstalk to define a stack for each layer of your application
Answer: A
Explanation:
Option A is the correct answer. AWS OpsWorks Stacks allows you to manage your applications and servers on AWS and on-premises. OpsWorks Stacks lets you model your application as a stack containing various layers such as load balancers, databases, and application servers.
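A rough sketch of that layered model (the names and role ARNs are placeholders) using the OpsWorks Stacks API via boto3:

    import boto3

    opsworks = boto3.client("opsworks")

    stack = opsworks.create_stack(
        Name="production",                       # placeholder stack name
        Region="us-east-1",
        ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
        DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
    )

    # Separate layers for the application server tier and the database tier.
    opsworks.create_layer(StackId=stack["StackId"], Type="php-app",
                          Name="App Server", Shortname="app")
    opsworks.create_layer(StackId=stack["StackId"], Type="db-master",
                          Name="Database", Shortname="db")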
Option B is incorrect. CloudFormation can deploy resources as stacks, but modeling an application as distinct layers (application server, database, and so on) is handled more directly by OpsWorks.
Option C is incorrect. CodePipeline is a fully managed continuous delivery service that automates releases for fast and efficient updates of applications and infrastructure. CodePipeline cannot define application layers.
Option D is incorrect. Elastic Beanstalk is a service for deploying and versioning web applications and services developed using Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker on servers such as Apache. Elastic Beanstalk does not model applications as layered stacks.

Question 2:
Company-A hosts EC2 instances in two AZs in a single region, and the web application also uses an ELB and Auto Scaling. The application needs database tier synchronization. If one AZ becomes unavailable, Auto Scaling will take time to launch a new instance in the remaining AZ. You have been asked to make appropriate adjustments so that the application remains fully available even while Auto Scaling is spinning up replacement instances.
Choose the architectural enhancements you need to meet these requirements.
Options:
A. Deploy EC2 instances in 3 AZs with each AZ set to handle up to 50% peak load capacity
B. Deploy EC2 instances in 3 AZs with each AZ set to handle up to 40% peak load capacity
C. Deploy EC2 instances in 2 AZs, across 2 regions, with each AZ set to handle up to 50% peak load capacity
D. Deploy EC2 instances in 2 AZs with each AZ set to handle up to 50% peak load capacity
Answer: A
Explanation:
In this scenario, you must maintain 100% availability: the application has to keep serving peak load even if one AZ goes down. Therefore, you need a configuration that still provides 100% of peak capacity when an AZ becomes unavailable. If you deploy your EC2 instances across 3 AZs, each sized to handle 50% of peak load, total capacity is 150%, so losing one AZ still leaves 100% of peak capacity.
Therefore, option A is the correct answer.
Option B is incorrect because, with 3 AZs at 40% each, losing one AZ leaves only 80% of peak capacity instead of the required 100%.
This question requires that capacity to handle peak load never falls below 100%, even if one AZ fails. Although Auto Scaling can eventually restore capacity, there would otherwise be a window during which peak load could not be handled. To maintain 100% capacity throughout, the surviving AZs must already provide enough headroom to absorb the loss of an AZ until Auto Scaling restores the missing capacity.
Options C and D are incorrect: with only two AZs at 50% each, losing one AZ leaves just 50% of peak capacity, so 100% availability cannot be maintained.

Question 3:
Your customer wants to import an existing virtual machine into the AWS cloud. As a Solutions Architect, you have decided to consider a migration method.
Which service should you use?
Options:
A. AWS Import/Export
B. VM Import/Export
C. Direct Connect
D. VPC Peering
Answer: B
Explanation:
VM Import/Export allows you to import virtual machine (VM) images from your existing virtualized environment into Amazon EC2. You can use this service to migrate applications and workloads to Amazon EC2, copy your VM image catalog to Amazon EC2, and create VM image repositories for backup and disaster recovery.
The other services are incorrect because they cannot be used to import existing virtual machines into the AWS cloud.
Option A is incorrect. AWS Import/Export is a service you can use to transfer large amounts of data from physical storage devices to AWS. It is not used to import existing virtual machines into the AWS cloud.
Option C is incorrect. Direct Connect is a dedicated network connection between your on-premises environment and your VPC. It is not used to import existing virtual machines into the AWS cloud.
Option D is incorrect. VPC peering connects two VPCs. It is not used to import existing virtual machines into the AWS cloud.

Question 4:
As a Solutions Architect, you plan to move your infrastructure to the AWS cloud. You want to take advantage of the Chef recipes you are currently using to manage the configuration of your infrastructure.
Which AWS service is best for this requirement?
Options:
A. Elastic Beanstalk
B. OpsWorks
C. CloudFormation
D. ECS
Answer: B
Explanation
Option B is the correct answer. With AWS OpsWorks, you can leverage Chef to deploy your infrastructure on AWS. AWS OpsWorks is a configuration management service that uses Puppet or Chef to set up and operate applications in a cloud environment. OpsWorks Stacks and OpsWorks for Chef Automate allow you to use Chef cookbooks and solutions for configuration management.
Option A is incorrect. Elastic Beanstalk is used for deploying web applications and does not use Chef.
Option C is incorrect. CloudFormation automates AWS resource deployment with JSON/YAML templates. It does not use Chef.
Option D is incorrect. ECS is a container orchestration service for Docker containers. It does not use Chef.

Question 5:
As a Solutions Architect, you develop and test your applications on AWS. You want to provision the test environment quickly and make it easy to remove.
Choose the best AWS service settings to meet this requirement.
Options:
A. Setting CodePipeline enables quick configuration and deletion
B. Use CloudFormation template for creating a test environment
C. Automate environment construction using AMI and Bash script of EC2 instance
D. Setting ECR allows for quick configuration and deletion
Answer: B
Explanation
You can use CloudFormation templates to provision AWS resources with consistent settings every time. This makes it easy to create, and just as easy to delete, an environment such as a test environment. Option B is the correct answer.
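For illustration (the template file and stack name are hypothetical), a test environment can be created from a template and removed again with two calls:

    import boto3

    cfn = boto3.client("cloudformation")

    # Provision the test environment from a versioned template.
    with open("test-env.yaml") as template:
        cfn.create_stack(
            StackName="test-env",
            TemplateBody=template.read(),
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )

    # ... run tests against the environment ...

    # Remove every resource in the environment with a single call.
    cfn.delete_stack(StackName="test-env")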
Option A is incorrect. CodePipeline automates the release process by chaining services like CodeDeploy and ECS into a pipeline. CodePipeline still relies on other services such as CloudFormation to set up the infrastructure environment.
Option C is incorrect. AMIs and Bash scripts are limited to EC2 instances and cannot automate construction of the overall infrastructure.
Option D is incorrect. ECR is a registry service for storing Docker container images; it does not provision environments.

Question 6:
An AWS Organization has an OU with multiple member accounts in it. The company needs to restrict the ability to launch only specific Amazon EC2 instance types. How can this policy be applied across the accounts with the least effort?
Options:
A. Use AWS Resource Access Manager to control which launch types can be used
B. Create an SCP with an allow rule that allows launching the specific instance types
C. Create an IAM policy to deny launching all but the specific instance types
D. Create an SCP with a deny rule that denies all but the specific instance types
Answer: D
Explanation
To apply the restrictions across multiple member accounts you must use a Service Control Policy (SCP) in the AWS Organization. The way you would do this is to create a deny rule that applies to anything that does not equal the specific instance type you want to allow.
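A sketch of such an SCP (the instance types, OU ID, and policy name are placeholders), created and attached through the Organizations API:

    import json
    import boto3

    # Deny launching any EC2 instance whose type is not in the approved list.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}},
        }],
    }

    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="restrict-ec2-instance-types",
        Description="Allow only approved EC2 instance types",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-ab12-example",              # placeholder OU ID
    )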
CORRECT: “Create an SCP with a deny rule that denies all but the specific instance types” is the correct answer.
INCORRECT: “Create an SCP with an allow rule that allows launching the specific instance types” is incorrect as a deny rule is required.
INCORRECT: “Create an IAM policy to deny launching all but the specific instance types” is incorrect. With IAM you need to apply the policy within each account rather than centrally so this would require much more effort.
INCORRECT: “Use AWS Resource Access Manager to control which launch types can be used” is incorrect. AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. It is not used for restricting access or permissions.

Question 7:
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
Options:
A. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
B. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ
E. Create new public and private subnets in the same AZ for high availability
Answer: A & B
Explanation
To add high availability to this architecture both the web tier and database tier require changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will ensure there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take advantage of a managed database with Multi-AZ functionality. This will ensure that if there is an issue preventing access to the primary database a secondary database can take over.
CORRECT: “Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs” is the correct answer.
CORRECT: “Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment” is the correct answer.
INCORRECT: “Create new public and private subnets in the same AZ for high availability” is incorrect as this would not add high availability.
INCORRECT: “Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)” is incorrect because the existing servers are in a single subnet. For HA we need instances in multiple subnets across multiple AZs.
INCORRECT: “Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ” is incorrect because we also need HA for the database layer.

Question 9:
An eCommerce company runs an application on Amazon EC2 instances in public and private subnets. The web application runs in a public subnet and the database runs in a private subnet. Both the public and private subnets are in a single Availability Zone.
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
Options:
A. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment
B. Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs
C. Create new public and private subnets in the same AZ but in a different Amazon VPC
D. Create an EC2 Auto Scaling group in the public subnet and use an Application Load Balancer
E. Create new public and private subnets in a different AZ. Create a database using Amazon EC2 in one AZ
Answer: A & B
Explanation
High availability can be achieved by using multiple Availability Zones within the same VPC. An EC2 Auto Scaling group can then be used to launch web application instances in multiple public subnets across multiple AZs and an ALB can be used to distribute incoming load.
The database solution can be made highly available by migrating from EC2 to Amazon RDS and using a Multi-AZ deployment model. This will provide the ability to failover to another AZ in the event of a failure of the primary database or the AZ in which it runs.
CORRECT: “Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs” is a correct answer.
CORRECT: “Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment” is also a correct answer.
INCORRECT: “Create new public and private subnets in the same AZ but in a different Amazon VPC” is incorrect. You cannot use multiple VPCs for this solution as it would be difficult to manage and direct traffic (you can’t load balance across VPCs).
INCORRECT: “Create an EC2 Auto Scaling group in the public subnet and use an Application Load Balancer” is incorrect. This does not achieve HA as you need multiple public subnets across multiple AZs.
INCORRECT: “Create new public and private subnets in a different AZ. Create a database using Amazon EC2 in one AZ” is incorrect. The database solution is not HA in this answer option.

Question 10:
A company uses Docker containers for many application workloads in an on-premise data center. The company is planning to deploy containers to AWS and the chief architect has mandated that the same configuration and administrative tools must be used across all containerized environments. The company also wishes to remain cloud agnostic to mitigate the impact of future changes in cloud strategy.
How can a Solutions Architect design a managed solution that will align with open-source software?
Options:
A. Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes
B. Launch the containers on a fleet of Amazon EC2 instances in a cluster placement group
C. Launch the containers on Amazon Elastic Container Service (ECS) with AWS Fargate instances
D. Launch the containers on Amazon Elastic Container Service (ECS) with Amazon EC2 instance worker nodes
Answer: A
Explanation
Amazon EKS is a managed service that can be used to run Kubernetes on AWS. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.
This solution ensures that the same open-source software is used for automating the deployment, scaling, and management of containerized applications both on-premises and in the AWS Cloud.
CORRECT: “Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes” is the correct answer.
INCORRECT: “Launch the containers on a fleet of Amazon EC2 instances in a cluster placement group” is incorrect. This is not a managed container orchestration solution, and a cluster placement group does nothing to address the tooling or cloud-agnostic requirements.
INCORRECT: “Launch the containers on Amazon Elastic Container Service (ECS) with AWS Fargate instances” is incorrect. ECS is an AWS-proprietary orchestrator, so it does not satisfy the open-source, cloud-agnostic requirement.
INCORRECT: “Launch the containers on Amazon Elastic Container Service (ECS) with Amazon EC2 instance worker nodes” is incorrect for the same reason: ECS does not use the same open-source Kubernetes tooling as the on-premises environment.

Question 11:
A recent security audit uncovered some poor deployment and configuration practices within your VPC. You need to ensure that applications are deployed in secure configurations.
How can this be achieved in the most operationally efficient manner?
Options:
A. Remove the ability for staff to deploy applications
B. Use AWS Inspector to apply secure configurations
C. Use CloudFormation with securely configured templates
D. Manually check all application configurations before deployment
Answer: C
Explanation
CloudFormation helps users to deploy resources in a consistent and orderly way. By ensuring the CloudFormation templates are created and administered with the right security configurations for your resources, you can then repeatedly deploy resources with secure settings and reduce the risk of human error.
CORRECT: “Use CloudFormation with securely configured templates” is the correct answer.
INCORRECT: “Remove the ability for staff to deploy applications” is incorrect. Removing the ability of staff to deploy resources does not help you to deploy applications securely as it does not solve the problem of how to do this in an operationally efficient manner.
INCORRECT: “Manually check all application configurations before deployment” is incorrect. Manual checking of all application configurations before deployment is not operationally efficient.
INCORRECT: “Use AWS Inspector to apply secure configurations” is incorrect. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It is not used to secure the actual deployment of resources, only to assess the deployed state of the resources.

Question 12:
A Solutions Architect has been tasked with re-deploying an application running on AWS to enable high availability. The application processes messages that are received in an ActiveMQ queue running on a single Amazon EC2 instance. Messages are then processed by a consumer application running on Amazon EC2. After processing the messages the consumer application writes results to a MySQL database running on Amazon EC2.
Which architecture offers the highest availability and low operational complexity?
Options:
A. Deploy a second Active MQ server to another Availability Zone. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone
B. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled
C. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled
D. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone
Answer: C
Explanation
The correct answer offers the highest availability as it includes Amazon MQ active/standby brokers across two AZs, an Auto Scaling group across two AZs, and a Multi-AZ Amazon RDS MySQL database deployment.
This architecture not only offers the highest availability it is also operationally simple as it maximizes the usage of managed services.
CORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled” is the correct answer.
INCORRECT: “Deploy a second Active MQ server to another Availability Zone. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone” is incorrect. This architecture does not offer the highest availability as it does not use Auto Scaling. It is also not the most operationally efficient architecture as it does not use AWS managed services.
INCORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone” is incorrect. This architecture does not use Auto Scaling for best HA or the RDS managed service.
INCORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled” is incorrect. This solution does not use Auto Scaling.

Question 13:
A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service.
Which of the following would you identify as data sources supported by GuardDuty?
Options:
A. VPC Flow Logs, API Gateway logs, S3 access logs
B. ELB logs, DNS logs, CloudTrail events
C. VPC Flow Logs, DNS logs, CloudTrail events
D. CloudFront logs, API Gateway logs, CloudTrail events
Answer: C
Explanation
Correct option:
VPC Flow Logs, DNS logs, CloudTrail events – Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
Incorrect options:
VPC Flow Logs, API Gateway logs, S3 access logs
ELB logs, DNS logs, CloudTrail events
CloudFront logs, API Gateway logs, CloudTrail events
These three options contradict the explanation provided above, so these options are incorrect.

Question 14:
A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.
Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?
Options:
A. AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs
B. Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts
C. Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once
D. AWS Shield Advanced is being used for custom servers, that are not part of AWS Cloud, thereby resulting in increased costs
Answer: C
Explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once – If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers, that are not part of AWS Cloud, thereby resulting in increased costs – AWS Shield Advanced does offer protection to resources outside of AWS. This should not cause an unexpected spike in billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs – AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service.
Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts – This option has been added as a distractor. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield Advanced service.

Question 15:
A leading carmaker would like to build a new car-as-a-sensor service by leveraging fully serverless components that are provisioned and managed automatically by AWS. The development team at the carmaker does not want an option that requires the capacity to be manually provisioned, as it does not want to respond manually to changing volumes of sensor data.
Given these constraints, which of the following solutions is the BEST fit to develop this car-as-a-sensor service?
Options:
A. Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches, and the data is written into an auto-scaled DynamoDB table for downstream processing
B. Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
C. Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
D. Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Answer: B
Explanation
Correct option:
Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
AWS manages all ongoing operations and underlying infrastructure needed to provide a highly available and scalable message queuing service. With SQS, there is no upfront cost, no need to acquire, install, and configure messaging software, and no time-consuming build-out and maintenance of supporting infrastructure. SQS queues are dynamically created and scale automatically so you can build and grow applications quickly and efficiently. As there is no need to manually provision capacity, this is the correct option.
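A minimal sketch of the wiring (the queue ARN and function name are placeholders): an event source mapping lets Lambda poll the SQS queue and invoke the function with batches of messages, which the function can then write to DynamoDB:

    import boto3

    lambda_client = boto3.client("lambda")

    # Lambda polls the queue on our behalf and invokes the function with
    # up to 10 messages per batch; there are no servers or capacity to manage.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:123456789012:sensor-ingest-queue",
        FunctionName="process-sensor-batch",     # placeholder function
        BatchSize=10,
    )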
Incorrect options:
Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches, and the data is written into an auto-scaled DynamoDB table for downstream processing – Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. However, the user is expected to manually provision an appropriate number of shards to process the expected volume of the incoming data stream. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream. Therefore Kinesis Data Streams is not the right fit for this use-case.
Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Using an application on an EC2 instance is ruled out as the carmaker wants to use fully serverless components. So both these options are incorrect.

Question 16:
A financial services company uses Amazon GuardDuty for analyzing its AWS account metadata to meet the compliance guidelines. However, the company has now decided to stop using GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud.
Which of the following techniques will help the company meet this requirement?
Options:
A. Suspend the service in the general settings
B. De-register the service under services tab
C. Disable the service in the general settings
D. Raise a service request with Amazon to completely delete the data from all their backups
Answer: C
Explanation
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Disable the service in the general settings – Disabling the service will delete all remaining data, including your findings and configurations before relinquishing the service permissions and resetting the service. So, this is the correct option for our use case.
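As a small illustrative sketch, this can be scripted per region with boto3; the console's disable action corresponds to deleting the detector via the API:

    import boto3

    guardduty = boto3.client("guardduty")

    # Disabling GuardDuty for the account/region deletes the detector,
    # which also removes the existing findings and configuration.
    for detector_id in guardduty.list_detectors()["DetectorIds"]:
        guardduty.delete_detector(DetectorId=detector_id)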
Incorrect options:
Suspend the service in the general settings – You can stop Amazon GuardDuty from analyzing your data sources at any time by choosing to suspend the service in the general settings. This will immediately stop the service from analyzing data, but does not delete your existing findings or configurations.
De-register the service under services tab – This is a made-up option, used only as a distractor.
Raise a service request with Amazon to completely delete the data from all their backups – There is no need to create a service request as you can delete the existing findings by disabling the service.

Question 17:
An IT security consultancy is working on a solution to protect data stored in S3 from any malicious activity as well as check for any vulnerabilities on EC2 instances.
As a solutions architect, which of the following solutions would you suggest to help address the given requirement?
Options:
A. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
B. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
C. Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
D. Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
Answer: B
Explanation
Correct option:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions.
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
These three options contradict the explanation provided above, so these options are incorrect.

Question 16:
A big data consulting firm needs to set up a data lake on Amazon S3 for a Health-Care client. The data lake is split into raw and refined zones. For compliance reasons, the source data needs to be kept for a minimum of 5 years. The source data arrives in the raw zone and is then processed via an AWS Glue based ETL job into the refined zone. The business analysts run ad-hoc queries only on the data in the refined zone using AWS Athena. The team is concerned about the cost of data storage in both the raw and refined zones as the data is increasing at a rate of 1 TB daily in each zone.
As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution? (Select two)
• Create a Lambda function based job to delete the raw zone data after 1 day
• Set up a lifecycle policy to transition the refined zone data into Glacier Deep Archive after 1 day of object creation
• Use Glue ETL job to write the transformed data in the refined zone using a compressed file format (Correct)
• Use Glue ETL job to write the transformed data in the refined zone using CSV format
• Set up a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation (Correct)
Explanation
Correct options:
Set up a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation
You can manage your objects so that they are stored cost-effectively throughout their lifecycle by configuring their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
For the given use-case, the raw zone consists of the source data, so it cannot be deleted due to compliance reasons. Therefore, you should use a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation.
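To make this concrete, here is a minimal boto3 sketch of such a lifecycle rule; the bucket name and the raw/ prefix are assumptions used only for illustration:
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefix names for the raw zone of the data lake.
    s3.put_bucket_lifecycle_configuration(
        Bucket="datalake-example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "raw-zone-to-deep-archive",
                    "Filter": {"Prefix": "raw/"},
                    "Status": "Enabled",
                    # Transition raw zone objects 1 day after creation.
                    "Transitions": [
                        {"Days": 1, "StorageClass": "DEEP_ARCHIVE"}
                    ],
                }
            ]
        },
    )
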
Use Glue ETL job to write the transformed data in the refined zone using a compressed file format
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You cannot transition the refined zone data into Glacier Deep Archive because it is used by the business analysts for ad-hoc querying. Therefore, the best optimization is to have the refined zone data stored in a compressed format via the Glue job. The compressed data would reduce the storage cost incurred on the data in the refined zone.
Incorrect options:
Create a Lambda function based job to delete the raw zone data after 1 day – As mentioned in the use-case, the source data needs to be kept for a minimum of 5 years for compliance reasons. Therefore, the data in the raw zone cannot be deleted after 1 day.
Set up a lifecycle policy to transition the refined zone data into Glacier Deep Archive after 1 day of object creation – You cannot transition the refined zone data into Glacier Deep Archive because it is used by the business analysts for ad-hoc querying. Hence this option is incorrect.
Use Glue ETL job to write the transformed data in the refined zone using CSV format – It is cost-optimal to write the data in the refined zone using a compressed format instead of CSV format. The compressed data would reduce the storage cost incurred on the data in the refined zone. So, this option is incorrect.






1. You are trying to launch an EC2 instance; however, the instance goes into a terminated state immediately. Which of the following would probably NOT be a reason for this?
A. The AMI is missing a required part
B. The snapshot is corrupt
C. You need to create storage in EBS first
D. You have reached your volume limit.

Answer: C


2. In the context of AWS support, why must an EC2 instance be unreachable for 20 minutes before a ticket can be opened, rather than allowing customers to open tickets immediately?
A. Because most reachability issues are resolved by automated processes in less than 20 mins
B. Because all EC2 instances are unreachable for 20 mins every day when AWS does routine maintenance
C. Because all EC2 instances are unreachable for 20 mins when first launched
D. Because of all the reasons listed here

Answer: A

Explanation: An EC2 instance must be unreachable for 20 mins before opening a ticket, because most reachability issues are resolved by automated processes in less than 20 mins and will not require any action on the part of the customer. If the instance is still unreachable after this time frame has passed, then you should open a case with support.


3. EBS provides the ability to create backups of any EC2 volume into what is known as
A. Snapshots
B. Images
C. Instance backups
D. Mirrors

Answer: A

Explanation: Amazon allows you to make backups of the data stored in EBS volumes through snapshots, which can later be used to create a new EBS volume.
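
For reference, a minimal boto3 sketch of this snapshot-and-restore workflow, where the volume ID and Availability Zone are placeholders:
    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot an existing EBS volume (placeholder volume ID).
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Backup of data volume",
    )

    # Wait until the snapshot completes, then restore it as a new volume.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1a",
    )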


4. A user is storing a large number of objects on S3. The user wants to implement search functionality among the objects. How can the user achieve this?
A. Use the indexing feature of S3
B. Tag the objects with the metadata to search on that
C. Use the query functionality of S3
D. Make your own DB system which stores the S3 metadata for the search functionality.

Answer: D

Explanation: In AWS, S3 doesn’t provide any query facility. To retrieve a specific object, the user needs to know the exact bucket/object key. In this case it is recommended to maintain your own DB system that manages the S3 metadata and key mapping.
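
A minimal sketch of that approach, assuming a hypothetical DynamoDB table named S3ObjectIndex (keyed on the object key) that is updated whenever an object is uploaded:
    import boto3

    s3 = boto3.client("s3")
    index = boto3.resource("dynamodb").Table("S3ObjectIndex")  # hypothetical table

    def put_object_with_index(bucket, key, body, tags):
        """Upload an object and record searchable metadata in DynamoDB."""
        s3.put_object(Bucket=bucket, Key=key, Body=body)
        index.put_item(Item={"ObjectKey": key, "Bucket": bucket, "Tags": tags})

    def search_by_tag(tag):
        """Scan the index for objects carrying a given tag (fine for small indexes)."""
        result = index.scan()
        return [item for item in result["Items"] if tag in item.get("Tags", [])]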


5. After setting up a VPC network, a more experienced cloud engineer suggests that to achieve low network latency and high network throughput you should look into setting up a placement group. You know nothing about this, so you begin to do some research about it and are especially curious about its limitations. Which of the below statements is wrong in describing the limitations of a placement group?
A. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed.
B. A placement group can span multiple AZs
C. You can't move an existing instance into a placement group
D. A placement group can span peered VPCs

Answer: B

A placement group is a logical grouping of instances within a single AZ. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. Placement groups have the following limitations: The name you specify for a placement group must be unique within your AWS account. A placement group can't span multiple AZs. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group. You can't merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group. A placement group can span peered VPCs; however, you will not get full bisection bandwidth between instances in peered VPCs. You can't move an existing instance into a placement group. You can create an AMI from an existing instance, and then launch a new instance from the AMI into a placement group.
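
As an illustration, a minimal boto3 sketch that creates a cluster placement group and launches a new instance into it (the AMI ID and instance type are placeholders); note that an existing instance cannot be moved into the group:
    import boto3

    ec2 = boto3.client("ec2")

    # Cluster placement groups live within a single Availability Zone.
    ec2.create_placement_group(GroupName="low-latency-group", Strategy="cluster")

    # Launch a new instance directly into the placement group
    # (placeholder AMI ID; choose a type that supports enhanced networking).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="c5n.large",
        MinCount=1,
        MaxCount=1,
        Placement={"GroupName": "low-latency-group"},
    )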


6. What is a placement group in Amazon EC2?
A. It is a group of EC2 instances within a single AZ
B. It is edge location of web content
C. It is the AWS region where you run the EC2 instance of web content
D. It is a group used to span multiple AZ

Answer: A

Explanation: A placement group is a logical grouping of instances within a single AZ.


7. You are migrating an internal server of your DC to an EC2 instance with an EBS volume. Your server disk usage is around 500 GB, so you copied all your data to a 2 TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?
A. To a 2 TB EBS volume
B. To an S3 bucket with two objects of 1 TB
C. To a 500 GB EBS volume
D. To an S3 bucket as a 2 TB snapshot

Answer: B

Explanation: An import to EBS will have different results depending on whether the capacity of your storage device is <= 1 TB or > 1 TB. The maximum size of an EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on S3. The target location is determined based on the total capacity of the device, not the amount of data on the device.


8. A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing setup currently uses an Oracle DB. Which of the following AWS databases would be best for accomplishing this task?
A. Amazon RDS
B. Amazon Redshift
C. Amazon SimpleDB
D. Amazon ElastiCache

Answer: A

Explanation: Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server or PostgreSQL database engine. This means that the code, applications, and tools that you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up the database, storing the backups for a user-defined retention period and enabling point-in-time recovery.
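
For illustration only, a minimal boto3 sketch of provisioning an Oracle RDS instance with automated backups and Multi-AZ failover enabled; the identifier, credentials and sizing are placeholders:
    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="migrated-oracle-db",   # placeholder name
        Engine="oracle-se2",
        LicenseModel="license-included",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",             # placeholder credential
        BackupRetentionPeriod=7,   # automated backups kept for 7 days
        MultiAZ=True,              # automatic failure detection and failover
    )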


9. True or False: A VPC contains multiple subnets, where each subnet can span multiple AZs
A. True, only if requested during the setup of VPCs
B. True
C. False
D. True, only for US region.

Answer: C

Explanation: A VPC can span several AZs. In contrast, a subnet must reside in a single AZ.
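
This distinction is visible in the API as well: the VPC is created per region, while each subnet is pinned to one Availability Zone. A minimal boto3 sketch, with CIDR blocks and AZ names chosen only for illustration:
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The VPC spans the whole region (and therefore all of its AZs).
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

    # Each subnet is tied to exactly one Availability Zone.
    ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24",
                      AvailabilityZone="us-east-1a")
    ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24",
                      AvailabilityZone="us-east-1b")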


10. An edge location is associated with which Amazon web service?
A. An edge location refers to the network configured within a zone or region
B. An edge location refers to an AWS region
C. An edge location is the location of the data center used for Amazon CloudFront
D. An edge location is a zone within an AWS region

Answer: C

Explanation: Amazon CloudFront is a content distribution network. A content delivery network or content distribution network (CDN) is a large distributed system of servers deployed in multiple data centers across the world. The location of the data center used for the CDN is called an edge location. Amazon CloudFront can cache static content at each edge location. This means that your popular static content (e.g., your site's logo, navigational images, CSS, JS code, etc.) will be available at a nearby edge location for browsers to download with low latency and improved performance for viewers. Caching popular static content with CloudFront also helps you offload requests for such files from your origin server. CloudFront serves the cached copy when available and only fetches the file from the origin when the edge location does not have a copy of it.


10. You are looking at ways to improve some existing infrastructure, as it seems a lot of engineering resources are being taken up with basic management and monitoring tasks and the cost seems to be excessive. You are thinking of deploying Amazon ElastiCache to help. Which of the following statements is true with regard to ElastiCache?
A. You can improve load and response times to user actions and queries; however, the cost associated with scaling web applications will be more.
B. You can't improve load and response times to user actions and queries, but you can reduce the cost associated with scaling web applications
C. You can improve load and response times to user actions and queries; however, the cost associated with scaling web applications will remain the same.
D. You can improve load and response times to user actions and queries and also reduce the cost associated with scaling web applications

Answer: D

Explanation: ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications.
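
A minimal cache-aside sketch against an ElastiCache Redis endpoint (the endpoint, key naming and query_database helper are hypothetical), showing how reads are served from memory before falling back to the database:
    import json
    import redis

    # Hypothetical ElastiCache Redis endpoint.
    cache = redis.Redis(host="my-cluster.abc123.0001.use1.cache.amazonaws.com",
                        port=6379)

    def get_product(product_id, query_database):
        """Return product data from cache if present, otherwise from the database."""
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
        # Cache miss: read from the slower disk-based database, cache for 5 minutes.
        record = query_database(product_id)
        cache.setex(key, 300, json.dumps(record))
        return record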


11. Your supervisor has asked you to build a simple file synchronization service for your department. He doesn't want to spend too much money and he wants to be notified of any changes to files by email. What do you think would be the best Amazon service to use for the email solution?
A. Amazon SES (Simple Email Service)
B. Amazon CloudSearch
C. Amazon SWF (Simple Workflow Service)
D. Amazon AppStream

Answer: A

Explanation: File change notifications can be sent via email to users following the resource with Amazon SES, an easy-to-use, cost-effective email solution.
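
For example, a minimal boto3 sketch of sending such a change notification through SES; the addresses and file name are placeholders and the sender would need to be verified in SES first:
    import boto3

    ses = boto3.client("ses", region_name="us-east-1")

    ses.send_email(
        Source="notifications@example.com",          # placeholder, SES-verified
        Destination={"ToAddresses": ["user@example.com"]},
        Message={
            "Subject": {"Data": "File changed: report.docx"},
            "Body": {"Text": {"Data": "The file report.docx was updated."}},
        },
    )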


12. Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company’s offices. She needs you to make sure that the communication between the VPNs is secured. Which of the following services would be the best for providing a low-cost hub-and-spoke model for primary and backup connectivity between these remote offices?
A. Amazon CloudFront
B. AWS Direct Connect
C. AWS CloudHSM
D. AWS VPN CloudHub

Answer: D

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
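
A minimal boto3 sketch of the CloudHub building blocks: one virtual private gateway as the hub, plus a customer gateway and VPN connection per branch office. The IP addresses and BGP ASNs are placeholders; each site needs a unique ASN:
    import boto3

    ec2 = boto3.client("ec2")

    # One virtual private gateway acts as the hub.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]

    # One customer gateway and VPN connection per branch office (the spokes).
    branches = [("203.0.113.10", 65001), ("198.51.100.20", 65002)]  # placeholders
    for public_ip, asn in branches:
        cgw = ec2.create_customer_gateway(
            Type="ipsec.1", PublicIp=public_ip, BgpAsn=asn
        )["CustomerGateway"]
        ec2.create_vpn_connection(
            Type="ipsec.1",
            CustomerGatewayId=cgw["CustomerGatewayId"],
            VpnGatewayId=vgw["VpnGatewayId"],
        )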


Pending first four

1. EC2 Compute – 42
2. Virtual Private Cloud – 46
3. Storage Services – 27
4. Security Architecture – 36
5. Database Services – 34
6. Fault Tolerant Systems – 19
7. Deployment and Orchestration – 33
8. Monitoring Services – 17
Total = 254

EC2 Compute
Question 1:
What three attributes are selectable when creating an EBS volume for an EC2
instance?
A. volume type
B. IOPS
C. region
D. CMK
E. ELB
F. EIP
Answer (A,B,D)

Question 2: You have been asked to migrate a 10 GB unencrypted EBS volume
to an encrypted volume for security purposes. What are three key steps required
as part of the migration?
A. pause the unencrypted instance
B. create a new encrypted volume of the same size and availability zone
C. create a new encrypted volume of the same size in any availability zone
D. start converter instance
E. shutdown and detach the unencrypted instance
Answer (B,D,E)
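
Related to the question above, one common way to obtain an encrypted copy of an unencrypted volume is the snapshot-copy route; a minimal boto3 sketch with placeholder IDs, region, Availability Zone and KMS key:
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Snapshot the unencrypted volume (placeholder volume ID).
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Copy the snapshot with encryption enabled (placeholder KMS key alias).
    copy = ec2.copy_snapshot(
        SourceSnapshotId=snap["SnapshotId"],
        SourceRegion="us-east-1",
        Encrypted=True,
        KmsKeyId="alias/ebs-migration-key",
    )

    # Create an encrypted volume of the same size in the same Availability Zone.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])
    ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")
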

Question 3: What is EC2 instance protection?
A. prevents Auto Scaling from selecting specific EC2 instance to be
replaced when scaling in
B. prevents Auto Scaling from selecting specific EC2 instance to be
replaced when scaling out
C. prevents Auto Scaling from selecting specific EC2 instance for
termination when scaling out
D. prevents Auto Scaling from selecting specific EC2 instance for
termination when scaling in
E. prevents Auto Scaling from selecting specific EC2 instance for
termination when paused
F. prevents Auto Scaling from selecting specific EC2 instance for
termination when stopped
Answer (D)

Question 4:
What two features are supported with EBS volume Snapshot feature?
A. EBS replication across regions
B. EBS multi-zone replication
C. EBS single region only
D. full snapshot data only
E. unencrypted snapshot only
Answer (A,B)
Question 5:
What two resource tags are supported for an EC2 instance?
A. VPC endpoint
B. EIP
C. network interface
D. security group
E. Flow Log
Answer (A,E)
Question 6:
What two options are available to alert tenants when an EC2 instance is
terminated?
A. SNS
B. CloudTrail
C. Lambda function
D. SQS
E. STS
Answer (A,C)
Question 7:
What class of EC2 instance type is recommended for running data analytics?
A. memory optimized
B. compute optimized
C. storage optimized
D. general purpose optimized
Answer (B)
Question 8:
What class of EC2 instance type is recommended for database servers?
A. memory optimized
B. compute optimized
C. storage optimized
D. general purpose optimized
Answer (A)
Question 9:
What two attributes distinguish each pricing model?
A. reliability
B. amazon service
C. discount
D. performance
E. redundancy
Answer (A,C)
Question 10:
What are three standard AWS pricing models?
A. elastic
B. spot
C. reserved
D. dynamic
E. demand
Answer (B,C,E)
Question 11:
How is an EBS root volume created when launching an EC2 instance from a
new EBS-backed AMI?
A. S3 template
B. original AMI
C. snapshot
D. instance store
Answer (C)
Question 12:
What Amazon AWS sources are available for creating an EBS-Backed Linux
AMI? (select two)
A. EC2 instance
B. Amazon SMS
C. VM Import/Export
D. EBS Snapshot
E. S3 bucket
Answer (A,D)
Question 13:
What is required to prevent an instance from being launched and incurring costs?
A. stop instance
B. terminate instance
C. terminate AMI and de-register instance
D. stop and de-register instance
E. stop, deregister AMI and terminate instance
Answer (E)
Question 14:
What is an EBS Snapshot?
A. backup of an EBS root volume and instance data
B. backup of an EC2 instance
C. backup of configuration settings
D. backup of instance store
Answer (A)
Question 15:
Where are ELB and Auto-Scaling groups deployed as a unified solution for
horizontal scaling?
A. database instances
B. all instances
C. web server instances
D. default VPC only
Answer (C)
Question 16: What feature is supported when attaching or detaching an EBS
volume from an EC2 instance?
A. EBS volume can be attached and detached to an EC2 instance in the
same region
B. EBS volume can be attached and detached to an EC2 instance that is
cross-region
C. EBS volume can only be copied and attached to an EC2 instance that is
cross-region
D. EBS volume can only be attached and detached to an EC2 instance in the
same Availability Zone
Answer (D)
Question 17:
What two statements correctly describe how to add or modify IAM roles to a
running EC2 instance?
A. attach an IAM role to an existing EC2 instance from the EC2 console
B. replace an IAM role attached to an existing EC2 instance from the EC2
console
C. attach an IAM role to the user account and relaunch the EC2 instance
D. add the EC2 instance to a group where the role is a member
Answer (A,B)
Question 18: What is the default behavior for an EC2 instance when
terminated? (Select two)
A. DeleteOnTermination attribute cannot be modified
B. EBS root device volume and additional attached volumes are deleted
immediately
C. EBS data volumes that you attach at launch persist
D. EBS root device volume is automatically deleted when instance
terminates
Answer (C,D)
Question 19:
How do you launch an EC2 instance after it is terminated? (Select two)
A. launch a new instance using the same AMI
B. reboot instance from CLI
C. launch a new instance from a Snapshot
D. reboot instance from management console
E. contact AWS support to reset
Answer (A,C)
Question 20:
What service can automate EBS snapshots (backups) for restoring EBS
volumes?
A. CloudWatch event
B. SNS topic
C. CloudTrail
D. Amazon Inspector
E. CloudWatch alarm
Answer (A)
Question 21:
What will cause AWS to terminate an EC2 instance on launch? (Select two)
A. security group error
B. number of EC2 instances on AWS account exceeded
C. EBS volume limits exceeded
D. multiple IP addresses assigned to instance
E. unsupported instance type assigned
Answer (B,C)
Question 22: You recently made some configuration changes to an EC2
instance. You then launched a new EC2 instance from the same AMI however
none of the settings were saved. What is the cause of this error?
A. did not save configuration changes to EC2 instance
B. did not save configuration changes to AMI
C. did not create new AMI
D. did not reboot EC2 instance to enable changes
Answer (C)
Question 23: What statements are correct concerning DisableApiTermination
attribute? (Select two)
A. cannot enable termination protection for Spot instances
B. termination protection is disabled by default for an EC2 instance
C. termination protection is enabled by default for an EC2 instance
D. can enable termination protection for Spot instances
E. DisableApiTermination attribute supported for EBS-backed instances
only
Answer (A,B)
Question 24:
What is required to copy an encrypted EBS snapshot cross-account? (Select two)
A. copy the unencrypted EBS snapshot to an S3 bucket
B. distribute the custom key from CloudFront
C. share the custom key for the snapshot with the target account
D. share the encrypted EBS snapshot with the target account
E. share the encrypted EBS snapshots publicly
F. enable root access security on both accounts
Answer (C,D)
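
Those two steps can be sketched with boto3 roughly as follows; the account ID, snapshot ID and key ARN are placeholders:
    import boto3

    ec2 = boto3.client("ec2")
    kms = boto3.client("kms")

    TARGET_ACCOUNT = "111122223333"  # placeholder account ID

    # Share the encrypted snapshot with the target account.
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[TARGET_ACCOUNT],
    )

    # Allow the target account to use the custom CMK that encrypted the snapshot.
    kms.create_grant(
        KeyId="arn:aws:kms:us-east-1:999999999999:key/placeholder-key-id",
        GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:root",
        Operations=["Decrypt", "DescribeKey", "CreateGrant"],
    )
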
Question 25:
What three services enable Single-AZ as a default?
A. EC2
B. ELB
C. Auto-Scaling
D. DynamoDB
E. S3
Answer (A,B,C)
Question 26:
What AWS service automatically publishes access logs every five minutes?
A. VPC Flow Logs
B. Elastic Load Balancer
C. CloudTrail
D. DNS Route 53
Answer (B)
Question 27:
You have developed a web-based application for file sharing that will allow
customers to access files. There are a variety of sizes that include larger .pdf and
video files. What two solution stacks could tenants use for an online file sharing
service? (Select two)
A. EC2, ELB, Auto-Scaling, S3
B. Route 53, Auto-Scaling, DynamoDB
C. EC2, Auto-Scaling, RDS
D. CloudFront
Answer (A,D)
Question 28:
What infrastructure services are provided to EC2 instances? (Select three)
A. VPN
B. storage
C. compute
D. transport
E. security
F. support
Answer (B,C,D)
Question 29:
What steps are required from AWS console to copy an EBS-backed AMI for a
database instance cross-region?
A. create Snapshot of data volume, select Copy, select destination region
B. select Copy EBS-backed AMI option and destination region
C. select copy database volume and destination region
D. create Snapshot of EBS-backed AMI, select Copy Snapshot option,
select destination region
E. create Snapshot of Instance-store AMI, select Copy AMI option, select
destination region
Answer (D)
Question 30:
How is capacity (compute, storage and network speed) managed and assigned to
EC2 instances?
A. AMI
B. instance type
C. IOPS
D. Auto-Scaling
Answer (B)
Question 31:
What storage type enables permanent attachment of volumes to EC2 instances?
A. S3
B. RDS
C. TDS
D. EBS
E. instance store
Answer (D)
Question 32: What is the recommended method for migrating (copying) an EC2
instance to a different region?
A. terminate instance, select region, copy instance to destination region
B. select AMI associated with EC2 instance and use Copy AMI option
C. stop instance and copy AMI to destination region
D. cross-region copy is not currently supported
Answer (B)
Question 33:
What are two attributes that define an EC2 instance type?
A. vCPU
B. license type
C. EBS volume storage
D. IP address
E. Auto-Scaling
Answer (A,C)
Question 34:
How is an Amazon Elastic Load Balancer (ELB) assigned?
A. per EC2 instance
B. per Auto-Scaling group
C. per subnet
D. per VPC
Answer (A)
Question 35:
What method detects when to replace an EC2 instance that is assigned to an
Auto-Scaling group?
A. health check
B. load balancing algorithm
C. EC2 health check
D. not currently supported
E. dynamic path detection
F. Auto-Scaling
Answer (A)
Question 36:
What two statements correctly describe Auto-Scaling groups?
A. horizontal scaling of capacity
B. decrease number of instances only
C. EC2 instances are assigned to a group
D. database instances only
E. no support for multiple availability zones
Answer (A,C)
Question 37:
What is the default maximum number of Elastic IP addresses assignable per
Amazon AWS region?
A. 1
B. 100
C. 5
D. unlimited
Answer (C)
Question 38:
How are snapshots for an EBS volume created when it is the root device for an
instance?
A. pause instance, unmount volume and snapshot
B. terminate instance and snapshot
C. unencrypt volume and snapshot dynamically
D. stop instance, unmount volume and snapshot
Answer (D)
Question 39:
What cloud compute components are configured by tenants and not Amazon
AWS support engineers? (Select three)
A. hypervisor
B. upstream physical switch
C. virtual appliances
D. guest operating system
E. applications and databases
F. RDS
Answer (C,D,E)
Question 40:
What three attributes are used to define a launch configuration template for an
Auto-Scaling group?
A. instance type
B. private IP address
C. Elastic IP
D. security group
E. AMI
Answer (A,D,E)
Question 41:
What three characteristics or limitations differentiate EC2 instance types?
A. VPC only
B. application type
C. EBS volume only
D. virtualization type
E. AWS service selected
Answer (A,C,D)
Question 42:
Select two differences between HVM and PV virtualization types?
A. HVM supports all current generation instance types
B. HVM is similar to bare metal hypervisor architecture
C. PV provides better performance than HVM for most instance types
D. HVM doesn’t support enhanced networking
E. HVM doesn’t support current generation instance types
Answer (A,B)
Virtual Private Cloud (VPC)
Question 1:
What are the minimum components required to enable a web-based application with public web servers and a private database tier? (Select three)
A. Internet gateway
B. Assign EIP addressing to database instances on private subnet
C. Virtual private gateway
D. Assign database instances to private subnet and private IP addressing
E. Assign EIP and private IP addressing to web servers on public subnet
Answer (A,D,E)
Question 2:
Refer to the network drawing. How are packets routed from private subnet
to public subnet for the following web-based application with a database tier?
A. Internet gateway
B. custom route table
C. 10.0.0.0/16
D. nat-instance-id
E. igw-id
F. add custom route table
Answer (D)
Question 3:
What VPC component provides Network Address Translation?
A. NAT instance
B. NAT gateway
C. virtual private gateway
D. Internet gateway
E. ECS
Answer (D)
Question 4:
What are the advantages of NAT gateway over NAT instance? (Select two)
A. NAT gateway requires a single EC2 instance
B. NAT gateway is scalable
C. NAT gateway translates faster
D. NAT gateways is a managed service
E. NAT gateway is Linux-based
Answer (B,D)
Question 5:
What are the management responsibilities of tenants and not Amazon AWS? (Select two)
A. EC2 instances
B. RDS
C. Beanstalk
D. NAT instance
Answer (A,D)
Question 6:
What two features provide an encrypted (VPN) connection from VPC to an
enterprise data center?
A. Internet gateway
B. Amazon RDS
C. Virtual private gateway
D. CSR 1000V router
E. NAT gateway
Answer (C,D)
Question 7:
What two attributes are supported when configuring an Amazon Virtual private
gateway (VPG)?
A. route propagation
B. Elastic IP (EIP)
C. DHCP
D. public IPv4 address
E. public subnets
Answer (A,C)
Question 8:
What two features are available with AWS Direct Connect service?
A. internet access
B. extend on-premises VLANs to cloud
C. bidirectional forwarding detection (BFD)
D. load balancing between Direct Connect and VPN connection
E. public and private AWS services
Answer (C,E)
Question 9:
When is Direct Connect a preferred solution over VPN IPsec?
A. fast and reliable connection
B. redundancy is a key requirement
C. fast and easy to deploy
D. layer 3 connectivity
E. layer 2 connectivity
Answer (A)
Question 10:
You have been asked to setup a VPC endpoint connection between VPC and S3
buckets for storing backups and snapshots. What AWS components are currently
required when configuring a VPC endpoint?
A. Internet gateway
B. NAT instance
C. Elastic IP
D. private IP address
Answer (D)
Question 11:
What are the primary advantages of VPC endpoints? (Select two)
A. reliability
B. cost
C. throughput
D. security
Answer (B,D)
Question 12:
What are the DHCP option attributes used to assign private DNS servers to your
VPC?
A. dns resolution and domain name
B. hostnames and internet domain
C. domain servers and domain name
D. domain-name-servers and domain-name
Answer (D)
Question 13:
What DNS attributes are configured when Default VPC option is selected?
A. DNS resolution: yes / DNS hostnames: yes
B. DNS resolution: yes / DNS hostnames: no
C. DNS resolution: no / DNS hostnames: yes
D. DNS resolution: no / DNS hostnames: no
Answer (A)
Question 14:
What configuration settings are required from the remote VPC in order to create
cross-account peering? (Select three)
A. VPC ID
B. account username
C. account ID
D. CMK keys
E. VPC CIDR block
F. volume type
Answer (A,C,E)
Question 15:
What CIDR block range is supported for IPv4 addressing and subnetting within
a single VPC?
A. /16 to /32
B. /16 to /24
C. /16 to /28
D. /16 to /20
Answer (C)
Question 16: What problem is caused by the fact that VPC peering does not
permit transitive routing?
A. additional VPC route tables to manage
B. virtual private gateway is required
C. Internet gateway is required for each VPC
D. routing between connected spokes through hub VPC is complex
E. increased number of peer links required
Answer (E)
Question 17:
What two statements correctly describe Elastic Load Balancer operation?
A. spans multiple regions
B. assigned per EC2 instance
C. assigned per subnet
D. assigned per Auto-Scaling group
E. no cross-region support
Answer (D,E)
Question 18:
What are two advantages of Elastic IP (EIP) over AWS public IPv4 addresses?
A. EIP can be reassigned
B. EIP is private
C. EIP is dynamic
D. EIP is persistent
E. EIP is public and private
Answer (A,D)
Question 19:
What AWS services are globally managed? (Select four)
A. IAM
B. S3
C. CloudFront
D. Route 53
E. DynamoDB
F. WAF
G. ELB
Answer (A,C,D,F)
Question 20:
What methods are available for creating a VPC? (Select three)
A. AWS management console
B. AWS marketplace
C. VPC wizard
D. VPC console
E. Direct Connect
Answer (A,C,D)
Question 21: What two default settings are configured for tenants by AWS
when Default VPC option is selected?
A. creates a size /20 default subnet in each Availability Zone
B. creates an Internet gateway
C. creates a main route table with local route 10.0.0.0/16
D. create a virtual private gateway
E. create a security group that explicitly denies all traffic
Answer (A,B)
Question 22:
What three statements correctly describe IP address allocation within a VPC?
A. EC2 instance must be terminated to reassign an IP address
B. EC2 instance that is paused can reassign IP address
C. EC2 instance that is stopped can reassign IP address
D. private IP addresses are allocated from a pool and can be reassigned
E. private IP addresses can be assigned by tenant
F. VPC supports dual stack mode (IPv4/IPv6)
Answer (A,E,F)
Question 23:
What are two advantages of selecting default tenancy option for your VPC when
creating it?
A. performance and reliability
B. some AWS services do not work with a dedicated tenancy VPC
C. tenant can launch instances within VPC as default or dedicated instances
D. instance launch is faster
Answer (B,C)
Question 24: What is the purpose of a local route within a VPC route table?
A. local route is derived from the default VPC CIDR block 10.0.0.0/16
B. communicate between instances within the same subnet or different
subnets
C. used to communicate between instances within the same subnet
D. default route for communicating between private and public subnets
E. only installed in the main route table
Answer (C)
Question 25:
What is the default behavior when adding a new subnet to your VPC? (Select
two)
A. new subnet is associated with the main route table
B. new subnet is associated with the custom route table
C. new subnet is associated with any selected route table
D. new subnet is assigned to the default subnet
E. new subnet is assigned from the VPC CIDR block
Answer (A,E)
Question 26: You have enabled Amazon RDS database services in VPC1 for an
application that has public web servers in VPC2. How do you connect the web
servers to the RDS database instance so they can communicate considering the
VPCs are in the same region?
A. VPC endpoints
B. VPN gateway
C. path-based routing
D. VPC peering
E. AWS Network Load Balancer
Answer (D)
Question 27:
What AWS services now support VPC endpoints feature for optimizing security?
(Select three)
A. Kinesis
B. DNS Route 53
C. S3
D. DynamoDB
E. RDS
Answer (A,C,D)
Question 28:
What are three characteristics of an Amazon Virtual Private Cloud?
A. public and private IP addressing
B. broadcasts
C. multiple private IP addresses per network interface
D. dedicated single tenant hardware only
E. persistent public IP addresses
F. HSRP
Answer (A,C,E)
Question 29: What is the difference between VPC main route table and custom
route table?
A. VPC only creates a main route table when started
B. custom route table is the default
C. custom route table is created for public subnets
D. custom route table is created for private subnets
E. main route table is created for public and private subnets
Answer (C)
Question 30:
What is the purpose of the native VPC router?
A. route packets across the internet
B. route packets between private cloud instances
C. route packets between subnets
D. route packets from instances to S3 storage volumes
E. route packets across VPN
Answer (C)
Question 31:
How are private DNS servers assigned to an Amazon VPC?
A. not supported
B. select nondefault VPC
C. select default VPC
D. select EC-2 classic
Answer (B)
Question 32:
What are two characteristics of an Amazon security group?
A. instance level packet filtering
B. deny rules only
C. permit rules only
D. subnet level packet filtering
E. inbound only
Answer (A,C)
Question 33:
What statement is true of Network Access Control Lists (ACL) operation within
an Amazon VPC?
A. instance and subnet level packet filtering
B. subnet level packet filtering
C. inbound only
D. only one ACL allowed per VPC
E. outbound only
Answer (B)
Question 34:
How are packets forwarded between public and private subnets within VPC?
A. EIP
B. NAT
C. main route table
D. VPN
Answer (B)
Question 35:
What two statements accurately describe Amazon VPC architecture?
A. Elastic Load Balancer (ELB) cannot span multiple availability zones
B. VPC does not support DMVPN connection
C. VPC subnet cannot span multiple availability zones
D. VPC cannot span multiple regions
E. Flow logs are not supported within a VPC
Answer (C,D)
Question 36:
What is a requirement for attaching EC2 instances to on-premises clients and
applications?
A. Amazon Virtual Private Gateway (VPN)
B. Amazon Internet Gateway
C. VPN Connection
D. Elastic Load Balancer (ELB)
E. NAT
Answer (B)
Question 37:
What two statements correctly describe Amazon virtual private gateway?
A. assign to private subnets only
B. assign to public subnets only
C. single virtual private gateway per VPC
D. multiple virtual private gateways per VPC
E. single virtual private gateway per region
Answer (A,C)
Question 38:
What is the maximum access port speed available with Amazon Direct Connect
service?
A. 1 Gbps
B. 10 Gbps
C. 500 Mbps
D. 100 Gbps
E. 100 Mbps
Answer (B)
Question 39:
Refer to the drawing. Your company has asked you to configure a peering link
between two VPCs that are currently not connected or exchanging any packets.
What destination and target is configured in the routing table of VPC1 to enable
packet forwarding to VPC2?
A. destination = 172.16.0.0/16
target = pcx-vpc2vpc1
B. destination = 10.0.0.0/16
target = pcx-vpc2
C. destination = 172.16.0.0/16
target = 10.0.0.0/16
D. destination = 172.16.0.0/16
target = pcx-vpc1vpc2
E. default route only
Answer (D)
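
A minimal boto3 sketch of adding that route to VPC1's route table; the route table and peering connection IDs are placeholders, and the destination CIDR is VPC2's range from the drawing:
    import boto3

    ec2 = boto3.client("ec2")

    # In VPC1's route table, send traffic destined for VPC2's CIDR
    # over the peering connection.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",           # placeholder VPC1 route table
        DestinationCidrBlock="172.16.0.0/16",           # VPC2 CIDR
        VpcPeeringConnectionId="pcx-0123456789abcdef0", # placeholder peering ID
    )
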
Question 40:
How is routing enabled by default within a VPC for an EC2 instance?
A. add a default route
B. main route table
C. custom route table
D. must be configured explicitly
Answer (B)
Question 41:
What three features are not supported with VPC peering?
A. overlapping CIDR blocks
B. IPv6 addressing
C. Gateways
D. transitive routing
E. RedShift
F. ElastiCache
Answer (A,C,D)
Question 42:
What route is used in a VPC routing table for packet forwarding to a Gateway?
A. static route
B. 10.0.0.0/16
C. tenant configured
D. 0.0.0.0/0
E. 0.0.0.0/16
Answer (D)
Question 43: You are asked to deploy a web application comprised of multiple
public web servers with only private addressing assigned. What Amazon AWS
solution enables multiple servers on a private subnet with only a single EIP
required and Availability Zone redundancy?
A. NAT instance
B. Internet gateway
C. virtual private gateway
D. NAT gateway
E. Elastic Network Interface (ENI)
Answer (D)
Question 44:
What is the IP addressing schema assigned to a default VPC?
A. 172.31.0.0/16 CIDR block subnetted with 172.31.0.0/20
B. 172.16.0.0/16 CIDR block subnetted with 172.16.0.0/24
C. 10.0.0.0/16 CIDR block subnetted with 10.0.0.0/24
D. 172.16.0.0/24 CIDR block subnetted with 172.31.0.0/18
Answer (A)
Question 45:
What default configuration and components are added by AWS when Default
VPC type is selected? (Select three)
A. Internet gateway
B. virtual private gateway
C. NAT instance
D. security group
E. DNS
Answer (A,D,E)
Question 46:
What feature requires tenants to disable source/destination check?
A. Elastic IP (EIP)
B. data replication
C. VPC peering
D. NAT
E. Internet gateway
Answer (D)
Storage Services
Question 1:
What AWS storage solution allows thousands of EC2 instances to
simultaneously upload, access, delete and share files?
A. EBS
B. S3
C. Glacier
D. EFS
Answer (D)
Question 2:
What is required for an EFS mount target? (Select two)
A. EIP
B. DNS name
C. IP address
D. DHCP
E. IAM role
Answer (B,C)
Question 3:
What connectivity features are recommended for copying on-premises files to
EFS? (Select two)
A. VPN IPsec
B. Internet Gateway
C. Direct Connect
D. File Sync
E. FTP
F. AWS Storage Gateway
Answer (C,D)
Question 4:
What AWS services encrypts data at rest by default? (Select two)
A. S3
B. AWS Storage Gateway
C. EBS
D. Glacier
E. RDS
Answer (B,D)
Question 5:
What fault tolerant features does S3 storage provide? (Select three)
A. cross-region replication
B. versioning must be disabled
C. cross-region asynchronous replication of objects
D. synchronous replication of objects within a region
E. multiple destination buckets
Answer (A,C,D)
Question 6:
What is the fastest technique for deleting 900 objects in an S3 bucket with a
single HTTP request?
A. Multi-Part Delete API
B. Multi-Object Delete API
C. 100 objects is maximum per request
D. Fast-Delete API
Answer (B)
Question 7:
What security controls technique is recommended for S3 cross-account access?
A. IAM group
B. security groups
C. S3 ACL
D. bucket policies
Answer (D)
Question 8:
What are two advantages of cross-region replication of an S3 bucket?
A. cost
B. security compliance
C. scalability
D. Beanstalk support
E. minimize latency
Answer (B,E)
Question 9:
What are two primary differences between Amazon S3 Standard and S3/RRS
storage classes?
A. Amazon Standard does not replicate at all
B. RRS provides higher durability
C. RRS provides higher availability
D. RRS does not replicate objects as many times
E. application usage is different
Answer (D,E)
Question 10:
What two features are enabled with S3 services?
A. store objects of any size
B. dynamic web content
C. supports Provisioned IOPS
D. store virtually unlimited amounts of data
E. bucket names are globally unique
Answer (D,E)
Question 11:
What new feature was recently added to SQS that defines how messages are
ordered?
A. streams
B. SNS
C. FIFO
D. TLS
E. decoupling
Answer (C)
Question 12:
What two AWS storage types are persistent?
A. ephemeral
B. S3
C. EBS
D. instance store
E. SAML
Answer (B,C)
Question 13:
Select three on-premises backup solutions used for copying data to an Amazon
AWS S3 bucket?
A. AWS Import/Export
B. RDS
C. Snowball
D. Availability Zone (AZ) replication
E. AWS Storage Gateway
Answer (A,C,E)
Question 14:
You have 1 TB of data and want to archive the data that won’t be accessed that
often. What Amazon AWS storage solution is recommended?
A. Glacier
B. EBS
C. ephemeral
D. CloudFront
Answer (A)
Question 15:
What are three methods of accessing DynamoDB for customization purposes?
A. CLI
B. AWS console
C. API call
D. vCenter
E. Beanstalk
Answer (A,B,C)
Question 16:
What are two primary differences between Glacier and S3 storage services?
A. Glacier is lower cost
B. S3 is lower cost
C. Glacier is preferred for frequent data access with lower latency
D. S3 is preferred for frequent data access with lower latency
E. S3 supports larger file size
Answer (A,D)
Question 17:
What statement correctly describes the operation of AWS Glacier archive?
A. archive is a group of vaults
B. archive is an unencrypted vault
C. archive supports aggregated files only
D. maximum file size is 1 TB
E. archive supports single and aggregated files
Answer (E)
Question 18: What are three primary differences between S3 vs EBS?
A. S3 is a multi-purpose public internet-based storage
B. EBS is directly assigned to a tenant VPC EC2 instance
C. EBS and S3 provide persistent storage
D. EBS snapshots are typically stored on S3 buckets
E. EBS and S3 use buckets to manage files
F. EBS and S3 are based on block level storage
Answer (A,B,D)
Question 19:
What on-premises solution is available from Amazon AWS to minimize latency
for all data?
A. Gateway-VTL
B. Gateway-cached volumes
C. Gateway-stored volumes
D. EBS
E. S3 bucket
F. ElastiCache
Answer (C)
Question 20:
What feature transitions S3 storage to Standard-IA for cost optimization?
A. RRS/S3
B. Glacier vault
C. storage class analysis
D. path-based routing
Answer (C)
Question 21:
How does AWS uniquely identify S3 objects?
A. bucket name
B. version
C. key
D. object tag
Answer (C)
Question 22:
What is the advantage of read-after-write consistency for S3 buckets?
A. no stale reads for PUT of any new object in all regions
B. higher throughput for all requests
C. stale reads for PUT requests in some regions
D. no stale reads for GET requests in a single region
Answer (A)
Question 23:
What is the maximum single file object size supported with Amazon S3?
A. 5 GB
B. 5 TB
C. 1 TB
D. 100 GB
Answer (B)
Question 24:
What security problem is solved by using Cross-Origin Resource Sharing
(CORS)?
A. enable HTTP requests from within scripts to a different domain
B. enable sharing of web-based files between different buckets
C. provide security for third party objects within AWS
D. permits sharing objects between AWS services
Answer (A)
Question 25:
What is recommended for migrating 40 TB of data from on-premises to S3
when the internet link is often overutilized?
A. AWS Storage gateway
B. AWS Snowball
C. AWS Import/Export
D. AWS Elastic File System
E. AWS Elasticsearch
F. AWS Multi-Part Upload API
Answer (B)
Question 26:
Your company is publishing an online catalog of books that is currently using
DynamoDB for storing the information associated with each item. There is a
requirement to add images for each book. What solution is most cost effective
and designed for that purpose?
A. RedShift
B. EBS
C. RDS
D. S3
E. Kinesis
Answer (D)
Question 27:
You have an application that collects monitoring data from 10,000 sensors (IoT)
deployed in the USA. The datapoints are comprised of video events for home
security and environment status alerts. The application will be deployed to AWS
with EC2 instances as data collectors. What AWS storage service is preferred for
storing video files from sensors?
A. RedShift
B. RDS
C. S3
D. DynamoDB
Answer (C)
Security Architecture
Question 1:
What statements correctly describe security groups within a VPC? (Select three)
A. default security group only permit inbound traffic
B. security groups are stateful firewalls
C. only allow rules are supported
D. allow and deny rules are supported
E. security groups are associated to network interfaces
Answer (B,C,E)
Question 2:
What three items are required to configure a security group rule?
A. protocol type
B. VPC name
C. port number
D. source IP
E. destination IP
F. description
Answer (A,C,D)
Question 3:
What two source IP address types are permitted in a security group rule?
A. only CIDR blocks with /16 subnet mask
B. source IP address 0.0.0.0/0
C. single source IP address with /24 subnet mask
D. security group id
E. IPv6 address with /64 prefix length
Answer (B,D)
Question 4:
What protocols must be enabled for remote access to Linux-based and Windows-based EC2 instances?
A. SSH, ICMP, Telnet
B. SSH, HTTP, RDP
C. SSH, HTTP, SSL
D. SSH, RDP, ICMP
Answer (D)
Question 5:
What distinguishes network ACLs from security groups within a VPC? (Select three)
A. ACL filters at the subnet level
B. ACL is based on deny rules only
C. ACL is applied to instances and subnets
D. ACL is stateless
E. ACL supports a numbered list for filtering
Answer (A,D,E)
Question 6:
What happens to the security permissions of a tenant when an IAM role is
granted? (Select two)
A. tenant inherits only permissions assigned to the IAM role temporarily
B. add security permissions of the IAM role to existing permissions
C. previous security permissions are no longer in effect
D. previous security permissions are deleted unless reconfigured
E. tenant inherits only read permissions assigned to the IAM role
Answer (A,C)
Question 7:
Where are IAM permissions granted to invoke and execute a Lambda function
for S3 access? (Select two)
A. S3 bucket
B. EC2 instance
C. Lambda function
D. IAM role
E. event mapping
Answer (A,D)
Question 8:
You have some developers working on code for an application and they require
temporary access to the AWS cloud for up to an hour. What is the easiest web-based
solution from AWS that provides access and minimizes security exposure?
A. ACL
B. security group
C. IAM group
D. STS
E. EFS
Answer (D)
Question 9:
What two methods are used to request temporary credentials based on AWS
Security Token Service (STS)?
A. Web Identity Federation
B. LDAP
C. IAM identity
D. dynamic ACL
E. private key rotation
Answer (A,C)
Question 10:
What two components are required for enabling SAML authentication requests
to AWS Identity and Access Management (IAM)?
A. access keys
B. session token
C. SSO
D. identity provider (IdP)
E. SAML provider entity
Answer (D,E)
Question 11:
What are two reasons for deploying Origin Access Identity (OAI) when enabling
CloudFront?
A. prevent users from deleting objects in S3 buckets
B. mitigate distributed denial of service attacks (DDoS)
C. prevent users from accessing objects with Amazon S3 URL
D. prevent users from accessing objects with CloudFront URL
E. replace IAM for internet-based customer authentication
Answer (B,C)
Question 12:
What solutions are recommended to mitigate DDoS attacks? (Select three)
A. host-based firewall
B. elastic load balancer
C. WAF
D. SSL/TLS
E. Bastion host
F. NAT gateway
Answer (B,C,E)
Question 13:
What features are required to prevent users from bypassing AWS CloudFront
security? (Select three)
A. Bastion host
B. signed URL
C. IP whitelist
D. signed cookies
E. origin access identity (OAI)
Answer (B,D,E)
Question 14:
What is the advantage of resource-based policies for cross-account access?
A. trusted account permissions are not replaced
B. trusted account permissions are replaced
C. resource-based policies are easier to deploy
D. trusting account manages all permissions
Answer (A)
Question 15:
Select three requirements for configuring a Bastion host?
A. EIP
B. SSH inbound permission
C. default route
D. CloudWatch logs group
E. VPN
F. Auto-Scaling
Answer (A,B,D)
Question 16:
What rule must be added to the security group assigned to a mount target
instance that enables EFS access from an EC2 instance?
A. Type = EC2, protocol = IP, port = 2049, source = remote security group
id
B. Type = EC2, protocol = EFS, port = 2049, source = 0.0.0.0/0
C. Type = NFS, protocol = TCP, port = 2049, source = remote security
group id
D. Type = NFSv4, protocol = UDP, port = 2049, source = remote security
group id
Answer (C)
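
That rule can be sketched with boto3 as follows (both security group IDs are placeholders); it allows NFS over TCP port 2049 inbound to the mount target from the EC2 instance's security group:
    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0mounttarget000000",   # placeholder: mount target security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 2049,              # NFS
            "ToPort": 2049,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0ec2instance00000"}  # placeholder: EC2 instance SG
            ],
        }],
    )
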
Question 17: What statement correctly describes IAM architecture?
A. IAM security is unified per region and replicated based on requirements
for an AWS tenant account
B. IAM security is defined per region for roles only on an AWS tenant
account
C. IAM security is globally unified across the AWS cloud for an AWS
tenant account
D. IAM security is defined separately per region and cross-region security
enabled for an AWS tenant account
Answer (C)
Question 18:
What are two advantages of customer-managed encryption keys (CMK)?
A. create and rotate encryption keys
B. AES-128 cipher for data at rest
C. audit encryption keys
D. encrypts data in-transit for server-side encryption only
Answer (A,C)
Question 19:
What feature is not available with AWS Trusted Advisor?
A. cost optimization
B. infrastructure best practices
C. vulnerability assessment
D. monitor application metrics
Answer (C)
Question 20:
What is required to Ping from a source instance to a destination instance?
A. Network ACL: not required Security Group: allow ICMP outbound on
source/destination EC2 instances
B. Network ACL: allow ICMP inbound/outbound on source/destination subnets
Security Group: not required
C. Network ACL: allow ICMP inbound/outbound on source/destination subnets
Security Group: allow ICMP outbound on source EC2 instance Security
Group: allow ICMP inbound on destination EC2 instance
D. Network ACL: allow TCP inbound/outbound on source/destination subnets
Security Group: allow TCP and ICMP inbound on source EC2 instance
Answer (C)
Question 21:
What two steps are required to grant cross-account permissions between AWS
accounts?
A. create an IAM user
B. attach a trust policy to S3
C. create a transitive policy
D. attach a trust policy to the role
E. create an IAM role
Answer (D,E)
Question 22: You have configured a security group to allow ICMP, SSH and
RDP inbound and assigned the security group to all instances in a subnet. There
is no access to any Linux-based or Windows-based instances and you cannot
Ping any instances. The network ACL for the subnet is configured to allow all
inbound traffic to the subnet. What is the most probable cause?
A. on-premises firewall rules
B. security group and network ACL outbound rules
C. network ACL outbound rules
D. security group outbound rules
E. Bastion host required
Answer (C)
Question 23:
What three techniques provide authentication security on S3 volumes?
A. bucket policies
B. network ACL
C. Identity and Access Management (IAM)
D. encryption
E. AES256
Answer (A,B,C)
Question 24: What statement correctly describes support for AWS encryption of
S3 objects?
A. tenants manage encryption for server-side encryption of S3 objects
B. Amazon manages encryption for server-side encryption of S3 objects
C. client-side encryption of S3 objects is not supported
D. S3 buckets are encrypted only
E. SSL is only supported with Glacier storage
Answer (B)
Question 25:
What authentication method provides Federated Single Sign-On (SSO) for
cloud applications?
A. ADS
B. ISE
C. RADIUS
D. TACACS
E. SAML
Answer (E)
Question 26:
Based on the Amazon security model, what infrastructure configuration and
associated security is the responsibility of tenants and not Amazon AWS? (Select
two)
A. dedicated cloud server
B. hypervisor
C. operating system level
D. application level
E. upstream physical switch
Answer (C,D)
Question 27:
What security authentication is required before configuring or modifying EC2
instances? (Select three)
A. authentication at the operating system level
B. EC2 instance authentication with asymmetric keys
C. authentication at the application level
D. Telnet username and password
E. SSH/RDP session connection
Answer (A,B,E)
Question 28:
What feature is part of Amazon Trusted Advisor?
A. security compliance
B. troubleshooting tool
C. EC2 configuration tool
D. security certificates
Answer (A)
Question 29:
What are two best practices for account management within Amazon AWS?
A. do not use root account for common administrative tasks
B. create a single AWS account with multiple IAM users that have root
privilege
C. create multiple AWS accounts with multiple IAM users per AWS
account
D. use root account for all administrative tasks
E. create multiple root user accounts for redundancy
Answer (A,C)
Question 30:
What AWS feature is recommended for optimizing data security?
A. Multi-factor authentication
B. username and encrypted password
C. Two-factor authentication
D. SAML
E. Federated LDAP
Answer (A)
Question 31:
What IAM class enables an EC2 instance to access a file object in an S3 bucket?
A. user
B. root
C. role
D. group
Answer (C)
Question 32:
What are three recommended solutions that provide protection and mitigation
from distributed denial of service (DDoS) attacks?
A. security groups
B. CloudWatch
C. encryption
D. WAF
E. data replication
F. Auto-Scaling
Answer (A,B,D)
Question 33:
What are three recommended best practices when configuring Identity and
Access Management (IAM) security services?
A. Lock or delete your root access keys when not required
B. IAM groups are not recommended for storage security
C. create an IAM user with administrator privileges
D. share your password and/or access keys with members of your group
only
E. delete any AWS account where the access keys are unknown
Answer (A,C,E)
Question 34:
What two features create security zones between EC2 instances within a VPC?
A. security groups
B. Virtual Security Gateway
C. network ACL
D. WAF
Answer (A,B)
Question 35:
What AWS service provides vulnerability assessment services to tenants within
the cloud?
A. Amazon WAF
B. Amazon Inspector
C. Amazon Cloud Logic
D. Amazon Trusted Advisor
Answer (B)
Question 36:
What are two primary differences between AD Connector and Simple AD for
cloud directory services?
A. Simple AD requires an on-premises ADS directory
B. Simple AD is fully managed and setup in minutes
C. AD Connector requires an on-premises ADS directory
D. Simple AD is more scalable than AD Connector
E. Simple AD provides enhanced integration with IAM
Answer (B,C)
Database Services
Question 1:
How is load balancing enabled for multiple tasks to the same container instance?
A. path-based routing
B. reverse proxy
C. NAT
D. dynamic port mapping
E. dynamic listeners
Answer (D)
Question 2:
What encryption support is available for tenants that are deploying AWS
DynamoDB?
A. server-side encryption
B. client-side encryption
C. client-side and server-side encryption
D. encryption not supported
E. block level encryption
Answer (B)
Question 3:
What are three primary reasons for deploying ElastiCache?
A. data security
B. managed service
C. replication with Redis
D. durability
E. low latency
Answer (B,C,E)
Question 4:
What service does not support a session data persistence store to enable web-based
stateful applications?
A. RDS
B. Memcached
C. DynamoDB
D. Redis
E. RedShift
Answer (B)
Question 5:
How does Memcached implement horizontal scaling?
A. Auto-Scaling
B. database store
C. partitioning
D. EC2 instances
E. S3 bucket
Answer (C)
Question 6:
What two options are available for tenants to access ElastiCache?
A. VPC peering link
B. EC2 instances
C. EFS mount
D. cross-region VPC
Answer (A,B)
Question 7:
What two statements correctly describe in-transit encryption support on
the ElastiCache platform?
A. not supported for ElastiCache platform
B. supported on Redis replication group
C. encrypts cached data at rest
D. not supported on Memcached cluster
E. IPsec must be enabled first
Answer (B,D)
Question 8:
What Amazon AWS platform is designed for complex analytics of a variety of
large data sets based on custom code? The applications include machine learning
and data transformation.
A. EC2
B. Beanstalk
C. Redshift
D. EMR
Answer (D)
Question 9:
What are two primary advantages of DynamoDB?
A. SQL support
B. managed service
C. performance
D. CloudFront integration
Answer (B,C)
Question 10:
What two fault tolerant features does Amazon RDS support?
A. copy snapshot to a different region
B. create read replica to a different region
C. copy unencrypted read-replica only
D. copy read/write replica and snapshot
Answer (A,B)
Question 11:
What managed services are included with Amazon RDS? (select four)
A. assign network capacity to database instances
B. install database software
C. perform regular backups
D. data replication across multiple availability zones
E. data replication across single availability zone only
F. configure database
G. performance tuning
Answer (A,B,C,D)
Question 12:
What two configuration features are required to create a private database
instance?
A. security group
B. network ACL
C. CloudWatch
D. Elastic IP (EIP)
E. Nondefault VPC
F. DNS
Answer (A,F)
Question 13:
What storage type is recommended for an online transaction processing (OLTP)
application deployed to Multi-AZ RDS with significant workloads?
A. General Purpose SSD
B. Magnetic
C. EBS volumes
D. Provisioned IOPS
Answer (D)
Question 14:
What features are supported with Amazon RDS? (Select three)
A. horizontal scaling with multiple read replicas
B. elastic load balancing RDS read replicas
C. replicate read replicas cross-region
D. automatic failover to master database instance
E. application load balancer (ALB)
Answer (A,C,E)
Question 15:
What are three advantages of standby replica in a Multi-AZ RDS deployment?
A. fault tolerance
B. eliminate I/O freezes
C. horizontal scaling
D. vertical scaling
E. data redundancy
Answer (A,B,E)
Question 16:
What consistency model is the default used by DynamoDB?
A. strongly consistent
B. eventually consistent
C. no default model
D. causal consistency
E. sequential consistency
Answer (B)
Question 17:
What does RDS use for database and log storage?
A. EBS
B. S3
C. instance store
D. local store
E. SSD
Answer (A)
Question 18:
What statements correctly describe support for Microsoft SQL Server within
Amazon VPC? (Select three)
A. read/write replica
B. read replica only
C. vertical scaling
D. native load balancing
E. EBS storage only
F. S3 storage only
Answer (B,C,D)
Question 19:
Select two features available with Amazon RDS for MySQL?
A. Auto-Scaling
B. read requests to standby replicas
C. real-time database replication
D. active read requests only
Answer (B,C)
Question 20:
What are two characteristics of Amazon RDS?
A. database managed service
B. NoSQL queries
C. native load balancer
D. database write replicas
E. automatic failover of read replica
Answer (A,C)
Question 21:
What caching engines are supported with Amazon ElastiCache? (Select two)
A. HAProxy
B. Route 53
C. RedShift
D. Redis
E. Memcached
F. CloudFront
Answer (D,E)
Question 22:
What are three primary characteristics of DynamoDB?
A. less scalable than RDS
B. static content
C. store metadata for S3 objects
D. replication to three Availability Zones
E. high read/write throughput
Answer (C,D,E)
Question 23:
What are three examples of using Lambda functions to move data between AWS
services?
A. read data directly from DynamoDB streams to RDS
B. read data from Kinesis stream and write data to DynamoDB
C. read data from DynamoDB stream to Firehose and write to S3
D. read data from S3 and write metadata to DynamoDB
E. read data from Kinesis Firehose to Kinesis data stream
Answer (B,C,D)
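As an illustration of option B, here is a minimal Lambda handler sketch that reads records from a Kinesis stream event and writes them to a DynamoDB table; the table name and record format are hypothetical:

    # Hypothetical sketch: Lambda triggered by a Kinesis stream, writing to DynamoDB.
    import base64
    import json
    import boto3

    table = boto3.resource("dynamodb").Table("SensorData")  # hypothetical table name

    def lambda_handler(event, context):
        for record in event["Records"]:
            # Kinesis record payloads arrive base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            table.put_item(Item=payload)
        return {"processed": len(event["Records"])}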
Question 24: You have enabled Amazon RDS database services in VPC1 for an
application with public web servers in VPC2. How do you connect the web
servers to the RDS database instance so they can communicate, considering the
VPCs are in different regions?
A. VPC endpoints
B. VPN gateway
C. path-based routing
D. publicly accessible database
E. VPC peering
Answer (D)
Question 25:
You have a requirement to create an index to search customer objects stored in
S3 buckets. The solution should enable you to create a metadata search index for
each object stored to an S3 bucket. Select the most scalable and cost-effective
solution?
A. RDS, ElastiCache
B. DynamoDB, Lambda
C. RDS, EMR, ALB
D. RedShift
Answer (B)
Question 26: What are three advantages of using DynamoDB over S3 for
storing IoT sensor data where there are 100,000 datapoint samples sent per
minute?
A. S3 must create a single file for each event
B. IoT can write data directly to DynamoDB
C. DynamoDB provides fast read/writes to a structured table for queries
D. DynamoDB is designed for frequent access and fast lookup of small
records
E. S3 is designed for frequent access and fast lookup of smaller records
F. IoT can write data directly to S3
Answer (B,C,D)
Question 27:
Your company is a provider of online gaming that customers access with various
network access devices, including mobile phones. What is a data warehousing
solution for large amounts of information on player behavior, statistics, and
events, for analysis using SQL tools?
A. RedShift
B. DynamoDB
C. RDS
D. DynamoDB
E. Elasticsearch
Answer (A)
Question 28: What two statements are correct when comparing Elasticsearch
and RedShift as analytical tools?
A. Elasticsearch is a text search engine and document indexing tool
B. RedShift supports complex SQL-based queries with Petabyte sized data
store
C. Elasticsearch supports SQL queries
D. RedShift provides only basic analytical services
E. Elasticsearch does not support JSON data type
Answer (A,B)
Question 29:
What happens when read or write requests exceed capacity units (throughput
capacity) for a DynamoDB table or index? (Select two)
A. DynamoDB automatically increases read/write units
B. DynamoDB can throttle requests so that requests are not exceeded
C. HTTP 400 code is returned (Bad Request)
D. HTTP 500 code is returned (Server Error)
E. DynamoDB automatically increases read/write units if provisioned
throughput is enabled
Answer (B,C)
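For reference, when provisioned throughput is exceeded DynamoDB returns an HTTP 400 ProvisionedThroughputExceededException, which clients typically retry with backoff (the AWS SDKs also retry automatically). A minimal boto3 sketch; the table name and item are hypothetical:

    # Hypothetical sketch of handling a throttled DynamoDB write with backoff.
    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    def put_with_backoff(item, retries=5):
        for attempt in range(retries):
            try:
                return dynamodb.put_item(TableName="SensorData", Item=item)
            except dynamodb.exceptions.ProvisionedThroughputExceededException:
                # HTTP 400: request rate exceeded provisioned capacity; back off and retry.
                time.sleep(2 ** attempt)
        raise RuntimeError("request still throttled after retries")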
Question 30:
What read consistency method provides lower latency for GetItem requests?
A. strongly persistent
B. eventually consistent
C. strongly consistent
D. write consistent
Answer (B)
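For reference, a minimal boto3 sketch showing the ConsistentRead flag on GetItem; omitting it (the default) gives an eventually consistent, lower-latency read. The table and key names are hypothetical:

    # Hypothetical table/key names; assumes AWS credentials are configured.
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Default: eventually consistent read (lower latency, half the read-unit cost).
    item = dynamodb.get_item(
        TableName="Players",
        Key={"PlayerId": {"S": "42"}},
    )

    # Strongly consistent read: set ConsistentRead=True explicitly.
    item_strong = dynamodb.get_item(
        TableName="Players",
        Key={"PlayerId": {"S": "42"}},
        ConsistentRead=True,
    )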
Question 31:
You must specify strongly consistent read and write capacity for your
DynamoDB database. You have determined that read capacity of 128 KB per second
and write capacity of 25 KB per second is required for your application. What
are the read and write capacity units required for the DynamoDB table?
A. 32 read units, 25 write units
B. 1 read unit, 1 write unit
C. 16 read units, 2.5 write units
D. 64 read units, 10 write units
Answer (A)
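The arithmetic behind answer A, shown as a short sketch: a strongly consistent read capacity unit covers 4 KB per second and a write capacity unit covers 1 KB per second, so 128 KB/s of reads needs 32 read units and 25 KB/s of writes needs 25 write units:

    import math

    READ_UNIT_KB = 4   # one strongly consistent read capacity unit = 4 KB/s
    WRITE_UNIT_KB = 1  # one write capacity unit = 1 KB/s

    read_units = math.ceil(128 / READ_UNIT_KB)   # 32
    write_units = math.ceil(25 / WRITE_UNIT_KB)  # 25
    print(read_units, write_units)               # 32 25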
Question 32:
What DynamoDB capacity management technique is based on the tenant
specifying an upper and lower range for read/write capacity units?
A. demand
B. provisioned throughput
C. reserved capacity
D. auto scaling
E. general purpose
Answer (D)
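For reference, DynamoDB auto scaling is configured through Application Auto Scaling by registering the table with minimum and maximum capacity bounds. A minimal boto3 sketch; the table name and limits are hypothetical, and a target-tracking scaling policy would normally be attached as well:

    # Hypothetical sketch: register a DynamoDB table for auto scaling of read capacity.
    import boto3

    autoscaling = boto3.client("application-autoscaling")

    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/SensorData",                        # hypothetical table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,                                        # lower bound
        MaxCapacity=100,                                      # upper bound
    )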
Question 33:
What is the maximum volume size of a MySQL RDS database?
A. 6 TB
B. 3 TB
C. 16 TB
D. unlimited
Answer (C)
Question 34:
What is the maximum size of a DynamoDB record (item)?
A. 400 KB
B. 64 KB
C. 1 KB
D. 10 KB
Answer (A)
Fault Tolerant Systems
Question 1:
What two features describe an Application Load Balancer (ALB)?
A. dynamic port mapping
B. SSL listener
C. layer 7 load balancer
D. backend server authentication
E. multi-region forwarding
Answer (A,C)
Question 2:
What enables load balancing between multiple applications per load balancer?
A. listeners
B. sticky sessions
C. path-based routing
D. backend server authentication
Answer (C)
Question 3:
What three features are characteristic of Classic Load Balancer?
A. dynamic port mapping
B. path-based routing
C. SSL listener
D. backend server authentication
E. ECS
F. Layer 4 based load balancer
Answer (C,D,F)
Question 4:
What security feature is only available with Classic Load Balancer?
A. IAM role
B. SAML
C. back-end server authentication
D. security groups
E. LDAP
Answer (C)
Question 5:
What is a primary difference between Classic and Network Load Balancer?
A. IP address target
B. Auto-Scaling
C. protocol target
D. cross-zone load balancing
E. listener
Answer (A)
Question 6: What are the first two conditions used by the Amazon AWS default
termination policy for a Multi-AZ architecture?
A. unprotected instance with oldest launch configuration
B. Availability Zone (AZ) with the most instances
C. at least one instance that is not protected from scale in
D. unprotected instance closest to the next billing hour
E. random selection of any unprotected instance
Answer (B,C)
Question 7:
What feature is used for horizontal scaling of consumers to process data records
from a Kinesis data stream?
A. vertical scaling shards
B. Auto-Scaling
C. Lambda
D. Elastic Load Balancer
Answer (B)
Question 8:
What DNS records can be used for pointing a zone apex to an Elastic Load
Balancer or CloudFront distribution? (Select two)
A. Alias
B. CNAME
C. MX
D. A
E. Name Server
Answer (A,D)
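For reference, an Alias record at the zone apex points to the load balancer's DNS name rather than an IP address. A minimal boto3 sketch; the hosted zone IDs and DNS names are hypothetical placeholders:

    # Hypothetical sketch: create an Alias A record at the zone apex for an ELB.
    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLEZONE",            # your hosted zone (placeholder)
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",       # zone apex
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z00000000000000",  # the ELB's zone ID (placeholder)
                        "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )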
Question 9: What services are primarily provided by DNS Route 53? (Select
three)
A. load balancing web servers within a private subnet
B. resolve hostnames and IP addresses
C. load balancing web servers within a public subnet
D. load balancing data replication requests between ECS containers
E. resolve queries and route internet traffic to AWS resources
F. automated health checks to EC2 instances
Answer (B,E,F)
Question 10:
What are two features that correctly describe Availability Zone (AZ)
architecture?
A. multiple regions per AZ
B. interconnected with private WAN links
C. multiple AZ per region
D. interconnected with public WAN links
E. data auto-replicated between zones in different regions
F. Direct Connect supports Layer 2 connectivity to region
Answer (B,C)
Question 11:
How is Route 53 configured for Warm Standby fault tolerance? (Select two)
A. automated health checks
B. path-based routing
C. failover records
D. Alias records
Answer (A,C)
Question 12:
How is DNS Route 53 configured for Multi-Site fault tolerance? (Select two)
A. IP address
B. weighted records (non-zero)
C. health checks
D. Alias records
E. zero weighted records
Answer (B,C)
Question 13:
What is an Availability Zone?
A. data center
B. multiple VPCs
C. multiple regions
D. single region
E. multiple EC2 server instances
Answer (A)
Question 14:
How are DNS records managed with Amazon AWS to enable high availability?
A. Auto-Scaling
B. server health checks
C. reverse proxy
D. elastic load balancing
Answer (C)
Question 15:
What is the difference between Warm Standby and Multi-Site fault tolerance?
(Select two)
A. Multi-Site enables lower RTO and most recent RPO
B. Warm Standby enables lower RTO and most recent RPO
C. Multi-Site provides active/active load balancing
D. Multi-Site provides active/standby load balancing
E. DNS Route 53 is not required for Warm Standby
Answer (A,C)
Question 16:
What AWS best practice is recommended for creating fault tolerant systems?
A. vertical scaling
B. Elastic IP (EIP)
C. security groups
D. horizontal scaling
E. RedShift
Answer (D)
Question 17:
What two statements correctly describe versioning for protecting data at rest on
S3 buckets?
A. enabled by default
B. overwrites most current file version
C. restores deleted files
D. saves multiple versions of a single file
E. disabled by default
Answer (C,E)
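For reference, versioning is off by default and must be enabled per bucket; once enabled, a delete only adds a delete marker and earlier versions remain recoverable. A minimal boto3 sketch with a hypothetical bucket name:

    # Hypothetical sketch: enable versioning on an existing bucket.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="my-example-bucket",                       # placeholder bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )

    # List versions (including delete markers) under a key prefix.
    versions = s3.list_object_versions(Bucket="my-example-bucket", Prefix="reports/")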
Question 18:
What two methods are recommended by AWS for protecting EBS data at rest?
A. replication
B. snapshots
C. encryption
D. VPN
Answer (B,C)
Question 19: You have an Elastic Load Balancer assigned to a VPC with public
and private subnets. ELB is configured to load balance traffic to a group of EC2
instances assigned to an Auto-Scaling group. What three statements are correct?
A. Elastic Load Balancer is assigned to a public subnet
B. network ACL is assigned to Elastic Load Balancer
C. security group is assigned to Elastic Load Balancer
D. cross-zone load balancing is not supported
E. Elastic Load Balancer forwards traffic to primary private IP address
(eth0 interface) on each instance
Answer (A,C,E)
Deployment
Question 1:
What Amazon AWS service is available for container management?
A. ECS
B. Docker
C. Kinesis
D. Lambda
Answer (A)
Question 2:
What is associated with Microservices? (Select two)
A. Application Load Balancer
B. Kinesis
C. RDS
D. DynamoDB
E. ECS
Answer (A,E)
Question 3:
Where does Amazon retrieve web content when it is not in the nearest
CloudFront edge location?
A. secondary location
B. file server
C. EBS
D. S3 bucket
Answer (D)
Question 4:
What two features of an API Gateway minimize the effects of peak traffic events
and minimize latency?
A. load balancing
B. firewalling
C. throttling
D. scaling
E. caching
Answer (C,E)
Question 5:
What three characteristics differentiate Lambda from traditional EC2
deployment or containerization?
A. Lambda is based on Kinesis scripts
B. Lambda is serverless
C. tenant has ownership of EC2 instances
D. tenant has no control of EC2 instances
E. Lambda is a code-based service
F. Lambda supports only S3 and Glacier
Answer (B,D,E)
Question 6:
How is code uploaded to Lambda?
A. Lambda instance
B. Lambda container
C. Lambda entry point
D. Lambda function
E. Lambda AMI
Answer (D)
Question 7:
How are Lambda functions triggered?
A. EC2 instance
B. hypervisor
C. Kinesis
D. operating system
E. event source
Answer (E)
Question 8: What three statements correctly describe standard Lambda
operation?
A. Lambda function is allocated 500 MB ephemeral disk space
B. Lambda function is allocated 100 MB EBS storage
C. Lambda stores code in S3
D. Lambda stores code in a Glacier vault
E. Lambda stores code in containers
F. maximum execution time is 300 seconds
Answer (A,C,F)
Question 9: What network events are restricted by Lambda? (Select two)
A. only inbound TCP network connections are blocked by AWS Lambda
B. all inbound network connections are blocked by AWS Lambda
C. all inbound and outbound connections are blocked
D. outbound connections support only TCP/IP sockets
E. outbound connections support only SSL sockets
Answer (B,D)
Question 10:
How is versioning supported with Lambda? (Select two)
A. Lambda native support
B. ECS container
C. not supported
D. Aliases
E. replication
F. S3 versioning
Answer (A,D)
Question 11: What is the difference between stream-based event sources and
other AWS service event sources when enabling Lambda?
A. streams maintains event source mapping in Lambda
B. streams maintains event source mapping in event source
C. streams maintains event source mapping in EC2 instance
D. streams maintains event source mapping in notification
E. streams maintains event source mapping in API
Answer (A)
Question 12:
Select two custom origin servers from the following?
A. S3 bucket
B. S3 object
C. EC2 instance
D. Elastic Load Balancer
E. API gateway
Answer (C,D)
Question 13:
What two attributes are only associated with CloudFront private content?
A. Amazon S3 URL
B. signed cookies
C. web distribution
D. signed URL
E. object
Answer (B,D)
Question 14:
How are origin servers located within CloudFront? (Select two)
A. DNS request
B. distribution list
C. web distribution
D. RTMP protocol
E. source mapping
Answer (A,C)
Question 15:
Where are HTML files sourced from when they are not cached at a CloudFront
edge location?
A. S3 object
B. origin HTTP server
C. S3 bucket
D. nearest edge location
E. RTMP server
F. failover edge location
Answer (B)
Question 16:
What is the capacity of a single Kinesis shard? (Select two)
A. 2000 PUT records per second
B. 1 MB/sec data input and 2 MB/sec data output
C. 10 MB/sec data input and 10 MB/sec data output
D. 1000 PUT records per second
E. unlimited
Answer (B,D)
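A worked sizing sketch using the per-shard limits from the question (1 MB/s data input, 2 MB/s data output, 1,000 PUT records per second); the workload figures are hypothetical:

    import math

    # Hypothetical workload figures.
    ingest_mb_per_s = 4.5
    egress_mb_per_s = 6.0
    records_per_s = 3200

    shards = max(
        math.ceil(ingest_mb_per_s / 1),    # 1 MB/s data input per shard
        math.ceil(egress_mb_per_s / 2),    # 2 MB/s data output per shard
        math.ceil(records_per_s / 1000),   # 1,000 PUT records/s per shard
    )
    print(shards)  # 5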
Question 17:
What Amazon AWS service supports real-time processing of data stream from
multiple consumers and replay of records?
A. DynamoDB
B. EMR
C. Kinesis data streams
D. SQS
E. RedShift
Answer (C)
Question 18: Your company has asked you to capture and forward a real-time
data stream on a massive scale directly to RedShift for analysis with BI tools.
What AWS tool is most appropriate, providing the required feature set
cost-effectively?
A. DynamoDB
B. SQS
C. Elastic Map Reduce
D. Kinesis Firehose
E. SNS
F. CloudFront
Answer (D)
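For reference, producers write to a Kinesis Data Firehose delivery stream, which buffers the data and loads it into RedShift. A minimal boto3 sketch; the stream name and payload are hypothetical:

    # Hypothetical sketch: send a record to a Kinesis Data Firehose delivery stream.
    import json
    import boto3

    firehose = boto3.client("firehose")

    record = {"player_id": 42, "event": "level_complete", "score": 1337}
    firehose.put_record(
        DeliveryStreamName="game-events-to-redshift",   # placeholder stream name
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )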
Question 19:
What feature permits tenants to use a private domain name instead of the domain
name that CloudFront assigns to a distribution?
A. Route 53
B. CNAME record
C. MX record
D. RTMP
E. Signed URL
Answer (B)
Question 20:
What Amazon AWS service is available to guarantee that a given message is
consumed only once?
A. Beanstalk
B. SQL
C. Exchange
D. SQS
Answer (D)
Question 21:
What is the fastest and easiest method for migrating an on-premises VMware
virtual machine to the AWS cloud?
A. Amazon Marketplace
B. AWS Server Migration Service
C. AWS Storage Gateway
D. EC2 Import/Export
Answer (B)
Question 22:
Select the stateless protocol from the following?
A. FTP
B. TCP
C. HTTP
D. SSH
Answer (C)
Question 23:
What are three valid endpoints for an API gateway?
A. RESTful API
B. Lambda function
C. AWS service
D. web server
E. HTTP method
Answer (B,C,D)
Question 24:
How is a volume selected (identified) when making an EBS Snapshot?
A. account id
B. volume id
C. tag
D. ARN
Answer (D)
Question 25:
What deployment service enables tenants to replicate an existing AWS stack?
A. Beanstalk
B. CloudFormation
C. RedShift
D. EMR
Answer (B)
Question 26:
What three services can invoke a Lambda function?
A. SNS topic
B. CloudWatch event
C. EC2 instance
D. security group
E. S3 bucket notification
Answer (A,B,E)
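As an illustration of option E, a minimal boto3 sketch wiring an S3 bucket notification to a Lambda function; the bucket name and ARN are hypothetical, and the function must separately grant S3 permission to invoke it (for example via Lambda add_permission):

    # Hypothetical sketch: invoke a Lambda function on new object uploads.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="my-example-bucket",                      # placeholder
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [{
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )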
Question 27:
What two services enable automatic polling of a stream for new records only and
forward them to an AWS storage service?
A. SNS
B. Kinesis
C. Lambda
D. DynamoDB
Answer (B,C)
Question 28: Your company is deploying a web site with dynamic content to
customers in US, EU and APAC regions of the world. Content will include live
streaming videos to customers. SSL certificates are required for security
purposes. Select the AWS service that delivers all requirements and provides
the lowest latency?
A. DynamoDB
B. CloudFront
C. S3
D. Redis
Answer (B)
Question 29:
What are the advantages of Beanstalk? (Select two)
A. orchestration and deployment abstraction
B. template-oriented deployment service
C. easiest solution for developers to deploy cloud applications
D. does not support cloud containers
Answer (A,C)
Question 30: You are a network analyst with JSON scripting experience and are
asked to select an AWS solution that enables automated deployment of cloud
services. The template design would include a nondefault VPC with EC2
instances, ELB, Auto-Scaling and active/active failover. What AWS solution is
recommended?
A. Beanstalk
B. OpsWorks
C. CloudTrail
D. CloudFormation
Answer (D)
Question 31:
Select two statements that correctly describe OpsWorks?
A. OpsWorks provides operational and configuration automation
B. OpsWorks is a lower-cost alternative to Beanstalk
C. OpsWorks is primarily a monitoring service
D. Chef scripts (recipes) are a key aspect of OpsWorks
Answer (A,D)
Question 32:
Your company has developed an IoT application that sends telemetry data from
100,000 sensors. The sensors send a datapoint of 1 KB at one-minute intervals to
a DynamoDB collector for monitoring purposes. What AWS stack would enable
you to store data for real-time processing and analytics using BI tools?
A. Sensors -> Kinesis Stream -> Firehose -> DynamoDB
B. Sensors -> Kinesis Stream -> Firehose -> DynamoDB -> S3
C. Sensors -> AWS IoT -> Firehose -> RedShift
D. Sensors -> Kinesis Data Streams -> Firehose -> RDS
Answer (C)
Question 33:
Your company has an application that was developed and migrated to AWS
cloud. The application leverages some AWS services as part of the architecture.
The stack includes EC2 instances, RDS database, S3 buckets, RedShift and
Lambda functions. In addition, IAM security permissions are configured with
defined users, groups, and roles.
The application is monitored with CloudWatch, and STS was recently added to
permit Web Identity Federation sign-on from Google accounts. You want a
solution that can leverage the experience of your employees with AWS cloud
infrastructure as well. What AWS service can create a template of the design and
configuration for easier deployment of the application to multiple regions?
A. Snowball
B. Opsworks
C. CloudFormation
D. Beanstalk
Answer (C)
Monitoring Services
Question 1:
What statement correctly describes CloudWatch operation within AWS cloud?
A. log data is stored indefinitely
B. log data is stored for 15 days
C. alarm history is never deleted
D. ELB is not supported
Answer (A)
Question 2:
What are two AWS subscriber endpoint services that are supported with SNS?
A. RDS
B. Kinesis
C. SQS
D. Lambda
E. EBS
F. ECS
Answer (C,D)
Question 3:
What AWS services work in concert to integrate security monitoring and
audit within a VPC? (Select three)
A. Syslog
B. CloudWatch
C. WAF
D. CloudTrail
E. VPC Flow Log
Answer (B,D,E)
Question 4:
How is CloudWatch integrated with Lambda? (Select two)
A. tenant must enable CloudWatch monitoring
B. network metrics such as latency are not monitored
C. Lambda functions are automatically monitored through Lambda service
D. log group is created for each event source
E. log group is created for each function
Answer (C,E)
Question 5:
What two statements correctly describe AWS monitoring and audit operations?
A. CloudTrail captures API calls, stores them in an S3 bucket and generates
a Cloudwatch event
B. CloudWatch alarm can send a message to a Lambda function
C. CloudWatch alarm can send a message to an SNS Topic that triggers an
event for a Lambda function
D. CloudTrail captures all AWS events and stores them in a log file
E. VPC logs do not support events for security groups
Answer (A,C)
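As an illustration of option C, a minimal boto3 sketch that creates a CloudWatch alarm whose action publishes to an SNS topic; a Lambda function subscribed to that topic would then be triggered. The instance ID and topic ARN are hypothetical:

    # Hypothetical sketch: CPU alarm on an EC2 instance that notifies an SNS topic.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="HighCPU",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder topic
    )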
Question 6:
What is required for remote management access to your Linux-based instance?
A. ACL
B. Telnet
C. SSH
D. RDP
Answer (C)
Question 7:
What are two features of CloudWatch operation?
A. CloudWatch does not support custom metrics
B. CloudWatch permissions are granted per feature and not AWS resource
C. collect and monitor operating system and application generated log files
D. AWS services automatically create logs for CloudWatch
E. CloudTrail generates logs automatically when AWS account is activated
Answer (B,C)
Question 8:
You are asked to select an AWS solution that will create a log entry anytime
someone takes a snapshot of an RDS database instance and deletes the original
instance. Select the AWS service that would provide that feature?
A. VPC Flow Logs
B. RDS Access Logs
C. CloudWatch
D. CloudTrail
Answer (D)
Question 9:
What is required to collect application and operating system generated logs
and publish them to CloudWatch Logs?
A. Syslog
B. enable access logs
C. IAM cross-account enabled
D. CloudWatch Log Agent
Answer (D)
Question 10:
What is the purpose of VPC Flow Logs?
A. capture VPC error messages
B. capture IP traffic on network interfaces
C. monitor network performance
D. monitor netflow data from subnets
E. enable Syslog services for VPC
Answer (B)
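For reference, a minimal boto3 sketch that enables flow logs for a VPC, delivering IP traffic records to CloudWatch Logs; the VPC ID, log group, and IAM role ARN are hypothetical:

    # Hypothetical sketch: capture IP traffic metadata for all interfaces in a VPC.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_flow_logs(
        ResourceIds=["vpc-0123456789abcdef0"],            # placeholder VPC
        ResourceType="VPC",
        TrafficType="ALL",                                # ACCEPT, REJECT, or ALL
        LogDestinationType="cloud-watch-logs",
        LogGroupName="vpc-flow-logs",                     # placeholder log group
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
    )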
Question 11:
Select two cloud infrastructure services and/or components included with default
CloudWatch monitoring?
A. SQS queues
B. operating system metrics
C. hypervisor metrics
D. virtual appliances
E. application level metrics
Answer (A,C)
Question 12:
What feature enables CloudWatch to manage capacity dynamically for EC2
instances?
A. replication lag
B. Auto-Scaling
C. Elastic Load Balancer
D. vertical scaling
Answer (B)
Question 13:
What AWS service is used to monitor tenant remote access and various security
errors including authentication retries?
A. SSH
B. Telnet
C. CloudFront
D. CloudWatch
Answer (D)
Question 14:
How does Amazon AWS isolate metrics from different applications for
monitoring, storage, and reporting purposes?
A. EC2 instances
B. Beanstalk
C. CloudTrail
D. namespaces
E. Docker
Answer (D)
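For reference, custom metrics are published under a namespace chosen by the application, which keeps each application's metrics isolated. A minimal boto3 sketch with a hypothetical namespace and metric name:

    # Hypothetical sketch: publish an application metric under its own namespace.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_data(
        Namespace="MyApp/Checkout",                # isolates this app's metrics
        MetricData=[{
            "MetricName": "OrdersProcessed",
            "Value": 17,
            "Unit": "Count",
        }],
    )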
Question 15:
What Amazon AWS service provides account transaction monitoring and
security audit?
A. CloudFront
B. CloudTrail
C. CloudWatch
D. security group
Answer (B)
Question 16:
What two statements correctly describe CloudWatch monitoring of database
instances?
A. metrics are sent automatically from DynamoDB and RDS to
CloudWatch
B. alarms must be configured for DynamoDB and RDS within CloudWatch
C. metrics are not enabled automatically for DynamoDB and RDS
D. RDS does not support monitoring of operating system metrics
Answer (A,B)
Question 17: What AWS service can send notifications to customer
smartphones and mobile applications with attached video and/or alerts?
A. EMR
B. Lambda
C. SQS
D. SNS
E. CloudTrail
Answer (D)