111. Miscellaneous

 

Question 1:
Your company runs separate production and test environments on AWS. As a Solutions Architect, you are working on a stack-based deployment model for your AWS resources, and the application’s server and database tiers need to be defined as separate layers.
Choose the appropriate course of action that meets this requirement.
Options:
A. Use OpsWorks to define a stack for each layer of your application
B. Use CloudFormation to define a stack for each layer of your application
C. Use CodePipeline to define a stack for each layer of your application
D. Use Elastic Beanstalk to define a stack for each layer of your application
Answer: A
Explanation:
Option A is the correct answer. AWS OpsWorks Stacks allows you to manage your applications and servers on AWS and on-premises. OpsWorks Stacks lets you model your application as a stack containing various layers, such as load balancing, database, and application server layers.
Option B is incorrect. While CloudFormation can provision resources as stacks from templates, defining an application’s tiers as distinct layers within a stack is a capability of OpsWorks, so OpsWorks is the better fit for this requirement.
Option C is incorrect. CodePipeline is a fully managed continuous delivery service that automates releases for fast and efficient updates of applications and infrastructure. CodePipeline cannot define application layers.
Option D is incorrect. Elastic Beanstalk is a service for deploying and versioning web applications and services developed in Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on servers such as Apache. Elastic Beanstalk cannot define application layers.
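As a rough illustration of this stack/layer model (not something the question requires), the sketch below uses the OpsWorks Stacks API via boto3; the stack name, IAM role/profile ARNs, and layer names are placeholder assumptions:

```python
# Illustrative sketch: model an application as an OpsWorks stack with separate
# app-server and database layers. All names and ARNs are placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

stack = opsworks.create_stack(
    Name="my-app-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)

# One layer per tier of the application.
app_layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="php-app",          # built-in application-server layer type
    Name="App Servers",
    Shortname="app",
)
db_layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="db-master",        # built-in database layer type
    Name="Database",
    Shortname="db",
)
print(app_layer["LayerId"], db_layer["LayerId"])
```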

Question 2:
Company-A hosts EC2 instances in two AZs in a single region, and the web application also uses an ELB and Auto Scaling. The application requires database-tier synchronization. If one AZ becomes unavailable, Auto Scaling will take time to launch new instances in the remaining AZ. You have been asked to make appropriate adjustments so that the application remains fully available even while Auto Scaling is spinning up replacement instances.
Choose the architectural enhancements you need to meet these requirements.
Options:
A. Deploy EC2 instances in 3 AZs with each AZ set to handle up to 50% peak load capacity
B. Deploy EC2 instances in 3 AZs with each AZ set to handle up to 40% peak load capacity
C. Deploy EC2 instances in 2 AZs, across 2 regions, with each AZ set to handle up to 50% peak load capacity
D. Deploy EC2 instances in 2 AZs with each AZ set to handle up to 50% peak load capacity
Answer: A
Explanation:
In this scenario, you must maintain 100% availability: the application can never stop, even if one AZ goes down. Therefore, you need a configuration that can still handle 100% of the EC2 instances’ peak load even if one AZ becomes unavailable. If you deploy your EC2 instances across 3 AZs, each sized to handle 50% of peak load, you retain 100% of peak capacity even if one AZ goes down.
Therefore, option A is the correct answer.
Option B is incorrect because with 40% of peak capacity per AZ, only 80% of peak capacity would remain if one AZ goes down, instead of the required 100%.
This question requires that the capacity to handle peak load never falls below 100%, even if one AZ fails. Although Auto Scaling can restore capacity over time, there will be a short window during which the peak load cannot be handled. To maintain 100% capacity throughout that window, the surviving AZs must already provide at least 100% of peak capacity (enough to offset the loss from the AZ failure) until Auto Scaling restores the lost processing capacity.
Options C and D are incorrect because two AZs sized at 50% each cannot maintain 100% capacity if one AZ goes down; only 50% would remain.
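To make the capacity arithmetic concrete, here is a tiny illustrative check (peak load normalized to 100%; the per-AZ percentages are taken from the options):

```python
# Capacity remaining after one AZ failure, for each option's sizing.
peak = 100  # peak load, normalized to 100%
for label, az_count, per_az in [("A", 3, 50), ("B", 3, 40), ("C/D", 2, 50)]:
    surviving = (az_count - 1) * per_az  # capacity left when one AZ is lost
    verdict = "meets" if surviving >= peak else "falls short of"
    print(f"Option {label}: {az_count} AZs x {per_az}% -> {surviving}% remaining "
          f"({verdict} the 100% requirement)")
```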

Question 3:
Your customer wants to import an existing virtual machine into the AWS cloud. As a Solutions Architect, you have decided to consider a migration method.
Which service should you use?
Options:
A. AWS Import/Export
B. VM Import/Export
C. Direct Connect
D. VPC Peering
Answer: B
Explanation:
VM Import/Export allows you to import virtual machine (VM) images from your existing virtualization environment into Amazon EC2. You can use this service to migrate applications and workloads to Amazon EC2, copy your VM image catalog to Amazon EC2, and create VM image repositories for backup and disaster recovery.
The other options are incorrect because they cannot be used to import existing virtual machines into the AWS Cloud.
Option A is incorrect. AWS Import/Export is a service you can use to transfer large amounts of data from physical storage devices to AWS. It is not used to import existing virtual machines into the AWS Cloud.
Option C is incorrect. Direct Connect is a dedicated network connection between your on-premises environment and your VPC. It is not used to import existing virtual machines into the AWS Cloud.
Option D is incorrect. VPC peering connects two VPCs. It is not used to import existing virtual machines into the AWS Cloud.

Question 4:
As a Solutions Architect, you plan to move your infrastructure to the AWS Cloud. You want to keep using the Chef recipes that currently manage the configuration of your infrastructure.
Which AWS service is best for this requirement?
Options:
A. Elastic Beanstalk
B. OpsWorks
C. CloudFormation
D. ECS
Answer: B
Explanation
Option B is the correct answer. With AWS OpsWorks, you can leverage Chef to deploy your infrastructure on AWS. AWS OpsWorks is an environment automation service that uses Puppet or Chef to set up and operate applications in a cloud environment. OpsWorks Stacks and OpsWorks for Chef Automate allow you to use Chef cookbooks and solutions for configuration management.
Option A is incorrect. Elastic Beanstalk is used for deploying web applications and does not use Chef.
Option C is incorrect. CloudFormation is a tool that automates AWS resource deployment using JSON/YAML templates. It also doesn’t use Chef.
Option D is incorrect. ECS is a container orchestration service for Docker. It also doesn’t use Chef.

Question 5:
As a Solutions Architect, you develop and test your applications on AWS. You want to be able to provision the test environment quickly and tear it down easily.
Choose the best AWS service settings to meet this requirement.
Options:
A. Setting CodePipeline enables quick configuration and deletion
B. Use CloudFormation template for creating a test environment
C. Automate environment construction using AMI and Bash script of EC2 instance
D. Setting ECR allows for quick configuration and deletion
Answer: B
Explanation
You can use CloudFormation templates to provision AWS resources with consistent settings every time. This makes it easy to create and remove an environment such as a test environment. Option B is the correct answer.
Option A is incorrect. CodePipeline automates the release steps by configuring services like CodeDeploy and ECS as a pipeline. CodePipeline needs to use other services such as CloudFormation to set up the infrastructure environment.
Option C is incorrect. AMIs and Bash scripts are limited to configuring EC2 instances and cannot automate the construction of the overall infrastructure.
Option D is incorrect. ECR is a registry service for storing Docker container images; it does not provision or remove environments.
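As a minimal sketch of this pattern (the stack name and the trivial one-resource template below are assumptions purely for illustration), a test environment can be created from a template and torn down with a single delete call:

```python
# Create and delete a throwaway test environment with CloudFormation.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TestBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# Provision the test environment from the template.
cfn.create_stack(StackName="test-env", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="test-env")

# ... run tests against the provisioned resources ...

# Remove the whole environment in one call.
cfn.delete_stack(StackName="test-env")
```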

Question 6:
An AWS Organization has an OU with multiple member accounts in it. The company needs to restrict the ability to launch only specific Amazon EC2 instance types. How can this policy be applied across the accounts with the least effort?
Options:
A. Use AWS Resource Access Manager to control which launch types can be used
B. Create an SCP with an allow rule that allows launching the specific instance types
C. Create an IAM policy to deny launching all but the specific instance types
D. Create an SCP with a deny rule that denies all but the specific instance types
Answer: D
Explanation
To apply the restrictions across multiple member accounts you must use a Service Control Policy (SCP) in the AWS Organization. The way you would do this is to create a deny rule that applies to anything that does not equal the specific instance type you want to allow.
CORRECT: “Create an SCP with a deny rule that denies all but the specific instance types” is the correct answer.
INCORRECT: “Create an SCP with an allow rule that allows launching the specific instance types” is incorrect as a deny rule is required.
INCORRECT: “Create an IAM policy to deny launching all but the specific instance types” is incorrect. With IAM you need to apply the policy within each account rather than centrally so this would require much more effort.
INCORRECT: “Use AWS Resource Access Manager to control which launch types can be used” is incorrect. AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. It is not used for restricting access or permissions.
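A hedged sketch of the deny-based SCP pattern follows; the approved instance types, policy name, and OU ID are illustrative assumptions:

```python
# Create a deny-based SCP that blocks launching anything other than approved
# instance types, then attach it to an OU so it applies to all member accounts.
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Assumed list of approved instance types.
            "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
        }
    }]
}

policy = org.create_policy(
    Name="restrict-ec2-instance-types",
    Description="Allow launching only approved EC2 instance types",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attaching to the OU applies the restriction to every account in that OU.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampleouid",   # placeholder OU ID
)
```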

Question 7:
A web application runs in public and private subnets. The application architecture consists of a web tier and database tier running on Amazon EC2 instances. Both tiers run in a single Availability Zone (AZ).
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
Options:
A. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs
B. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ
E. Create new public and private subnets in the same AZ for high availability
Answer: A & B
Explanation
To add high availability to this architecture both the web tier and database tier require changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will ensure there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take advantage of a managed database with Multi-AZ functionality. This will ensure that if there is an issue preventing access to the primary database a secondary database can take over.
CORRECT: “Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs” is the correct answer.
CORRECT: “Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment” is the correct answer.
INCORRECT: “Create new public and private subnets in the same AZ for high availability” is incorrect as this would not add high availability.
INCORRECT: “Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)” is incorrect because the existing servers are in a single subnet (and a single AZ). For HA we need instances in multiple subnets across multiple AZs.
INCORRECT: “Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ” is incorrect because we also need HA for the database layer.
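As a minimal sketch of the web-tier change (assuming an existing launch template named web-tier, two subnets in different AZs, and an ALB target group; all names and ARNs below are placeholders), the Auto Scaling group could be created like this:

```python
# Create an Auto Scaling group that spans two AZs and registers instances with
# an ALB target group so traffic is distributed across healthy instances.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-tier", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    # Subnets in two different AZs so the group spans multiple AZs.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    # Register instances with the ALB's target group.
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
)
```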

Question 8:
An eCommerce company runs an application on Amazon EC2 instances in public and private subnets. The web application runs in a public subnet and the database runs in a private subnet. Both the public and private subnets are in a single Availability Zone.
Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)
Options:
A. Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment
B. Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs
C. Create new public and private subnets in the same AZ but in a different Amazon VPC
D. Create an EC2 Auto Scaling group in the public subnet and use an Application Load Balancer
E. Create new public and private subnets in a different AZ. Create a database using Amazon EC2 in one AZ
Answer: A & B
Explanation
High availability can be achieved by using multiple Availability Zones within the same VPC. An EC2 Auto Scaling group can then be used to launch web application instances in multiple public subnets across multiple AZs and an ALB can be used to distribute incoming load.
The database solution can be made highly available by migrating from EC2 to Amazon RDS and using a Multi-AZ deployment model. This will provide the ability to failover to another AZ in the event of a failure of the primary database or the AZ in which it runs.
CORRECT: “Create an EC2 Auto Scaling group and Application Load Balancer that spans across multiple AZs” is a correct answer.
CORRECT: “Create new public and private subnets in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment” is also a correct answer.
INCORRECT: “Create new public and private subnets in the same AZ but in a different Amazon VPC” is incorrect. You cannot use multiple VPCs for this solution as it would be difficult to manage and direct traffic (you can’t load balance across VPCs).
INCORRECT: “Create an EC2 Auto Scaling group in the public subnet and use an Application Load Balancer” is incorrect. This does not achieve HA as you need multiple public subnets across multiple AZs.
INCORRECT: “Create new public and private subnets in a different AZ. Create a database using Amazon EC2 in one AZ” is incorrect. The database solution is not HA in this answer option.

Question 9:
A company uses Docker containers for many application workloads in an on-premises data center. The company is planning to deploy containers to AWS and the chief architect has mandated that the same configuration and administrative tools must be used across all containerized environments. The company also wishes to remain cloud agnostic to mitigate the impact of future changes in cloud strategy.
How can a Solutions Architect design a managed solution that will align with open-source software?
Options:
A. Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes
B. Launch the containers on a fleet of Amazon EC2 instances in a cluster placement group
C. Launch the containers on Amazon Elastic Container Service (ECS) with AWS Fargate instances
D. Launch the containers on Amazon Elastic Container Service (ECS) with Amazon EC2 instance worker nodes
Answer: A
Explanation
Amazon EKS is a managed service that can be used to run Kubernetes on AWS. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.
This solution ensures that the same open-source software is used for automating the deployment, scaling, and management of containerized applications both on-premises and in the AWS Cloud.
CORRECT: “Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes” is the correct answer.
INCORRECT: “Launch the containers on a fleet of Amazon EC2 instances in a cluster placement group” is incorrect. A self-managed EC2 fleet is not a managed container solution, and a cluster placement group only influences instance placement; it provides no container orchestration.
INCORRECT: “Launch the containers on Amazon Elastic Container Service (ECS) with AWS Fargate instances” is incorrect. ECS is an AWS-proprietary orchestrator, so it does not satisfy the open-source, cloud-agnostic requirement.
INCORRECT: “Launch the containers on Amazon Elastic Container Service (ECS) with Amazon EC2 instance worker nodes” is incorrect for the same reason: ECS is specific to AWS, so the same configuration and administrative tools could not be used across all environments.

Question 10:
A recent security audit uncovered some poor deployment and configuration practices within your VPC. You need to ensure that applications are deployed in secure configurations.
How can this be achieved in the most operationally efficient manner?
Options:
A. Remove the ability for staff to deploy applications
B. Use AWS Inspector to apply secure configurations
C. Use CloudFormation with securely configured templates
D. Manually check all application configurations before deployment
Answer: C
Explanation
CloudFormation helps users to deploy resources in a consistent and orderly way. By ensuring the CloudFormation templates are created and administered with the right security configurations for your resources, you can then repeatedly deploy resources with secure settings and reduce the risk of human error.
CORRECT: “Use CloudFormation with securely configured templates” is the correct answer.
INCORRECT: “Remove the ability for staff to deploy applications” is incorrect. Removing the ability of staff to deploy resources does not help you to deploy applications securely as it does not solve the problem of how to do this in an operationally efficient manner.
INCORRECT: “Manually check all application configurations before deployment” is incorrect. Manual checking of all application configurations before deployment is not operationally efficient.
INCORRECT: “Use AWS Inspector to apply secure configurations” is incorrect. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It is not used to secure the actual deployment of resources, only to assess the deployed state of the resources.

Question 11:
A Solutions Architect has been tasked with re-deploying an application running on AWS to enable high availability. The application processes messages that are received in an ActiveMQ queue running on a single Amazon EC2 instance. Messages are then processed by a consumer application running on Amazon EC2. After processing the messages the consumer application writes results to a MySQL database running on Amazon EC2.
Which architecture offers the highest availability and low operational complexity?
Options:
A. Deploy a second Active MQ server to another Availability Zone. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone
B. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled
C. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled
D. Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone
Answer: C
Explanation
The correct answer offers the highest availability as it includes Amazon MQ active/standby brokers across two AZs, an Auto Scaling group across two AZs, and a Multi-AZ Amazon RDS MySQL database deployment.
This architecture not only offers the highest availability, it is also operationally simple because it maximizes the use of managed services.
CORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ enabled” is the correct answer.
INCORRECT: “Deploy a second Active MQ server to another Availability Zone. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone” is incorrect. This architecture does not offer the highest availability as it does not use Auto Scaling. It is also not the most operationally efficient architecture as it does not use AWS managed services.
INCORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL database replication to another Availability Zone” is incorrect. This architecture does not use Auto Scaling for best HA or the RDS managed service.
INCORRECT: “Deploy Amazon MQ with active/standby brokers configured across two Availability Zones. Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled” is incorrect. This solution does not use Auto Scaling.

Question 12:
A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service.
Which of the following would you identify as data sources supported by GuardDuty?
Options:
A. VPC Flow Logs, API Gateway logs, S3 access logs
B. ELB logs, DNS logs, CloudTrail events
C. VPC Flow Logs, DNS logs, CloudTrail events
D. CloudFront logs, API Gateway logs, CloudTrail events
Answer: C
Explanation
Correct option:
VPC Flow Logs, DNS logs, CloudTrail events – Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
Incorrect options:
VPC Flow Logs, API Gateway logs, S3 access logs
ELB logs, DNS logs, CloudTrail events
CloudFront logs, API Gateway logs, CloudTrail events
These three options contradict the explanation provided above, so these options are incorrect.

Question 13:
A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.
Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?
Options:
A. AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs
B. Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts
C. Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once
D. AWS Shield Advanced is being used for custom servers that are not part of AWS Cloud, thereby resulting in increased costs
Answer: C
Explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once – If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers that are not part of AWS Cloud, thereby resulting in increased costs – AWS Shield Advanced does offer protection to resources outside of AWS. This should not cause an unexpected spike in billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs – AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service.
Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts – This option has been added as a distractor. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield Advanced service.

Question 14:
A leading carmaker would like to build a new car-as-a-sensor service by leveraging fully serverless components that are provisioned and managed automatically by AWS. The development team at the carmaker does not want an option that requires the capacity to be manually provisioned, as it does not want to respond manually to changing volumes of sensor data.
Given these constraints, which of the following solutions is the BEST fit to develop this car-as-a-sensor service?
Options:
A. Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches, and the data is written into an auto-scaled DynamoDB table for downstream processing
B. Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
C. Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
D. Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Answer: B
Explanation
Correct option:
Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
AWS manages all ongoing operations and underlying infrastructure needed to provide a highly available and scalable message queuing service. With SQS, there is no upfront cost, no need to acquire, install, and configure messaging software, and no time-consuming build-out and maintenance of supporting infrastructure. SQS queues are dynamically created and scale automatically so you can build and grow applications quickly and efficiently. As there is no need to manually provision capacity, this is the correct option.
Incorrect options:
Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches, and the data is written into an auto-scaled DynamoDB table for downstream processing – Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. However, the user is expected to manually provision an appropriate number of shards to process the expected volume of the incoming data stream; the throughput of a Kinesis data stream scales by increasing the number of shards within it. Because shard capacity must be provisioned and managed manually, Kinesis Data Streams is not the right fit for this use-case.
Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
Using an application on an EC2 instance is ruled out as the carmaker wants to use fully serverless components. So both these options are incorrect.
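A minimal sketch of the consumer side is shown below, assuming an SQS event source mapping is configured for the Lambda function and that a DynamoDB table named SensorReadings with the keys used here already exists (all names and message fields are illustrative):

```python
# Lambda handler for the SQS -> Lambda -> DynamoDB path.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # assumed table with vehicle_id / ts keys

def handler(event, context):
    # Lambda polls the SQS queue on our behalf and delivers messages in batches.
    for record in event["Records"]:
        reading = json.loads(record["body"])      # assumed JSON message body
        table.put_item(Item={
            "vehicle_id": reading["vehicle_id"],  # assumed message fields
            "ts": reading["timestamp"],
            "raw_message": record["body"],        # keep the original payload
        })
    return {"processed": len(event["Records"])}
```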

Question 15:
A financial services company uses Amazon GuardDuty for analyzing its AWS account metadata to meet compliance guidelines. However, the company has now decided to stop using the GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud.
Which of the following techniques will help the company meet this requirement?
Options:
A. Suspend the service in the general settings
B. De-register the service under services tab
C. Disable the service in the general settings
D. Raise a service request with Amazon to completely delete the data from all their backups
Answer: C
Explanation
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Disable the service in the general settings – Disabling the service will delete all remaining data, including your findings and configurations before relinquishing the service permissions and resetting the service. So, this is the correct option for our use case.
Incorrect options:
Suspend the service in the general settings – You can stop Amazon GuardDuty from analyzing your data sources at any time by choosing to suspend the service in the general settings. This will immediately stop the service from analyzing data, but does not delete your existing findings or configurations.
De-register the service under services tab – This is a made-up option, used only as a distractor.
Raise a service request with Amazon to completely delete the data from all their backups – There is no need to create a service request as you can delete the existing findings by disabling the service.
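For illustration, the difference between suspending and disabling can be seen in the GuardDuty API (assuming a single detector exists in the account and region):

```python
# Suspend vs. disable GuardDuty for the current account/region.
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Suspend: stops analysis but KEEPS existing findings and configuration.
guardduty.update_detector(DetectorId=detector_id, Enable=False)

# Disable: deletes the detector along with all remaining findings and
# configuration, which is what this requirement calls for.
guardduty.delete_detector(DetectorId=detector_id)
```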

Question 16:
An IT security consultancy is working on a solution to protect data stored in S3 from any malicious activity as well as check for any vulnerabilities on EC2 instances.
As a solutions architect, which of the following solutions would you suggest to help address the given requirement?
Options:
A. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
B. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
C. Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
D. Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
Answer: B
Explanation
Correct option:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions.
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon GuardDuty to check for vulnerabilities on EC2 instances
These three options contradict the explanation provided above, so these options are incorrect.

Question 17:
A big data consulting firm needs to set up a data lake on Amazon S3 for a healthcare client. The data lake is split into raw and refined zones. For compliance reasons, the source data needs to be kept for a minimum of 5 years. The source data arrives in the raw zone and is then processed via an AWS Glue based ETL job into the refined zone. The business analysts run ad-hoc queries only on the data in the refined zone using Amazon Athena. The team is concerned about the cost of data storage in both the raw and refined zones as the data is increasing at a rate of 1 TB daily in each zone.
As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution? (Select two)
• Create a Lambda function based job to delete the raw zone data after 1 day
• Setup a lifecycle policy to transition the refined zone data into Glacier Deep Archive after 1 day of object creation
• Use Glue ETL job to write the transformed data in the refined zone using a compressed file format (Correct)
• Use Glue ETL job to write the transformed data in the refined zone using CSV format
• Setup a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation (Correct)
Explanation
Correct options:
Setup a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation
You can manage your objects so that they are stored cost-effectively throughout their lifecycle by configuring their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
For the given use-case, the raw zone consists of the source data, so it cannot be deleted due to compliance reasons. Therefore, you should use a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation.
Use Glue ETL job to write the transformed data in the refined zone using a compressed file format
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You cannot transition the refined zone data into Glacier Deep Archive because it is used by the business analysts for ad-hoc querying. Therefore, the best optimization is to have the refined zone data stored in a compressed format via the Glue job. The compressed data would reduce the storage cost incurred on the data in the refined zone.
Incorrect options:
Create a Lambda function based job to delete the raw zone data after 1 day – As mentioned in the use-case, the source data needs to be kept for a minimum of 5 years for compliance reasons. Therefore the data in the raw zone cannot be deleted after 1 day.
Setup a lifecycle policy to transition the refined zone data into Glacier Deep Archive after 1 day of object creation – You cannot transition the refined zone data into Glacier Deep Archive because it is used by the business analysts for ad-hoc querying. Hence this option is incorrect.
Use Glue ETL job to write the transformed data in the refined zone using CSV format – It is cost-optimal to write the data in the refined zone using a compressed format instead of CSV format. The compressed data would reduce the storage cost incurred on the data in the refined zone. So, this option is incorrect.
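As a sketch of the raw-zone lifecycle rule (the bucket name and the raw/ prefix are assumptions about how the zones are laid out):

```python
# Lifecycle rule: move raw-zone objects to Glacier Deep Archive one day after
# creation, keeping them cheaply for the 5-year compliance period.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="healthcare-data-lake",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "raw-zone-to-deep-archive",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 1, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```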

Question 18:
The DevOps team at a leading social media company uses AWS OpsWorks, which is a fully managed configuration management service. OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining their infrastructure.
Can you identify the configuration management tools for which OpsWorks provides managed instances? (Select two)
A. Salt
B. CFEngine
C. Puppet
D. Chef
E. Ansible
Answer: C & D
Explanation
Correct options:
Chef
Puppet
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.
Incorrect options:
Ansible
CFEngine
Salt
AWS OpsWorks does not provide managed instances for any of these configuration management tools.

Question 19:
You would like to migrate an AWS account from AWS Organization A to AWS Organization B. What are the steps to do it?
A. Send an invite to the new organization. Accept the invite to the new organization from the member account. Remove the member account from the old organization
B. Send an invite to the new organization. Remove the member account from the old organization. Accept the invite to the new organization from the member account
C. Open an AWS Support ticket to ask them to migrate the account
D. Remove the member account from the old organization. Send an invite to the new organization. Accept the invite to the new organization from the member account
Answer: D
Explanation
Correct option:
Remove the member account from the old organization. Send an invite to the new organization. Accept the invite to the new organization from the member account
AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. Through integrations with other AWS services, you can use Organizations to define central configurations and resource sharing across accounts in your organization.
To migrate accounts from one organization to another, you must have root or IAM access to both the member and master accounts. Here are the steps to follow: 1. Remove the member account from the old organization 2. Send an invite to the new organization 3. Accept the invite to the new organization from the member account
Incorrect options:
Send an invite to the new organization. Accept the invite to the new organization from the member account. Remove the member account from the old organization
Send an invite to the new organization. Remove the member account from the old organization. Accept the invite to the new organization from the member account
These two options contradict the steps described earlier for account migration from one organization to another.
Open an AWS Support ticket to ask them to migrate the account – You don’t need to contact AWS support for account migration.
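The same three steps, sketched with the Organizations API; each call must run under different credentials (old management account, new management account, then the member account), represented here by assumed named profiles, and the account ID is a placeholder:

```python
# Migrate a member account from one AWS Organization to another.
import boto3

member_account_id = "111122223333"   # placeholder member account ID

# 1. From the OLD organization's management account: remove the member.
old_org = boto3.Session(profile_name="old-org-mgmt").client("organizations")
old_org.remove_account_from_organization(AccountId=member_account_id)

# 2. From the NEW organization's management account: send the invitation.
new_org = boto3.Session(profile_name="new-org-mgmt").client("organizations")
handshake = new_org.invite_account_to_organization(
    Target={"Id": member_account_id, "Type": "ACCOUNT"}
)

# 3. From the MEMBER account: accept the invitation.
member = boto3.Session(profile_name="member-account").client("organizations")
member.accept_handshake(HandshakeId=handshake["Handshake"]["Id"])
```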

Question 20:
A financial services company wants to identify any sensitive data stored on its Amazon S3 buckets. The company also wants to monitor and protect all data stored on S3 against any malicious activity.
As a solutions architect, which of the following solutions would you recommend to help address the given requirements?
A. Use Amazon Macie to monitor any malicious activity on data stored in S3. Use Amazon GuardDuty to identify any sensitive data stored on S3
B. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3
C. Use Amazon Macie to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3
D. Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use Amazon Macie to identify any sensitive data stored on S3
Answer: D
Explanation
Correct option:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use Amazon Macie to identify any sensitive data stored on S3
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data on Amazon S3. Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3.
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3
Use Amazon Macie to monitor any malicious activity on data stored in S3 as well as to identify any sensitive data stored on S3
Use Amazon Macie to monitor any malicious activity on data stored in S3. Use Amazon GuardDuty to identify any sensitive data stored on S3
These three options contradict the explanation provided above, so these options are incorrect.
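A minimal enablement sketch follows, assuming a single account and region; real deployments would also configure finding destinations and Macie classification jobs:

```python
# Turn on GuardDuty (threat detection) and Macie (sensitive-data discovery).
import boto3

# GuardDuty: continuous monitoring of account, network, and S3 data activity.
guardduty = boto3.client("guardduty")
guardduty.create_detector(Enable=True)

# Macie: discovery of sensitive data (PII etc.) stored in S3.
macie = boto3.client("macie2")
macie.enable_macie(status="ENABLED")
```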

Question 21:
An IT company is looking to move its on-premises infrastructure to AWS Cloud. The company has a portfolio of applications with a few of them using server bound licenses that are valid for the next year. To utilize the licenses, the CTO wants to use dedicated hosts for a one year term and then migrate the given instances to default tenancy thereafter.
As a solutions architect, which of the following options would you identify as CORRECT for changing the tenancy of an instance after you have launched it? (Select two)
A. You can change the tenancy of an instance from default to host
B. You can change the tenancy of an instance from default to dedicated
C. You can change the tenancy of an instance from dedicated to default
D. You can change the tenancy of an instance from dedicated to host
E. You can change the tenancy of an instance from host to dedicated
Answer: D & E
Explanation
Correct options:
You can change the tenancy of an instance from dedicated to host
You can change the tenancy of an instance from host to dedicated
Each EC2 instance that you launch into a VPC has a tenancy attribute; the possible attribute values are described at https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html
By default, EC2 instances run on a shared-tenancy basis.
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. However, Dedicated Instances may share hardware with other instances from the same AWS account that is not Dedicated Instances.
A Dedicated Host is also a physical server that’s dedicated to your use. With a Dedicated Host, you have visibility and control over how instances are placed on the server.
Incorrect options:
You can change the tenancy of an instance from default to dedicated – You can only change the tenancy of an instance from dedicated to host, or from host to dedicated, after you’ve launched it. Therefore, this option is incorrect.
You can change the tenancy of an instance from dedicated to default – You can only change the tenancy of an instance from dedicated to host, or from host to dedicated, after you’ve launched it. Therefore, this option is incorrect.
You can change the tenancy of an instance from default to host – You can only change the tenancy of an instance from dedicated to host, or from host to dedicated, after you’ve launched it. Therefore, this option is incorrect.

Question 22:
A financial services company has recently migrated from on-premises infrastructure to AWS Cloud. The DevOps team wants to implement a solution that allows all resource configurations to be reviewed and make sure that they meet compliance guidelines. Also, the solution should be able to offer the capability to look into the resource configuration history across the application stack.
As a solutions architect, which of the following solutions would you recommend to the team?
• Use AWS Config to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes (Correct)
• Use AWS CloudTrail to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes
• Use Amazon CloudWatch to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes
• Use AWS Systems Manager to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes
Explanation
Correct option:
“Use AWS Config to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes”
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. You can use Config to answer questions such as – “What did my AWS resource look like at xyz point in time?”.
Incorrect options:
Use Amazon CloudWatch to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes – AWS CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. You cannot use CloudWatch to maintain a history of resource configuration changes.
Use AWS CloudTrail to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes – With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. You can use AWS CloudTrail to answer questions such as – “Who made an API call to modify this resource?”. CloudTrail provides an event history of your AWS account activity thereby enabling governance, compliance, operational auditing, and risk auditing of your AWS account. You cannot use CloudTrail to maintain a history of resource configuration changes.
Use AWS Systems Manager to review resource configurations to meet compliance guidelines and maintain a history of resource configuration changes – Using AWS Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. You cannot use Systems Manager to maintain a history of resource configuration changes.
Exam Alert:
You may see scenario-based questions asking you to select one of CloudWatch vs CloudTrail vs Config. Just remember this thumb rule –
Think resource performance monitoring, events, and alerts; think CloudWatch.
Think account-specific activity and audit; think CloudTrail.
Think resource-specific history, audit, and compliance; think Config.
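As a small illustration of the Config capability described above (the resource type and ID below are placeholders), the configuration history of a single resource can be pulled like this:

```python
# Retrieve the configuration history of one resource from AWS Config.
import boto3

config = boto3.client("config")

history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",
)

for item in history["configurationItems"]:
    # Each configuration item is a point-in-time snapshot of the resource's settings.
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```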

Question 23:
A video conferencing application is hosted on a fleet of EC2 instances which are part of an Auto Scaling group (ASG). The ASG uses a Launch Configuration (LC1) with “dedicated” instance placement tenancy but the VPC (V1) used by the Launch Configuration LC1 has the instance tenancy set to default. Later the DevOps team creates a new Launch Configuration (LC2) with “default” instance placement tenancy but the VPC (V2) used by the Launch Configuration LC2 has the instance tenancy set to dedicated.
Which of the following is correct regarding the instances launched via Launch Configuration LC1 and Launch Configuration LC2?
• The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have default instance tenancy
• The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have dedicated instance tenancy (Correct)
• The instances launched by Launch Configuration LC1 will have default instance tenancy while the instances launched by the Launch Configuration LC2 will have dedicated instance tenancy
• The instances launched by Launch Configuration LC1 will have dedicated instance tenancy while the instances launched by the Launch Configuration LC2 will have default instance tenancy
Explanation
Correct option:
The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have dedicated instance tenancy
A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you’ve launched an EC2 instance before, you have specified this same information to launch the instance.
When you create a launch configuration, the default value for the instance placement tenancy is null and the instance tenancy is controlled by the tenancy attribute of the VPC. If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy. If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.
Launch Configuration Tenancy vs VPC Tenancy via – https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html#as-vpc-tenancy
Incorrect options:
The instances launched by Launch Configuration LC1 will have dedicated instance tenancy while the instances launched by the Launch Configuration LC2 will have default instance tenancy – If either Launch Configuration Tenancy or VPC Tenancy is set to dedicated, then the instance tenancy is also dedicated. Therefore, this option is incorrect.
The instances launched by Launch Configuration LC1 will have default instance tenancy while the instances launched by the Launch Configuration LC2 will have dedicated instance tenancy – If either Launch Configuration Tenancy or VPC Tenancy is set to dedicated, then the instance tenancy is also dedicated. Therefore, this option is incorrect.
The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have default instance tenancy – If either Launch Configuration Tenancy or VPC Tenancy is set to dedicated, then the instance tenancy is also dedicated. Therefore, this option is incorrect.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html#as-vpc-tenancy
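For illustration, the tenancy attribute can be set explicitly when creating a launch configuration (the AMI ID and names below are placeholders; leaving PlacementTenancy unset leaves it null so the VPC tenancy decides):

```python
# Create a launch configuration with dedicated placement tenancy.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="lc-dedicated",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    PlacementTenancy="dedicated",   # instances run single-tenant regardless of VPC tenancy
)
```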

Question 24:
A company is looking for an orchestration solution to manage a workflow that uses AWS Glue and Amazon Lambda to process data on its S3 based data lake.
As a solutions architect, which of the following AWS services involves the LEAST development effort for this use-case?
• Amazon Simple Workflow Service (SWF)
• Amazon EMR
• AWS Step Functions (Correct)
• AWS Batch
Explanation
Correct option:
AWS Step Functions
AWS Step Functions lets you coordinate and orchestrate multiple AWS services such as AWS Lambda and AWS Glue into serverless workflows. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Step Functions automatically triggers and tracks each step and retries when there are errors, so your application executes in order and as expected. A Step Functions state machine can ensure that the Glue ETL job and the Lambda functions execute in order and complete successfully as per the workflow defined in the given use-case. Therefore, Step Functions is the best solution.
How Step Functions work: https://aws.amazon.com/step-functions/
Incorrect options:
AWS Batch – AWS Batch is a set of batch management capabilities that enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch cannot be used to orchestrate a workflow, hence it is an incorrect option.
Amazon Simple Workflow Service (SWF) – Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. In Amazon SWF, tasks represent invocations of logical steps in applications. Tasks are processed by workers which are programs that interact with Amazon SWF to get tasks, process them, and return their results. To coordinate the application execution across workers, you write a program called the decider in your choice of programming language. Although Amazon SWF provides you complete control over your orchestration logic, it increases the complexity of developing applications. Hence this option is not correct.
Exam Alert:
Please understand the differences between Amazon SWF and AWS Step Functions: https://aws.amazon.com/swf/faqs/
Amazon EMR – Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses Hadoop, an open source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances. EMR cannot be used to orchestrate a workflow, hence it is an incorrect option.
References:
https://aws.amazon.com/step-functions/
https://aws.amazon.com/swf/faqs/
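A hedged sketch of such a workflow follows: a two-step state machine that runs a Glue ETL job and then invokes a Lambda function. The Glue job name, Lambda function ARN, and IAM role ARN are placeholder assumptions:

```python
# Define and create a Step Functions state machine that orchestrates
# a Glue ETL job followed by a Lambda function.
import json
import boto3

definition = {
    "StartAt": "RunGlueETL",
    "States": {
        "RunGlueETL": {
            "Type": "Task",
            # .sync makes Step Functions wait for the Glue job to finish.
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "refine-data-lake"},
            "Next": "PostProcess"
        },
        "PostProcess": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:post-process",
            "End": True
        }
    }
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="data-lake-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-datalake-role",
)
```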

Question 25:
The DevOps team at an IT company has recently migrated to AWS and they are configuring security groups for their two-tier application with public web servers and private database servers. The team wants to understand the allowed configuration options for an inbound rule for a security group.
As a solutions architect, which of the following would you identify as an INVALID option for setting up such a configuration?
• You can use a range of IP addresses in CIDR block notation as the custom source for the inbound rule
• You can use an IP address as the custom source for the inbound rule
• You can use a security group as the custom source for the inbound rule
• You can use an Internet Gateway ID as the custom source for the inbound rule (Correct)
Explanation
Correct option:
You can use an Internet Gateway ID as the custom source for the inbound rule
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, you can use the default security group. You can add rules to each security group that allows traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.
Please see the list of allowed sources and destinations for security group rules: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
Therefore, you cannot use an Internet Gateway ID as the custom source for the inbound rule.
Incorrect options:
You can use a security group as the custom source for the inbound rule
You can use a range of IP addresses in CIDR block notation as the custom source for the inbound rule
You can use an IP address as the custom source for the inbound rule
As described in the list of allowed sources or destinations for security group rules, the above options are supported.
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
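To illustrate the valid source types (a CIDR range, a single IP address expressed as a /32, and another security group), here is a sketch using placeholder group IDs and ports; note there is no field that accepts an Internet Gateway ID as a source:

```python
# Add inbound rules demonstrating the valid source types for a security group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb2222c",
    IpPermissions=[
        {   # HTTPS from a CIDR block
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        },
        {   # SSH from a single IP address (expressed as a /32)
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "198.51.100.10/32"}],
        },
        {   # MySQL from another security group (e.g. the web tier)
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0ddd3333eeee4444f"}],
        },
    ],
)
```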