95. SQS

 

Question 1:
Your company operates an application for uploading, processing, and publishing user-submitted videos. The application is hosted on EC2 instances, uses an EC2 worker process to process and publish the uploaded videos, and has an Auto Scaling group set up.
Select the services you should use to increase the reliability of your worker processes.
Options:
A. SQS
B. SNS
C. SES
D. CloudFront
Answer: A
Explanation:
Amazon SQS is used for decoupling, as in worker processing. The worker process's video-processing requests are stored in the queue, enabling reliable execution through asynchronous processing. Multiple worker processes can run in parallel on distributed EC2 instances, each responding to requests in the queue, and a message that is being processed is hidden from other consumers by the visibility timeout.
Distributed parallel processing with SQS queues increases the reliability of worker processes. Therefore, option A is the correct answer.
Option B is incorrect. Push messaging is the primary role of Amazon SNS, which is used to trigger worker processes on specific events. SQS must be used to enable distributed processing of worker processes through queuing.
Option C is incorrect. Amazon SES implements email functionality. SQS is used for distributed processing of worker processes.
Option D is incorrect. CloudFront is a content delivery service.
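The decoupled worker pattern described above can be sketched with Python's standard-library queue standing in for SQS (illustrative only; a real worker would call boto3's receive_message and delete_message against a queue URL):

```python
import queue
import threading

# Stand-in for an SQS queue: producers enqueue video-processing jobs,
# worker threads (standing in for EC2 worker instances) consume them.
jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        job = jobs.get()                 # like sqs.receive_message(...)
        if job is None:                  # sentinel used to stop the worker
            jobs.task_done()
            break
        processed = f"published:{job}"   # the transcode + publish step
        with results_lock:
            results.append(processed)
        jobs.task_done()                 # like sqs.delete_message(...)

# Several workers poll the same queue in parallel, which is what gives
# the pattern its reliability and horizontal scalability.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for video in ["vid-1", "vid-2", "vid-3", "vid-4"]:
    jobs.put(video)                      # like sqs.send_message(...)
jobs.join()
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

Each job is consumed exactly once even though three workers poll concurrently; losing one worker would simply leave its jobs for the others.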


Question 2:
As a Solutions Architect, you are trying to add messaging processing using AWS messaging services to the application you are currently building. The most important requirement is to maintain the order of the messages and not send duplicate messages.
Which of the following services will help you meet this requirement?
Options:
A. SQS
B. SNS
C. SES
D. Lambda
Answer: A
Explanation
Option A is the correct answer. SQS is a managed message queuing service that transfers messages between application components as queues. Standard queues offer high throughput, best-effort ordering, and at-least-once delivery, whereas FIFO queues guarantee message order and provide exactly-once processing.
Duplicate messages can be prevented by using the message deduplication ID in an SQS FIFO queue. The message deduplication ID is the token used to deduplicate sent messages. If a message with a particular message deduplication ID is sent successfully, any further messages sent with the same ID are accepted but not delivered during the 5-minute deduplication interval.
Option B is incorrect. SNS is a push-type messaging service; the order of the messages sent is not guaranteed.
Option C is incorrect. SES is a service that implements email functionality. It only performs email notifications, so message order is not guaranteed.
Option D is incorrect. Lambda does not have a message queuing feature.
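The 5-minute deduplication window can be modeled in a few lines (an illustrative simulation, not the SQS implementation; in real code the ID is passed as the MessageDeduplicationId parameter of boto3's send_message):

```python
# Illustrative model of SQS FIFO deduplication: a message whose
# deduplication ID was already accepted within the 5-minute interval
# is acknowledged but not delivered again.
DEDUP_INTERVAL = 300  # seconds; SQS uses a fixed 5-minute window

class FifoQueueModel:
    def __init__(self):
        self.seen = {}        # dedup_id -> time first accepted
        self.delivered = []

    def send(self, body, dedup_id, now):
        first = self.seen.get(dedup_id)
        if first is not None and now - first < DEDUP_INTERVAL:
            return False      # duplicate: accepted but not delivered
        self.seen[dedup_id] = now
        self.delivered.append(body)
        return True

q = FifoQueueModel()
assert q.send("order-42", dedup_id="abc", now=0) is True
assert q.send("order-42", dedup_id="abc", now=120) is False  # within 5 min
assert q.send("order-42", dedup_id="abc", now=400) is True   # window expired
print(q.delivered)
```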


Question 3:
As a Solutions Architect, you are building a web application on AWS. This application provides data conversion services to users. The files to be converted are first uploaded to S3, and then a Spot Fleet processes the data conversion. Users are divided into free users and paid users, and files submitted by paid users should be prioritized for processing.
Choose a solution that meets these requirements.
Options:
A. Use Route53 to configure traffic routing according to customer type
B. Use SQS to set a specific queue that preferentially processes paid users, and then use a regular queue for free users
C. Use a Lambda function to send messages that cause paid users' files to be processed preferentially, and use the default setting for the others
D. Use SNS to send messages that cause paid users' files to be processed preferentially, and use the default setting for the others
Answer: B
Explanation
SQS itself has no built-in priority setting, but prioritization can be implemented by dividing the work across multiple queues: one that is processed preferentially and one that is not. When each queue is polled separately, the higher-priority queue is polled first. With this setup, you can use a queue that is processed preferentially for paid users and the default queue for free users. Therefore, option B is the correct answer.
To configure prioritization:
1. Prepare multiple queues, one per priority level, using SQS.
2. Place prioritized requests in the high-priority queue.
3. Provision the number of servers that process each queue based on its priority.
4. It is also possible to delay the processing start time by using the queue's delayed message delivery (delay queue) feature.
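The steps above can be sketched as a consumer that drains the high-priority queue first and falls back to the default queue only when it is empty (illustrative; a real consumer would call boto3's receive_message on two queue URLs):

```python
from collections import deque

# Two queues: paid users' jobs go to the high-priority queue,
# free users' jobs to the default queue.
paid_queue = deque(["paid-1", "paid-2"])
free_queue = deque(["free-1", "free-2", "free-3"])

def poll_next():
    """Poll the high-priority queue first; fall back to the default queue."""
    if paid_queue:
        return paid_queue.popleft()
    if free_queue:
        return free_queue.popleft()
    return None

order = []
while (job := poll_next()) is not None:
    order.append(job)

print(order)  # paid jobs are drained before any free job
```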


Question 4:
Your company has a database system that uses DynamoDB. Recently, due to the increase in the number of write processes, many processing delays and failures have occurred in the database. As a Solutions Architect, you are required to take action to ensure that write operations are not lost under any circumstances.
Choose the best way to meet this requirement.
Options:
A. Use IOPS volume for DynamoDB
B. Set up a distributed processing using SQS queues for DynamoDB write processing
C. Set up a distributed processing using SQS queue and set the Lambda function for the write process of DynamoDB
D. Perform DynamoDB data processing with an EC2 instance
Answer: C
Explanation
A “pending write request to the database” can be stored in the SQS queue for asynchronous processing. For DynamoDB, it is also possible to execute the database writes from the queue in cooperation with Lambda. By queuing the write requests, you ensure that they are not lost, which meets the requirement. Therefore, option C is the correct answer.
Option A is incorrect. An IOPS volume configuration cannot be chosen for DynamoDB.
Option B is incorrect. Distributed processing cannot be achieved with the SQS queue alone for DynamoDB write processing; an SQS-triggered Lambda function is essential to perform the writes.
Option D is incorrect. Performing DynamoDB data processing with an EC2 instance is inefficient. A more efficient architecture links the queue with a Lambda function for serverless processing.
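A minimal sketch of the pattern in option C: an SQS-triggered Lambda handler that writes each queued message to DynamoDB. The event shape follows the standard SQS-to-Lambda record format; the table object is injected so the logic can be exercised locally without AWS credentials (a real function would use boto3.resource('dynamodb').Table(...)):

```python
import json

def handler(event, table):
    # Each SQS-triggered invocation receives a batch of records;
    # every record body carries one pending DynamoDB write request.
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)  # a failed batch returns to the queue for retry

# Stub table capturing writes, so the handler can be tested locally.
class StubTable:
    def __init__(self):
        self.items = []
    def put_item(self, Item):
        self.items.append(Item)

event = {"Records": [{"body": json.dumps({"pk": "user#1", "score": 10})},
                     {"body": json.dumps({"pk": "user#2", "score": 7})}]}
table = StubTable()
handler(event, table)
print(table.items)
```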


Question 5:
As a Solutions Architect, you are developing a workflow to send video data from your system to AWS for transcoding. You plan to build this mechanism using EC2 worker instances that pull transcode jobs from SQS.
Choose the correct feature of SQS that helps you complete this.
Options:
A. SQS provides health checks for worker instances
B. SQS can achieve horizontal scaling
C. SQS is best suited for this type of process because it maintains the order of operations
D. Processing according to a set schedule can be executed by SQS
Answer: B
Explanation
Option B is the correct answer. SQS allows load distribution by spreading system processing through queues, which helps you scale your AWS resources horizontally. SQS queues enable parallel processing with multiple EC2 instances, achieving load distribution and processing optimization.
Option A is incorrect. SQS does not health check the status of worker instances.
Option C is incorrect. The order of the messages is not particularly important in this video processing, so it is not a requirement.
Option D is incorrect. SQS does not perform scheduled queuing.


Question 6:
A new application will run across multiple Amazon ECS tasks. Front-end application logic will process data and then pass that data to a back-end ECS task to perform further processing and write the data to a datastore. The Architect would like to reduce interdependencies so failures do not impact other components.
Which solution should the Architect use?
Options:
A. Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3
B. Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream
C. Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue
D. Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages
Answer: D
Explanation
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling applications, thus reducing interdependencies, through a message bus. The front-end application can place messages on the queue and the back-end can then poll the queue for new messages. Please remember that Amazon SQS is pull-based (polling) not push-based (use SNS for push-based).
CORRECT: “Create an Amazon SQS queue and configure the front-end to add messages to the queue and the back-end to poll the queue for messages” is the correct answer.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data to the stream and the back-end to read data from the stream” is incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data is immediately loaded into a destination that can be Amazon S3, RedShift, Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not streaming data and there is no need to load data into an additional AWS service.
INCORRECT: “Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to read data from Amazon S3” is incorrect as per the previous explanation.
INCORRECT: “Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-end to add messages to the queue” is incorrect as SQS is pull-based, not push-based. EC2 instances must poll the queue to find jobs to process.


Question 7:
An eCommerce application consists of three tiers. The web tier includes EC2 instances behind an Application Load Balancer, the middle tier uses EC2 instances and an Amazon SQS queue to process orders, and the database tier consists of a DynamoDB table with Auto Scaling enabled. During busy periods, customers have complained about delays in the processing of orders. A Solutions Architect has been tasked with reducing processing times.
Which action will be MOST effective in accomplishing this requirement?
Options:
A. Replace the Amazon SQS queue with Amazon Kinesis Data Firehose
B. Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier
C. Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier
D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth
Answer: D
Explanation
The most likely cause of the processing delays is insufficient instances in the middle tier where the order processing takes place. The most effective solution to reduce processing times in this case is to scale based on the backlog per instance (number of messages in the SQS queue) as this reflects the amount of work that needs to be done.
CORRECT: “Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth” is the correct answer.
INCORRECT: “Replace the Amazon SQS queue with Amazon Kinesis Data Firehose” is incorrect. The issue is not the efficiency of queuing messages but the processing of the messages. In this case scaling the EC2 instances to reflect the workload is a better solution.
INCORRECT: “Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier” is incorrect. The DynamoDB table is configured with Auto Scaling so this is not likely to be the bottleneck in order processing.
INCORRECT: “Add an Amazon CloudFront distribution with a custom origin to cache the responses for the web tier” is incorrect. This will cache media files to speed up web response times but not order processing times as they take place in the middle tier.
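Scaling on queue depth is typically implemented with a backlog-per-instance metric: the queue's visible-message count divided by the number of in-service instances, compared against an acceptable backlog derived from per-message processing time. A sketch of that arithmetic (the example figures are hypothetical):

```python
import math

def backlog_per_instance(visible_messages, in_service_instances):
    """Custom metric: queue depth divided by in-service instance count."""
    return visible_messages / in_service_instances

def desired_instances(visible_messages, acceptable_backlog_per_instance):
    """Enough instances to keep the backlog at the acceptable level."""
    return math.ceil(visible_messages / acceptable_backlog_per_instance)

# Hypothetical example: 1,000 queued orders; each order takes 100 ms to
# process and the latency target is 10 s, so each instance can absorb a
# backlog of 10 s / 0.1 s = 100 messages.
ACCEPTABLE_BACKLOG = 100

assert backlog_per_instance(1000, 10) == 100.0
assert desired_instances(1000, ACCEPTABLE_BACKLOG) == 10
assert desired_instances(1500, ACCEPTABLE_BACKLOG) == 15
```

In practice the queue depth comes from the SQS ApproximateNumberOfMessagesVisible metric and the result drives a target-tracking or step scaling policy.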


Question 8:
A web application allows users to upload photos and add graphical elements to them. The application offers two tiers of service: free and paid. Photos uploaded by paid users should be processed before those submitted using the free tier. The photos are uploaded to an Amazon S3 bucket which uses an event notification to send the job information to Amazon SQS.
How should a Solutions Architect configure the Amazon SQS deployment to meet these requirements?
Options:
A. Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos
B. Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling
C. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first
D. Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue
Answer: D
Explanation
AWS recommend using separate queues when you need to provide prioritization of work. The logic can then be implemented at the application layer to prioritize the queue for the paid photos over the queue for the free photos.
CORRECT: “Use a separate SQS Standard queue for each tier. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue” is the correct answer.
INCORRECT: “Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first” is incorrect. FIFO queues preserve the order of messages but they do not prioritize messages within the queue. The photos would need to be placed into the queue in priority order, and there’s no way of doing this as the messages are sent automatically through event notifications as they are received by Amazon S3.
INCORRECT: “Use one SQS standard queue. Use batching for the paid photos and short polling for the free photos” is incorrect. Batching adds efficiency but it has nothing to do with ordering or priority.
INCORRECT: “Use a separate SQS FIFO queue for each tier. Set the free queue to use short polling and the paid queue to use long polling” is incorrect. Short polling and long polling are used to control the amount of time the consumer process waits before closing the API call and trying again. Polling should be configured for efficiency of API calls and processing of messages but does not help with message prioritization.


Question 9:
A company is working with a strategic partner that has an application that must be able to send messages to one of the company’s Amazon SQS queues. The partner company has its own AWS account.
How can a Solutions Architect provide least privilege access to the partner?
Options:
A. Create a user account and grant the sqs:SendMessage permission for Amazon SQS. Share the credentials with the partner company
B. Update the permission policy on the SQS queue to grant all permissions to the partner’s AWS account
C. Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account
D. Create a cross-account role with access to all SQS queues and use the partner’s AWS account in the trust document for the role
Answer: C
Explanation
Amazon SQS supports resource-based policies. The best way to grant the permissions using the principle of least privilege is to use a resource-based policy attached to the SQS queue that grants the partner company’s AWS account the sqs:SendMessage privilege.
CORRECT: “Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account” is the correct answer.
INCORRECT: “Create a user account and grant the sqs:SendMessage permission for Amazon SQS. Share the credentials with the partner company” is incorrect. This would provide the permission for all SQS queues, not just the queue the partner company should be able to access, and sharing credentials is not a security best practice.
INCORRECT: “Create a cross-account role with access to all SQS queues and use the partner’s AWS account in the trust document for the role” is incorrect. This would provide access to all SQS queues and the partner company should only be able to access one SQS queue.
INCORRECT: “Update the permission policy on the SQS queue to grant all permissions to the partner’s AWS account” is incorrect. This provides too many permissions; the partner company only needs to send messages to the queue.
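The least-privilege queue policy described above can be expressed as a resource-based policy document. The account ID and queue ARN below are placeholders for illustration; a real policy would be attached with sqs.set_queue_attributes(QueueUrl=..., Attributes={"Policy": json.dumps(policy)}):

```python
import json

# Hypothetical partner account ID and queue ARN, for illustration only.
PARTNER_ACCOUNT = "111122223333"
QUEUE_ARN = "arn:aws:sqs:us-east-1:444455556666:orders-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerSendMessage",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT}:root"},
        "Action": "sqs:SendMessage",   # only SendMessage, nothing broader
        "Resource": QUEUE_ARN,         # only this one queue
    }],
}

print(json.dumps(policy, indent=2))
```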


Question 10:
An application running on Amazon EC2 needs to asynchronously invoke an AWS Lambda function to perform data processing. The services should be decoupled.
Which service can be used to decouple the compute services?
Options:
A. Amazon SNS
B. AWS Step Functions
C. Amazon MQ
D. AWS Config
Answer: A
Explanation
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
CORRECT: “Amazon SNS” is the correct answer.
INCORRECT: “AWS Config” is incorrect. AWS Config is a service that is used for continuous compliance, not application decoupling.
INCORRECT: “Amazon MQ” is incorrect. Amazon MQ is similar to SQS but is used for existing applications that are being migrated into AWS. SQS should be used for new applications being created in the cloud.
INCORRECT: “AWS Step Functions” is incorrect. AWS Step Functions is a workflow service. It is not the best solution for this scenario.


Question 11:
A major bank is using SQS to migrate several core banking applications to the cloud to ensure high availability and cost efficiency while simplifying administrative complexity and overhead. The development team at the bank expects a peak rate of about 1000 messages per second to be processed via SQS. It is important that the messages are processed in order.
Which of the following options can be used to implement this system?
Options:
A. Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate
B. Use Amazon SQS FIFO queue to process the messages
C. Use Amazon SQS standard queue to process the messages
D. Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate
Answer: A
Explanation
Correct option:
Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues – Standard queues vs FIFO queues.
For FIFO queues, the order in which messages are sent and received is strictly preserved (i.e. First-In-First-Out). On the other hand, the standard SQS queues offer best-effort ordering. This means that occasionally, messages might be delivered in an order different from which they were sent.
By default, FIFO queues support up to 300 messages per second (300 send, receive, or delete operations per second). When you batch 10 messages per operation (maximum), FIFO queues can support up to 3,000 messages per second. Therefore you need to process 4 messages per operation so that the FIFO queue can support up to 1200 messages per second, which is well within the peak rate.
Incorrect options:
Use Amazon SQS standard queue to process the messages – Since the messages need to be processed in order, standard queues are ruled out.
Use Amazon SQS FIFO queue to process the messages – By default, FIFO queues support up to 300 messages per second and this is not sufficient to meet the message processing throughput per the given use-case. Hence this option is incorrect.
Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate – As mentioned earlier in the explanation, you need to use FIFO queues in batch mode and process 4 messages per operation, so that the FIFO queue can support up to 1200 messages per second. With 2 messages per operation, you can only support up to 600 messages per second.
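The throughput arithmetic in this explanation can be checked directly: FIFO queues support 300 operations per second by default, so the batch size needed is the peak message rate divided by 300, rounded up.

```python
import math

FIFO_OPS_PER_SECOND = 300  # send/receive/delete operations per second (default quota)

def required_batch_size(peak_messages_per_second):
    """Smallest batch size so 300 ops/s covers the peak message rate."""
    return math.ceil(peak_messages_per_second / FIFO_OPS_PER_SECOND)

def throughput(batch_size):
    """Messages per second achievable at a given batch size."""
    return FIFO_OPS_PER_SECOND * batch_size

assert required_batch_size(1000) == 4  # as in the correct option
assert throughput(4) == 1200           # >= 1,000 msg/s peak
assert throughput(2) == 600            # too low, which rules out option D
assert throughput(10) == 3000          # maximum with full batching
```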


Question 12:
You are establishing a monitoring solution for desktop systems that will send telemetry data into AWS every minute. Data for each system must be processed in order and independently, and you would like to scale the number of consumers, possibly up to the number of desktop systems being monitored.
What do you recommend?
• Use an SQS FIFO queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID (Correct)
• Use an SQS FIFO queue, and send the telemetry data as is
• Use a Kinesis Data Stream, and send the telemetry data with a Partition ID that uses the value of the Desktop ID
• Use an SQS standard queue, and send the telemetry data as is
Explanation
Correct option:
Use an SQS FIFO queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
We, therefore, need to use an SQS FIFO queue. If all the messages share a single Group ID, they are kept in absolute order, but at most one consumer can process them at a time. To allow multiple consumers to read data, one per desktop application, and to scale the number of consumers, we should use the “Group ID” attribute. So this is the correct option.
Incorrect options:
Use an SQS FIFO queue, and send the telemetry data as is – This is incorrect because if we send the telemetry data as is, the messages cannot be grouped per desktop, so we cannot scale the number of consumers to match the number of desktop systems. We should use the “Group ID” attribute so that each desktop’s data can be processed by its own consumer.
Use an SQS standard queue, and send the telemetry data as is – An SQS standard queue has no ordering capability so that’s ruled out.
Use a Kinesis Data Stream, and send the telemetry data with a Partition ID that uses the value of the Desktop ID – Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. A Kinesis Data Stream would work and would give us the data for each desktop application within shards, but we can only have as many consumers as there are shards in Kinesis (which in practice is much less than the number of producers).
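The send parameters for the correct option can be sketched as follows. The queue URL is a placeholder and the sequence number is a hypothetical per-desktop counter; in a real producer the dict would be passed to boto3's sqs.send_message:

```python
def build_telemetry_message(queue_url, desktop_id, seq, payload):
    # MessageGroupId = desktop ID: per-desktop ordering is preserved,
    # and different groups can be consumed in parallel.
    return {
        "QueueUrl": queue_url,
        "MessageBody": payload,
        "MessageGroupId": f"desktop-{desktop_id}",
        "MessageDeduplicationId": f"desktop-{desktop_id}-{seq}",  # unique per send
    }

params = build_telemetry_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry.fifo",  # placeholder
    desktop_id=17,
    seq=1,
    payload='{"cpu": 0.42}',
)
print(params["MessageGroupId"])
```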


Question 13:
A data analytics company is using SQS queues for decoupling the various processes of an application workflow. The company wants to postpone the delivery of certain messages to the queue by one minute while all other messages need to be delivered immediately to the queue.
As a solutions architect, which of the following solutions would you suggest to the company?
• Use message timers to postpone the delivery of certain messages to the queue by one minute
• Use visibility timeout to postpone the delivery of certain messages to the queue by one minute
• Use dead-letter queues to postpone the delivery of certain messages to the queue by one minute
• Use delay queues to postpone the delivery of certain messages to the queue by one minute
Answer: A
Explanation
Correct option:
Use message timers to postpone the delivery of certain messages to the queue by one minute
You can use message timers to set an initial invisibility period for a message added to a queue. So, if you send a message with a 60-second timer, the message isn’t visible to consumers for its first 60 seconds in the queue. The default (minimum) delay for a message is 0 seconds. The maximum is 15 minutes. Therefore, you should use message timers to postpone the delivery of certain messages to the queue by one minute.
Incorrect options:
Use dead-letter queues to postpone the delivery of certain messages to the queue by one minute – Dead-letter queues can be used by other queues (source queues) as a target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed. You cannot use dead-letter queues to postpone the delivery of certain messages to the queue by one minute.
Use visibility timeout to postpone the delivery of certain messages to the queue by one minute – Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to postpone the delivery of certain messages to the queue by one minute.
Use delay queues to postpone the delivery of certain messages to the queue by one minute – Delay queues let you postpone the delivery of all new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. You cannot use delay queues to postpone the delivery of only certain messages to the queue by one minute.
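A per-message timer is set with the DelaySeconds parameter on an individual send. The sketch below builds the request for a one-minute delay; the queue URL is a placeholder and real code would pass the dict to boto3's sqs.send_message:

```python
def build_delayed_send(queue_url, body, delay_seconds=60):
    # Per-message timer: valid range is 0-900 seconds (15 minutes).
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "DelaySeconds": delay_seconds,  # applies to this message only
    }

params = build_delayed_send(
    "https://sqs.us-east-1.amazonaws.com/123456789012/workflow-queue",  # placeholder
    "postponed job",
)
print(params["DelaySeconds"])
```

All other messages are sent without the parameter (delay 0) and are delivered immediately, which is what distinguishes message timers from a queue-wide delay queue.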


Question 14:
The engineering team at an e-commerce company wants to migrate from SQS Standard queues to FIFO queues with batching.
As a solutions architect, which of the following steps would you have in the migration checklist? (Select three)
A• Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second
B• Make sure that the name of the FIFO queue is the same as the standard queue
C• Make sure that the name of the FIFO queue ends with the .fifo suffix
D• Convert the existing standard queue into a FIFO queue
E• Make sure that the throughput for the target FIFO queue does not exceed 300 messages per second
F• Delete the existing standard queue and recreate it as a FIFO queue
Answer: A, C & F
Explanation
Correct options:
Delete the existing standard queue and recreate it as a FIFO queue
Make sure that the name of the FIFO queue ends with the .fifo suffix
Make sure that the throughput for the target FIFO queue does not exceed 3,000 messages per second
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
By default, FIFO queues support up to 3,000 messages per second with batching, or up to 300 messages per second (300 send, receive, or delete operations per second) without batching. Therefore, using batching you can meet a throughput requirement of up to 3,000 messages per second.
The name of a FIFO queue must end with the .fifo suffix. The suffix counts towards the 80-character queue name limit. To determine whether a queue is FIFO, you can check whether the queue name ends with the suffix.
If you have an existing application that uses standard queues and you want to take advantage of the ordering or exactly-once processing features of FIFO queues, you need to configure the queue and your application correctly. You can’t convert an existing standard queue into a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
Incorrect options:
Convert the existing standard queue into a FIFO queue – You can’t convert an existing standard queue into a FIFO queue.
Make sure that the name of the FIFO queue is the same as the standard queue – The name of a FIFO queue must end with the .fifo suffix.
Make sure that the throughput for the target FIFO queue does not exceed 300 messages per second – By default, FIFO queues support up to 3,000 messages per second with batching.
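The checklist items above can be encoded as a small validation helper (an illustrative sketch: name must end with .fifo and stay within 80 characters, and batched throughput tops out at 3,000 messages per second):

```python
FIFO_MAX_BATCHED_MSGS_PER_SEC = 3000

def validate_fifo_migration(queue_name, expected_throughput):
    """Return a list of checklist violations for a planned FIFO queue."""
    problems = []
    if not queue_name.endswith(".fifo"):
        problems.append("queue name must end with the .fifo suffix")
    if len(queue_name) > 80:
        problems.append("queue name exceeds the 80-character limit")
    if expected_throughput > FIFO_MAX_BATCHED_MSGS_PER_SEC:
        problems.append("throughput exceeds 3,000 messages/second with batching")
    return problems

assert validate_fifo_migration("orders.fifo", 2500) == []
assert validate_fifo_migration("orders", 2500) == [
    "queue name must end with the .fifo suffix"]
assert validate_fifo_migration("orders.fifo", 5000) == [
    "throughput exceeds 3,000 messages/second with batching"]
```

The remaining checklist item, deleting and recreating the queue rather than converting it, is operational and has no code analogue here.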


Question 15:
A financial services company is migrating their messaging queues from self-managed message-oriented middleware systems to Amazon SQS. The development team at the company wants to minimize the costs of using SQS.
As a solutions architect, which of the following options would you recommend for the given use-case?
• Use SQS visibility timeout to retrieve messages from your Amazon SQS queues
• Use SQS long polling to retrieve messages from your Amazon SQS queues (Correct)
• Use SQS message timer to retrieve messages from your Amazon SQS queues
• Use SQS short polling to retrieve messages from your Amazon SQS queues
Explanation
Correct option:
Use SQS long polling to retrieve messages from your Amazon SQS queues
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
Amazon SQS provides short polling and long polling to receive messages from a queue. By default, queues use short polling. With short polling, Amazon SQS sends the response right away, even if the query found no messages. With long polling, Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. Amazon SQS sends an empty response only if the polling wait time expires.
Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available. Using long polling can reduce the cost of using SQS because you can reduce the number of empty receives.
Short Polling vs Long Polling via – https://aws.amazon.com/sqs/faqs/
Incorrect options:
Use SQS short polling to retrieve messages from your Amazon SQS queues – With short polling, Amazon SQS sends the response right away, even if the query found no messages. You end up paying more because of the increased number of empty receives.
Use SQS visibility timeout to retrieve messages from your Amazon SQS queues – Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.
Use SQS message timer to retrieve messages from your Amazon SQS queues – You can use message timers to set an initial invisibility period for a message added to a queue. So, if you send a message with a 60-second timer, the message isn’t visible to consumers for its first 60 seconds in the queue. The default (minimum) delay for a message is 0 seconds. The maximum is 15 minutes. You cannot use message timer to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.
References:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
https://aws.amazon.com/sqs/faqs/


Question 16:
An IT company is using SQS queues for decoupling the various components of application architecture. As the consuming components need additional time to process SQS messages, the company wants to postpone the delivery of new messages to the queue for a few seconds.
As a solutions architect, which of the following solutions would you suggest to the company?
• Use visibility timeout to postpone the delivery of new messages to the queue for a few seconds
• Use FIFO queues to postpone the delivery of new messages to the queue for a few seconds
• Use delay queues to postpone the delivery of new messages to the queue for a few seconds (Correct)
• Use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds
Explanation
Correct option:
Use delay queues to postpone the delivery of new messages to the queue for a few seconds
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
Delay queues let you postpone the delivery of new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.
via – https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html
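As a sketch of how a delay queue could be created, the DelaySeconds queue attribute sets the initial invisibility period for every message sent to the queue. The client is injected and the queue name is a made-up example, not a definitive implementation:

```python
def create_delay_queue(sqs_client, name: str, delay_seconds: int = 10) -> str:
    """Create an SQS queue whose messages stay invisible to consumers
    for delay_seconds after being sent (0 to 900 seconds, i.e. 15 minutes max)."""
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    response = sqs_client.create_queue(
        QueueName=name,
        Attributes={"DelaySeconds": str(delay_seconds)},  # attribute values are strings
    )
    return response["QueueUrl"]
```

Note that for standard queues, a per-message DelaySeconds parameter on SendMessage overrides the queue-level setting; FIFO queues support only the queue-level delay.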
Incorrect options:
Use FIFO queues to postpone the delivery of new messages to the queue for a few seconds – SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. You cannot use FIFO queues to postpone the delivery of new messages to the queue for a few seconds.
Use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds – Dead-letter queues can be used by other queues (source queues) as a target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed. You cannot use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds.
Use visibility timeout to postpone the delivery of new messages to the queue for a few seconds – Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to postpone the delivery of new messages to the queue for a few seconds.
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html


Question 17:
A legacy application is built using a tightly-coupled monolithic architecture. Due to a sharp increase in the number of users, the application performance has degraded. The company now wants to decouple the architecture and adopt AWS microservices architecture. Some of these microservices need to handle fast-running processes whereas other microservices need to handle slower processes.
Which of these options would you identify as the right way of connecting these microservices?
• Configure Amazon Kinesis Data Streams to decouple microservices running faster processes from the microservices running slower ones
• Configure Amazon SQS queue to decouple microservices running faster processes from the microservices running slower ones (Correct)
• Add Amazon EventBridge to decouple the complex architecture
• Use Amazon SNS to decouple microservices running faster processes from the microservices running slower ones
Explanation
Correct option:
Configure Amazon SQS queue to decouple microservices running faster processes from the microservices running slower ones
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed. Being able to store the messages and replay them is a very important feature in decoupling the system architecture, as is needed in the current use case.
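The decoupling pattern described above can be sketched as follows: a fast producer enqueues work and returns immediately, while a slower consumer polls, processes, and deletes messages at its own pace. The client is injected and the job format is a hypothetical example for illustration:

```python
import json

def enqueue_job(sqs_client, queue_url: str, job: dict) -> str:
    """Producer side: fire-and-forget. The producer never waits on the consumer."""
    response = sqs_client.send_message(QueueUrl=queue_url, MessageBody=json.dumps(job))
    return response["MessageId"]

def process_one(sqs_client, queue_url: str, handler) -> bool:
    """Consumer side: receive one message, process it, then delete it.

    Deleting only after the handler succeeds means a failed worker leaves
    the message in the queue to be retried once its visibility timeout expires.
    """
    response = sqs_client.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    messages = response.get("Messages", [])
    if not messages:
        return False
    message = messages[0]
    handler(json.loads(message["Body"]))
    sqs_client.delete_message(
        QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
    )
    return True
```

Because the queue sits between the two sides, either component can fail or scale independently without losing messages, which is the fault-tolerance property the explanation describes.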
Incorrect options:
Use Amazon SNS to decouple microservices running faster processes from the microservices running slower ones – Amazon SNS follows the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients using a “push” mechanism. This is an important difference between SNS and SQS. Whereas SQS is a polling mechanism that lets applications poll at their own pace, the push mechanism assumes the subscribing applications are available to receive the message. For the current requirement, we need messages to be stored until they are processed by the downstream applications. Hence, SQS is the right choice.
Configure Amazon Kinesis Data Streams to decouple microservices running faster processes from the microservices running slower ones – Amazon Kinesis Data Streams is used for streaming real-time, high-volume data. Kinesis follows a publish-subscribe model, used when publisher applications need to publish the same data to different consumers in parallel. SQS is the right fit for the current use case.
Add Amazon EventBridge to decouple the complex architecture – This event-based service is extremely useful for connecting non-AWS SaaS (Software as a Service) services to AWS services. With EventBridge, the downstream application would need to process events immediately whenever they arrive, thereby making it a tightly coupled scenario. Hence, this option is not correct.
Reference:
https://aws.amazon.com/sqs/


Question 18:
An e-commerce company runs its web application on EC2 instances in an Auto Scaling group and it’s configured to handle consumer orders in an SQS queue for downstream processing. The DevOps team has observed that the performance of the application goes down in case of a sudden spike in orders received.
As a solutions architect, which of the following solutions would you recommend to address this use-case?
• Use a target tracking scaling policy based on a custom Amazon SQS queue metric (Correct)
• Use a step scaling policy based on a custom Amazon SQS queue metric
• Use a simple scaling policy based on a custom Amazon SQS queue metric
• Use a scheduled scaling policy based on a custom Amazon SQS queue metric
Explanation
Correct option:
Use a target tracking scaling policy based on a custom Amazon SQS queue metric
If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively. You may use an existing CloudWatch Amazon SQS metric like ApproximateNumberOfMessagesVisible for target tracking, but you could still face an issue where the number of messages in the queue does not change proportionally to the size of the Auto Scaling group that processes messages from the queue. The solution is to use a backlog per instance metric, with the target value being the acceptable backlog per instance to maintain.
To calculate your backlog per instance, divide the ApproximateNumberOfMessages queue attribute by the number of instances in the InService state for the Auto Scaling group. Then set the acceptable backlog per instance as the target value of your policy.
To illustrate with an example, let’s say that the current ApproximateNumberOfMessages is 1500 and the fleet’s running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds, then the acceptable backlog per instance is 10 / 0.1, which equals 100. This means that 100 is the target value for your target tracking policy. If the backlog per instance is currently at 150 (1500 / 10), your fleet scales out, and it scales out by five instances to maintain proportion to the target value.
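The arithmetic in this example can be written out directly. A minimal worked sketch using the same numbers as above:

```python
import math

def backlog_per_instance(approx_messages: int, in_service_instances: int) -> float:
    """ApproximateNumberOfMessages divided by the fleet's running capacity."""
    return approx_messages / in_service_instances

def acceptable_backlog(max_latency_s: float, avg_processing_s: float) -> float:
    """Longest acceptable latency divided by average per-message processing time."""
    return max_latency_s / avg_processing_s

def required_capacity(approx_messages: int, target_backlog: float) -> int:
    """Instances needed so backlog per instance stays at or below the target."""
    return math.ceil(approx_messages / target_backlog)

# Values from the example: 1500 visible messages, 10 running instances,
# 0.1 s processing time per message, 10 s longest acceptable latency.
current = backlog_per_instance(1500, 10)   # 150.0 — above the target
target = acceptable_backlog(10, 0.1)       # 100.0 — the target tracking value
needed = required_capacity(1500, target)   # 15 instances, i.e. scale out by 5
```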
Scaling Based on Amazon SQS: via – https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Incorrect options:
Use a simple scaling policy based on a custom Amazon SQS queue metric – With simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. The main issue with simple scaling is that after a scaling activity is started, the policy must wait for the scaling activity or health check replacement to complete and the cooldown period to expire before responding to additional alarms. This implies that the application would not be able to react quickly to sudden spikes in orders.
Use a step scaling policy based on a custom Amazon SQS queue metric – With step scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. When step adjustments are applied, they increase or decrease the current capacity of your Auto Scaling group, and the adjustments vary based on the size of the alarm breach. For the given use-case, step scaling would try to approximate the correct number of instances by increasing/decreasing the steps as per the policy. This is not as efficient as the target tracking policy where you can calculate the exact number of instances required to handle the spike in orders.
Use a scheduled scaling policy based on a custom Amazon SQS queue metric – Scheduled scaling allows you to set your scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date. You cannot use scheduled scaling policies to address the sudden spike in orders.


Question 19:
A solutions architect is designing an application on AWS. The compute layer will run in parallel across EC2 instances. The compute layer should scale based on the number of jobs to be processed. The compute layer is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?
A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic
B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage
C. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage
D. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue
Answer: D
Explanation
In this case, we need a durable and loosely coupled way to store the jobs. Amazon SQS is ideal for this use case, and the Auto Scaling group can be configured to scale dynamically based on the number of jobs waiting in the queue.
To configure this scaling you can use the backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows:
Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet’s running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
Acceptable backlog per instance: To calculate your target value, first determine what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message.
This solution will scale EC2 instances using Auto Scaling based on the number of jobs waiting in the SQS queue.
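Since the backlog per instance is a custom metric, it has to be published to CloudWatch before a target tracking policy can track it. A hypothetical sketch of that step follows; the namespace and metric name are made up for illustration, and the CloudWatch client is injected:

```python
def publish_backlog_metric(cloudwatch_client, approx_messages: int,
                           in_service_instances: int,
                           namespace: str = "MyApp/SQS") -> float:
    """Compute backlog per instance and emit it as a custom CloudWatch metric.

    A target tracking policy on the Auto Scaling group can then keep this
    metric at the acceptable-backlog-per-instance target value.
    """
    backlog = approx_messages / max(in_service_instances, 1)  # avoid divide-by-zero
    cloudwatch_client.put_metric_data(
        Namespace=namespace,
        MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog}],
    )
    return backlog
```

In practice this would run periodically (for example, from a Lambda function on a schedule), reading ApproximateNumberOfMessages from the queue and the InService instance count from the Auto Scaling group.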
CORRECT: “Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue” is the correct answer.
INCORRECT: “Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage” is incorrect as scaling on network usage does not relate to the number of jobs waiting to be processed.
INCORRECT: “Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage” is incorrect. Amazon SNS is a notification service that delivers notifications to subscribers using a push model; it does not durably store messages for later processing, so it is less suitable than SQS for this use case. Scaling on CPU usage is also not the best solution as it does not relate to the number of jobs waiting to be processed.
INCORRECT: “Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic” is incorrect. Amazon SNS is a notification service that delivers notifications to subscribers using a push model; it does not durably store messages for later processing, so it is less suitable than SQS for this use case. In addition, scaling on the number of messages published to an SNS topic is not possible.