1. IAM

IAM stands for Identity and Access Management and is a global service.
The root account is created by default and shouldn’t be used for everyday tasks or shared. Instead we create Users. Users are people within the organization and can be grouped, for example into developers, operations etc. Groups only contain users, not other groups. A user can belong to multiple groups: a user ‘A’ in the developers group can also be in the audit group, and a user ‘B’ in the operations group can also be in the audit group. JSON (JavaScript Object Notation) documents called Policies are assigned to Users or Groups. Policies define the permissions of the users and should apply the least privilege principle: don’t give a user more permissions than they need.
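As an illustration of least privilege, the following is a minimal sketch of such a policy document (the bucket name example-reports-bucket is hypothetical); it allows a user to read objects from one S3 bucket and nothing else:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "LeastPrivilegeReadOnly",
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example-reports-bucket/*"
}
]
}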

IAM allows you to manage users and their level of access to the AWS console. It lets you set up users, groups, policies (permissions) and roles, and grant access to different parts of the AWS platform.

AWS Root Account Security Best Practices:
1) Use a strong password to help protect account-level access to the AWS Management Console.
2) Never share your AWS account root user password or access keys with anyone.
3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3.
4) If you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to.
5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.

Key features:
i) Centralized control of AWS account
ii) Shared access to AWS account
iii) Granular permissions — restricting access to only the services a user needs
iv) Identity Federation — including Active Directory, Facebook, LinkedIn etc. Users can log into the AWS console using the same username and password that they use to log into their Windows PCs.
v) Multifactor authentication
vi) Provides temporary access for users, devices and services where necessary.
vii) Allows you to set up your own password rotation policy.
viii) Integrates with many different AWS services.
ix) Supports PCI DSS compliance.

Key Terminology:
i) Users: End users such as people, employees in an organization etc.
ii) Groups: A collection of users. Each user in the group will inherit the permissions of the group.
iii) Policies: Policies are made up of documents called policy documents. These documents are in JSON (JavaScript Object Notation) format and define what a user/group/role is permitted to do.
iv) Roles: A set of permissions that a trusted entity or AWS resource (such as an EC2 instance) can assume. You create roles and then assign them to AWS resources (see the example trust policy after this list).
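As a sketch of how a role is defined, a role carries a trust policy that specifies who may assume it. The example below is the standard trust policy form for a role assumed by EC2 instances; permission policies are then attached to the role separately:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com" },
"Action": "sts:AssumeRole"
}
]
}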

US East (N. Virginia) is typically the region where new products and services are launched first.

In console: Security, Identity & Compliance >> IAM

The Access Key ID can be thought of as the username for programmatic access; the Secret Access Key is the corresponding password.

Tips:
i) IAM is global/universal and does not apply to regions, so users, groups and roles are created once for the whole account, not per region.
ii) The ‘root account’ is simply the account created when we first set up our AWS account. It has complete admin access and is created with an email address.
iii) New users have no permissions/ policies when first created.
iv) New users are assigned an Access Key ID & Secret Access Key when first created. These are not the same as a password and cannot be used to log into the console; they are used to access AWS programmatically via APIs and the command line. Save them in a secure location: they are shown only once, and if lost they must be regenerated.
v) We can access AWS in two ways. Console Access & Programmatic Access.
vi) Always set up MFA (Multi Factor Authentication) on your root account. (A sketch of an MFA-enforcement policy for IAM users follows these tips.)
vii) We can create and customize our own password rotation policies.
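As a simplified sketch of tip vi) applied to IAM users (a trimmed-down version of a common pattern, not a complete policy; a full version would also allow users to manage their own MFA devices), the following policy denies all actions for requests that are not authenticated with MFA:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyAllWhenNoMFA",
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
}
}
]
}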

Question 1:
What is an Availability Zone?
A. data center
B. multiple VPCs
C. multiple regions
D. single region
E. multiple EC2 server instances
Answer (A)

Question 2:
What are two features that correctly describe Availability Zone (AZ)
architecture?
A. multiple regions per AZ
B. interconnected with private WAN links
C. multiple AZ per region
D. interconnected with public WAN links
E. data auto-replicated between zones in different regions
F. Direct Connect supports Layer 2 connectivity to region
Answer (B,C)

Question 3:
An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account. As a solutions architect, which of the following steps would you recommend?
Answer: Create a new IAM role with the required permissions to access the resources in the PROD environment. The users can then assume this IAM role while accessing the resources from the PROD env.
Explanation: IAM roles allow you to delegate access to users or services that normally don’t have access to your organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don’t have to share long-term credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
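As a hedged sketch, the trust policy on the role in the production account would look roughly like the following; the account ID 111122223333 is a placeholder for the development account:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::111122223333:root" },
"Action": "sts:AssumeRole"
}
]
}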

Question 4:
An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
Answer: a. Create a strong password for the AWS account root user
b. Enable MFA for the AWS account root user

Question 5:
A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management. As a solutions architect, which best practices would you recommend (Select two)?
Answer: a. Configure AWS CloudTrail to log all IAM actions
b. Enable MFA for privileged users

Question 6:
A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected. Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?
Answer: Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once.
Explanation: If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.

Question 7:
A large IT company wants to federate its workforce into AWS accounts and business applications. Which of the following AWS services can help build a solution for this requirement? (Select two)
Answer: a. Use AWS Single Sign-On(SSO)
b. Use AWS Identity and Access Management (IAM)

Question 8:
A financial services company uses Amazon GuardDuty for analyzing its AWS account metadata to meet the compliance guidelines. However, the company has now decided to stop using GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud. Which of the following techniques will help the company meet this requirement?
Answer: Disable the service in the general settings
Explanation: Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately. Disabling the service will delete all remaining data, including your findings and configurations before relinquishing the service permissions and resetting the service. So, this is the correct option for our use case.

Question 9:
An IT security consultancy is working on a solution to protect data stored in S3 from any malicious activity as well as check for vulnerabilities on EC2 instances.
Answer: Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use security assessments provided by Amazon Inspector to check for vulnerabilities on the EC2 instances.

Question 10:
A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service. Which of the following would you identify as data sources supported by GuardDuty?
Answer: VPC Flow Logs, DNS logs, CloudTrail events
Explanation: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.

Question 11:
As an operations administrator, you are running a set of applications hosted on AWS. Your company decided to introduce an API gateway and use it for inter-application co-operation. For this, you need to implement API Gateway permission management and give developers, IT administrators, and users the appropriate level of permissions to manage them.
Select the most appropriate setting method to implement this authority management task.
Options:
A. Use STS to set access rights to individual users
B. Use IAM policy to set access rights to individual users
C. Use AWS Config to set access permissions for individual users
D. Use the IAM access key to set access privileges for individual users
Answer: B
Explanation:
This scenario asks how to configure API Gateway to give developers, IT administrators, and users permissions to the appropriate level of API. Access to Amazon API Gateway can be controlled by permissions using IAM policies. In order to allow API callers to call APIs, it is necessary to create and set an IAM policy.
Option A is incorrect. STS issues temporary security credentials and is not suitable for granting medium- to long-term access permissions.
Option C is incorrect. AWS Config does not have the ability to grant permissions.
Option D is incorrect. Access keys are credentials for programmatic access, not a permissions mechanism; permissions must be managed with IAM policies attached to IAM users or roles.
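As an illustrative sketch of such a policy for API callers (the region, account ID, API ID and stage below are placeholders), invoke access to a deployed API can be granted with the execute-api:Invoke action:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "execute-api:Invoke",
"Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/prod/GET/*"
}
]
}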

Question 12:
The following IAM policy sets permissions for EC2 instances.

{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Effect": "Deny",
"Action": [
"ec2:*ReservedInstances*",
"ec2:TerminateInstances"
],
"Resource": "*"
}
]
}

Select the correct description for these settings.
Options:
A. Operations on Reserved instances are allowed
B. All operations on EC2 instances are allowed
C. Operations that terminate all EC2 instance types are rejected
D. Operations that terminate only for all Reserved instances is rejected
Answer: C
Explanation
This IAM policy allows all EC2 actions, but prohibits “all operations on Reserved Instances” and “terminate operations on all instances”.
The first half of the statement gives permission for all EC2. This is a full access right.
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
In the second half of the statement, it is set to reject only “All actions of Reserved Instances” and “Actions to terminate all EC2 instances”.
"Effect": "Deny",
"Action": [
"ec2:*ReservedInstances*",
"ec2:TerminateInstances"
],
"Resource": "*"
As a result, with this setting, “any action for Reserved Instances” and “instance termination processing for all EC2 instances” cannot be performed, and Option C is the correct answer.

Question 13:
As a Solutions Architect, you plan to use SQS queues and Lambda to take advantage of serverless configurations in the AWS cloud. In this configuration, the SQS queue runs Lambda in parallel and then stores the data in DynamoDB.
Select the settings you need in order to send messages using Lambda.
Options:
A. Need to use FIFO queue
B. Integrate Lambda functions with API Gateway
C. Set the IAM role to a Lambda function
D. Set security group to Lambda function
Answer: C
Explanation
If your Lambda function needs to access other AWS resources, it must have an IAM role that grants access to those services. Here, the Lambda function needs access to SQS. Therefore, option C is the correct answer.
Use the IAM role for Lambda permissions. To grant permissions to other accounts that use your Lambda resource, or to grant permissions to other AWS resources, set the policy that applies to the resource itself in your IAM role.
Your Amazon SQS role must include the following permissions:
lambda:CreateEventSourceMapping
lambda:ListEventSourceMappings
lambda:ListFunctions
The Lambda execution role must include the following permissions:
sqs:DeleteMessage
sqs:GetQueueAttributes
sqs:ReceiveMessage
If you want to associate an encrypted queue with your Lambda function, add the kms:Decrypt permission to your Lambda execution role.
Option A is incorrect. Standard queues can also be used with Lambda; a FIFO queue is not required.
Option B is incorrect. You don’t need to integrate your Lambda function with API Gateway because the Lambda function is triggered by the SQS queue.
Option D is incorrect. You don’t need to set a security group on your Lambda function.
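Expressed as a policy for the Lambda execution role, the SQS permissions listed above would look roughly like this (the region, account ID and queue name my-queue are placeholders):

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes"
],
"Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
}
]
}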

Question 14:
A developer created an application that uses Amazon EC2 and an Amazon RDS MySQL database instance. The developer stored the database user name and password in a configuration file on the root EBS volume of the EC2 application instance. A Solutions Architect has been asked to design a more secure solution.
What should the Solutions Architect do to achieve this requirement?
Options:
A. Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume
B. Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database
C. Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance
D. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance
Answer: D
Explanation
The key problem here is having plain text credentials stored in a file. Even if you encrypt the volume there is still a security risk, as the credentials are loaded by the application and passed to RDS.
The best way to secure this solution is to get rid of the credentials completely by using an IAM role instead. The IAM role can be assigned permissions to the database instance and can be attached to the EC2 instance. The instance will then obtain temporary security credentials from AWS STS which is much more secure.
CORRECT: “Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance” is the correct answer.
INCORRECT: “Move the configuration file to an Amazon S3 bucket. Create an IAM role with permission to the bucket and attach it to the EC2 instance” is incorrect. This just relocates the file; the contents are still unsecured and must be loaded by the application and passed to RDS. This is an insecure process.
INCORRECT: “Attach an additional volume to the EC2 instance with encryption enabled. Move the configuration file to the encrypted volume” is incorrect. This will only encrypt the file at rest, it still must be read, and the contents passed to RDS which is insecure.
INCORRECT: “Install an Amazon-trusted root certificate on the application instance and use SSL/TLS encrypted connections to the database” is incorrect. The file is still unsecured on the EBS volume so encrypting the credentials in an encrypted channel between the EC2 instance and RDS does not solve all security issues.
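One possible sketch of the role’s permission policy, assuming the RDS MySQL instance has IAM database authentication enabled (the resource ID db-ABCDEFGHIJKL01234 and the database user app_user are placeholders):

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "rds-db:connect",
"Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL01234/app_user"
}
]
}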

Question 15:
A company requires that all AWS IAM user accounts have specific complexity requirements and minimum password length.
How should a Solutions Architect accomplish this?
Options:
A. Set a password policy for each IAM user in the AWS account
B. Create an IAM policy that enforces the requirements and apply it to all users
C. Set a password policy for the entire AWS account
D. Use an AWS config rule to enforce the requirements when creating user accounts.
Answer: C
Explanation
The easiest way to enforce this requirement is to update the password policy that applies to the entire AWS account. When you create or change a password policy, most of the password policy settings are enforced the next time your users change their passwords. However, some of the settings are enforced immediately such as the password expiration period.
CORRECT: “Set a password policy for the entire AWS account” is the correct answer.
INCORRECT: “Set a password policy for each IAM user in the AWS account” is incorrect. There’s no need to set an individual password policy for each user, it will be easier to set the policy for everyone.
INCORRECT: “Create an IAM policy that enforces the requirements and apply it to all users” is incorrect. As there is no specific targeting required it is easier to update the account password policy.
INCORRECT: “Use an AWS Config rule to enforce the requirements when creating user accounts” is incorrect. You cannot use AWS Config to enforce the password requirements at the time of creating a user account.

Question 16:
An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account.
As a solutions architect, which of the following steps would you recommend?
Options:
A. Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment
B. Both IAM roles and IAM users can be used interchangeably for cross-account access
C. It is not possible to access cross-account resources
D. Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
Answer: D
Explanation
Correct option:
Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
IAM roles allow you to delegate access to users or services that normally don’t have access to your organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. Consequently, you don’t have to share long-term credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
Incorrect options:
Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment – There is no need to create new IAM user credentials for the production environment, as you can use IAM roles to access cross-account resources.
It is not possible to access cross-account resources – You can use IAM roles to access cross-account resources.
Both IAM roles and IAM users can be used interchangeably for cross-account access – IAM roles and IAM users are separate IAM entities and should not be mixed. Only IAM roles can be used to access cross-account resources.

Question 17:
A large IT company wants to federate its workforce into AWS accounts and business applications.
Which of the following AWS services can help build a solution for this requirement? (Select two)
Options:
A. Use AWS Organizations
B. Use Multi-Factor Authentication
C. Use AWS Identity and Access Management(IAM)
D. Use AWS Security Token Service (AWS STS) to get temporary security credentials
E. Use AWS Single Sign-On(SSO)
Answer: C & E
Explanation
Correct options:
Use AWS Single Sign-On (SSO)
Use AWS Identity and Access Management (IAM)
Identity federation is a system of trust between two parties for the purpose of authenticating users and conveying the information needed to authorize their access to resources. In this system, an identity provider (IdP) is responsible for user authentication, and a service provider (SP), such as a service or an application, controls access to resources. By administrative agreement and configuration, the SP trusts the IdP to authenticate users and relies on the information provided by the IdP about them. After authenticating a user, the IdP sends the SP a message, called an assertion, containing the user’s sign-in name and other attributes that the SP needs to establish a session with the user and to determine the scope of resource access that the SP should grant. Federation is a common approach to building access control systems that manage users centrally within a central IdP and govern their access to multiple applications and services acting as SPs.
You can use two AWS services to federate your workforce into AWS accounts and business applications: AWS Single Sign-On (SSO) or AWS Identity and Access Management (IAM). AWS SSO is a great choice to help you define federated access permissions for your users based on their group memberships in a single centralized directory. If you use multiple directories or want to manage the permissions based on user attributes, consider AWS IAM as your design alternative.
Incorrect options:
Use Multi-Factor Authentication – AWS multi-factor authentication (AWS MFA) provides an extra level of security that you can apply to your AWS environment. You can enable AWS MFA for your AWS account and for individual AWS Identity and Access Management (IAM) users you create under your account. MFA adds another layer of security to IAM and is not a stand-alone service.
Use AWS Security Token Service (AWS STS) to get temporary security credentials – Temporary security credentials consist of the AWS access key ID, secret access key, and security token. Temporary security credentials are valid for a specified duration and for a specific set of permissions. If you’re making direct HTTPS API requests to AWS, you can sign those requests with the temporary security credentials that you get from AWS Security Token Service (AWS STS). STS is not a federation service.
Use AWS Organizations – AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. It does not offer federation capability, as is needed in the use case.

Question 18:
An IT company wants to review its security best-practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature.
Which is the MOST effective way to address this issue so that such incidents do not recur?
Options:
A. The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur
B. Only root user should have full database access in the organization
C. Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
D. Remove full database access for all IAM users in the organization
Answer: C
Explanation
Correct option:
Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
A permissions boundary can be used to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. Therefore, using the permissions boundary offers the right solution for this use-case.
Incorrect options:
Remove full database access for all IAM users in the organization – It is not practical to remove full access for all IAM users in the organization because a select set of users need this access for database administration. So this option is not correct.
The CTO should review the permissions for each new developer’s IAM user so that such incidents don’t recur – Likewise the CTO is not expected to review the permissions for each new developer’s IAM user, as this is best done via an automated procedure. This option has been added as a distractor.
Only root user should have full database access in the organization – As a best practice, the root user should not access the AWS account to carry out any administrative procedures. So this option is not correct.
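As a sketch, a permissions boundary for new developers could be a managed policy like the one below (the allowed actions are illustrative); even if a developer is later granted full DynamoDB access in an identity-based policy, the effective permissions remain the intersection with this boundary, so destructive actions such as deleting tables stay blocked:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DeveloperBoundary",
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:Query",
"dynamodb:PutItem",
"s3:GetObject"
],
"Resource": "*"
}
]
}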

Question 19:
A development team requires permissions to list an S3 bucket and delete objects from that bucket. A systems administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows the principle of least privilege.

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket"
],
"Effect": "Allow"
}
]
}

Which statement should a solutions architect add to the policy to address this issue?
Answer:
{
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
],
"Effect": "Allow"
}
The main elements of a policy statement are:
Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect defined here).
Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service (DeleteObject is the action defined here).
Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format ( example-bucket/* is the resource defined here).
This policy provides the necessary delete permissions on the resources of the S3 bucket to the group.
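For reference, a cleaned-up end state of the policy (with the redundant s3:DeleteObject removed from the bucket-level statement, in line with least privilege) would read:

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::example-bucket"
],
"Effect": "Allow"
},
{
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
],
"Effect": "Allow"
}
]
}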

Question 20:
An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
Options:
A. Encrypt the access keys and save them on Amazon S3
B. Create AWS account root user access keys and share those keys only with the business owner
C. Enable Multi Factor Authentication (MFA) for the AWS account root user account
D. Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future
E. Create a strong password for the AWS account root user
Answer: C & E
Explanation
Correct options:
Create a strong password for the AWS account root user
Enable Multi Factor Authentication (MFA) for the AWS account root user account
Here are some of the best practices while creating an AWS account root user:
1) Use a strong password to help protect account-level access to the AWS Management Console. 2) Never share your AWS account root user password or access keys with anyone. 3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3. 4) If you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. 5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.
Incorrect options:
Encrypt the access keys and save them on Amazon S3 – AWS recommends that if you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. Even an encrypted access key for the root user poses a significant security risk. Therefore, this option is incorrect.
Create AWS account root user access keys and share those keys only with the business owner – AWS recommends that if you don’t already have an access key for your AWS account root user, don’t create one unless you absolutely need to. Hence, this option is incorrect.
Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in future – AWS recommends that you should never share your AWS account root user password or access keys with anyone. Sending an email with AWS account root user credentials creates a security risk as it can be misused by anyone reading the email. Hence, this option is incorrect.

Question 21:
A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management.
As a solutions architect, which best practices would you recommend (Select two)?
Options:
A. Create a minimum number of accounts and share these account credentials among employees
B. Configure AWS CloudTrail to record all account activity
C. Enable MFA for privileged users
D. Grant maximum privileges to avoid assigning privileges again
E. Use user credentials to provide access specific permissions for Amazon EC2 instances
Answer: B & C
Explanation
Correct options:
Enable MFA for privileged users – As per the AWS best practices, it is better to enable Multi Factor Authentication (MFA) for privileged users via an MFA-enabled mobile device or hardware MFA token.
Configure AWS CloudTrail to record all account activity – AWS recommends to turn on CloudTrail to log all IAM actions for monitoring and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account credentials among employees – AWS recommends that user account credentials should not be shared between users. So, this option is incorrect.
Grant maximum privileges to avoid assigning privileges again – AWS recommends granting the least privileges required to complete a certain job and avoid giving excessive privileges which can be misused. So, this option is incorrect.
Use user credentials to provide access specific permissions for Amazon EC2 instances – It is highly recommended to use roles to grant access permissions for EC2 instances working on different AWS services. So, this option is incorrect.

Question 22:
What does this IAM policy do?

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Mystery Policy",
"Action": [
"ec2:RunInstances"
],
"Effect": "Allow",
"Resource": "*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "34.50.31.0/24"
}
}
}
]
}

• It allows starting EC2 instances only when they have a Public IP within the 34.50.31.0/24 CIDR block
• It allows starting EC2 instances only when the IP where the call originates is within the 34.50.31.0/24 CIDR block (Correct)
• It allows starting EC2 instances only when they have a Private IP within the 34.50.31.0/24 CIDR block
• It allows starting EC2 instances only when they have an Elastic IP within the 34.50.31.0/24 CIDR block
Explanation
Correct option:
It allows starting EC2 instances only when the IP where the call originates is within the 34.50.31.0/24 CIDR block
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
Consider the following snippet from the given policy document:
"Condition": {
"IpAddress": {
"aws:SourceIp": "34.50.31.0/24"
}
}
The aws:SourceIp key in this condition always represents the IP of the caller of the API. That is very helpful if you want to restrict access to certain AWS APIs, for example, to calls made from the public IP of your on-premises infrastructure.
Please see this overview of Elastic vs Public vs Private IP addresses:
Elastic IP address – An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
Private IP address – A private IPv4 address is an IP address that’s not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same VPC.
Public IP address – A public IP address is an IPv4 address that’s reachable from the Internet. You can use public addresses for communication between your instances and the Internet.
Please note 34.50.31.0/24 is a public IP range, not a private IP range. Private IP ranges are: 192.168.0.0 – 192.168.255.255 (65,536 IP addresses), 172.16.0.0 – 172.31.255.255 (1,048,576 IP addresses), and 10.0.0.0 – 10.255.255.255 (16,777,216 IP addresses).
Incorrect options:
It allows starting EC2 instances only when they have a Public IP within the 34.50.31.0/24 CIDR block
It allows starting EC2 instances only when they have an Elastic IP within the 34.50.31.0/24 CIDR block
It allows starting EC2 instances only when they have a Private IP within the 34.50.31.0/24 CIDR block
Each of these three options suggests that the IP addresses of the EC2 instances must belong to the 34.50.31.0/24 CIDR block for the EC2 instances to start. Actually, the policy states that the EC2 instance should start only when the IP where the call originates is within the 34.50.31.0/24 CIDR block. Hence these options are incorrect.

Question 23:
What does this IAM policy do?

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Mystery Policy",
"Action": [
"ec2:RunInstances"
],
"Effect": "Allow",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": "eu-west-1"
}
}
}
]
}

A• It allows running EC2 instances in any region when the API call is originating from the eu-west-1 region
B• It allows running EC2 instances anywhere but in the eu-west-1 region
C• It allows running EC2 instances in the eu-west-1 region, when the API call is made from the eu-west-1 region
D• It allows running EC2 instances only in the eu-west-1 region, and the API call can be made from anywhere in the world
Answer: D
Explanation
Correct option:
It allows running EC2 instances only in the eu-west-1 region, and the API call can be made from anywhere in the world
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
You can use the aws:RequestedRegion key to compare the AWS Region that was called in the request with the Region that you specify in the policy. You can use this global condition key to control which Regions can be requested.
aws:RequestedRegion represents the target of the API call. So in this example, we can only launch EC2 instances in eu-west-1, and we can do this API call from anywhere.
Incorrect options:
It allows running EC2 instances anywhere but in the eu-west-1 region
It allows running EC2 instances in any region when the API call is originating from the eu-west-1 region
It allows running EC2 instances in the eu-west-1 region when the API call is made from the eu-west-1 region
These three options contradict the earlier details provided in the explanation. To summarize, aws:RequestedRegion represents the target of the API call. So, we can only launch EC2 instances in eu-west-1 region and we can do this API call from anywhere. Hence these options are incorrect.

Question 24:
You have a team of developers in your company, and you would like to ensure they can quickly experiment with AWS Managed Policies by attaching them to their accounts, but you would like to prevent them from doing an escalation of privileges, by granting themselves the AdministratorAccess managed policy. How should you proceed?
A• Attach an IAM policy to your developers, that prevents them from attaching the AdministratorAccess policy
B• Create a Service Control Policy (SCP) on your AWS account that restricts developers from attaching themselves the AdministratorAccess policy
C• For each developer, define an IAM permission boundary that will restrict the managed policies they can attach to themselves
D• Put the developers into an IAM group, and then define an IAM permission boundary on the group that will restrict the managed policies they can attach to themselves
Answer: C
Explanation
Correct option:
For each developer, define an IAM permission boundary that will restrict the managed policies they can attach to themselves
AWS supports permissions boundaries for IAM entities (users or roles). A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries. Here we have to use an IAM permission boundary. They can only be applied to roles or users, not IAM groups.
Incorrect options:
Create a Service Control Policy (SCP) on your AWS account that restricts developers from attaching themselves the AdministratorAccess policy – Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled. SCPs aren’t available if your organization has enabled only the consolidated billing features. Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform. Since AWS Organizations is not mentioned in this question, an SCP cannot be applied here, so this option is incorrect.
Attach an IAM policy to your developers, that prevents them from attaching the AdministratorAccess policy – This option is incorrect as the developers can remove this policy from themselves and escalate their privileges.
Put the developers into an IAM group, and then define an IAM permission boundary on the group that will restrict the managed policies they can attach to themselves – IAM permission boundary can only be applied to roles or users, not IAM groups. Hence this option is incorrect.
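A rough sketch of such a permissions boundary follows (the allowed services are illustrative). It permits developers to attach and detach managed policies on their own users for experimentation, but deliberately omits iam:PutUserPermissionsBoundary and iam:DeleteUserPermissionsBoundary so they cannot change or remove the boundary, and even attaching AdministratorAccess only yields the intersection with this boundary. Note that the boundary itself grants nothing; the developers still need identity-based policies for the actions they use.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowedServices",
"Effect": "Allow",
"Action": [
"s3:*",
"lambda:*",
"dynamodb:*",
"cloudwatch:*"
],
"Resource": "*"
},
{
"Sid": "AllowSelfServicePolicyExperiments",
"Effect": "Allow",
"Action": [
"iam:AttachUserPolicy",
"iam:DetachUserPolicy",
"iam:ListAttachedUserPolicies"
],
"Resource": "*"
}
]
}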

Question 25:
You would like to store a database password in a secure place, and enable automatic rotation of that password every 90 days. What do you recommend?
A• Key Management Service (KMS)
B• CloudHSM
C• Secrets Manager
D• SSM Parameter Store
Answer: C
Explanation
Correct option:
“Secrets Manager”
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB. The correct answer here is Secrets Manager
Incorrect options:
“KMS” – AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify a customer-managed CMK that you have already created. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. KMS is an encryption service, it’s not a secrets store. So this option is incorrect.
“CloudHSM” – AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM is standards-compliant and enables you to export all of your keys to most other commercially-available HSMs, subject to your configurations. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.
CloudHSM is also an encryption service, not a secrets store. So this option is incorrect.
“SSM Parameter Store” – AWS Systems Manager Parameter Store (aka SSM Parameter Store) provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, EC2 instance IDs, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter.
SSM Parameter Store can serve as a secrets store, but you must rotate the secrets yourself, it doesn’t have an automatic capability for this. So this option is incorrect.

Question 26:
Which of the following IAM policies provides read-only access to the S3 bucket mybucket and its content?
A•

{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket"
},
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
}
]
}

B•

{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket/*"
}
]
}

C•

{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
}
]
}

D•

{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListBucket"
],
"Resource":"arn:aws:s3:::mybucket/*"
},
{
"Effect":"Allow",
"Action":[
"s3:GetObject"
],
"Resource":"arn:aws:s3:::mybucket"
}
]
}

Answer: A
Explanation
Correct option:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::mybucket"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket/*"
}
]
}
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
s3:ListBucket is applied to buckets, so the ARN is in the form "Resource":"arn:aws:s3:::mybucket", without a trailing /*. s3:GetObject is applied to objects within the bucket, so the ARN is in the form "Resource":"arn:aws:s3:::mybucket/*", with a trailing /* to indicate all objects within the bucket mybucket.
Therefore, this is the correct option.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket"
}
]
}
This option is incorrect as it provides read-only access only to the bucket, not its contents.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket/*"
}
]
}
This option is incorrect as it provides read-only access only to the bucket contents, not to the bucket itself.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::mybucket/*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::mybucket"
}
]
}
This option is incorrect because both statements use the wrong resource type: s3:ListBucket must be applied to the bucket ARN (not mybucket/*) and s3:GetObject must be applied to object ARNs (not the bucket ARN), so neither permission takes effect.


Question 27:
An AWS Organization is using Service Control Policies (SCP) for central control over the maximum available permissions for all accounts in their organization. This allows the organization to ensure that all accounts stay within the organization’s access control guidelines.
Which of the given scenarios are correct regarding the permissions described below? (Select three)
A• If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can still perform that action
B• SCPs affect all users and roles in attached accounts, excluding the root user
C• If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action
D• SCPs do not affect service-linked roles
E• SCPs affect all users and roles in attached accounts, including the root user
F• SCPs affect service-linked roles
Answer: C, D & E
Explanation
Correct options:
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action
SCPs affect all users and roles in attached accounts, including the root user
SCPs do not affect service-linked roles
Service control policies (SCPs) are one type of policy that can be used to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.
In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions. These restrictions even override the administrators of member accounts in the organization.
Please note the following effects on permissions vis-a-vis the SCPs:
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action.
SCPs affect all users and roles in the attached accounts, including the root user.
SCPs do not affect any service-linked role.
Incorrect options:
If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can still perform that action
SCPs affect all users and roles in attached accounts, excluding the root user
SCPs affect service-linked roles
These three options contradict the details provided in the explanation above.
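As an illustrative sketch (SCPs use the same JSON syntax as IAM policies), an SCP such as the following could be attached to member accounts to deny tampering with CloudTrail, and it would override even administrator permissions in those accounts:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ProtectCloudTrail",
"Effect": "Deny",
"Action": [
"cloudtrail:StopLogging",
"cloudtrail:DeleteTrail"
],
"Resource": "*"
}
]
}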