A) Both A and B
B) None of these
C) VPC Addresses
D) EC2 Addresses
Correct Answer
verified
Multiple Choice
A) Create an AWS KMS key that allows the AWS Logs Delivery account to generate data keys for encryption. Configure S3 default encryption to use server-side encryption with KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key. Modify the KMS key policy to allow the log processing service to perform decrypt operations.
B) Create an AWS KMS key that allows the CloudFront service role to generate data keys for encryption. Configure S3 default encryption to use KMS managed keys (SSE-KMS) on the log storage bucket using the new KMS key.
C) Configure S3 default encryption to use AWS KMS managed keys (SSE-KMS) on the log storage bucket using the AWS managed S3 KMS key. Modify the KMS key policy to allow the CloudFront service role to generate data keys for encryption.
D) Create a new CodeCommit repository for the AWS KMS key template. Create an IAM policy to allow commits to the new repository and attach it to the data protection team's users. Create a new CodePipeline pipeline with a custom IAM role to perform KMS key updates using CloudFormation. Modify the KMS key policy to allow the CodePipeline IAM role to modify the key policy.
E) Use the existing CodeCommit repository for the AWS KMS key template. Modify the existing CodePipeline pipeline to use a custom IAM role and to perform KMS key updates using CloudFormation.
Correct Answer
verified
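The key-policy changes described in options A and B can be sketched as a policy document. This is a minimal illustration, not the exam's reference answer: the service principal, account ID, and role name below are placeholder assumptions.

```python
import json

# Sketch of a KMS key policy for SSE-KMS log delivery (assumptions: the
# principal names, account ID, and role name are illustrative placeholders).
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Let the log delivery service generate data keys so it can
            # write SSE-KMS-encrypted objects into the log bucket.
            "Sid": "AllowLogDeliveryEncryption",
            "Effect": "Allow",
            "Principal": {"Service": "delivery.logs.amazonaws.com"},
            "Action": ["kms:GenerateDataKey*"],
            "Resource": "*",
        },
        {
            # Let the (hypothetical) log processing role decrypt the logs.
            "Sid": "AllowLogProcessorDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/log-processor"},
            "Action": ["kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

Note the split of duties: the writer needs only `kms:GenerateDataKey*`, while the reader needs only `kms:Decrypt`.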
Multiple Choice
A) The organization must enable the parameter in the console which makes the RDS instance publicly accessible.
B) The organization must allow access from the internet in the RDS VPC security group.
C) The organization must set up RDS with the subnet group which has an external IP.
D) The organization must enable the VPC attributes DNS hostnames and DNS resolution.
Correct Answer
verified
Multiple Choice
A) Amazon Relational Database Service (Amazon RDS)
B) Amazon ElastiCache
C) Amazon Glacier
D) AWS Import/Export
Correct Answer
verified
Multiple Choice
A) Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
B) Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
C) Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.
D) Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.
Correct Answer
verified
Multiple Choice
A) Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
B) Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use presigned cookies for authorization.
C) Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization.
D) Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
Correct Answer
verified
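The cache-behavior tuning in the options above comes down to which headers and cookies each behavior forwards to the origin. A minimal sketch of the two-behavior split, using the field names of the CloudFront distribution config (the path patterns and cookie name are illustrative assumptions):

```python
# Sketch of two CloudFront cache behaviors (field names follow the CloudFront
# distribution config; path patterns and the cookie name are assumptions).
static_behavior = {
    "PathPattern": "/static/*",
    "ForwardedValues": {
        # Forward nothing that varies per user: static objects cache well.
        "Cookies": {"Forward": "none"},
        "Headers": {"Quantity": 0, "Items": []},
    },
}

dynamic_behavior = {
    "PathPattern": "/api/*",
    "ForwardedValues": {
        # Dynamic content still needs the session cookie and the
        # Authorization header, but not User-Agent, which fragments the cache.
        "Cookies": {
            "Forward": "whitelist",
            "WhitelistedNames": {"Quantity": 1, "Items": ["session-id"]},
        },
        "Headers": {"Quantity": 1, "Items": ["Authorization"]},
    },
}
```

Every whitelisted header or cookie becomes part of the cache key, so each removal directly raises the cache hit ratio.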
Multiple Choice
A) Use Storage Gateway and configure it to use Gateway Cached volumes.
B) Configure your backup software to use S3 as the target for your data backups.
C) Configure your backup software to use Glacier as the target for your data backups.
D) Use Storage Gateway and configure it to use Gateway Stored volumes.
Correct Answer
verified
Multiple Choice
A) Create a serverless front end using a static Amazon S3 website to allow the data scientists to request a Jupyter notebook instance by filling out a form. Use Amazon API Gateway to receive requests from the S3 website and trigger a central AWS Lambda function to make an API call to Amazon SageMaker that will launch a notebook instance with a preconfigured KMS key for the data scientists. Then call back to the front-end website to display the URL to the notebook instance.
B) Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource type with a preconfigured KMS key. Add a user-friendly name to the CloudFormation template. Display the URL to the notebook using the Outputs section. Distribute the CloudFormation template to the data scientists using a shared Amazon S3 bucket.
C) Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource type with a preconfigured KMS key. Simplify the parameter names, such as the instance size, by mapping them to Small, Large, and X-Large using the Mappings section in CloudFormation. Display the URL to the notebook using the Outputs section, then upload the template into an AWS Service Catalog product in the data scientist's portfolio, and share it with the data scientist's IAM role.
D) Create an AWS CLI script that the data scientists can run locally. Provide step-by-step instructions about the parameters to be provided while executing the AWS CLI script to launch a Jupyter notebook with a preconfigured KMS key. Distribute the CLI script to the data scientists using a shared Amazon S3 bucket.
Correct Answer
verified
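Option C's template can be sketched as follows. This is an illustrative skeleton, not a deployable answer: the instance types, KMS alias, and role ARN are placeholder assumptions, and the output exposes the notebook instance name from which the console URL is derived.

```python
# Sketch of a CloudFormation template for a SageMaker notebook with
# friendly size names (instance types, key alias, and role ARN are
# illustrative assumptions).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "NotebookSize": {
            "Type": "String",
            "AllowedValues": ["Small", "Large", "XLarge"],
            "Default": "Small",
        },
    },
    "Mappings": {
        # Friendly size names mapped to concrete instance types.
        "SizeMap": {
            "Small": {"InstanceType": "ml.t3.medium"},
            "Large": {"InstanceType": "ml.m5.xlarge"},
            "XLarge": {"InstanceType": "ml.m5.4xlarge"},
        },
    },
    "Resources": {
        "Notebook": {
            "Type": "AWS::SageMaker::NotebookInstance",
            "Properties": {
                "InstanceType": {
                    "Fn::FindInMap": ["SizeMap", {"Ref": "NotebookSize"}, "InstanceType"]
                },
                "KmsKeyId": "alias/sagemaker-notebooks",  # preconfigured key (assumed alias)
                "RoleArn": "arn:aws:iam::111122223333:role/notebook-role",  # placeholder
            },
        },
    },
    "Outputs": {
        # The console URL is built from the notebook instance name.
        "NotebookName": {"Value": {"Fn::GetAtt": ["Notebook", "NotebookInstanceName"]}},
    },
}
```

Published as a Service Catalog product, the data scientists see only the `NotebookSize` parameter and the output, never the underlying resources.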
Multiple Choice
A) EBS bandwidth of dedicated instance exceeding the PIOPS
B) EBS volume size
C) EC2 bandwidth
D) Instance type is not EBS optimized
Correct Answer
verified
Multiple Choice
A) Use Amazon Route 53 failover routing with geolocation-based routing. Host the website on automatically scaled Amazon EC2 instances behind an Application Load Balancer with an additional Application Load Balancer and EC2 instances for the application layer in each region. Use a Multi-AZ deployment with MySQL as the data layer.
B) Use Amazon Route 53 round robin routing to distribute the load evenly to several regions with health checks. Host the website on automatically scaled Amazon ECS with AWS Fargate technology containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora replicas for the data layer.
C) Use Amazon Route 53 latency-based routing to route to the nearest region with health checks. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.
D) Use Amazon Route 53 geolocation-based routing. Host the website on automatically scaled AWS Fargate containers behind a Network Load Balancer with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora Multi-Master for Aurora MySQL as the data layer.
Correct Answer
verified
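The latency-based routing in option C can be sketched as one Route 53 record set per region. The domain name, IPs, and identifiers below are made-up placeholders; real deployments would typically use alias records pointing at regional endpoints instead of raw A records.

```python
# Sketch of a Route 53 change batch for latency-based routing
# (domain, identifiers, and IPs are illustrative placeholders).
def latency_record(region, ip):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": f"site-{region}",  # must be unique per record
            "Region": region,                   # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

change_batch = {
    "Changes": [
        latency_record("us-east-1", "203.0.113.10"),
        latency_record("eu-west-1", "203.0.113.20"),
    ]
}
```

Route 53 answers each query with the record for whichever listed region has the lowest measured latency from the resolver.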
Multiple Choice
A) You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time.
B) You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance.
C) The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy.
D) You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail.
E) You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).
Correct Answer
verified
Multiple Choice
A) No
B) Yes, you can but only if it is configured with Amazon Redshift.
C) Yes, you can, provided the ELB is configured with Amazon AppStream.
D) Yes
Correct Answer
verified
Multiple Choice
A) Launch the EC2 instances with only the public subnet.
B) Create routing rules which will route all inbound traffic from ELB to the EC2 instances.
C) Configure ELB and NAT as a part of the public subnet only.
D) Create routing rules which will route all outbound traffic from the EC2 instances through NAT.
Correct Answer
verified
Multiple Choice
A) They support both authenticated and unauthenticated identities.
B) They support only unauthenticated identities.
C) They support neither authenticated nor unauthenticated identities.
D) They support only authenticated identities.
Correct Answer
verified
Multiple Choice
A) Create an AWS account for each business unit. Move each business unit's instances to its own account and set up a federation to allow users to access their business unit's account.
B) Set up a federation to allow users to use their corporate credentials, and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.
C) Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.
D) Set up role-based access for each user and provide limited permissions based on individual roles and the services for which each user is responsible.
Correct Answer
verified
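The tag-scoped permission in option C can be sketched with the `ec2:ResourceTag` condition key. The tag key and value are illustrative assumptions; "Engineering" stands in for the caller's own business unit.

```python
# Sketch of an IAM policy that allows terminating only instances tagged
# with the caller's business unit (tag key and value are assumptions).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                # Matches only instances carrying the caller's own
                # BusinessUnit tag value.
                "StringEquals": {"ec2:ResourceTag/BusinessUnit": "Engineering"}
            },
        }
    ],
}
```

Paired with a tagging policy that enforces the `BusinessUnit` tag at launch, this confines terminations without splitting the environment into separate accounts or VPCs.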
Multiple Choice
A) Internal ELBs should only be launched within private subnets.
B) Amazon ELB service does not allow subnet selection; instead it will automatically select all the available subnets of the VPC.
C) Internal ELBs can support only one subnet in each availability zone.
D) An internal ELB can support all the subnets irrespective of their zones.
Correct Answer
verified
Multiple Choice
A) Use AWS Batch to configure the different tasks required to ship a package. Have AWS Batch trigger an AWS Lambda function that creates and prints a shipping label. Once that label is scanned, as it leaves the warehouse, have another Lambda function move the process to the next step in the AWS Batch job.
B) When a new order is created, store the order information in Amazon SQS. Have AWS Lambda check the queue every 5 minutes and process any needed work. When an order needs to be shipped, have Lambda print the label in the warehouse. Once the label has been scanned, as it leaves the warehouse, have an Amazon EC2 instance update Amazon SQS.
C) Update the application to store new order information in Amazon DynamoDB. When a new order is created, trigger an AWS Step Functions workflow, mark the orders as "in progress", and print a package label to the warehouse. Once the label has been scanned and fulfilled, the application will trigger an AWS Lambda function that will mark the order as shipped and complete the workflow.
D) Store new order information in Amazon EFS. Have instances pull the new information from the NFS and send that information to printers in the warehouse. Once the label has been scanned, as it leaves the warehouse, have Amazon API Gateway call the instances to remove the order information from Amazon EFS.
Correct Answer
verified
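The workflow in option C can be sketched in Amazon States Language. The state names and Lambda ARNs are illustrative assumptions, and a plain Task is shown where a callback-style task would normally pause the execution until the label scan arrives.

```python
# Sketch of a Step Functions definition for the order workflow
# (state names and Lambda ARNs are illustrative placeholders).
ACCOUNT = "arn:aws:lambda:us-east-1:111122223333:function"

state_machine = {
    "StartAt": "MarkInProgress",
    "States": {
        "MarkInProgress": {
            "Type": "Task",
            "Resource": f"{ACCOUNT}:mark-in-progress",
            "Next": "PrintLabel",
        },
        "PrintLabel": {
            "Type": "Task",
            "Resource": f"{ACCOUNT}:print-label",
            "Next": "WaitForScan",
        },
        # In practice this step would use a task token callback so the
        # execution pauses until the warehouse scans the label.
        "WaitForScan": {
            "Type": "Task",
            "Resource": f"{ACCOUNT}:await-scan",
            "Next": "MarkShipped",
        },
        "MarkShipped": {
            "Type": "Task",
            "Resource": f"{ACCOUNT}:mark-shipped",
            "End": True,
        },
    },
}
```

Each order becomes one execution, so stuck shipments are visible as executions parked in `WaitForScan`.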
Multiple Choice
A) Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group, and add all IAM users to the group.
B) Create a service control policy that denies access to the services. Add all of the new accounts to a single organizational unit (OU), and apply the policy to that OU.
C) Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM role, and instruct users to log in using their corporate credentials and assume the IAM role.
D) Create a service control policy that denies access to the services, and apply the policy to the root of the organization.
Correct Answer
verified
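The service control policy in options B and D can be sketched as a deny statement. The denied services below are illustrative assumptions; substitute whichever services must be blocked.

```python
# Sketch of a service control policy denying specific services
# (the service list is an illustrative assumption).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedServices",
            "Effect": "Deny",
            "Action": ["dynamodb:*", "rds:*"],
            "Resource": "*",
        }
    ],
}
# Attached to the OU holding the new accounts, this deny overrides any
# IAM policy granted inside those accounts.
```

Because SCPs are evaluated before account-level IAM policies, no administrator inside a member account can grant access back to a denied service.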
Multiple Choice
A) Place the image processing EC2 instance into an Auto Scaling group.
B) Use AWS Lambda to run the image processing tasks.
C) Use Amazon Rekognition for image processing.
D) Use Amazon CloudFront in front of ImageBucket.
E) Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
Correct Answer
verified