Want to know Actualtests AWS-Certified-DevOps-Engineer-Professional Exam practice test features? Want to learn more about the Amazon AWS Certified DevOps Engineer Professional certification experience? Study High value Amazon AWS-Certified-DevOps-Engineer-Professional answers to Most up-to-date AWS-Certified-DevOps-Engineer-Professional questions at Actualtests. Get a success with an absolute guarantee to pass the Amazon AWS-Certified-DevOps-Engineer-Professional (AWS Certified DevOps Engineer Professional) test on your first attempt.
Q21. You are creating an application which stores extremely sensitive financial information. All information in
the system must be encrypted at rest and in transit. Which of these is a violation of this policy?
A. ELB SSL termination.
B. ELB Using Proxy Protocol v1.
C. CloudFront Viewer Protocol Policy set to HTTPS redirection.
D. Telling S3 to use AES256 on the server-side.
Answer: A
Explanation:
Terminating SSL at the ELB means traffic between the load balancer and the back-end instances travels over plain HTTP, removing the "S" for "Secure" in HTTPS. This violates the "encryption in transit" requirement in the scenario.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
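For contrast, here is a minimal boto3 sketch of a Classic Load Balancer listener that keeps traffic encrypted on both legs, so no termination to plain HTTP occurs. The load balancer name and certificate ARN are hypothetical.

    import boto3

    elb = boto3.client("elb")

    # Hypothetical name and certificate ARN, for illustration only.
    elb.create_load_balancer_listeners(
        LoadBalancerName="finance-api-elb",
        Listeners=[{
            "Protocol": "HTTPS",           # viewer -> ELB stays encrypted
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTPS",   # ELB -> instance stays encrypted too
            "InstancePort": 443,
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/finance-api",
        }],
    )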
Q22. For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?
A. Terminating
B. Detaching
C. Terminating:Wait
D. EnteringStandby
Answer: A
Explanation:
When Auto Scaling responds to a scale in event, it terminates one or more instances. These instances are detached from the Auto Scaling group and enter the Terminating state.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
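A quick way to observe the transition is to poll the instance's lifecycle state; a minimal boto3 sketch, with a hypothetical instance ID:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Hypothetical instance ID, for illustration only.
    resp = autoscaling.describe_auto_scaling_instances(
        InstanceIds=["i-0abc123def456789a"]
    )
    for inst in resp["AutoScalingInstances"]:
        # During scale-in the state moves from InService to Terminating
        # (or Terminating:Wait if a lifecycle hook is attached).
        print(inst["InstanceId"], inst["LifecycleState"])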
Q23. You need your CI to build AMIs with code pre-installed on the images on every new code push. You need to do this as cheaply as possible. How do you do this?
A. Bid on spot instances just above the asking price as soon as new commits come in, perform all instance configuration and setup, then create an AMI based on the spot instance.
B. Have the CI launch a new on-demand EC2 instance when new commits come in, perform all instance configuration and setup, then create an AMI based on the on-demand instance.
C. Purchase a Light Utilization Reserved Instance to save money on the continuous integration machine. Use these credits whenever you create AMIs on instances.
D. When the CI instance receives commits, attach a new EBS volume to the CI machine. Perform all setup on this EBS volume so you don't need a new EC2 instance to create the AMI.
Answer: A
Explanation:
Spot instances are the cheapest option, and you can use minimum run duration if your AMI takes more than a few minutes to create.
Spot instances are also available to run for a predefined duration (in hourly increments up to six hours in length) at a significant discount (30-45%) compared to On-Demand pricing, plus an additional 5% during off-peak times, for a total of up to 50% savings.
Reference: https://aws.amazon.com/ec2/spot/pricing/
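A hedged boto3 sketch of option A's flow, with a hypothetical AMI ID, bid price, and image name: bid on a spot instance, wait for fulfilment, configure it, then bake an AMI.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical base AMI and bid price, for illustration only.
    spot = ec2.request_spot_instances(
        SpotPrice="0.05",              # bid just above the current spot price
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # base image to configure
            "InstanceType": "m3.medium",
        },
    )
    request_id = spot["SpotInstanceRequests"][0]["SpotInstanceRequestId"]

    # Wait for fulfilment; instance configuration/setup would run here.
    ec2.get_waiter("spot_instance_request_fulfilled").wait(
        SpotInstanceRequestIds=[request_id]
    )
    instance_id = ec2.describe_spot_instance_requests(
        SpotInstanceRequestIds=[request_id]
    )["SpotInstanceRequests"][0]["InstanceId"]

    # Bake the configured instance into an AMI.
    image = ec2.create_image(InstanceId=instance_id, Name="ci-build-2016-01-01")
    print(image["ImageId"])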
Q24. What is the maximum supported single-volume throughput on EBS?
A. 320MiB/s
B. 160MiB/s
C. 40MiB/s
D. 640MiB/s
Answer: A
Explanation:
The ceiling throughput for PIOPS on EBS is 320MiB/s.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Q25. Which of these is not an intrinsic function in AWS CloudFormation?
A. Fn::Equals
B. Fn::If
C. Fn::Not
D. Fn::Parse
Answer: D
Explanation:
This is the complete list of intrinsic functions: Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select, Ref
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
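To make the distinction concrete, here is a hypothetical template fragment, written as a Python dict, that combines two of the real functions from the list, Fn::Equals and Fn::If. The parameter, condition, and resource names are made up for illustration.

    import json

    # Hypothetical fragment: pick an instance type based on an "EnvType"
    # parameter, using real intrinsic functions from the answer's list.
    template_fragment = {
        "Conditions": {
            "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}
        },
        "Resources": {
            "AppInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": {"Fn::If": ["IsProd", "m3.large", "t2.micro"]}
                },
            }
        },
    }

    print(json.dumps(template_fragment, indent=2))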
Q26. Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this?
A. Create an S3 bucket and asynchronously replicate common requests responses into S3 objects. When a request comes in for a precomputed response, redirect to AWS S3.
B. Create another ELB and Auto Scaling Group layer mounted on top of the other system, adding a tier to the system. Serve most read requests out of the top layer.
C. Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviours to proxy cache requests which can be served late.
D. Create a Memcached cluster in AWS ElastiCache. Create cache logic to serve requests which can be served late from the in-memory cache for increased performance.
Answer: C
Explanation:
CloudFront is ideal for scenarios in which entire requests can be served out of a cache and usage patterns involve heavy reads and spikiness in demand.
A cache behavior is the set of rules you configure for a given URL pattern based on file extensions, file names, or any portion of a URL path on your website (e.g., *.jpg). You can configure multiple cache behaviors for your web distribution. Amazon CloudFront will match incoming viewer requests with your list of URL patterns, and if there is a match, the service will honor the cache behavior you configure for that URL pattern. Each cache behavior can include the following Amazon CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.
Reference: https://aws.amazon.com/cloudfront/dynamic-content/
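As an illustrative sketch only, the relevant piece of a distribution configuration is a cache behavior keyed on a URL pattern and pointing at the ELB origin; the path pattern and origin ID below are hypothetical, expressed as the Python dict you would include in a boto3 DistributionConfig.

    # Fragment of a CloudFront DistributionConfig (as passed to boto3's
    # cloudfront create_distribution); names below are hypothetical.
    cache_behavior = {
        "PathPattern": "/articles/*",        # the read-heavy, spiky URLs
        "TargetOriginId": "my-elb-origin",   # custom origin pointing at the ELB
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 60,                        # serve from cache for at least 60s
        "ForwardedValues": {
            "QueryString": True,
            "Cookies": {"Forward": "none"},  # don't vary the cache on cookies
        },
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    }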
Q27. You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system which pings over HTTP from outside AWS recorded a large spike in latency on your 3-tier web service API.
You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer.
Which of the following techniques will NOT help you figure out what happened?
A. Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
B. Review CloudWatch Metrics graphs to determine which component(s) slowed the system down.
C. Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.
D. Analyze your logs to detect bursts in traffic at that time.
Answer: B
Explanation:
Metrics data are available for 2 weeks. If you want to keep metrics data beyond that duration, you must retrieve it before it expires using the GetMetricStatistics API or one of the applications and tools offered by AWS partners. Because the spike occurred 3 weeks ago, the metric data has already aged out, so reviewing CloudWatch Metrics graphs cannot reveal what happened.
Reference: https://aws.amazon.com/cloudwatch/faqs/
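For illustration, a boto3 sketch of the GetMetricStatistics call the FAQ mentions, with a hypothetical load balancer name; pointed three weeks back, it returns no datapoints, which is the crux of the answer.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical ELB name; the window covers the spike 3 weeks back.
    end = datetime.utcnow() - timedelta(days=21)
    start = end - timedelta(hours=6)

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ELB",
        MetricName="Latency",
        Dimensions=[{"Name": "LoadBalancerName", "Value": "api-elb"}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average", "Maximum"],
    )
    # With a 2-week retention window, this comes back empty for a
    # 3-week-old spike, which is why answer B won't help.
    print(resp["Datapoints"])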
Q28. When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?
A. 24/7 instances
B. Spot instances
C. Time-based instances
D. Load-based instances
Answer: B
Explanation:
AWS OpsWorks supports the following instance types, which are characterized by how they are started and stopped. 24/7 instances are started manually and run until you stop them. Time-based instances are run by AWS OpsWorks on a specified daily and weekly schedule. They allow your stack to automatically adjust the number of instances to accommodate predictable usage patterns. Load-based instances are automatically started and stopped by AWS OpsWorks, based on specified load metrics, such as CPU utilization. They allow your stack to automatically adjust the number of instances to accommodate variations in incoming traffic. Load-based instances are available only for Linux-based stacks.
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html
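For illustration, a hedged boto3 sketch of allocating a time-based instance in a layer; the stack and layer IDs are hypothetical. Note that AutoScalingType accepts only "timer" or "load", and omitting it yields a 24/7 instance; there is no spot option, which is why answer B is correct.

    import boto3

    opsworks = boto3.client("opsworks")

    # Hypothetical stack/layer IDs, for illustration only.
    opsworks.create_instance(
        StackId="2f18b4cb-4de5-4c33-a28a-000000000000",
        LayerIds=["ac8df9a4-184c-4b85-a8fa-000000000000"],
        InstanceType="c3.large",
        AutoScalingType="timer",   # "timer" = time-based, "load" = load-based
    )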
Q29. Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced
a large increase in traffic to a sustained 400 requests per second, and dramatically increased in failure rates. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue?
A. Your API Gateway deployment is throttling your requests.
B. Your AWS API Gateway Deployment is bottlenecking on request (de)serialization.
C. You did not request a limit increase on concurrent Lambda function executions.
D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.
Answer: C
Explanation:
AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.
AWS Lambda: Concurrent requests safety throttle per account -> 100
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_lambda
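The arithmetic behind the answer, as a quick check (Little's law: required concurrency equals arrival rate times average request duration):

    # Back-of-envelope check using the numbers from the scenario.
    request_rate = 400    # requests per second
    duration = 0.5        # seconds per request, on average

    required_concurrency = request_rate * duration   # = 200
    default_lambda_limit = 100                       # default safety throttle

    print(required_concurrency > default_lambda_limit)  # True -> throttled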
Q30. Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?
A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online.
C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.
Answer: D
Explanation:
Latency Based Records distribute requests across both regions when all is well, and the Failover component enables fallback between regions. By adding the ELB and Auto Scaling Group, the system in the surviving region can expand to meet 100% of demand instead of its original fraction whenever failover occurs.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
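A hedged boto3 sketch of option D's DNS layer, with hypothetical zone ID, health check IDs, and ELB values: two latency-based alias records, one per region, each health-checked so Route53 fails over to the surviving region.

    import boto3

    route53 = boto3.client("route53")

    # All identifiers below are hypothetical, for illustration only.
    def latency_alias(region, elb_dns, elb_zone_id, health_check_id):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com.",
                "Type": "A",
                "SetIdentifier": region,
                "Region": region,                  # latency-based routing
                "HealthCheckId": health_check_id,  # pull region out on failure
                "AliasTarget": {
                    "HostedZoneId": elb_zone_id,
                    "DNSName": elb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [
            latency_alias("us-east-1", "use1-elb.example.aws.com.",
                          "ZELBUSE1EXAMPLE", "hc-use1-example"),
            latency_alias("eu-west-1", "euw1-elb.example.aws.com.",
                          "ZELBEUW1EXAMPLE", "hc-euw1-example"),
        ]},
    )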
Q31. You are designing an enterprise data storage system. Your data management software system requires mountable disks and a real filesystem, so you cannot use S3 for storage. You need persistence, so you will be using AWS EBS Volumes for your system. The system needs the lowest-cost storage possible, and access is infrequent, low-throughput, and mostly sequential reads. Which is the most appropriate EBS Volume Type for this scenario?
A. gp1
B. io1
C. standard
D. gp2
Answer: C
Explanation:
Standard volumes, also called Magnetic volumes, are best for cold workloads where data is infrequently accessed, or scenarios where the lowest storage cost is important.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
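For completeness, a minimal boto3 sketch of provisioning such a volume; the Availability Zone and size are hypothetical.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical AZ and size. VolumeType "standard" selects Magnetic,
    # the lowest-cost option for infrequent, mostly sequential access.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=500,                 # GiB
        VolumeType="standard",
    )
    print(volume["VolumeId"])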