
AWS-Certified-DevOps-Engineer-Professional Exam

How Does the Examcollection Amazon AWS-Certified-DevOps-Engineer-Professional Real Exam Work?




Pass4sure AWS-Certified-DevOps-Engineer-Professional questions are updated and all AWS-Certified-DevOps-Engineer-Professional answers are verified by experts. Once you have completely prepared with our AWS-Certified-DevOps-Engineer-Professional exam prep kits, you will be ready for the real AWS-Certified-DevOps-Engineer-Professional exam without a problem. We have the most recent Amazon AWS-Certified-DevOps-Engineer-Professional dumps study guide. Passed AWS-Certified-DevOps-Engineer-Professional on the first attempt! Here is what I did.

Q1. You need to process long-running jobs once and only once. How might you do this?

A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.

B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.

C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.

D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.

Answer: C

Explanation:

The visibility timeout defines how long after a successful receive request SQS waits before allowing the message to be seen again by other consumers; setting it longer than the job's running time prevents duplicate processing.

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
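
A minimal sketch of the pattern, assuming boto3 and a placeholder queue URL; process_job is a hypothetical handler standing in for the long-running work. The visibility timeout is set longer than the job's expected duration, and the message is deleted only after success, so no other consumer sees it mid-flight.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

    def process_job(body):
        pass  # long-running work would go here

    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=3600,  # hide the message for 1 hour while the job runs
    )
    for msg in resp.get("Messages", []):
        process_job(msg["Body"])
        # Deleting only after success keeps the job from being processed twice.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])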


Q2. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?

A. Kinesis Firehose + RDS

B. Kinesis Firehose + RedShift

C. EMR using Hive

D. EMR running Apache Spark 

Answer: B

Explanation:

Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily.

Reference: https://aws.amazon.com/kinesis/firehose/details/
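
As an illustration (the stream name and record fields are assumptions), producers only need a single PutRecord call; the delivery stream handles the buffering and the COPY into whichever RedShift cluster it was configured with.

    import json
    import boto3

    firehose = boto3.client("firehose")

    event = {"user_id": 42, "event": "click", "ts": "2016-01-01T00:00:00Z"}
    firehose.put_record(
        DeliveryStreamName="clickstream-to-redshift",  # assumed stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )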


Q3. What is the scope of an EC2 security group?

A. Availability Zone

B. Placement Group

C. Region

D. VPC

Answer: C

Explanation:

A security group is tied to a region and can be assigned only to instances in the same region. You can't enable an instance to communicate with an instance outside its region using security group rules. Traffic from an instance in another region is seen as WAN bandwidth.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html
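
A quick way to see the regional scope (a sketch assuming boto3 credentials are configured): clients bound to different regions return completely independent sets of security groups.

    import boto3

    for region in ("us-east-1", "eu-west-1"):
        ec2 = boto3.client("ec2", region_name=region)
        groups = ec2.describe_security_groups()["SecurityGroups"]
        print(region, [g["GroupId"] for g in groups])  # disjoint ID sets per region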


Q4. You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spikey, geographically distributed, high-scale, and unpredictable. How should you design this system?

A. Use a large RedShift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the RedShift tables. Lambda will scale rapidly enough for the traffic spikes.

B. Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as querystring GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3.

C. Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which are sent out via email.

D. Use AWS Elasticsearch service and EC2 Auto Scaling groups. The Auto Scaling groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.

Answer: B

Explanation:

Because you only need to batch analyze, anything using streaming is a waste of money. CloudFront is a Gigabit-Scale HTTP(S) global request distribution service, so it can handle scale, geo-spread, spikes, and unpredictability. The Access Logs will contain the GET data and work just fine for batch analysis and email using EMR.

Can I use Amazon CloudFront if I expect usage peaks higher than 10 Gbps or 15,000 RPS? Yes. Complete our request for higher limits here, and we will add more capacity to your account within two business days.

Reference: https://aws.amazon.com/Cloudfront/faqs/
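
An illustrative batch job in the spirit of option B (bucket names and the PySpark-on-EMR approach are assumptions, one of several ways to process the logs): CloudFront access logs are tab-separated files in S3, and cs-uri-stem, which carries the click data, is the eighth column.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("weekly-clickstream-report").getOrCreate()

    # CloudFront access logs are tab-separated, with "#" header/comment lines.
    logs = spark.read.option("sep", "\t").option("comment", "#").csv(
        "s3://my-cloudfront-logs/prefix/"  # assumed log bucket
    )
    # _c7 is cs-uri-stem in the standard web distribution log format.
    clicks = logs.groupBy("_c7").count().orderBy("count", ascending=False)
    clicks.write.csv("s3://my-reports/weekly/", mode="overwrite")  # assumed output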


Q5. When thinking of AWS Elastic Beanstalk's model, which is true?

A. Applications have many deployments, deployments have many environments.

B. Environments have many applications, applications have many deployments.

C. Applications have many environments, environments have many deployments.

D. Deployments have many environments, environments have many applications. 

Answer: C

Explanation:

Applications group logical services. Environments belong to Applications, and typically represent different deployment levels (dev, stage, prod, and so forth). Deployments belong to environments, and are pushes of bundles of code for the environments to run.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
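
A sketch of that hierarchy through the API (all names, the solution stack, and the S3 bundle are placeholders): one application, several environments under it, and a deployment expressed as an application version pushed to one environment.

    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.create_application(ApplicationName="shop")
    for env in ("shop-dev", "shop-prod"):
        eb.create_environment(
            ApplicationName="shop",
            EnvironmentName=env,
            SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
        )

    # A deployment: a versioned code bundle pushed to one environment.
    eb.create_application_version(
        ApplicationName="shop",
        VersionLabel="v1",
        SourceBundle={"S3Bucket": "my-bundles", "S3Key": "shop-v1.zip"},
    )
    eb.update_environment(EnvironmentName="shop-dev", VersionLabel="v1")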


Q6. Why are more frequent snapshots of EBS Volumes faster?

A. Blocks in EBS Volumes are allocated lazily, since while logically separated from other EBS Volumes, Volumes often share the same physical hardware. Snapshotting the first time forces full block range allocation, so the second snapshot doesn't need to perform the allocation phase and is faster.

B. The snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot.

C. AWS provisions more disk throughput for burst capacity during snapshots if the drive has been pre-warmed by snapshotting and reading all blocks.

D. The drive is pre-warmed, so block access is more rapid for volumes when every block on the device has already been read at least one time.

Answer: B

Explanation:

After writing data to an EBS volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.

Reference:        http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
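
A minimal sketch of the periodic-snapshot habit the explanation describes (the volume ID is a placeholder): after the first full snapshot, each call stores only the blocks changed since the previous one, so later runs finish faster.

    import boto3

    ec2 = boto3.client("ec2")
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume
        Description="nightly incremental backup",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])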


Q7. What is the scope of an EC2 EIP?

A. Placement Group

B. Availability Zone

C. Region

D. VPC

Answer: C

Explanation:

An Elastic IP address is tied to a region and can be associated only with an instance in the same region.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html
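
The same regional scoping shows up in the API (a sketch; the instance ID is a placeholder): the address is allocated through one region's client and can only be associated with an instance in that region.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    alloc = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        AllocationId=alloc["AllocationId"],
        InstanceId="i-0123456789abcdef0",  # must be an instance in us-east-1
    )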


Q8. You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure?

A. HighestScore as the hash / only key.

B. GameID as the hash key, HighestScore as the range key.

C. GameID as the hash / only key.

D. GameID as the range / only key. 

Answer: B

Explanation:

Since access and storage for games is uniform, and you need to have ordering within each game for the scores (to access the highest value), your hash (partition) key should be the GameID, and there should be a range key for HighestScore.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Partitions
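
A sketch of that key schema and the lookup it enables (table and game names are placeholders): with HighestScore as the range key, the top score for a game is a single descending query with Limit=1.

    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.create_table(
        TableName="HighScores",
        AttributeDefinitions=[
            {"AttributeName": "GameID", "AttributeType": "S"},
            {"AttributeName": "HighestScore", "AttributeType": "N"},
        ],
        KeySchema=[
            {"AttributeName": "GameID", "KeyType": "HASH"},
            {"AttributeName": "HighestScore", "KeyType": "RANGE"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
    dynamodb.get_waiter("table_exists").wait(TableName="HighScores")

    # Highest score for one game: descending by HighestScore, take one item.
    top = dynamodb.query(
        TableName="HighScores",
        KeyConditionExpression="GameID = :g",
        ExpressionAttributeValues={":g": {"S": "space-invaders"}},
        ScanIndexForward=False,  # reverse the range-key sort order
        Limit=1,
    )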


Q9. Which of these is not an intrinsic function in AWS CloudFormation?

A. Fn::Equals

B. Fn::If

C. Fn::Not

D. Fn::Parse 

Answer: D

Explanation:

This is the complete list of Intrinsic Functions...: Fn::Base64, Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or, Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::Join, Fn::Select, Ref

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
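
For reference, a minimal template (resource and parameter names are placeholders) exercising two of the real intrinsic functions, with Fn::Equals driving a condition and Fn::If selecting a property value, validated through the CloudFormation API:

    import json
    import boto3

    template = {
        "Parameters": {"Env": {"Type": "String", "Default": "dev"}},
        "Conditions": {"IsProd": {"Fn::Equals": [{"Ref": "Env"}, "prod"]}},
        "Resources": {
            "Bucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": {"Fn::If": ["IsProd", "app-prod", "app-dev"]}
                },
            }
        },
    }

    boto3.client("cloudformation").validate_template(TemplateBody=json.dumps(template))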


Q10. You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point.

You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?

A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs.

B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.

C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code.

D. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.

Answer: A

Explanation:

Only Weighted Round Robin DNS Records and reverse proxies allow such fine-grained tuning of traffic splits. The Blue-Green option does not meet the requirement that we mitigate costs and keep the overall EC2 fleet size consistent, so we must select the 2 ELB and ASG option with WRR DNS tuning. This method is called A/B deployment and/or Canary deployment.

Reference:        https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
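
A sketch of the WRR tuning knob (the zone ID, record name, and ELB DNS names are placeholders): two weighted record sets point at the old and new ELBs, and shifting the weights moves traffic a percentage point at a time; rolling back is just restoring the old weights.

    import boto3

    route53 = boto3.client("route53")

    def set_weights(old_weight, new_weight):
        route53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
            ChangeBatch={"Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": label,
                        "Weight": weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": dns}],
                    },
                }
                for label, weight, dns in (
                    ("blue", old_weight, "old-elb.us-east-1.elb.amazonaws.com"),
                    ("green", new_weight, "new-elb.us-east-1.elb.amazonaws.com"),
                )
            ]},
        )

    set_weights(99, 1)  # start with 1% of users on the new version
    # Later: set_weights(0, 100) to finish, or set_weights(100, 0) to roll back.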