
AWS-Certified-DevOps-Engineer-Professional Exam

What Does AWS-Certified-DevOps-Engineer-Professional latest exam Mean?




Testking offers a free demo for the AWS-Certified-DevOps-Engineer-Professional exam. "AWS Certified DevOps Engineer Professional", also known as the AWS-Certified-DevOps-Engineer-Professional exam, is an Amazon certification. This set of posts, Passing the Amazon AWS-Certified-DevOps-Engineer-Professional exam, will help you answer the exam's questions. The AWS-Certified-DevOps-Engineer-Professional Questions & Answers cover all the knowledge points of the real exam: 100% real Amazon AWS-Certified-DevOps-Engineer-Professional exam questions, revised by experts!

Q1. For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving steady state in Standby mode?

A. Detaching

B. Terminating:Wait

C. Pending

D. EnteringStandby 

Answer: C

Explanation:

You can put any instance that is in an InService state into a Standby state. This enables you to remove the instance from service, troubleshoot or make changes to it, and then put it back into service. Instances in a Standby state continue to be managed by the Auto Scaling group. However, they are not an active part of your application until you put them back into service.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
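
The following is a minimal sketch (not part of the exam material) of moving an instance into and out of Standby with boto3; the Auto Scaling group name and instance ID are placeholders.

# Sketch: moving an instance in and out of Standby with boto3.
# The group name and instance ID below are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

# InService -> EnteringStandby -> Standby
autoscaling.enter_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="my-asg",
    ShouldDecrementDesiredCapacity=True,
)

# Standby -> Pending -> InService. Pending is the first transition state
# after leaving the Standby steady state, which is why answer C is correct.
autoscaling.exit_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="my-asg",
)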


Q2. You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?

A. Kinesis Firehose + RDS

B. Kinesis Firehose + RedShift

C. EMR using Hive

D. EMR running Apache Spark 

Answer: B

Explanation:

Kinesis Firehose provides a managed service for aggregating streaming data and inserting it into RedShift. RedShift also supports ad-hoc queries over well-structured data using a SQL-compliant wire protocol, so the business team should be able to adopt this system easily.

Reference: https://aws.amazon.com/kinesis/firehose/details/
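
As a rough illustration, and assuming a Firehose delivery stream has already been created with a Redshift destination, a producer could push records like this; the stream name and payload are placeholders.

# Sketch: writing records to a Kinesis Firehose delivery stream that is
# configured with a Redshift destination. Stream name and payload are
# placeholders.
import json
import boto3

firehose = boto3.client("firehose")

record = {"order_id": 12345, "amount": 19.99, "currency": "USD"}

firehose.put_record(
    DeliveryStreamName="orders-to-redshift",
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
# Firehose buffers the records and loads them into Redshift via COPY, where
# the BI team can run ordinary SQL against the resulting tables.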


Q3. You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

A. AWS Elasticsearch Service

B. AWS RedShift

C. AWS EMR

D. AWS DynamoDB 

Answer: A

Explanation:

Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

Reference:

http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html
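
As an illustrative sketch only: the kind of search a team might run against an Amazon ES domain to find a specific error code and reference number. The domain endpoint, index name, and field names are placeholders, and a real domain would also need request signing or an access policy that permits this caller.

# Sketch: searching an Amazon ES domain for a specific error code.
# Endpoint, index, and field names are placeholders.
import requests

endpoint = "https://search-my-logs-domain.us-east-1.es.amazonaws.com"

query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"error_code": "E4021"}},
                {"match": {"reference_number": "REF-998877"}},
            ]
        }
    }
}

response = requests.get(endpoint + "/logs-2017.01/_search", json=query)
print(response.json().get("hits", {}))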


Q4. You need to process long-running jobs once and only once. How might you do this?

A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.

B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.

C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.

D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.

Answer: C

Explanation:

The visibility timeout defines how long, after a successful receive request, SQS hides the message from other consumers. Setting it long enough to cover the job's processing time prevents the same job from being delivered to and processed by another component while it is still being worked on.

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
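A minimal worker sketch, assuming a placeholder queue URL and a hypothetical job handler, shows the visibility timeout in practice:

# Sketch: receiving a long-running job with a visibility timeout long enough
# to cover processing, so no other worker sees the message in the meantime.
import boto3

def handle_long_running_job(body):
    # Placeholder for the real long-running work.
    print("processing job:", body)

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=3600,  # hide the message for up to an hour while we work
)

for message in response.get("Messages", []):
    handle_long_running_job(message["Body"])
    sqs.delete_message(                   # delete only after success; otherwise
        QueueUrl=queue_url,               # the message reappears for a retry
        ReceiptHandle=message["ReceiptHandle"],
    )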


Q5. Which of the following are not valid sources for OpsWorks custom cookbook repositories?

A. HTTP(S)

B. Git

C. AWS EBS

D. Subversion 

Answer: C

Explanation:

Linux stacks can install custom cookbooks from any of the following repository types: HTTP or Amazon S3 archives (public or private, with Amazon S3 typically preferred for a private archive), Git repositories, and Subversion repositories; the last two provide source control and the ability to keep multiple versions. Amazon EBS is not a supported cookbook repository source.

Reference:

http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable.html
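
A short sketch of pointing an existing OpsWorks stack at a Git cookbook repository with boto3; the stack ID, repository URL, and branch are placeholders.

# Sketch: configuring an existing OpsWorks stack to pull custom cookbooks from
# a Git repository. Stack ID and repository URL are placeholders; "svn",
# "archive", and "s3" are the other supported source types (EBS is not one).
import boto3

opsworks = boto3.client("opsworks")

opsworks.update_stack(
    StackId="2f18b4cb-4de5-4429-a149-ff7da9f0d8ee",
    UseCustomCookbooks=True,
    CustomCookbooksSource={
        "Type": "git",
        "Url": "https://github.com/example/my-cookbooks.git",
        "Revision": "master",
    },
)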


Q6. When thinking of DynamoDB, what are true of Global Secondary Key properties?

A. The partition key and sort key can be different from the table.

B. Only the partition key can be different from the table.

C. Either the partition key or the sort key can be different from the table, but not both.

D. Only the sort key can be different from the table. 

Answer: A

Explanation:

A global secondary index is an index with a partition key and a sort key that can be different from those on the table. A global secondary index is considered "global" because queries on the index can span all of the data in a table, across all partitions.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
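
To make the property concrete, here is a sketch of a table keyed on (customer_id, order_id) whose global secondary index is keyed on (status, order_date); both GSI keys differ from the base table keys. All names and throughput values are placeholders.

# Sketch: a GSI whose partition key and sort key both differ from the table's.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "status-date-index",
            "KeySchema": [
                {"AttributeName": "status", "KeyType": "HASH"},
                {"AttributeName": "order_date", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)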


Q7. You run operations for a company that processes digital wallet payments at a very high volume. One second of downtime, during which you drop payments or are otherwise unavailable, loses you on average USD 100. You balance the financials of the transaction system once per day. Which database setup is  best suited to address this business risk?

A. A multi-AZ RDS deployment with synchronous replication to multiple standbys and read-replicas for fast failover and ACID properties.

B. A multi-region, multi-master, active-active RDS configuration using database-level ACID design principles with database trigger writes for replication.

C. A multi-region, multi-master, active-active DynamoDB configuration using application control-level BASE design principles with change-stream write queue buffers for replication.

D. A multi-AZ DynamoDB setup with changes streamed to S3 via AWS Kinesis, for highly durable storage and BASE properties.

Answer: C

Explanation:

Only the multi-master, multi-region DynamoDB answer makes sense. Multi-AZ deployments do not provide sufficient availability for a business that loses USD 360,000 per hour of unavailability (USD 100 per second). RDS does not natively support multi-master, multi-region replication, and ACID guarantees do not perform well, or at all, over the large distances between regions, so only the DynamoDB answer works.

Reference:

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
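
As a small sketch of one building block of answer C: enabling a DynamoDB stream on the payments table, which a cross-region, active-active design would consume (directly or via a queue buffer) to replicate writes to the other regions. The table name and region are placeholders.

# Sketch: enabling a DynamoDB stream that a cross-region replicator can consume.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.update_table(
    TableName="payments",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)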


Q8. You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?

A. Subscribe your queue to an SNS topic instead.

B. Use as long a poll as possible, instead of short polls.

C. Alter your visibility timeout to be shorter.

D. Use sqsd on your EC2 instances.

Answer: B

Explanation:

One benefit of long polling with Amazon SQS is the reduction of the number of empty responses, when there are no messages available to return, in reply to a ReceiveMessage request sent to an Amazon SQS queue. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.

Reference:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
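
A brief sketch of enabling long polling, either as a queue default or per request (the maximum wait is 20 seconds); the queue URL is a placeholder.

# Sketch: enabling long polling at the queue level and per request.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work"  # placeholder

# Queue-level default: every ReceiveMessage call waits up to 20 seconds.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)

# Per-request long poll: returns early if a message arrives, otherwise waits,
# which removes most of the empty responses generated by short polling.
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)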


Q9. Which of these configuration or deployment practices is a security risk for RDS?

A. Storing SQL function code in plaintext

B. Non-Multi-AZ RDS instance

C. Having RDS and EC2 instances exist in the same subnet

D. RDS in a public subnet 

Answer: D

Explanation:

Making RDS accessible to the public internet in a public subnet poses a security risk, because the database becomes directly addressable from the internet and exposed to scanning and brute-force attempts.

DB instances deployed within a VPC can be configured to be accessible from the Internet or from EC2 instances outside the VPC. Even then, the firewall for the DB instance allows connections only from the IP addresses permitted by the DB security groups the instance is a member of, and only on the port defined when the DB instance was created, so a private subnet with tightly scoped security group rules is the safer configuration.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
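
A minimal sketch of the safer pattern: launching an RDS instance that is not publicly accessible, attached to a DB subnet group of private subnets and a security group that only admits the application tier. All identifiers, credentials, and sizes are placeholders.

# Sketch: an RDS instance kept off the public internet.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=100,
    DBSubnetGroupName="private-db-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    PubliclyAccessible=False,  # keep the database out of direct internet reach
)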


Q10. You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point.

You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at  all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?

A. Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs.

B. Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.

C. Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code.

D. Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to rollback if needed.

Answer: A

Explanation:

Only Weighted Round Robin DNS Records and reverse proxies allow such fine-grained tuning of traffic splits. The Blue-Green option does not meet the requirement that we mitigate costs and keep the overall EC2 fleet size consistent, so we must select the two-ELB and ASG option with WRR DNS tuning. This method is called A/B deployment and/or Canary deployment.

Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
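
A sketch of the weighted Route 53 records behind answer A, sending 95% of traffic to the old ELB and 5% to the new one; shifting the split is just a matter of changing the Weight values. The hosted zone ID, record name, and ELB DNS names are placeholders.

# Sketch: weighted Route 53 records splitting traffic between two ELBs.
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, elb_dns_name, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": elb_dns_name}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [
            weighted_record("old-stack", "old-elb-123.us-east-1.elb.amazonaws.com", 95),
            weighted_record("new-stack", "new-elb-456.us-east-1.elb.amazonaws.com", 5),
        ]
    },
)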