
Improved AWS Certified DevOps Engineer - Professional DOP-C01 Braindump




Because all that matters here is passing the Amazon Web Services DOP-C01 exam. Because all that you need is a high score on the DOP-C01 AWS Certified DevOps Engineer - Professional exam. The only thing you need to do is download the Certleader DOP-C01 exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Free demo questions for the Amazon Web Services DOP-C01 exam dumps below:

NEW QUESTION 1
As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

  • A. Ensure that the I/O block sizes for the test are randomly selected.
  • B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
  • C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • D. Ensure that the Amazon EBS volume is encrypted.

Answer: B

Explanation:
During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance.
New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming).
However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access them. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable, but for a repeatable load test every block must be read first.
Option A is invalid because block sizes are predetermined and should not be randomly selected.
Option C is invalid because this is part of continuous integration; the volumes can be destroyed after the test, so there is no need to create snapshots.
Option D is invalid because encryption is a security feature and is not normally part of load testing.
For more information on EBS initialization, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
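To make the test repeatable, every block of the restored volume can be read once before measurements begin. Below is a minimal Python sketch of that idea; the device name /dev/xvdf is an assumption, and in practice the dd or fio commands from the AWS documentation serve the same purpose.

```python
# A minimal sketch: read every block of a restored EBS volume once so that
# first-access latency is paid before the I/O load test starts.
# /dev/xvdf is an assumed device name; run as root on the instance.
CHUNK = 1024 * 1024  # read 1 MiB at a time

with open("/dev/xvdf", "rb", buffering=0) as device:
    while device.read(CHUNK):
        pass  # discard the data; the read itself initializes each block
```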

NEW QUESTION 2
Which of the following services can be used to detect application health in a Blue Green deployment in AWS?

  • A. AWS CodeCommit
  • B. AWS CodePipeline
  • C. AWS CloudTrail
  • D. AWS CloudWatch

Answer: D

Explanation:
The AWS Documentation mentions the following:
Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS. CloudWatch can collect and track metrics, collect and monitor log files, and set alarms. It provides system-wide visibility into resource utilization, application performance, and operational health, which are key to early detection of application health in blue/green deployments.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
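As an illustration of how CloudWatch can surface application health during a cutover, the sketch below creates an alarm on the green fleet's load balancer error count using boto3; the alarm name, load balancer dimension, and threshold are illustrative assumptions, not values from the whitepaper.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the green environment starts returning 5XX errors during cutover.
# Names, dimensions, and the threshold are illustrative assumptions.
cloudwatch.put_metric_alarm(
    AlarmName="green-env-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/green-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
)
```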

NEW QUESTION 3
After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost?

  • A. Update your Amazon S3 buckets' lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
  • B. Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.
  • C. Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
  • D. Upload all files to an ElastiCache file cache server. Update your application to now read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

Answer: C

Explanation:
Option A is an invalid option since lifecycle policies are normally used for expiration or archival of objects.
Option B is partially correct in that you store the metadata in DynamoDB, but without a notification mechanism the number of GET requests would still be high, since the entire bucket would have to be traversed to keep the table in sync.
Option D is invalid because uploading all files to ElastiCache is not an ideal solution.
The best option is to have a notification which can then trigger an update to the application to update the DynamoDB table accordingly.
For more information on SNS triggers and DynamoDB, please refer to the below link:
• https://aws.amazon.com/blogs/compute/619/
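A minimal sketch of wiring up that notification with boto3 is shown below; the bucket name and topic ARN are hypothetical, and the topic must already grant s3.amazonaws.com permission to publish.

```python
import boto3

s3 = boto3.client("s3")

# Publish an SNS notification whenever a new object lands in the bucket, so
# subscribers can refresh the DynamoDB metadata table instead of issuing
# repeated GET Bucket calls. Bucket name and topic ARN are hypothetical.
s3.put_bucket_notification_configuration(
    Bucket="example-app-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-object-topic",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```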

NEW QUESTION 4
You are a DevOps Engineer for your company. Your company is using an OpsWorks stack to roll out a collection of web instances. When the instances are launched, a configuration file needs to be set up prior to the launching of the web application hosted on these instances. Which of the following steps would you carry out to ensure this requirement gets fulfilled? Choose 2 answers from the options given below

  • A. Ensure that the OpsWorks stack is changed to use the AWS-specific cookbooks
  • B. Ensure that the OpsWorks stack is changed to use custom cookbooks
  • C. Configure a recipe which sets up the configuration file and add it to the Configure lifecycle event of the specific web layer.
  • D. Configure a recipe which sets up the configuration file and add it to the Deploy lifecycle event of the specific web layer.

Answer: BC

Explanation:
This is mentioned in the AWS documentation under the Configure event:
This event occurs on all of the stack's instances when one of the following occurs:
• An instance enters or leaves the online state.
• You associate an Elastic IP address with an instance or disassociate one from an instance.
• You attach an Elastic Load Balancing load balancer to a layer, or detach one from a layer.
For example, suppose that your stack has instances A, B, and C, and you start a new instance, D. After D has finished running its setup recipes, AWS OpsWorks Stacks triggers the Configure event on A, B, C, and D. If you subsequently stop A, AWS OpsWorks Stacks triggers the Configure event on B, C, and D. AWS OpsWorks Stacks responds to the Configure event by running each layer's Configure recipes, which update the instances' configuration to reflect the current set of online instances. The Configure event is therefore a good time to regenerate configuration files. For example, the HAProxy Configure recipes reconfigure the load balancer to accommodate any changes in the set of online application server instances.
You can also manually trigger the Configure event by using the Configure stack command. For more information on OpsWorks lifecycle events, please refer to the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html

NEW QUESTION 5
You are in charge of designing CloudFormation templates for your company. One of the key requirements is to ensure that if a CloudFormation stack is deleted, a snapshot is created of the relational database that is part of the stack. How can you achieve this in the best possible way?

  • A. Create a snapshot of the relational database beforehand so that when the CloudFormation stack is deleted, the snapshot of the database will be present.
  • B. Use the Update policy of the CloudFormation template to ensure a snapshot is created of the relational database.
  • C. Use the Deletion policy of the CloudFormation template to ensure a snapshot is created of the relational database.
  • D. Create a new CloudFormation template to create a snapshot of the relational database.

Answer: C

Explanation:
The AWS documentation mentions the following:
With the DeletionPolicy attribute you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. Note that this capability also applies to update operations that lead to resources being removed.
For more information on the Deletion policy, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
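A minimal template fragment showing the attribute is below; the resource name and property values are illustrative assumptions.

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    # Take a final snapshot instead of deleting the database with the stack.
    DeletionPolicy: Snapshot
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.micro
      AllocatedStorage: "20"
      MasterUsername: admin
      MasterUserPassword: example-password   # illustrative only
```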

NEW QUESTION 6
Your team wants to begin practicing continuous delivery using CloudFormation, to enable automated builds and deploys of whole, versioned stacks or stack layers. You have a 3-tier, mission-critical system. Which of the following is NOT a best practice for using CloudFormation in a continuous delivery environment?

  • A. Use the AWS CloudFormation ValidateTemplate call before publishing changes to AWS.
  • B. Model your stack in one template, so you can leverage CloudFormation's state management and dependency resolution to propagate all changes.
  • C. Use CloudFormation to create brand new infrastructure for all stateless resources on each push, and run integration tests on that set of infrastructure.
  • D. Parametrize the template and use Mappings to ensure your template works in multiple Regions.

Answer: B

Explanation:
Option B is not a best practice: bundling a mission-critical, three-tier system into a single template couples every layer's lifecycle together, whereas AWS recommends separating and reusing templates. Some of the best practices for CloudFormation are:
• Create nested stacks
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
• Reuse templates
After you have your stacks and resources set up, you can reuse your templates to replicate your infrastructure in multiple environments. For example, you can create environments for development, testing, and production so that you can test changes before implementing them into production. To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same.
For more information on CloudFormation best practices, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
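A sketch of the nested-stack pattern is below; the template URL and parameter are illustrative assumptions.

```yaml
Resources:
  NetworkLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      # A common component factored into its own template and reused
      # across environments. URL and parameter values are assumptions.
      TemplateURL: https://s3.amazonaws.com/example-templates/network.yaml
      Parameters:
        EnvironmentName: dev
```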

NEW QUESTION 7
A gaming company adopted AWS CloudFormation to automate load-testing of their games. They have created an AWS CloudFormation template for each gaming environment and one for the load-testing stack. The load-testing stack creates an Amazon Relational Database Service (RDS) Postgres database and two web servers running on Amazon Elastic Compute Cloud (EC2) that send HTTP requests, measure response times, and write the results into the database. A test run usually takes between 15 and 30 minutes. Once the tests are done, the AWS CloudFormation stacks are torn down immediately. The test results written to the Amazon RDS database must remain accessible for visualization and analysis.
Select possible solutions that allow access to the test results after the AWS CloudFormation load-testing stack is deleted.
Choose 2 answers.

  • A. Define an Amazon RDS Read Replica in the load-testing AWS CloudFormation stack and define a dependency relation between master and replica via the DependsOn attribute.
  • B. Define an update policy to prevent deletion of the Amazon RDS database after the AWS CloudFormation stack is deleted.
  • C. Define a deletion policy of type Retain for the Amazon RDS resource to assure that the RDS database is not deleted with the AWS CloudFormation stack.
  • D. Define a deletion policy of type Snapshot for the Amazon RDS resource to assure that the RDS database can be restored after the AWS CloudFormation stack is deleted.
  • E. Define automated backups with a backup retention period of 30 days for the Amazon RDS database and perform point-in-time recovery of the database after the AWS CloudFormation stack is deleted.

Answer: CD

Explanation:
With the DeletionPolicy attribute you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.
To keep a resource when its stack is deleted, specify Retain for that resource. You can use Retain for any resource. For example, you can retain a nested stack, S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.
For more information on the Deletion policy, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html

NEW QUESTION 8
You are currently using SQS to pass messages to EC2 instances. You need to pass messages which are greater than 5 MB in size. Which of the following can help you accomplish this?

  • A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint ID for the placement in the Kinesis stream in SQS.
  • B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
  • C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.
  • D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.

Answer: B

Explanation:
The AWS documentation mentions the following:
You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and consuming messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java. Specifically, you use this library to:
• Specify whether messages are always stored in Amazon S3 or only when a message's size exceeds 256 KB.
• Send a message that references a single message object stored in an Amazon S3 bucket.
• Get the corresponding message object from an Amazon S3 bucket.
• Delete the corresponding message object from an Amazon S3 bucket.
For more information on SQS and sending larger messages, please refer to the AWS documentation.
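The library itself is Java, but the pattern it implements is simple to sketch. Below is a hedged boto3 illustration of the same idea: store the payload in S3 and pass only a pointer through SQS. The bucket and queue names are hypothetical.

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "example-large-payload-bucket"  # hypothetical bucket
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical

def send_large_message(body: bytes) -> None:
    # Store the payload in S3 and send only a small pointer through SQS,
    # which is the pattern the Extended Client Library implements.
    key = str(uuid.uuid4())
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3Bucket": BUCKET, "s3Key": key}),
    )
```

The consumer reverses the steps: read the pointer from the SQS message, fetch the object from S3, and delete both once processing succeeds.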

NEW QUESTION 9
You are doing a load testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application's needs? Choose three answers from the options given below

  • A. Add Amazon RDS DB read replicas, and have your application direct read queries to them.
  • B. Add your Amazon RDS DB instance to an Auto Scaling group and configure your CloudWatch metric based on CPU utilization.
  • C. Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance.
  • D. Use ElastiCache in front of your Amazon RDS DB to cache common queries.
  • E. Shard your data set among multiple Amazon RDS DB instances.
  • F. Enable Multi-AZ for your Amazon RDS DB instance.

Answer: ADE

Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out
beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and
serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput
For more information on Read Replica's please refer to the below link:
◆ https://aws.amazon.com/rds/details/read-replicas/
Sharding is a common concept to split data across multiple tables in a database For more information on sharding please refer to the below link:
https://forums.awsa mazon.com/thread jspa?messagelD=203052
Amazon OastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases
Amazon OastiCache is an in-memory key/value store that sits between ycbetappiicipiJGra arcdalie data store (database) that it accesses. Whenever your application requests data, it first makes the request to the DastiCache cache. If the data exists in the cache and is current, OastiCache returns the data to your application. If the data does not exist in the cache, or the data in the cache has expired, your application requests the data from your data store which returns the data to your application. Your application then writes the data received from the store to the cache so it can be more quickly retrieved next time it is requested. For more information on Elastic Cache please refer to the below link:
https://aws.amazon.com/elasticache/
Option B is not an ideal way to scale a database
Option C is not ideal to store the data which would go into a database because of the message size Option F is invalid because Multi-AZ feature is only a failover option
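The cache-aside flow described above can be sketched in a few lines of Python against an ElastiCache Redis endpoint; the endpoint, key scheme, and TTL below are assumptions.

```python
import json

import redis  # assumes the redis-py client and an ElastiCache Redis endpoint

cache = redis.Redis(host="example.cache.amazonaws.com", port=6379)

def get_record(record_id, db_lookup, ttl_seconds=300):
    # Cache-aside: try ElastiCache first, fall back to the database,
    # then populate the cache for subsequent reads.
    key = f"record:{record_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    row = db_lookup(record_id)  # the slow, disk-based RDS query
    cache.setex(key, ttl_seconds, json.dumps(row))
    return row
```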

NEW QUESTION 10
Which of the following are components of the AWS Data Pipeline service? Choose 2 answers from the options given below

  • A. Pipeline definition
  • B. Task Runner
  • C. Task History
  • D. Workflow Runner

Answer: AB

Explanation:
The AWS Documentation mentions the following on AWS Data Pipeline:
The following components of AWS Data Pipeline work together to manage your data:
A pipeline definition specifies the business logic of your data management.
A pipeline schedules and runs tasks. You upload your pipeline definition to the pipeline, and then activate the pipeline. You can edit the pipeline definition for a running pipeline and activate the pipeline again for it to take effect. You can deactivate the pipeline, modify a data source, and then activate the pipeline again. When you are finished with your pipeline, you can delete it.
Task Runner polls for tasks and then performs those tasks. For example, Task Runner could copy log files to Amazon S3 and launch Amazon EMR clusters. Task Runner is installed and runs automatically on resources created by your pipeline definitions. You can write a custom task runner application, or you can use the Task Runner application that is provided by AWS Data Pipeline.
For more information on AWS Data Pipeline, please visit the link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html

NEW QUESTION 11
As part of your deployment process, you are configuring your continuous integration (CI) system to build AMIs. You want to build them in an automated manner that is also cost-efficient. Which method should you use?

  • A. Attach an Amazon EBS volume to your CI instance, build the root file system of your image on the volume, and use the CreateImage API call to create an AMI out of this volume.
  • B. Have the CI system launch a new instance, bootstrap the code and apps onto the instance, and create an AMI out of it.
  • C. Upload all contents of the image to Amazon S3, launch the base instance, download all of the contents from Amazon S3, and create the AMI.
  • D. Have the CI system launch a new Spot Instance, bootstrap the code and apps onto the instance, and create an AMI out of it.

Answer: D

Explanation:
The AWS documentation mentions the following:
If your organization uses Jenkins software in a CI/CD pipeline, you can add Automation as a post-build step to pre-install application releases into Amazon Machine Images (AMIs). You can also use the Jenkins scheduling feature to call Automation and create your own operating system (OS) patching cadence.
Using a Spot Instance for the build makes the process cost-efficient: the instance is paid for at the Spot price only while the AMI is being baked and can be terminated as soon as the image is created.
For more information on Automation with Jenkins, please visit the links:
• http://docs.aws.amazon.com/systems-manager/latest/userguide/automation-jenkins.html
• https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Plugin

NEW QUESTION 12
You are responsible for your company's large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer. While reviewing metrics, you've started noticing an upwards trend for slow customer page load time. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this issue?

  • A. Re-deploy your infrastructure using an AWS CloudFormation template. Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
  • B. Re-deploy your infrastructure using an AWS CloudFormation template. Spin up a second AWS CloudFormation stack. Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack.
  • C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
  • D. Re-deploy your application using an Auto Scaling template. Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

Answer: C

Explanation:
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
Options A and B are invalid because Auto Scaling is required to solve the issue and ensure the application can handle high traffic loads.
Option D is invalid because there is no Auto Scaling template.
For more information on Auto Scaling, please refer to the below link from the AWS documentation: http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html
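As a hedged sketch of a policy that reacts to request volume, the snippet below attaches a target-tracking scaling policy to an Auto Scaling group using boto3; the group name, ALB resource label, and target value are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group to hold roughly 1000 requests per target; the group name
# and the ALB resource label are illustrative assumptions.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="requests-per-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/web-alb/0123456789abcdef/targetgroup/web-tg/0123456789abcdef",
        },
        "TargetValue": 1000.0,
    },
)
```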

NEW QUESTION 13
You need to deploy a multi-container Docker environment on to Elastic Beanstalk. Which of the following files can be used to deploy a set of Docker containers to Elastic Beanstalk?

  • A. Dockerfile
  • B. DockerMultifile
  • C. Dockerrun.aws.json
  • D. Dockerrun

Answer: C

Explanation:
The AWS Documentation specifies:
A Dockerrun.aws.json file is an Elastic Beanstalk-specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application. You can use a Dockerrun.aws.json file for a multicontainer Docker environment.
Dockerrun.aws.json describes the containers to deploy to each container instance in the environment, as well as the data volumes to create on the host instance for the containers to mount.
For more information on this, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
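A minimal version 2 file for two containers might look like the sketch below; the image names, memory sizes, and port mapping are illustrative assumptions.

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web-app:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }]
    },
    {
      "name": "worker",
      "image": "example/worker:latest",
      "essential": false,
      "memory": 128
    }
  ]
}
```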

NEW QUESTION 14
You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?

  • A. AWS CloudFormation
  • B. AWS OpsWorks
  • C. AWS ELB + EC2 with CLI push
  • D. AWS Elastic Beanstalk

Answer: D

Explanation:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications.
AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring
Elastic Beanstalk supports applications developed in Java, PHP, .NET, Node.js, Python, and Ruby, as well as different container types for each language.
For more information on Elastic beanstalk, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

NEW QUESTION 15
Your company develops a variety of web applications using many platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy your business requirements. Which of the following methods should you use to deploy these applications rapidly?

  • A. Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
  • B. Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones.
  • C. Develop each application's code in DynamoDB, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
  • D. Store each application's code in a Git repository, develop custom package repository managers for each application's dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.

Answer: A

Explanation:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
For more information on Dockers and Elastic beanstalk please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

NEW QUESTION 16
Which of the following features of the Elastic Beanstalk service will allow you to perform a Blue Green deployment?

  • A. Rebuild Environment
  • B. Swap Environment
  • C. Swap URLs
  • D. Environment Configuration

Answer: C

Explanation:
With the Swap URL feature, you can keep a version of your environment ready, and when you are ready to cut over, you can use the Swap URL feature to switch over to your new environment.
For more information on the Swap URL feature, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
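The swap can also be driven through the API. Below is a minimal boto3 sketch; the environment names are hypothetical.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the blue and green environments so that traffic
# cuts over to the new version; environment names are hypothetical.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)
```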

NEW QUESTION 17
You have a web application that is developed in Node.js. The code is hosted in a Git repository. You want to now deploy this application to AWS. Which of the below 2 options can fulfil this requirement?

  • A. Create an Elastic Beanstalk application. Create a Dockerfile to install Node.js. Get the code from Git. Use the command "aws git.push" to deploy the application.
  • B. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Container resource type. With UserData, install Git to download the Node.js application and then set it up.
  • C. Create a Dockerfile to install Node.js and get the code from Git. Use the Dockerfile to perform the deployment on a new AWS Elastic Beanstalk application.
  • D. Create an AWS CloudFormation template which creates an instance with the AWS::EC2::Instance resource type and an AMI with Docker pre-installed. With UserData, install Git to download the Node.js application and then set it up.

Answer: CD

Explanation:
Option A is invalid because there is no "awsgitpush" command
Option B is invalid because there is no AWS::CC2::Container resource type.
Clastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on Docker and Clastic beanstalk please refer to the below link:
◆ http://docs.aws.a mazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
When you launch an instance in Amazon CC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon CC2: shell scripts and cloud- init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls). For more information on Cc2 User data please refer to the below link:
• http://docs.aws.a mazon.com/AWSCC2/latest/UserGuide/user-data. htm I
Note: "git aws.push" with CB CLI 2.x - see a forum thread at https://forums.aws.amazon.com/thread.jspa7messageID=583202#jive-message-582979. Basically, this is a predecessor to the newer "eb deploy" command in CB CLI 31. This question kept in order to be consistent with exam.

NEW QUESTION 18
You have deployed an application to AWS which makes use of Auto Scaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment?

  • A. Use Elastic Beanstalk to deploy the new application with the new instance type
  • B. Use CloudFormation to deploy the new application with the new instance type
  • C. Create a new launch configuration with the new instance type
  • D. Create new EC2 instances with the new instance type and attach them to the Auto Scaling group

Answer: C

Explanation:
The ideal way is to create a new launch configuration, attach it to the existing Auto Scaling group, and terminate the running instances.
Option A is invalid because Elastic Beanstalk cannot launch new instances on demand; since the current scenario requires Auto Scaling, this is not the ideal option.
Option B is invalid because this would be a maintenance overhead: you just have an Auto Scaling group, so there is no need to create a whole CloudFormation template for this.
Option D is invalid because the Auto Scaling group would still launch EC2 instances with the older launch configuration.
For more information on Auto Scaling launch configurations, please refer to the below link from the AWS documentation:
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
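As a hedged boto3 sketch, the change might look like this; the configuration name, AMI ID, instance type, and group name are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create a launch configuration that uses the new instance type
# (all names and IDs are illustrative assumptions)...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.large",
)

# ...then point the existing Auto Scaling group at it. Only instances
# launched after this change use the new type.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)
```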

NEW QUESTION 19
You are trying to debug the creation of Cloudformation stack resources. Which of the following can be used to help in the debugging process?
Choose 2 answers from the options below

  • A. Use CloudTrail to debug all the API calls sent by the CloudFormation stack.
  • B. Use the AWS CloudFormation console to view the status of your stack.
  • C. See the logs in the /var/log directory for Linux instances.
  • D. Use AWS Config to debug all the API calls sent by the CloudFormation stack.

Answer: BC

Explanation:
The AWS Documentation mentions:
Use the AWS CloudFormation console to view the status of your stack. In the console, you can view a list of stack events while your stack is being created, updated, or deleted. From this list, find the failure event and then view the status reason for that event.
For Amazon EC2 issues, view the cloud-init and cfn logs. These logs are published on the Amazon EC2 instance in the /var/log/ directory. These logs capture processes and command outputs while AWS CloudFormation is setting up your instance. For Windows, view the EC2Config service and cfn logs in %ProgramFiles%\Amazon\EC2ConfigService and C:\cfn\log.
For more information on CloudFormation troubleshooting, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html
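The same event list is available through the API; a minimal boto3 sketch that prints the failure reasons for a stack (stack name hypothetical) is shown below.

```python
import boto3

cfn = boto3.client("cloudformation")

# Walk the stack's event list and print the status reason for anything
# that failed; "my-stack" is a hypothetical stack name.
events = cfn.describe_stack_events(StackName="my-stack")["StackEvents"]
for event in events:
    if "FAILED" in event["ResourceStatus"]:
        print(event["LogicalResourceId"], event.get("ResourceStatusReason"))
```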

NEW QUESTION 20
You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?

  • A. Model an AWS EMR job in AWS Elastic Beanstalk.
  • B. Model an AWS EMR job in AWS CloudFormation.
  • C. Model an AWS EMR job in AWS OpsWorks.
  • D. Model an AWS EMR job in AWS CLI Composer.

Answer: B

Explanation:
With AWS Cloud Formation, you can update the properties for resources in your existing stacks.
These changes can range from simple configuration changes, such
as updating the alarm threshold on a Cloud Watch alarm, to more complex changes, such as updating the Amazon Machine Image (AMI) running on an Amazon EC2
instance. Many of the AWS resources in a template can be updated, and we continue to add support for more.
For more information on Cloudformation version control, please visit the below URL: http://docs.aws.amazon.com/AWSCIoudFormation/latest/UserGuide/updating.stacks.wa I kthrough.htm I
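A minimal sketch of what such a version-controlled template fragment might contain follows; the cluster name, release label, roles, and instance sizes are illustrative assumptions.

```yaml
Resources:
  BatchCluster:
    Type: AWS::EMR::Cluster
    Properties:
      # All values below are illustrative assumptions.
      Name: nightly-batch
      ReleaseLabel: emr-5.29.0
      JobFlowRole: EMR_EC2_DefaultRole
      ServiceRole: EMR_DefaultRole
      Instances:
        MasterInstanceGroup:
          InstanceCount: 1
          InstanceType: m4.large
        CoreInstanceGroup:
          InstanceCount: 2
          InstanceType: m4.large
```

Creating the stack before the daily run and deleting it afterwards provides the setup and teardown logic, while the template file itself lives in source control.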

NEW QUESTION 21
When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? Choose two answers from the options given below

  • A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system.
  • B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system.
  • C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
  • D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.

Answer: AD

Explanation:
There is a rich brand of CLI commands available for Cc2 Instances. The CLI is located in the following link:
• http://docs.aws.a mazon.com/cli/latest/reference/ec2/
You can then use the describe instances command to describe the EC2 instances.
If you specify one or more instance I Ds, Amazon CC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results.
• http://docs.aws.a mazon.com/cli/latest/reference/ec2/describe-insta nces.html
You can use the CC2 instances to get those instances which need to be removed from the configuration management system.
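A hedged boto3 sketch of the daily cleanup comparison follows; the group name is hypothetical, and cmdb_registered_ids() stands in for a query against your configuration management system.

```python
import boto3

autoscaling = boto3.client("autoscaling")

def active_instance_ids(group_name: str) -> set:
    # Instance IDs the Auto Scaling group currently tracks.
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )["AutoScalingGroups"]
    return {inst["InstanceId"] for g in groups for inst in g["Instances"]}

def cmdb_registered_ids() -> set:
    # Stand-in for a query against your configuration management system.
    return {"i-0123456789abcdef0", "i-0fedcba9876543210"}

# Anything the CMDB still tracks but Auto Scaling no longer does is stale
# and can be deregistered by the daily cron job ("web-asg" is hypothetical).
stale = cmdb_registered_ids() - active_instance_ids("web-asg")
print(sorted(stale))
```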

NEW QUESTION 22
You currently have an Auto Scaling group that has the following settings:
Minimum capacity: 2
Desired capacity: 2
Maximum capacity: 2
Your launch configuration has AMIs which are based on the t2.micro instance type. The application running on these instances is now experiencing issues, and you have identified that the solution is to change the instance type of the instances running in the Auto Scaling group.
Which of the below solutions will meet this demand?

  • A. Change the instance type in the current launch configuration. Change the Desired value of the Auto Scaling group to 4. Ensure the new instances are launched.
  • B. Delete the current launch configuration. Create a new launch configuration with the new instance type and add it to the Auto Scaling group. This will then launch the new instances.
  • C. Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach that to the Auto Scaling group. Change the Maximum and Desired size of the Auto Scaling group to 4. Once the new instances are launched, change the Desired and Maximum size back to 2.
  • D. Change the Desired and Maximum size of the Auto Scaling group to 4. Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach that to the Auto Scaling group. Change the Maximum and Desired size of the Auto Scaling group to 2.

Answer: C

Explanation:
You should make a copy of the launch configuration, add the new instance type. The change the Autoscaling Group to include the new instance type. Then change the Desired number of the Autoscaling Group to 4 so that instances of new instance type can be launched. Once launched, change the desired size back to 2, so that Autoscaling will delete the instances with the older configuration. Note that the assumption here is that the current instances are equally distributed across multiple AZ's because Autoscaling will first use the AZRebalance process to terminate instances.
Option A is invalid because you cannot make changes to an existing Launch configuration.
Option B is invalid because if you delete the existing launch configuration, then your application will not be available. You need to ensure a smooth deployment process.
Option D is invalid because you should change the desired size to 4 after attaching the new launch configuration.
For more information on Autoscaling Suspend and Resume, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resu me-processes.html

NEW QUESTION 23
You are using Elastic Beanstalk to manage your application. You have a SQL script that needs to only be executed once per deployment no matter how many EC2 instances you have running. How can you do this?

  • A. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to false.
  • B. Use Elastic Beanstalk version and a configuration file to execute the script, ensuring that the "leader only" flag is set to true.
  • C. Use a "Container command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "leader only" flag is set to true.
  • D. Use a "leader command" within an Elastic Beanstalk configuration file to execute the script, ensuring that the "container only" flag is set to true.

Answer: C

Explanation:
You can use the container_commands key to execute commands that affect your application source code. Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
You can use leader_only to only run the command on a single instance, or configure a test to only run the command when a test command evaluates to true. Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated. Leader-only container commands are not executed due to launch configuration changes, such as a change in the AMI ID or instance type. For more information on customizing containers, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
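A minimal .ebextensions sketch of such a leader-only command follows; the file name and the script it invokes are assumptions.

```yaml
# .ebextensions/db-setup.config -- file name and script are assumptions
container_commands:
  01_run_sql_once:
    command: "bash scripts/run-db-setup.sh"
    leader_only: true
```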

NEW QUESTION 24
......

100% Valid and Newest Version DOP-C01 Questions & Answers shared by Dumpscollection.com, Get Full Dumps HERE: https://www.dumpscollection.net/dumps/DOP-C01/ (New 116 Q&As)