Accurate SAP-C01 Resource 2021
Want to know Ucertify SAP-C01 Exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Solutions Architect - Professional certification experience? Study Virtual Amazon-Web-Services SAP-C01 answers to the most up-to-date SAP-C01 questions at Ucertify. Get success with an absolute guarantee to pass the Amazon-Web-Services SAP-C01 (AWS Certified Solutions Architect - Professional) test on your first attempt.
Online SAP-C01 free questions and answers of the new version:
NEW QUESTION 1
A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?
- A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3.
- B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3.
- C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
- D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.
Answer: C
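For context on the data-transfer half of option C, here is a minimal boto3 sketch of ordering a Snowball import job. The bucket name, address ID, and role ARN are invented placeholders, not values from the question:

```python
# Hypothetical sketch: order a Snowball device to import ~70 TB into S3.
import boto3

snowball = boto3.client("snowball")

job = snowball.create_job(
    JobType="IMPORT",  # data flows from the device into Amazon S3
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::open-data-static-files"}]},
    AddressId="ADID-example",  # returned by snowball.create_address(...)
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print(job["JobId"])
```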
NEW QUESTION 2
A company is moving a business-critical application onto AWS. It is a traditional three-tier web application using an Oracle database. Data must be encrypted in transit and at rest. The database hosts 12 TB of data. Network connectivity to the source Oracle database over the internet is allowed, and the company wants to reduce the operational costs by using AWS Managed Services where possible. All resources within the web and application tiers have been migrated. The database has a few tables and a simple schema using primary keys only; however, it contains many Binary Large Object (BLOB) fields. It was not possible to use the database's native replication tools because of licensing restrictions.
Which database migration solution will result in the LEAST amount of impact to the application’s availability?
- A. Provision an Amazon RDS for Oracle instance. Host the RDS database within a virtual private cloud (VPC) subnet with internet access, and set up the RDS database as an encrypted Read Replica of the source database. Use SSL to encrypt the connection between the two databases. Monitor the replication performance by watching the RDS ReplicaLag metric. During the application maintenance window, shut down the on-premises database and switch over the application connection to the RDS instance when there is no more replication lag. Promote the Read Replica into a standalone database instance.
- B. Provision an Amazon EC2 instance and install the same Oracle database software. Create a backup of the source database using the supported tools. During the application maintenance window, restore the backup into the Oracle database running in the EC2 instance. Set up an Amazon RDS for Oracle instance, and create an import job between the databases hosted in AWS. Shut down the source database and switch over the database connections to the RDS instance when the job is complete.
- C. Use AWS DMS to load and replicate the dataset between the on-premises Oracle database and the replication instance hosted on AWS. Provision an Amazon RDS for Oracle instance with Transparent Data Encryption (TDE) enabled and configure it as a target for the replication instance. Create a customer-managed AWS KMS master key to set it as the encryption key for the replication instance. Use AWS DMS tasks to load the data into the target RDS instance. During the application maintenance window and after the load tasks reach the ongoing replication phase, switch the database connections to the new database.
- D. Create a compressed full database backup on the on-premises Oracle database during an application maintenance window. While the backup is being performed, provision a 10 Gbps AWS Direct Connect connection to increase the transfer speed of the database backup files to Amazon S3, and shorten the maintenance window period. Use SSL/TLS to copy the files over the Direct Connect connection. When the backup files are successfully copied, start the maintenance window, and use any of the Amazon RDS supported tools to import the data into a newly provisioned Amazon RDS for Oracle instance with encryption enabled. Wait until the data is fully loaded and switch over the database connections to the new database. Delete the Direct Connect connection to cut unnecessary charges.
Answer: C
Explanation:
https://aws.amazon.com/blogs/apn/oracle-database-encryption-options-on-amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.htm l (DMS in transit encryption) https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html
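As a rough illustration of the DMS portion of option C, here is a hedged boto3 sketch of a full-load-plus-CDC task into the RDS for Oracle target. All ARNs and the table mapping are illustrative, not taken from the question:

```python
# Hypothetical sketch: DMS task that does an initial load then ongoing replication.
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-oracle",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",  # RDS for Oracle with TDE
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",  # created with a customer-managed KmsKeyId
    MigrationType="full-load-and-cdc",  # load existing data, then replicate changes
    TableMappings=json.dumps(table_mappings),
)
print(task["ReplicationTask"]["Status"])
```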
NEW QUESTION 3
A company is using AWS to run an internet-facing production application written in Node.js. The Development team is responsible for pushing new versions of their software directly to production. The application software is updated multiple times a day. The team needs guidance from a Solutions Architect to help them deploy the software to the production fleet quickly and with the least amount of disruption to the service.
Which option meets these requirements?
- A. Prepackage the software into an AMI and then use Auto Scaling to deploy the production fleet. For software changes, update the AMI and allow Auto Scaling to automatically push the new AMI to production.
- B. Use AWS CodeDeploy to push the prepackaged AMI to production. For software changes, reconfigure CodeDeploy with new AMI identification to push the new AMI to the production fleet.
- C. Use AWS Elastic Beanstalk to host the production application. For software changes, upload the new application version to Elastic Beanstalk to push this to the production fleet using a blue/green deployment method.
- D. Deploy the base AMI through Auto Scaling and bootstrap the software using user data. For software changes, SSH to each of the instances and replace the software with the new version.
Answer: C
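For reference, a minimal boto3 sketch of how a new version might be pushed to Elastic Beanstalk as in option C. Application, environment, bucket, and version names are made up for illustration:

```python
# Hypothetical sketch: register a new application version and deploy it.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application_version(
    ApplicationName="node-app",
    VersionLabel="v2021-06-01-1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "node-app-v2.zip"},
    Process=True,
)

# Pointing the environment at the new version triggers the deployment; in a
# blue/green flow you would deploy to a cloned environment and then swap CNAMEs.
eb.update_environment(
    EnvironmentName="node-app-prod",
    VersionLabel="v2021-06-01-1",
)
# eb.swap_environment_cnames(SourceEnvironmentName="node-app-green",
#                            DestinationEnvironmentName="node-app-prod")
```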
NEW QUESTION 4
A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store large, important documents within the application with the following requirements:
The data must be highly durable and available.
The data must always be encrypted at rest and in transit.
The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the Solutions Architect recommend?
- A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the storage gateway volumes.
- B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
- C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
- D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the data.
Answer: B
Explanation:
https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-y
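A minimal sketch of the bucket policy described in option B: deny any request made without TLS, and deny uploads that do not specify SSE-KMS. The bucket name is a placeholder:

```python
# Hypothetical sketch: enforce HTTPS and SSE-KMS with a bucket policy.
import json
import boto3

BUCKET = "important-documents-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # reject any request not made over TLS
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # reject uploads that are not encrypted with SSE-KMS
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```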
NEW QUESTION 5
During a security audit of a Service team's application, a Solutions Architect discovers that a username and password for an Amazon RDS database and a set of AWS IAM user credentials can be viewed in the AWS Lambda function code. The Lambda function uses the username and password to run queries on the database, and it uses the IAM credentials to call AWS services in a separate management account.
The Solutions Architect is concerned that the credentials could grant inappropriate access to anyone who can view the Lambda code. The management account and the Service team's account are in separate AWS Organizations organizational units (OUs).
Which combination of changes should the Solutions Architect make to improve the solution's security? (Select TWO)
- A. Configure Lambda to assume a role in the management account with appropriate access to AWS services.
- B. Configure Lambda to use the database credentials stored in AWS Secrets Manager and enable automatic rotation.
- C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
- D. Use an SCP on the management account's OU to prevent IAM users from accessing resources in the Service team's account.
- E. Enable AWS Shield Advanced on the management account to shield sensitive resources from unauthorized IAM access.
Answer: BD
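A sketch of what option B looks like inside the Lambda function: fetch the database credentials from AWS Secrets Manager at runtime instead of hard-coding them. The secret name and its JSON shape are assumptions:

```python
# Hypothetical sketch: read DB credentials from Secrets Manager in Lambda.
import json
import boto3

secrets = boto3.client("secretsmanager")

def lambda_handler(event, context):
    secret = secrets.get_secret_value(SecretId="service-team/rds-credentials")
    creds = json.loads(secret["SecretString"])  # e.g. {"username": ..., "password": ...}
    # connect to the database with creds["username"] / creds["password"];
    # cross-account calls would use sts.assume_role(...) instead of stored IAM keys
    return {"status": "ok"}
```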
NEW QUESTION 6
A company uses an Amazon EMR cluster to process data once a day. The raw data comes from Amazon S3, and the resulting processed data is also stored in Amazon S3. The processing must complete within 4 hours; currently, it takes only 3 hours. However, the processing time is increasing by 5 to 10 minutes each week due to a growing volume of raw data.
The team is also concerned about rising costs as the compute capacity increases. The EMR cluster is currently running on three m3.xlarge instances (one master and two core nodes).
Which of the following solutions will reduce costs related to the increasing compute needs?
- A. Add additional task nodes, but have the team purchase an all-upfront convertible Reserved Instance for each additional node to offset the costs.
- B. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a scheduled Reserved Instance for the master node.
- C. Add additional task nodes, but use instance fleets with the master node in Spot mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase enough scheduled Reserved Instances to offset the cost of running any On-Demand instances.
- D. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a standard all-upfront Reserved Instance for the master node.
Answer: B
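A trimmed boto3 sketch of the instance-fleet layout from option B: an On-Demand master fleet and a Spot/On-Demand mix for core and task capacity. Cluster name, release label, and instance types are illustrative:

```python
# Hypothetical sketch: EMR cluster using instance fleets with Spot capacity.
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="daily-processing",
    ReleaseLabel="emr-5.30.0",
    Instances={
        "InstanceFleets": [
            {"InstanceFleetType": "MASTER", "TargetOnDemandCapacity": 1,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}]},
            {"InstanceFleetType": "CORE", "TargetOnDemandCapacity": 1,
             "TargetSpotCapacity": 1,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}]},
            {"InstanceFleetType": "TASK", "TargetSpotCapacity": 4,
             "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"},
                                     {"InstanceType": "m5.2xlarge"}]},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the daily run
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```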
NEW QUESTION 7
A company is running an email application across multiple AWS Regions. The company uses Ohio (us-east-2) as the primary Region and Northern Virginia (us-east-1) as the Disaster Recovery (DR) Region. The data is continuously replicated from the primary Region to the DR Region by a single instance on the public subnet in both Regions. The replication messages between the Regions have a significant backlog during certain times of the day. The backlog clears on its own after a short time, but it affects the application’s RPO.
Which of the following solutions should help remediate this performance problem? (Select TWO)
- A. Increase the size of the instances.
- B. Have the instance in the primary Region write the data to an Amazon SQS queue in the primary Region instead, and have the instance in the DR Region poll from this queue.
- C. Use multiple instances on the primary and DR Regions to send and receive the replication data.
- D. Change the DR Region to Oregon (us-west-2) instead of the current DR Region.
- E. Attach an additional elastic network interface to each of the instances in both Regions and set up load balancing between the network interfaces.
Answer: AC
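Option B describes a queue-based decoupling of the two Regions; whatever the marked answer, here is a hedged sketch of how that pattern could look with boto3. The queue URL and payload are invented:

```python
# Hypothetical sketch: buffer cross-Region replication messages in SQS.
import boto3

QUEUE_URL = "https://sqs.us-east-2.amazonaws.com/111122223333/replication-queue"
sqs = boto3.client("sqs", region_name="us-east-2")

def apply_replication(body):
    print("applying", body)  # stand-in for the real replication apply step

# primary Region: enqueue instead of pushing directly to the DR instance
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="<replication payload>")

# DR Region: poll with long polling, apply, then delete
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                           WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    apply_replication(msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```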
NEW QUESTION 8
A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?
- A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
- B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
- C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to the URL using the S3 multipart upload API.
- D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.
Answer: A
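For reference, a sketch of the presigned-URL flow from the question stem, with the Transfer Acceleration endpoint from option C switched on. Bucket and key names are placeholders:

```python
# Hypothetical sketch: accelerated presigned PUT URL for authenticated uploads.
import boto3
from botocore.config import Config

s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# one-time: enable Transfer Acceleration on the bucket
s3.put_bucket_accelerate_configuration(
    Bucket="media-uploads", AccelerateConfiguration={"Status": "Enabled"}
)

# per upload: hand the authenticated user a short-lived PUT URL
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "media-uploads", "Key": "videos/upload.mp4"},
    ExpiresIn=300,  # seconds
)
print(url)  # the browser PUTs the file here; large files should use multipart upload
```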
NEW QUESTION 9
A company plans to move regulated and security-sensitive businesses to AWS. The Security team is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for teams to provision resources.
Which strategies should a Solutions Architect use to meet the business requirements and continuously assess, audit, and monitor the configurations of AWS resources? (Choose two.)
- A. Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.
- B. Use the Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using a pre-defined set of filter patterns that match mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for long-term retention and auditability.
- C. Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs.
- D. Use the Amazon CloudWatch Events near-real-time capabilities to monitor system event patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses.
- E. Use CloudTrail integration with Amazon SNS to automatically notify of unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.
Answer: AC
Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html https://docs.aws.amazon.com/en_pv/awscloudtrail/latest/userguide/best-practices-security.html
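A skeleton of the AWS Config custom rule backed by Lambda from option A. The compliance check itself is a stand-in; a real rule would inspect the configurationItem in detail:

```python
# Hypothetical sketch: Lambda handler for an AWS Config custom rule.
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # placeholder check; replace with evaluation against the required controls
    compliant = item.get("resourceType") == "AWS::S3::Bucket"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```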
NEW QUESTION 10
A company is running a .NET three-tier web application on AWS. The team currently uses XL storage optimized instances to store and serve the website's image and video files on local instance storage. The company has encountered issues with data loss from replication and instance failures. The Solutions Architect has been asked to redesign this application to improve its reliability while keeping costs low.
Which solution will meet these requirements?
- A. Set up a new Amazon EFS share, move all image and video files to this share, and then attach this new drive as a mount point to all existing servers. Create an Elastic Load Balancer with Auto Scaling general purpose instances. Enable Amazon CloudFront to the Elastic Load Balancer. Enable Cost Explorer and use AWS Trusted Advisor checks to continue monitoring the environment for future savings.
- B. Implement Auto Scaling with general purpose instance types and an Elastic Load Balancer. Enable an Amazon CloudFront distribution to Amazon S3 and move images and video files to Amazon S3. Reserve general purpose instances to meet base performance requirements. Use Cost Explorer and AWS Trusted Advisor checks to continue monitoring the environment for future savings.
- C. Move the entire website to Amazon S3 using the S3 website hosting feature. Remove all the web servers and have Amazon S3 communicate directly with the application servers in Amazon VPC.
- D. Use AWS Elastic Beanstalk to deploy the .NET application. Move all images and video files to Amazon EFS. Create an Amazon CloudFront distribution that points to the EFS share. Reserve the m4.4xl instances needed to meet base performance requirements.
Answer: B
NEW QUESTION 11
A company wants to replace its call center system with a solution built using AWS managed services. The company's call center would like the solution to receive calls, create contact flows, and scale to handle growth projections. The call center would also like the solution to use deep learning capabilities to recognize the intent of the callers and handle basic tasks, reducing the need to speak to an agent. The solution should also be able to query business applications and provide relevant information back to callers as requested.
Which services should the Solutions Architect use to build this solution? (Choose three.)
- A. Amazon Rekognition to identify who is calling.
- B. Amazon Connect to create a cloud-based contact center.
- C. Amazon Alexa for Business to build a conversational interface.
- D. AWS Lambda to integrate with internal systems.
- E. Amazon Lex to recognize the intent of the caller.
- F. Amazon SQS to add incoming callers to a queue.
Answer: BDE
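A minimal fulfillment handler for an Amazon Lex intent, tying together options B, D, and E. The response shape below follows the Lex V1 Lambda format; the intent and message content are invented for illustration:

```python
# Hypothetical sketch: Lambda fulfillment for a Lex intent behind Amazon Connect.
def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    # here the function would query internal business applications
    answer = f"Handled intent {intent} with slots {slots}"

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": answer},
        }
    }
```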
NEW QUESTION 12
A three-tier web application runs on Amazon EC2 instances. Cron daemons are used to trigger scripts that collect the web server, application, and database logs and send them to a centralized location every hour. Occasionally, scaling events or unplanned outages have caused the instances to stop before the latest logs were collected, and the log files were lost.
Which of the following options is the MOST reliable way of collecting and preserving the log files?
- A. Update the cron jobs to run every 5 minutes instead of every hour to reduce the possibility of log messages being lost in an outage.
- B. Use Amazon CloudWatch Events to trigger AWS Systems Manager Run Command to invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.
- C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an outage.
- D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
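The agent itself is configured through its config file rather than code; as a rough illustration of the PutLogEvents API the agent calls under the hood, here is a boto3 sketch with an invented log group and stream (the create calls fail if they already exist):

```python
# Hypothetical sketch: stream a log event directly to CloudWatch Logs.
import time
import boto3

logs = boto3.client("logs")

logs.create_log_group(logGroupName="/webapp/access")
logs.create_log_stream(logGroupName="/webapp/access", logStreamName="i-0abc123")

logs.put_log_events(
    logGroupName="/webapp/access",
    logStreamName="i-0abc123",
    logEvents=[{
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "message": "GET /index.html 200",
    }],
)
```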
NEW QUESTION 13
A company is running a large application on-premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database. The company wants to migrate the application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make code changes to support the migration.
Which design is the LEAST complex to manage after the migration?
- A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
- B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
- C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
- D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.
Answer: B
NEW QUESTION 14
A company has multiple AWS accounts hosting IT applications. An Amazon CloudWatch Logs agent is installed on all Amazon EC2 instances. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage.
Security Administrators need to perform near-real-time gathering and correlating of events across multiple AWS accounts.
Which solution satisfies these requirements?
- A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account.
- B. Configure CloudWatch Logs streams in each application AWS account to forward events to CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon Kinesis Data Firehose stream to Amazon CloudWatch Events, and use the stream to persist log data in Amazon S3.
- C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account.
- D. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the logging AWS account, use an AWS Lambda function to read messages from the stream and push messages to Data Firehose, and persist the data in Amazon S3.
Answer: C
Explanation:
The solution uses Amazon Kinesis Data Streams and a log destination to set up an endpoint in the logging account to receive streamed logs and uses Amazon Kinesis Data Firehose to deliver log data to the Amazon Simple Storage Solution (S3) bucket. Application accounts will subscribe to stream all (or part) of their Amazon CloudWatch logs to a defined destination in the logging account via subscription filters. https://aws.amazon.com/blogs/architecture/central-logging-in-multi-account-environments/
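A sketch of the cross-account wiring from option C, following the central-logging pattern in the linked blog post. All ARNs, names, and the filter pattern are placeholders:

```python
# Hypothetical sketch: cross-account log aggregation via a CloudWatch Logs destination.
import boto3

# logging account: expose the Kinesis data stream through a Logs destination
logs_central = boto3.client("logs")
logs_central.put_destination(
    destinationName="central-security-logs",
    targetArn="arn:aws:kinesis:us-east-1:999999999999:stream/security-events",
    roleArn="arn:aws:iam::999999999999:role/CWLtoKinesisRole",
)
# a destination policy (put_destination_policy) must allow the application accounts

# each application account: subscribe its log group to that destination
logs_app = boto3.client("logs")
logs_app.put_subscription_filter(
    logGroupName="/app/security",
    filterName="to-central-logging",
    filterPattern="",  # empty pattern forwards every event
    destinationArn="arn:aws:logs:us-east-1:999999999999:destination:central-security-logs",
)
```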
NEW QUESTION 15
A company wants to manage the costs associated with a group of 20 critical applications by migrating them to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology. Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.
Which is the MOST cost-effective solution?
- A. Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.
- B. Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
- C. Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment using CloudWatch alarms.
- D. Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.
Answer: C
NEW QUESTION 16
An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic.
Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?
- A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.
- B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.
- C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.
- D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.
Answer: A
Explanation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html
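A sketch of option A: a single-instance Elastic Beanstalk environment, which skips the load balancer entirely. The application/environment names and the solution stack string are illustrative:

```python
# Hypothetical sketch: create a single-instance (no-ELB) Beanstalk environment.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="legacy-php-app",
    EnvironmentName="legacy-php-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.3.6 running PHP 8.0",
    OptionSettings=[{
        "Namespace": "aws:elasticbeanstalk:environment",
        "OptionName": "EnvironmentType",
        "Value": "SingleInstance",  # no load balancer is created
    }],
)
```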
NEW QUESTION 17
The CISO of a large enterprise with multiple IT departments, each with its own AWS account, wants one central place where AWS permissions for users can be managed and users' authentication credentials can be synchronized with the company's existing on-premises solution.
Which solution will meet the CISO’s requirements?
- A. Define AWS IAM roles based on the functional responsibilities of the users in a central account. Create a SAML-based identity management provider. Map users in the on-premises groups to IAM roles. Establish trust relationships between the other accounts and the central account.
- B. Deploy a common set of AWS IAM users, groups, roles, and policies in all of the AWS accounts using AWS Organizations. Implement federation between the on-premises identity provider and the AWS accounts.
- C. Use AWS Organizations in a centralized account to define service control policies (SCPs). Create a SAML-based identity management provider in each account and map users in the on-premises groups to AWS IAM roles.
- D. Perform a thorough analysis of the user base and create AWS IAM user accounts that have the necessary permissions. Set up a process to provision and deprovision accounts based on data in the on-premises solution.
Answer: A
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
NEW QUESTION 18
A company runs a video processing platform. Files are uploaded by users who connect to a web server, which stores them on an Amazon EFS share. This web server is running on a single Amazon EC2 instance. A different group of instances, running in an Auto Scaling group, scans the EFS share directory structure for new files to process and generates new videos (thumbnails, different resolution, compression, etc.) according to the instructions file, which is uploaded along with the video files. A different application running on a group of instances managed by an Auto Scaling group processes the video files and then deletes them from the EFS share. The results are stored in an S3 bucket. Links to the processed video files are emailed to the customer.
The company has recently discovered that as it adds more instances to the Auto Scaling group, many files are processed twice, so image processing speed is not improved. The maximum size of these video files is 2 GB.
What should the Solutions Architect do to improve reliability and reduce the redundant processing of video files?
- A. Modify the web application to upload the video files directly to Amazon S3. Use Amazon CloudWatch Events to trigger an AWS Lambda function every time a file is uploaded, and have this Lambda function put a message into an Amazon SQS queue. Modify the video processing application to read from the SQS queue for new files, and use the queue depth metric to scale instances in the video processing Auto Scaling group.
- B. Set up a cron job on the web server instance to synchronize the contents of the EFS share into Amazon S3. Trigger an AWS Lambda function every time a file is uploaded to process the video file and store the results in Amazon S3. Using Amazon CloudWatch Events, trigger an Amazon SES job to send an email to the customer containing the link to the processed file.
- C. Rewrite the web application to run directly from Amazon S3 and use Amazon API Gateway to upload the video files to an S3 bucket. Use an S3 trigger to run an AWS Lambda function each time a file is uploaded to process and store new video files in a different bucket. Using CloudWatch Events, trigger an SES job to send an email to the customer containing the link to the processed file.
- D. Rewrite the web application to run from Amazon S3 and upload the video files to an S3 bucket. Each time a new file is uploaded, trigger an AWS Lambda function to put a message in an SQS queue containing the link and the instructions. Modify the video processing application to read from the SQS queue and the S3 bucket. Use the queue depth metric to adjust the size of the Auto Scaling group for video processing instances.
Answer: A
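A sketch of the glue Lambda in option A: fan S3 upload events into the SQS queue that the video-processing Auto Scaling group consumes. The queue URL is a placeholder:

```python
# Hypothetical sketch: forward upload notifications into an SQS work queue.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/video-jobs"

def lambda_handler(event, context):
    for record in event["Records"]:  # S3 notification records
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }),
        )
```

Each video is then processed once per received message, and the queue's ApproximateNumberOfMessagesVisible metric can drive the Auto Scaling group's scaling policy.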
NEW QUESTION 19
A company CFO recently analyzed the company’s AWS monthly bill and identified an opportunity to reduce the cost for AWS Elastic Beanstalk environments in use. The CFO has asked a Solutions Architect to design a highly available solution that will spin up an Elastic Beanstalk environment in the morning and terminate it at the end of the day.
The solution should be designed with minimal operational overhead and to minimize costs. It should also be able to handle the increased use of Elastic Beanstalk environments among different teams, and must provide a one-stop scheduler solution for all teams to keep the operational costs low.
What design will meet these requirements?
- A. Set up a Linux EC2 Micro instance. Configure an IAM role to allow the start and stop of the Elastic Beanstalk environment and attach it to the instance. Create scripts on the instance to start and stop the Elastic Beanstalk environment. Configure cron jobs on the instance to execute the scripts.
- B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron expression Amazon CloudWatch Events rules to trigger the Lambda functions.
- C. Develop an AWS Step Functions state machine with "wait" as its type to control the start and stop time. Use the activity task to start and stop the Elastic Beanstalk environment. Create a role for Step Functions to allow it to start and stop the Elastic Beanstalk environment. Invoke Step Functions daily.
- D. Configure a time-based Auto Scaling group. In the morning, have the Auto Scaling group scale up an Amazon EC2 instance and put the Elastic Beanstalk environment start command in the EC2 instance user data. At the end of the day, scale down the instance number to 0 to terminate the EC2 instance.
Answer: B
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/schedule-elastic-beanstalk-stop-restart/
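A sketch of the scheduling half of option B: cron-based CloudWatch Events rules that trigger the start/stop Lambda functions. The ARNs and cron times (08:00 and 18:00 UTC on weekdays) are illustrative; the targeted functions would also need Lambda invoke permissions for the rules:

```python
# Hypothetical sketch: schedule start/stop Lambdas with CloudWatch Events rules.
import boto3

events = boto3.client("events")

for name, cron, fn_arn in [
    ("start-eb-env", "cron(0 8 ? * MON-FRI *)",
     "arn:aws:lambda:us-east-1:111122223333:function:start-eb"),
    ("stop-eb-env", "cron(0 18 ? * MON-FRI *)",
     "arn:aws:lambda:us-east-1:111122223333:function:stop-eb"),
]:
    events.put_rule(Name=name, ScheduleExpression=cron)
    events.put_targets(Rule=name, Targets=[{"Id": "1", "Arn": fn_arn}])
```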
NEW QUESTION 20
A company runs a memory-intensive analytics application using On-Demand Amazon EC2 compute optimized instances. The application is used continuously, and application demand doubles during working hours. The application currently scales based on CPU usage. When scaling in occurs, a lifecycle hook is used because an instance requires 4 minutes to clean the application state before terminating.
Because users reported poor performance during working hours, scheduled scaling actions were implemented so additional instances would be added during working hours. The Solutions Architect has been asked to reduce the cost of the application.
Which solution is MOST cost-effective?
- A. Use the existing launch configuration that uses C5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours.
- B. Update the existing launch configuration to use R5 instances, and update the application AMI to include SSM Agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Spot Instances with On-Demand Instances to cover the increased demand during working hours.
- C. Use the existing launch configuration that uses C5 instances, and update the application AMI to include SSM Agent. Leave the Auto Scaling policies to scale based on CPU utilization. Use scheduled Reserved Instances for the number of instances required after working hours, and use Spot Instances to cover the increased demand during working hours.
- D. Create a new launch configuration using R5 instances, and update the application AMI to include the Amazon CloudWatch agent. Change the Auto Scaling policies to scale based on memory utilization. Use Reserved Instances for the number of instances required after working hours, and use Standard Reserved Instances with On-Demand Instances to cover the increased demand during working hours.
Answer: D
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html
NEW QUESTION 21
A Solutions Architect is migrating a 10 TB PostgreSQL database to Amazon RDS for PostgreSQL. The company’s internet link is 50 MB with a VPN in the Amazon VPC, and the Solutions Architect needs to migrate the data and synchronize the changes before the cutover. The cutover must take place within an 8-day period.
What is the LEAST complex method of migrating the database securely and reliably?
- A. Order an AWS Snowball device and copy the database using AWS DMS. When the database is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
- B. Create an AWS DMS job to continuously replicate the data from on premises to AWS. Cut over to Amazon RDS after the data is synchronized.
- C. Order an AWS Snowball device and copy a database dump to the device. After the data has been copied to Amazon S3, import it to the Amazon RDS instance. Set up log shipping over a VPN to synchronize changes before the cutover.
- D. Order an AWS Snowball device and copy the database by using the AWS Schema Conversion Tool. When the data is available in Amazon S3, use AWS DMS to load it to Amazon RDS, and configure a job to synchronize changes before the cutover.
Answer: B
NEW QUESTION 22
An organization has two Amazon EC2 instances:
The first is running an ordering application and an inventory application.
The second is running a queuing system.
During certain times of the year, several thousand orders are placed per second. Some orders were lost when the queuing system was down. Also, the organization’s inventory application has the incorrect quantity of products because some orders were processed twice.
What should be done to ensure that the applications can handle the increasing number of orders?
- A. Put the ordering and inventory applications into their own AWS Lambda functions. Have the ordering application write the messages into an Amazon SQS FIFO queue.
- B. Put the ordering and inventory applications into their own Amazon ECS containers and create an Auto Scaling group for each application. Then, deploy the message queuing server in multiple Availability Zones.
- C. Put the ordering and inventory applications into their own Amazon EC2 instances, and create an Auto Scaling group for each application. Use Amazon SQS standard queues for the incoming orders, and implement idempotency in the inventory application.
- D. Put the ordering and inventory applications into their own Amazon EC2 instances. Write the incoming orders to an Amazon Kinesis data stream. Configure AWS Lambda to poll the stream and update the inventory application.
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
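A sketch of the idempotency piece of option C: a conditional DynamoDB write ensures an order is applied to inventory only once, even if the SQS standard queue delivers the message twice. Table and attribute names are invented:

```python
# Hypothetical sketch: idempotent order processing via a conditional write.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-orders")

def update_inventory(order):
    print("decrementing stock for", order["id"])  # stand-in for the real update

def process_order(order):
    try:
        # succeeds only the first time this order_id is seen
        table.put_item(
            Item={"order_id": order["id"]},
            ConditionExpression="attribute_not_exists(order_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # duplicate delivery: inventory was already updated
        raise
    update_inventory(order)
```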
P.S. Surepassexam is now offering 100% pass-guarantee SAP-C01 dumps! All SAP-C01 exam questions have been updated with correct answers: https://www.surepassexam.com/SAP-C01-exam-dumps.html (179 New Questions)