AWS-Certified-Solutions-Architect-Professional Exam





Want to know more about Testking's AWS-Certified-Solutions-Architect-Professional practice exam features, or learn more about the Amazon AWS-Certified-Solutions-Architect-Professional certification experience? Study verified Amazon AWS-Certified-Solutions-Architect-Professional questions and answers at Testking, with an absolute guarantee to pass the AWS-Certified-Solutions-Architect-Professional test on your first attempt.

Q21. An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements? 

A. Use AWS Data Pipeline to schedule a DynamoDB cross region copy once a day, create a "LastUpdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter 

B. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region 

C. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region 

D. Also send each write to an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region 

Answer: A
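
A minimal boto3 sketch of the mechanism behind option A, assuming a hypothetical table name and key schema: the application stamps every item with a "LastUpdated" timestamp, and the daily Data Pipeline copy job filters on it so only modified elements are synchronized.

    import time
    import boto3
    from boto3.dynamodb.conditions import Attr

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")  # source region (assumed)
    table = dynamodb.Table("AppData")  # hypothetical table name

    # Every application write stamps the item with the time of the change.
    table.put_item(Item={"pk": "user#123", "payload": "example",
                         "LastUpdated": int(time.time())})

    # The daily copy job scans only items modified since the previous run,
    # so unchanged elements are never re-copied across regions.
    cutoff = int(time.time()) - 24 * 3600
    page = table.scan(FilterExpression=Attr("LastUpdated").gte(cutoff))
    for item in page["Items"]:
        print(item["pk"])  # hand each changed item to the cross-region copy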


Q22. Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in .csv format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data? 

A. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. 

B. Use reduced redundancy storage (RRS) for PDF and .csv data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. 

C. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. 

D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. 

Answer: C
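
A hedged boto3 sketch of the EMR side of option C, with hypothetical names throughout: master and core nodes stay On-Demand so the job's data is never lost to an interruption, while a Spot task group adds cheap capacity for the daily batch.

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")  # region assumed

    response = emr.run_job_flow(
        Name="daily-log-reports",  # hypothetical cluster name
        ReleaseLabel="emr-6.15.0",
        Instances={
            "InstanceGroups": [
                {"Name": "master", "InstanceRole": "MASTER",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "InstanceType": "m5.xlarge", "InstanceCount": 2},
                # Interruptible Spot capacity speeds up the job at lower cost.
                {"Name": "spot-tasks", "InstanceRole": "TASK", "Market": "SPOT",
                 "BidPrice": "0.10", "InstanceType": "m5.xlarge", "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,  # tear down after the daily run
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])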


Q23. A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend? 

A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM User for each customer. Put the IAM Users in a Group that has an IAM policy that permits access to sub-directories within the bucket via use of the 'username' Policy Variable. 

B. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer. 

C. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer. 

D. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale-in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the User Data startup script on each instance. 

Answer: A
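
The policy variable in option A might look like the following sketch (the bucket and policy names are assumptions): each IAM user is confined to the prefix matching their own user name.

    import json
    import boto3

    iam = boto3.client("iam")

    # Each customer can read and write only under a prefix named after
    # their IAM user name, which keeps uploads private per customer.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::customer-uploads/${aws:username}/*",
        }],
    }
    iam.create_policy(PolicyName="CustomerHomePrefixAccess",
                      PolicyDocument=json.dumps(policy))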


Q24. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? Choose 3 answers 

A. Use Dedicated Instances to ensure that each instance has the maximum performance possible. 

B. Add alerts to Amazon CloudWatch to look for high Network In and CPU utilization. 

C. Create processes and capabilities to quickly add and remove rules to the instance OS firewall. 

D. Use an Elastic Load Balancer with auto scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers. 

E. Use an Amazon CloudFront distribution for both static and dynamic content. 

F. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. 

Answer: B, C, E 
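
Answer B could be implemented with a CloudWatch alarm along these lines; the instance ID and threshold below are placeholders, not recommendations.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region assumed

    # Alert when inbound traffic on a web instance spikes far above its baseline.
    cloudwatch.put_metric_alarm(
        AlarmName="web-high-network-in",
        Namespace="AWS/EC2",
        MetricName="NetworkIn",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=500_000_000,  # bytes per 5-minute period (placeholder)
        ComparisonOperator="GreaterThanThreshold",
    )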


Q25. Your system recently experienced down time. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to: 

- launch, start, stop, and terminate development resources, 

- launch and start production instances. 

A. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances. 

B. Leverage resource-based tagging, along with an IAM user that can prevent specific users from terminating production EC2 resources. 

C. Create an IAM user which is not allowed to terminate instances by leveraging production EC2 termination protection. 

D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances. 

Answer: B
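
One way to express option B's tag-based restriction, sketched as an IAM policy document (the tag key and value are assumptions): development work stays fully allowed, while stopping or terminating anything tagged as production is explicitly denied.

    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # Baseline: the administrator keeps full lifecycle control.
            {"Effect": "Allow",
             "Action": ["ec2:RunInstances", "ec2:StartInstances",
                        "ec2:StopInstances", "ec2:TerminateInstances"],
             "Resource": "*"},
            # An explicit Deny always wins, so instances tagged
            # Environment=production (assumed tag) cannot be terminated.
            {"Effect": "Deny",
             "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
             "Resource": "*",
             "Condition": {
                 "StringEquals": {"ec2:ResourceTag/Environment": "production"}
             }},
        ],
    }
    print(json.dumps(policy, indent=2))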


Q26. Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account. To streamline data capture, Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named ScoreData. When a user saves their game, the progress data will be stored to the GameState S3 bucket. What is the best approach for storing data to DynamoDB and S3? 

A. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the ScoreData DynamoDB table and the GameState S3 bucket. 

B. Use temporary security credentials that assume a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket using web identity federation 

C. Use an IAM user with access credentials assigned a role providing access to the ScoreData DynamoDB table and the GameState S3 bucket for distribution with the mobile app 

D. Use an EC2 instance that is launched with an EC2 role providing access to the ScoreData DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services 

Answer: B
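
A minimal sketch of option B's flow, assuming a hypothetical role ARN: the mobile app trades the social provider's token for temporary credentials, which then sign the DynamoDB and S3 calls directly.

    import boto3

    sts = boto3.client("sts")
    token_from_identity_provider = "eyJ..."  # placeholder for the provider's token

    # Exchange the social login token for short-lived AWS credentials whose
    # permissions are scoped by the role (ScoreData table, GameState bucket).
    creds = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/GameClientRole",  # hypothetical
        RoleSessionName="player-session",
        WebIdentityToken=token_from_identity_provider,
    )["Credentials"]

    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )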


Q27. You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: 

- the VM's single 10GB VMDK is almost full; 

- the virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized; 

- it is currently running on a highly customized Windows VM within a VMware environment; 

- you do not have the installation media. 

This is a mission critical application with an RTO (Recovery Time Objective) of 8 hours, RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements? 

A. Use S3 to create a backup of the VM and restore the data into EC2. 

B. Use the EC2 VM Import Connector for vCenter to import the VM into EC2. 

C. Use the ec2-bundle-instance API to import an image of the VM into EC2. 

D. Use Import/Export to import the VM as an EBS snapshot and attach to EC2. 

Answer: B
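
The VM Import Connector in option B is a vCenter plug-in; the same migration can also be driven through the API, as in this sketch of the current ImportImage workflow (the staging bucket and key are assumptions).

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region assumed

    # The exported VMDK is first uploaded to S3, then converted into an AMI,
    # preserving the customized Windows install for which no media exists.
    response = ec2.import_image(
        Description="legacy Windows web app",
        DiskContainers=[{
            "Description": "exported VMDK",
            "Format": "vmdk",
            "UserBucket": {"S3Bucket": "vm-import-staging",  # hypothetical bucket
                           "S3Key": "legacy-app.vmdk"},
        }],
    )
    print(response["ImportTaskId"])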


Q28. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend? 

A. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs. 

B. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. 

C. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. 

D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs. 

Answer: C
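
A boto3 sketch of option C's trail setup (the names are assumptions; the bucket policy, IAM roles, and MFA Delete would be configured separately):

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Global service events capture IAM activity alongside EC2 and RDS changes.
    cloudtrail.create_trail(
        Name="compliance-trail",               # hypothetical trail name
        S3BucketName="compliance-trail-logs",  # hypothetical dedicated bucket
        IncludeGlobalServiceEvents=True,
    )
    cloudtrail.start_logging(Name="compliance-trail")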


Q29. A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? Choose 2 answers 

A. Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment. 

B. Enable route propagation to the customer gateway (CGW). 

C. Add a route to the route table with an IPsec VPN connection as the target. 

D. Enable route propagation to the virtual private gateway (VGW). 

E. Modify the route table of all instances using the 'route' command. 

Answer: A, D 
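
Both remedies map to single EC2 API calls, sketched below with placeholder IDs and CIDR: enable propagation from the virtual private gateway (option D), or add a static route back to the on-premises network (option A).

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region assumed

    # Option D: propagate BGP routes learned over Direct Connect from the
    # VGW into the subnet route table automatically.
    ec2.enable_vgw_route_propagation(
        RouteTableId="rtb-0123456789abcdef0",
        GatewayId="vgw-0123456789abcdef0",
    )

    # Option A: alternatively, add the on-premises CIDR as a static route.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="10.0.0.0/16",  # on-premises CIDR, assumed
        GatewayId="vgw-0123456789abcdef0",
    )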


Q30. Refer to the exhibit: an architecture diagram of a batch processing solution that uses Simple Queue Service (SQS) to set up a message queue between EC2 instances used as batch processors. CloudWatch monitors the number of job requests (queued messages), and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner? 

A. Coordinate number of EC2 instances with number of job requests automatically, thus improving cost effectiveness. 

B. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup. 

C. Implement fault tolerance against EC2 instance failure since messages would remain in SQS and work can continue with recovery of EC2 instances. Implement fault tolerance against SQS failure by backing up messages to S3. 

D. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages. 

E. Implement message passing between EC2 instances within a batch by exchanging messages through SQS. 

Answer: A
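
A minimal sketch of one batch worker under this design, assuming a hypothetical queue name: because a message stays in SQS until a worker deletes it, another instance picks the job up if the first one fails, and CloudWatch can scale the fleet on queue depth.

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.get_queue_url(QueueName="batch-jobs")["QueueUrl"]  # hypothetical

    def process_job(body):
        """Hypothetical handler for a single batch job."""
        print("processing", body)

    # Each batch server polls independently; a message that is not deleted
    # before its visibility timeout expires reappears for another worker.
    while True:
        response = sqs.receive_message(QueueUrl=queue_url,
                                       MaxNumberOfMessages=1,
                                       WaitTimeSeconds=20)  # long polling
        for message in response.get("Messages", []):
            process_job(message["Body"])
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=message["ReceiptHandle"])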