
AWS-Certified-Solutions-Architect-Professional Exam

Practice questions for the AWS Certified Solutions Architect - Professional exam




Pass4sure offers a free demo for the AWS Certified Solutions Architect - Professional exam, an Amazon certification. This set of practice questions and answers covers the knowledge points of the real exam and is intended to help you prepare for and pass it.

Q11. You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTP/S connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes during which all instances fetch new software updates. Each update is about 200MB in size, and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration, and you are able to access them manually using a web browser on the instances. What might be happening? Choose 2 answers 

A. You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time 

B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance 

C. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy 

D. You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail 

E. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW) 

Answer: A, B 
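A quick back-of-the-envelope throughput estimate (illustrative only; all figures come from the question) shows why the single proxy, or an undersized NAT in front of it, becomes the bottleneck:

```python
# Rough throughput estimate for the maintenance window described above.
# All inputs come from the question text; the calculation is illustrative.

instances = 500
update_size_mb = 200          # MB per instance
window_seconds = 10 * 60      # 10-minute maintenance window

total_mb = instances * update_size_mb            # 100,000 MB (~100 GB)
required_mb_per_s = total_mb / window_seconds    # ~167 MB/s
required_gbps = required_mb_per_s * 8 / 1000     # ~1.3 Gbps sustained

print(f"Aggregate throughput needed: {required_mb_per_s:.0f} MB/s (~{required_gbps:.1f} Gbps)")
# Every byte flows through the single proxy (and any NAT instance in front
# of it), so an undersized proxy or NAT instance cannot sustain this rate.
```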


Q12. Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs for your NOC members? 

A. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. 

B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console. 

C. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console. 

D. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console. 

Answer:
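For reference, the SAML-based console federation flow can be sketched with boto3 and the AWS sign-in federation endpoint. This is a sketch only: the role and provider ARNs are placeholders, and obtaining the SAML assertion from the on-premises IdP is assumed to happen out of band.

```python
import json
import urllib.parse
import boto3
import requests

# Assumption: the IdP returns a base64-encoded SAML response; placeholder here.
saml_assertion = "<base64-encoded SAML response from the on-premises IdP>"

sts = boto3.client("sts")
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/NOC-Operators",           # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/OnPremIdP",  # placeholder
    SAMLAssertion=saml_assertion,
)
creds = resp["Credentials"]

# Exchange the temporary credentials for a console sign-in token.
session_json = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
token = requests.get(
    "https://signin.aws.amazon.com/federation",
    params={"Action": "getSigninToken", "Session": session_json},
).json()["SigninToken"]

login_url = (
    "https://signin.aws.amazon.com/federation?Action=login"
    "&Destination=" + urllib.parse.quote("https://console.aws.amazon.com/ec2/")
    + "&SigninToken=" + token
)
print(login_url)  # NOC members open this URL; no separate IAM sign-in needed
```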


Q13. A customer is deploying an SSL-enabled web application to AWS and would like to implement a separation of roles between the EC2 service administrators, who are entitled to log in to instances and make API calls, and the security officers, who will maintain and have exclusive access to the application's X.509 certificate that contains the private key. Which configuration option best meets these requirements? 

A. Upload the certificate to an S3 bucket owned by the security officers and accessible only by the EC2 role of the web servers.

B. Configure the web servers to retrieve the certificate upon boot from a CloudHSM that is managed by the security officers.

C. Configure system permissions on the web servers to restrict access to the certificate only to the authorized security officers.

D. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB.

Answer:
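A minimal sketch of the kind of IAM policy option D implies, scoping the IAM server-certificate store to the security officers while SSL is terminated on the ELB. The account ID, group name, and certificate path are hypothetical:

```python
import json
import boto3

# Assumption: security officers belong to an IAM group named "SecurityOfficers"
# and certificates are uploaded under the path "/ssl/"; both names are made up.
cert_store_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:UploadServerCertificate",
            "iam:GetServerCertificate",
            "iam:UpdateServerCertificate",
            "iam:DeleteServerCertificate",
        ],
        "Resource": "arn:aws:iam::123456789012:server-certificate/ssl/*",
    }],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="SecurityOfficers",
    PolicyName="ManageSSLCertificates",
    PolicyDocument=json.dumps(cert_store_policy),
)
# EC2 administrators get no iam:*ServerCertificate* permissions, and SSL is
# terminated on the ELB, so the private key never resides on the instances.
```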


Q14. You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives? 

A. Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes. 

B. Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes. 

C. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume. 

D. Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance. 

E. Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b. 

Answer:
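A quick sanity check of the numbers in option E (illustrative only; the disk count and size are taken from the option text):

```python
# Capacity check for a RAID 0 set of the instance-store SSDs described in
# option E (4 x 800 GB, figures taken from the question text).

disks = 4
disk_gb = 800

raid0_capacity_tb = disks * disk_gb / 1000   # 3.2 TB, which meets the 3 TB requirement
print(f"RAID 0 capacity: {raid0_capacity_tb:.1f} TB")

# RAID 0 striping aggregates the IOPS of the SSDs, which is how the 100,000
# IOPS target is reached, but it adds no redundancy by itself. Synchronous
# block-level replication to an identically configured instance in a second
# Availability Zone is what protects against the loss of a disk, an instance,
# or an entire AZ without data loss.
```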


Q15. You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. 

The configuration is as follows: 

VPC: vpc-2f8bc447 

IGW: igw-2d8bc445 

NACL: acl-208bc448 

Subnets: 

Web servers: subnet-258bc44d 

Application servers: subnet-248bc44c 

Database servers: subnet-9189c6f9 

Route Tables: rtb-218bc449, rtb-238bc44b 

Associations: 

subnet-258bc44d : rtb-218bc449 

subnet-248bc44c : rtb-238bc44b 

subnet-9189c6f9 : rtb-238bc44b 

You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet. Which configuration below will give you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet? 

A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance. 

B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c. 

C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d. 

D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c. 

Answer:
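As a sketch of the routing change option A describes, the private route table rtb-238bc44b gets a default route pointing at a NAT instance launched in the public web subnet. The NAT instance ID below is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: the bastion and NAT instance have already been launched in the
# public subnet subnet-258bc44d; "i-0abc123def456" is a placeholder ID.
nat_instance_id = "i-0abc123def456"

# A NAT instance must have source/destination checking disabled.
ec2.modify_instance_attribute(
    InstanceId=nat_instance_id,
    SourceDestCheck={"Value": False},
)

# Default route for the private route table used by the app and DB subnets.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=nat_instance_id,
)
# Application and database servers now reach the Internet through the NAT
# instance for updates, and administrators reach them via the bastion host.
```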


Q16. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB of standard storage. The pilot is considered a success, and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100k sensors, which needs to be supported by the backend. You also need to store the sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? 

A. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage 

B. Keep the current architecture, but upgrade RDS storage to 3TB and 10k provisioned IOPS 

C. Ingest data into a DynamoDB table and move old data to a Redshift cluster 

D. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance 

Answer:
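A rough scaling estimate (illustrative only, using the figures stated in the question) shows why a 3 TB RDS volume cannot hold two years of data at 100k sensors:

```python
# Back-of-the-envelope sizing for 100,000 sensors, 1 KB per sensor per minute.

sensors = 100_000
record_kb = 1
minutes_per_year = 60 * 24 * 365

writes_per_second = sensors / 60                              # ~1,667 writes/s
tb_per_year = sensors * record_kb * minutes_per_year / 1e9    # ~53 TB/year
tb_two_years = 2 * tb_per_year                                # ~105 TB

print(f"Write rate: ~{writes_per_second:.0f} writes/s")
print(f"Two-year storage: ~{tb_two_years:.0f} TB")

# Roughly 1,700 writes/s and well over 100 TB of retained data rule out a
# 3 TB RDS instance; ingesting into DynamoDB and offloading older data to a
# Redshift cluster scales for both the write rate and the storage requirement.
```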


Q17. A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in node.js and is waiting to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way? 

A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe 

B. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe 

C. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe 

D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes 

Answer:
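A minimal boto3 sketch of the single-stack, two-layer setup option C describes. The stack name, layer names, region, ARNs, and recipe name are all assumptions for illustration:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# One stack for the whole application (all names and ARNs are placeholders).
stack_id = opsworks.create_stack(
    Name="social-news",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)["StackId"]

# Layer 1: existing memory-bound Java/Tomcat app servers (m2.xlarge).
opsworks.create_layer(
    StackId=stack_id, Type="java-app", Name="Web App", Shortname="webapp",
)

# Layer 2: new CPU-bound node.js chat service on compute-optimized instances,
# deployed with one custom Chef recipe (hypothetical cookbook "chat").
opsworks.create_layer(
    StackId=stack_id, Type="custom", Name="Chat", Shortname="chat",
    CustomRecipes={"Setup": ["chat::deploy"]},
)
```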


Q18. An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API credentials? 

A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role with the application instances by referencing an instance profile. 

B. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance. 

C. Use the Parameter section in the CloudFormation template to have the user input Access and Secret keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table. 

D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret keys and pass them to the application instance through user-data. 

Answer:
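For reference, the resources option B describes can be sketched as follows, shown here as a Python dict representing the Resources section of the template. Logical names, actions, and the AMI ID are illustrative; "AppTable" is assumed to be a DynamoDB table defined elsewhere in the same template.

```python
# Sketch of a CloudFormation Resources section: an IAM role with DynamoDB
# permissions, an instance profile wrapping it, and an EC2 instance whose
# IamInstanceProfile property references the profile.
resources = {
    "AppRole": {
        "Type": "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "ec2.amazonaws.com"},
                    "Action": "sts:AssumeRole",
                }],
            },
            "Policies": [{
                "PolicyName": "DynamoDBReadWrite",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                   "dynamodb:UpdateItem", "dynamodb:Query"],
                        # AppTable is the DynamoDB table defined elsewhere
                        # in the template (assumption).
                        "Resource": {"Fn::GetAtt": ["AppTable", "Arn"]},
                    }],
                },
            }],
        },
    },
    "AppInstanceProfile": {
        "Type": "AWS::IAM::InstanceProfile",
        "Properties": {"Roles": [{"Ref": "AppRole"}]},
    },
    "AppInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "ami-12345678",  # placeholder AMI
            "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
        },
    },
}
# Credentials are delivered to the instance through the role, so no access or
# secret keys appear in the template, its parameters, or user data.
```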


Q19. An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure? 

A. Take 15 minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes. 

B. Use synchronous database master-slave replication between two availability zones. 

C. Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes. 

D. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes. 

Answer:
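A quick check of option D against the stated objectives (illustrative arithmetic only):

```python
# Worst-case recovery point with hourly backups to S3 plus transaction logs
# shipped to S3 every 5 minutes (option D).

rpo_target_min = 15
log_interval_min = 5

worst_case_data_loss_min = log_interval_min   # only the last unshipped logs
print(f"Worst-case data loss: {worst_case_data_loss_min} min "
      f"(target: {rpo_target_min} min)")

# Because the corruption happened roughly 1.5 hours ago, recovery means
# restoring the last clean hourly backup from S3 and replaying transaction
# logs up to a point just before the corruption, which fits within the
# 3-hour RTO. Glacier retrieval delays (option A) and instance store volumes
# that disappear with the instance (option C) cannot offer the same guarantees.
```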


Q20. You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved the overall performance of the website by using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard? 

A. Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce Job. 

B. Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job. 

C. Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job. 

D. Use Elastic Beanstalk "Restart App Server(s)" option to update log delivery to the Elastic MapReduce job. 

E. Use Elastic Beanstalk "Rebuild Environment" option to update log delivery to the Elastic MapReduce job. 

Answer:
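As a sketch of option C, the relevant part of the CloudFront distribution configuration is its Logging block; once access logs are delivered to S3, the EMR job sees the traffic CloudFront now serves from its edge caches instead of only the origin hits. The distribution ID, log bucket, and prefix below are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")

dist_id = "E1EXAMPLE"  # placeholder distribution ID

# update_distribution requires the full current config plus its ETag.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "my-cf-logs.s3.amazonaws.com",   # placeholder log bucket
    "Prefix": "access-logs/",
}

cloudfront.update_distribution(
    Id=dist_id,
    DistributionConfig=config,
    IfMatch=current["ETag"],
)
# Point the Elastic MapReduce job at s3://my-cf-logs/access-logs/ so the
# dashboard counts requests served by CloudFront, not just origin requests.
```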