
AWS Solutions Architect Associate Questions




Practice questions for the AWS Certified Solutions Architect - Associate exam.

Q31. What is Amazon Glacier?

A. You mean Amazon "Iceberg": it's a low-cost storage service.

B. A security tool that allows you to "freeze" an EBS volume and perform computer forensics on it.

C. A low-cost storage service that provides secure and durable storage for data archiving and backup.

D. It's a security tool that allows you to "freeze" an EC2 instance and perform computer forensics on it.

Answer: C


Q32. In relation to AWS CloudHSM, High-availability (HA) recovery is hands-off resumption by failed HA group members.

Prior to the introduction of this function, the HA feature provided redundancy and performance, but required that a failed/lost group member be _____ reinstated.

A. automatically

B. periodically

C. manually

D. continuously

Answer: C

Explanation:

In relation to AWS CloudHSM, High-availability (HA) recovery is hands-off resumption by failed HA group members.

Prior to the introduction of this function, the HA feature provided redundancy and performance, but required that a failed/lost group member be manually reinstated.

Reference: http://docs.aws.amazon.com/cloudhsm/latest/userguide/ha-best-practices.html


Q33. IAM's Policy Evaluation Logic always starts with a default _____ for every request, except for those that use the AWS account's root security credentials.

A. Permit

B. Deny

C. Cancel 

Answer: B
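The default-deny behavior can be made concrete with a small simulator. This is an illustrative sketch only; real IAM evaluation also considers resources, principals, conditions, and multiple policy types.

```python
def evaluate(statements, action):
    """Simulate IAM's policy evaluation logic for one action.

    Every request starts from an implicit default Deny; an explicit
    Allow overrides the default, but an explicit Deny always wins.
    `statements` is a list of (effect, action) pairs.
    """
    decision = "Deny"  # default deny for every request
    for effect, stmt_action in statements:
        if stmt_action != action:
            continue
        if effect == "Deny":
            return "Deny"       # an explicit deny is final
        if effect == "Allow":
            decision = "Allow"  # an explicit allow overrides the default
    return decision

# No matching statement: the default Deny applies.
print(evaluate([], "s3:GetObject"))                           # Deny
# An explicit Allow overrides the default.
print(evaluate([("Allow", "s3:GetObject")], "s3:GetObject"))  # Allow
# An explicit Deny beats an explicit Allow.
print(evaluate([("Allow", "s3:GetObject"),
                ("Deny", "s3:GetObject")], "s3:GetObject"))   # Deny
```

The exception noted in the question is that requests made with the account's root credentials are always allowed, so the default-deny starting point does not apply to them.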


Q34. Can I delete a snapshot of the root device of an EBS volume used by a registered AMI?

A. Only via API

B. Only via Console

C. Yes

D. No

Answer: C


Q35. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.

A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.

B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.

C. Create IAM users in the Master account Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.

D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts

Answer:

Explanation:

Bucket Owner Granting Cross-account Permission to objects It Does Not Own

In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.

Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:

The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner, the AWS account that created the objects, must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.

The bucket owner account can delegate permissions to users in its own account, but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.

In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily enabling it to access objects in the bucket.

Background: Cross-Account Permissions and Using IAM Roles

IAM roles enable several scenarios to delegate access to your resources, and cross-account access is

one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:

A trust policy identifying another AWS account that can assume the role.

An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.

The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects:

Assume the role and, in response, get temporary security credentials. Using the temporary security credentials, access the objects in the bucket.

For more information about IAM roles, go to Roles (Delegation and Federation) in the IAM User Guide. The following is a summary of the walkthrough steps:

Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.

Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when the user accesses Account A.

Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.

Account C administrator creates a user and attaches a user policy that allows the user to assume the role.

User in Account C first assumes the role, which returns the user temporary security credentials. Using those temporary credentials, the user then accesses objects in the bucket.

For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions) we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials in creating resources and granting them permissions.
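The two policies described above can be sketched as JSON documents. The account ID and bucket name below are made-up placeholders, not values from the walkthrough.

```python
import json

# Trust policy: identifies the AWS account (Account C in the walkthrough)
# that is allowed to assume the role. The account ID is a placeholder.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Access policy: what the role permits once assumed -- here, read access
# to objects in the bucket owner's (Account A's) bucket.
ACCESS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
    }],
}

# Both documents are passed to IAM as JSON strings when the role is
# created, e.g. as the AssumeRolePolicyDocument and an inline policy.
print(json.dumps(TRUST_POLICY, indent=2))
```

A user in Account C then calls sts:AssumeRole against this role, receives temporary security credentials, and uses those credentials to fetch objects from the bucket.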


Q36. You want to use AWS Import/Export to send data from your S3 bucket to several of your branch offices. What should you do if you want to send 10 storage units to AWS?

A. Make sure your disks are encrypted prior to shipping.

B. Make sure you format your disks prior to shipping.

C. Make sure your disks are 1TB or more.

D. Make sure you submit a separate job request for each device. 

Answer: D

Explanation:

When using AWS Import/Export, a separate job request needs to be submitted for each physical device, even if the devices belong to the same import or export job.

Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html


Q37. What is the command line instruction for running the remote desktop client in Windows?

A. desk.cpl

B. mstsc 

Answer: B


Q38. You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request.

How would you implement the architecture on AWS in order to maximize scalability and high availability?

A. File a change request to implement Alias Resource support in the application. Use Route 53 Alias Resource Record to distribute load on two application servers in different AZs.

B. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.

C. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs.

D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.

Answer: D
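With Proxy Protocol enabled, the ELB prepends a human-readable header line to the TCP stream that carries the original client address, which the application can parse instead of reading the socket peer. A minimal sketch of parsing a Proxy Protocol v1 header (the sample addresses are placeholders; a production parser must also validate lengths and handle the UNKNOWN family):

```python
def parse_proxy_v1(header: bytes):
    """Extract the original client IP and port from a Proxy Protocol v1
    header line, e.g. b'PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\\r\\n'."""
    line = header.decode("ascii").rstrip("\r\n")
    parts = line.split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a Proxy Protocol v1 header")
    # Fields: signature, address family, client IP, proxy IP, client port, proxy port
    _sig, _family, client_ip, _dest_ip, client_port, _dest_port = parts
    return client_ip, int(client_port)

ip, port = parse_proxy_v1(b"PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n")
print(ip, port)  # 198.51.100.22 35646
```

This is why option D preserves the client IP while still load balancing at the TCP layer: the other options either do not load balance through a single endpoint or lose the client address behind the ELB.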


Q39. A user needs to run a batch process which runs for 10 minutes. This will only be run once, or at maximum twice, in the next month, so the processes will be temporary only. The process needs 15 X-Large instances. The process downloads the code from S3 on each instance when it is launched, and then generates a temporary log file. Once the instance is terminated, all the data will be lost. Which of the below mentioned pricing models should the user choose in this case?

A. Spot instance.

B. Reserved instance.

C. On-demand instance.

D. EBS optimized instance. 

Answer: A

Explanation:

In Amazon Web Services, a spot instance is useful when the user wants to run a process temporarily. AWS can terminate a spot instance if another user outbids the existing bid. In this case all storage is temporary and the data does not need to be persistent. Thus, the spot instance is a good option to save money.

Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/
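The saving can be made concrete with back-of-envelope arithmetic for the scenario above (15 instances, two 10-minute runs in the month). The hourly rates below are invented placeholders, not real AWS prices:

```python
# Hypothetical hourly rates -- placeholders, not actual AWS pricing.
ON_DEMAND_RATE = 0.50   # $/instance-hour
SPOT_RATE = 0.15        # $/instance-hour (spot prices fluctuate with bids)

instances = 15
runs = 2
hours_per_run = 1       # a 10-minute run still consumed a full billed hour

instance_hours = instances * runs * hours_per_run  # 30 instance-hours
print(f"on-demand: ${instance_hours * ON_DEMAND_RATE:.2f}")  # on-demand: $15.00
print(f"spot:      ${instance_hours * SPOT_RATE:.2f}")       # spot:      $4.50
```

A reserved instance, by contrast, only pays off with a sustained commitment, which a one-off 10-minute batch cannot justify.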


Q40. You are running PostgreSQL on Amazon RDS and it seems to be running smoothly, deployed in one Availability Zone. A database administrator asks you if DB instances running PostgreSQL support Multi-AZ deployments. What would be a correct response to this question?

A. Yes.

B. Yes but only for small db instances.

C. No.

D. Yes but you need to request the service from AWS. 

Answer: A

Explanation:

Amazon RDS supports DB instances running several versions of PostgreSQL. Currently we support PostgreSQL versions 9.3.1, 9.3.2, and 9.3.3. You can create DB instances and DB snapshots, point-in-time restores, and backups.

DB instances running PostgreSQL support Multi-AZ deployments, Provisioned IOPS, and can be created inside a VPC. You can also use SSL to connect to a DB instance running PostgreSQL.

You can use any standard SQL client application to run commands for the instance from your client computer. Such applications include pgAdmin, a popular open-source administration and development tool for PostgreSQL, and psql, a command line utility that is part of a PostgreSQL installation.

In order to deliver a managed service experience, Amazon RDS does not provide host access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS supports access to databases on a DB instance using any standard SQL client application, but it does not allow direct host access to a DB instance via Telnet or Secure Shell (SSH).

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html
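Enabling Multi-AZ is a single flag at creation time. The following sketch shows the parameter set as it would be passed to boto3's rds.create_db_instance; the identifier, class, and credentials below are illustrative placeholders, and the API call itself is omitted.

```python
# Parameters for a Multi-AZ PostgreSQL DB instance (placeholder values),
# as would be passed via rds.create_db_instance(**params) with boto3.
params = {
    "DBInstanceIdentifier": "example-postgres",
    "Engine": "postgres",
    "DBInstanceClass": "db.m1.large",
    "AllocatedStorage": 100,           # GiB
    "MasterUsername": "dbadmin",
    "MasterUserPassword": "change-me",
    "MultiAZ": True,                   # synchronous standby in a second AZ
}
print(params["MultiAZ"])  # True
```

With MultiAZ set to True, RDS provisions a synchronous standby replica in a different Availability Zone and fails over to it automatically, which is the capability the question asks about.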