
DAS-C01 Exam

How Many Questions Of DAS-C01 Brain Dumps




Because all that matters here is passing the Amazon Web Services DAS-C01 exam. Because all that you need is a high score on the DAS-C01 AWS Certified Data Analytics - Specialty exam. The only thing you need to do is download the Certleader DAS-C01 exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Online DAS-C01 free questions and answers from the new version:

NEW QUESTION 1
An online retail company is migrating its reporting system to AWS. The company’s legacy system runs data processing on online transactions using a complex series of nested Apache Hive queries. Transactional data is exported from the online system to the reporting system several times a day. Schemas in the files are stable between updates.
A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To keep storage costs low, the data analyst decides to store the data in Amazon S3. It is vital that the data from the reports and associated analytics is completely up to date based on the data in Amazon S3.
Which solution meets these requirements?

  • A. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an AWS Glue crawler over Amazon S3 that runs when data is refreshed to ensure that data changes are updated. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
  • B. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an Amazon EMR cluster with consistent view enabled. Run emrfs sync before each analytics step to ensure data changes are updated. Create an EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
  • C. Create an Amazon Athena table with CREATE TABLE AS SELECT (CTAS) to ensure data is refreshed from underlying queries against the raw dataset. Create an AWS Glue Data Catalog to manage the Hive metadata over the CTAS table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
  • D. Use an S3 Select query to ensure that the data is properly updated. Create an AWS Glue Data Catalog to manage the Hive metadata over the S3 Select table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.

Answer: A
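For illustration, here is a minimal boto3 sketch of the catalog-and-crawler setup described in option A. The database name, bucket path, and IAM role below are placeholders, not values from the question.

    import boto3

    glue = boto3.client("glue")

    # Register the Hive metadata in a Glue Data Catalog database (placeholder name).
    glue.create_database(DatabaseInput={"Name": "reporting_db"})

    # Crawl the S3 export location; run this crawler whenever new transaction files land
    # so table definitions and partitions stay current for Hive on EMR.
    glue.create_crawler(
        Name="transactions-crawler",
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
        DatabaseName="reporting_db",
        Targets={"S3Targets": [{"Path": "s3://example-reporting-bucket/transactions/"}]},
        SchemaChangePolicy={"UpdateBehavior": "UPDATE_IN_DATABASE", "DeleteBehavior": "LOG"},
    )
    glue.start_crawler(Name="transactions-crawler")

The EMR cluster is then launched with the Glue Data Catalog configured as its Hive metastore, so the existing nested Hive queries run with minimal code changes.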

NEW QUESTION 2
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds.
Which architecture meets these requirements?

  • A. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
  • B. Use an AWS Lambda function to read from the Kinesis data stream to calculate the average per second and send the alarm to Amazon SNS.
  • C. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
  • D. Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.

Answer: D
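As a rough sketch of the Lambda side of option D: a Kinesis Data Analytics output function receives batches of records and must acknowledge each one. The field name, thresholds, and topic ARN below are assumptions for illustration, not part of the question.

    import base64
    import json
    import os

    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = os.environ.get("ALARM_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:metric-alarms")
    HIGH, LOW = 90.0, 10.0  # assumed thresholds

    def handler(event, context):
        results = []
        for record in event.get("records", []):
            payload = json.loads(base64.b64decode(record["data"]))
            avg = float(payload.get("avg_value", 0))  # assumed name of the per-second average column
            if avg > HIGH or avg < LOW:
                sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(payload),
                            Subject="Metric average out of range")
            results.append({"recordId": record["recordId"], "result": "Ok"})
        return {"records": results}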

NEW QUESTION 3
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users.
The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query from at any point is 200 GB.
Which configuration will provide the MOST cost-effective solution that meets these requirements?

  • A. Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000 reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option.
  • B. Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source with a direct query option.
  • C. Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data source and import the data into SPICE. Automatically refresh every 24 hours.
  • D. Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source and import the data into SPICE. Automatically refresh every 24 hours.

Answer: C
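A hedged boto3 sketch of the Athena-plus-SPICE setup in option C; the account ID, data source ID, workgroup, database, table, and column list are placeholders, and the 24-hour refresh schedule is configured on the resulting dataset afterwards.

    import boto3

    qs = boto3.client("quicksight")
    ACCOUNT = "123456789012"  # placeholder account ID

    qs.create_data_source(
        AwsAccountId=ACCOUNT,
        DataSourceId="athena-sales",
        Name="Athena sales",
        Type="ATHENA",
        DataSourceParameters={"AthenaParameters": {"WorkGroup": "primary"}},
    )

    # Import the Athena table into SPICE so reader queries do not hit Athena directly.
    qs.create_data_set(
        AwsAccountId=ACCOUNT,
        DataSetId="sales-spice",
        Name="Sales (SPICE)",
        ImportMode="SPICE",
        PhysicalTableMap={
            "sales": {
                "RelationalTable": {
                    "DataSourceArn": f"arn:aws:quicksight:us-east-1:{ACCOUNT}:datasource/athena-sales",
                    "Schema": "sales_db",   # placeholder Glue database
                    "Name": "store_sales",  # placeholder table
                    "InputColumns": [{"Name": "sales_volume", "Type": "DECIMAL"}],
                }
            }
        },
    )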

NEW QUESTION 4
A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning.
Which actions should the data analyst take?

  • A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.
  • B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.
  • C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.
  • D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.

Answer: B
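A small boto3 sketch of option B; the job name and capacity value are placeholders, and the right MaxCapacity should come from the DPU usage shown by the job metrics.

    import boto3

    glue = boto3.client("glue")

    # Re-run the job with metrics enabled and a higher DPU ceiling.
    glue.start_job_run(
        JobName="cleanse-200gb-dataset",         # placeholder job name
        Arguments={"--enable-metrics": "true"},  # surfaces DPU/executor metrics in CloudWatch
        MaxCapacity=20.0,                        # raise based on the profiled metrics
    )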

NEW QUESTION 5
A team of data scientists plans to analyze market trend data for their company’s new investment strategy. The trend data comes from five different data sources in large volumes. The team wants to utilize Amazon Kinesis to support their use case. The team uses SQL-like queries to analyze trends and wants to send notifications based on certain significant patterns in the trends. Additionally, the data scientists want to save the data to Amazon S3 for archival and historical re-processing, and use AWS managed services wherever possible. The team wants to implement the lowest-cost solution.
Which solution meets these requirements?

  • A. Publish data to one Kinesis data stream. Deploy a custom application using the Kinesis Client Library (KCL) for analyzing trends, and send notifications using Amazon SNS. Configure Kinesis Data Firehose on the Kinesis data stream to persist data to an S3 bucket.
  • B. Publish data to one Kinesis data stream. Deploy Kinesis Data Analytics to the stream for analyzing trends, and configure an AWS Lambda function as an output to send notifications using Amazon SNS. Configure Kinesis Data Firehose on the Kinesis data stream to persist data to an S3 bucket.
  • C. Publish data to two Kinesis data streams. Deploy Kinesis Data Analytics to the first stream for analyzing trends, and configure an AWS Lambda function as an output to send notifications using Amazon SNS. Configure Kinesis Data Firehose on the second Kinesis data stream to persist data to an S3 bucket.
  • D. Publish data to two Kinesis data streams. Deploy a custom application using the Kinesis Client Library (KCL) to the first stream for analyzing trends, and send notifications using Amazon SNS. Configure Kinesis Data Firehose on the second Kinesis data stream to persist data to an S3 bucket.

Answer: B
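For the archival half of option B, a hedged boto3 sketch of a Firehose delivery stream that reads the single Kinesis data stream and persists raw data to S3; the stream name, ARNs, roles, and bucket are placeholders.

    import boto3

    firehose = boto3.client("firehose")

    # Attach a Firehose delivery stream to the existing Kinesis data stream so raw
    # trend data is archived to S3 while Kinesis Data Analytics reads the same stream.
    firehose.create_delivery_stream(
        DeliveryStreamName="trend-archive",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/market-trends",  # placeholder
            "RoleARN": "arn:aws:iam::123456789012:role/FirehoseReadKinesisRole",                # placeholder
        },
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/FirehoseWriteS3Role",  # placeholder
            "BucketARN": "arn:aws:s3:::example-trend-archive",                # placeholder
            "Prefix": "raw/",
        },
    )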

NEW QUESTION 6
A mortgage company has a microservice for accepting payments. This microservice uses the Amazon DynamoDB encryption client with AWS KMS managed keys to encrypt the sensitive data before writing the data to DynamoDB. The finance team should be able to load this data into Amazon Redshift and aggregate the values within the sensitive fields. The Amazon Redshift cluster is shared with other data analysts from different business units.
Which steps should a data analyst take to accomplish this task efficiently and securely?

  • A. Create an AWS Lambda function to process the DynamoDB stream. Decrypt the sensitive data using the same KMS key. Save the output to a restricted S3 bucket for the finance team. Create a finance table in Amazon Redshift that is accessible to the finance team only. Use the COPY command to load the data from Amazon S3 to the finance table.
  • B. Create an AWS Lambda function to process the DynamoDB stream. Save the output to a restricted S3 bucket for the finance team. Create a finance table in Amazon Redshift that is accessible to the finance team only. Use the COPY command with the IAM role that has access to the KMS key to load the data from S3 to the finance table.
  • C. Create an Amazon EMR cluster with an EMR_EC2_DefaultRole role that has access to the KMS key. Create Apache Hive tables that reference the data stored in DynamoDB and the finance table in Amazon Redshift. In Hive, select the data from DynamoDB and then insert the output to the finance table in Amazon Redshift.
  • D. Create an Amazon EMR cluster. Create Apache Hive tables that reference the data stored in DynamoDB. Insert the output to the restricted Amazon S3 bucket for the finance team. Use the COPY command with the IAM role that has access to the KMS key to load the data from Amazon S3 to the finance table in Amazon Redshift.

Answer: B
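A sketch of the COPY step from option B, issued through the Redshift Data API. The cluster, database, user, table, bucket, and role names are placeholders; the IAM role referenced by COPY must be attached to the cluster and allowed to use the KMS key that encrypted the data.

    import boto3

    rsd = boto3.client("redshift-data")

    copy_sql = """
    COPY finance.payments
    FROM 's3://example-finance-bucket/payments/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFinanceCopyRole'
    FORMAT AS JSON 'auto';
    """

    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",  # placeholder
        Database="finance",                     # placeholder
        DbUser="finance_etl",                   # placeholder
        Sql=copy_sql,
    )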

NEW QUESTION 7
A large university has adopted a strategic goal of increasing diversity among enrolled students. The data analytics team is creating a dashboard with data visualizations to enable stakeholders to view historical trends. All access must be authenticated using Microsoft Active Directory. All data in transit and at rest must be encrypted.
Which solution meets these requirements?

  • A. Amazon QuickSight Standard edition configured to perform identity federation using SAML 2.0 and the default encryption settings.
  • B. Amazon QuickSight Enterprise edition configured to perform identity federation using SAML 2.0 and the default encryption settings.
  • C. Amazon QuickSight Standard edition using AD Connector to authenticate using Active Directory. Configure Amazon QuickSight to use customer-provided keys imported into AWS KMS.
  • D. Amazon QuickSight Enterprise edition using AD Connector to authenticate using Active Directory. Configure Amazon QuickSight to use customer-provided keys imported into AWS KMS.

Answer: D

NEW QUESTION 8
A manufacturing company has been collecting IoT sensor data from devices on its factory floor for a year and is storing the data in Amazon Redshift for daily analysis. A data analyst has determined that, at an expected ingestion rate of about 2 TB per day, the cluster will be undersized in less than 4 months. A long-term solution is needed. The data analyst has indicated that most queries only reference the most recent 13 months of data, yet there are also quarterly reports that need to query all the data generated from the past 7 years. The chief technology officer (CTO) is concerned about the costs, administrative effort, and performance of a long-term solution.
Which solution should the data analyst use to meet these requirements?

  • A. Create a daily job in AWS Glue to UNLOAD records older than 13 months to Amazon S3 and delete those records from Amazon Redshift. Create an external table in Amazon Redshift to point to the S3 location. Use Amazon Redshift Spectrum to join to data that is older than 13 months.
  • B. Take a snapshot of the Amazon Redshift cluster. Restore the cluster to a new cluster using dense storage nodes with additional storage capacity.
  • C. Execute a CREATE TABLE AS SELECT (CTAS) statement to move records that are older than 13 months to quarterly partitioned data in Amazon Redshift Spectrum backed by Amazon S3.
  • D. Unload all the tables in Amazon Redshift to an Amazon S3 bucket using S3 Intelligent-Tiering. Use AWS Glue to crawl the S3 bucket location to create external tables in an AWS Glue Data Catalog. Create an Amazon EMR cluster using Auto Scaling for any daily analytics needs, and use Amazon Athena for the quarterly reports, with both using the same AWS Glue Data Catalog.

Answer: A
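To make the mechanics of option A concrete, a hedged sketch of the daily archival SQL run through the Redshift Data API; the table, timestamp column, bucket, external schema, and role names are all placeholders.

    import boto3

    rsd = boto3.client("redshift-data")
    CLUSTER, DATABASE, DB_USER = "iot-cluster", "sensors", "etl_user"  # placeholders
    ROLE = "arn:aws:iam::123456789012:role/RedshiftSpectrumRole"       # placeholder

    statements = [
        # Offload rows older than 13 months to S3 as Parquet.
        f"""UNLOAD ('SELECT * FROM sensor_readings
                     WHERE reading_ts < DATEADD(month, -13, GETDATE())')
            TO 's3://example-iot-archive/sensor_readings/'
            IAM_ROLE '{ROLE}' FORMAT AS PARQUET;""",
        # Drop the offloaded rows from the cluster.
        "DELETE FROM sensor_readings WHERE reading_ts < DATEADD(month, -13, GETDATE());",
        # Expose the archive through Redshift Spectrum via an external schema.
        f"""CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
            FROM DATA CATALOG DATABASE 'iot_archive'
            IAM_ROLE '{ROLE}' CREATE EXTERNAL DATABASE IF NOT EXISTS;""",
    ]

    # The Data API is asynchronous; a real job would wait on describe_statement
    # between steps instead of firing them back to back.
    for sql in statements:
        rsd.execute_statement(ClusterIdentifier=CLUSTER, Database=DATABASE, DbUser=DB_USER, Sql=sql)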

NEW QUESTION 9
A company launched a service that produces millions of messages every day and uses Amazon Kinesis Data Streams as the streaming service.
The company uses the Kinesis SDK to write data to Kinesis Data Streams. A few months after launch, a data analyst found that write performance is significantly reduced. The data analyst investigated the metrics and determined that Kinesis is throttling the write requests. The data analyst wants to address this issue without significant changes to the architecture.
Which actions should the data analyst take to resolve this issue? (Choose two.)

  • A. Increase the Kinesis Data Streams retention period to reduce throttling.
  • B. Replace the Kinesis API-based data ingestion mechanism with Kinesis Agent.
  • C. Increase the number of shards in the stream using the UpdateShardCount API.
  • D. Choose partition keys in a way that results in a uniform record distribution across shards.
  • E. Customize the application code to include retry logic to improve performance.

Answer: CD

Explanation:
https://aws.amazon.com/blogs/big-data/under-the-hood-scaling-your-kinesis-data-streams/
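A brief boto3 sketch of the resharding side of the answer; the stream name and target count are placeholders, and the partition-key change happens in the producer code.

    import uuid

    import boto3

    kinesis = boto3.client("kinesis")
    STREAM = "service-messages"  # placeholder stream name

    # Double the shard count to raise the stream's write throughput ceiling.
    current = kinesis.describe_stream_summary(StreamName=STREAM)["StreamDescriptionSummary"]["OpenShardCount"]
    kinesis.update_shard_count(StreamName=STREAM, TargetShardCount=current * 2, ScalingType="UNIFORM_SCALING")

    # On the producer side, a high-cardinality partition key spreads records evenly
    # across shards instead of hot-spotting a few of them.
    kinesis.put_record(StreamName=STREAM, Data=b'{"example": true}', PartitionKey=str(uuid.uuid4()))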

NEW QUESTION 10
A company has a data warehouse in Amazon Redshift that is approximately 500 TB in size. New data is imported every few hours and read-only queries are run throughout the day and evening. There is a particularly heavy load with no writes for several hours each morning on business days. During those hours, some queries are queued and take a long time to execute. The company needs to optimize query execution and avoid any downtime.
What is the MOST cost-effective solution?

  • A. Enable concurrency scaling in the workload management (WLM) queue.
  • B. Add more nodes using the AWS Management Console during peak hours. Set the distribution style to ALL.
  • C. Use elastic resize to quickly add nodes during peak times. Remove the nodes when they are not needed.
  • D. Use a snapshot, restore, and resize operation. Switch to the new target cluster.

Answer: A

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html
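A hedged sketch of turning on concurrency scaling through the WLM configuration; the parameter group name and queue layout below are placeholders, not a definitive configuration.

    import json

    import boto3

    redshift = boto3.client("redshift")

    # One user queue with concurrency scaling set to auto, plus a short query queue.
    wlm_config = [
        {"query_group": [], "user_group": [], "query_concurrency": 5, "concurrency_scaling": "auto"},
        {"short_query_queue": True},
    ]

    redshift.modify_cluster_parameter_group(
        ParameterGroupName="reporting-wlm",  # placeholder parameter group
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }],
    )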

NEW QUESTION 11
A media company is using Amazon QuickSight dashboards to visualize its national sales data. The dashboard is using a dataset with these fields: ID, date, time_zone, city, state, country, longitude, latitude, sales_volume, and number_of_items.
To modify ongoing campaigns, the company wants an interactive and intuitive visualization of which states across the country recorded a significantly lower sales volume compared to the national average.
Which addition to the company’s QuickSight dashboard will meet this requirement?

  • A. A geospatial color-coded chart of sales volume data across the country.
  • B. A pivot table of sales volume data summed up at the state level.
  • C. A drill-down layer for state-level sales volume data.
  • D. A drill through to other dashboards containing state-level sales volume data.

Answer: B

NEW QUESTION 12
An Amazon Redshift database contains sensitive user data. Logging is necessary to meet compliance requirements. The logs must contain database authentication attempts, connections, and disconnections. The logs must also contain each query run against the database and record which database user ran each query.
Which steps will create the required logs?

  • A. Enable Amazon Redshift Enhanced VPC Routing. Enable VPC Flow Logs to monitor traffic.
  • B. Allow access to the Amazon Redshift database using AWS IAM only. Log access using AWS CloudTrail.
  • C. Enable audit logging for Amazon Redshift using the AWS Management Console or the AWS CLI.
  • D. Enable and download audit reports from AWS Artifact.

Answer: C
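A short boto3 sketch of option C; the cluster, bucket, and parameter group names are placeholders. Note that recording which user ran each query also requires the enable_user_activity_logging database parameter.

    import boto3

    redshift = boto3.client("redshift")

    # Send connection and authentication logs to S3.
    redshift.enable_logging(
        ClusterIdentifier="user-data-cluster",     # placeholder
        BucketName="example-redshift-audit-logs",  # placeholder
        S3KeyPrefix="audit/",
    )

    # Per-query user activity logging is controlled by a cluster parameter.
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="user-data-params",  # placeholder
        Parameters=[{"ParameterName": "enable_user_activity_logging", "ParameterValue": "true"}],
    )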

NEW QUESTION 13
A marketing company is using Amazon EMR clusters for its workloads. The company manually installs third-party libraries on the clusters by logging in to the master nodes. A data analyst needs to create an automated solution to replace the manual process.
Which options can fulfill these requirements? (Choose two.)

  • A. Place the required installation scripts in Amazon S3 and execute them using custom bootstrap actions.
  • B. Place the required installation scripts in Amazon S3 and execute them through Apache Spark in Amazon EMR.
  • C. Install the required third-party libraries in the existing EMR master node. Create an AMI out of that master node and use that custom AMI to re-create the EMR cluster.
  • D. Use an Amazon DynamoDB table to store the list of required applications. Trigger an AWS Lambda function with DynamoDB Streams to install the software.
  • E. Launch an Amazon EC2 instance with Amazon Linux and install the required third-party libraries on the instance. Create an AMI and use that AMI to create the EMR cluster.

Answer: AE

Explanation:
https://aws.amazon.com/about-aws/whats-new/2017/07/amazon-emr-now-supports-launching-clusters-with-cust
https://docs.aws.amazon.com/de_de/emr/latest/ManagementGuide/emr-plan-bootstrap.html
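A minimal run_job_flow sketch for the bootstrap-action approach in option A; the release label, instance settings, script path, and roles are placeholders.

    import boto3

    emr = boto3.client("emr")

    emr.run_job_flow(
        Name="analytics-cluster",
        ReleaseLabel="emr-6.10.0",  # placeholder release
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        # The bootstrap action runs the S3-hosted install script on every node at launch,
        # replacing the manual installs on the master node.
        BootstrapActions=[{
            "Name": "install-third-party-libs",
            "ScriptBootstrapAction": {"Path": "s3://example-bootstrap-bucket/install_libs.sh"},
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )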

NEW QUESTION 14
A technology company is creating a dashboard that will visualize and analyze time-sensitive data. The data will come in through Amazon Kinesis Data Firehose with the buffer interval set to 60 seconds. The dashboard must support near-real-time data.
Which visualization solution will meet these requirements?

  • A. Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana dashboard using the data in Amazon ES with the desired analyses and visualizations.
  • B. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter notebook and carry out the desired analyses and visualizations.
  • C. Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to Amazon Redshift to create the desired analyses and visualizations.
  • D. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Use AWS Glue to catalog the data and Amazon Athena to query it. Connect Amazon QuickSight with SPICE to Athena to create the desired analyses and visualizations.

Answer: A
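A hedged sketch of the Firehose-to-Amazon ES delivery stream from option A; the domain, index, roles, and backup bucket are placeholders, and the 60-second buffering interval matches the question.

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="dashboard-metrics",
        DeliveryStreamType="DirectPut",
        ElasticsearchDestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/FirehoseToEsRole",               # placeholder
            "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/dashboard-metrics",  # placeholder
            "IndexName": "metrics",
            "IndexRotationPeriod": "OneDay",
            "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
            "S3BackupMode": "FailedDocumentsOnly",
            "S3Configuration": {
                "RoleARN": "arn:aws:iam::123456789012:role/FirehoseToS3Role",  # placeholder
                "BucketARN": "arn:aws:s3:::example-firehose-backup",           # placeholder
            },
        },
    )

Kibana then visualizes the metrics index directly, which is what keeps the dashboard near real time.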

NEW QUESTION 15
A regional energy company collects voltage data from sensors attached to buildings. To address any known dangerous conditions, the company wants to be alerted when a sequence of two voltage drops is detected within 10 minutes of a voltage spike at the same building. It is important to ensure that all messages are delivered as quickly as possible. The system must be fully managed and highly available. The company also needs a solution that will automatically scale up as it covers additional cities with this monitoring feature. The alerting system is subscribed to an Amazon SNS topic for remediation.
Which solution meets these requirements?

  • A. Create an Amazon Managed Streaming for Apache Kafka cluster to ingest the data, and use an Apache Spark Streaming with Apache Kafka consumer API in an automatically scaled Amazon EMR cluster to process the incoming data. Use the Spark Streaming application to detect the known event sequence and send the SNS message.
  • B. Create a REST-based web service using Amazon API Gateway in front of an AWS Lambda function. Create an Amazon RDS for PostgreSQL database with sufficient Provisioned IOPS (PIOPS). In the Lambda function, store incoming events in the RDS database and query the latest data to detect the known event sequence and send the SNS message.
  • C. Create an Amazon Kinesis Data Firehose delivery stream to capture the incoming sensor data. Use an AWS Lambda transformation function to detect the known event sequence and send the SNS message.
  • D. Create an Amazon Kinesis data stream to capture the incoming sensor data and create another stream for alert messages. Set up AWS Application Auto Scaling on both. Create a Kinesis Data Analytics for Java application to detect the known event sequence, and add a message to the message stream. Configure an AWS Lambda function to poll the message stream and publish to the SNS topic.

Answer: D
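A rough sketch of the Lambda end of option D, reading the alert stream and publishing to the SNS topic; the stream ARN, function name, and topic ARN are placeholders. In practice the "polling" is handled by a Lambda event source mapping on the alert stream.

    import base64
    import json

    import boto3

    lambda_client = boto3.client("lambda")
    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:voltage-alerts"  # placeholder

    # Wire the Lambda function to the alert stream; the mapping polls the shards for us.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/voltage-alert-messages",  # placeholder
        FunctionName="forward-voltage-alerts",  # placeholder
        StartingPosition="LATEST",
        BatchSize=100,
    )

    # Handler deployed as forward-voltage-alerts: republish each alert record to SNS.
    def handler(event, context):
        for record in event["Records"]:
            alert = json.loads(base64.b64decode(record["kinesis"]["data"]))
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(alert), Subject="Voltage anomaly detected")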

NEW QUESTION 16
A company has developed an Apache Hive script to batch process data stored in Amazon S3. The script needs to run once every day and store the output in Amazon S3. The company tested the script, and it completes within 30 minutes on a small local three-node cluster.
Which solution is the MOST cost-effective for scheduling and executing the script?

  • A. Create an AWS Lambda function to spin up an Amazon EMR cluster with a Hive execution step. Set KeepJobFlowAliveWhenNoSteps to false and disable the termination protection flag. Use Amazon CloudWatch Events to schedule the Lambda function to run daily.
  • B. Use the AWS Management Console to spin up an Amazon EMR cluster with Python, Hue, Hive, and Apache Oozie. Set the termination protection flag to true and use Spot Instances for the core nodes of the cluster. Configure an Oozie workflow in the cluster to invoke the Hive script daily.
  • C. Create an AWS Glue job with the Hive script to perform the batch operation. Configure the job to run once a day using a time-based schedule.
  • D. Use AWS Lambda layers and load the Hive runtime to AWS Lambda and copy the Hive script. Schedule the Lambda function to run daily by creating a workflow using AWS Step Functions.

Answer: C
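For the scheduling half of option C, a small boto3 sketch that attaches a daily time-based trigger to the Glue job; the trigger name, cron expression, and job name are placeholders, and the Hive logic itself would be ported into the Glue job's script.

    import boto3

    glue = boto3.client("glue")

    # Run the batch job once a day; Glue cron expressions use UTC.
    glue.create_trigger(
        Name="daily-batch-trigger",
        Type="SCHEDULED",
        Schedule="cron(0 3 * * ? *)",               # placeholder: 03:00 UTC every day
        Actions=[{"JobName": "daily-hive-batch"}],  # placeholder job name
        StartOnCreation=True,
    )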

NEW QUESTION 17
......

Recommend!! Get the Full DAS-C01 dumps in VCE and PDF From Allfreedumps.com, Welcome to Download: https://www.allfreedumps.com/DAS-C01-dumps.html (New 130 Q&As Version)