getcertified4sure.com

70-775 Exam

Top Certified 70-775 Answers and Tips!




Proper study for the up-to-date Microsoft Perform Data Engineering on Microsoft Azure HDInsight (beta) certification begins with Microsoft 70-775 preparation products, which are designed to deliver printable 70-775 questions and help you pass the 70-775 test on your first attempt. Try the free 70-775 demo right now.

Q1. DRAG DROP

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

Start of Repeated Scenario:

You are planning a big data infrastructure by using an Apache Spark cluster in Azure HDInsight. The cluster has 24 processor cores and 512 GB of memory.

The architecture of the infrastructure is shown in the exhibit:

 

The architecture will be used by the following users:

* Support analysts who run applications that will use REST to submit Spark jobs.

* Business analysts who use JDBC and ODBC client applications from a real-time view. The business analysts run monitoring queries to access aggregated results for 15 minutes. The results will be referenced by subsequent queries.

* Data analysts who publish notebooks drawn from batch layer, serving layer, and speed layer queries. All of the notebooks must support native interpreters for data sources that are batch processed. The serving layer queries are written in Apache Hive and must support multiple sessions. Unique GUIDs are used across the data sources, which allows the data analysts to use Spark SQL.

The data sources in the batch layer share a common storage container. The following data sources are used:

* Hive for sales data

* Apache HBase for operations data

* HBase for logistics data by using a single region server.

End of Repeated Scenario.

The business analysts need to monitor the sales data. The queries must be faster and more interactive than the batch layer queries.

You need to create a new infrastructure to support the queries. The solution must ensure that you can tune the cache policies of the queries.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area.

 

Answer:

 


Q2. You have an Azure HDInsight cluster.

You need to store data in a file format that maximizes compression and increases read performance.

Which type of file format should you use?

A. ORC

B. Apache Parquet

C. Apache Avro

D. Apache Sequence

Answer: A

Explanation: https://docs.microsoft.com/en-us/azure/data-factory/data-factory-supported-file-and-compression-formats
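
For illustration only (not part of the original exam item), a minimal PySpark sketch of creating a Hive table stored as ORC is shown below; the table name, columns, and compression codec are hypothetical.

from pyspark.sql import SparkSession

# Minimal sketch; sales_orc and its columns are hypothetical.
spark = SparkSession.builder.appName("orc-example").enableHiveSupport().getOrCreate()

# ORC is a columnar format that compresses well and supports predicate pushdown,
# which improves read performance for analytical queries.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_orc (id INT, amount DOUBLE)
    STORED AS ORC
    TBLPROPERTIES ('orc.compress' = 'ZLIB')
""")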


Q3. HOTSPOT

You install the Microsoft Hive ODBC Driver on a computer that runs Windows 10 and has the 64-bit version of Microsoft Office 2021 installed.

You deploy a new Apache Interactive Hive cluster in Azure HDInsight. The cluster is hosted at myHDICluster.azurehdinsight.net and contains a Hive table named hivesampletable that has 200,000 rows.

You plan to use HiveQL exclusively for the queries. The queries will return from 6,000 to 10,000 rows 90 percent of the time.

You need to configure a data source to ensure that you can use Microsoft Excel to access the data. The solution must ensure that the Hive queries execute as quickly as possible.

How should you configure the Advanced Options from the Microsoft Hive ODBC Driver DSN Setup dialog box? To answer, select the appropriate options in the answer area.

NOTE:

Each correct selection is worth one point.

 

Answer:

 


Q4. You have an Apache Hadoop cluster in Azure HDInsight that has a head node and three data nodes. You have a MapReduce job.

You receive a notification that a data node failed.

You need to identify which component caused the failure. Which tool should you use?

A. JobTracker

B. TaskTracker

C. ResourceManager

D. ApplicationMaster

Answer: C
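
For illustration only, one way to check node status is to query the YARN ResourceManager REST API, sketched below in Python; the host name is hypothetical and the port assumes the default ResourceManager web service address.

import requests

# Hypothetical ResourceManager address; adjust host and port for the cluster.
RM_URL = "http://headnodehost:8088/ws/v1/cluster/nodes"

# The ResourceManager tracks every node manager, so a failed data node should
# appear here with a state such as UNHEALTHY or LOST.
nodes = requests.get(RM_URL, timeout=10).json()["nodes"]["node"]
for node in nodes:
    print(node["nodeHostName"], node["state"], node.get("healthReport", ""))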


Q5. You have an Apache Hive table that contains one billion rows.

You plan to use queries that will filter the data by using the WHERE clause. The values of the columns will be known only while the data loads into a Hive table.

You need to decrease the query runtime. What should you configure?

A. static partitioning

B. bucket sampling

C. parallel execution

D. dynamic partitioning

Answer: A
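
For illustration only, a minimal PySpark sketch of a partitioned Hive table and a static-partition load follows; the table, column, and partition names (and the staging table) are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-example").enableHiveSupport().getOrCreate()

# Partitioning a large Hive table lets WHERE clauses on the partition column
# prune whole partitions instead of scanning every row.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events (id BIGINT, description STRING)
    PARTITIONED BY (event_date STRING)
    STORED AS ORC
""")

# Static partitioning: the partition value is stated explicitly in the INSERT.
spark.sql("""
    INSERT INTO events PARTITION (event_date = '2017-01-01')
    SELECT id, description FROM staging_events WHERE event_date = '2017-01-01'
""")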


Q6. You have an Apache Spark cluster in Azure HDInsight. You plan to join a large table and a lookup table.

You need to minimize data transfers during the join operation. What should you do?

A. Use the reduceByKey function

B. Use a Broadcast variable.

C. Repartition the data.

D. Use the DISK_ONLY storage level.

Answer: B
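
For illustration only, a minimal PySpark sketch follows. It uses the DataFrame broadcast hint, which plays the same role as a broadcast variable for a join: the small lookup table is shipped once to every executor, so the large table is joined locally without being shuffled. The paths and join key are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join").getOrCreate()

large_df = spark.read.parquet("/data/large_table")    # hypothetical path
lookup_df = spark.read.parquet("/data/lookup_table")  # hypothetical path

# Broadcasting the small table avoids shuffling the large table across the network.
joined = large_df.join(broadcast(lookup_df), on="key")
joined.show()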


Q7. You have an Apache Hadoop cluster in Azure HDInsight that has a head node and three data nodes. You have a MapReduce job.

You receive a notification that a data node failed.

You need to identify which component caused the failure. Which tool should you use?

A. JobTracker

B. TaskTracker

C. ResourceManager

D. ApplicationMaster

Answer: C


Q8. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

Start of Repeated Scenario:

You have an initial dataset that contains the crime data from major cities.

You plan to build training models from the training data. You plan to automate the process of adding more data to the training models and of training the models by using the additional data, including data that is collected in near real time. The system will be used to analyze event data gathered from many different sources, such as Internet of Things (IoT) devices, live video surveillance, and traffic activities, and to generate predictions of an increased crime risk at a particular time and place.

You have an incoming data stream from Twitter and an incoming data stream from Facebook, which are event-based only, rather than time-based. You also have a time interval stream every 10 seconds.

The data is in a key/value pair format. The value field represents a number that defines how many times a hashtag occurs within a Facebook post or how many times a tweet that contains a specific hashtag is retweeted.

You must use the appropriate data storage, stream analytics techniques, and Azure HDInsight cluster types for the various tasks associated with the processing pipeline.

End of Repeated Scenario.

You are designing the real-time portion of the input stream processing. The input will be a continuous stream of data and each record will be processed one at a time. The data will come from an Apache Kafka producer.

You need to identify which HDInsight cluster to use for the final processing of the input data. This will be used to generate continuous statistics and real-time analytics. The latency to process each record must be less than one millisecond and tasks must be performed in parallel.

Which type of cluster should you identify?

A. Apache Storm

B. Apache Hadoop

C. Apache HBase

D. Apache Spark

Answer: D
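
For illustration only, a minimal Spark Structured Streaming sketch that consumes a Kafka topic follows; the broker address and topic name are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Read a continuous stream of records from a Kafka producer.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "hashtag-counts")
          .load())

# Kafka delivers key/value pairs as binary columns; cast them before aggregating.
counts = (stream.selectExpr("CAST(key AS STRING) AS tag", "CAST(value AS STRING) AS n")
          .groupBy("tag")
          .count())

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()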


Q9. You have an Azure HDInsight cluster.

You need to store data in a file format that maximizes compression and increases read performance.

Which type of file format should you use?

A. ORC

B. Apache Parquet

C. Apache Avro

D. Apache Sequence

Answer: A

Explanation: https://docs.microsoft.com/en-us/azure/data-factory/data-factory-supported-file-and-compression-formats


Q10. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

Start of Repeated Scenario:

You are planning a big data infrastructure by using an Apache Spark cluster in Azure HDInsight. The cluster has 24 processor cores and 512 GB of memory.

The architecture of the infrastructure is shown in the exhibit:

 

The architecture will be used by the following users:

* Support analysts who run applications that will use REST to submit Spark jobs.

* Business analysts who use JDBC and ODBC client applications from a real-time view. The business analysts run monitoring queries to access aggregated results for 15 minutes. The results will be referenced by subsequent queries.

* Data analysts who publish notebooks drawn from batch layer, serving layer, and speed layer queries. All of the notebooks must support native interpreters for data sources that are batch processed. The serving layer queries are written in Apache Hive and must support multiple sessions. Unique GUIDs are used across the data sources, which allows the data analysts to use Spark SQL.

The data sources in the batch layer share a common storage container. The following data sources are used:

* Hive for sales data

* Apache HBase for operations data

* HBase for logistics data by using a single region server.

End of Repeated Scenario.

The business analysts report that they experience performance issues when they run the monitoring queries.

You troubleshoot the performance issues and discover that the intermediate tables generated when the analysts run the queries put pressure on the Java Virtual Machine (JVM) garbage collection for each job.

Which configuration setting should you modify to alleviate the performance issues?

A. spark.sql.inMemoryColumnarStorage.batchSize

B. spark.sql.broadcastTimeout

C. spark.sql.files.openCostInBytes

D. spark.sql.shuffle.partitions

Answer: D
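
For illustration only, a minimal PySpark sketch of tuning spark.sql.shuffle.partitions follows; the numeric values are purely illustrative starting points, not recommendations.

from pyspark.sql import SparkSession

# spark.sql.shuffle.partitions controls how many partitions Spark SQL uses when
# shuffling data for joins and aggregations; tuning it changes the size of the
# intermediate results each task holds, and therefore the JVM garbage-collection
# pressure per job.
spark = (SparkSession.builder
         .appName("shuffle-tuning")
         .config("spark.sql.shuffle.partitions", "48")   # illustrative value
         .getOrCreate())

# The setting can also be adjusted at runtime for subsequent queries.
spark.conf.set("spark.sql.shuffle.partitions", "96")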