Passing the Microsoft 70-776 exam on short notice without help is difficult. Come to us and find the most up-to-date, accurate, and guaranteed 70-776 study guides. Our 70-776 exam dumps will give you a surprising result.
Free online 70-776 questions and answers (new version):
NEW QUESTION 1
DRAG DROP
You plan to use U-SQL to run federated queries to join data from a Microsoft Azure SQL data warehouse, an Azure SQL database, and a Microsoft SQL Server database on an Azure virtual machine.
You need to ensure that you can use a U-SQL query that joins all three data sources. The solution must ensure that the three data sources appear as tables in U-SQL without having to move the data from external sources.
Which three statements should you execute in sequence? To answer, move the appropriate statements from the list of statements to the answer area and arrange them in the correct order.
Answer:
Explanation:
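The statement list from the exhibit is not reproduced here, but the general pattern for U-SQL federated queries is: create a credential for the remote database, create a data source, then create an external table over the remote table. A minimal sketch; the data source, credential, database, and table names are all hypothetical:

```sql
// Hedged sketch: object names are hypothetical. A catalog credential
// for the remote database is assumed to already exist.
CREATE DATA SOURCE IF NOT EXISTS ExtAzureSqlDb
FROM AZURESQLDB
WITH
(
    PROVIDER_STRING = "Database=SalesDb;Encrypt=True",
    CREDENTIAL = SalesDbCred,
    REMOTABLE_TYPES = (bool, byte, short, int, long, decimal,
                       float, double, string, DateTime)
);

// Expose the remote table to U-SQL without moving the data.
CREATE EXTERNAL TABLE IF NOT EXISTS dbo.Customers
(
    CustomerId int,
    Name string
)
FROM ExtAzureSqlDb LOCATION "dbo.Customers";
```

The same pattern repeats per source, using AZURESQLDW for the SQL data warehouse and SQLSERVER for the SQL Server instance on the virtual machine.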
NEW QUESTION 2
DRAG DROP
You need to create a linked service in Microsoft Azure Data Factory. The linked service must use an Azure Database for MySQL table named Customers.
How should you complete the JSON snippet? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
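The exhibit is not reproduced here, but a Data Factory (v1-style JSON) linked service for Azure Database for MySQL generally looks like the sketch below; the name and all connection-string values are placeholders:

```json
{
  "name": "AzureMySqlLinkedService",
  "properties": {
    "type": "AzureMySql",
    "typeProperties": {
      "connectionString": "Server=<server>.mysql.database.azure.com;Port=3306;Database=<database>;UID=<user>;PWD=<password>"
    }
  }
}
```

Note that the Customers table itself is referenced by the dataset that uses this linked service, not by the linked service definition.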
NEW QUESTION 3
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are monitoring user queries to a Microsoft Azure SQL data warehouse that has six compute nodes.
You discover that compute node utilization is uneven. The rows_processed column from sys.dm_pdw_dms_workers shows a significant variation in the number of rows being moved among the distributions for the same table for the same query.
You need to ensure that the load is distributed evenly across the compute nodes. Solution: You change the table to use a column that is not skewed for hash distribution. Does this meet the goal?
Answer: A
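Changing the hash-distribution column requires re-creating the table, typically with CTAS followed by a rename. A sketch, assuming a fact table named dbo.FactSales and a well-distributed column named CustomerId (both hypothetical):

```sql
-- Re-create the table hash-distributed on a non-skewed column.
CREATE TABLE dbo.FactSales_New
WITH
(
    DISTRIBUTION = HASH(CustomerId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.FactSales;

-- Swap the tables by renaming.
RENAME OBJECT dbo.FactSales TO FactSales_Old;
RENAME OBJECT dbo.FactSales_New TO FactSales;
```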
NEW QUESTION 4
You have an extract, transformation, and load (ETL) process for a Microsoft Azure SQL data warehouse.
You run the following statements to create the login and user for an account that will run the nightly data load for the data warehouse.
CREATE LOGIN LoaderLogin WITH PASSWORD = 'mypassword';
CREATE USER LoaderUser FOR LOGIN LoaderLogin;
You connect to the data warehouse.
You need to ensure that the user can access the highest resource class. Which statement should you execute?
Answer: B
Explanation:
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-develop-concurrency
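The answer choices are not reproduced here, but the highest static resource class in Azure SQL Data Warehouse is xlargerc, and membership is granted with sp_addrolemember:

```sql
-- Grant LoaderUser the largest static resource class.
EXEC sp_addrolemember 'xlargerc', 'LoaderUser';
```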
NEW QUESTION 5
You need to define an input dataset for a Microsoft Azure Data Factory pipeline.
Which properties should you include when you define the dataset?
Answer: A
Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-create-datasets
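A Data Factory v1 input dataset typically declares type, linkedServiceName, typeProperties, availability, and (for data produced outside a pipeline) external. A sketch with placeholder names and paths:

```json
{
  "name": "InputDataset",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "folderPath": "input/",
      "format": { "type": "TextFormat" }
    },
    "external": true,
    "availability": { "frequency": "Hour", "interval": 1 }
  }
}
```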
NEW QUESTION 6
You manage an on-premises data warehouse that uses Microsoft SQL Server. The data warehouse contains 100 TB of data. The data is partitioned by month. One TB of data is added to the data warehouse each month.
You create a Microsoft Azure SQL data warehouse and copy the on-premises data to the data warehouse.
You need to implement a process to replicate the on-premises data warehouse to the Azure SQL data warehouse. The solution must support daily incremental updates and must provide error handling. What should you use?
Answer: C
NEW QUESTION 7
HOTSPOT
You have a Microsoft Azure Data Lake Analytics service.
You have a file named Employee.tsv that contains data on employees. Employee.tsv contains seven columns named EmpId, Start, FirstName, LastName, Age, Department, and Title.
You need to create a Data Lake Analytics job to transform Employee.tsv, define a schema for the data, and output the data to a CSV file. The output data must contain only employees who are in the sales department. The Age column must allow NULL.
How should you complete the U-SQL code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-get-started
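A sketch of the complete job; the input and output paths are hypothetical. Note that int? makes Age nullable, and the filter uses C# equality semantics (==):

```sql
// Define the schema while extracting; int? allows NULL in Age.
@employees =
    EXTRACT EmpId int,
            Start DateTime,
            FirstName string,
            LastName string,
            Age int?,
            Department string,
            Title string
    FROM "/data/Employee.tsv"
    USING Extractors.Tsv();

// Keep only employees in the sales department.
@sales =
    SELECT *
    FROM @employees
    WHERE Department == "Sales";

OUTPUT @sales
TO "/output/SalesEmployees.csv"
USING Outputters.Csv();
```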
NEW QUESTION 8
HOTSPOT
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. The storage solution for the archived data must minimize costs.
End of repeated scenario.
How should you configure the storage to archive the source data? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
NEW QUESTION 9
You are developing an application that uses Microsoft Azure Stream Analytics.
You have data structures that are defined dynamically.
You want to enable consistency between the logical methods used by stream processing and batch processing.
You need to ensure that the data can be integrated by using consistent data points. What should you use to process the data?
Answer: D
NEW QUESTION 10
DRAG DROP
You have a Microsoft Azure SQL data warehouse named DW1. Data is loaded to DW1 once daily at 01:00.
A user accidentally deletes data from a fact table in DW1 at 09:00.
You need to recover the lost data. The solution must prevent the need to change any connection strings and must minimize downtime.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Explanation:
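The action list is not reproduced here, but the usual pattern is: restore DW1 from an automatic restore point taken before 09:00 to a new database on the same server, then swap the database names so existing connection strings keep working. The rename step, run against the master database (the restored database name is hypothetical):

```sql
-- Run in master after restoring to DW1_restored on the same server.
ALTER DATABASE DW1 MODIFY NAME = DW1_old;
ALTER DATABASE DW1_restored MODIFY NAME = DW1;
```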
NEW QUESTION 11
DRAG DROP
You use Microsoft Azure Stream Analytics to analyze data from an Azure event hub in real time and send the output to a table named Table1 in an Azure SQL database. Table1 has three columns named Date, EventID, and User.
You need to prevent duplicate data from being stored in the database.
How should you complete the statement? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
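The value list is not reproduced here, but a common deduplication pattern in Stream Analytics is to group on all three columns over a window, so each distinct (Date, EventID, User) combination is emitted at most once per window. A sketch; the input name and the one-minute window size are assumptions:

```sql
-- Emit each distinct row at most once per one-minute window.
SELECT [Date], EventID, [User]
INTO Table1
FROM Input TIMESTAMP BY [Date]
GROUP BY [Date], EventID, [User], TumblingWindow(minute, 1)
```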
NEW QUESTION 12
You have a Microsoft Azure SQL data warehouse. The following statements are used to define file formats in the data warehouse.
You have an external PolyBase table named file_factPowerMeasurement that uses the FileFormat_ORC file format.
You need to change file_factPowerMeasurement to use the FileFormat_PARQUET file format. Which two statements should you execute? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Answer: AE
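External tables cannot be altered to point at a different file format, so the usual approach is to drop and re-create the table. A sketch; the column list, location, and data source name are all hypothetical:

```sql
DROP EXTERNAL TABLE file_factPowerMeasurement;

CREATE EXTERNAL TABLE file_factPowerMeasurement
(
    MeterId int,              -- hypothetical columns
    MeasuredOn datetime2,
    PowerKw decimal(18, 4)
)
WITH
(
    LOCATION = '/power/',           -- hypothetical
    DATA_SOURCE = MyDataSource,     -- hypothetical
    FILE_FORMAT = FileFormat_PARQUET
);
```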
NEW QUESTION 13
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
Answer: A
NEW QUESTION 14
You have a Microsoft Azure Data Lake Analytics service.
You need to store a list of multiple-character string values in a single column and to use a cross apply explode expression to output the values.
Which type of data should you use in a U-SQL query?
Answer: B
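SQL.ARRAY&lt;string&gt; holds a list of string values in a single column, and CROSS APPLY EXPLODE flattens it to one row per element. A sketch with a hypothetical @input rowset containing a semicolon-delimited Tags column:

```sql
// Build an array column from a delimited string.
@data =
    SELECT new SQL.ARRAY<string>(Tags.Split(';')) AS TagArray
    FROM @input;

// One output row per array element.
@exploded =
    SELECT Tag
    FROM @data
    CROSS APPLY EXPLODE(TagArray) AS T(Tag);
```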
NEW QUESTION 15
DRAG DROP
You are monitoring a Microsoft Azure SQL data warehouse. You need to get the following information:
- The top 10 longest running queries
- The distributed query plan for a specific query
Which dynamic management view should you use for each piece of information? To answer, drag the appropriate dynamic management views to the correct pieces of information. Each dynamic management view may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor
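The answer layout is not reproduced here, but the two dynamic management views usually paired with these tasks are sys.dm_pdw_exec_requests and sys.dm_pdw_request_steps. Sketches; the request ID is a placeholder:

```sql
-- Top 10 longest running queries.
SELECT TOP 10 *
FROM sys.dm_pdw_exec_requests
ORDER BY total_elapsed_time DESC;

-- Distributed query plan for a specific query.
SELECT *
FROM sys.dm_pdw_request_steps
WHERE request_id = 'QID12345'   -- placeholder request ID
ORDER BY step_index;
```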
NEW QUESTION 16
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. End of repeated scenario.
You need to connect AzureDF to the storage account. What should you create?
Answer: C
Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-blob-connector
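In Data Factory v1, connectivity to a Blob storage account is defined as an Azure Storage linked service. A sketch with placeholder account values:

```json
{
  "name": "StorageLinkedService",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    }
  }
}
```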
NEW QUESTION 17
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a table named Table1 that contains 3 billion rows. Table1 contains data from the last 36 months.
At the end of every month, the oldest month of data is removed based on a column named DateTime.
You need to minimize how long it takes to remove the oldest month of data. Solution: You implement round robin for table distribution.
Does this meet the goal?
Answer: B
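Round-robin distribution does not help here; the usual way to make the monthly delete near-instant is to partition Table1 by month on the DateTime column and switch the oldest partition out, which is a metadata-only operation. A sketch; the staging table and partition number are hypothetical:

```sql
-- Switch the oldest month into an empty table with an identical
-- schema and partition scheme, then discard it.
ALTER TABLE dbo.Table1 SWITCH PARTITION 1 TO dbo.Table1_Staging PARTITION 1;
TRUNCATE TABLE dbo.Table1_Staging;
```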
NEW QUESTION 18
DRAG DROP
You need to create a dataset in Microsoft Azure Data Factory that meets the following requirements:
How should you complete the JSON code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
References:
https://github.com/aelij/azure-content/blob/master/articles/data-factory/data-factory-create-pipelines.md
Thanks for reading the newest 70-776 exam dumps! We recommend you to try the PREMIUM Surepassexam 70-776 dumps in VCE and PDF here: https://www.surepassexam.com/70-776-exam-dumps.html (91 Q&As Dumps)