
70-776 Exam

Microsoft 70-776 Dumps Questions 2021




We provide 70-776 study guides that are the best for clearing the 70-776 test and for getting certified in Microsoft Perform Big Data Engineering on Microsoft Cloud Services (beta). The 70-776 exam dumps cover all the knowledge points of the real 70-776 exam. Crack your Microsoft 70-776 exam with the latest dumps, guaranteed!

Online 70-776 free questions and answers of the new version:

NEW QUESTION 1
You are using Cognitive capabilities in U-SQL to analyze images that contain different types of objects.
You need to identify which objects might be people.
Which two reference assemblies should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A. ExtPython
  • B. ImageCommon
  • C. ImageTagging
  • D. ExtR
  • E. FaceSdk

Answer: BC
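
Both assemblies are part of the U-SQL cognitive extensions for images. A minimal U-SQL sketch is shown below, assuming the cognitive extensions are installed in the Data Lake Analytics account; the /images/ input path, the output path, and the "person" tag check are illustrative assumptions, and the Tags column type follows the published example and may differ by extension version.

// Assumes the U-SQL cognitive extensions are installed in the account.
// The /images/ path and the "person" tag check are hypothetical.
REFERENCE ASSEMBLY ImageCommon;
REFERENCE ASSEMBLY ImageTagging;

@images =
    EXTRACT FileName string,
            ImgData byte[]
    FROM @"/images/{FileName}.jpg"
    USING new Cognition.Vision.ImageExtractor();

// Tag the objects in each image; rows whose Tags value includes "person"
// are candidates for images that contain people.
@objects =
    PROCESS @images
    PRODUCE FileName,
            NumObjects int,
            Tags string
    READONLY FileName
    USING new Cognition.Vision.ImageTagger();

@people =
    SELECT FileName, NumObjects, Tags
    FROM @objects
    WHERE Tags.Contains("person");

OUTPUT @people
TO "/output/possible-people.tsv"
USING Outputters.Tsv();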

NEW QUESTION 2
You have a Microsoft Azure SQL data warehouse that contains information about community events. An Azure Data Factory job writes an updated CSV file in Azure Blob storage to Community/{date}/events.csv daily.
You plan to consume a Twitter feed by using Azure Stream Analytics and to correlate the feed to the community events.
You plan to use Stream Analytics to retrieve the latest community events data and to correlate the data to the Twitter feed data.
You need to ensure that when updates to the community events data are written to the CSV files, the Stream Analytics job can access the latest community events data.
What should you configure?

  • A. an output that uses a blob storage sink and has a path pattern of Community/{date}
  • B. an output that uses an event hub sink and the CSV event serialization format
  • C. an input that uses a reference data source and has a path pattern of Community/{date}/events.csv
  • D. an input that uses a reference data source and has a path pattern of Community/{date}

Answer: C
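
A reference data input is joined to the streaming input with a plain JOIN (no temporal condition is required for reference data), and Stream Analytics picks up new blobs as new {date} paths appear. The query sketch below is illustrative only; the input aliases and column names are assumptions.

-- Assumed inputs: TwitterStream (stream) and CommunityEvents (reference data
-- backed by Community/{date}/events.csv). Column names are hypothetical.
SELECT
    t.Text,
    t.CreatedAt,
    e.EventName,
    e.EventDate
FROM TwitterStream t TIMESTAMP BY CreatedAt
JOIN CommunityEvents e
    ON t.Hashtag = e.EventHashtag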

NEW QUESTION 3
You have a Microsoft Azure SQL data warehouse that has 10 compute nodes.
You need to export 10 TB of data from a data warehouse table to several new flat files in Azure Blob storage. The solution must maximize the use of the available compute nodes.
What should you do?

  • A. Use the bcp utility.
  • B. Execute the CREATE EXTERNAL TABLE AS SELECT statement.
  • C. Create a Microsoft SQL Server Integration Services (SSIS) package that has a data flow task.
  • D. Create a Microsoft SQL Server Integration Services (SSIS) package that has an SSIS Azure Blob Storage task.

Answer: B
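
CREATE EXTERNAL TABLE AS SELECT (CETAS) exports through PolyBase, so all ten compute nodes write files to Blob storage in parallel, while bcp and SSIS funnel the data through a single connection. A hedged T-SQL sketch follows; the credential, data source, file format, and table names are assumptions.

-- All object names are hypothetical; a database-scoped credential for the
-- storage account is assumed to exist.
CREATE EXTERNAL DATA SOURCE ExportBlobStorage
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://export@mystorageaccount.blob.core.windows.net',
    CREDENTIAL = BlobStorageCredential
);

CREATE EXTERNAL FILE FORMAT PipeDelimitedText
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- Each compute node writes its share of the rows as separate files under /factsales/.
CREATE EXTERNAL TABLE dbo.FactSales_Export
WITH (
    LOCATION = '/factsales/',
    DATA_SOURCE = ExportBlobStorage,
    FILE_FORMAT = PipeDelimitedText
)
AS
SELECT * FROM dbo.FactSales;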

NEW QUESTION 4
HOTSPOT
You are designing a fact table that has 100 million rows and 1,800 partitions. The partitions are defined based on a column named OrderDayKey. The fact table will contain:
- Data from the last five years
- A clustered columnstore index
- A column named YearMonthKey that stores the year and the month
Multiple transformations will be performed on the fact table during the loading process. The fact table will be hash distributed on a column named OrderId.
You plan to load the data to a staging table and to perform transformations on the staging table. You will then load the data from the staging table to the final fact table.
You need to design a solution to load the data to the fact table. The solution must minimize how long it takes to perform the following tasks:
- Load the staging table.
- Transfer the data from the staging table to the fact table.
- Remove data that is older than five years.
- Query the data in the fact table.
How should you configure the tables? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit
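
The answer exhibit is not reproduced here. As general context for the requirement to remove data older than five years from a table partitioned on OrderDayKey, switching out a partition is a metadata operation and is far faster than deleting the rows; the hedged sketch below uses hypothetical table names and a hypothetical partition number.

-- Hypothetical names; the target table must have a matching distribution,
-- partition scheme, and index definition.
ALTER TABLE dbo.FactOrders SWITCH PARTITION 1 TO dbo.FactOrders_Expired PARTITION 1;
TRUNCATE TABLE dbo.FactOrders_Expired;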

NEW QUESTION 5
You have a Microsoft Azure Data Lake Analytics service.
You need to provide a user with the ability to monitor Data Lake Analytics jobs. The solution must minimize the number of permissions assigned to the user.
Which role should you assign to the user?

  • A. Reader
  • B. Owner
  • C. Contributor
  • D. Data Lake Analytics Developer

Answer: A

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-manage-use-portal

NEW QUESTION 6
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are monitoring user queries to a Microsoft Azure SQL data warehouse that has six compute nodes.
You discover that compute node utilization is uneven. The rows_processed column from sys.dm_pdw_dms_workers shows a significant variation in the number of rows being moved among the distributions for the same table for the same query.
You need to ensure that the load is distributed evenly across the compute nodes.
Solution: You add a clustered columnstore index.
Does this meet the goal?

  • A. Yes
  • B. No

Answer: B
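
Uneven rows_processed values per distribution point to data skew from the table's distribution column rather than from its index, so adding an index does not balance the work. A hedged T-SQL sketch of redistributing the table on a higher-cardinality column follows; all object and column names are assumptions.

-- Recreate the table hash-distributed on a high-cardinality, evenly distributed
-- column (names are hypothetical), then swap it in.
CREATE TABLE dbo.FactSales_New
WITH (
    DISTRIBUTION = HASH(OrderId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM dbo.FactSales;

RENAME OBJECT dbo.FactSales TO FactSales_Old;
RENAME OBJECT dbo.FactSales_New TO FactSales;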

NEW QUESTION 7
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always.
End of repeated scenario.
You need to configure Azure Data Factory to connect to the on-premises SQL Server instance. What should you do first?

  • A. Deploy an Azure virtual network gateway.
  • B. Create a dataset in Azure Data Factory.
  • C. From Azure Data Factory, define a data gateway.
  • D. Deploy an Azure local network gateway.

Answer: C

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-move-data-between-onprem-and-cloud

NEW QUESTION 8
DRAG DROP
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. The storage solution for the archived data must minimize costs.
End of repeated scenario.
Which three actions should you perform in sequence to migrate the on-premises data warehouse to Azure SQL Data Warehouse? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-sql-server-with-polybase
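
The answer exhibit is not reproduced here. As context for the PolyBase loading pattern in the referenced article (export the tables to flat files, upload the files to Azure Blob storage, then load them with PolyBase), the final load step typically resembles the hedged sketch below; the external data source, file format, and column definitions are assumptions.

-- Assumes the exported files for Table1 were uploaded to Blob storage and that
-- MigrationBlobStorage and PipeDelimitedText already exist; columns are hypothetical.
CREATE EXTERNAL TABLE ext.Table1
(
    Id INT,
    Name NVARCHAR(100)
)
WITH (
    LOCATION = '/localdw/table1/',
    DATA_SOURCE = MigrationBlobStorage,
    FILE_FORMAT = PipeDelimitedText
);

CREATE TABLE dbo.Table1
WITH (DISTRIBUTION = HASH(Id), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM ext.Table1;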

NEW QUESTION 9
DRAG DROP
You have an on-premises Microsoft SQL Server instance named Instance1 that contains a database named DB1.
You have a Data Management Gateway named Gateway1.
You plan to create a linked service in Azure Data Factory for DB1.
You need to connect to DB1 by using standard SQL Server Authentication. You must use a username of User1 and a password of P@$$w0rd89.
How should you complete the JSON code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://github.com/uglide/azure-content/blob/master/articles/data-factory/data-factory-move-data-between-onprem-and-cloud.md
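
The JSON exhibit is not reproduced here. Based on the referenced article, an Azure Data Factory (v1) linked service for an on-premises SQL Server that uses SQL Server Authentication through Gateway1 could look like the hedged sketch below; the linked service name is an assumption, while Instance1, DB1, User1, P@$$w0rd89, and Gateway1 come from the question.

{
  "name": "SqlServerLinkedService",
  "properties": {
    "type": "OnPremisesSqlServer",
    "typeProperties": {
      "connectionString": "Data Source=Instance1;Initial Catalog=DB1;Integrated Security=False;User ID=User1;Password=P@$$w0rd89;",
      "gatewayName": "Gateway1"
    }
  }
}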

NEW QUESTION 10
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must be available for reading always. The storage solution for the archived data must minimize costs.
End of repeated scenario.
You need to configure an activity to move data from blob storage to AzureDW. What should you create?

  • A. a pipeline
  • B. a linked service
  • C. an automation runbook
  • D. a dataset

Answer: A

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-blob-connector
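
A Copy activity from Blob storage to Azure SQL Data Warehouse is defined inside a pipeline, which is why a pipeline is what must be created. A hedged Data Factory (v1) sketch follows; the pipeline, activity, and dataset names, the schedule, and the allowPolyBase setting are assumptions.

{
  "name": "CopyBlobToAzureDW",
  "properties": {
    "activities": [
      {
        "name": "BlobToAzureDW",
        "type": "Copy",
        "inputs": [ { "name": "BlobInputDataset" } ],
        "outputs": [ { "name": "AzureDWOutputDataset" } ],
        "typeProperties": {
          "source": { "type": "BlobSource" },
          "sink": { "type": "SqlDWSink", "allowPolyBase": true }
        }
      }
    ],
    "start": "2018-01-01T00:00:00Z",
    "end": "2018-01-02T00:00:00Z"
  }
}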

NEW QUESTION 11
You have sensor devices that report data to Microsoft Azure Stream Analytics. Each sensor reports data several times per second.
You need to create a live dashboard in Microsoft Power BI that shows the performance of the sensor devices. The solution must minimize lag when visualizing the data.
Which function should you use for the time-series data element?

  • A. LAG
  • B. SlidingWindow
  • C. System.TimeStamp
  • D. TumblingWindow

Answer: D
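
A tumbling window emits one aggregate per fixed, non-overlapping interval, which keeps the Power BI output compact and low-lag. A hedged Stream Analytics query sketch follows; the input and output names, the columns, and the window size are assumptions.

-- Assumed input SensorInput (DeviceId, Reading, EventTime) and output PowerBIOutput.
SELECT
    DeviceId,
    AVG(Reading) AS AvgReading,
    System.Timestamp AS WindowEnd
INTO PowerBIOutput
FROM SensorInput TIMESTAMP BY EventTime
GROUP BY DeviceId, TumblingWindow(second, 5)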

NEW QUESTION 12
You have an on-premises Microsoft SQL Server instance.
You plan to copy a table from the instance to a Microsoft Azure Storage account.
You need to ensure that you can copy the table by using Azure Data Factory.
Which service should you deploy?

  • A. an on-premises data gateway
  • B. Azure Application Gateway
  • C. Data Management Gateway
  • D. a virtual network gateway

Answer: C

NEW QUESTION 13
You are designing a solution that will use Microsoft Azure Data Lake Store.
You need to recommend a solution to ensure that the storage service is available if a regional outage occurs. The solution must minimize costs.
What should you recommend?

  • A. Create two Data Lake Store accounts and copy the data by using Azure Data Factory.
  • B. Create one Data Lake Store account that uses a monthly commitment package.
  • C. Create one read-access geo-redundant storage (RA-GRS) account and configure a Recovery Services vault.
  • D. Create one Data Lake Store account and create an Azure Resource Manager template that redeploys the services to a different region.

Answer: A

NEW QUESTION 14
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are monitoring user queries to a Microsoft Azure SQL data warehouse that has six compute nodes.
You discover that compute node utilization is uneven. The rows_processed column from sys.dm_pdw_dms_workers shows a significant variation in the number of rows being moved among the distributions for the same table for the same query.
You need to ensure that the load is distributed evenly across the compute nodes.
Solution: You add a nonclustered columnstore index.
Does this meet the goal?

  • A. Yes
  • B. No

Answer: B

NEW QUESTION 15
You plan to deploy a Microsoft Azure virtual machine that will host a data warehouse. The data warehouse will contain a 10-TB database.
You need to provide the fastest read and write times for the database.
Which disk configuration should you use?

  • A. storage pools with mirrored disks
  • B. RAID 5 volumes
  • C. spanned volumes
  • D. striped volumes
  • E. storage pools with striped disks

Answer: E

NEW QUESTION 16
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are troubleshooting a slice in Microsoft Azure Data Factory for a dataset that has been in a waiting state for the last three days. The dataset should have been ready two days ago.
The dataset is being produced outside the scope of Azure Data Factory. The dataset is defined by using the following JSON code.
70-776 dumps exhibit
You need to modify the JSON code to ensure that the dataset is marked as ready whenever there is data in the data store.
Solution: You change the interval to 24.
Does this meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-create-datasets
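
The JSON exhibit is not reproduced here. Based on the referenced article, a dataset produced outside Azure Data Factory is marked with "external": true (optionally with an externalData policy) rather than by changing the availability interval; the hedged sketch below uses assumed names, paths, and retry values.

{
  "name": "ExternalBlobDataset",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "StorageLinkedService",
    "typeProperties": {
      "folderPath": "input/sales/"
    },
    "external": true,
    "availability": {
      "frequency": "Hour",
      "interval": 1
    },
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}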

NEW QUESTION 17
HOTSPOT
You plan to implement Microsoft Azure Stream Analytics jobs to track the data from IoT devices. You will have the following two jobs:
- Job1 will contain a query that has one non-partitioned step.
- Job2 will contain a query that has two steps. One of the steps is partitioned.
What is the maximum number of streaming units that will be consumed per job? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-scale-jobs
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-streaming-unit-consumption

NEW QUESTION 18
You ingest data into a Microsoft Azure event hub.
You need to export the data from the event hub to Azure Storage and to prepare the data for batch processing tasks in Azure Data Lake Analytics.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A. Run the Avro extractor from a U-SQL script.
  • B. Create an Azure Storage account.
  • C. Add a shared access policy.
  • D. Enable Event Hubs Archive.
  • E. Run the CSV extractor from a U-SQL script.

Answer: BD

100% Valid and Newest Version 70-776 Questions & Answers shared by Certleader, Get Full Dumps HERE: https://www.certleader.com/70-776-dumps.html (New 91 Q&As)