
Why You Need the 70-463 Training Kit PDF




Your success in the Microsoft 70-463 exam is our sole target, and we develop all our 70-463 exam braindumps in a way that facilitates the attainment of this target. Not only is our Microsoft 70-463 study material the best you can find, it is also the most detailed and the most up to date. Our practice exams for Microsoft SQL Server 70-463 are written to the highest standards of technical accuracy.

Q61. You are creating a Data Quality Services (DQS) solution. You must provide statistics on the accuracy of the data. 

You need to use DQS profiling to obtain the required statistics. 

Which DQS activity should you use? 

A. Cleansing 

B. Matching 

C. Knowledge Discovery 

D. Matching Policy 

Answer:


Q62. You are designing a SQL Server Integration Services (SSIS) package. The package moves order-related data to a staging table named Order. Every night the staging data is truncated and then all the recent orders from the online store database are inserted into the staging table. 

Your package must meet the following requirements: 

. If the truncate operation fails, the package execution must stop and report an error.

. If the Data Flow task that moves the data to the staging table fails, the entire refresh operation must be rolled back.

. For auditing purposes, a log entry must be entered in a SQL log table after each execution of the Data Flow task.

The TransactionOption property for the package is set to Required. 

You need to design the package to meet the requirements. 

How should you design the control flow for the package? (To answer, drag the appropriate setting from the list of settings to the correct location or locations in the answer area.) 

Answer: 
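
For the auditing requirement, the Execute SQL task that follows the Data Flow task would typically insert a row into the SQL log table. A minimal sketch, assuming a hypothetical dbo.PackageLog table; the ? markers are OLE DB parameter placeholders mapped to SSIS variables (for example System::PackageName) on the task's Parameter Mapping page:

-- Hypothetical audit table; name and columns are illustrative only.
CREATE TABLE dbo.PackageLog (
    LogID       int IDENTITY(1,1) PRIMARY KEY,
    PackageName nvarchar(260) NOT NULL,
    ExecutedAt  datetime2     NOT NULL DEFAULT SYSDATETIME(),
    RowsLoaded  int           NULL
);

-- Statement run by the logging Execute SQL task after the Data Flow task.
INSERT INTO dbo.PackageLog (PackageName, RowsLoaded)
VALUES (?, ?);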


Q63. You are building a SQL Server Integration Services (SSIS) package to load product data sourced from a SQL Azure database to a data warehouse. Before the product data is loaded, you create a batch record by using an Execute SQL task named Create Batch. After successfully loading the product data, you use another Execute SQL task named Set Batch Success to mark the batch as successful. 

You need to create and execute an Execute SQL task to mark the batch as failed if either the Create Batch or Load Products task fails. 

Which three steps should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.) 

Answer: 
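
For context, the three Execute SQL tasks in this pattern usually share a single batch-control table. A minimal sketch, assuming a hypothetical dbo.LoadBatch table; the failure-marking statement would run from the new Execute SQL task, reached through Failure precedence constraints from Create Batch and Load Products evaluated as a logical OR:

-- Hypothetical batch-control table; names are illustrative only.
CREATE TABLE dbo.LoadBatch (
    BatchID   int IDENTITY(1,1) PRIMARY KEY,
    StartedAt datetime2   NOT NULL DEFAULT SYSDATETIME(),
    Status    varchar(10) NOT NULL DEFAULT 'Running'  -- Running/Success/Failed
);

-- Create Batch task: insert the row and return its ID, captured into an
-- SSIS variable through the task's single-row result set.
INSERT INTO dbo.LoadBatch DEFAULT VALUES;
SELECT CAST(SCOPE_IDENTITY() AS int) AS BatchID;

-- Set Batch Success task (? maps to the BatchID variable).
UPDATE dbo.LoadBatch SET Status = 'Success' WHERE BatchID = ?;

-- Set Batch Failed task: the task this question asks you to add.
UPDATE dbo.LoadBatch SET Status = 'Failed' WHERE BatchID = ?;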


Q64. You develop a SQL Server Integration Services (SSIS) project by using the Project Deployment model. 

The project contains many packages. It is deployed on a server named Development1. The project will be deployed to several servers that run SQL Server 2012.

The project accepts one required parameter. The data type of the parameter is a string. 

A SQL Agent job is created that will call the master.dtsx package in the project. A job step is created for the SSIS package. 

The job must pass the value of an SSIS Environment Variable to the project parameter. The value of the Environment Variable must be configured differently on each server that runs SQL Server. The value of the Environment Variable must provide the server name to the project parameter. 

You need to configure SSIS on the Development1 server to pass the Environment Variable to the package. 

Which four actions should you perform in sequence by using SQL Server Management Studio? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.) 

Answer: 
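
Although the question asks for SSMS steps, the equivalent configuration can be expressed with the SSISDB catalog stored procedures, which makes each step explicit. A sketch only; the folder, project, environment, and parameter names are assumptions:

DECLARE @ref_id bigint;

-- 1. Create an environment in the project's folder.
EXEC SSISDB.catalog.create_environment
    @folder_name = N'ETL', @environment_name = N'ServerConfig';

-- 2. Add the environment variable holding this server's name.
EXEC SSISDB.catalog.create_environment_variable
    @folder_name = N'ETL', @environment_name = N'ServerConfig',
    @variable_name = N'ServerName', @data_type = N'String',
    @sensitive = 0, @value = N'Development1';

-- 3. Reference the environment from the project.
EXEC SSISDB.catalog.create_environment_reference
    @folder_name = N'ETL', @project_name = N'MyProject',
    @environment_name = N'ServerConfig',
    @reference_type = 'R', @reference_id = @ref_id OUTPUT;

-- 4. Bind the project parameter to the environment variable
-- (value_type 'R' = referenced rather than a literal value).
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20, @folder_name = N'ETL', @project_name = N'MyProject',
    @parameter_name = N'ServerNameParam',
    @parameter_value = N'ServerName', @value_type = 'R';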


Q65. You are designing a data warehouse for a software distribution business that stores sales by software title. It stores sales targets by software category. Software titles are classified into subcategories and categories. Each software title is included in only a single software subcategory, and each subcategory is included in only a single category. The data warehouse will be a data source for an Analysis Services cube. 

The data warehouse contains two fact tables: 

. factSales, used to record daily sales by software title 

. factTarget, used to record the monthly sales targets by software category 

Reports that show sales by software title, by category and subcategory, and against sales targets must be developed against the warehouse. 

You need to design the software title dimension. The solution should use as few tables as possible while supporting all the requirements. 

What should you do? 

A. Create three software tables, dimSoftware, dimSoftwareCategory, and dimSoftwareSubcategory and a fourth bridge table that joins software titles to their appropriate category and subcategory table records with foreign key constraints. Direct the cube developer to use key granularity attributes. 

B. Create three software tables, dimSoftware, dimSoftwareCategory, and dimSoftwareSubcategory. Connect factSales to all three tables and connect factTarget to dimSoftwareCategory with foreign key constraints. Direct the cube developer to use key granularity attributes. 

C. Create one table, dimSoftware, which contains Software Detail, Category, and Subcategory columns. Connect factSales to dimSoftware with a foreign key constraint. Direct the cube developer to use a non-key granularity attribute for factTarget. 

D. Create two tables, dimSoftware and dimSoftwareCategory. Connect factSales to dimSoftware and factTarget to dimSoftwareCategory with foreign key constraints. Direct the cube developer to use key granularity attributes. 

Answer:
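
To make the trade-off concrete, here is a minimal sketch of the single-table design described in option C; all column names are illustrative. factSales joins at the key (title) grain, while factTarget can only relate to the Category column, a non-key granularity attribute, so no foreign key is possible on that side:

-- Denormalized software dimension; names are illustrative only.
CREATE TABLE dbo.dimSoftware (
    SoftwareKey    int IDENTITY(1,1) PRIMARY KEY,
    SoftwareDetail nvarchar(100) NOT NULL,  -- individual software title
    Subcategory    nvarchar(50)  NOT NULL,
    Category       nvarchar(50)  NOT NULL
);

-- Daily sales relate at the title grain through the surrogate key.
ALTER TABLE dbo.factSales
    ADD CONSTRAINT FK_factSales_dimSoftware
    FOREIGN KEY (SoftwareKey) REFERENCES dbo.dimSoftware (SoftwareKey);

-- Monthly targets relate at the Category level only, so the cube would use
-- a non-key granularity attribute for factTarget instead of a foreign key.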


Q66. You are designing a package control flow. The package moves sales order data from a SQL Azure transactional database to an on-premises reporting database. The package will run several times a day, while new sales orders are being added to the transactional database. 

The current design of the package control flow is shown in the answer area. (Click the Exhibit button.) 

The Insert New Orders Data Flow task must meet the following requirements: 

Usage of the tempdb database should not be impacted. 

Concurrency should be maximized, while only reading committed transactions. 

If the task fails, only that task needs to be rolled back. 

You need to configure the Insert New Orders Data Flow task to meet the requirements. 

How should you configure the transaction properties? To answer, select the appropriate setting or settings in the answer area. 

Answer: 

Select “IsolationLevel as Chaos” 


Q67. You are designing an enterprise star schema that will consolidate data from three independent data marts. One of the data marts is hosted on SQL Azure. 

Most of the dimensions have the same structure and content. However, the geography dimension is slightly different in each data mart. 

You need to design a consolidated dimensional structure that will be easy to maintain while ensuring that all dimensional data from the three original solutions is represented. 

What should you do? 

A. Create a junk dimension for the geography dimension. 

B. Implement change data capture. 

C. Create a conformed dimension for the geography dimension. 

D. Create three geography dimensions. 

Answer:
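
For context, a conformed dimension is a single, agreed-upon dimension structure shared by every fact table and data mart that references it. A minimal sketch of what a consolidated geography dimension might look like; all names and columns are illustrative:

-- Conformed geography dimension consolidating the three marts;
-- names and columns are illustrative only.
CREATE TABLE dbo.DimGeography (
    GeographyKey  int IDENTITY(1,1) PRIMARY KEY,
    City          nvarchar(50) NOT NULL,
    StateProvince nvarchar(50) NOT NULL,
    CountryRegion nvarchar(50) NOT NULL,
    SourceSystem  varchar(20)  NOT NULL,  -- which mart the row came from
    SourceKey     int          NOT NULL   -- original key in that mart
);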


Q68. You are completing the installation of the Data Quality Server component of SQL Server Data Quality Services (DQS). 

You need to complete the post-installation configuration. 

What should you do? 

A. Run the DQSInstaller.exe command. 

B. Install the data providers that are used for data refresh. 

C. Install ADOMD.NET. 

D. Run the dbimpexp.exe command. 

Answer:

Explanation: References: http://msdn.microsoft.com/en-us/library/ff877917.aspx 

http://msdn.microsoft.com/en-us/library/gg492277.aspx 
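
As background, DQSInstaller.exe is run from the SQL Server instance's MSSQL\Binn folder and creates the Data Quality Server databases. One simple way to confirm the post-installation step completed is to check that those databases exist:

-- The Data Quality Server databases created by DQSInstaller.exe.
SELECT name
FROM sys.databases
WHERE name IN (N'DQS_MAIN', N'DQS_PROJECTS', N'DQS_STAGING_DATA');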


Q69. You develop a SQL Server Integration Services (SSIS) package that imports SQL Azure data into a data warehouse every night. 

The SQL Azure data contains many misspellings and variations of abbreviations. To import the data, a developer used the Fuzzy Lookup transformation to choose the closest-matching string from a reference table of allowed values. The number of rows in the reference table is very large. 

If no acceptable match is found, the Fuzzy Lookup transformation passes a null value. 

The current setting for the Fuzzy Lookup similarity threshold is 0.50. 

Many values are incorrectly matched. 

You need to ensure that more accurate matches are made by the Fuzzy Lookup transformation without degrading performance. 

What should you do? 

A. Change the Exhaustive property to True. 

B. Change the similarity threshold to 0.55. 

C. Change the similarity threshold to 0.40. 

D. Increase the maximum number of matches per lookup. 

Answer:

Explanation: 

http://msdn.microsoft.com/en-us/library/ms137786.aspx 


Q70. You are developing a SQL Server Integration Services (SSIS) package. 

The package uses a data flow task to source data from a SQL Server database for loading into a dimension table in a data warehouse. 

You need to create a separate data flow path for data that has been modified since it was last processed. 

Which data flow components should you use to identify modified data? (Each correct answer presents a complete solution. Choose all that apply.) 

A. Multicast 

B. Data Conversion 

C. Lookup 

D. Slowly Changing Dimension 

E. Aggregate 

Answer: A,C 

Explanation: 

A: Multicast is the transformation that distributes data sets to multiple outputs. 

C: Lookup is the transformation that looks up values in a reference table using an exact match. 

Note: SQL Server Integration Services provides three different types of data flow components: sources, transformations, and destinations. Sources extract data from data stores such as tables and views in relational databases, files, and Analysis Services databases. Transformations modify, summarize, and clean data. Destinations load data into data stores or create in-memory datasets. 

Incorrect: 

Not B: The Data Conversion transformation converts the data type of a column to a different data type. 

Not D: The Slowly Changing Dimension transformation configures the updating of a slowly changing dimension. 

Not E: The Aggregate transformation applies aggregate functions, such as Average, to column values and copies the results to the transformation output. Besides aggregate functions, the transformation provides the GROUP BY clause, which you can use to specify groups to aggregate across.
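
To illustrate the Lookup approach, the data flow implements a comparison equivalent to the following T-SQL; all table and column names are hypothetical:

-- staging.Product is the source extract; dbo.DimProduct is the dimension.
-- Rows with no dimension match are new (the Lookup's no-match output);
-- rows whose attributes differ have been modified and are routed down
-- the separate data flow path.
SELECT s.ProductCode, s.ProductName, s.ListPrice
FROM staging.Product AS s
LEFT JOIN dbo.DimProduct AS d
    ON d.ProductCode = s.ProductCode
WHERE d.ProductCode IS NULL
   OR d.ProductName <> s.ProductName
   OR d.ListPrice <> s.ListPrice;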