
Top Tips Of 70-762 braindumps




Want to know about Testking 70-762 exam practice test features? Want to learn more about the Microsoft Developing SQL Databases (beta) certification experience? Study high-quality Microsoft 70-762 answers to the most up-to-date 70-762 questions at Testking. Get a guaranteed pass on your first attempt at the Microsoft 70-762 (Developing SQL Databases (beta)) exam.

Q1. You are experiencing performance issues with the database server.

You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.

What should you create?

A. a System Monitor report

B. a sys.dm_exec_query_stats dynamic management view query

C. a sys.dm_exec_session_wait_stats dynamic management view query

D. an Activity Monitor session in Microsoft SQL Server Management Studio

Answer: C

Explanation:

sys.dm_exec_session_wait_stats returns information about all the waits encountered by threads that executed for each session. You can use this view to diagnose performance issues with the SQL Server session and also with specific queries and batches.

Note: SQL Server wait stats are, at their highest conceptual level, grouped into two broad categories: signal waits and resource waits. A signal wait is accumulated by processes running on SQL Server which are waiting for a CPU to become available (so called because the process has “signaled” that it is ready for processing). A resource wait is accumulated by processes running on SQL Server which are waiting for a specific resource to become available, such as waiting for the release of a lock on a specific record.
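For illustration, a minimal diagnostic query against this DMV might look like the following; the session id is a hypothetical example, not from the question:

-- Inspect waits for one session; 52 is a hypothetical session id.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,                              -- portion spent waiting for CPU
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = 52
ORDER BY wait_time_ms DESC;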


Q2. Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.

You must monitor and optimize the SQL Server to maximize throughput, response time, and overall SQL performance.

You need to identify previous situations where a modification has prevented queries from selecting data in tables.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.

D. Create a sys.dm_os_memory_objects query.

E. Create a sp_configure ‘max server memory’ query.

F. Create a SQL Profiler trace.

G. Create a sys.dm_os_wait_stats query.

H. Create an Extended Event.

Answer: G

Explanation:

sys.dm_os_wait_stats returns information about all the waits encountered by threads that executed. You can use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and batches.
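As a sketch, a query such as the following surfaces the lock waits that prevent queries from selecting data (the LCK_M_% pattern matches SQL Server's lock wait types, e.g. LCK_M_S, LCK_M_X):

-- Aggregate lock waits accumulated since the last restart (or stats reset).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC;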


Q3. Note: This question is part of a series of questions that present the same scenario. Each question in this series contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created by using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The AccountNumber column uniquely identifies each account. The ProductCode column has 100 different values. The values are evenly distributed in the table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries. You need to create one or more indexes for the table.

Solution: You run the following Transact-SQL statement:

CREATE NONCLUSTERED INDEX IX_Account_ProductCode ON Account(ProductCode);

Does the solution meet the goal?

A. Yes

B. No

Answer: A

Explanation:

References: https://msdn.microsoft.com/en-za/library/ms189280.aspx
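The question's SELECT statements are not reproduced above, but assuming they filter on ProductCode, this index lets the optimizer replace the table scan with an index seek. A hypothetical query shape:

-- Invented sample query and value; an index seek on IX_Account_ProductCode
-- (plus key lookups for any other columns) avoids the full table scan.
SELECT AccountNumber
FROM Account
WHERE ProductCode = 'PC042';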


Q4. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to create a stored procedure that updates the Customer, CustomerInfo, OrderHeader, and OrderDetails tables in order.

You need to ensure that the stored procedure:

Solution: You create a stored procedure that includes the following Transact-SQL segment:

Does the solution meet the goal?

A. Yes

B. No

Answer: B

Explanation:

References: http://stackoverflow.com/questions/11444923/stored-procedure-to-update-multiple-tables
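The question's Transact-SQL segment is not reproduced above. For orientation only, a common pattern for updating several tables in order as one atomic unit looks like the following sketch; the procedure name, parameters, and columns are invented, since the real definitions are not shown:

CREATE PROCEDURE dbo.usp_UpdateCustomerOrder   -- hypothetical name and parameters
    @CustomerId INT,
    @Status     NVARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;                         -- abort and roll back on any error
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE Customer     SET ModifiedDate = SYSDATETIME() WHERE CustomerId = @CustomerId;
        UPDATE CustomerInfo SET ModifiedDate = SYSDATETIME() WHERE CustomerId = @CustomerId;
        UPDATE OrderHeader  SET Status = @Status             WHERE CustomerId = @CustomerId;
        UPDATE OrderDetails SET Status = @Status
        WHERE OrderId IN (SELECT OrderId FROM OrderHeader WHERE CustomerId = @CustomerId);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        THROW;                                 -- re-raise the original error
    END CATCH
END;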


Q5. Note: This question is part of a series of questions that present the same scenario. Each question in this series contains a unique solution. Determine whether the solution meets the stated goals.

You are developing a new application that uses a stored procedure. The stored procedure inserts thousands of records as a single batch into the Employees table.

Users report that the application response time has worsened since the stored procedure was updated. You examine disk-related performance counters for the Microsoft SQL Server instance and observe several high values that indicate a disk performance issue. You examine wait statistics and observe an unusually high WRITELOG value.

You need to improve the application response time.

Solution: You replace the stored procedure with a user-defined function.

Does the solution meet the goal?

A. Yes

B. No

Answer: B

Explanation:

References: https://msdn.microsoft.com/en-us/library/ms345075.aspx
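A user-defined function does not change how the inserts are written to the transaction log, so the WRITELOG bottleneck remains. As a hedged illustration, the wait can be confirmed with:

-- Check cumulative log-write waits for the instance.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';

Mitigations typically target log I/O itself, for example faster log storage or delayed durability.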


Q6. You use Microsoft SQL Server Profiler to evaluate a query named Query1. The Profiler report indicates the following issues:

- At each level of the query plan, a low total number of rows is processed.

- The query uses many operations. This results in a high overall cost for the query.

You need to identify the information that will be useful for the optimizer.

What should you do?

A. Start a SQL Server Profiler trace for the event class Auto Stats in the Performance event category.

B. Create one Extended Events session with the sqlserver.missing_column_statistics event added.

C. Start a SQL Server Profiler trace for the event class Soft Warnings in the Errors and Warnings event category.

D. Create one Extended Events session with the sqlserver.missing_join_predicate event added.

Answer: D

Explanation:

The Missing Join Predicate event class indicates that a query is being executed that has no join predicate. This could result in a long-running query.
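A minimal Extended Events session for this event might be defined as follows; the session and target file names are invented:

-- Capture queries that run without a join predicate.
CREATE EVENT SESSION MissingJoinPredicate ON SERVER
ADD EVENT sqlserver.missing_join_predicate
ADD TARGET package0.event_file (SET filename = N'missing_join_predicate.xel');

ALTER EVENT SESSION MissingJoinPredicate ON SERVER STATE = START;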


Q7. Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product, and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table

2. Existing records in the ProductReview table must not be validated with the Product table.

3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview table.

4. Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL statements for these tables are not available.

You must modify the Orders table to meet the following requirements:

1. Create new rows in the table without granting INSERT permissions to the table.

2. Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:

- a constraint on the SaleID column that allows the field to be used as a record identifier

- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table

- a constraint on the CategoryID column that allows one row with a null value in the column

- a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following requirements:

- The table must hold 10 million unique sales orders.

- The table must use checkpoints to minimize I/O operations and must not use transaction logging.

- Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations must be optimized.
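As an aside, those SalesOrder requirements map to a memory-optimized table with SCHEMA_ONLY durability (no transaction logging, data loss acceptable) and a nonclustered hash index for exact-equality lookups. A hedged sketch, with an invented column list:

CREATE TABLE dbo.SalesOrder
(
    SalesOrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),  -- sized for 10 million orders
    OrderDate DATETIME2 NOT NULL                                       -- hypothetical column
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);   -- schema persisted; data is not logged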

You need to enable referential integrity for the ProductReview table.

How should you complete the relevant Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.

Select two alternatives.

A. For the first selection select: WITH CHECK

B. For the first selection select: WITH NOCHECK

C. For the second selection select: ON DELETE NO ACTION ON UPDATE CASCADE

D. For the second selection select: ON DELETE CASCADE ON UPDATE CASCADE

E. For the second selection select: ON DELETE NO ACTION ON UPDATE NO ACTION

F. For the second selection select: ON DELETE CASCADE ON UPDATE NO ACTION

Answer: B,C

Explanation:

B: We should use WITH NOCHECK because existing records in the ProductReview table must not be validated with the Product table.

C: Deletes should not be allowed, so we use ON DELETE NO ACTION. Updates should propagate, so we use ON UPDATE CASCADE.

NO ACTION: the Database Engine raises an error, and the update action on the row in the parent table is rolled back.

CASCADE: corresponding rows are updated in the referencing table when that row is updated in the parent table.

Note: ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }

Specifies what action happens to rows in the table that is altered, if those rows have a referential relationship and the referenced row is deleted from the parent table. The default is NO ACTION.

ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }

Specifies what action happens to rows in the table altered when those rows have a referential relationship and the referenced row is updated in the parent table. The default is NO ACTION.

Note: You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table

2. Existing records in the ProductReview table must not be validated with the Product table.

3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview table.

4. Changes to records in the Product table must propagate to the ProductReview table.

References:
https://msdn.microsoft.com/en-us/library/ms190273.aspx
https://msdn.microsoft.com/en-us/library/ms188066.aspx
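Combined, the selected options correspond to a statement like the following; the constraint name is invented:

ALTER TABLE ProductReview WITH NOCHECK           -- do not validate existing rows
ADD CONSTRAINT FK_ProductReview_Product
    FOREIGN KEY (ProductID) REFERENCES Product (ProductID)
    ON DELETE NO ACTION                          -- block deletes of referenced products
    ON UPDATE CASCADE;                           -- propagate ProductID changes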


Q8. Note: This question is part of a series of questions that present the same scenario. Each question in this series contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The AccountNumber column uniquely identifies each account. The ProductCode column has 100 different values. The values are evenly distributed in the table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries. You need to create one or more indexes for the table.

Solution: You run the following Transact-SQL statement:

CREATE CLUSTERED INDEX PK_Account ON Account(ProductCode);

Does the solution meet the goal?

A. Yes

B. No

Answer: B

Explanation:

A clustered index on the ProductCode column alone does not avoid table scans for queries that filter on the unique AccountNumber column; an index on AccountNumber is needed as well.

References: https://msdn.microsoft.com/en-us/library/ms190457.aspx
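A hedged sketch of an indexing strategy consistent with this explanation (index names invented, since the original statements are not shown):

-- Cluster on the unique identifier, then index the selective filter column separately.
CREATE CLUSTERED INDEX IX_Account_AccountNumber ON Account (AccountNumber);
CREATE NONCLUSTERED INDEX IX_Account_ProductCode ON Account (ProductCode);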


Q9. DRAG DROP

You are analyzing the performance of a database environment.

Applications that access the database are experiencing locks that are held for a long time. You are experiencing isolation phenomena such as dirty reads, nonrepeatable reads, and phantom reads.

You need to identify the impact of specific transaction isolation levels on the concurrency and consistency of data.

What are the consistency and concurrency implications of each transaction isolation level? To answer, drag the appropriate isolation levels to the correct locations. Each isolation level may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Answer:

Explanation:

Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data changed by concurrent transaction(s).

Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note: T1 still takes transaction-duration locks for any data modified.

Cons: Data is not guaranteed to be transactionally consistent.

Read Committed: A transaction T1 executing under this isolation level can only access committed data.

Pros: Good compromise between concurrency and consistency.

Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.

Repeatable Read: A transaction T1 executing under this isolation level can only access committed data with an additional guarantee that any data read cannot change (i.e. it is repeatable) for the duration of the transaction.

Pros: Higher data consistency.

Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the concurrency. It does not protect against phantom rows.

Serializable: A transaction T1 executing under this isolation level provides the highest data consistency, including elimination of phantoms, but at the cost of reduced concurrency. It prevents phantoms by taking a range lock (or a table-level lock if a range lock can't be acquired, i.e. when there is no index on the predicate column) for the duration of the transaction.

Pros: Full data consistency, including phantom protection.

Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the concurrency.

References: https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/
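To see how an isolation level is applied in practice, a session-level sketch (the SalesPersonID predicate is invented for illustration):

-- Applies to the current session; REPEATABLE READ holds S locks until commit.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT SalesYTD FROM SalesPerson WHERE SalesPersonID = 1;
    -- Re-reading the same rows here returns the same values,
    -- but newly inserted (phantom) rows are still possible.
COMMIT TRANSACTION;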


Q10. Note: The question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in the series. Information and details provided in a question apply only to that question.

You have a database named DB1. The database does not use a memory-optimized filegroup. The database contains a table named Table1. The table must support the following workloads:

You need to add the most efficient index to support the new OLTP workload, while not deteriorating the existing Reporting query performance.

What should you do?

A. Create a clustered index on the table.

B. Create a nonclustered index on the table.

C. Create a nonclustered filtered index on the table.

D. Create a clustered columnstore index on the table.

E. Create a nonclustered columnstore index on the table.

F. Create a hash index on the table.

Answer: C

Explanation:

A filtered index is an optimized nonclustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.

References: https://technet.microsoft.com/en-us/library/cc280372(v=sql.105).aspx
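Since Table1's definition is not shown, a minimal filtered-index sketch with invented column names:

-- Index only the subset the OLTP workload touches, e.g. unprocessed rows.
CREATE NONCLUSTERED INDEX IX_Table1_Unprocessed
ON Table1 (OrderId)
WHERE IsProcessed = 0;   -- filter predicate limits the index to a portion of rows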