Q1. DRAG DROP
You have two database tables. Table1 is a partitioned table and Table2 is a nonpartitioned table.
Users report that queries take a long time to complete. You monitor queries by using Microsoft SQL Server Profiler. You observe lock escalation for Table1 and Table2.
You need to allow escalation of Table1 locks to the partition level and prevent all lock escalation for Table2.
Which Transact-SQL statement should you run for each table? To answer, drag the appropriate Transact-SQL statements to the correct tables. Each command may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
Since SQL Server 2008 you can also control how SQL Server performs lock escalation through the ALTER TABLE statement and the LOCK_ESCALATION property. There are three different options available: TABLE, AUTO, and DISABLE.
Box 1: Table1, Auto
The default option is TABLE, which means that SQL Server *always* performs lock escalation to the table level, even when the table is partitioned. If your table is partitioned and you want partition-level lock escalation (because you have tested your data access pattern and it does not cause deadlocks), you can change the option to AUTO. AUTO means that lock escalation is performed to the partition level if the table is partitioned, and otherwise to the table level.
Box 2: Table2, DISABLE
With the option DISABLE you can completely disable lock escalation for that specific table.
For partitioned tables, use the LOCK_ESCALATION option of ALTER TABLE to escalate locks to the HoBT level instead of the table, or to disable lock escalation.
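As a minimal sketch of the two answers (the table names Table1 and Table2 come from the question), the ALTER TABLE statements would be:

ALTER TABLE Table1 SET (LOCK_ESCALATION = AUTO);    -- escalate to the partition level on the partitioned table
ALTER TABLE Table2 SET (LOCK_ESCALATION = DISABLE); -- disable lock escalation on the nonpartitioned table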
References:http://www.sqlpassion.at/archive/2014/02/25/lock-escalations/
Q2. DRAG DROP
You are analyzing the performance of a database environment.
Applications that access the database are experiencing locks that are held for a large amount of time. You are experiencing isolation phenomena such as dirty, nonrepeatable and phantom reads.
You need to identify the impact of specific transaction isolation levels on the concurrency and consistency of data.
What are the consistency and concurrency implications of each transaction isolation level? To answer, drag the appropriate isolation levels to the correct locations. Each isolation level may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data changed by concurrent transaction(s).
Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note, T1 still takes transaction duration locks for any data modified.
Cons: Data is not guaranteed to be transactionally consistent.
Read Committed: A transaction T1 executing under this isolation level can only access committed data.
Pros: Good compromise between concurrency and consistency.
Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.
Repeatable Read: A transaction T1 executing under this isolation level can only access committed data with an additional guarantee that any data read cannot change (i.e. it is repeatable) for the duration of the transaction.
Pros: Higher data consistency.
Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the concurrency. It does not protect against phantom rows.
Serializable: A transaction T1 executing under this isolation level provides the highest data consistency, including elimination of phantoms, but at the cost of reduced concurrency. It prevents phantoms by taking a range lock, or a table-level lock if a range lock cannot be acquired (i.e. there is no index on the predicate column), for the duration of the transaction.
Pros: Full data consistency including phantom protection.
Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the concurrency.
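As a simple illustration of how an isolation level is chosen for a session, the sketch below sets REPEATABLE READ before starting a transaction; the table and column names are only examples:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    -- Shared locks taken by this read are held until COMMIT, so the rows read here
    -- cannot change within the transaction (phantom rows can still appear).
    SELECT EmployeeID, LastName FROM Employees WHERE DepartmentID = 10;
COMMIT TRANSACTION;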
References:https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/
Q3. Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You have a database that contains a table named Employees. The table stores information about the employees of your company.
You need to implement and enforce the following business rules:
Solution: You implement cascading referential integrity constraints on the table.
Does the solution meet the goal?
A. Yes
B. No
Answer: A
Explanation:
References: https://technet.microsoft.com/en-us/library/ms186973(v=sql.105).aspx
Q4. DRAG DROP
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in the series.
You have a database named Sales that contains the following database tables: Customer, Order, and Products. The Products table and the Order table are shown in the following diagram.
The Customer table includes a column that stores the date for the last order that the customer placed.
You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000 records. Storage requirements for the Leads table must be minimized.
You need to begin to modify the table design to adhere to third normal form.
Which column should you remove for each table? To answer, drag the appropriate column names to the correct locations. Each column name may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
In the Products table the SupplierName is dependent on the SupplierID, not on the ProductID.
In the Orders table the ProductName is dependent on the ProductID, not on the OrderID.
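A minimal sketch of the change, assuming the table and column names used in this explanation (the full table definitions are only shown in the exhibit):

ALTER TABLE Products DROP COLUMN SupplierName; -- SupplierName depends on SupplierID, so it belongs in a Suppliers table
ALTER TABLE Orders DROP COLUMN ProductName;    -- ProductName depends on ProductID and is already available via the Products table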
Note:
A table is in third normal form when the following conditions are met:
* It is in second normal form.
* All nonprimary fields are dependent on the primary key.
Second normal form states that a table must meet all the rules for first normal form (1NF) and that there must be no partial dependencies of any of the columns on the primary key.
First normal form (1NF) sets the very basic rules for an organized database:
* Define the data items required, because they become the columns in a table. Place related data items in a table.
* Ensure that there are no repeating groups of data.
* Ensure that there is a primary key.
References: https://www.tutorialspoint.com/sql/third-normal-form.htm
Q5. Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are a database developer for a company. The company has a server that has multiple physical disks. The disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many SQL jobs that run during off-peak hours.
You observe that many deadlocks appear to be happening during specific times of the day.
You need to monitor the SQL environment and capture information about the processes that are causing the deadlocks.
What should you do?
A. Create a sys.dm_os_waiting_tasks query.
B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.
Answer: F
Explanation:
To view deadlock information, the Database Engine provides monitoring tools in the form of two trace flags, and the deadlock graph event in SQL Server Profiler.
Trace Flag 1204 and Trace Flag 1222
When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources. It is possible to enable both trace flags to obtain two representations of the same deadlock event.
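As a sketch, both trace flags can be enabled server-wide with DBCC TRACEON (the -1 argument applies the flag globally):

DBCC TRACEON (1204, -1); -- deadlock information formatted by each node involved
DBCC TRACEON (1222, -1); -- deadlock information formatted first by processes, then by resources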
References:https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx
Q6. Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product, and SalesPerson. The tables were created using the following Transact-SQL statements:
You must modify the ProductReview Table to meet the following requirements:
1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview table.
4. Changes to records in the Product table must propagate to the ProductReview table.
You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL statements for these tables are not available.
You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.
You must add the following constraints to the SalesHistory table:
- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four
Finance department users must be able to retrieve data from the SalesHistory table for sales persons where the value of the SalesYTD column is above a certain threshold.
You plan to create a memory-optimized table named SalesOrder. The table must meet the following requirements:
- The table must hold 10 million unique sales orders.
- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.
Performance for queries against the SalesOrder table that use Where clauses with exact equality operations must be optimized.
You need to enable referential integrity for the ProductReview table.
How should you complete the relevant Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.
Select two alternatives.
A. For the first selection select: WITH CHECK
B. For the first selection select: WITH NOCHECK
C. For the second selection select: ON DELETE NO ACTION ON UPDATE CASCADE
D. For the second selection select: ON DELETE CASCADE ON UPDATE CASCADE
E. For the second selection select: ON DELETE NO ACTION ON UPDATE NO ACTION
F. For the second selection select: ON DELETE CASCADE ON UPDATE NO ACTION
Answer: B,C
Explanation:
B: We should use WITH NOCHECK because existing records in the ProductReview table must not be validated with the Product table.
C: Deletes should not be allowed, so we use ON DELETE NO ACTION. Updates must propagate, so we use ON UPDATE CASCADE.
NO ACTION: the Database Engine raises an error, and the update action on the row in the parent table is rolled back.
CASCADE: corresponding rows are updated in the referencing table when that row is updated in the parent table.
Note: ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table that is altered, if those rows have a referential relationship and the referenced row is deleted from the parent table. The default is NO ACTION.
ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
Specifies what action happens to rows in the table altered when those rows have a referential relationship and the referenced row is updated in the parent table. The default is NO ACTION.
Note: You must modify the ProductReview Table to meet the following requirements:
1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview table.
4. Changes to records in the Product table must propagate to the ProductReview table.
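Putting the two selections together, a minimal sketch of the completed statement could look like the following; the constraint name and the ProductID column in ProductReview are assumptions, since the CREATE TABLE statements are only shown in the exhibit:

ALTER TABLE ProductReview WITH NOCHECK          -- existing rows are not validated against Product
    ADD CONSTRAINT FK_ProductReview_Product
    FOREIGN KEY (ProductID) REFERENCES Product (ProductID)
    ON DELETE NO ACTION                         -- block deletes in Product while rows are referenced
    ON UPDATE CASCADE;                          -- propagate ProductID changes to ProductReview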
References:https://msdn.microsoft.com/en-us/library/ms190273.aspx https://msdn.microsoft.com/en-us/library/ms188066.aspx
Q7. HOTSPOT
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product, and SalesPerson. The tables were created using the following Transact-SQL statements:
You must modify the ProductReview Table to meet the following requirements:
1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview table.
4. Changes to records in the Product table must propagate to the ProductReview table.
You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL statements for these tables are not available.
You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.
You must add the following constraints to the SalesHistory table:
- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four
Finance department users must be able to retrieve data from the SalesHistory table for sales persons where the value of the SalesYTD column is above a certain threshold.
You plan to create a memory-optimized table named SalesOrder. The table must meet the following requirements:
- The table must hold 10 million unique sales orders.
- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.
Performance for queries against the SalesOrder table that use Where clauses with exact equality operations must be optimized.
You need to create the SalesOrder table.
How should you complete the table definition? To answer, select the appropriate Transact-SQL segments in the answer area.
Answer:
Explanation:
Box 1: NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000)
Hash index is preferable over a nonclustered index when queries test the indexed columns by use of a WHERE clause with an exact equality on all index key columns. We should use a bucket count of 10 million.
Box 2: SCHEMA_ONLY
Durability: The value of SCHEMA_AND_DATA indicates that the table is durable, meaning that changes are persisted on disk and survive restart or failover. SCHEMA_AND_DATA is the default value.
The value of SCHEMA_ONLY indicates that the table is non-durable. The table schema is persisted but any data updates are not persisted upon a restart or failover of the database. DURABILITY=SCHEMA_ONLY is only allowed with MEMORY_OPTIMIZED=ON.
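A minimal sketch of a table definition that satisfies these requirements; only SalesOrderId and the options below are implied by the scenario, the other columns are hypothetical:

CREATE TABLE dbo.SalesOrder
(
    SalesOrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000), -- hash index for exact-equality lookups, sized for 10 million orders
    OrderDate DATETIME2 NOT NULL,  -- hypothetical column
    CustomerId INT NOT NULL        -- hypothetical column
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY); -- no transaction logging; data is lost on restart, which is acceptable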
References:https://msdn.microsoft.com/en-us/library/mt670614.aspx
Q8. HOTSPOT
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a database named Sales that contains the following database tables: Customer, Order, and Products. The Products table and the Order table are shown in the following diagram.
The Customer table includes a column that stores the date for the last order that the customer placed.
You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000 records. Storage requirements for the Leads table must be minimized.
You need to create triggers that meet the following requirements:
In the table below, identify the trigger types that meet the requirements.
NOTE: Make only one selection in each column. Each correct selection is worth one point.
Answer:
Explanation:
INSTEAD OF INSERT triggers can be defined on a view or table to replace the standard action of the INSERT statement.
AFTER specifies that the DML trigger is fired only when all operations specified in the triggering SQL statement have executed successfully.
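The specific trigger requirements are shown only in the exhibit, so the sketch below simply illustrates the two trigger types on hypothetical objects:

CREATE TABLE dbo.Leads (LeadID INT IDENTITY PRIMARY KEY, LeadName NVARCHAR(100) NOT NULL);
GO
CREATE VIEW dbo.LeadsView AS SELECT LeadID, LeadName FROM dbo.Leads;
GO
-- INSTEAD OF INSERT: replaces the standard INSERT action against the view
CREATE TRIGGER trgLeadsViewInsert ON dbo.LeadsView
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.Leads (LeadName) SELECT LeadName FROM inserted;
END;
GO
-- AFTER INSERT: fires only after the INSERT against the table has completed successfully
CREATE TRIGGER trgLeadsAfterInsert ON dbo.Leads
AFTER INSERT
AS
BEGIN
    PRINT 'Rows inserted into Leads';
END;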
References:https://technet.microsoft.com/en-us/library/ms175089(v=sql.105).aspx
Q9. Note: This question is part of a series of questions that present the same scenario. Each question in this series contains a unique solution. Determine whether the solution meets the stated goals.
Your company has employees in different regions around the world.
You need to create a database table that stores the following employee attendance information:
- Employee ID
- date and time employee checked in to work
- date and time employee checked out of work
Date and time information must be time zone aware and must not store fractional seconds.
Solution: You run the following Transact-SQL statement:
Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
Datetimeoffset, not datetimeofset, defines a date that is combined with a time of day that has time zone awareness and is based on a 24-hour clock.
The syntax is: datetimeoffset [ (fractional seconds precision) ]
If datetimeoffset is used without specifying a precision, the fractional seconds precision defaults to 7.
References: https://msdn.microsoft.com/en-us/library/bb630289.aspx
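To meet the requirement of no fractional seconds, the precision can be set to 0 explicitly. A minimal sketch of a matching table (the table and column names are assumptions):

CREATE TABLE dbo.EmployeeAttendance
(
    EmployeeID INT NOT NULL,
    CheckInTime DATETIMEOFFSET(0) NOT NULL,  -- time zone aware, no fractional seconds
    CheckOutTime DATETIMEOFFSET(0) NULL      -- NULL until the employee checks out
);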
Q10. DRAG DROP
Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge, and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema named Website. You create the Customer table by running the following Transact-SQL statement:
The value of the CustomerStatus column is equal to one for active customers. The values of the Account1Status and Account2Status columns are equal to one for active accounts. The following table displays selected columns and rows from the Customer table.
You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.
Sales.FemaleCustomers must meet the following requirements:
1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.
You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary. The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL statement:
You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries. Other stored procedures call spDeleteCustAcctRelationship to delete records from the CustomerToAccountBridge table.
You need to create Sales.FemaleCustomers.
How should you complete the view definition? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
Box 1: WITH SCHEMABINDING
SCHEMABINDING binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base table or tables cannot be modified in a way that would affect the view definition.
Box 2: WITH CHECK OPTION
CHECK OPTION forces all data modification statements executed against the view to follow the criteria set within select_statement. When a row is modified through a view, the WITH CHECK OPTION makes sure the data remains visible through the view after the modification is committed.
Note: Sales.FemaleCustomers must meet the following requirements:
1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.
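A minimal sketch of the view under these requirements; the column list follows the requirement above, and the Gender filter column is an assumption because the full Customer table definition is only shown in the exhibit:

CREATE VIEW Sales.FemaleCustomers
WITH SCHEMABINDING                  -- prevent schema changes that would affect the view
AS
SELECT CustomerName, Address, City, State, PostalCode
FROM Sales.Customer
WHERE Gender = 'Female'             -- hypothetical filter column
WITH CHECK OPTION;                  -- updates through the view must satisfy the filter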
References:https://msdn.microsoft.com/en-us/library/ms187956.aspx