All that matters here is passing the Microsoft 70-765 exam, and all you need is a high score on the 70-765 Provisioning SQL Databases (beta) exam. The only thing you need to do is download the Examcollection 70-765 exam study guides now. We will not let you down, and your purchase is covered by our money-back guarantee.
Q11. - (Topic 1)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
You need to create the templates.
Solution: You create the desired SQL Server configuration in an Azure Resource Group, then export the Resource Group template and save it to the Templates Library.
Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
An Azure Resource Manager template consists of JSON and expressions that you can use to construct values for your deployment.
A good JSON editor, not a Resource Group template, can simplify the task of creating templates.
Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
References:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
Q12. - (Topic 1)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
You need to create the templates.
Solution: You use Visual Studio to create a XAML template that defines the deployment and configuration settings for the SQL Server environment.
Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
An Azure Resource Manager template consists of JSON, not XAML, and expressions that you can use to construct values for your deployment.
A good JSON editor can simplify the task of creating templates.
Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
References:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
Q13. - (Topic 2)
You plan to deploy 20 Microsoft Azure SQL Database instances to an elastic pool in Azure to support a batch processing application.
Two of the databases in the pool reach their peak workload threshold at the same time every day. This leads to inconsistent performance for batch completion.
You need to ensure that all batches perform consistently. What should you do?
A. Create an In-Memory table.
B. Increase the storage limit in the pool.
C. Implement a readable secondary database.
D. Increase the total number of elastic Database Transaction Units (eDTUs) in the pool.
Answer: D
Explanation:
In SQL Database, the relative measure of a database's ability to handle resource demands is expressed in Database Transaction Units (DTUs) for single databases and elastic DTUs (eDTUs) for databases in an elastic pool.
A pool is given a set number of eDTUs, for a set price. Within the pool, individual databases are given the flexibility to auto-scale within set parameters. Under heavy load, a database can consume more eDTUs to meet demand.
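The pool's eDTU allocation itself is resized from the Azure portal, PowerShell, or the REST API, but assigning a database to a pool can also be done in Transact-SQL. The sketch below is illustrative only; the database name BatchDb01 and pool name BatchPool are assumptions, not values from the question.
-- Move an existing Azure SQL database into an elastic pool so that it
-- draws from the pool's shared eDTU allocation.
ALTER DATABASE BatchDb01
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = BatchPool));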
Additional eDTUs can be added to an existing pool with no database downtime.
References:https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool
Q14. - (Topic 1)
You have a Microsoft SQL Server 2014 instance named SRV2014 that has a single tempdb database file. The tempdb database file is eight gigabytes (GB) in size.
You install a SQL Server 2021 instance named SRV2021 by using default settings. The new instance has eight logical processor cores.
You plan to migrate the databases from SRV2014 to SRV2021.
You need to configure the tempdb database on SRV2021. The solution must minimize the number of future tempdb autogrowth events.
What should you do?
A. Increase the size of the tempdb data file to 8 GB. In the tempdb database, set the value of the MAXDOP property to 8.
B. Increase the size of the tempdb data files to 1 GB.
C. Add seven additional tempdb data files. In the tempdb database, set the value of the MAXDOP property to 8.
D. Set the value for the autogrowth setting for the tempdb data file to 128 megabytes (MB). Add seven additional tempdb data files and set the autogrowth value to 128 MB.
Answer: B
Explanation:
In an effort to simplify the tempdb configuration experience, SQL Server 2021 setup has been extended to configure various properties for tempdb for multi-processor environments.
1. A new tab dedicated to tempdb has been added to the Database Engine Configuration step of the setup workflow.
2. Configuration options: Data Files
* Number of files – this will default to the lower of 8 or the number of logical cores as detected by setup.
* Initial size – is specified in MB and applies to each tempdb data file. This makes it easier to configure all files of the same size. Total initial size is the cumulative tempdb data file size (Number of files * Initial Size) that will be created.
* Autogrowth – is specified in MB (fixed growth is preferred as opposed to a non-linear percentage-based growth) and applies to each file. The default value of 64 MB was chosen to cover one PFS interval. An equivalent post-setup configuration in T-SQL is sketched below.
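The following is a minimal, hedged T-SQL sketch of the same layout applied to an existing instance. The logical file names, the path T:\TempDB\, and the 1 GB / 128 MB values are illustrative assumptions chosen to match answer choice B, not values taken from the question.
-- Resize the primary tempdb data file; 1 GB per file and a fixed 128 MB
-- autogrowth increment reduce future autogrowth events.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 128MB);

-- Add a second data file; repeat for files 3 through 8 on an 8-core server.
-- The path T:\TempDB\ is a placeholder.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1GB, FILEGROWTH = 128MB);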
References:https://blogs.msdn.microsoft.com/psssql/2021/03/17/sql-2021-it-just-runs-faster-automatic-tempdb-configuration/
Q15. - (Topic 2)
You are deploying a Microsoft SQL Server database that will support a mixed OLTP and OLAP workload. The target virtual machine has four CPUs.
You need to ensure that reports do not use all available system resources. What should you do?
A. Enable Auto Close.
B. Increase the value for the Minimum System Memory setting.
C. Set MAXDOP to half the number of CPUs available.
D. Increase the value for the Minimum Memory per query setting.
Answer: C
Explanation:
When an instance of SQL Server runs on a computer that has more than one microprocessor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution. You can use the max degree of parallelism option to limit the number of processors to use in parallel plan execution.
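On the four-CPU VM in this question, half the CPUs is a MAXDOP value of 2. A minimal sketch of applying the setting server-wide with sp_configure:
-- 'max degree of parallelism' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Limit each parallel plan to half of the four available CPUs.
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;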
Q16. HOTSPOT - (Topic 1)
You plan to migrate a Microsoft SQL Server workload from an on-premises server to a Microsoft Azure virtual machine (VM). The current server contains 4 cores with an average CPU workload of 6 percent and a peak workload of 10 percent when using 2.4 GHz processors.
You gather the following metrics:
You need to design a SQL Server VM to support the migration while minimizing costs. For each setting, which value should you use? To answer, select the appropriate storage option from each list in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Data drive: Premium Storage
Transaction log drive: Standard Storage
TempDB drive: Premium Storage
Note: A standard disk is expected to handle 500 IOPS or 60 MB/s. A P10 Premium disk is expected to handle 500 IOPS.
A P20 Premium disk is expected to handle 2300 IOPS. A P30 Premium disk is expected to handle 5000 IOPS.
VM size: A3
Max data disk throughput is 8x500 IOPS
References:https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes
Topic 2, Manage databases and instances
13. - (Topic 2)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You manage a Microsoft SQL Server environment with several databases.
You need to ensure that queries use statistical data and do not initialize values for local variables.
Solution: You enable the PARAMETER_SNIFFING option for the databases.
Does the solution meet the goal?
A. Yes
B. No
Answer: A
Explanation:
PARAMETER_SNIFFING = { ON | OFF | PRIMARY } enables or disables parameter sniffing. Setting it to OFF is equivalent to enabling Trace Flag 4136.
SQL Server uses a process called parameter sniffing when executing queries or stored procedures that use parameters. During compilation, the value passed into the parameter is evaluated and used to create an execution plan. That value is also stored with the execution plan in the plan cache. Future executions of the plan will reuse the plan that was compiled with that reference value.
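A minimal sketch of enabling the option per database with a database-scoped configuration; the database name SalesDb is a placeholder:
-- Run in the context of the target user database (SalesDb is a placeholder).
USE SalesDb;
GO
-- ON keeps parameter sniffing enabled, so compiled plans use the
-- statistics histogram for the sniffed parameter values.
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = ON;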
References:https://msdn.microsoft.com/en-us/library/mt629158.aspx
Q17. HOTSPOT - (Topic 1)
You use Resource Manager to deploy a new Microsoft SQL Server instance in a Microsoft Azure virtual machine (VM) that uses Premium storage. The combined initial size of the SQL Server user database files is expected to be over 200 gigabytes (GB). You must maximize performance for the database files and the log file.
You add the following additional drive volumes to the VM:
You have the following requirements:
You need to deploy the SQL instance.
In the table below, identify the drive where you must store each SQL Server file type.
NOTE: Make only one selection in each column. Each correct selection is worth one point.
Answer:
Explanation:
Enable read caching on the disk(s) hosting the data files and TempDB.
Do not enable caching on disk(s) hosting the log file. Host caching is not used for log files.
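As an illustration only, a hedged CREATE DATABASE sketch that separates the data files from the log file; the drive letters, sizes, and file names are assumptions and do not come from the exhibit:
-- F:\ stands in for the read-cached Premium volume chosen for data files,
-- and L:\ for the uncached volume chosen for the log file.
CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_data, FILENAME = 'F:\Data\SalesDb.mdf', SIZE = 200GB)
LOG ON
    (NAME = SalesDb_log, FILENAME = 'L:\Log\SalesDb_log.ldf', SIZE = 50GB);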
Q18. HOTSPOT - (Topic 2)
You need to ensure that a user named Admin2 can manage logins.
How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL segments in the answer area.
Answer Area
Answer:
Explanation:
Step 1: CREATE LOGIN
First you need to create a login for SQL Azure; its syntax is as follows:
CREATE LOGIN username WITH PASSWORD = 'password';
Step 2: CREATE USER
Step 3: LOGIN
Users are created per database and are associated with logins. You must be connected to the database in where you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:
CREATE USER readonlyuser FROM LOGIN readonlylogin;
Step 4: loginmanager
Members of the loginmanager role can create new logins in the master database.
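Putting the segments together for Admin2, a minimal sketch (the password is a placeholder):
-- Run in the master database of the Azure SQL Database logical server.
CREATE LOGIN Admin2 WITH PASSWORD = '<placeholder-strong-password>';

-- Still in master: map the login to a user, then grant login management.
CREATE USER Admin2 FROM LOGIN Admin2;
ALTER ROLE loginmanager ADD MEMBER Admin2;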
References:
https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins
Q19. - (Topic 1)
You plan to migrate a database to Microsoft Azure SQL Database. The database requires 500 gigabytes (GB) of storage.
The database must support 50 concurrent logins. You must minimize the cost associated with hosting the database.
You need to create the database. Which pricing tier should you use?
A. Standard S3 pricing tier
B. Premium P2 tier
C. Standard S2 pricing tier
D. Premium P1 tier
Answer: D
Explanation:
For a database size of 500 GB, the Premium tier is required. Both P1 and P2 are adequate; P1 is preferred because it is cheaper.
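A hedged sketch of creating the database directly at the P1 objective; the database name MigratedDb is a placeholder:
-- Run against the master database of the Azure SQL Database logical server.
-- 500 GB is within the Premium P1 storage limit.
CREATE DATABASE MigratedDb
(EDITION = 'Premium', SERVICE_OBJECTIVE = 'P1', MAXSIZE = 500 GB);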
Q20. HOTSPOT - (Topic 1)
You plan to migrate a Microsoft SQL Server workload from an on-premises server to a Microsoft Azure virtual machine (VM). The current server contains 4 cores with an average CPU workload of 6 percent and a peak workload of 10 percent when using 2.4 GHz processors.
You gather the following metrics:
You need to design a SQL Server VM to support the migration while minimizing costs. For each setting, which value should you use? To answer, select the appropriate storage option from each list in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Data drive: Premium Storage
Transaction log drive: Standard Storage
TempDB drive: Premium Storage
Note: A standard disk is expected to handle 500 IOPS or 60 MB/s. A P10 Premium disk is expected to handle 500 IOPS.
A P20 Premium disk is expected to handle 2300 IOPS. A P30 Premium disk is expected to handle 5000 IOPS.
VM size: A3
Max data disk throughput is 8x500 IOPS
References:https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes