Real 70-646 exam materials and a free demo for the Microsoft IT specialist certification. Real success guaranteed with updated 70-646 PDF and VCE dump materials. 100% pass the PRO: Windows Server 2008, Server Administrator exam today!
Q81. - (Topic 1)
Your network is configured as shown in the following diagram.
Each office contains a server that has the File Services server role installed. The servers have a shared folder named Resources.
You need to plan the data availability of the Resources folder. Your plan must meet the following requirements:
. If a WAN link fails, the files in the Resources folder must be available in all of the offices.
. If a single server fails, the files in the Resources folder must be available in each of the branch offices, and the users must be able to use existing drive mappings.
. Your plan must minimize network traffic over the WAN links.
What should you include in your plan?
A. a standalone DFS namespace that uses DFS Replication in a full mesh topology
B. a domain-based DFS namespace that uses DFS Replication in a full mesh topology
C. a standalone DFS namespace that uses DFS Replication in a hub and spoke topology
D. a domain-based DFS namespace that uses DFS Replication in a hub and spoke topology
Answer: D
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration: Distributed File System (DFS) DFS is considerably enhanced in Windows Server 2008. It consists of two technologies, DFS Namespaces and DFS Replication, that you can use (together or independently) to provide fault-tolerant and flexible file sharing and replication services.
DFS Namespaces lets you group shared folders on different servers (and in multiple sites) into one or more logically structured namespaces. Users view each namespace as a single shared folder with a series of subfolders. The underlying shared folders structure is hidden from users, and this structure provides fault tolerance and the ability to automatically connect users to local shared folders, when available, instead of routing them over wide area network (WAN) connections.
DFS Replication provides a multimaster replication engine that lets you synchronize folders on multiple servers across local or WAN connections. It uses the Remote Differential Compression (RDC) protocol to update only those files that have changed since the last replication. You can use DFS Replication in conjunction with DFS Namespaces or by itself.
Specifying the Replication Topology
The replication topology defines the logical connections that DFSR uses to replicate files among servers. When choosing or changing a topology, remember that two one-way connections are created between the members you choose, thus allowing data to flow in both directions. To create or change a replication topology in the DFS Management console, right-click the replication group for which you want to define a new topology and then click New Topology. The New Topology Wizard lets you choose one of the following options:
Hub And Spoke: This topology requires three or more members. For each spoke member, you choose a required hub member and an optional second hub member for redundancy. This optional hub ensures that a spoke member can still replicate if one of the hub members is unavailable. If you specify more than one hub member, the hub members will have a full-mesh topology between them.
Full Mesh: In this topology, every member replicates with all the other members of the replication group. This topology works well when 10 or fewer members are in the replication group.
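To make the traffic trade-off concrete, the following sketch (illustrative only; the member counts are hypothetical) counts the one-way DFSR connections each topology creates, based on the rule above that every replicating pair gets two one-way connections.

```python
def full_mesh_connections(members: int) -> int:
    # Every member replicates with every other member; DFSR creates two
    # one-way connections per replicating pair.
    return members * (members - 1)


def hub_and_spoke_connections(hubs: int, spokes: int) -> int:
    # Each spoke replicates with every hub (two one-way connections per
    # spoke/hub pair); multiple hubs form a full mesh among themselves.
    return 2 * hubs * spokes + full_mesh_connections(hubs)


# Hypothetical deployment: one hub (main office) and four branch offices.
print(full_mesh_connections(5))         # 20 one-way connections (full mesh)
print(hub_and_spoke_connections(1, 4))  # 8 one-way connections (hub and spoke)
```

Fewer connections between branch offices means less replication traffic crossing the WAN links, which is why the hub-and-spoke topology fits the requirement to minimize WAN traffic.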
Q82. - (Topic 2)
You need to recommend a strategy for using managed service accounts on the Web servers.
Which managed service accounts should you recommend?
A. One account for all the web servers.
B. One account for each web server.
C. One account for the parent domain and one account for both child domains.
D. One account for the parent domain and one account for each child domain.
Answer: B
Explanation:
There are 5 web servers in total, 3 in the forest root domain and 1 in each child domain.
Service Account Vulnerability
The practice of configuring services to use domain accounts for authentication leads to potential security exposure. The degree of risk exposure is dependent on various factors, including:
• The number of servers that have services configured to use service accounts. The vulnerability profile of a network increases for every server that has domain-account-authenticated services running on it. The existence of each such server increases the odds that an attacker might compromise that server, which can be used to escalate privileges to other resources on the network.
• The scope of privileges for any given domain account that services use. The larger the scope of privileges that a service account has, the greater the number of resources that can be compromised by that account. Domain administrator level privileges are a particularly high risk, because the scope of vulnerability for such accounts includes any computer on the network, including the domain controllers. Because such accounts have administrative privileges to all member servers, the compromise of such an account would be severe, and all computers and data in the domain would be suspect.
• The number of services configured to use domain accounts on any given server. Some services have unique vulnerabilities, which make them somewhat more susceptible to attacks. Attackers will usually attempt to exploit known vulnerabilities first. Use of a domain account by a vulnerable service presents an escalated risk to other systems that could otherwise have been isolated to a single server.
• The number of domain accounts that are used to run services in a domain. Monitoring and managing the security of service accounts requires more diligence than ordinary user accounts, and each additional domain account in use by services further complicates administration of those accounts. Administrators and security administrators need to know where each service account is used in order to detect suspicious activity, which highlights the need to minimize the number of those accounts.
The preceding factors lead to several possible vulnerability scenarios, each with a different level of potential security risk. For these examples it is assumed that the service accounts are domain accounts and that each account has at least one service on each server using it for authentication. The domain accounts shown in the referenced figure are as follows:
• Account A has Administrator-equivalent privileges on more than one domain controller.
• Account B has Administrator-equivalent privileges on all member servers.
• Account C has Administrator-equivalent privileges on servers 2 and 3.
• Account D has Administrator-equivalent privileges on servers 4 and 5.
• Account E has Administrator-equivalent privileges on a single member server only.
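Purely as an illustration of those scenarios (the server names below are hypothetical stand-ins for the figure, which is not reproduced here), this sketch shows which machines become suspect when each account is compromised:

```python
# Hypothetical model of the accounts described above: which servers become
# suspect if a given service account is compromised.
domain_controllers = {"DC1", "DC2"}
member_servers = {"Server1", "Server2", "Server3", "Server4", "Server5"}
all_computers = domain_controllers | member_servers

admin_scope = {
    "A": {"DC1", "DC2"},        # admin on more than one domain controller
    "B": set(member_servers),   # admin on all member servers
    "C": {"Server2", "Server3"},
    "D": {"Server4", "Server5"},
    "E": {"Server1"},           # admin on a single member server only
}

for account, scope in sorted(admin_scope.items()):
    # Compromising an account with administrative rights on a domain controller
    # effectively exposes every computer in the domain; otherwise only the
    # servers in the account's scope are suspect.
    blast_radius = all_computers if scope & domain_controllers else scope
    print(account, sorted(blast_radius))
```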
Q83. - (Topic 16)
You need to recommend a solution for deploying App2. What should you recommend?
A. Deploy a new App-V package that contains App2. Stream the package to the client computers of the 10 users.
B. Deploy a new MED-V workspace that contains App2. Deploy the workspace to the client computers of the 10 users.
C. On an RD Session Host server in the branch office, install and publish App2 by using RemoteApp. Deploy the RemoteApp program as an MSI file.
D. On an RD Virtualization Host server in the branch office, create 10 Windows 7 VMs that contain App2. Configure the new VMs as personal virtual desktops.
Answer: D
Q84. - (Topic 19)
You need to recommend a solution to meet the IT security requirements and data encryption requirements for TT-FILE01 with the minimum administrative effort.
What should you recommend? (Choose all that apply.)
A. Turn on BitLocker on drive X:\ and select the Automatically unlock this drive on this computer option.
B. Migrate TT-FILE01 to Windows Server 2008 R2 Enterprise.
C. Store BitLocker recovery information in the tailspintoys.com domain.
D. Turn on BitLocker on the system drive.
Answer: A,C
Explanation:
Backing up recovery passwords for a BitLocker-protected disk volume allows administrators to recover the volume if it is locked. This ensures that encrypted data belonging to the enterprise can always be accessed by authorized users.
Storage of BitLocker recovery information in Active Directory
Backed-up BitLocker recovery information is stored in a child object of the Computer object. That is, the Computer object is the container for a BitLocker recovery object. Each BitLocker recovery object includes the recovery password and other recovery information. More than one BitLocker recovery object can exist under each Computer object, because there can be more than one recovery password associated with a BitLocker-enabled volume.
The name of the BitLocker recovery object incorporates a globally unique identifier (GUID) and date and time information, for a fixed length of 63 characters. The form is:
<Object Creation Date and Time><Recovery GUID>
For example:
2005-09-30T17:08:23-08:00{063EA4E1-220C-4293-BA01-4754620A96E7}
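As an illustration of that naming convention, here is a minimal Python sketch (using the sample values shown above) that assembles the 63-character object name from its creation timestamp and recovery GUID:

```python
from datetime import datetime, timedelta, timezone
import uuid


def bitlocker_recovery_object_name(created: datetime, recovery_guid: uuid.UUID) -> str:
    # <Object Creation Date and Time><Recovery GUID>, fixed length of 63 characters.
    timestamp = created.isoformat(timespec="seconds")      # 2005-09-30T17:08:23-08:00
    return f"{timestamp}{{{str(recovery_guid).upper()}}}"  # GUID wrapped in braces


name = bitlocker_recovery_object_name(
    datetime(2005, 9, 30, 17, 8, 23, tzinfo=timezone(timedelta(hours=-8))),
    uuid.UUID("063EA4E1-220C-4293-BA01-4754620A96E7"),
)
print(name)       # 2005-09-30T17:08:23-08:00{063EA4E1-220C-4293-BA01-4754620A96E7}
print(len(name))  # 63
```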
Q85. - (Topic 1)
Your company has a main office and a branch office. Your network contains a single Active Directory domain.
The functional level of the domain is Windows Server 2008 R2. An Active Directory site exists for each office.
All servers run Windows Server 2008 R2. You plan to deploy file servers in each office.
You need to design a file sharing strategy to meet the following requirements:
Users in both offices must be able to access the same files.
Users in both offices must use the same Universal Naming Convention (UNC) path
to access files.
The design must reduce the amount of bandwidth used to access files.
Users must be able to access files even if a server fails.
What should you include in your design?
A. A standalone DFS namespace that uses replication.
B. A domain-based DFS namespace that uses replication.
C. A multisite failover cluster that contains a server located in the main office and another server located in the branch office.
D. A Network Load Balancing cluster that contains a server located in the main office and another server located in the branch office.
Answer: B
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration:
Domain-Based Namespaces
You can create domain-based namespaces on one or more member servers or DCs in the same domain.
Metadata for a domain-based namespace is stored by AD DS. Each server must contain an NTFS volume to host the namespace. Multiple namespace servers increase the availability of the namespace and ensure failover protection. A domain-based namespace cannot be a clustered resource in a failover cluster. However, you can locate the namespace on a server that is also a node in a failover cluster, provided that you configure the namespace to use only local resources on that server. A domain-based namespace in Windows Server 2008 mode supports access-based enumeration. Windows Server 2008 mode is discussed later in this lesson.
You choose a domain-based namespace if you want to use multiple namespace servers to ensure the availability of the namespace, or if you want to make the name of the namespace server invisible to users.
When users do not need to know the UNC path to a namespace folder, it is easier to replace the namespace server or migrate the namespace to another server. If, for example, a stand-alone namespace called \\Glasgow\Books needed to be transferred to a server called Brisbane, it would become \\Brisbane\Books. However, if it were a domain-based namespace (assuming Brisbane and Glasgow are both in the Contoso.internal domain), it would be \\Contoso.internal\Books no matter which server hosted it, and it could be transferred from one server to the other without this transfer being apparent to the user, who would continue to use \\Contoso.internal\Books to access it.
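To restate the example above in a runnable form, this small sketch (purely illustrative) shows why the domain-based path is unaffected when the namespace moves from one server to another:

```python
def namespace_unc(namespace_type: str, server: str, domain: str, root_name: str) -> str:
    # A stand-alone namespace exposes the hosting server's name;
    # a domain-based namespace exposes the domain name instead.
    host = domain if namespace_type == "domain-based" else server
    return rf"\\{host}\{root_name}"


print(namespace_unc("stand-alone", "Glasgow", "Contoso.internal", "Books"))    # \\Glasgow\Books
print(namespace_unc("stand-alone", "Brisbane", "Contoso.internal", "Books"))   # \\Brisbane\Books (path changes)
print(namespace_unc("domain-based", "Glasgow", "Contoso.internal", "Books"))   # \\Contoso.internal\Books
print(namespace_unc("domain-based", "Brisbane", "Contoso.internal", "Books"))  # \\Contoso.internal\Books (unchanged)
```

This is why the requirement that users in both offices keep using the same UNC path, even if a server fails, points to a domain-based namespace with multiple namespace servers.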
Q86. - (Topic 1)
Your company has a main office and a branch office. The offices connect by using WAN links. The network consists of a single Active Directory domain. An Active Directory site exists for each office. Servers in both offices run Windows Server 2008 R2 Enterprise. You plan to deploy a failover cluster solution to service users in both offices.
You need to plan a failover cluster to meet the following requirements:
•Maintain the availability of services if a single server fails
•Minimize the number of servers required
What should you include in your plan?
A. Deploy a failover cluster that contains one node in each office.
B. Deploy a failover cluster that contains two nodes in each office.
C. In the main office, deploy a failover cluster that contains one node. In the branch office, deploy a failover cluster that contains one node.
D. In the main office, deploy a failover cluster that contains two nodes. In the branch office, deploy a failover cluster that contains two nodes.
Answer: A
Explanation:
MCITP Self-Paced Training Kit Exam 70-646 Windows Server Administration: Failover Clustering
Failover clustering is a technology that allows another server to continue to service client requests in the event that the original server fails. Clustering is covered in more detail in Chapter 11, "Clustering and High Availability." You deploy failover clustering on mission-critical servers to ensure that important resources are available even if a server hosting those resources fails.
The Failover Clustering feature enables multiple servers to work together to increase the availability of services and applications. If one of the clustered servers (or nodes) fails, another node provides the required service through failover. The feature is available in the Windows Server 2008 Enterprise and Datacenter editions and is not available in the Standard or Web editions. Formerly known as server clustering, Failover Clustering creates a logical grouping of servers, also known as nodes, that can service requests for applications with shared data stores.
Q87. - (Topic 1)
Your network consists of a single Active Directory domain. All domain controllers run Windows Server 2008 R2.
All client computers run Windows 7. All user accounts are stored in an organizational unit (OU) named Staff. All client computer accounts are stored in an OU named Clients. You plan to deploy a new application.
You need to ensure that the application deployment meets the following requirements:
. Users must access the application from an icon on the Start menu.
. The application must be available to remote users when they are offline.
What should you do?
A. Publish the application to users in the Staff OU.
B. Publish the application to users in the Clients OU.
C. Assign the application to computers in the Staff OU.
D. Assign the application to computers in the Clients OU.
Answer: D
Explanation:
http://www.youtube.com/watch?v=hQkRN96cKkM
Group Policy objects can be applied either to users or to computers. Deploying applications through Active Directory is also done through the use of Group Policy, so applications are deployed either on a per-user basis or on a per-computer basis. There are two different ways that you can deploy an application through Active Directory: you can either publish the application or you can assign it. You can only publish applications to users, but you can assign applications to either users or computers. The application is deployed in a different manner depending on which of these methods you use.
Publishing an application doesn't actually install the application, but rather makes it available to users. For example, suppose that you were to publish Microsoft Office. Publishing is a Group Policy setting, so it would not take effect until the next time the user logs in. When the user does log in, they will not initially notice anything different. However, if the user opens Control Panel and clicks the Add/Remove Programs option, they will find that Microsoft Office is now on the list, and they can then choose to install it on their machine. One thing to keep in mind is that regardless of which deployment method you use, Windows does not perform any sort of software metering, so it is up to you to make sure that you have enough licenses for the software that you are installing.
Assigning an application to a user works differently than publishing an application. Again, assigning an application is a Group Policy action, so the assignment won't take effect until the next time the user logs in. When the user does log in, they will see that the new application has been added to the Start menu and/or to the desktop. Although a menu option or an icon for the application exists, the software hasn't actually been installed yet. To avoid overwhelming the server containing the installation package, the software is not installed until the user attempts to use it for the first time. This is also where the self-healing feature comes in: whenever a user attempts to use the application, Windows does a quick check to make sure that the application hasn't been damaged. If files or registry settings are missing, they are automatically replaced.
Assigning an application to a computer works similarly to assigning an application to a user. The main difference is that the assignment is linked to the computer rather than to the user, so it takes effect the next time the computer is rebooted. Assigning an application to a computer also differs from user assignment in that the deployment process actually installs the application rather than just the application's icon. Because assigning to a computer installs the application the next time the computer reboots, the application is available at the next logon regardless of which user logs in. Also, because it is assigned to a computer, the GPO needs to be linked to the Clients OU, as this is where the computer accounts are located.
Assigning Software to a group.
http://support.microsoft.com/kb/324750
Create a folder to hold the Windows Installer package on a server. Share the folder by applying permissions that let users and computers read and run these files. Then, copy the MSI package files into this location.
From a Windows Server 2003-based computer in the domain, log on as a domain administrator, and then start Active Directory Users and Computers.
In Active Directory Users and Computers, right-click the container to which you want to link the GPOs, and then click Properties.
Click the Group Policy tab, and then click New to create a new GPO for installing the Windows Installer package. Give the new GPO a descriptive name.
Click the new GPO, and then click Edit. The Group Policy Object Editor starts.
Right-click the Software Settings folder under either Computer Configuration or User Configuration, point to New, and then click Package.
Q88. - (Topic 1)
Your network consists of a single Active Directory forest. The forest contains one Active Directory domain. The domain contains eight domain controllers. The domain controllers run Windows Server 2003 Service Pack 2.
You upgrade one of the domain controllers to Windows Server 2008 R2.
You need to recommend an Active Directory recovery strategy that supports the recovery of deleted objects.
The solution must allow deleted objects to be recovered for up to one year after the date of deletion.
What should you recommend?
A. Increase the tombstone lifetime for the forest.
B. Increase the interval of the garbage collection process for the forest.
C. Configure daily backups of the Windows Server 2008 R2 domain controller.
D. Enable shadow copies of the drive that contains the Ntds.dit file on the Windows Server 2008 R2 domain controller.
Answer: A
Explanation:
The tombstone lifetime must be substantially longer than the expected replication latency between the domain controllers. The interval between cycles of deleting tombstones must be at least as long as the maximum replication propagation delay across the forest. Because the expiration of a tombstone lifetime is based on the time when an object was deleted logically, rather than on the time when a particular server received that tombstone through replication, an object's tombstone is collected as garbage on all servers at approximately the same time. If the tombstone has not yet replicated to a particular domain controller, that DC never records the deletion. This is the reason why you cannot restore a domain controller from a backup that is older than the tombstone lifetime.
By default, the Active Directory tombstone lifetime is 60 days (180 days in forests created with Windows Server 2003 SP1 or later). This value can be changed if necessary. To change this value, the tombstoneLifetime attribute of the CN=Directory Service object in the configuration partition must be modified.
This is related to Server 2003 but should still be relevant: http://www.petri.co.il/changing_the_tombstone_lifetime_windows_ad.htm
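For illustration only, the following minimal sketch sets the tombstoneLifetime attribute with the Python ldap3 library; the DC name, credentials, and naming context are placeholders, and in practice this attribute is typically edited with a tool such as ADSI Edit:

```python
from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

# Placeholder DC and credentials; an account with rights to the
# configuration partition is required.
server = Server("dc01.contoso.internal")
conn = Connection(
    server,
    user="CONTOSO\\Administrator",
    password="<password>",
    authentication=NTLM,
)
conn.bind()

# The tombstoneLifetime attribute lives on the Directory Service object
# in the configuration partition (placeholder naming context shown).
dn = ("CN=Directory Service,CN=Windows NT,CN=Services,"
      "CN=Configuration,DC=contoso,DC=internal")

# One year of recoverability, matching the question's requirement.
conn.modify(dn, {"tombstoneLifetime": [(MODIFY_REPLACE, [365])]})
print(conn.result)
conn.unbind()
```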
Authoritative Restore
When a nonauthoritative restore is performed, objects deleted after the backup was taken will again be deleted when the restored DC replicates with other servers in the domain. On every other DC the object is marked as deleted so that when replication occurs the local copy of the object will also be marked as deleted. The authoritative restore process marks the deleted object in such a way that when replication occurs, the object is restored to active status across the domain. It is important to remember that when an object is deleted it is not instantly removed from Active Directory, but gains an attribute that marks it as deleted until the tombstone lifetime is reached and the object is removed. The tombstone lifetime is the amount of time a deleted object remains in Active Directory and has a default value of 180 days.
To ensure that the Active Directory database is not updated before the authoritative restore takes place, you use the Directory Services Restore Mode (DSRM) when performing the authoritative restore process. DSRM allows the administrator to perform the necessary restorations and mark the objects as restored before rebooting the DC and allowing those changes to replicate out to other DCs in the domain.
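As a small worked example of the backup-age rule noted above (a minimal sketch; the 180-day figure is the default discussed in this explanation, and real forests may use a different value):

```python
from datetime import date


def backup_is_restorable(backup_date: date, today: date, tombstone_lifetime_days: int = 180) -> bool:
    # A DC backup older than the tombstone lifetime must not be restored:
    # deletions it never received have already been garbage-collected elsewhere,
    # so restoring it could reintroduce lingering objects.
    return (today - backup_date).days <= tombstone_lifetime_days


print(backup_is_restorable(date(2011, 3, 1), date(2011, 6, 1)))  # True  (92 days old)
print(backup_is_restorable(date(2010, 9, 1), date(2011, 6, 1)))  # False (273 days old)
```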
Q89. - (Topic 10)
You need to recommend a solution for managing all of the servers. The solution must meet the company's technical requirements. What should you include in the recommendation?
A. Remote Server Administration Tools (RSAT)
B. the Administration Tools Pack (adminpak.msi)
C. the Remote Desktop Gateway (RD Gateway) role service
D. the Remote Desktop Web Access (RD Web Access) role service
Answer: A
Explanation:
http://support.microsoft.com/kb/941314
Microsoft Remote Server Administration Tools (RSAT) enables IT administrators to remotely manage roles and features in Windows Server 2008 from a computer that is running Windows Vista with Service Pack 1 (SP1). It includes support for remote management of computers that are running either a Server Core installation option or a full installation option of Windows Server 2008. It provides similar functionality to the Windows Server 2003 Administration Tools Pack.
Q90. - (Topic 18)
You need to plan for the installation of critical updates to only shared client computers.
What should you recommend?
A. Configure all WSUS servers as upstream servers.
B. Create an Automatic Approval rule that applies to the GDI_Students group.
C. Create an Automatic Approval rule that applies to the LabComputers group.
D. Configure the shared client computers to synchronize hourly from Microsoft Update.
Answer: C