Your success in the Microsoft 70-535 exam is our sole target, and we develop all our 70-535 braindumps to help you reach it. Not only is our 70-535 study material the best you can find, it is also the most detailed and the most up to date. Our 70-535 practice exams are written to the highest standards of technical accuracy.
P.S. Top Quality 70-535 papers are available on Google Drive, GET MORE: https://drive.google.com/open?id=1QGQh8lSQv2kpYQewvx2Fa025vtCRw5Vh
Q1. You manage a cloud service that hosts a customer-facing application. The application allows users to upload images and create collages. The cloud service is running in two medium instances and utilizes Azure Queue storage for image processing.
The storage account is configured to be locally redundant. The sales department plans to send a newsletter to potential clients. As a result, you expect a significant increase in global traffic.
You need to recommend a solution that meets the following requirements:
* Configure the cloud service to ensure the application is responsive to the traffic increase.
* Minimize hosting and administration costs.
What are two possible ways to achieve this goal? Each correct answer presents a complete solution.
A. Configure the cloud service to run in two Large instances.
B. Configure the cloud service to auto-scale to three instances when processor utilization is above 80%.
C. Configure the storage account to be geo-redundant.
D. Deploy a new cloud service in a separate data center. Use Azure Traffic Manager to load balance traffic between the cloud services.
E. Configure the cloud service to auto-scale when the queue exceeds 1000 entries per machine.
Answer: B,E
Explanation:
An autoscaling solution reduces the amount of manual work involved in dynamically scaling an application. It can do this in two different ways: either preemptively by setting constraints on the number of role instances based on a timetable, or reactively by adjusting the number of role instances in response to some counter(s) or measurement(s) that you can collect from your application or from the Azure environment.
References: https://msdn.microsoft.com/en-us/library/hh680945(v=pandp.50).aspx
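For reference, a reactive, queue-based rule like the one in option E can also be defined programmatically instead of through the classic portal. The sketch below is illustrative only: it assumes the older AzureRM.Insights PowerShell cmdlets, and the metric name and $queueServiceId resource ID are assumptions rather than values taken from the question.
    # Hedged sketch: scale out by one instance when the queue backlog exceeds 1000 messages.
    # $queueServiceId is a hypothetical resource ID; "QueueMessageCount" is an assumed metric name.
    $ruleParams = @{
        MetricName           = "QueueMessageCount"
        MetricResourceId     = $queueServiceId
        Operator             = "GreaterThan"
        MetricStatistic      = "Average"
        Threshold            = 1000
        TimeGrain            = [TimeSpan]::FromMinutes(1)
        TimeWindow           = [TimeSpan]::FromMinutes(5)
        ScaleActionCooldown  = [TimeSpan]::FromMinutes(10)
        ScaleActionDirection = "Increase"
        ScaleActionValue     = "1"
    }
    $scaleOutRule = New-AzureRmAutoscaleRule @ruleParams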
Q2. You manage a set of virtual machines (VMs) deployed to the cloud service named fabrikamVM. You configure auto scaling according to the following parameters:
* With an instance range of two to six instances
* To maintain CPU usage between 70 and 80 percent
* To scale up one instance at a time
* With a scale up wait time of 30 minutes
* To scale down one instance at a time
* With a scale down wait time of 30 minutes
You discover the following usage pattern of a specific application:
* The application peaks very quickly, and the peak lasts for several hours.
* CPU usage stays above 90 percent for the first 1 to 1.5 hours after usage increases. After 1.5 hours, the CPU usage falls to about 75 percent until application usage begins to decline.
You need to modify the auto scaling configuration to scale up faster when usage peaks. What are two possible ways to achieve this goal? Each correct answer presents a complete solution.
A. Decrease the scale down wait time.
B. Decrease the scale up wait time.
C. Increase the number of scale up instances.
D. Increase the scale up wait time.
E. Increase the maximum number of instances.
Answer: B,C
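Scaling up faster does not require new rules, only different values on the existing scale-up rule: a shorter scale-action cooldown (option B) or a larger scale-action value (option C). As a rough sketch of where those settings and the two-to-six instance range live, assuming the older AzureRM.Insights cmdlets and hypothetical rule objects built with New-AzureRmAutoscaleRule:
    # Hedged sketch: the instance range from the question expressed as an autoscale profile.
    # $scaleUpRule and $scaleDownRule are assumed rule objects; shortening the scale-up rule's
    # ScaleActionCooldown and raising its ScaleActionValue is what makes scale-up faster.
    $fabrikamProfile = New-AzureRmAutoscaleProfile -Name "fabrikamVM-autoscale" `
        -DefaultCapacity "2" -MinimumCapacity "2" -MaximumCapacity "6" `
        -Rule $scaleUpRule, $scaleDownRule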
Q3. You need to ensure that users do not need to re-enter their passwords after they authenticate to cloud applications for the first time.
What should you do?
A. Enable Microsoft Account authentication.
B. Set up a virtual private network (VPN) connection between the VanArsdel premises and the Azure datacenter. Set up a Windows Active Directory domain controller in Azure VM. Implement Integrated Windows authentication.
C. Deploy ExpressRoute.
D. Configure Azure Active Directory Sync to use single sign-on (SSO).
Answer: D
Explanation:
Single sign-on (SSO) is a property of access control of multiple related, but independent software systems. With this property a user logs in once and gains access to all systems without being prompted to log in again at each of them.
References: http://en.wikipedia.org/wiki/Single_sign-on
Q4. You administer an Azure Storage account named contosostorage. The account has a blob container to store image files. A user reports being unable to access an image file.
You need to ensure that anonymous users can successfully read image files from the container.
Which log entry should you use to verify access?
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A
Explanation:
Option A includes AnonymousSuccess.
References: https://blogs.msdn.microsoft.com/windowsazurestorage/2011/08/02/windows-azure-storage-logging-using-logs-to-track-storage-requests/
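The exhibit is not reproduced here, but the log entry can be checked directly: each Storage Analytics log line is semicolon-delimited and records, among other fields, the operation type (GetBlob) and the transaction status (AnonymousSuccess for a successful unauthenticated read). A rough sketch, assuming the $logs blobs have already been downloaded to a hypothetical local folder:
    # Hedged sketch: search downloaded $logs files for successful anonymous blob reads.
    Get-ChildItem -Path "C:\storage-logs" -Recurse -Filter *.log |
        Select-String -Pattern "GetBlob;AnonymousSuccess" -SimpleMatch |
        Select-Object -First 10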
Q5. You administer an Azure Storage account named contosostorage. The account has queue containers with logging enabled. You need to view all log files generated during the month of July 2014. Which URL should you use to access the list?
A. http://contosostorage.queue.core.windows.net/$logs?restype=container&comp=list&prefix=queue/2014/07
B. http://contosostorage.queue.core.windows.net/$files?restype=container&comp=list&prefix=queue/2014/07
C. http://contosostorage.blob.core.windows.net/$files?restype=container&comp=list&prefix=blob/2014/07
D. http://contosostorage.blob.core.windows.net/$logs?restype=container&comp=list&prefix=blob/2014/07
Answer: D
Explanation:
All logs are stored in block blobs (not in queues), in a container named $logs (not $files), which is automatically created when Storage Analytics is enabled for a storage account. The $logs container is located in the blob namespace of the storage account, for example: http://<accountname>.blob.core.windows.net/$logs.
References: https://docs.microsoft.com/en-us/rest/api/storageservices/About-Storage-Analytics-Logging?redirectedfrom=MSDN
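The same listing can also be produced with the classic storage cmdlets, where the -Prefix value corresponds to the prefix=blob/2014/07 query parameter in answer D. A minimal sketch, assuming $storageKey holds the account key:
    # Hedged sketch: list the July 2014 log blobs in the $logs container.
    $ctx = New-AzureStorageContext -StorageAccountName "contosostorage" -StorageAccountKey $storageKey
    Get-AzureStorageBlob -Container '$logs' -Prefix "blob/2014/07" -Context $ctx |
        Select-Object Name, Length, LastModified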
Q6. Your company network has two physical locations configured in a geo-clustered environment.
You create a Blob storage account in Azure that contains all the data associated with your company.
You need to ensure that the data remains available in the event of a site outage. Which storage option should you enable?
A. Locally redundant storage
B. Geo-redundant storage
C. Zone-redundant storage
D. Read-only geo-redundant storage
Answer: D
Explanation:
Read-access geo-redundant storage (RA-GRS) maximizes availability for your storage account, by providing read-only access to the data in the secondary location, in addition to the replication across two regions provided by GRS.
When you enable read-only access to your data in the secondary region, your data is available on a secondary endpoint, in addition to the primary endpoint for your storage account.
References: https://docs.microsoft.com/en-us/azure/storage/storage-redundancy
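For completeness, the replication type of an existing classic storage account can also be changed from PowerShell. A minimal sketch, assuming the classic (service management) Azure module and a hypothetical account name; the newer ARM cmdlets use different parameter names (for example -SkuName):
    # Hedged sketch: switch an existing classic storage account to read-access geo-redundant storage.
    Set-AzureStorageAccount -StorageAccountName "companydata" -Type "Standard_RAGRS"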
Q7. You are planning an application to run on Azure virtual machines (VMs). The VMs will be backed up using Azure Backup.
The application maintains its state in three binary files stored on disk. Changes in application state require that all three files be updated on disk. If only one or two of the files are updated on disk, work is lost and the system is in an inconsistent state.
You need to ensure that when a backup occurs, the application's data is always in a consistent state.
What should you do?
A. Disable caching for the VMs' virtual hard disks.
B. Use Premium Storage for the VMs' virtual hard disks.
C. Implement the Volume Shadow Copy Service (VSS) API in the application.
D. Store the application files on an Azure File Service network share.
Answer: C
Q8. You manage a cloud service that utilizes an Azure Service Bus queue. You need to ensure that messages that are never consumed are retained. What should you do?
A. Check the MOVE TO THE DEAD-LETTER SUBQUEUE option for Expired Messages in the Azure Portal.
B. From the Azure Management Portal, create a new queue and name it Dead-Letter.
C. Execute the Set-AzureServiceBus PowerShell cmdlet.
D. Execute the New-AzureSchedulerStorageQueueJob PowerShell cmdlet.
Answer: A
Explanation:
Deadlettering – From time to time a message may arrive in your queue that just can't be processed. Each time the message is retrieved for processing, the consumer throws an exception and cannot process the message. These are often referred to as poisonous messages and can happen for a variety of reasons, such as a corrupted payload or a message containing an unknown payload inadvertently delivered to the wrong queue. When this happens, you do not want your system to grind to a halt simply because one of the messages can't be processed.
Ideally the message will be set aside to be reviewed later so that processing can continue on to other messages in the queue. This process is called 'deadlettering' a message, and Service Bus brokered messaging supports dead lettering by default. If a message fails to be processed and appears back on the queue ten times, it will be placed into a dead letter queue. You can control the number of failures it takes for a message to be dead lettered by setting the MaxDeliveryCount property on the queue. When a message is deadlettered, it is actually placed on a sub-queue which can be accessed just like any other Service Bus queue; for a queue named samplequeue, the dead letter queue path would be samplequeue/$DeadLetterQueue. By default a message is moved to the dead letter queue if it fails delivery more than 10 times.
Automatic dead lettering does not occur in the ReceiveAndDelete mode as the message has already been removed from the queue.
References: https://www.simple-talk.com/cloud/cloud-data/an-introduction-to-windows-azure-service-bus-brokered-messaging/
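Outside the portal, the same behavior can be configured through the Service Bus .NET client, which can also be driven from PowerShell. A minimal sketch, assuming the Microsoft.ServiceBus assembly is available locally and $connectionString holds a namespace connection string (both are assumptions, not part of the question):
    # Hedged sketch: create a queue whose expired messages are moved to its $DeadLetterQueue sub-queue.
    Add-Type -Path ".\Microsoft.ServiceBus.dll"
    $manager = [Microsoft.ServiceBus.NamespaceManager]::CreateFromConnectionString($connectionString)
    $queue = New-Object Microsoft.ServiceBus.Messaging.QueueDescription("samplequeue")
    $queue.EnableDeadLetteringOnMessageExpiration = $true   # retain messages that expire unconsumed
    $queue.MaxDeliveryCount = 10                            # failed deliveries before dead-lettering
    $manager.CreateQueue($queue) | Out-Null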
Q9. You need to configure availability for the virtual machines that the company is migrating to Azure.
What should you implement?
A. Traffic Manager
B. Availability Sets
C. Virtual Machine Autoscaling
D. Cloud Services
Answer: D
Explanation:
Scenario: VanArsdel plans to migrate several virtual machine (VM) workloads into Azure.
Q10. You manage several Azure virtual machines (VMs). You create a custom image to be used by employees on the development team.
You need to ensure that the custom image is available when you deploy new servers. Which Azure PowerShell cmdlet should you use?
A. Update-AzureVMImage
B. Add-AzureVhd
C. Add-AzureVMImage
D. Update-AzureDisk
E. Add-AzureDataDisk
Answer: C
Explanation:
The Add-AzureVMImage cmdlet adds a new operating system image or a new virtual machine image to the image repository. The image is a generalized operating system image, created using Sysprep for Windows or the appropriate tool for the Linux distribution.
References: https://docs.microsoft.com/en-us/powershell/module/azure/add-azurevmimage?view=azuresmps-4.0.0
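A minimal usage sketch with hypothetical names, assuming the generalized VHD has already been uploaded to blob storage (for example with Add-AzureVhd):
    # Hedged sketch: register an uploaded, generalized VHD as a reusable VM image.
    Add-AzureVMImage -ImageName "DevTeamBaseImage" `
        -MediaLocation "https://contosostorage.blob.core.windows.net/vhds/devbase.vhd" `
        -OS "Windows" `
        -Label "Development team base image"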
P.S. Easily pass the 70-535 exam with Allfreedumps Top Quality dumps & PDF/VCE. Try free: https://www.allfreedumps.com/70-535-dumps.html (New Questions)