getcertified4sure.com

Professional-Cloud-Architect Exam

A Review Of High Value Professional-Cloud-Architect Free Samples




Passleader offers a free demo for the Professional-Cloud-Architect exam. "Google Certified Professional - Cloud Architect (GCP)", also known as the Professional-Cloud-Architect exam, is a Google certification. This set of posts, Passing the Google Professional-Cloud-Architect Exam, will help you answer those questions. The Professional-Cloud-Architect Questions & Answers covers all the knowledge points of the real exam. 100% real Google Professional-Cloud-Architect exam questions, revised by experts!

Also have Professional-Cloud-Architect free dumps questions for you:

NEW QUESTION 1

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public Internet. What should you do?

  • A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
  • B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
  • C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
  • D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.

Answer: D

Explanation:
https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases

NEW QUESTION 2

Your agricultural division is experimenting with fully autonomous vehicles.
You want your architecture to promote strong security during vehicle operation. Which two architectures should you consider?
Choose 2 answers:

  • A. Treat every micro service call between modules on the vehicle as untrusted.
  • B. Require IPv6 for connectivity to ensure a secure address space.
  • C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
  • D. Use a functional programming language to isolate code execution cycles.
  • E. Use multiple connectivity subsystems for redundancy.
  • F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Answer: AC

NEW QUESTION 3

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

  • A. Tests should scale well beyond the prior approaches.
  • B. Unit tests are no longer required, only end-to-end tests.
  • C. Tests should be applied after the release is in the production environment.
  • D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Answer: A

Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity

NEW QUESTION 4

You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed.
You want to make sure all your on-premises systems remain reachable during this period. How should you organize your networking in Google Cloud?

  • A. Use the same IP range on Google Cloud as you use on-premises
  • B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises
  • C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises
  • D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises

Answer: C
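To illustrate answer C, a Google Cloud subnet with a range that does not overlap the on-premises space might be created as follows. This is a sketch only: the network and subnet names, the region, and the assumption that on-premises uses 10.0.0.0/16 are all hypothetical, and the commands require an authenticated gcloud.

```shell
# Assume on-premises uses 10.0.0.0/16 (hypothetical), so pick a range outside it.
gcloud compute networks create migration-vpc --subnet-mode=custom
gcloud compute networks subnets create migration-subnet \
  --network=migration-vpc --region=us-central1 --range=172.16.0.0/20
```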

NEW QUESTION 5

You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do?

  • A. Create a shutdown script named k99.shutdown in the /etc/rc6.d/ directory.
  • B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.
  • C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.
  • D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url

Answer: C
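A minimal sketch of answer C follows. The PID-file path, the app name, and the assumption that the application exits cleanly on SIGTERM are hypothetical; the gcloud line at the end shows how the script would be attached at instance creation.

```shell
#!/bin/bash
# shutdown.sh -- sketch of a graceful-shutdown handler.
# Assumption (hypothetical): the app writes its PID to /tmp/myapp.pid
# and exits cleanly on SIGTERM.
PID_FILE="${PID_FILE:-/tmp/myapp.pid}"
if [ -f "$PID_FILE" ]; then
  kill -TERM "$(cat "$PID_FILE")" 2>/dev/null || true
fi
echo "graceful shutdown requested"
# Attach the script when creating the preemptible instance (not executed here):
#   gcloud compute instances create my-vm --preemptible \
#     --metadata-from-file shutdown-script=shutdown.sh
```

Note that preemptible VMs get roughly 30 seconds to run this script before being forcibly stopped, so it must finish quickly.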

NEW QUESTION 6

The current Dress4win system architecture has high latency to some customers because it is located in one data center.
As part of a future evaluation and optimization for performance in the cloud, Dress4Win wants to distribute its system architecture across multiple locations on Google Cloud Platform. Which approach should they use?

  • A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
  • B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
  • C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
  • D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance groups.

Answer: A

NEW QUESTION 7

For this question, refer to the TerramEarth case study.
The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?

  • A. Use Google App Engine with Google Cloud Endpoints.
  • B. Focus on an API for dealers and partners.
  • C. Use Google App Engine with a JAX-RS Jersey Java-based framework.
  • D. Focus on an API for the public.
  • E. Use Google App Engine with the Swagger (Open API Specification) framework.
  • F. Focus on an API for the public.
  • G. Use Google Container Engine with a Django Python container.
  • H. Focus on an API for the public.
  • I. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework.
  • J. Focus on an API for dealers and partners.

Answer: A

Explanation:
https://cloud.google.com/endpoints/docs/openapi/about-cloud-endpoints https://cloud.google.com/endpoints/docs/openapi/architecture-overview
Develop, deploy, protect and monitor your APIs with Google Cloud Endpoints. Using an Open API Specification or one of our API frameworks, Cloud Endpoints gives you the tools you need for every phase of API development.
From scenario: Business Requirements
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies – especially with seed and fertilizer suppliers in the fast-growing agricultural business – to create compelling joint offerings for their customers.
Reference: https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth

NEW QUESTION 8

Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take? Choose 2 answers

  • A. Load logs into Google BigQuery.
  • B. Load logs into Google Cloud SQL.
  • C. Import logs into Google Stackdriver.
  • D. Insert logs into Google Cloud Bigtable.
  • E. Upload log files into Google Cloud Storage.

Answer: AE
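The two keyed steps might look like this with gsutil and bq. The bucket name, dataset, table, and file layout are hypothetical, and the commands require an authenticated gcloud; this is a sketch, not a definitive procedure.

```shell
# Archive the raw logs durably and cheaply in Cloud Storage (step E).
gsutil -m cp -r ./logs gs://example-log-archive/
# Load them into BigQuery for analytics (step A).
bq mk --dataset example_project:log_analytics
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON \
  log_analytics.raw_logs "gs://example-log-archive/logs/*.json"
```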

NEW QUESTION 9

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore.
What should you do?

  • A. Point gcloud datastore create-indexes to your configuration file
  • B. Upload the configuration file to App Engine’s default Cloud Storage bucket, and have App Engine detect the new indexes
  • C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file
  • D. Create an HTTP request to the built-in python module to send the index configuration file to your application

Answer: A
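Answer A can be sketched as follows. The kind name Order and the properties customer and created are hypothetical; the deploy command requires an authenticated gcloud and is shown for reference only.

```shell
# index.yaml sketch -- kind and property names are hypothetical.
cat > index.yaml <<'EOF'
indexes:
- kind: Order
  properties:
  - name: customer
  - name: created
    direction: desc
EOF
# Deploy the indexes (requires gcloud auth; shown for reference):
#   gcloud datastore create-indexes index.yaml
```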

NEW QUESTION 10

You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?

  • A. Verify that Dedicated Interconnect can replicate files to GCP.
  • B. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
  • C. Verify that Dedicated Interconnect can replicate files to GCP.
  • D. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
  • E. Verify that the Transfer Appliance can replicate files to GCP.
  • F. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
  • G. Verify that the Transfer Appliance can replicate files to GCP.
  • H. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.

Answer: B

Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering

NEW QUESTION 11

An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs. What should you do?

  • A. Direct them to download and install the Google StackDriver logging agent.
  • B. Send them a list of online resources about logging best practices.
  • C. Help them define their requirements and assess viable logging tools.
  • D. Help them upgrade their current tool to take advantage of any new features.

Answer: C

Explanation:
Help them define their requirements and assess viable logging tools. They know their requirements and their existing tool's problems. While Stackdriver Logging and Error Reporting may well meet all of their requirements, other tools might also meet their needs. They need you to provide the expertise to assess new tools, specifically logging tools that can "capture errors and help them analyze their historical log data".
References: https://cloud.google.com/logging/docs/agent/installation

NEW QUESTION 12

A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network's origin. What should you do?

  • A. Search for Create VM entry in the Stackdriver alerting console.
  • B. Navigate to the Activity page in the Home section.
  • C. Set category to Data Access and search for Create VM entry.
  • D. In the logging section of the console, specify GCE Network as the logging section.
  • E. Search for the Create Insert entry.
  • F. Connect to the GCE instance using project SSH Keys.
  • G. Identify previous logins in system logs, and match these with the project owners list.

Answer: C

NEW QUESTION 13

Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.
What should the customer do to improve their model’s results over time?

  • A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
  • B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
  • C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.
  • D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.

Answer: D

Explanation:
https://cloud.google.com/solutions/building-a-serverless-ml-model

NEW QUESTION 14

You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region. What steps must you take?

  • A. Use the Linux dd and netcat command to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
  • B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
  • C. Create an image file from the root disk with Linux dd command, create a new disk from the image file, and use it to create a new virtual machine instance in the US-East region
  • D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file for the root disk.

Answer: D

Explanation:
https://stackoverflow.com/questions/36441423/migrate-google-compute-engine-instance-to-a-different-region
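The snapshot-to-image-to-instance flow of answer D can be sketched with gcloud. All names, zones, and project IDs below are hypothetical, the commands require an authenticated gcloud in both projects, and for brevity the custom image is created directly from the snapshot rather than via an image file in Cloud Storage.

```shell
# 1. Snapshot the production root disk (source project).
gcloud compute disks snapshot prod-vm --snapshot-names=prod-root-snap \
  --zone=us-central1-a --project=prod-project
# 2. Create a reusable custom image from the snapshot.
gcloud compute images create prod-root-image \
  --source-snapshot=prod-root-snap --project=prod-project
# 3. Boot a copy in another project and region from that image.
gcloud compute instances create prod-copy --zone=us-east1-b \
  --image=prod-root-image --image-project=prod-project --project=dr-project
```

Because the image is a standalone object, re-running steps 1-3 replaces the copy whenever the production VM changes.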

NEW QUESTION 15

You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvement to the QA/Test processes accomplished an 80% reduction. Which additional two approaches can you take to further reduce the rollbacks? Choose 2 answers

  • A. Introduce a blue-green deployment model.
  • B. Replace the QA environment with canary releases.
  • C. Fragment the monolithic platform into microservices.
  • D. Reduce the platform's dependency on relational database systems.
  • E. Replace the platform's relational database systems with a NoSQL database.

Answer: AC

NEW QUESTION 16

You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?

  • A. Cloud Pub/Sub alone
  • B. Cloud Pub/Sub to Cloud DataFlow
  • C. Cloud Pub/Sub to Stackdriver
  • D. Cloud Pub/Sub to Cloud SQL

Answer: B

Explanation:
Reference https://cloud.google.com/pubsub/docs/ordering

NEW QUESTION 17

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

  • A. Work with your ISP to diagnose the problem.
  • B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.
  • C. Roll back to an earlier known good release initially, then use Stackdriver Trace and logging to diagnose the problem in a development/test/staging environment.
  • D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate.Then use Stackdriver Trace and logging to diagnose the problem.

Answer: C

Explanation:
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services (AWS). Our API also allows ingestion of any custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs. Even better, you can analyze all that log data in real time.
References: https://cloud.google.com/logging/

NEW QUESTION 18

You want to enable your running Google Container Engine cluster to scale as demand for your application changes.
What should you do?

  • A. Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
  • B. Add a tag to the instances in the cluster with the following command:gcloud compute instances add-tags INSTANCE --tags enable --autoscaling max-nodes-10
  • C. Update the existing Container Engine cluster with the following command:gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
  • D. Create a new Container Engine cluster with the following command:gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application.

Answer: C

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided.
Where:
--max-nodes=MAX_NODES
Maximum number of nodes in the node pool.
Maximum number of nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale.
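Putting the explanation's flags together, enabling autoscaling on an existing cluster might look like this. The cluster name, node pool, and zone are hypothetical, and the command requires an authenticated gcloud.

```shell
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 \
  --node-pool=default-pool --zone=us-central1-a
```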

NEW QUESTION 19

For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

  • A. Container Engine, Cloud Pub/Sub, and Cloud SQL
  • B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
  • C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
  • D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
  • E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Answer: B

Explanation:
Real-time analytics requires a streaming/messaging service, hence Cloud Pub/Sub; the analytics themselves are served by BigQuery.
Ingest millions of streaming events per second from anywhere in the world with Cloud Pub/Sub, powered by Google's unique, high-speed private network. Process the streams with Cloud Dataflow to ensure reliable, exactly-once, low-latency data transformation. Stream the transformed data into BigQuery, the cloud-native data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game’s backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics.
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow SQL queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users’ mobile devices
Use only fully managed services
References: https://cloud.google.com/solutions/big-data/stream-analytics/

NEW QUESTION 20

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users’ access roles?

  • A. Add all users to a group.
  • B. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
  • C. Add all users to a group.
  • D. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
  • E. Add all users to a group.
  • F. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
  • G. Add all users to a group.
  • H. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.

Answer: A

Explanation:
Reference: https://cloud.google.com/bigquery/docs/running-queries
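The keyed combination of roles could be granted to the group like this. The project IDs and group address are hypothetical, the second command would be repeated for each data project, and the commands require an authenticated gcloud.

```shell
# Let the group run (and pay for) query jobs only in the billing project.
gcloud projects add-iam-policy-binding billing-project \
  --member="group:analysts@example.com" --role="roles/bigquery.user"
# Let the group read, but not edit, datasets in each data project.
gcloud projects add-iam-policy-binding data-project-1 \
  --member="group:analysts@example.com" --role="roles/bigquery.dataViewer"
```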

NEW QUESTION 21
......

100% Valid and Newest Version Professional-Cloud-Architect Questions & Answers shared by 2passeasy, Get Full Dumps HERE: https://www.2passeasy.com/dumps/Professional-Cloud-Architect/ (New 170 Q&As)