
PROFESSIONAL-CLOUD-ARCHITECT Online Practice Questions and Answers

Questions 4

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

1. Services are deployed redundantly across multiple regions in the US and Europe.

2. Only frontend services are exposed on the public internet.

3. They can provide a single frontend IP for their fleet of services.

4. Deployment artifacts are immutable.

Which set of products should they use?

A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B. Google Cloud Storage, Google App Engine, Google Network Load Balancer

C. Google Kubernetes Registry, Google Container Engine, Google HTTP(S) Load Balancer

D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager
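
For reference, a minimal sketch (not part of the question) of the idea behind option C, assuming the official kubernetes Python client, an existing GKE cluster that kubectl is already authenticated against, and a hypothetical digest-pinned image. Pinning the image by digest keeps the deployment artifact immutable; an HTTP(S) load balancer in front of the frontend Service would provide the single public IP.

    # Minimal sketch, assuming the `kubernetes` Python client and an existing
    # GKE cluster; the image reference below is a hypothetical placeholder.
    from kubernetes import client, config

    config.load_kube_config()  # uses the locally configured kubectl credentials

    # Referencing the image by digest makes the deployed artifact immutable.
    IMAGE = "gcr.io/my-project/frontend@sha256:<digest>"

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="frontend"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="frontend", image=IMAGE)]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)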

Questions 5

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files.

The database files are compressed tar files stored in their current data center.

How should he proceed?

A. Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B. Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C. Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.

D. Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
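
For reference, a rough Python equivalent of the upload step (not part of the question), assuming the google-cloud-storage client library and hypothetical bucket, object, and file names; the cron-based approach in options A and B would simply wrap a gsutil cp command instead.

    # Minimal sketch: create a Coldline bucket (one-time) and upload a
    # compressed backup archive to it. All names are hypothetical.
    from google.cloud import storage

    client = storage.Client()

    bucket = client.bucket("dress4win-db-backups")
    bucket.storage_class = "COLDLINE"
    bucket = client.create_bucket(bucket, location="us")  # one-time setup

    blob = bucket.blob("2024/01/db-backup-20240101.tar.gz")
    blob.upload_from_filename("/backups/db-backup-20240101.tar.gz")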

Questions 6

As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:

1. The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day.

2. Their administrators are notified automatically when their application reports errors.

3. They can filter their aggregated logs down in order to debug one piece of the application across many hosts.

Which Google Stackdriver features should they use?

A. Logging, Alerts, Insights, Debug

B. Monitoring, Trace, Debug, Logging

C. Monitoring, Logging, Alerts, Error Reporting

D. Monitoring, Logging, Debug, Error Report
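
For reference, a minimal sketch (not part of the question) of the Error Reporting piece, assuming the google-cloud-error-reporting library; process_request() is a hypothetical stand-in for application code. Once errors arrive in Error Reporting, notification policies can alert administrators automatically.

    # Minimal sketch: report an unhandled exception to Error Reporting.
    from google.cloud import error_reporting

    client = error_reporting.Client()

    try:
        process_request()  # hypothetical application code
    except Exception:
        client.report_exception()  # sends the exception and stack trace
        raise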

Questions 7

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

A. Use a private cluster with a private endpoint with master authorized networks configured.

B. Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C. Use a private cluster with a public endpoint with master authorized networks configured.

D. Use a public cluster with master authorized networks enabled and firewall rules.
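
For reference, a hedged sketch of option A using the google-cloud-container client; the cluster name, project, location, and CIDR ranges below are hypothetical placeholders.

    # Minimal sketch: request a private GKE cluster (private nodes and private
    # endpoint) with master authorized networks enabled.
    from google.cloud import container_v1

    cluster = container_v1.Cluster(
        name="ehr-private-cluster",
        initial_node_count=3,
        private_cluster_config=container_v1.PrivateClusterConfig(
            enable_private_nodes=True,
            enable_private_endpoint=True,
            master_ipv4_cidr_block="172.16.0.32/28",
        ),
        master_authorized_networks_config=container_v1.MasterAuthorizedNetworksConfig(
            enabled=True,
            cidr_blocks=[
                container_v1.MasterAuthorizedNetworksConfig.CidrBlock(
                    display_name="on-prem-network", cidr_block="10.0.0.0/8"
                )
            ],
        ),
    )

    container_v1.ClusterManagerClient().create_cluster(
        parent="projects/my-project/locations/us-central1", cluster=cluster
    )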

Questions 8

The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.

Which process should you implement?

A. Append metadata to file body. Compress individual files. Name files with serverName-Timestamp. Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.

B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and manifest file into a single archive file. Name files using serverName-EventSequence. Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.

C. Compress individual files. Name files with serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.

D. Append metadata to file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.
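
For reference, a minimal sketch (not part of the question) of the naming idea in option D, assuming the google-cloud-storage library and a hypothetical bucket: a randomized name prefix spreads writes across the bucket's key range, which matters at thousands of uploads per second.

    # Minimal sketch: save each event as its own compressed object with a
    # random name prefix to avoid hotspotting a single key range.
    import gzip
    import uuid

    from google.cloud import storage

    bucket = storage.Client().bucket("server-events")  # hypothetical bucket


    def save_event(server_name: str, timestamp: str, payload: bytes) -> None:
        object_name = f"{uuid.uuid4().hex[:8]}-{server_name}-{timestamp}.gz"
        bucket.blob(object_name).upload_from_string(
            gzip.compress(payload), content_type="application/gzip"
        )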

Questions 9

Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance. What should you do?

A. Engage with a security company to run web scrapes that look for your users' authentication data on malicious websites and notify you if any is found.

B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.

C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.

D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
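
For reference, a rough sketch (not part of the question) of the measurement half of option D, assuming the requests library and a hypothetical API endpoint; the failover itself would be triggered separately (for example with gcloud sql instances failover).

    # Minimal sketch: probe the REST API once per second while the Cloud SQL
    # failover runs, recording error count and median latency as simple KPIs.
    import time

    import requests

    API_URL = "https://api.example.com/v1/health"  # hypothetical endpoint

    errors, latencies = 0, []
    for _ in range(300):  # roughly five minutes of probing
        start = time.monotonic()
        try:
            if requests.get(API_URL, timeout=5).status_code >= 500:
                errors += 1
        except requests.RequestException:
            errors += 1
        latencies.append(time.monotonic() - start)
        time.sleep(1)

    print(f"errors={errors} median_latency={sorted(latencies)[len(latencies) // 2]:.3f}s")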

Questions 10

You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)

A. Sharding

B. Read replicas

C. Binary logging

D. Automated backups

E. Semisynchronous replication
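
For reference, a hedged sketch of the two features in options C and D, assuming the Cloud SQL Admin API via google-api-python-client; the project and instance names are hypothetical.

    # Minimal sketch: enable automated backups and binary logging on an
    # existing Cloud SQL instance so point-in-time recovery is possible.
    from googleapiclient import discovery

    sqladmin = discovery.build("sqladmin", "v1beta4")

    body = {
        "settings": {
            "backupConfiguration": {
                "enabled": True,           # automated backups
                "binaryLogEnabled": True,  # binary logging
            }
        }
    }

    sqladmin.instances().patch(
        project="my-project", instance="transactions-db", body=body
    ).execute()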

Questions 11

Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.

How should you design to meet Google best practices?

A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.

B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.

C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.

D. Provision standard VMs in the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
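
For reference, a hedged sketch of the preemptible-VM part of options A and B, assuming the google-cloud-compute client library; all resource names below are hypothetical.

    # Minimal sketch: create a preemptible VM for a batch job that is not
    # time-critical, trading availability for a lower price.
    from google.cloud import compute_v1

    instance = compute_v1.Instance(
        name="batch-worker-1",
        machine_type="zones/us-central1-a/machineTypes/e2-standard-4",
        scheduling=compute_v1.Scheduling(preemptible=True),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12"
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    compute_v1.InstancesClient().insert(
        project="my-project", zone="us-central1-a", instance_resource=instance
    )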

Questions 12

Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?

A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.

B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.

C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.

D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
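
For reference, a hedged sketch of the trigger in option C, assuming the google-cloud-build client and a GitHub repository mirrored into Cloud Source Repositories (the repo name below is hypothetical); the repository's cloudbuild.yaml would run the tests, build the image, and push it to Container Registry.

    # Minimal sketch: create a Cloud Build trigger that fires on pushes to the
    # develop branch and runs the repo's cloudbuild.yaml.
    from google.cloud.devtools import cloudbuild_v1

    trigger = cloudbuild_v1.BuildTrigger(
        name="develop-ci",
        trigger_template=cloudbuild_v1.RepoSource(
            repo_name="github_myorg_myapp",  # hypothetical mirrored repo name
            branch_name="develop",
        ),
        filename="cloudbuild.yaml",
    )

    cloudbuild_v1.CloudBuildClient().create_build_trigger(
        project_id="my-project", trigger=trigger
    )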

Questions 13

You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do?

A. Grant your colleague the IAM role of project Viewer

B. Perform a rolling restart on the instance group

C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys

D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
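
For reference, a hedged sketch of the "add his SSH key to the project-wide SSH keys" step, assuming the google-cloud-compute client library; the username and key material are placeholders, not real values.

    # Minimal sketch: append a public key to the project-wide "ssh-keys"
    # metadata entry so the colleague can SSH into the instances.
    from google.cloud import compute_v1

    PROJECT = "my-project"
    NEW_KEY = "colleague:ssh-ed25519 AAAAC3placeholder colleague@example.com"

    projects = compute_v1.ProjectsClient()
    metadata = projects.get(project=PROJECT).common_instance_metadata

    entries = {item.key: item.value for item in metadata.items}
    entries["ssh-keys"] = (entries.get("ssh-keys", "") + "\n" + NEW_KEY).strip()

    projects.set_common_instance_metadata(
        project=PROJECT,
        metadata_resource=compute_v1.Metadata(
            fingerprint=metadata.fingerprint,
            items=[compute_v1.Items(key=k, value=v) for k, v in entries.items()],
        ),
    )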

Questions 14

A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases. What should you do?

A. Set timeouts on your application so that you can fail requests faster.

B. Send custom metrics for each of your requests to Stackdriver Monitoring.

C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high.

D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice.
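
For reference, a minimal sketch (not part of the question) of the instrumentation in option D, assuming the OpenTelemetry SDK with the Cloud Trace exporter (the current way to send spans to what was Stackdriver Trace); the handler and span names are hypothetical.

    # Minimal sketch: record a span per downstream call so Trace can show
    # which service contributes most to slow requests.
    from opentelemetry import trace
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer(__name__)


    def handle_request(order_id: str) -> None:  # hypothetical request handler
        with tracer.start_as_current_span("api.handle_request"):
            with tracer.start_as_current_span("inventory-service.lookup"):
                pass  # downstream call; its latency is captured by the span
            with tracer.start_as_current_span("billing-service.charge"):
                pass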

Questions 15

For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?

A. 1. Configure a trigger in Cloud Build for new source changes. 2. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' 3. Push the image to the Artifact Registry.

B. 1. Configure a trigger in Cloud Build for new source changes. 2. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. 3. Push the images to the Artifact Registry.

C. 1. Create a Scheduler job to check the repo every minute. 2. For any new change, invoke Cloud Build to build container images for the microservices. 3. Tag the images using the current timestamp, and push them to the Artifact Registry.

D. 1. Configure a trigger in Cloud Build for new source changes. 2. The trigger invokes build jobs and build container images for the microservices. 3. Tag the images with a version number, and push them to Cloud Storage.
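
For reference, a rough sketch of the tagging convention in option B (not part of the question); inside Cloud Build this is normally expressed in cloudbuild.yaml with the built-in $COMMIT_SHA substitution, and the registry path and service names below are hypothetical.

    # Minimal sketch: tag each microservice image with the commit hash and
    # push it to Artifact Registry, so every build artifact is traceable.
    import subprocess

    commit = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    registry = "us-central1-docker.pkg.dev/my-project/terramearth"

    for service in ("telemetry-ingest", "fleet-api"):
        image = f"{registry}/{service}:{commit}"
        subprocess.run(["docker", "build", "-t", image, f"./{service}"], check=True)
        subprocess.run(["docker", "push", image], check=True)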

Questions 16

You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout.

What should you do?

A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.

B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.

C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.

D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.
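
For reference, a hedged sketch of the revision-based split in option A, shown here for fully managed Cloud Run via the Admin API v2 Python client (on Cloud Run for Anthos the same idea is expressed on the Knative Service); the service and revision names are hypothetical.

    # Minimal sketch: keep most traffic on the stable revision and send a
    # small share to the newly deployed revision for evaluation.
    from google.cloud import run_v2

    client = run_v2.ServicesClient()
    name = "projects/my-project/locations/us-central1/services/frontend"

    service = client.get_service(name=name)
    service.traffic = [
        run_v2.TrafficTarget(
            type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
            revision="frontend-00042-abc",  # current stable revision (hypothetical)
            percent=90,
        ),
        run_v2.TrafficTarget(
            type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
            revision="frontend-00043-def",  # candidate revision under evaluation
            percent=10,
        ),
    ]
    client.update_service(service=service).result()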

Questions 17

Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage.

What is the Google-recommended way for your application to authenticate to the required Google Cloud services?

A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.

B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.

C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.

D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
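
For reference, a minimal sketch (not part of the question) of option A in practice, assuming the google-cloud-pubsub library; the project and topic names are hypothetical.

    # Minimal sketch: publish from a Compute Engine VM using the VM's attached
    # service account (Application Default Credentials), which should hold the
    # roles/pubsub.publisher role on the topic.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()  # picks up the VM's service account
    topic_path = publisher.topic_path("my-project", "transactions")

    future = publisher.publish(topic_path, data=b'{"batch_id": "2024-01-01-0001"}')
    print(future.result())  # message ID once the publish is acknowledged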

Questions 18

You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, keep CPU usage below 75%, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?

A. 1) Enable automatic storage increase for the instance. 2) Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

B. 1) Enable automatic storage increase for the instance. 2) Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.

C. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Change the instance type to a 32-core machine type to reduce replication lag.

D. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
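
For reference, a hedged sketch of the replication-lag alert mentioned in the options, assuming the google-cloud-monitoring client; the project ID is hypothetical and notification channels are omitted for brevity.

    # Minimal sketch: alert when Cloud SQL replica lag stays above 60 seconds
    # for five minutes.
    from google.cloud import monitoring_v3

    client = monitoring_v3.AlertPolicyServiceClient()

    policy = monitoring_v3.AlertPolicy(
        display_name="Cloud SQL replication lag > 60s",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[
            monitoring_v3.AlertPolicy.Condition(
                display_name="replica_lag above threshold",
                condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                    filter=(
                        'metric.type = "cloudsql.googleapis.com/database/replication/replica_lag" '
                        'AND resource.type = "cloudsql_database"'
                    ),
                    comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                    threshold_value=60,
                    duration={"seconds": 300},
                ),
            )
        ],
    )

    client.create_alert_policy(name="projects/my-project", alert_policy=policy)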
