
Associate-Data-Practitioner: Google Cloud Associate Data Practitioner (ADP Exam) Questions and Answers

Question 4

You are developing a data ingestion pipeline to load small CSV files into BigQuery from Cloud Storage. You want to load these files upon arrival to minimize data latency. You want to accomplish this with minimal cost and maintenance. What should you do?

Options:

A.

Use the bq command-line tool within a Cloud Shell instance to load the data into BigQuery.

B.

Create a Cloud Composer pipeline to load new files from Cloud Storage to BigQuery and schedule it to run every 10 minutes.

C.

Create a Cloud Run function to load the data into BigQuery that is triggered when data arrives in Cloud Storage.

D.

Create a Dataproc cluster to pull CSV files from Cloud Storage, process them using Spark, and write the results to BigQuery.
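
For context, a file-triggered load like the one option C describes could be sketched as below. This is a minimal illustration only: the dataset and table names are placeholders, and the function is assumed to be wired to the bucket's object-finalized events through Eventarc.

    # Cloud Run function: load each newly arrived CSV into BigQuery.
    # "sales.orders" is a hypothetical destination table.
    import functions_framework
    from google.cloud import bigquery

    client = bigquery.Client()

    @functions_framework.cloud_event
    def load_csv(cloud_event):
        data = cloud_event.data  # Cloud Storage event payload
        uri = f"gs://{data['bucket']}/{data['name']}"
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,   # assumes a header row
            autodetect=True,       # infer the schema from the file
            write_disposition="WRITE_APPEND",
        )
        client.load_table_from_uri(uri, "sales.orders", job_config=job_config).result()

Since BigQuery batch load jobs are free, the per-invocation cost is essentially just the function's runtime, and the function only runs when a file actually arrives.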

Question 5

Your organization needs to store historical customer order data. The data will only be accessed once a month for analysis and must be readily available within a few seconds when it is accessed. You need to choose a storage class that minimizes storage costs while ensuring that the data can be retrieved quickly. What should you do?

Options:

A.

Store the data in Cloud Storage using Nearline storage.

B.

Store the data in Cloud Storage using Coldline storage.

C.

Store the data in Cloud Storage using Standard storage.

D.

Store the data in Cloud Storage using Archive storage.
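
Every Cloud Storage class returns data with the same millisecond first-byte latency; the classes differ in storage price, retrieval fees, and minimum storage duration. For illustration, a bucket's default class can be set with the google-cloud-storage client (the bucket name is a placeholder):

    from google.cloud import storage

    client = storage.Client()
    bucket = storage.Bucket(client, name="order-history-archive")
    bucket.storage_class = "NEARLINE"  # or "COLDLINE", "ARCHIVE", "STANDARD"
    client.create_bucket(bucket, location="US")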

Question 6

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?

Options:

A.

Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.

B.

Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.

C.

Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.

D.

Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.
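
Whichever pipeline tool is used, de-identification on Google Cloud is typically delegated to Sensitive Data Protection (Cloud DLP); Cloud Data Fusion, for example, surfaces it as a pipeline transform. A minimal sketch of the underlying de-identify call, with a placeholder project ID and sample text:

    from google.cloud import dlp_v2

    dlp = dlp_v2.DlpServiceClient()
    response = dlp.deidentify_content(
        request={
            "parent": "projects/my-project/locations/global",  # placeholder project
            "inspect_config": {
                "info_types": [{"name": "PERSON_NAME"}, {"name": "STREET_ADDRESS"}]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        # Replace each finding with its info-type label.
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": "John Doe, 123 Main St, diagnosed with asthma"},
        }
    )
    print(response.item.value)  # "[PERSON_NAME], [STREET_ADDRESS], diagnosed with asthma"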

Question 7

You work for a home insurance company. You are frequently asked to create and save risk reports with charts for specific areas using a publicly available storm event dataset. You want to be able to quickly create and re-run risk reports when new data becomes available. What should you do?

Options:

A.

Export the storm event dataset as a CSV file. Import the file to Google Sheets, and use cell data in the worksheets to create charts.

B.

Copy the storm event dataset into your BigQuery project. Use BigQuery Studio to query and visualize the data in Looker Studio.

C.

Reference and query the storm event dataset using SQL in BigQuery Studio. Export the results to Google Sheets, and use cell data in the worksheets to create charts.

D.

Reference and query the storm event dataset using SQL in a Colab Enterprise notebook. Display the table results and document with Markdown, and use Matplotlib to create charts.
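
All of the BigQuery-based options start from the same kind of query. A sketch using the publicly available NOAA severe storms dataset (table and column names as published in bigquery-public-data at the time of writing; the state filter is illustrative):

    from google.cloud import bigquery
    import matplotlib.pyplot as plt

    client = bigquery.Client()
    df = client.query("""
        SELECT event_type, COUNT(*) AS events
        FROM `bigquery-public-data.noaa_historic_severe_storms.storms_2023`
        WHERE state = 'FLORIDA'
        GROUP BY event_type
        ORDER BY events DESC
    """).to_dataframe()

    # In a notebook, re-running this cell refreshes the chart as new data lands.
    df.plot.bar(x="event_type", y="events", title="2023 storm events, Florida")
    plt.tight_layout()
    plt.show()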

Question 8

You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements. What should you do?

Options:

A.

Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.

B.

Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.

C.

Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.

D.

Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
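
The options combine two independent mechanisms: lifecycle rules handle storage-class transitions, while holds handle per-object immutability. A sketch of both with the google-cloud-storage client; the bucket name, object name, and ages are placeholders:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("compliance-data")

    # Age-based transitions to colder storage classes (days are illustrative).
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
    bucket.patch()

    # Hold a specific object so it cannot be deleted or replaced.
    blob = bucket.blob("backups/2024-q4.tar")
    blob.event_based_hold = True
    blob.patch()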

Question 9

Your organization has several datasets in its BigQuery data warehouse. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of its monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department. What should you do?

Options:

A.

Create a custom quota for each analyst in BigQuery.

B.

Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.

C.

Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.

D.

Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.
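
For reference, edition-based reservations and assignments can also be managed programmatically. A heavily simplified sketch with the BigQuery Reservation API; the project IDs, location, slot count, and edition below are all placeholders:

    from google.cloud import bigquery_reservation_v1 as reservation

    client = reservation.ReservationServiceClient()

    res = client.create_reservation(
        parent="projects/admin-project/locations/US",   # placeholder admin project
        reservation_id="finance-dept",
        reservation=reservation.Reservation(
            slot_capacity=100,  # caps spend for everything assigned to it
            edition=reservation.Edition.ENTERPRISE,
        ),
    )

    # Route query jobs from a department's project through the reservation.
    client.create_assignment(
        parent=res.name,
        assignment=reservation.Assignment(
            assignee="projects/finance-analytics",      # placeholder department project
            job_type=reservation.Assignment.JobType.QUERY,
        ),
    )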

Question 10

You need to create a new data pipeline. You want a serverless solution that meets the following requirements:

• Data is streamed from Pub/Sub and is processed in real-time.

• Data is transformed before being stored.

• Data is stored in a location that will allow it to be analyzed with SQL using Looker.

Which Google Cloud services should you recommend for the pipeline?

Options:

A.

1. Dataproc Serverless

2. Bigtable

B.

1. Cloud Composer

2. Cloud SQL for MySQL

C.

1. BigQuery

2. Analytics Hub

D.

1. Dataflow

2. BigQuery
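
Option D's combination would follow the standard streaming pattern: read from Pub/Sub, transform in flight, and write to BigQuery, where Looker can query the table with SQL. A minimal Apache Beam sketch with placeholder subscription, field, and table names:

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "Parse" >> beam.Map(json.loads)
            | "Transform" >> beam.Map(lambda e: {"user_id": e["user"],
                                                 "amount": float(e["amount"])})
            # Assumes the destination table already exists with this schema.
            | "Write" >> beam.io.WriteToBigQuery("my-project:analytics.events")
        )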

Question 11

Your organization needs to implement near real-time analytics for thousands of events arriving each second in Pub/Sub. The incoming messages require transformations. You need to configure a pipeline that processes, transforms, and loads the data into BigQuery while minimizing development time. What should you do?

Options:

A.

Use a Google-provided Dataflow template to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

B.

Create a Cloud Data Fusion instance and configure Pub/Sub as a source. Use Data Fusion to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

C.

Load the data from Pub/Sub into Cloud Storage using a Cloud Storage subscription. Create a Dataproc cluster, use PySpark to perform transformations in Cloud Storage, and write the results to BigQuery.

D.

Use Cloud Run functions to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

Question 12

Your company’s customer support audio files are stored in a Cloud Storage bucket. You plan to analyze the audio files’ metadata and file content within BigQuery to run inference by using BigQuery ML. You need to create a corresponding table in BigQuery that represents the bucket containing the audio files. What should you do?

Options:

A.

Create an external table.

B.

Create a temporary table.

C.

Create a native table.

D.

Create an object table.
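
For reference, an object table (option D) is a read-only table over unstructured objects: each row carries the object's metadata and a reference to its content, which BigQuery ML can consume. The DDL, with placeholder connection, dataset, and bucket names, looks roughly like:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE EXTERNAL TABLE support.audio_files
        WITH CONNECTION `us.gcs_connection`
        OPTIONS (
          object_metadata = 'SIMPLE',        -- what makes this an object table
          uris = ['gs://support-audio/*.wav']
        )
    """).result()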

Question 13

You have created a LookML model and dashboard that shows daily sales metrics for five regional managers to use. You want to ensure that the regional managers can only see sales metrics specific to their region. You need an easy-to-implement solution. What should you do?

Options:

A.

Create a sales_region user attribute, and assign each manager’s region as the value of their user attribute. Add an access_filter Explore filter on the region_name dimension by using the sales_region user attribute.

B.

Create five different Explores with the sql_always_filter Explore filter applied on the region_name dimension. Set each region_name value to the corresponding region for each manager.

C.

Create separate Looker dashboards for each regional manager. Set the default dashboard filter to the corresponding region for each manager.

D.

Create separate Looker instances for each regional manager. Copy the LookML model and dashboard to each instance. Provision viewer access to the corresponding manager.

Question 14

You work for a financial organization that stores transaction data in BigQuery. Your organization has a regulatory requirement to retain data for a minimum of seven years for auditing purposes. You need to ensure that the data is retained for seven years using an efficient and cost-optimized approach. What should you do?

Options:

A.

Create a partition by transaction date, and set the partition expiration policy to seven years.

B.

Set the table-level retention policy in BigQuery to seven years.

C.

Set the dataset-level retention policy in BigQuery to seven years.

D.

Export the BigQuery tables to Cloud Storage daily, and enforce a lifecycle management policy that has a seven-year retention rule.
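
If the data were exported to Cloud Storage (option D), the seven-year floor could be enforced with a bucket retention policy rather than a lifecycle rule alone; a sketch with a placeholder bucket name:

    from google.cloud import storage

    SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60

    client = storage.Client()
    bucket = client.get_bucket("transaction-archive")
    bucket.retention_period = SEVEN_YEARS_SECONDS  # blocks deletes/overwrites until expiry
    bucket.patch()
    # bucket.lock_retention_policy()  # optional: make the policy irreversible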

Question 15

You are working on a data pipeline that will validate and clean incoming data before loading it into BigQuery for real-time analysis. You want to ensure that the data validation and cleaning is performed efficiently and can handle high volumes of data. What should you do?

Options:

A.

Write custom scripts in Python to validate and clean the data outside of Google Cloud. Load the cleaned data into BigQuery.

B.

Use Cloud Run functions to trigger data validation and cleaning routines when new data arrives in Cloud Storage.

C.

Use Dataflow to create a streaming pipeline that includes validation and transformation steps.

D.

Load the raw data into BigQuery using Cloud Storage as a staging area, and use SQL queries in BigQuery to validate and clean the data.
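
In a streaming pipeline such as option C's, validation is commonly written as a DoFn with a dead-letter side output so bad records never block the stream. A minimal sketch with placeholder field checks:

    import apache_beam as beam

    class ValidateRecord(beam.DoFn):
        """Emit clean records on the main output; quarantine the rest."""
        def process(self, record):
            if record.get("user_id") and record.get("amount", 0) >= 0:
                yield record
            else:
                yield beam.pvalue.TaggedOutput("invalid", record)

    # Wiring inside a pipeline:
    # results = events | beam.ParDo(ValidateRecord()).with_outputs("invalid", main="valid")
    # results.valid   | beam.io.WriteToBigQuery(...)   # analysis table
    # results.invalid | beam.io.WriteToBigQuery(...)   # dead-letter table for inspection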

Question 16

You need to create a weekly aggregated sales report based on a large volume of data. You want to use Python to design an efficient process for generating this report. What should you do?

Options:

A.

Create a Cloud Run function that uses NumPy. Use Cloud Scheduler to schedule the function to run once a week.

B.

Create a Colab Enterprise notebook and use the bigframes.pandas library. Schedule the notebook to execute once a week.

C.

Create a Cloud Data Fusion and Wrangler flow. Schedule the flow to run once a week.

D.

Create a Dataflow directed acyclic graph (DAG) coded in Python. Use Cloud Scheduler to schedule the code to run once a week.
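
With bigframes.pandas (option B), the pandas-style operations compile to BigQuery SQL, so the large table never leaves BigQuery. A sketch with placeholder project, table, and column names:

    import bigframes.pandas as bpd

    # read_gbq accepts raw SQL; DATE_TRUNC buckets orders by week server-side.
    sales = bpd.read_gbq("""
        SELECT DATE_TRUNC(order_date, WEEK) AS week, amount
        FROM `my-project.sales.transactions`
    """)
    weekly = sales.groupby("week", as_index=False)["amount"].sum()
    weekly.to_gbq("my-project.reports.weekly_sales", if_exists="replace")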

Question 17

Your team needs to analyze large datasets stored in BigQuery to identify trends in user behavior. The analysis will involve complex statistical calculations, Python packages, and visualizations. You need to recommend a managed collaborative environment to develop and share the analysis. What should you recommend?

Options:

A.

Create a Colab Enterprise notebook and connect the notebook to BigQuery. Share the notebook with your team. Analyze the data and generate visualizations in Colab Enterprise.

B.

Create a statistical model by using BigQuery ML. Share the query with your team. Analyze the data and generate visualizations in Looker Studio.

C.

Create a Looker Studio dashboard and connect the dashboard to BigQuery. Share the dashboard with your team. Analyze the data and generate visualizations in Looker Studio.

D.

Connect Google Sheets to BigQuery by using Connected Sheets. Share the Google Sheet with your team. Analyze the data and generate visualizations in Google Sheets.

Question 18

You want to process and load a daily sales CSV file stored in Cloud Storage into BigQuery for downstream reporting. You need to quickly build a scalable data pipeline that transforms the data while providing insights into data quality issues. What should you do?

Options:

A.

Create a batch pipeline in Cloud Data Fusion by using a Cloud Storage source and a BigQuery sink.

B.

Load the CSV file as a table in BigQuery, and use scheduled queries to run SQL transformation scripts.

C.

Load the CSV file as a table in BigQuery. Create a batch pipeline in Cloud Data Fusion by using a BigQuery source and sink.

D.

Create a batch pipeline in Dataflow by using the Cloud Storage CSV file to BigQuery batch template.

Question 19

Your company uses Looker to generate and share reports with various stakeholders. You have a complex dashboard with several visualizations that needs to be delivered to specific stakeholders on a recurring basis, with customized filters applied for each recipient. You need an efficient and scalable solution to automate the delivery of this customized dashboard. You want to follow the Google-recommended approach. What should you do?

Options:

A.

Create a separate LookML model for each stakeholder with predefined filters, and schedule the dashboards using the Looker Scheduler.

B.

Create a script using the Looker Python SDK, and configure user attribute filter values. Generate a new scheduled plan for each stakeholder.

C.

Embed the Looker dashboard in a custom web application, and use the application's scheduling features to send the report with personalized filters.

D.

Use the Looker Scheduler with a user attribute filter on the dashboard, and send the dashboard with personalized filters to each stakeholder based on their attributes.
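
For reference, the SDK-driven pattern in option B generates one scheduled plan per recipient. A rough sketch with the looker_sdk package; the method and model names follow the Looker API 4.0, and every ID, filter, crontab, and address below is a placeholder:

    import looker_sdk
    from looker_sdk import models40 as models

    sdk = looker_sdk.init40()  # credentials from looker.ini or environment variables

    recipients = [("emea@example.com", "EMEA"), ("amer@example.com", "AMER")]
    for email, region in recipients:
        sdk.create_scheduled_plan(
            body=models.WriteScheduledPlan(
                name=f"Weekly dashboard - {region}",
                dashboard_id="42",                    # placeholder dashboard
                filters_string=f"Region={region}",    # per-recipient filter
                crontab="0 8 * * 1",                  # Mondays 08:00
                scheduled_plan_destination=[
                    models.ScheduledPlanDestination(
                        type="email", address=email, format="wysiwyg_pdf")
                ],
            )
        )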

Question 20

Your organization uses a BigQuery table that is partitioned by ingestion time. You need to remove data that is older than one year to reduce your organization’s storage costs. You want to use the most efficient approach while minimizing cost. What should you do?

Options:

A.

Create a scheduled query that periodically runs an UPDATE statement in SQL that sets the “deleted” column to “yes” for data that is more than one year old. Create a view that filters out rows that have been marked deleted.

B.

Create a view that filters out rows that are older than one year.

C.

Require users to specify a partition filter using the ALTER TABLE statement in SQL.

D.

Set the table partition expiration period to one year using the ALTER TABLE statement in SQL.
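
For reference, option D's expiration is a single DDL statement on the ingestion-time partitioned table (names are placeholders); partitions older than the window are then dropped automatically, with no queries to schedule or run:

    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        ALTER TABLE `my-project.analytics.events`
        SET OPTIONS (partition_expiration_days = 365)
    """).result()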

Question 21

You created a customer support application that sends several forms of data to Google Cloud. Your application is sending:

1. Audio files from phone interactions with support agents that will be accessed during trainings.

2. CSV files of users’ personally identifiable information (PII) that will be analyzed with SQL.

3. A large volume of small document files that will power other applications.

You need to select the appropriate tool for each data type given the required use case, while following Google-recommended practices. Which should you choose?

Options:

A.

1. Cloud Storage

2. Cloud SQL for PostgreSQL

3. Bigtable

B.

1. Filestore

2. Cloud SQL for PostgreSQL

3. Datastore

C.

1. Cloud Storage

2. BigQuery

3. Firestore

D.

1. Filestore

2. Bigtable

3. BigQuery

Exam Name: Google Cloud Associate Data Practitioner (ADP Exam)
Last Update: Feb 15, 2025
Questions: 72
