
Free and Premium Snowflake DEA-C01 Dumps: Questions and Answers

Page: 1 / 4
Total 65 questions

SnowPro Advanced: Data Engineer Certification Exam Questions and Answers

Question 1

The JSON below is stored in a variant column named v in a table named jCustRaw:

Which query will return one row per team member (stored in the teamMembers array) along with all of the attributes of each team member?

(The four candidate queries for options A through D appear as images in the original and are not reproduced here.)

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D
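For reference, a minimal sketch of the pattern this question tests, since the option screenshots are not reproduced: LATERAL FLATTEN returns one output row per element of an array inside a VARIANT column. The attribute names below (custName, name, role) are assumptions for illustration, as the original JSON is not shown.

-- FLATTEN expands the teamMembers array so each element becomes its own row;
-- the member's attributes are then pulled out of f.value.
SELECT
    t.v:custName::STRING AS cust_name,
    f.value:name::STRING AS member_name,
    f.value:role::STRING AS member_role
FROM jCustRaw t,
     LATERAL FLATTEN(input => t.v:teamMembers) f;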

Question 2

Which output is provided by both the SYSTEM$CLUSTERING_DEPTH function and the SYSTEM$CLUSTERING_INFORMATION function?

Options:

A.

average_depth

B.

notes

C.

average_overlaps

D.

total_partition_count
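For context: SYSTEM$CLUSTERING_DEPTH returns a single number, while SYSTEM$CLUSTERING_INFORMATION returns a JSON document whose average_depth field reports the same metric alongside others such as total_partition_count and average_overlaps. A minimal sketch, assuming a hypothetical ORDERS table clustered on O_ORDERDATE:

-- Returns just the average clustering depth as a number.
SELECT SYSTEM$CLUSTERING_DEPTH('orders', '(o_orderdate)');

-- Returns a JSON document that includes an average_depth field.
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(o_orderdate)');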

Question 3

The following code is executed in a Snowflake environment with the default settings:

What will be the result of the select statement?

Options:

A.

SQL compilation error: Object 'CUSTOMER' does not exist or is not authorized.

B.

John

C.

1

D.

1John

Question 4

How can the following relational data be transformed into semi-structured data using the LEAST amount of operational overhead?

Options:

A.

Use the TO_JSON function

B.

Use the PARSE_JSON function to produce a VARIANT value

C.

Use the OBJECT_CONSTRUCT function to return a Snowflake object

D.

Use the TO_VARIANT function to convert each of the relational columns to VARIANT.
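For context, a minimal sketch of producing semi-structured data from relational columns with OBJECT_CONSTRUCT; the EMPLOYEES table and its columns are illustrative, since the source table appears only as an image in the original:

-- OBJECT_CONSTRUCT pairs keys with column values, emitting one
-- semi-structured OBJECT per row in a single SELECT.
SELECT OBJECT_CONSTRUCT(
           'id',    employee_id,
           'name',  first_name,
           'state', state
       ) AS employee_obj
FROM employees;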

Question 5

Assuming that the session parameter USE_CACHED_RESULT is set to FALSE, what are the characteristics of Snowflake virtual warehouses in terms of the use of Snowpark?

Options:

A.

Creating a DataFrame from a table will start a virtual warehouse

B.

Creating a DataFrame from a staged file with the read() method will start a virtual warehouse

C.

Transforming a DataFrame with methods like replace() will start a virtual warehouse

D.

Calling a Snowpark stored procedure to query the database with session.call() will start a virtual warehouse

Question 6

A new CUSTOMER table is created by a data pipeline in a Snowflake schema where MANAGED ACCESS is enabled.

Which roles can grant access to the CUSTOMER table? (Select THREE.)

Options:

A.

The role that owns the schema

B.

The role that owns the database

C.

The role that owns the CUSTOMER table

D.

The SYSADMIN role

E.

The SECURITYADMIN role

F.

The USERADMIN role with the MANAGE GRANTS privilege
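For context, a minimal sketch of how a managed access schema centralizes grant decisions; the schema, table, and role names are illustrative:

-- In a managed access schema, object owners cannot grant privileges on
-- their objects; only the schema owner or a role with the MANAGE GRANTS
-- privilege can issue grants.
CREATE SCHEMA sales WITH MANAGED ACCESS;

-- Run as the schema owner or a role holding MANAGE GRANTS:
GRANT SELECT ON TABLE sales.customer TO ROLE analyst;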

Question 7

Within a Snowflake account, permissions have been defined with custom roles and role hierarchies.

To set up column-level masking using a role in the hierarchy of the current user, which function would be used?

Options:

A.

CURRENT_ROLE

B.

INVOKER_ROLE

C.

IS_ROLE_IN_SESSION

D.

IS_GRANTED_TO_INVOKER_ROLE
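For context, a minimal sketch of a masking policy that checks the caller's role hierarchy; the policy, role, table, and column names are illustrative:

-- IS_ROLE_IN_SESSION returns TRUE when the named role is in the hierarchy
-- of the session's active roles, so users inheriting PII_READER see clear
-- text while everyone else sees a masked value.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN IS_ROLE_IN_SESSION('PII_READER') THEN val
        ELSE '***MASKED***'
    END;

ALTER TABLE customers MODIFY COLUMN email
    SET MASKING POLICY email_mask;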

Question 8

Which callback function is required within a JavaScript User-Defined Function (UDF) for it to execute successfully?

Options:

A.

initialize()

B.

processRow()

C.

handler

D.

finalize()
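For context, a minimal sketch of a tabular JavaScript UDF; Snowflake calls the processRow() callback once per input row, while initialize() and finalize() are optional. The function and column names are illustrative:

CREATE OR REPLACE FUNCTION split_words(text STRING)
RETURNS TABLE (word STRING)
LANGUAGE JAVASCRIPT
AS $$
{
    // Required callback: invoked once for every input row.
    processRow: function (row, rowWriter, context) {
        row.TEXT.split(" ").forEach(function (w) {
            rowWriter.writeRow({WORD: w});
        });
    }
}
$$;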

Question 9

A Data Engineer has developed a dashboard that will issue the same SQL SELECT statement to Snowflake every 12 hours.

For how long will Snowflake use the persisted query results from the result cache, provided that the underlying data has not changed?

Options:

A.

12 hours

B.

24 hours

C.

14 days

D.

31 days

Question 10

Which Snowflake objects does the Snowflake Kafka connector use? (Select THREE).

Options:

A.

Pipe

B.

Serverless task

C.

Internal user stage

D.

Internal table stage

E.

Internal named stage

F.

Storage integration

Question 11

A CSV file around 1 TB in size is generated daily on an on-premises server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?

Options:

A.

Create a task in Snowflake that executes once a day and runs a COPY INTO statement that references the internal stage. The internal stage will read the files directly from the on-premises server and copy the newest file into the Snowflake table.

B.

On the on-premises server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a task that executes once a day in Snowflake and runs a COPY INTO statement that references the internal stage. Schedule the task to start after the file lands in the internal stage.

C.

On the on-premises server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file from the internal stage when the new file lands in the internal stage.

D.

On the on-premises server, schedule a Python file that uses the Snowpark Python library. The Python script will read the CSV data into a DataFrame and generate an INSERT INTO statement that will load directly into the table. The script will bypass the need to move a file into an internal stage.
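For context, a minimal sketch of the PUT-plus-pipe pattern described in the options; the stage, table, pipe, and file format names are illustrative:

-- From SnowSQL on the on-premises server, push the daily file:
--   PUT file:///data/daily_export.csv @my_internal_stage;

-- In Snowflake, a pipe wraps the COPY INTO statement that loads staged files.
CREATE OR REPLACE PIPE daily_csv_pipe AS
    COPY INTO my_table
    FROM @my_internal_stage
    FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');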

Question 12

A Data Engineer has written a stored procedure that will run with caller's rights. The Engineer has granted ROLEA the right to use this stored procedure.

What is a characteristic of the stored procedure being called using ROLEA?

Options:

A.

The stored procedure must run with caller's rights; it cannot be converted later to run with owner's rights

B.

If the stored procedure accesses an object that ROLEA does not have access to, the stored procedure will fail

C.

The stored procedure will run in the context (database and schema) where the owner created the stored procedure

D.

ROLEA will not be able to see the source code for the stored procedure even though the role has usage privileges on the stored procedure
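For context, a minimal sketch of a caller's-rights procedure; because it executes with the privileges and session context of the calling role, every object it touches must be accessible to ROLEA. The procedure name and body are illustrative:

CREATE OR REPLACE PROCEDURE get_row_count(tbl STRING)
RETURNS NUMBER
LANGUAGE SQL
EXECUTE AS CALLER   -- runs with the caller's privileges and context
AS
$$
DECLARE
    c NUMBER;
BEGIN
    SELECT COUNT(*) INTO :c FROM IDENTIFIER(:tbl);
    RETURN c;
END;
$$;

GRANT USAGE ON PROCEDURE get_row_count(STRING) TO ROLE rolea;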

Question 13

A Data Engineer ran a stored procedure containing various transactions. During the execution, the session abruptly disconnected, preventing one transaction from committing or rolling back. The transaction was left in a detached state and created a lock on resources.

What step must the Engineer take to immediately run a new transaction?

Options:

A.

Call the system function SYSTEM$ABORT_TRANSACTION.

B.

Call the system function SYSTEM$CANCEL_TRANSACTION.

C.

Set the LOCK_TIMEOUT to FALSE in the stored procedure

D.

Set TRANSACTION_ABORT_ON_ERROR to TRUE in the stored procedure.
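For context, a minimal sketch of releasing an orphaned lock; the transaction ID below is a placeholder for the value reported by SHOW LOCKS:

-- Find the blocked resources and the ID of the detached transaction.
SHOW LOCKS IN ACCOUNT;

-- Abort that transaction so its locks are released and new work can run.
SELECT SYSTEM$ABORT_TRANSACTION(1234567890123456789);  -- placeholder ID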

Question 14

A company is using Snowpipe to bring in millions of rows of Change Data Capture (CDC) data every day into a Snowflake staging table on a real-time basis. The CDC data needs to be processed and combined with other data in Snowflake and land in a final table as part of the full data pipeline.

How can a Data Engineer MOST efficiently process the incoming CDC data on an ongoing basis?

Options:

A.

Create a stream on the staging table and schedule a task that transforms data from the stream only when the stream has data.

B.

Transform the data during the data load with Snowpipe by modifying the related COPY INTO statement to include transformation steps such as CASE statements and JOINs.

C.

Schedule a task that dynamically retrieves the last time the task was run from INFORMATION_SCHEMA.TASK_HISTORY and use that timestamp to process the delta of the new rows since the last time the task was run.

D.

Use a CREATE OR REPLACE TABLE AS statement that references the staging table and includes all the transformation SQL. Use a task to run the full CREATE OR REPLACE TABLE AS statement on a scheduled basis.
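For context, a minimal sketch of the stream-plus-conditional-task pattern; the stream, task, warehouse, and table names are illustrative:

-- The stream captures new rows as Snowpipe lands them in the staging table.
CREATE OR REPLACE STREAM cdc_stream ON TABLE staging_cdc;

-- The task runs on a schedule but only executes when the stream has data.
CREATE OR REPLACE TASK process_cdc
    WAREHOUSE = transform_wh
    SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('CDC_STREAM')
AS
    INSERT INTO final_table
    SELECT s.id, s.payload, d.attr
    FROM cdc_stream s
    JOIN dim_table d ON d.id = s.id;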

Question 15

A Data Engineer is implementing a near real-time ingestion pipeline to load data into Snowflake using the Snowflake Kafka connector. There will be three Kafka topics created.

Which Snowflake objects are created automatically when the Kafka connector starts? (Select THREE.)

Options:

A.

Tables

B.

Tasks

C.

Pipes

D.

Internal stages

E.

External stages

F.

Materialized views

Question 16

Which methods will trigger an action that will evaluate a DataFrame? (Select TWO)

Options:

A.

DataFrame.random_split()

B.

DataFrame.collect()

C.

DataFrame.select()

D.

DataFrame.col()

E.

DataFrame.show()

Question 17

A company has an extensive script in Scala that transforms data by leveraging DataFrames. A Data Engineer needs to move these transformations to Snowpark.

Which characteristics of data transformations in Snowpark should be considered to meet this requirement? (Select TWO.)

Options:

A.

It is possible to join multiple tables using DataFrames.

B.

Snowpark operations are executed lazily on the server.

C.

User-Defined Functions (UDFs) are not pushed down to Snowflake

D.

Snowpark requires a separate cluster outside of Snowflake for computations

E.

Columns in different DataFrames with the same name should be referred to with square brackets
