Given the statement template below, which database objects can be added to a share?(Select TWO).
GRANT <privilege> ON <object_type> <object_name> TO SHARE <share_name>;
Show Answer
Answer:
C, D
Explanation:
In Snowflake, shares are used to share data across different Snowflake accounts securely. When you create a share, you can include various database objects that you want to share with consumers. According to Snowflake's documentation, the types of objects that can be shared include tables, secure views, secure materialized views, and streams. Secure functions and stored procedures are not shareable objects. Tasks also cannot be shared directly. Therefore, the correct answers are streams (C) and tables (D).
To share a stream or a table, you use the GRANT statement to grant privileges on these objects to a share. The syntax for sharing a table or stream involves specifying the type of object, the object name, and the share to which you are granting access. For example:
GRANT SELECT ON TABLE my_table TO SHARE my_share;
GRANT SELECT ON STREAM my_stream TO SHARE my_share;
These commands grant the SELECT privilege on a table named my_table and a stream named my_stream to a share named my_share . This enables the consumer of the share to access these objects according to the granted privileges.
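A fuller, minimal sketch of the sharing workflow (all object and account names here, such as my_db and consumer_account, are hypothetical): the share is created first, USAGE is granted on the containing database and schema, SELECT is granted on the object, and the consumer account is then added to the share.
CREATE SHARE my_share;
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.my_schema TO SHARE my_share;
GRANT SELECT ON TABLE my_db.my_schema.my_table TO SHARE my_share;
ALTER SHARE my_share ADD ACCOUNTS = consumer_account;  -- consumer account identifier is hypothetical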
[Reference: Snowflake Documentation on Shares and Database Objects (https://docs.snowflake.com)]
Which function returns the URL of a stage using the stage name as the input?
Show Answer
Answer:
C
Explanation:
The Snowflake function that returns a URL for a staged file, taking the stage name as an input, is C. GET_PRESIGNED_URL. This file function generates a pre-signed URL for a specific file in a stage, enabling secure, temporary access to that file without requiring Snowflake credentials. It works with external stages (such as Amazon S3 buckets) as well as internal stages, and is useful in scenarios requiring direct, time-limited file access.
Related file functions include BUILD_STAGE_FILE_URL and BUILD_SCOPED_FILE_URL, which also construct URLs to staged files, and GET_STAGE_LOCATION, which returns the URL of the stage itself.
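A minimal usage sketch, with a hypothetical stage and file path (the third argument is the expiration time in seconds):
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/2024/summary.csv', 3600);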
References:
Snowflake documentation on file functions (including GET_PRESIGNED_URL) and cloud provider (AWS, Azure, GCP) documentation on pre-signed URLs.
When an object is created in Snowflake, who owns the object?
Options:
B.
The user's default role
C.
The current active primary role
D.
The owner of the parent schema
Show Answer
Answer:
C
Explanation:
In Snowflake, when an object is created, it is owned by the role that is currently active. This active role is the one that is being used to execute the creation command. Ownership implies full control over the object, including the ability to grant and revoke access privileges. This is specified in Snowflake's documentation under the topic of Access Control, which states that "the role in use at the time of object creation becomes the owner of the object."
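A minimal sketch illustrating this behavior, using hypothetical role and object names:
USE ROLE analyst_role;
CREATE TABLE sales_db.public.orders_copy (id INT);  -- created while ANALYST_ROLE is the active role
SHOW GRANTS ON TABLE sales_db.public.orders_copy;   -- lists OWNERSHIP held by ANALYST_ROLE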
What is a characteristic of a tag associated with a masking policy?
Options:
A.
A tag can be dropped after a masking policy is assigned
B.
A tag can have only one masking policy for each data type.
C.
A tag can have multiple masking policies for each data type.
D.
A tag can have multiple masking policies with varying data types
Show Answer
Answer:
B
Explanation:
In Snowflake, a tag can be associated with only one masking policy for each data type. This means that for a given data type, you can define a single masking policy to be applied when a tag is used. Tags and masking policies are part of Snowflake's data classification and governance features, allowing for data masking based on the context defined by the tags.
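A minimal sketch of attaching a masking policy to a tag, using hypothetical names; because the policy below handles the STRING data type, it is the only STRING policy that can be set on this tag:
CREATE TAG pii_tag;
CREATE MASKING POLICY pii_string_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'PII_ADMIN' THEN val ELSE '***MASKED***' END;
ALTER TAG pii_tag SET MASKING POLICY pii_string_mask;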
Authorization to execute CREATE <object> statements comes only from which role?
Show Answer
Answer:
A
Explanation:
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the user's primary role. The primary role of a user or session specifies the set of privileges (including creation privileges) that the user has. While users can have multiple roles, only the primary role is used to determine what objects the user can create unless explicitly specified in the session.
[Reference: This is based on the principle of Role-Based Access Control (RBAC) in Snowflake, where roles are used to manage access permissions. The official Snowflake documentation on Understanding and Using Roles would be the best resource to verify this information: https://docs.snowflake.com/en/user-guide/security-access-control-overview.html#roles]
What does the Activity area of Snowsight allow users to do? (Select TWO).
Options:
A.
Schedule automated data backups.
B.
Explore each step of an executed query.
C.
Monitor queries executed by users in an account.
D.
Create and manage user roles and permissions.
E.
Access Snowflake Marketplace to find and integrate datasets.
Show Answer
Answer:
B, C
Explanation:
The Activity area of Snowsight, Snowflake's web interface, allows users to perform several important tasks related to query management and performance analysis. Among the options provided, the correct ones are:
B. Explore each step of an executed query: Snowsight provides detailed insights into query execution, including the ability to explore the execution plan of a query. This helps users understand how a query was processed, identify performance bottlenecks, and optimize query performance.
C. Monitor queries executed by users in an account: The Activity area enables users to monitor the queries that have been executed by users within the Snowflake account. This includes viewing the history of queries, their execution times, resources consumed, and other relevant metrics.
These features are crucial for effective query performance tuning and ensuring efficient use of Snowflake's resources.
Which statement will trigger a stream to advance its offset?
Options:
D.
CREATE OR REPLACE STREAM
If a virtual warehouse is suspended, what happens to the warehouse cache?
Options:
A.
The cache is dropped when the warehouse is suspended and is no longer available upon restart.
B.
The warehouse cache persists for as long as the warehouse exists, regardless of its suspension status.
C.
The cache is maintained for up to two hours and can be restored if the warehouse is restarted within this limit.
D.
The cache is maintained for the auto-suspend duration and can be restored if the warehouse is restarted within this limit.
Show Answer
Answer:
A
Explanation:
When a virtual warehouse in Snowflake is suspended, the cache is dropped and is no longer available upon restart. This means that all cached data, including results and temporary data, are cleared from memory. The purpose of this behavior is to conserve resources while the warehouse is not active. Upon restarting the warehouse, it will need to reload any data required for queries from storage, which may result in a slower initial performance until the cache is repopulated. This is a critical consideration for managing performance and cost in Snowflake.
By default, which role can create resource monitors?
Show Answer
Answer:
A
Explanation:
The role that can by default create resource monitors in Snowflake is the ACCOUNTADMIN role. Resource monitors are a crucial feature in Snowflake that allows administrators to track and control the consumption of compute resources, ensuring that usage stays within specified limits. The creation and management of resource monitors involve defining thresholds for credits usage, setting up notifications, and specifying actions to be taken when certain thresholds are exceeded.
Given the significant impact that resource monitors can have on the operational aspects and billing of a Snowflake account, the capability to create and manage them is restricted to the ACCOUNTADMIN role. This role has the broadest set of privileges in Snowflake, including the ability to manage all aspects of the account, such as users, roles, warehouses, databases, and resource monitors, among others.
The Snowflake VARIANT data type imposes a 16 MB size limit on what?
What best practice recommendations will help prevent timeouts when using the PUT command to load large data sets? (Select TWO).
Options:
A.
Compress the files before loading.
B.
Use a semi-structured file format.
C.
Increase the PARALLEL option value.
D.
Load the data into a table stage.
E.
Enable the overwrite option.
Show Answer
Answer:
A, C
Explanation:
To avoid timeouts during large data uploads with the PUT command in Snowflake, it is recommended to:
Compress the files before loading: Compressed files are smaller and upload faster, reducing the risk of timeouts.
Increase the PARALLEL option value: This option allows more simultaneous upload threads, improving upload speed and efficiency for large datasets.
Semi-structured file formats and table staging do not directly impact timeouts, while enabling overwrite does not prevent timeouts but rather controls overwriting of existing files.
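A minimal sketch of a PUT command (run from a client such as SnowSQL) that applies both recommendations; the file path and stage name are hypothetical:
PUT file:///tmp/data/part_*.csv.gz @my_int_stage
  PARALLEL = 10           -- more upload threads than the default of 4
  AUTO_COMPRESS = FALSE;  -- files are already gzip-compressed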
Which Snowflake data governance feature supports resource usage monitoring?
Which type of role can be granted to a share?
Show Answer
Answer:
B
Explanation:
In Snowflake, shares are used to share data between Snowflake accounts. When creating a share, it is possible to grant access to the share to roles within the Snowflake account that is creating the share. The type of role that can be granted to a share is a Custom role. Custom roles are user-defined roles that account administrators can create to manage access control in a more granular way. Unlike predefined roles such as ACCOUNTADMIN or SYSADMIN, custom roles can be tailored with specific privileges to meet the security and access requirements of different groups within an organization.
Granting a custom role access to a share enables users associated with that role to access the shared data if the share is received by another Snowflake account. It is important to carefully manage the privileges granted to custom roles to ensure that data sharing aligns with organizational policies and data governance standards.
References:
Snowflake Documentation on Shares: Shares
Snowflake Documentation on Roles: Access Control
In Snowflake's data security framework, how does column-level security contribute to the protection of sensitive information? (Select TWO).
Options:
A.
Implementation of column-level security will optimize query performance.
B.
Column-level security supports encryption of the entire database.
C.
Column-level security ensures that only the table owner can access the data.
D.
Column-level security limits access to specific columns within a table based on user privileges.
E.
Column-level security allows the application of a masking policy to a column within a table or view.
Show Answer
Answer:
D, E
Explanation:
Column-level security in Snowflake enhances data protection by restricting access and applying masking policies to sensitive data at the column level.
Limiting Access Based on User Privileges:
Column-level security allows administrators to define which users or roles have access to specific columns within a table.
This ensures that sensitive data is only accessible to authorized personnel, thereby reducing the risk of data breaches.
Application of Masking Policies:
Masking policies can be applied to columns to obfuscate sensitive data.
For example, credit card numbers can be masked to show only the last four digits, protecting the full number from being exposed.
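A minimal sketch of a column-level masking policy, using hypothetical names; users without the privileged role see an obfuscated email address:
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'SUPPORT_ADMIN' THEN val
       ELSE REGEXP_REPLACE(val, '.+@', '*****@') END;
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;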
References :
Snowflake Documentation: Column-Level Security
Snowflake Documentation: Dynamic Data Masking
Who can activate a network policy for users in a Snowflake account? (Select TWO)
Options:
E.
Any role that has the global ATTACH POLICY privilege
Show Answer
Answer:
A, E
Explanation:
Network policies in Snowflake are used to control access to Snowflake accounts based on IP address ranges. These policies can be activated by specific roles that have the necessary privileges.
Role: ACCOUNTADMIN:
The ACCOUNTADMIN role has full administrative rights across the Snowflake account.
This role can manage all aspects of the Snowflake environment, including network policies.
Role with Global ATTACH POLICY Privilege:
Any role that has been granted the global ATTACH POLICY privilege can activate network policies.
This privilege allows the role to attach policies that control network access to the account.
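A minimal sketch of creating and activating a network policy at the account and user level (the policy name, IP range, and user name are hypothetical):
CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('203.0.113.0/24');
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;  -- account-level activation
ALTER USER jsmith SET NETWORK_POLICY = corp_policy;  -- user-level activation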
Snowflake's access control framework combines which models for securing data? (Select TWO).
Options:
A.
Attribute-based Access Control (ABAC)
B.
Discretionary Access Control (DAC)
C.
Access Control List (ACL)
D.
Role-based Access Control (RBAC)
E.
Rule-based Access Control (RuBAC)
Show Answer
Answer:
B, D
Explanation:
Snowflake's access control framework utilizes a combination of Discretionary Access Control (DAC) and Role-based Access Control (RBAC). DAC in Snowflake allows the object owner to grant access privileges to other roles. RBAC involves assigning roles to users and then granting privileges to those roles. Through roles, Snowflake manages which users have access to specific objects and what actions they can perform, which is central to security and governance in the Snowflake environment.
References: Snowflake Documentation on Access Control
To use the overwrite option on insert, which privilege must be granted to the role?
Show Answer
Answer:
B
Explanation:
To use the overwrite option on insert in Snowflake, the DELETE privilege must be granted to the role. This is because overwriting data during an insert operation implicitly involves deleting the existing data before inserting the new data.
Understanding the Overwrite Option: The overwrite option (INSERT OVERWRITE) allows you to replace existing data in a table with new data. This operation is particularly useful for batch-loading scenarios where the entire dataset needs to be refreshed.
Why DELETE Privilege is Required: Since the overwrite operation involves removing existing rows in the table, the executing role must have the DELETE privilege to carry out both the deletion of old data and the insertion of new data.
Granting DELETE Privilege:
To grant the DELETE privilege to a role, an account administrator can execute the following SQL command:
GRANT DELETE ON TABLE my_table TO ROLE my_role;
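A minimal sketch of the overwrite option itself, with hypothetical table names; the target table is truncated before the new rows are inserted:
INSERT OVERWRITE INTO my_table
SELECT * FROM my_staging_table;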
[Reference: For additional details on inserting data with the overwrite option and the required privileges, consult the Snowflake documentation on data loading: https://docs.snowflake.com/en/sql-reference/sql/insert.html]
Which privilege is needed for a SnowFlake user to see the definition of a secure view?
Show Answer
Answer:
A
Explanation:
To see the definition of a secure view in Snowflake, the minimum privilege required is OWNERSHIP of the view. Ownership grants the ability to view the definition as well as to modify or drop the view. Secure views are designed to protect sensitive data, and thus the definition of these views is restricted to users with sufficient privileges to ensure data security.
What optional properties can a Snowflake user set when creating a virtual warehouse? (Select TWO).
Show Answer
Answer:
A, D
Explanation:
When creating a virtual warehouse in Snowflake, users have the option to set several properties to manage its behavior and resource usage. Two of these optional properties are Auto-suspend and Resource monitor.
Auto-suspend: This property defines the period of inactivity after which the warehouse will automatically suspend. This helps in managing costs by stopping the warehouse when it is not in use.
CREATE WAREHOUSE my_warehouse
WITH WAREHOUSE_SIZE = 'XSMALL'
AUTO_SUSPEND = 300; -- Auto-suspend after 5 minutes of inactivity
Resource monitor: Users can assign a resource monitor to a warehouse to control and limit the amount of credit usage. Resource monitors help in setting quotas and alerts for warehouse usage.
CREATE WAREHOUSE my_warehouse
WITH WAREHOUSE_SIZE = 'XSMALL'
RESOURCE_MONITOR = 'my_resource_monitor';
References:
Snowflake Documentation: Creating Warehouses
Snowflake Documentation: Resource Monitors
An external stage, many_stage, contains many directories, including one, app_files, that contains CSV files.
How can all the CSV files from this directory be moved into the table my_table without scanning files that are not needed?
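A minimal sketch of one approach, assuming a CSV file format and the stage and table names from the question: pointing the COPY command at the app_files directory restricts which files are listed and scanned, and a PATTERN clause limits the load to CSV files.
COPY INTO my_table
FROM @many_stage/app_files/
FILE_FORMAT = (TYPE = CSV)
PATTERN = '.*[.]csv';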
Which view in SNOWFLAKE.ACCOUNT_USAGE shows from which IP address a user connected to Snowflake?
Show Answer
Answer:
B
Explanation:
The LOGIN_HISTORY view in SNOWFLAKE.ACCOUNT_USAGE shows from which IP address a user connected to Snowflake. This view is particularly useful for auditing and monitoring purposes, as it helps administrators track login attempts, successful logins, and the geographical location of users based on their IP addresses.
Which Snowflake feature or service is primarily used for managing and monitoring data and user activities?
A Snowflake user accidentally deleted a table. The table no longer exists, but the session is within the data retention period. How can the table be restored using the LEAST amount of operational overhead?
Options:
A.
Clone the table schema as it existed before the table was dropped.
B.
Clone the database as it existed before the table was dropped.
C.
Recreate the table and reload the data.
D.
Run the UNDROP command against the table.
Show Answer
Answer:
D
Explanation:
In Snowflake, if a table is accidentally dropped but still within the data retention period (also known as "Time Travel"), the simplest and most efficient recovery method is the UNDROP command. This command restores the deleted table to its previous state with minimal operational effort. Since Snowflake retains dropped table data for a specific retention period (up to 90 days for the Enterprise edition), UNDROP can quickly recover the table without the need for complex cloning or data reloading processes, making it ideal for accidental deletions.
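A minimal sketch, with a hypothetical table name:
UNDROP TABLE my_table;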
Which privilege is required on a virtual warehouse to abort any existing executing queries?
Show Answer
Answer:
B
Explanation:
The privilege required on a virtual warehouse to abort any existing executing queries is OPERATE . The OPERATE privilege on a virtual warehouse allows a user to perform operational tasks on the warehouse, including starting, stopping, and restarting the warehouse, as well as aborting running queries. This level of control is essential for managing resource utilization and ensuring that the virtual warehouse operates efficiently.
A Snowflake user needs to optimize the definition of a secure view, but the user cannot see the view.
Which is the LEAST-privileged access or role that should be granted to the user to complete this task?
Options:
A.
Grant the user the SYSADMIN role.
B.
Grant the user the ownership privilege on the secure view.
C.
Grant the user the IMPORTED PRIVILEGES privilege on the database.
D.
Grant the user the SNOWFLAKE.OBJECT_VIEWER database role.
Which types of charts does Snowsight support? (Select TWO).
Show Answer
Answer:
A, B
Explanation:
Snowsight, Snowflake's user interface for executing and analyzing queries, supports various types of visualizations to help users understand their data better. Among the supported types, area charts and bar charts are two common options. Area charts are useful for representing quantities through filled areas on the graph, often to show volume changes over time. Bar charts, on the other hand, are versatile for comparing different groups or categories of data. Both chart types are integral to data analysis, enabling users to visualize trends, patterns, and differences in their data effectively.
References: Snowflake Documentation on Snowsight Visualizations
Which Snowflake function and command combination should be used to convert rows in a relational table to a single VARIANT column, and unload the rows into a file in JSON format? (Select TWO).
Show Answer
Answer:
C, E
Explanation:
To convert rows in a relational table to a single VARIANT column and unload the rows into a file in JSON format, you can use the COPY command in combination with the OBJECT_CONSTRUCT function. The OBJECT_CONSTRUCT function converts the row into a JSON object stored in a VARIANT column, and the COPY command can then be used to unload this data into a JSON file.
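A minimal sketch combining the two, with hypothetical stage and table names; each row becomes one JSON object in the unloaded files:
COPY INTO @my_stage/json_out/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table)
FILE_FORMAT = (TYPE = JSON);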
References :
Snowflake Documentation: OBJECT_CONSTRUCT
Snowflake Documentation: COPY INTO
Based on a review of a Query Profile, which scenarios will benefit the MOST from the use of a data clustering key? (Select TWO.)
Options:
A.
A column that appears most frequently in order by operations
B.
A column that appears most frequently in where operations
C.
A column that appears most frequently in group by operations
D.
A column that appears most frequently in aggregate operations
E.
A column that appears most frequently in join operations
What is the purpose of the use of the VALIDATE command?
Options:
A.
To view any queries that encountered an error
B.
To verify that a SELECT query will run without error
C.
To prevent a put statement from running if an error occurs
D.
To see all errors from a previously run COPY INTO <table> statement
Show Answer
Answer:
D
Explanation:
The VALIDATE function in Snowflake is used to check for errors that occurred during the execution of a COPY INTO <table> statement, helping users identify and resolve data loading issues. First, run a COPY INTO <table> command to load data from a stage into a table:
COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (FORMAT_NAME = 'my_format');
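Then query the VALIDATE table function, passing the query ID of that COPY job (the job ID below is a placeholder), to return the rows that were rejected: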
SELECT *
FROM TABLE(VALIDATE(my_table, JOB_ID => 'my_copy_job_id'));
Review Errors: The VALIDATE function will return details about any errors that occurred, such as parsing errors or data type mismatches.
References:
Snowflake Documentation: Validating Data Loads
Snowflake Documentation: COPY INTO
Which user preferences can be set for a user profile in Snowsight? (Select TWO).
Options:
A.
Multi-Factor Authentication (MFA)
Show Answer
Answer:
B, C
Explanation:
In Snowsight, Snowflake's web interface, user preferences can be customized to enhance the user experience. Among these preferences, users can set a default database and default schema. These settings streamline the user experience by automatically selecting the specified database and schema when the user initiates a new session or query, reducing the need to manually specify these parameters for each operation. This feature is particularly useful for users who frequently work within a specific database or schema context.
References: Snowflake Documentation on Snowsight User Preferences
Which statement describes Snowflake tables?
Options:
A.
Snowflake tables are a logical representation of underlying physical data.
B.
Snowflake tables are the physical instantiation of data loaded into Snowflake.
C.
Snowflake tables require that clustering keys be defined to perform optimally.
D.
Snowflake tables are owned by a user.
Show Answer
Answer:
A
Explanation:
In Snowflake, tables represent a logical structure through which users interact with the stored data. The actual physical data is stored in micro-partitions managed by Snowflake, and the logical table structure provides the means by which SQL operations are mapped to this data. This architecture allows Snowflake to optimize storage and querying across its distributed, cloud-based data storage system.
References: Snowflake Documentation on Tables
Which Snowflake feature enables loading data from cloud storage as soon as files are available in a stage?
When unloading data, which combination of parameters should be used to differentiate between empty strings and NULL values? (Select TWO).
Options:
A.
ESCAPE_UNENCLOSED_FIELD
B.
REPLACE_INVALID_CHARACTERS
C.
FIELD_OPTIONALLY_ENCLOSED_BY
Show Answer
Answer:
C, D
Explanation:
When unloading data in Snowflake, it is essential to differentiate between empty strings and NULL values to preserve data integrity. The parameters FIELD_OPTIONALLY_ENCLOSED_BY and EMPTY_FIELD_AS_NULL are used together to address this:
FIELD_OPTIONALLY_ENCLOSED_BY: This parameter specifies the character used to enclose fields, which can differentiate between empty strings (as enclosed fields) and NULLs.
EMPTY_FIELD_AS_NULL: By setting this parameter, Snowflake interprets empty fields as NULL values when unloading data, ensuring accurate representation of NULLs versus empty strings.
These parameters are crucial when exporting data for systems that need explicit differentiation between NULL and empty string values.
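A minimal unload sketch combining the two parameters, with hypothetical stage and table names; under this configuration, empty strings are expected to unload as enclosed fields ("") while NULLs unload as empty, unenclosed fields:
COPY INTO @my_stage/out/
FROM my_table
FILE_FORMAT = (
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  EMPTY_FIELD_AS_NULL = FALSE
);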
When used with the UNLOAD command, which parameter specifies the destination of unloaded data?
Show Answer
Answer:
B
Explanation:
In Snowflake, the COPY INTO <location> syntax is used to specify the target location where the data should be unloaded, typically a stage or cloud storage (such as Amazon S3 or Azure Blob Storage). This command unloads data from a Snowflake table into files within the specified destination, enabling easy export and external storage of data. The GET and PUT commands are used for file management but are not related to unloading table data directly.
Which command will unload data from a table into an external stage?
Show Answer
Answer:
C
Explanation:
In Snowflake, the COPY INTO <location> command is used to unload (export) data from a Snowflake table to an external stage, such as an S3 bucket, Azure Blob Storage, or Google Cloud Storage. This command allows users to specify the format, file size, and other options for the data being unloaded, making it a flexible solution for exporting data from Snowflake to external storage solutions for further use or analysis.
References: Snowflake Documentation on Data Unloading
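A minimal sketch, with hypothetical stage and table names:
COPY INTO @my_ext_stage/exports/orders_
FROM orders
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);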
The following settings are configured:
The MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 at the account level.
The DATA_RETENTION_TIME_IN_DAYS is set to 2 at the object level.
For how many days will the data be retained at the object level?
Show Answer
Answer:
A
Explanation:
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter sets a retention floor for the entire account: it does not replace the object-level DATA_RETENTION_TIME_IN_DAYS setting, but the effective retention period for an object is MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS). With the account minimum set to 5 days and the object setting at 2 days, the data is effectively retained for 5 days.
References: Snowflake Documentation on Data Retention Policies
What is used to diagnose and troubleshoot network connections to Snowflake?
Show Answer
Answer:
A
Explanation:
SnowCD (Snowflake Connectivity Diagnostic Tool) is used to diagnose and troubleshoot network connections to Snowflake. It runs a series of connection checks to evaluate the network connection to Snowflake.
How many resource monitors can be assigned at the account level?
Show Answer
Answer:
A
Explanation:
Snowflake allows for only one resource monitor to be assigned at the account level. This monitor oversees the credit usage of all the warehouses in the account. References: Snowflake Documentation
How would a user run a multi-cluster warehouse in maximized mode?
Options:
A.
Configure the maximum clusters setting to "Maximum."
B.
Turn on the additional clusters manually after starting the warehouse.
C.
Set the minimum clusters and maximum clusters settings to the same value.
D.
Set the minimum clusters and maximum clusters settings to different values.
Show Answer
Answer:
C
Explanation:
To run a multi-cluster warehouse in maximized mode, a user should set the minimum and maximum number of clusters to the same value. This ensures that all clusters are available when the warehouse is started, providing maximum resources for query execution. References: Snowflake Documentation.
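A minimal sketch of a maximized-mode warehouse (name and sizing are hypothetical); both cluster-count settings are equal, so all three clusters run whenever the warehouse runs:
CREATE WAREHOUSE my_wh WITH
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 3
  MAX_CLUSTER_COUNT = 3;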
Which Snowflake tool would be BEST to troubleshoot network connectivity?
Show Answer
Answer:
D
Explanation:
SnowCD (Snowflake Connectivity Diagnostic Tool) is the best tool provided by Snowflake for troubleshooting network connectivity issues. It helps diagnose and resolve issues related to connecting to Snowflake services.
Which statement describes pruning?
Options:
A.
The filtering or disregarding of micro-partitions that are not needed to return a query.
B.
The return of micro-partition values that overlap with each other to reduce a query's runtime.
C.
A service that is handled by the Snowflake Cloud Services layer to optimize caching.
D.
The ability to allow the result of a query to be accessed as if it were a table.
Show Answer
Answer:
A
Explanation:
Pruning in Snowflake refers to the process of filtering or disregarding micro-partitions that are not needed to satisfy the conditions of a query. This optimization technique helps reduce the amount of data scanned, thereby improving query performance.
Which file format will keep floating-point numbers from being truncated when data is unloaded?
Show Answer
Answer:
D
Explanation:
The Parquet file format is known for preserving the precision of floating-point numbers when data is unloaded, preventing truncation of the values.
How can a Snowflake user optimize query performance in Snowflake? (Select TWO).
Options:
C.
Enable the search optimization service.
Show Answer
Answer:
B, C
Explanation:
To optimize query performance in Snowflake, users can cluster a table, which organizes the data in a way that minimizes the amount of data scanned during queries. Additionally, enabling the search optimization service can improve the performance of selective point lookup queries on large tables.
Which of the following describes the Snowflake Cloud Services layer?
Options:
A.
Coordinates activities in the Snowflake account
B.
Executes queries submitted by the Snowflake account users
C.
Manages quotas on the Snowflake account storage
D.
Manages the virtual warehouse cache to speed up queries
Show Answer
Answer:
A
Explanation:
The Snowflake Cloud Services layer coordinates activities within the Snowflake account. It is responsible for tasks such as authentication, infrastructure management, metadata management, query parsing and optimization, and access control.
What is a characteristic of the Snowflake Query Profile?
Options:
A.
It can provide statistics on a maximum number of 100 queries per week.
B.
It provides a graphic representation of the main components of the query processing.
C.
It provides detailed statistics about which queries are using the greatest number of compute resources.
D.
It can be used by third-party software using the Query Profile API.
Show Answer
Answer:
B
Explanation:
The Snowflake Query Profile provides a graphic representation of the main components of the query processing. This visual aid helps users understand the execution details and performance characteristics of their queries.
What is the MAXIMUM size limit for a record of a VARIANT data type?
Show Answer
Answer:
B
Explanation:
The maximum size limit for a record of a VARIANT data type in Snowflake is 16MB. This allows for storing semi-structured data types like JSON, Avro, ORC, Parquet, or XML within a single VARIANT column.
What type of columns does Snowflake recommend to be used as clustering keys? (Select TWO).
Options:
B.
A column with very low cardinality
C.
A column with very high cardinality
D.
A column that is most actively used in selective filters
E.
A column that is most actively used in join predicates
Show Answer
Answer:
C, D
Explanation:
Snowflake recommends using columns with very high cardinality and those that are most actively used in selective filters as clustering keys. High cardinality columns have a wide range of unique values, which helps in evenly distributing the data across micro-partitions. Columns used in selective filters help in pruning the number of micro-partitions to scan, thus improving query performance.
In which Snowflake layer does Snowflake reorganize data into its internal optimized, compressed, columnar format?
Show Answer
Answer:
B
Explanation:
Snowflake reorganizes data into its internal optimized, compressed, columnar format in the Database Storage layer. This process is part of how Snowflake manages data storage, ensuring efficient data retrieval and query performance.
Which stages are used with the Snowflake PUT command to upload files from a local file system? (Choose three.)
Show Answer
Answer:
B, D, F
Explanation:
The Snowflake PUT command is used to upload files from a local file system to Snowflake stages, specifically the user stage, table stage, and internal named stage. These stages are where the data files are temporarily stored before being loaded into Snowflake tables.
Two users share a virtual warehouse named wh_dev_01. When one of the users loads data, the other one experiences performance issues while querying data.
How does Snowflake recommend resolving this issue?
Options:
A.
Scale up the existing warehouse.
B.
Create separate warehouses for each user.
C.
Create separate warehouses for each workload.
D.
Stop loading and querying data at the same time.
Show Answer
Answer:
C
Explanation:
Snowflake recommends creating separate warehouses for each workload to resolve performance issues caused by shared virtual warehouses. This ensures that one user's activities do not overutilize resources and degrade the performance of another user's activities.
A user needs to create a materialized view in the schema MYDB.MYSCHEMA. Which statements will provide this access?
Options:
A.
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
B.
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
C.
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
D.
GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;
Show Answer
Answer:
A
Explanation:
To provide a user with the necessary access to create a materialized view in a schema, the user must be granted a role that has the CREATE MATERIALIZED VIEW privilege on that schema. First, the role is granted to the user, and then the privilege is granted to the role.
What is the minimum Snowflake edition needed for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery?
Options:
D.
Virtual Private Snowflake
Show Answer
Answer:
C
Explanation:
The minimum Snowflake edition required for database failover and fail-back between Snowflake accounts for business continuity and disaster recovery is the Business Critical edition. References: Snowflake Documentation.
Which languages require that User-Defined Function (UDF) handlers be written inline? (Select TWO).
Show Answer
Answer:
B, E
Explanation:
User-Defined Function (UDF) handlers must be written inline for JavaScript and SQL. These languages allow the UDF logic to be included directly within the SQL statement that creates the UDF.
What effect does WAIT_FOR_COMPLETION = TRUE have when running an ALTER WAREHOUSE command and changing the warehouse size?
Options:
A.
The warehouse size does not change until all queries currently running in the warehouse have completed.
B.
The warehouse size does not change until all queries currently in the warehouse queue have completed.
C.
The warehouse size does not change until the warehouse is suspended and restarted.
D.
It does not return from the command until the warehouse has finished changing its size.
Show Answer
Answer:
D
Explanation:
The WAIT_FOR_COMPLETION = TRUE parameter in an ALTER WAREHOUSE command ensures that the command does not return until the warehouse has completed resizing. This means that the command will wait until all the necessary compute resources have been provisioned and the warehouse size has been changed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
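A minimal sketch, with a hypothetical warehouse name; the statement blocks until the resize completes:
ALTER WAREHOUSE my_wh SET
  WAREHOUSE_SIZE = 'XLARGE'
  WAIT_FOR_COMPLETION = TRUE;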
Which clients does Snowflake support Multi-Factor Authentication (MFA) token caching for? (Select TWO).
Show Answer
Answer:
C, D
Explanation:
Multi-Factor Authentication (MFA) token caching is typically supported for clients that maintain a persistent connection or session with Snowflake, such as the ODBC driver and Python connector, to reduce the need for repeated MFA challenges.
How does Snowflake allow a data provider with an Azure account in central Canada to share data with a data consumer on AWS in Australia?
Options:
A.
The data provider in Azure Central Canada can create a direct share to AWS Asia Pacific, if they are both in the same organization.
B.
The data consumer and data provider can form a Data Exchange within the same organization to create a share from Azure Central Canada to AWS Asia Pacific.
C.
The data provider uses the GET DATA workflow in the Snowflake Data Marketplace to create a share between Azure Central Canada and AWS Asia Pacific.
D.
The data provider must replicate the database to a secondary account in AWS Asia Pacific within the same organization then create a share to the data consumer's account.
Show Answer
Answer:
D
Explanation:
Snowflake allows data providers to share data with consumers across different cloud platforms and regions through database replication. The data provider must replicate the database to a secondary account in the target region or cloud platform within the same organization, and then create a share to the data consumer’s account. This process ensures that the data is available in the consumer’s region and on their cloud platform, facilitating seamless data sharing. References: Sharing data securely across regions and cloud platforms | Snowflake Documentation
What is the MAXIMUM Time Travel retention period for a transient table?
Show Answer
Answer:
B
Explanation:
The maximum Time Travel retention period for a transient table in Snowflake is 1 day. This is the default and maximum duration for which Snowflake maintains the historical data for transient tables, allowing users to query data as it appeared at any point within the past 24 hours.
For the ALLOWED_VALUES tag property, what is the MAXIMUM number of possible string values for a single tag?
Show Answer
Answer:
D
Explanation:
For the ALLOWED VALUES tag property, the maximum number of possible string values for a single tag is 256. This allows for a wide range of values to be assigned to a tag when it is set on an object
Which feature allows a user the ability to control the organization of data in a micro-partition?
Options:
B.
Search Optimization Service
D.
Horizontal Partitioning
Show Answer
Answer:
C
Explanation:
Automatic Clustering is a feature that allows users to control the organization of data within micro-partitions in Snowflake. By defining clustering keys, Snowflake can automatically reorganize the data in micro-partitions to optimize query performance.
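A minimal sketch of defining a clustering key, with hypothetical table and column names; once a key is defined, Automatic Clustering maintains the organization of the micro-partitions:
ALTER TABLE sales CLUSTER BY (sale_date);
SELECT SYSTEM$CLUSTERING_INFORMATION('sales');  -- inspect how well the table is clustered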
Which of the following can be used when unloading data from Snowflake? (Choose two.)
Options:
A.
When unloading semi-structured data, it is recommended that the STRIP_OUTER_ARRAY option be used.
B.
Use the ENCODING file format option to change the encoding from the default UTF-8.
C.
The OBJECT_CONSTRUCT function can be used to convert relational data to semi-structured data.
D.
By using the SINGLE = TRUE parameter, a single file up to 5 GB in size can be exported to the storage layer.
E.
Use the PARSE_JSON function to ensure structured data will be unloaded into the VARIANT data type.
Show Answer
Answer:
C, D
Explanation:
The OBJECT_CONSTRUCT function is used in Snowflake to create a JSON object from relational data, which is useful when unloading semi-structured data. The SINGLE = TRUE parameter is used when unloading data to ensure that the data is exported as a single file, which can be up to 5 GB in size. References: [COF-C02] SnowPro Core Certification Exam Study Guide
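A minimal sketch using SINGLE = TRUE, with hypothetical names; MAX_FILE_SIZE is raised from its default because single-file unloads can be up to 5 GB:
COPY INTO @my_stage/export/data.json
FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table)
FILE_FORMAT = (TYPE = JSON)
SINGLE = TRUE
MAX_FILE_SIZE = 5000000000;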
A materialized view should be created when which of the following occurs? (Choose two.)
Options:
A.
There is minimal cost associated with running the query.
B.
The query consumes many compute resources every time it runs.
C.
The base table gets updated frequently.
D.
The query is highly optimized and does not consume many compute resources.
E.
The results of the query do not change often and are used frequently.
Show Answer
Answer:
B, E
Explanation:
A materialized view is beneficial when the query consumes many compute resources every time it runs (B), and when the results of the query do not change often and are used frequently (E). This is because materialized views store pre-computed data, which can speed up query performance for workloads that are run frequently or are complex.
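A minimal sketch of such a view, with hypothetical names: an aggregation over a large table that is queried often but changes rarely is a good candidate.
CREATE MATERIALIZED VIEW daily_totals AS
SELECT order_date, SUM(amount) AS total_amount
FROM orders
GROUP BY order_date;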
Which role has the ability to create and manage users and roles?
Show Answer
Answer:
B
Explanation:
The USERADMIN role in Snowflake has the ability to create and manage users and roles within the Snowflake environment. This role is specifically dedicated to user and role management and creation.
What is the difference between a stored procedure and a User-Defined Function (UDF)?
Options:
A.
Stored procedures can execute database operations while UDFs cannot.
B.
Returning a value is required in a stored procedure while returning values in a UDF is optional.
C.
Values returned by a stored procedure can be used directly in a SQL statement while the values returned by a UDF cannot.
D.
Multiple stored procedures can be called as part of a single executable statement while a single SQL statement can only call one UDF at a time.
Show Answer
Answer:
A
Explanation:
Stored procedures in Snowflake can perform a variety of database operations, including DDL and DML, whereas UDFs are designed to return values and cannot execute database operations.
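A minimal sketch of a stored procedure performing a DML operation (Snowflake Scripting; all names are hypothetical):
CREATE OR REPLACE PROCEDURE purge_old_events()
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
  DELETE FROM events WHERE event_ts < DATEADD(day, -90, CURRENT_TIMESTAMP());
  RETURN 'purged';
END;
$$;

CALL purge_old_events();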
If a multi-cluster warehouse is using an economy scaling policy, how long will queries wait in the queue before another cluster is started?
Show Answer
Answer:
B
Explanation:
In a multi-cluster warehouse with an economy scaling policy, queries will wait in the queue for 2 minutes before another cluster is started. This is to minimize costs by allowing queries to queue up for a short period before adding additional compute resources. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Options:
A.
Grant the OWNERSHIP privilege on the schema.
B.
Grant the OWNERSHIP privilege on the database.
C.
Grant the MANAGE GRANTS global privilege.
D.
Grant ALL privileges on the schema.
Show Answer
Answer:
C
Explanation:
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
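A minimal sketch, with a hypothetical role name:
GRANT MANAGE GRANTS ON ACCOUNT TO ROLE grants_admin;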
Which statements reflect key functionalities of a Snowflake Data Exchange? (Choose two.)
Options:
A.
If an account is enrolled with a Data Exchange, it will lose its access to the Snowflake Marketplace.
B.
A Data Exchange allows groups of accounts to share data privately among the accounts.
C.
A Data Exchange allows accounts to share data with third, non-Snowflake parties.
D.
Data Exchange functionality is available by default in accounts using the Enterprise edition or higher.
E.
The sharing of data in a Data Exchange is bidirectional. An account can be a provider for some datasets and a consumer for others.
Show Answer
Answer:
B, E
Explanation:
A Snowflake Data Exchange allows groups of accounts to share data privately among the accounts (B), and it supports bidirectional sharing, meaning an account can be both a provider and a consumer of data (E). This facilitates secure and governed data collaboration within a selected group.
How many network policies can be assigned to an account or specific user at a time?
Show Answer
Answer:
A
Explanation:
A security administrator can create multiple network policies, but only one network policy can be active for an account or a specific user at any given time. This ensures that there is a clear and consistent policy being applied without conflicts.
Which native data types are used for storing semi-structured data in Snowflake? (Select TWO)
Show Answer
Answer:
B, E
Explanation:
Snowflake supports semi-structured data types, which include OBJECT and VARIANT. These data types are capable of storing JSON-like data structures, allowing for flexibility in data representation. OBJECT can directly contain VARIANT, and thus indirectly contain any other data type, including itself.
A tabular User-Defined Function (UDF) is defined by specifying a return clause that contains which keyword?
Show Answer
Answer:
B
Explanation:
In Snowflake, a tabular User-Defined Function (UDF) is defined with a RETURNS clause that includes the keyword TABLE. This indicates that the UDF will return a set of rows, which can be used in the FROM clause of a query.
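A minimal sketch of a SQL tabular UDF, with hypothetical names; note the TABLE keyword in the RETURNS clause and the TABLE(...) wrapper when the function is queried:
CREATE OR REPLACE FUNCTION orders_for_customer(cust_id VARCHAR)
RETURNS TABLE (order_id VARCHAR, amount NUMBER)
AS
$$
  SELECT order_id, amount
  FROM orders
  WHERE customer_id = cust_id
$$;

SELECT * FROM TABLE(orders_for_customer('C001'));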
Which Snowflake feature will allow small volumes of data to continuously load into Snowflake and will incrementally make the data available for analysis?
Show Answer
Answer:
B
Explanation:
The Snowflake feature that allows for small volumes of data to be continuously loaded into Snowflake and incrementally made available for analysis is Snowpipe. Snowpipe is designed for near-real-time data loading, enabling data to be loaded as soon as it's available in a stage.
The first user assigned to a new account, ACCOUNTADMIN, should create at least one additional user with which administrative privilege?
Show Answer
Answer:
A
Explanation:
The first user assigned to a new Snowflake account, typically with the ACCOUNTADMIN role, should create at least one additional user with the USERADMIN administrative privilege. This role is responsible for creating and managing users and roles within the Snowflake account. References: Access control considerations | Snowflake Documentation
Which of the following are characteristics of security in Snowflake?
Options:
A.
Account and user authentication is only available with the Snowflake Business Critical edition.
B.
Support for HIPAA and GDPR compliance is available for all Snowflake editions.
C.
Periodic rekeying of encrypted data is available with the Snowflake Enterprise edition and higher.
D.
Private communication to internal stages is allowed in the Snowflake Enterprise edition and higher.
Show Answer
Answer:
C
Explanation:
One of the security features of Snowflake includes the periodic rekeying of encrypted data, which is available with the Snowflake Enterprise edition and higher. This ensures that the encryption keys are rotated regularly to maintain a high level of security. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What statistical information in a Query Profile indicates that the query is too large to fit in memory? (Select TWO).
Options:
A.
Bytes spilled to local cache.
B.
Bytes spilled to local storage.
C.
Bytes spilled to remote cache.
D.
Bytes spilled to remote storage.
E.
Bytes spilled to remote metastore.
Show Answer
Answer:
B, D
Explanation:
In a Query Profile, the statistical information that indicates a query is too large to fit in memory includes bytes spilled to local storage and bytes spilled to remote storage, which are the spilling statistics Snowflake actually reports. These metrics show that the working data set of the query exceeded the memory available on the warehouse nodes, causing intermediate results to be written first to the local disk of the warehouse nodes and then, if needed, to remote cloud storage.
How would a user execute a series of SQL statements using a task?
Options:
A.
Include the SQL statements in the body of the task CREATE TASK mytask .. AS INSERT INTO target1 SELECT .. FROM stream_s1 WHERE .. INSERT INTO target2 SELECT .. FROM stream_s1
WHERE ..
B.
A stored procedure can have only one DML statement per stored procedure invocation and therefore the user should sequence stored procedure calls in the task definition CREATE TASK mytask .... AS
call stored_proc1(); call stored_proc2();
C.
Use a stored procedure executing multiple SQL statements and invoke the stored procedure from the task. CREATE TASK mytask .... AS call stored_proc_multiple_statements_inside();
D.
Create a task for each SQL statement (e.g. resulting in task1, task2, etc.) and string the series of SQL statements by having a control task calling task1, task2, etc. sequentially.
Show Answer
Answer:
C
Explanation:
To execute a series of SQL statements using a task, a user would use a stored procedure that contains multiple SQL statements and invoke this stored procedure from the task. References: Snowflake Documentation.
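A minimal sketch of such a task (the warehouse name and schedule are hypothetical, and the procedure name is taken from the question); note that a newly created task is suspended until it is resumed:
CREATE TASK mytask
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  CALL stored_proc_multiple_statements_inside();

ALTER TASK mytask RESUME;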
Which of the following features, associated with Continuous Data Protection (CDP), require additional Snowflake-provided data storage? (Choose two.)
Show Answer
Answer:
B, C
Explanation:
The features associated with Continuous Data Protection (CDP) that require additional Snowflake-provided data storage are Time Travel and Fail-safe. Time Travel allows users to access historical data within a defined period, while Fail-safe provides an additional layer of data protection beyond the Time Travel period. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which of the following is a data tokenization integration partner?
Show Answer
Answer:
A
Explanation:
Protegrity is listed as a data tokenization integration partner for Snowflake. This partnership allows Snowflake users to utilize Protegrity's tokenization solutions within the Snowflake environment.
References: [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
When loading data into Snowflake, how should the data be organized?
Options:
A.
Into single files with 100-250 MB of compressed data per file
B.
Into single files with 1-100 MB of compressed data per file
C.
Into files of maximum size of 1 GB of compressed data per file
D.
Into files of maximum size of 4 GB of compressed data per file
Show Answer
Answer:
A
Explanation:
When loading data into Snowflake, it is recommended to organize the data into single files with 100-250 MB of compressed data per file. This size range is optimal for parallel processing and can help in achieving better performance during data loading operations. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is an advantage of using an explain plan instead of the query profiler to evaluate the performance of a query?
Options:
A.
The explain plan output is available graphically.
B.
An explain plan can be used to conduct performance analysis without executing a query.
C.
An explain plan will handle queries with temporary tables and the query profiler will not.
D.
An explain plan's output will display automatic data skew optimization information.
Show Answer
Answer:
B
Explanation:
An explain plan is beneficial because it allows for the evaluation of how a query will be processed without the need to actually execute the query. This can help in understanding the query's performance implications and potential bottlenecks without consuming resources that would be used if the query were run.
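A minimal sketch, with hypothetical table names; EXPLAIN returns the logical plan without executing the query or consuming warehouse compute:
EXPLAIN
SELECT c.name, SUM(o.amount) AS total
FROM orders o
JOIN customers c ON o.customer_id = c.id
GROUP BY c.name;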
A company needs to allow some users to see Personally Identifiable Information (PII) while limiting other users from seeing the full value of the PII.
Which Snowflake feature will support this?
Options:
D.
Role based access control
Show Answer
Answer:
B
Explanation:
Data masking policies in Snowflake allow for the obfuscation of specific data within a field, enabling some users to see the full data while limiting others. This feature is particularly useful for handling PII, ensuring that sensitive information is only visible to authorized users.
By default, which Snowflake role is required to create a share?
Show Answer
Answer:
D
Explanation:
By default, the Snowflake role required to create a share is ACCOUNTADMIN (D). This role has the necessary privileges to perform administrative tasks, including creating shares for data sharing purposes.
Which of the following is an example of an operation that can be completed without requiring compute, assuming no queries have been executed previously?
Options:
A.
SELECT SUM (ORDER_AMT) FROM SALES;
B.
SELECT AVG(ORDER_QTY) FROM SALES;
C.
SELECT MIN(ORDER_AMT) FROM SALES;
D.
SELECT ORDER_AMT * ORDER_QTY FROM SALES;
Show Answer
Answer:
C
Explanation:
Snowflake stores metadata about each micro-partition, including the MIN and MAX values and row counts for each column. Because of this, a query such as SELECT MIN(ORDER_AMT) FROM SALES can be answered by the cloud services layer directly from metadata, without starting a virtual warehouse. Aggregations such as SUM and AVG, and row-level expressions such as ORDER_AMT * ORDER_QTY, require scanning the data and therefore consume compute.
When cloning a database, what is cloned with the database? (Choose two.)
Options:
A.
Privileges on the database
B.
Existing child objects within the database
C.
Future child objects within the database
D.
Privileges on the schemas within the database
E.
Only schemas and tables within the database
Show Answer
Answer:
B, D
Explanation:
When cloning a database in Snowflake, the clone includes all existing child objects within the database (schemas, tables, views, etc.), and those child objects retain the privileges that were granted on their sources. However, the clone does not include privileges granted on the source database itself, nor does it include future child objects.
References: [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
Which of the following significantly improves the performance of selective point lookup queries on a table?
Options:
D.
Search Optimization Service
Show Answer
Answer:
D
Explanation:
The Search Optimization Service significantly improves the performance of selective point lookup queries on tables by creating and maintaining a persistent data structure called a search access path, which allows some micro-partitions to be skipped when scanning the table.
In the Snowflake access control model, which entity owns an object by default?
Options:
A.
The user who created the object
C.
Ownership depends on the type of object
D.
The role used to create the object
Show Answer
Answer:
D
Explanation:
In Snowflake's access control model, the default owner of an object is the role that was used to create the object. This role has the OWNERSHIP privilege on the object and can grant access to other roles.
What is the maximum Time Travel retention period for a temporary Snowflake table?
Show Answer
Answer:
B
Explanation:
The maximum Time Travel retention period for a temporary Snowflake table is 1 day. This is the standard retention period for temporary tables, which allows for accessing historical data within a 24-hour window.
What actions will prevent leveraging of the ResultSet cache? (Choose two.)
Options:
A.
Removing a column from the query SELECT list
B.
Stopping the virtual warehouse that the query is running against
C.
Clustering of the data used by the query
D.
Executing the RESULT_SCAN() table function
E.
Changing a column that is not in the cached query
Show Answer
Answer:
A, E
Explanation:
The ResultSet cache is leveraged to quickly return results for repeated queries, and it is reused only when the new query syntactically matches the previously executed query and the underlying table data has not changed. Removing a column from the query SELECT list (A) produces a different query text, so the cached result cannot be reused. Changing a column that is not in the cached query (E) still modifies the table's data, which invalidates the cached result. By contrast, stopping the virtual warehouse does not prevent reuse, because the result cache is maintained in the cloud services layer, and the RESULT_SCAN() table function itself reads from the result cache rather than bypassing it.
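A minimal sketch of RESULT_SCAN, with a hypothetical first query; the function reads the persisted result of the previous query and processes it like a table:
SELECT * FROM my_table WHERE id < 100;
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));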
What happens to historical data when the retention period for an object ends?
Options:
A.
The data is cloned into a historical object.
B.
The data moves to Fail-safe
C.
Time Travel on the historical data is dropped.
D.
The object containing the historical data is dropped.
Show Answer
Answer:
B
Explanation:
When the Time Travel retention period for an object ends in Snowflake, the historical data moves into Fail-safe (B). Once in Fail-safe, the data can no longer be queried with Time Travel and, for permanent tables, is recoverable only by Snowflake Support for a further 7-day period.
A virtual warehouse is created using the following command:
Create warehouse my_WH with
warehouse_size = MEDIUM
min_cluster_count = 1
max_cluster_count = 1
auto_suspend = 60
auto_resume = true;
The image below is a graphical representation of the warehouse utilization across two days.
What action should be taken to address this situation?
Options:
A.
Increase the warehouse size from Medium to 2XL.
B.
Increase the value for the parameter MAX_CONCURRENCY_LEVEL.
C.
Configure the warehouse to a multi-cluster warehouse.
D.
Lower the value of the parameter STATEMENT_QUEUED_TIMEOUT_IN_SECONDS.
Show Answer
Answer:
C
Explanation:
The graphical representation of warehouse utilization indicates periods of significant queuing, suggesting that the current single cluster cannot efficiently handle all incoming queries. Configuring the warehouse to a multi-cluster warehouse will distribute the load among multiple clusters, reducing queuing times and improving overall performance.
References = Snowflake Documentation on Multi-cluster Warehouses
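A minimal sketch of the suggested change (the warehouse name is from the question; the cluster counts are illustrative, and multi-cluster warehouses require Enterprise Edition or higher):
ALTER WAREHOUSE my_WH SET
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 3;
-- With MAX_CLUSTER_COUNT greater than MIN_CLUSTER_COUNT, the warehouse runs in auto-scale mode
-- and starts additional clusters when queries begin to queue.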
A user created a transient table and made several changes to it over the course of several days. Three days after the table was created, the user would like to go back to the first version of the table.
How can this be accomplished?
Options:
A.
Use Time Travel, as long as DATA_RETENTION_TIME_IN_DAYS was set to at least 3 days.
B.
The transient table version cannot be retrieved after 24 hours.
C.
Contact Snowflake Support to have the data retrieved from Fail-safe storage.
D.
Use the FAIL_SAFE parameter for Time Travel to retrieve the data from Fail-safe storage.
Show Answer
Answer:
A
Explanation:
To go back to the first version of a transient table created three days prior, one can use Time Travel if the DATA_RETENTION_TIME_IN_DAYS was set to at least 3 days. This allows the user to access historical data within the specified retention period. References: [COF-C02] SnowPro Core Certification Exam Study Guide
In a Snowflake role hierarchy, what is the top-level role?
Show Answer
Answer:
C
Explanation:
In a Snowflake role hierarchy, the top-level role is ACCOUNTADMIN. This role has the highest level of privileges and is capable of performing all administrative functions within the Snowflake account.
When should a multi-cluster warehouse be used in auto-scaling mode?
Options:
A.
When it is unknown how much compute power is needed
B.
If the select statement contains a large number of temporary tables or Common Table Expressions (CTEs)
C.
If the runtime of the executed query is very slow
D.
When a large number of concurrent queries are run on the same warehouse
Show Answer
Answer:
D
Explanation:
A multi-cluster warehouse should be used in auto-scaling mode when there is a need to handle a large number of concurrent queries. Auto-scaling allows Snowflake to automatically add or remove compute clusters to balance the load, ensuring that performance remains consistent during varying levels of demand.
What are the responsibilities of Snowflake's Cloud Service layer? (Choose three.)
Options:
C.
Virtual warehouse caching
D.
Query parsing and optimization
F.
Physical storage of micro-partitions
Show Answer
Answer:
A, B, D
Explanation:
The responsibilities of Snowflake’s Cloud Service layer include authentication (A), which ensures secure access to the platform; resource management (B), which involves allocating and managing compute resources; and query parsing and optimization (D), which improves the efficiency and performance of SQL query execution.
Which Snowflake layer is always leveraged when accessing a query from the result cache?
Show Answer
Answer:
Explanation:
The Cloud Services layer in Snowflake is responsible for managing the result cache. When a query is executed, the results are stored in this cache, and subsequent identical queries can leverage these cached results without re-executing the entire query.
What do the terms scale up and scale out refer to in Snowflake? (Choose two.)
Options:
A.
Scaling out adds clusters of the same size to a virtual warehouse to handle more concurrent queries.
B.
Scaling out adds clusters of varying sizes to a virtual warehouse.
C.
Scaling out adds additional database servers to an existing running cluster to handle more concurrent queries.
D.
Snowflake recommends using both scaling up and scaling out to handle more concurrent queries.
E.
Scaling up resizes a virtual warehouse so it can handle more complex workloads.
F.
Scaling up adds additional database servers to an existing running cluster to handle larger workloads.
Show Answer
Answer:
A, E
Explanation:
Scaling out in Snowflake involves adding clusters of the same size to a virtual warehouse, which allows for handling more concurrent queries without affecting the performance of individual queries. Scaling up refers to resizing a virtual warehouse to increase its compute resources, enabling it to handle more complex workloads and larger queries more efficiently.
If a size Small virtual warehouse is made up of two servers, how many servers make up a Large warehouse?
Show Answer
Answer:
B
Explanation:
In Snowflake, each size increase in virtual warehouses doubles the number of servers. Therefore, if a size Small virtual warehouse is made up of two servers, a Large warehouse, which is two sizes larger, would be made up of eight servers (2 servers for Small, 4 for Medium, and 8 for Large).
Size specifies the amount of compute resources available per cluster in a warehouse. For the full list of supported warehouse sizes, see https://docs.snowflake.com/en/user-guide/warehouses-overview.html
Which of the following statements apply to Snowflake in terms of security? (Choose two.)
Options:
A.
Snowflake leverages a Role-Based Access Control (RBAC) model.
B.
Snowflake requires a user to configure an IAM user to connect to the database.
C.
All data in Snowflake is encrypted.
D.
Snowflake can run within a user's own Virtual Private Cloud (VPC).
E.
All data in Snowflake is compressed.
Show Answer
Answer:
A, C
Explanation:
Snowflake uses a Role-Based Access Control (RBAC) model to manage access to data and resources. Additionally, Snowflake ensures that all data is encrypted, both at rest and in transit, to provide a high level of security for data stored within the platform. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What are common issues found by using the Query Profile? (Choose two.)
Options:
A.
Identifying queries that will likely run very slowly before executing them
B.
Locating queries that consume a high amount of credits
C.
Identifying logical issues with the queries
D.
Identifying inefficient micro-partition pruning
E.
Data spilling to a local or remote disk
Show Answer
Answer:
D, E
Explanation:
The Query Profile in Snowflake is used to identify performance issues with queries. Common issues that can be found using the Query Profile include identifying inefficient micro-partition pruning (D) and data spilling to a local or remote disk (E). Micro-partition pruning is related to the efficiency of query execution, and data spilling occurs when the memory is insufficient, causing the query to write data to disk, which can slow down the query performance.
Which statements are correct concerning the leveraging of third-party data from the Snowflake Data Marketplace? (Choose two.)
Options:
A.
Data is live, ready-to-query, and can be personalized.
B.
Data needs to be loaded into a cloud provider as a consumer account.
C.
Data is not available for copying or moving to an individual Snowflake account.
D.
Data is available without copying or moving.
E.
Data transformations are required when combining Data Marketplace datasets with existing data in Snowflake.
Show Answer
Answer:
A, D
Explanation:
When leveraging third-party data from the Snowflake Data Marketplace, the data is live, ready-to-query, and can be personalized. Additionally, the data is available without the need for copying or moving it to an individual Snowflake account, allowing for seamless integration with existing data.
Files have been uploaded to a Snowflake internal stage. The files now need to be deleted.
Which SQL command should be used to delete the files?
Show Answer
Answer:
C
Explanation:
The SQL command used to delete files from a Snowflake internal stage is REMOVE. This command removes files that have been staged in an internal stage; files in external stages are managed through the cloud provider's own storage interface.
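A minimal sketch (stage name and pattern are hypothetical):
REMOVE @my_int_stage/loaded/;
-- Or remove only files matching a regular expression:
REMOVE @my_int_stage/loaded/ PATTERN = '.*[.]csv[.]gz';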
Which command should be used to load data from a file, located in an external stage, into a table in Snowflake?
Show Answer
Answer:
D
Explanation:
The COPY command is used in Snowflake to load data from files located in an external stage into a table. This command allows for efficient and parallelized data loading from various file formats.
References = [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
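For illustration, a minimal load from an external stage (all object names are hypothetical):
COPY INTO my_table
FROM @my_ext_stage/data/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);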
Which command should be used to download files from a Snowflake stage to a local folder on a client's machine?
Show Answer
Answer:
B
Explanation:
The GET command is used to download files from a Snowflake stage to a local folder on a client’s machine.
[Reference: https://docs.snowflake.com/en/sql-reference/sql/get.html]
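A minimal sketch, as run from a client such as SnowSQL (stage and local paths are hypothetical):
GET @my_stage/results/data_0_0_0.csv.gz file:///tmp/downloads/;
-- Downloads the staged file to the local /tmp/downloads/ folder.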
Which services does the Snowflake Cloud Services layer manage? (Choose two.)
Show Answer
Answer:
C, E
Explanation:
The Snowflake Cloud Services layer manages various services, including authentication and metadata management. This layer ties together all the different components of Snowflake to process user requests, manage sessions, and control access.
Which file formats are supported for unloading data from Snowflake? (Choose two.)
Options:
E.
Delimited (CSV, TSV, etc.)
Show Answer
Answer:
B, E
Explanation:
Snowflake supports unloading data in JSON and delimited file formats such as CSV and TSV. These formats are commonly used for data interchange and are supported by Snowflake for unloading operations.
Which statements are true concerning Snowflake's underlying cloud infrastructure? (Select THREE).
Options:
A.
Snowflake data and services are deployed in a single availability zone within a cloud provider's region.
B.
Snowflake data and services are available in a single cloud provider and a single region, the use of multiple cloud providers is not supported.
C.
Snowflake can be deployed in a customer's private cloud using the customer's own compute and storage resources for Snowflake compute and storage
D.
Snowflake uses the core compute and storage services of each cloud provider for its own compute and storage
E.
All three layers of Snowflake's architecture (storage, compute, and cloud services) are deployed and managed entirely on a selected cloud platform
F.
Snowflake data and services are deployed in at least three availability zones within a cloud provider's region
Show Answer
Answer:
D, E, F
Explanation:
Snowflake’s architecture is designed to operate entirely on cloud infrastructure. It uses the core compute and storage services of each cloud provider, which allows it to leverage the scalability and reliability of cloud resources. Snowflake’s services are deployed across multiple availability zones within a cloud provider’s region to ensure high availability and fault tolerance. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What types of data listings are available in the Snowflake Data Marketplace? (Choose two.)
Show Answer
Answer:
C, E
Explanation:
In the Snowflake Data Marketplace, the types of data listings available include ‘Vendor’, which refers to the providers of data, and ‘Personalized’, which indicates customized data offerings tailored to specific consumer needs.
In an auto-scaling multi-cluster virtual warehouse with the setting SCALING_POLICY = ECONOMY enabled, when is another cluster started?
Options:
A.
When the system has enough load for 2 minutes
B.
When the system has enough load for 6 minutes
C.
When the system has enough load for 8 minutes
D.
When the system has enough load for 10 minutes
Show Answer
Answer:
A
Explanation:
In an auto-scaling multi-cluster virtual warehouse with the SCALING_POLICY set to ECONOMY, another cluster is started when the system has enough load for 2 minutes (A). This policy is designed to optimize the balance between performance and cost, starting additional clusters only when the sustained load justifies it.
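For illustration, a warehouse configured this way (name and counts are hypothetical):
CREATE WAREHOUSE econ_wh WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 4
SCALING_POLICY = 'ECONOMY';
-- ECONOMY favors keeping running clusters fully loaded; STANDARD (the default) starts clusters more aggressively.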
Which of the following accurately describes shares?
Options:
A.
Tables, secure views, and secure UDFs can be shared
C.
Data consumers can clone a new table from a share
D.
Access to a share cannot be revoked once granted
Show Answer
Answer:
A
Explanation:
Shares in Snowflake are named objects that encapsulate all the information required to share databases, schemas, tables, secure views, and secure UDFs. These objects can be added to a share by granting privileges on them to the share, either directly or via a database role.
A user has unloaded data from a Snowflake table to an external stage.
Which command can be used to verify if data has been uploaded to the external stage named my_stage?
Show Answer
Answer:
B
Explanation:
The list @my_stage command in Snowflake can be used to verify if data has been uploaded to an external stage named my_stage. This command provides a list of files that are present in the specified stage.
Which Snowflake architectural layer is responsible for a query execution plan?
Show Answer
Answer:
C
Explanation:
In Snowflake’s architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
How does Snowflake Fail-safe protect data in a permanent table?
Options:
A.
Fail-safe makes data available up to 1 day, recoverable by user operations.
B.
Fail-safe makes data available for 7 days, recoverable by user operations.
C.
Fail-safe makes data available for 7 days, recoverable only by Snowflake Support.
D.
Fail-safe makes data available up to 1 day, recoverable only by Snowflake Support.
Show Answer
Answer:
C
Explanation:
Snowflake’s Fail-safe provides a 7-day period during which data in a permanent table may be recoverable, but only by Snowflake Support, not by user operations.
What is the MINIMUM edition of Snowflake that is required to use a SCIM security integration?
Options:
A.
Business Critical Edition
C.
Virtual Private Snowflake (VPS)
Show Answer
Answer:
D
Explanation:
The minimum edition of Snowflake required to use a SCIM security integration is the Enterprise Edition. SCIM integrations are used for automated management of user identities and groups, and this feature is available starting from the Enterprise Edition of Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
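For illustration, a SCIM security integration for Azure AD provisioning, following the pattern in Snowflake's documentation:
CREATE SECURITY INTEGRATION aad_provisioning
TYPE = SCIM
SCIM_CLIENT = 'AZURE'
RUN_AS_ROLE = 'AAD_PROVISIONER';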
How many days is load history for Snowpipe retained?
Show Answer
Answer:
C
Explanation:
Snowpipe retains load history for 14 days. This allows users to view and audit the data that has been loaded into Snowflake using Snowpipe within this time frame.
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Options:
A.
Customer-managed encryption keys through Tri-Secret Secure
B.
Automatic encryption of all data
C.
Up to 90 days of data recovery through Time Travel
D.
Object-level access control
E.
Column-level security to apply data masking policies to tables and views
Show Answer
Answer:
B, D
Explanation:
In all Snowflake editions, two key capabilities are universally available:
B. Automatic encryption of all data: Snowflake automatically encrypts all data stored in its platform, ensuring security and compliance with various regulations. This encryption is transparent to users and does not require any configuration or management.
D. Object-level access control: Snowflake provides granular access control mechanisms that allow administrators to define permissions at the object level, including databases, schemas, tables, and views. This ensures that only authorized users can access specific data objects.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
References:
Snowflake Documentation on Security Features
SnowPro® Core Certification Exam Study Guide
Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?
Options:
A.
An FTP server with TLS encryption
B.
An HTTPS server with WebDAV
C.
A Google Cloud storage bucket
D.
A Windows server file share on Azure
Show Answer
Answer:
D
Explanation:
In Snowflake, when the account is located on Microsoft Azure, a valid source for an external stage can be an Azure container or a folder path within an Azure container. This includes Azure Blob storage which is accessible via the azure:// endpoint. A Windows server file share on Azure, if configured properly, can be a valid source for staging data files for Snowflake. Options A, B, and C are not supported as direct sources for an external stage in Snowflake on Azure. References: [COF-C02] SnowPro Core Certification Exam Study Guide
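For illustration, an external stage over Azure Blob storage (the account, container, and SAS token are hypothetical placeholders):
CREATE STAGE my_azure_stage
URL = 'azure://myaccount.blob.core.windows.net/mycontainer/load/'
CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=...');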
A sales table FCT_SALES has 100 million records.
The following query was executed:
SELECT COUNT(1) FROM FCT_SALES;
How did Snowflake fulfill this query?
Options:
A.
Query against the result set cache
B.
Query against a virtual warehouse cache
C.
Query against the most-recently created micro-partition
D.
Query against the metadata cache
Show Answer
Answer:
D
Explanation:
Snowflake is designed to optimize query performance by utilizing metadata for certain types of queries. When executing a COUNT query, Snowflake can often fulfill the request by accessing metadata about the table’s row count, rather than scanning the entire table or micro-partitions. This is particularly efficient for large tables like FCT_SALES with a significant number of records. The metadata layer maintains statistics about the table, including the row count, which enables Snowflake to quickly return the result of a COUNT query without the need to perform a full scan.
References:
Snowflake Documentation on Metadata Management
SnowPro® Core Certification Study Guide
Which Snowflake partner specializes in data catalog solutions?
Show Answer
Answer:
A
Explanation:
Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake’s official documentation and partner listings
What is the MOST performant file format for loading data in Snowflake?
Show Answer
Answer:
B
Explanation:
Parquet is a columnar storage file format that is optimized for performance in Snowflake. It is designed to be efficient for both storage and query performance, particularly for complex queries on large datasets. Parquet files support efficient compression and encoding schemes, which can lead to significant savings in storage and speed in query processing, making it the most performant file format for loading data into Snowflake.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Loading
Which of the following describes how multiple Snowflake accounts in a single organization relate to various cloud providers?
Options:
A.
Each Snowflake account can be hosted in a different cloud vendor and region.
B.
Each Snowflake account must be hosted in a different cloud vendor and region
C.
All Snowflake accounts must be hosted in the same cloud vendor and region
D.
Each Snowflake account can be hosted in a different cloud vendor, but must be in the same region.
Show Answer
Answer:
A
Explanation:
Snowflake’s architecture allows for flexibility in account hosting across different cloud vendors and regions. This means that within a single organization, different Snowflake accounts can be set up in various cloud environments, such as AWS, Azure, or GCP, and in different geographical regions. This allows organizations to leverage the global infrastructure of multiple cloud providers and optimize their data storage and computing needs based on regional requirements, data sovereignty laws, and other considerations.
https://docs.snowflake.com/en/user-guide/intro-regions.html
Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).
Show Answer
Answer:
C, E
Explanation:
Snowflake’s Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts, and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only be performed by Snowflake.
References:
Continuous Data Protection | Snowflake Documentation
Data Storage Considerations | Snowflake Documentation
Snowflake SnowPro Core Certification Study Guide
Snowflake Data Cloud Glossary
Which Snowflake feature is used for both querying and restoring data?
Show Answer
Answer:
B
Explanation:
Snowflake’s Time Travel feature is used for both querying historical data in tables and restoring and cloning historical data in databases, schemas, and tables. It allows users to access historical data within a defined period (1 day by default, up to 90 days for Snowflake Enterprise Edition) and is a key feature for data recovery and management. References: [COF-C02] SnowPro Core Certification Exam Study Guide
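A minimal sketch of both uses (names and offsets are hypothetical):
SELECT * FROM orders AT (OFFSET => -3600);  -- query the table as it existed one hour ago
UNDROP TABLE orders;  -- restore a dropped table from Time Travel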
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
Show Answer
Answer:
D
Explanation:
DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Machine Learning & Data Science Partners
Which feature is only available in the Enterprise or higher editions of Snowflake?
Options:
B.
SOC 2 type II certification
C.
Multi-factor Authentication (MFA)
D.
Object-level access control
Show Answer
Answer:
A
Explanation:
Column-level security is a feature that allows fine-grained control over access to specific columns within a table. This is particularly useful for managing sensitive data and ensuring that only authorized users can view or manipulate certain pieces of information. This feature is available in the Enterprise Edition or higher editions of Snowflake.
References: https://docs.snowflake.com/en/user-guide/intro-editions.html
True or False: When you create a custom role, it is a best practice to immediately grant that role to ACCOUNTADMIN.
Show Answer
Answer:
B
Explanation:
The ACCOUNTADMIN role is the most powerful role in Snowflake and should be limited to a select number of users within an organization. It is responsible for account-level configurations and should not be used for day-to-day object creation or management. Granting a custom role to ACCOUNTADMIN could inadvertently give broad access to users with this role, which is not a recommended security practice.
[Reference: https://docs.snowflake.com/en/user-guide/security-access-control-considerations.html]
What is the recommended file sizing for data loading using Snowpipe?
Options:
A.
A compressed file size greater than 100 MB, and up to 250 MB
B.
A compressed file size greater than 100 GB, and up to 250 GB
C.
A compressed file size greater than 10 MB, and up to 100 MB
D.
A compressed file size greater than 1 GB, and up to 2 GB
Show Answer
Answer:
C
Explanation:
For data loading using Snowpipe, the recommended file size is a compressed file greater than 10 MB and up to 100 MB. This size range is optimal for Snowpipe’s continuous, micro-batch loading process, allowing for efficient and timely data ingestion without overwhelming the system with files that are too large or too small.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowpipe
Which of the following conditions must be met in order to return results from the results cache? (Select TWO).
Options:
A.
The user has the appropriate privileges on the objects associated with the query
B.
Micro-partitions have been reclustered since the query was last run
C.
The new query is run using the same virtual warehouse as the previous query
D.
The query includes a User Defined Function (UDF)
E.
The query has been run within 24 hours of the previously-run query
Show Answer
Answer:
A, E
Explanation:
To return results from the results cache in Snowflake, certain conditions must be met:
Privileges: The user must have the appropriate privileges on the objects associated with the query. This ensures that only authorized users can access cached data.
Time Frame: The query must have been run within 24 hours of the previously-run query. Snowflake’s results cache is designed to store the results of queries for a short period, typically 24 hours, to improve performance for repeated queries.
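For testing, result reuse can be disabled per session via a documented parameter:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
-- Re-running an identical query then re-executes it instead of returning the cached result.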
Which of the following indicates that it may be appropriate to use a clustering key for a table? (Select TWO).
Options:
A.
The table contains a column that has very low cardinality
B.
DML statements that are being issued against the table are blocked
C.
The table has a small number of micro-partitions
D.
Queries on the table are running slower than expected
E.
The clustering depth for the table is large
Show Answer
Answer:
D, E
Explanation:
A clustering key in Snowflake is used to co-locate similar data within the same micro-partitions to improve query performance, especially for large tables where data is not naturally ordered or has become fragmented due to extensive DML operations. The appropriate use of a clustering key can lead to improved scan efficiency and better column compression, resulting in faster query execution times.
The indicators that it may be appropriate to use a clustering key for a table include:
D. Queries on the table are running slower than expected: This can happen when the data in the table is not well-clustered, leading to inefficient scans during query execution.
E. The clustering depth for the table is large: A large clustering depth indicates that the table’s data is spread across many micro-partitions, which can degrade query performance as more data needs to be scanned.
References:
Snowflake Documentation on Clustering Keys & Clustered Tables
Snowflake Documentation on SYSTEM$CLUSTERING_INFORMATION
Stack Overflow discussion on cluster key selection in Snowflake
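A minimal sketch (table and column names are hypothetical):
ALTER TABLE events CLUSTER BY (event_date, customer_id);
SELECT SYSTEM$CLUSTERING_INFORMATION('events');  -- a large average depth suggests poor clustering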
A user has an application that writes a new file to a cloud storage location every 5 minutes.
What would be the MOST efficient way to get the files into Snowflake?
Options:
A.
Create a task that runs a copy into operation from an external stage every 5 minutes
B.
Create a task that puts the files in an internal stage and automate the data loading wizard
C.
Create a task that runs a GET operation to intermittently check for new files
D.
Set up cloud provider notifications on the file location and use Snowpipe with auto-ingest
Show Answer
Answer:
D
Explanation:
The most efficient way to get files into Snowflake, especially when new files are being written to a cloud storage location at frequent intervals, is to use Snowpipe with auto-ingest. Snowpipe is Snowflake’s continuous data ingestion service that loads data as soon as it becomes available in a cloud storage location. By setting up cloud provider notifications, Snowpipe can be triggered automatically whenever new files are written to the storage location, ensuring that the data is loaded into Snowflake with minimal latency and without the need for manual intervention or scheduling frequent tasks.
References:
Snowflake Documentation on Snowpipe
SnowPro® Core Certification Study Guide
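A minimal sketch of the recommended setup (object names are hypothetical; the event notification itself is configured on the cloud provider side):
CREATE PIPE my_pipe AUTO_INGEST = TRUE AS
COPY INTO raw_events
FROM @my_ext_stage/events/
FILE_FORMAT = (TYPE = 'JSON');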
True or False: A 4X-Large Warehouse may, at times, take longer to provision than a X-Small Warehouse.
Show Answer
Answer:
A
Explanation:
Provisioning time can vary based on the size of the warehouse. A 4X-Large Warehouse typically has more resources and may take longer to provision compared to an X-Small Warehouse, which has fewer resources and can generally be provisioned more quickly. References: Understanding and viewing Fail-safe | Snowflake Documentation
Which services does the Snowflake Cloud Services layer manage? (Select TWO).
Show Answer
Answer:
C, E
Explanation:
The Snowflake Cloud Services layer manages a variety of services that are crucial for the operation of the Snowflake platform. Among these services, Authentication and Metadata management are key components. Authentication is essential for controlling access to the Snowflake environment, ensuring that only authorized users can perform actions within the platform. Metadata management involves handling all the metadata related to objects within Snowflake, such as tables, views, and databases, which is vital for the organization and retrieval of data.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation
A user unloaded a Snowflake table called mytable to an internal stage called mystage.
Which command can be used to view the list of files that have been uploaded to the stage?
Show Answer
Answer:
D
Explanation:
The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.
References:
Snowflake Documentation on Stages
SnowPro® Core Certification Study Guide
Which of the following can be executed/called with Snowpipe?
Options:
A.
A User Defined Function (UDF)
C.
A single COPY INTO statement
D.
A single INSERT INTO statement
Show Answer
Answer:
C
Explanation:
Snowpipe is used for continuous, automated data loading into Snowflake. It uses a COPY INTO statement within a pipe object to load data from files as soon as they are available in a stage. Snowpipe does not execute UDFs, stored procedures, or INSERT statements. References: Snowpipe | Snowflake Documentation
What is the minimum Snowflake edition required to create a materialized view?
Options:
C.
Business Critical Edition
D.
Virtual Private Snowflake Edition
Show Answer
Answer:
B
Explanation:
Materialized views in Snowflake are a feature that allows for the pre-computation and storage of query results for faster query performance. This feature is available starting from the Enterprise Edition of Snowflake. It is not available in the Standard Edition, and while it is also available in higher editions like Business Critical and Virtual Private Snowflake, the Enterprise Edition is the minimum requirement.
References:
Snowflake Documentation on CREATE MATERIALIZED VIEW.
Snowflake Documentation on Working with Materialized Views
https://docs.snowflake.com/en/sql-reference/sql/create-materialized-view.html
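For illustration, a minimal materialized view over a single table (names are hypothetical):
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM fct_sales
GROUP BY sale_date;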
Which data types does Snowflake support when querying semi-structured data? (Select TWO)
Show Answer
Answer:
A, B
Explanation:
Snowflake supports querying semi-structured data using specific data types that are capable of handling the flexibility and structure of such data. The data types supported for this purpose are:
A. VARIANT: This is a universal data type that can store values of any other type, including structured and semi-structured types. It is particularly useful for handling JSON, Avro, ORC, Parquet, and XML data formats.
B. ARRAY: An array is a list of elements that can be of any data type, including VARIANT, and is used to handle semi-structured data that is naturally represented as a list.
These data types are part of Snowflake’s built-in support for semi-structured data, allowing for the storage, querying, and analysis of data that does not fit into the traditional row-column format.
References:
Snowflake Documentation on Semi-Structured Data
[COF-C02] SnowPro Core Certification Exam Study Guide
True or False: Loading data into Snowflake requires that source data files be no larger than 16MB.
Show Answer
Answer:
B
Explanation:
Snowflake does not require source data files to be no larger than 16MB. In fact, Snowflake recommends that for optimal load performance, data files should be roughly 100-250 MB in size when compressed. However, it is not recommended to load very large files (e.g., 100 GB or larger) due to potential delays and wasted credits if errors occur. Smaller files should be aggregated to minimize processing overhead, and larger files should be split to distribute the load among compute resources in an active warehouse.
References: Preparing your data files | Snowflake Documentation
When is the result set cache no longer available? (Select TWO)
Options:
A.
When another warehouse is used to execute the query
B.
When another user executes the query
C.
When the underlying data has changed
D.
When the warehouse used to execute the query is suspended
E.
When it has been 24 hours since the last query
Show Answer
Answer:
C, E
Explanation:
The result set cache in Snowflake is invalidated and no longer available when the underlying data of the query results has changed, ensuring that queries return the most current data. Additionally, the cache expires after 24 hours to maintain the efficiency and accuracy of data retrieval.
What tasks can be completed using the copy command? (Select TWO)
Options:
A.
Columns can be aggregated
B.
Columns can be joined with an existing table
C.
Columns can be reordered
D.
Columns can be omitted
E.
Data can be loaded without the need to spin up a virtual warehouse
Show Answer
Answer:
C, D
Explanation:
The COPY command in Snowflake allows for the reordering of columns as they are loaded into a table, and it also permits the omission of columns from the source file during the load process. This provides flexibility in handling the schema of the data being ingested. References: [COF-C02] SnowPro Core Certification Exam Study Guide
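For illustration, a COPY transformation that reorders file columns and omits the rest ($1 and $2 refer to columns in the staged file; object names are hypothetical):
COPY INTO my_table
FROM (SELECT $2, $1 FROM @my_stage/data.csv)
FILE_FORMAT = (TYPE = 'CSV');
-- The number of selected columns must match the number of columns in the target table.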
Which is the MINIMUM required Snowflake edition that a user must have if they want to use AWS/Azure PrivateLink or Google Cloud Private Service Connect?
Query compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Options:
C.
Cloud infrastructure layer
Show Answer
Answer:
D
Explanation:
Query compilation in Snowflake occurs in the Cloud Services layer. This layer is responsible for coordinating and managing all aspects of the Snowflake service, including authentication, infrastructure management, metadata management, query parsing and optimization, and security. By handling these tasks, the Cloud Services layer enables the Compute layer to focus on executing queries, while the Storage layer is dedicated to persistently storing data.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowflake Architecture
When reviewing a query profile, what is a symptom that a query is too large to fit into the memory?
Options:
A.
A single join node uses more than 50% of the query time
B.
Partitions scanned is equal to partitions total
C.
An AggregateOperator node is present
D.
The query is spilling to remote storage
Show Answer
Answer:
D
Explanation:
When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to slower query performance due to the additional I/O operations required.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Query Profile
SnowPro Core Certification Exam Flashcards
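Spilling can also be confirmed outside the Query Profile via the ACCOUNT_USAGE.QUERY_HISTORY view (the filter is illustrative):
SELECT query_id, bytes_spilled_to_local_storage, bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_remote_storage > 0
ORDER BY start_time DESC;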
Which copy INTO command outputs the data into one file?
Show Answer
Answer:
B
Explanation:
The COPY INTO <location> command in Snowflake can be configured to output data into a single file by setting the copy option SINGLE = TRUE. This ensures that only one file is created regardless of the amount of data being exported; MAX_FILE_SIZE can be raised to accommodate larger outputs.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Data Unloading
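A minimal unload sketch using the SINGLE copy option (stage path and table name are hypothetical):
COPY INTO @my_stage/unload/result.csv.gz
FROM my_table
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
SINGLE = TRUE;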
What is a responsibility of Snowflake's virtual warehouses?
Options:
A.
Infrastructure management
D.
Query parsing and optimization
E.
Management of the storage layer
Show Answer
Answer:
C
Explanation:
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Virtual Warehouses
A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Options:
A.
Yes, because a table owner has full control and can unset masking policies.
B.
Yes, because masking policies only apply to cloned tables.
C.
No, because masking policies must always reference specific access roles.
D.
No, because ownership of a table does not include the ability to change masking policies
Show Answer
Answer:
D
Explanation:
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data. Masking policies are applied at the schema level and require specific privileges to modify.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Masking Policies
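For context, a minimal masking policy sketch (names and the authorized role are hypothetical):
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val ELSE '*** MASKED ***' END;
ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;
-- Setting or unsetting the policy requires the APPLY privilege on it, not just table ownership.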
What is the purpose of an External Function?
Options:
A.
To call code that executes outside of Snowflake
B.
To run a function in another Snowflake database
C.
To share data in Snowflake with external parties
D.
To ingest data from on-premises data sources
Show Answer
Answer:
A
Explanation:
The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionalities that are not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.
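A minimal sketch of the documented syntax (the integration name and endpoint URL are hypothetical):
CREATE EXTERNAL FUNCTION score_text(input STRING)
RETURNS VARIANT
API_INTEGRATION = my_api_integration
AS 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/score';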
What Snowflake features allow virtual warehouses to handle high concurrency workloads? (Select TWO)
Options:
A.
The ability to scale up warehouses
B.
The use of warehouse auto scaling
C.
The ability to resize warehouses
D.
Use of multi-clustered warehouses
E.
The use of warehouse indexing
Show Answer
Answer:
B, D
Explanation:
Snowflake’s architecture is designed to handle high concurrency workloads through several features, two of which are particularly effective:
B. The use of warehouse auto scaling: This feature allows Snowflake to automatically adjust the compute resources allocated to a virtual warehouse in response to the workload. If there is an increase in concurrent queries, Snowflake can scale up the resources to maintain performance.
D. Use of multi-clustered warehouses: Multi-clustered warehouses enable Snowflake to run multiple clusters of compute resources simultaneously. This allows for the distribution of queries across clusters, thereby reducing the load on any single cluster and improving the system’s ability to handle a high number of concurrent queries.
These features ensure that Snowflake can manage varying levels of demand without manual intervention, providing a seamless experience even during peak usage.
References:
Snowflake Documentation on Virtual Warehouses
SnowPro® Core Certification Study Guide
Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)
Options:
B.
Federated authentication
D.
Key-pair authentication
Show Answer
Answer:
B, D, E
Explanation:
Snowflake supports several methods for authenticating users, including federated authentication , key-pair authentication , and OAuth . Federated authentication allows users to authenticate using their organization’s identity provider. Key-pair authentication uses a public-private key pair for secure login, and OAuth is an open standard for access delegation commonly used for token-based authentication. References : Authentication policies | Snowflake Documentation, Authenticating to the server | Snowflake Documentation, External API authentication and secrets | Snowflake Documentation.
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
Options:
A.
Load files that are approximately 25 MB or smaller.
B.
Remove all dates and timestamps.
C.
Load files that are approximately 100-250 MB (or larger)
D.
Avoid using embedded characters such as commas for numeric data types
E.
Remove semi-structured data types
Show Answer
Answer:
C, D
Explanation:
When loading data into Snowflake, it is recommended to:
C. Load files that are approximately 100-250 MB (or larger): This size is optimal for parallel processing and can help to maximize throughput. Smaller files can lead to overhead that outweighs the actual data processing time.
D. Avoid using embedded characters such as commas for numeric data types: Embedded characters can cause issues during data loading as they may be interpreted incorrectly. It’s best to clean the data of such characters to ensure accurate and efficient data loading.
These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
References:
Snowflake Documentation on Data Loading Considerations
[COF-C02] SnowPro Core Certification Exam Study Guide
User-level network policies can be created by which of the following roles? (Select TWO).
Show Answer
Answer:
B, D
Explanation:
User-level network policies in Snowflake can be created by roles with the necessary privileges to manage security and account settings. The ACCOUNTADMIN role has the highest level of privileges across the account, including the ability to manage network policies. The SECURITYADMIN role is specifically responsible for managing security objects within Snowflake, which includes the creation and management of network policies.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Network Policies
Section 1.3 - SnowPro Core Certification Study Guide
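A minimal sketch of a user-level network policy (names and the CIDR range are hypothetical):
CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('192.168.1.0/24');
ALTER USER jsmith SET NETWORK_POLICY = corp_only;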
How long is Snowpipe data load history retained?
Options:
A.
As configured in the create pipe settings
B.
Until the pipe is dropped
Show Answer
Answer:
C
Explanation:
Snowpipe data load history is retained for 64 days. This retention period allows users to review and audit the data load operations performed by Snowpipe over a significant period of time, which can be crucial for troubleshooting and ensuring data integrity.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Snowpipe
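Within that window, the history can be inspected with the documented COPY_HISTORY table function (the table name and time range are illustrative):
SELECT file_name, last_load_time, row_count, status
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
TABLE_NAME => 'RAW_EVENTS',
START_TIME => DATEADD(day, -7, CURRENT_TIMESTAMP())));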
True or False: Reader Accounts are able to extract data from shared data objects for use outside of Snowflake.
Show Answer
Answer:
B
Explanation:
Reader accounts in Snowflake are designed to allow users to read data shared with them but do not have the capability to extract data for use outside of Snowflake. They are intended for consuming shared data within the Snowflake environment only.
What happens when an external or an internal stage is dropped? (Select TWO).
Options:
A.
When dropping an external stage, the files are not removed and only the stage is dropped
B.
When dropping an external stage, both the stage and the files within the stage are removed
C.
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
D.
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
E.
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
Show Answer
Answer:
A, D
Explanation:
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
References:
[COF-C02] SnowPro Core Certification Exam Study Guide
Snowflake Documentation on Stages
What are characteristics of Snowsight worksheets? (Select TWO.)
Options:
A.
Worksheets can be grouped under folders, and folders of folders.
B.
Each worksheet is a unique Snowflake session.
C.
Users are limited to running only one query on a worksheet.
D.
The Snowflake session ends when a user switches worksheets.
E.
Users can import worksheets and share them with other users.
Show Answer
Answer:
A, E
Explanation:
Characteristics of Snowsight worksheets in Snowflake include:
A. Worksheets can be grouped under folders, and folders of folders: This organizational feature allows users to efficiently manage and categorize their worksheets within Snowsight, Snowflake's web-based UI, enhancing the user experience by keeping related worksheets together.
E. Users can import worksheets and share them with other users: Snowsight supports the sharing of worksheets among users, fostering collaboration by allowing users to share queries, analyses, and findings. This feature is crucial for collaborative data exploration and analysis workflows.
What are the benefits of the replication feature in Snowflake? (Select TWO).
Options:
D.
Database failover and fallback
Show Answer
Answer:
A, D
Explanation:
The replication feature in Snowflake provides several benefits, with disaster recovery and database failover and fallback being two of the primary advantages. Replication allows for the continuous copying of data from one Snowflake account to another, ensuring that a secondary copy of the data is available in case of outages or disasters. This capability supports disaster recovery strategies by allowing operations to quickly switch to the replicated data in a different account or region. Additionally, it facilitates database failover and fallback procedures, ensuring business continuity and minimizing downtime.
When floating-point number columns are unloaded to CSV or JSON files, Snowflake truncates the values to approximately what?
Show Answer
Answer:
D
Explanation:
When unloading floating-point number columns to CSV or JSON files, Snowflake truncates the values to approximately 15 significant digits with 9 digits following the decimal point, which can be represented as (15,9). This ensures a balance between accuracy and efficiency in representing floating-point numbers in text-based formats, which is essential for data interchange and processing applications that consume these files.
Which command can be used to list all the file formats for which a user has access privileges?
Show Answer
Answer:
D
Explanation:
The command to list all the file formats for which a user has access privileges in Snowflake is SHOW FILE FORMATS . This command provides a list of all file formats defined in the user's current session or specified database/schema, along with details such as the name, type, and creation time of each file format. It is a valuable tool for users to understand and manage the file formats available for data loading and unloading operations.
What should be used when creating a CSV file format where the columns are wrapped by single quotes or double quotes?
Options:
B.
ESCAPE_UNENCLOSED_FIELD
C.
FIELD_OPTIONALLY_ENCLOSED_BY
Show Answer
Answer:
C
Explanation:
When creating a CSV file format in Snowflake and the requirement is to wrap columns by single quotes or double quotes, the FIELD_OPTIONALLY_ENCLOSED_BY parameter should be used in the file format specification. This parameter allows you to define a character (either a single quote or a double quote) that can optionally enclose each field in the CSV file, providing flexibility in handling fields that contain special characters or delimiters as part of their data.
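A minimal file format sketch (format name hypothetical):
CREATE FILE FORMAT my_csv_format
TYPE = 'CSV'
FIELD_OPTIONALLY_ENCLOSED_BY = '"';
-- A field such as "Smith, John" is then parsed as a single column value.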
Which SQL command can be used to verify the privileges that are granted to a role?
Show Answer
Answer:
C
Explanation:
To verify the privileges that have been granted to a specific role in Snowflake, the correct SQL command is SHOW GRANTS TO ROLE <role_name>. This command lists all the privileges granted to the specified role, including access to schemas, tables, and other database objects. This is a useful command for administrators and users with sufficient privileges to audit and manage role permissions within the Snowflake environment.
Why would a Snowflake user decide to use a materialized view instead of a regular view?
Options:
A.
The base tables do not change frequently.
B.
The results of the view change often.
C.
The query is not resource intensive.
D.
The query results are not used frequently.
Show Answer
Answer:
A
Explanation:
A Snowflake user would decide to use a materialized view instead of a regular view primarily when the base tables do not change frequently. Materialized views store the result of the view query and update it as the underlying data changes, making them ideal for situations where the data is relatively static and query performance is critical. By precomputing and storing the query results, materialized views can significantly reduce query execution times for complex aggregations, joins, and calculations.
Which role has the ability to create a share from a shared database by default?
Show Answer
Answer:
A
Explanation:
By default, the ACCOUNTADMIN role in Snowflake has the ability to create a share from a shared database. This role has the highest level of access within a Snowflake account, including the management of all aspects of the account, such as users, roles, warehouses, and databases, as well as the creation and management of shares for secure data sharing with other Snowflake accounts.
What SnowFlake database object is derived from a query specification, stored for later use, and can speed up expensive aggregation on large data sets?
Show Answer
Answer:
D
Explanation:
A materialized view in Snowflake is a database object derived from a query specification, stored for later use, and can significantly speed up expensive aggregations on large data sets. Materialized views store the result of their underlying query, reducing the need to recompute the result each time the view is accessed. This makes them ideal for improving the performance of read-heavy, aggregate-intensive queries.
How long is a query visible in the Query History page in the Snowflake Web Interface (Ul)?
Show Answer
Answer:
C
Explanation:
In the Snowflake Web Interface (UI), the Query History page displays the history of queries executed in Snowflake for up to 14 days. This allows users to review and analyze their query performance, troubleshoot issues, and understand their query patterns over a two-week period. The Query History page is a critical tool for monitoring and optimizing the use of Snowflake.
Which use case does the search optimization service support?
Options:
A.
Disjuncts (OR) in join predicates
B.
LIKE/ILIKE/RLIKE join predicates
C.
Join predicates on VARIANT columns
D.
Conjunctions (AND) of multiple equality predicates
Show Answer
Answer:
D
Explanation:
The search optimization service in Snowflake supports use cases involving conjunctions (AND) of multiple equality predicates. This service enhances the performance of queries that include multiple equality conditions by utilizing search indexes to quickly filter data without scanning entire tables or partitions. It's particularly beneficial for improving the response times of complex queries that rely on specific data matching across multiple conditions.
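For illustration, search optimization can be scoped to equality predicates on specific columns (names are hypothetical):
ALTER TABLE orders ADD SEARCH OPTIMIZATION ON EQUALITY(customer_id, region);
-- Benefits queries such as:
SELECT * FROM orders WHERE customer_id = 'C-100' AND region = 'EMEA';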
Which view can be used to determine if a table has frequent row updates or deletes?
Show Answer
Answer:
B
Explanation:
The TABLE_STORAGE_METRICS view can be used to determine if a table has frequent row updates or deletes. This view provides detailed metrics on the storage utilization of tables within Snowflake, including metrics that reflect the impact of DML operations such as updates and deletes on table storage. For example, metrics related to the number of active and deleted rows can help identify tables that experience high levels of row modifications, indicating frequent updates or deletions.
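A minimal sketch of such a check (table name hypothetical); a large volume of Time Travel and Fail-safe bytes relative to active bytes suggests heavy update and delete churn:
SELECT table_name, active_bytes, time_travel_bytes, failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE table_name = 'FCT_SALES';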
What activities can a user with the ORGADMIN role perform? (Select TWO).
Options:
A.
Create an account for an organization.
B.
Edit the account data for an organization.
C.
Delete the account data for an organization.
D.
View usage information for all accounts in an organization.
E.
Select all the data in tables for all accounts in an organization.
Show Answer
Answer:
A, D
Explanation:
The ORGADMIN role in Snowflake is an organizational-level role that provides administrative capabilities across the entire organization, rather than being limited to a single Snowflake account. Users with this role can:
A. Create an account for an organization: The ORGADMIN role has the privilege to create new Snowflake accounts within the organization, allowing for the expansion and management of the organization's resources.
D. View usage information for all accounts in an organization: This role also has access to comprehensive usage and activity data across all accounts within the organization. This is crucial for monitoring, cost management, and optimization at the organizational level.
Which command should be used to unload all the rows from a table into one or more files in a named stage?
Show Answer
Answer:
A
Explanation:
To unload data from a table into one or more files in a named stage, the COPY INTO <location> command should be used. This command exports the result of a query, such as selecting all rows from a table, into files stored in the specified stage. The COPY INTO command is versatile, supporting various file formats and compression options for efficient data unloading.
Which common query problems are identified by the Query Profile? (Select TWO.)
Options:
C.
Ambiguous column names
D.
Queries too large to fit in memory
E.
Object does not exist or not authorized
Show Answer
Answer:
B, D
Explanation:
The Query Profile in Snowflake can identify common query problems, including:
B. Inefficient pruning: This refers to the inability of a query to effectively limit the amount of data being scanned, potentially leading to suboptimal performance.
D. Queries too large to fit in memory: This indicates that a query requires more memory than is available in the virtual warehouse, which can lead to spilling to disk and degraded performance.
The Query Profile helps diagnose these issues by providing detailed execution statistics and visualizations, aiding in query optimization and troubleshooting.
What are characteristics of reader accounts in Snowflake? (Select TWO).
Options:
A.
Reader account users cannot add new data to the account.
B.
Reader account users can share data to other reader accounts.
C.
A single reader account can consume data from multiple provider accounts.
D.
Data consumers are responsible for reader account setup and data usage costs.
E.
Reader accounts enable data consumers to access and query data shared by the provider.
Show Answer
Answer:
A, E
Explanation:
Characteristics of reader accounts in Snowflake include:
A. Reader account users cannot add new data to the account : Reader accounts are intended for data consumption only. Users of these accounts can query and analyze the data shared with them but cannot upload or add new data to the account.
E. Reader accounts enable data consumers to access and query data shared by the provider : One of the primary purposes of reader accounts is to allow data consumers to access and perform queries on the data shared by another Snowflake account, facilitating secure and controlled data sharing.
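Reader accounts are created and paid for by the data provider. A minimal sketch (the account name and password are placeholders):
CREATE MANAGED ACCOUNT my_reader_account
  ADMIN_NAME = reader_admin,
  ADMIN_PASSWORD = 'ChangeMe123!',
  TYPE = READER;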
References:
User1, who has the SYSADMIN role, executed a query on Snowsight. User2, who is in the same Snowflake account, wants to view the result set of the query executed by User1 using the Snowsight query history.
What will happen if User2 tries to access the query history?
Options:
A.
If User2 has the SYSADMIN role they will be able to see the results.
B.
If User2 has the SECURITYADMIN role they will be able to see the results.
C.
If User2 has the ACCOUNTADMIN role they will be able to see the results.
D.
User2 will be unable to view the result set of the query executed by User1.
Show Answer
Answer:
C
Explanation:
In Snowflake, access to query history and query results is governed by roles and privileges. Since User1 executed the query with the SYSADMIN role, User2 will be able to view its result set only if User2 holds the ACCOUNTADMIN role. The ACCOUNTADMIN role has the broadest set of privileges, including access to all aspects of the account's operation, data, and query history, enabling User2 to view the results of queries executed by other users.
References:
What criteria does Snowflake use to determine the current role when initiating a session? (Select TWO).
Options:
A.
If a role was specified as part of the connection and that role has been granted to the Snowflake user, the specified role becomes the current role.
B.
If no role was specified as part of the connection and a default role has been defined for the Snowflake user, that role becomes the current role.
C.
If no role was specified as part of the connection and a default role has not been set for the Snowflake user, the session will not be initiated and the log in will fail.
D.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, it will be ignored and the default role will become the current role.
E.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, the role is automatically granted and it becomes the current role.
Show Answer
Answer:
A, B
Explanation:
When initiating a session in Snowflake, the system determines the current role based on the user's connection details and role assignments. If a user specifies a role during the connection, and that role is already granted to them, Snowflake sets it as the current role for the session. Alternatively, if no role is specified during the connection, but the user has a default role assigned, Snowflake will use this default role as the current session role. These mechanisms ensure that users operate within their permissions, enhancing security and governance within Snowflake environments.
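For example (user and role names are hypothetical):
-- Define the role used when no role is specified at connection time
ALTER USER user1 SET DEFAULT_ROLE = ANALYST;
-- A granted role can also be specified explicitly once the session starts
USE ROLE ANALYST;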
References:
How can a user get the MOST detailed information about individual table storage details in Snowflake?
Options:
B.
SHOW EXTERNAL TABLES command
D.
TABLE_STORAGE_METRICS view
Show Answer
Answer:
D
Explanation:
To get the most detailed information about individual table storage details in Snowflake, the TABLE_STORAGE_METRICS view should be used. This Information Schema view provides granular storage metrics for tables within Snowflake, including data related to the size of the table, the amount of data stored, and storage usage over time. It's an essential tool for administrators and users looking to monitor and optimize storage consumption and costs.
References:
What are characteristics of transient tables in Snowflake? (Select TWO).
Options:
A.
Transient tables have a Fail-safe period of 7 days.
B.
Transient tables can be cloned to permanent tables.
C.
Transient tables persist until they are explicitly dropped.
D.
Transient tables can be altered to make them permanent tables.
E.
Transient tables have Time Travel retention periods of 0 or 1 day.
Show Answer
Answer:
B, C
Explanation:
Transient tables in Snowflake are designed for temporary or intermediate workloads with the following characteristics:
B. Transient tables can be cloned to permanent tables : This feature allows users to create copies of transient tables for permanent use, providing flexibility in managing data lifecycles.
C. Transient tables persist until they are explicitly dropped : Unlike temporary tables that exist for the duration of a session, transient tables remain in the database until explicitly removed by a user, offering more durability for short-term data storage needs.
References:
Which command removes a role from another role or a user in Snowflake?
Show Answer
Answer:
B
Explanation:
The REVOKE ROLE command is used to remove a role from another role or a user in Snowflake. This command is part of Snowflake's role-based access control system, allowing administrators to manage permissions and access to database objects efficiently by adding or removing roles from users or other roles.
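For example (role, user, and parent role names are hypothetical):
REVOKE ROLE analyst FROM USER user1;   -- remove the role from a user
REVOKE ROLE analyst FROM ROLE manager; -- remove the role from another role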
References:
Which Snowflake object does not consume any storage costs?
Show Answer
Answer:
C
Explanation:
Temporary tables in Snowflake do not consume storage costs. They are designed for transient data that is needed only for the duration of a session. Data stored in temporary tables is held in the virtual warehouse's cache and does not persist beyond the session's lifetime, thereby not incurring any storage charges.
References:
There are two Snowflake accounts in the same cloud provider region: one is production and the other is non-production. How can data be easily transferred from the production account to the non-production account?
Options:
A.
Clone the data from the production account to the non-production account.
B.
Create a data share from the production account to the non-production account.
C.
Create a subscription in the production account and have it publish to the non-production account.
D.
Create a reader account using the production account and link the reader account to the non-production account.
Show Answer
Answer:
B
Explanation:
To easily transfer data from a production account to a non-production account in Snowflake within the same cloud provider region, creating a data share is the most efficient approach. Data sharing allows for live, read-only access to selected data objects from the production account to the non-production account without the need to duplicate or move the actual data. This method facilitates seamless access to the data for development, testing, or analytics purposes in the non-production environment.
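A minimal provider-side sketch (all object, share, and account names are hypothetical):
CREATE SHARE prod_share;
GRANT USAGE ON DATABASE prod_db TO SHARE prod_share;
GRANT USAGE ON SCHEMA prod_db.public TO SHARE prod_share;
GRANT SELECT ON TABLE prod_db.public.orders TO SHARE prod_share;
ALTER SHARE prod_share ADD ACCOUNTS = nonprod_account;
-- On the non-production (consumer) account:
CREATE DATABASE prod_data FROM SHARE prod_account.prod_share;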
References:
A Snowflake user is writing a User-Defined Function (UDF) that includes some unqualified object names.
How will those object names be resolved during execution?
Options:
A.
Snowflake will resolve them according to the SEARCH_PATH parameter.
B.
Snowflake will only check the schema the UDF belongs to.
C.
Snowflake will first check the current schema, and then the schema the previous query used.
D.
Snowflake will first check the current schema, and then the PUBLIC schema of the current database.
Show Answer
Answer:
D
Explanation:
Object Name Resolution: When unqualified object names (e.g., table name without schema) are used in a UDF, Snowflake follows a specific hierarchy to resolve them. Here's the order:
Current Schema: Snowflake first checks if an object with the given name exists in the schema currently in use for the session.
PUBLIC Schema: If the object isn't found in the current schema, Snowflake looks in the PUBLIC schema of the current database.
Note: The SEARCH_PATH parameter influences object resolution for queries, not within UDFs.
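A minimal sketch (assuming a table named employees exists; the unqualified name is resolved using the rules above):
CREATE OR REPLACE FUNCTION employee_count()
RETURNS NUMBER
AS
$$
  SELECT COUNT(*) FROM employees  -- checked first in the current schema, then in PUBLIC
$$;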
References:
How does a Snowflake user extract the URL of a directory table on an external stage for further transformation?
Options:
A.
Use the SHOW STAGES command.
B.
Use the DESCRIBE STAGE command.
C.
Use the GET_ABSOLUTE_PATH function.
D.
Use the GET_STAGE_LOCATION function.
Show Answer
Answer:
C
Explanation:
To extract the URL of a directory table on an external stage for further transformation in Snowflake, the GET_ABSOLUTE_PATH function can be used. This function returns the full path of a file or directory within a specified stage, enabling users to dynamically construct URLs for accessing or processing data stored in external stages.
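For example (the stage name is hypothetical, and the stage must have a directory table enabled):
SELECT relative_path,
       GET_ABSOLUTE_PATH(@my_ext_stage, relative_path) AS absolute_path
FROM DIRECTORY(@my_ext_stage);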
References:
How does a Snowflake stored procedure compare to a User-Defined Function (UDF)?
Options:
A.
A single executable statement can call only two stored procedures. In contrast, a single SQL statement can call multiple UDFs.
B.
A single executable statement can call only one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
C.
A single executable statement can call multiple stored procedures. In contrast, multiple SQL statements can call the same UDFs.
D.
Multiple executable statements can call more than one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
Show Answer
Answer:
B
Explanation:
In Snowflake, stored procedures and User-Defined Functions (UDFs) have different invocation patterns within SQL:
Option B is correct: A single executable statement can call only one stored procedure due to the procedural and potentially transactional nature of stored procedures. In contrast, a single SQL statement can call multiple UDFs because UDFs are designed to operate more like functions in traditional programming, where they return a value and can be embedded within SQL queries.
References: Snowflake documentation comparing the operational differences between stored procedures and UDFs.
When referring to User-Defined Function (UDF) names in Snowflake, what does the term overloading mean?
Options:
A.
There are multiple SQL UDFs with the same names and the same number of arguments.
B.
There are multiple SQL UDFs with the same names and the same number of argument types.
C.
There are multiple SQL UDFs with the same names but with a different number of arguments or argument types.
D.
There are multiple SQL UDFs with different names but the same number of arguments or argument types.
Show Answer
Answer:
C
Explanation:
In Snowflake, overloading refers to the creation of multiple User-Defined Functions (UDFs) with the same name but differing in the number or types of their arguments. This feature allows for more flexible function usage, as Snowflake can differentiate between functions based on the context of their invocation, such as the types or the number of arguments passed. Overloading helps to create more adaptable and readable code, as the same function name can be used for similar operations on different types of data.
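For example (the function name and bodies are illustrative):
CREATE FUNCTION add_values(a NUMBER, b NUMBER)
RETURNS NUMBER AS 'a + b';
-- Same name, different number of arguments: a valid overload
CREATE FUNCTION add_values(a NUMBER, b NUMBER, c NUMBER)
RETURNS NUMBER AS 'a + b + c';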
References:
What is the MINIMUM permission needed to access a file URL from an external stage?
Show Answer
Answer:
D
Explanation:
To access a file URL from an external stage in Snowflake, the minimum permission required is USAGE on the stage object. USAGE permission allows a user to reference the stage in SQL commands, necessary for actions like listing files or loading data from the stage, but does not permit the user to alter or drop the stage.
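For example (stage and role names are hypothetical):
GRANT USAGE ON STAGE my_db.my_schema.my_stage TO ROLE analyst;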
References:
How does Snowflake describe its unique architecture?
Options:
A.
A multi-cluster shared data architecture using a central data repository and massively parallel processing (MPP)
B.
A multi-cluster shared nothing architecture using a siloed data repository and massively parallel processing (MPP)
C.
A single-cluster shared nothing architecture using a sliced data repository and symmetric multiprocessing (SMP)
D.
A multi-cluster shared nothing architecture using a siloed data repository and symmetric multiprocessing (SMP)
Show Answer
Answer:
A
Explanation:
Snowflake's unique architecture is described as a multi-cluster, shared data architecture that leverages massively parallel processing (MPP). This architecture separates compute and storage resources, enabling Snowflake to scale them independently. It does not use a single cluster or rely solely on symmetric multiprocessing (SMP); rather, it uses a combination of shared-nothing architecture for compute clusters (virtual warehouses) and a centralized storage layer for data, optimizing for both performance and scalability.
References:
Which Snowflake data type is used to store JSON key value pairs?
Show Answer
Answer:
D
Explanation:
The VARIANT data type in Snowflake is used to store JSON key-value pairs along with other semi-structured data formats like AVRO, BSON, and XML. The VARIANT data type allows for flexible and dynamic data structures within a single column, accommodating complex and nested data. This data type is crucial for handling semi-structured data in Snowflake, enabling users to perform SQL operations on JSON objects and arrays directly.
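For example (table and key names are illustrative):
CREATE TABLE events (v VARIANT);
INSERT INTO events
  SELECT PARSE_JSON('{"name": "alice", "action": "login"}');
-- Query a JSON key directly and cast it to a SQL type
SELECT v:name::STRING AS user_name FROM events;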
References:
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
Show Answer
Answer:
C
Explanation:
A cached MFA token is valid for up to four hours.
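Token caching is enabled with an account-level parameter; a minimal sketch:
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;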
What does the TableScan operator represent in the Query Profile?
Options:
A.
The access to a single table
B.
The access to data stored in stage objects
C.
The list of values provided with the VALUES clause
D.
The records generated using the TABLE (GENERATOR (...)) construct
Show Answer
Answer:
A
Explanation:
In the Query Profile of Snowflake, the TableScan operator represents the access to a single table. This operator indicates that the query execution involved reading data from a table stored in Snowflake. TableScan is a fundamental operation in query execution plans, showing how the database engine retrieves data directly from tables as part of processing a query.
References:
Which data formats are supported by Snowflake when unloading semi-structured data? (Select TWO).
Options:
B.
Binary file in Parquet
D.
Newline Delimited JSON
E.
Plain text file containing XML elements
Show Answer
Answer:
B, D
Explanation:
Snowflake supports a variety of file formats for unloading semi-structured data, among which Parquet and Newline Delimited JSON (NDJSON) are two widely used formats.
B. Binary file in Parquet : Parquet is a columnar storage file format optimized for large-scale data processing and analysis. It is especially suited for complex nested data structures.
D. Newline Delimited JSON (NDJSON) : This format represents JSON records separated by newline characters, facilitating the storage and processing of multiple, separate JSON objects in a single file.
These formats are chosen for their efficiency and compatibility with data analytics tools and ecosystems, enabling seamless integration and processing of exported data.
References:
What is the Fail-safe retention period for transient and temporary tables?
Show Answer
Answer:
A
Explanation:
The Fail-safe retention period for transient and temporary tables in Snowflake is 0 days. Fail-safe is a feature designed to protect data against accidental loss or deletion by retaining historical data for a period after its Time Travel retention period expires. However, transient and temporary tables, which are designed for temporary or short-term storage and operations, do not have a Fail-safe period. Once the data is deleted or the table is dropped, it cannot be recovered.
References:
What are valid sub-clauses to the OVER clause for a window function? (Select TWO).
Show Answer
Answer:
C, D
Explanation:
Valid sub-clauses to the OVER clause for a window function in SQL are:
C. ORDER BY : This clause specifies the order in which the rows in a partition are processed by the window function. It is essential for functions that depend on the row order, such as ranking functions.
D. PARTITION BY : This clause divides the result set into partitions to which the window function is applied. Each partition is processed independently of other partitions, making it crucial for functions that compute values across sets of rows that share common characteristics.
These clauses are fundamental to defining the scope and order of data over which the window function operates, enabling complex analytical computations within SQL queries.
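For example (table and column names are hypothetical):
SELECT employee_id,
       department,
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS salary_rank
FROM employees;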
References:
What is the default value in the Snowflake Web Interface (Ul) for auto suspending a Virtual Warehouse?
Show Answer
Answer:
C
Explanation:
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
References:
What is the only supported character set for loading and unloading data from all supported file formats?
Show Answer
Answer:
A
Explanation:
UTF-8 is the only supported character set for loading and unloading data from all supported file formats in Snowflake. UTF-8 is a widely used encoding that supports a large range of characters from various languages, making it suitable for internationalization and ensuring data compatibility across different systems and platforms.
References:
A user wants to add additional privileges to the system-defined roles for their virtual warehouse. How does Snowflake recommend they accomplish this?
Options:
A.
Grant the additional privileges to a custom role.
B.
Grant the additional privileges to the ACCOUNTADMIN role.
C.
Grant the additional privileges to the SYSADMIN role.
D.
Grant the additional privileges to the ORGADMIN role.
Show Answer
Answer:
A
Explanation:
Snowflake recommends enhancing the granularity and management of privileges by creating and utilizing custom roles. When additional privileges are needed beyond those provided by the system-defined roles for a virtual warehouse or any other resource, these privileges should be granted to a custom role. This approach allows for more precise control over access rights and the ability to tailor permissions to the specific needs of different user groups or applications within the organization, while also maintaining the integrity and security model of system-defined roles.
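A minimal sketch (role, warehouse, and privilege choices are illustrative):
CREATE ROLE wh_operator;
GRANT OPERATE, MONITOR ON WAREHOUSE my_wh TO ROLE wh_operator;
-- Attach the custom role to the role hierarchy
GRANT ROLE wh_operator TO ROLE SYSADMIN;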
References:
Which command is used to start configuring Snowflake for Single Sign-On (SSO)?
Options:
C.
CREATE SECURITY INTEGRATION
D.
CREATE PASSWORD POLICY
Show Answer
Answer:
C
Explanation:
To start configuring Snowflake for Single Sign-On (SSO), the CREATE SECURITY INTEGRATION command is used. This command sets up a security integration object in Snowflake, which is necessary for enabling SSO with external identity providers using SAML 2.0.
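A minimal sketch (the integration name, URLs, and certificate are placeholders for values supplied by the identity provider):
CREATE SECURITY INTEGRATION my_idp
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'
  SAML2_SSO_URL = 'https://idp.example.com/sso'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = 'MIIC...';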
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What are the least privileges needed to view and modify resource monitors? (Select TWO).
Show Answer
Answer:
C, D
Explanation:
To view and modify resource monitors, the least privileges needed are MONITOR and MODIFY. These privileges allow a user to monitor credit usage and make changes to resource monitors.
What is the purpose of the STRIP_NULL_VALUES file format option when loading semi-structured data files into Snowflake?
Options:
A.
It removes null values from all columns in the data.
B.
It converts null values to empty strings during loading.
C.
It skips rows with null values during the loading process.
D.
It removes object or array elements containing null values.
Show Answer
Answer:
D
Explanation:
The STRIP_NULL_VALUES file format option, when set to TRUE, removes object or array elements that contain null values during the loading process of semi-structured data files into Snowflake. This ensures that the data loaded into Snowflake tables does not contain these null elements, which can be useful when the “null” values in files indicate missing values and have no other special meaning.
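For example (format, table, and stage names are hypothetical):
CREATE FILE FORMAT my_json_format
  TYPE = JSON
  STRIP_NULL_VALUES = TRUE;
COPY INTO my_table
FROM @my_stage/data.json
FILE_FORMAT = (FORMAT_NAME = 'my_json_format');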
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
Show Answer
Answer:
C
Explanation:
The COPY INTO <location> command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage.
What does SnowCD help Snowflake users to do?
Options:
B.
Manage different databases and schemas.
C.
Troubleshoot network connections to Snowflake.
D.
Write SELECT queries to retrieve data from external tables.
Show Answer
Answer:
C
Explanation:
SnowCD is a connectivity diagnostic tool that helps users troubleshoot network connections to Snowflake. It performs a series of checks to evaluate the network connection and provides suggestions for resolving any issues.
What are key characteristics of virtual warehouses in Snowflake? (Select TWO).
Options:
A.
Warehouses that are multi-cluster can have nodes of different sizes.
B.
Warehouses can be started and stopped at any time.
C.
Warehouses can be resized at any time, even while running.
D.
Warehouses are billed on a per-minute usage basis.
E.
Warehouses can only be used for querying and cannot be used for data loading.
Show Answer
Answer:
B, C
Explanation:
Virtual warehouses in Snowflake can be started and stopped at any time, providing flexibility in managing compute resources. They can also be resized at any time, even while running, to accommodate varying workloads.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the primary purpose of a directory table in Snowflake?
Options:
A.
To store actual data from external stages
B.
To automatically expire file URLs for security
C.
To manage user privileges and access control
D.
To store file-level metadata about data files in a stage
Show Answer
Answer:
D
Explanation:
A directory table in Snowflake is used to store file-level metadata about the data files in a stage. It is conceptually similar to an external table and provides information such as file size, last modified timestamp, and file URL.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake view is used to support compliance auditing?
Show Answer
Answer:
A
Explanation:
The ACCESS_HISTORY view in Snowflake is utilized to support compliance auditing. It provides detailed information on data access within Snowflake, including reads and writes by user queries. This view is essential for regulatory compliance auditing as it offers insights into the usage of tables and columns, and maintains a direct link between the user, the query, and the accessed data.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which function unloads data from a relational table to JSON?
Show Answer
Answer:
B
Explanation:
The TO_JSON function is used to convert a VARIANT value into a string containing the JSON representation of the value. This function is suitable for unloading data from a relational table to JSON format.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is a characteristic of materialized views in Snowflake?
Options:
A.
Materialized views do not allow joins.
B.
Clones of materialized views can be created directly by the user.
C.
Multiple tables can be joined in the underlying query of a materialized view.
D.
Aggregate functions can be used as window functions in materialized views.
Show Answer
Answer:
C
Explanation:
One of the characteristics of materialized views in Snowflake is that they allow multiple tables to be joined in the underlying query. This enables the pre-computation of complex queries involving joins, which can significantly improve the performance of subsequent queries that access the materialized view.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which parameter can be set at the account level to set the minimum number of days for which Snowflake retains historical data in Time Travel?
Options:
A.
DATA_RETENTION_TIME_IN_DAYS
B.
MAX_DATA_EXTENSION_TIME_IN_DAYS
C.
MIN_DATA_RETENTION_TIME_IN_DAYS
Show Answer
Answer:
A
Explanation:
The parameter DATA_RETENTION_TIME_IN_DAYS can be set at the account level to define the minimum number of days Snowflake retains historical data for Time Travel.
Which statistics are displayed in a Query Profile that indicate that intermediate results do not fit in memory? (Select TWO).
Options:
C.
Bytes spilled to local storage
D.
Bytes spilled to remote storage
E.
Percentage scanned from cache
Show Answer
Answer:
C, D
Explanation:
The Query Profile statistics that indicate intermediate results do not fit in memory are the bytes spilled to local storage and bytes spilled to remote storage.
What does a Notify & Suspend action for a resource monitor do?
Options:
A.
Send an alert notification to all account users who have notifications enabled.
B.
Send an alert notification to all virtual warehouse users when thresholds over 100% have been met.
C.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed.
D.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses.
Show Answer
Answer:
C
Explanation:
The Notify & Suspend action for a resource monitor in Snowflake sends a notification to all account administrators who have notifications enabled and suspends all assigned warehouses. However, the suspension only occurs after all currently running statements in the warehouses have been completed.
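A minimal sketch (monitor name, quota, and thresholds are illustrative):
CREATE RESOURCE MONITOR monthly_rm
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY              -- notify only
           ON 100 PERCENT DO SUSPEND            -- Notify & Suspend: waits for running statements
           ON 110 PERCENT DO SUSPEND_IMMEDIATE; -- cancels running statements
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_rm;
References: [COF-C02] SnowPro Core Certification Exam Study Guide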
Which metadata table will store the storage utilization information even for dropped tables?
Options:
A.
DATABASE_STORAGE_USAGE_HISTORY
D.
STAGE_STORAGE_USAGE_HISTORY
Show Answer
Answer:
B
Explanation:
The TABLE_STORAGE_METRICS metadata table stores the storage utilization information, including for tables that have been dropped but are still incurring storage costs.
Which Snowflake role can manage any object grant globally, including modifying and revoking grants?
Show Answer
Answer:
D
Explanation:
The SECURITYADMIN role in Snowflake can manage any object grant globally, including modifying and revoking grants. This role has the necessary privileges to oversee and control access to all securable objects within the Snowflake environment.
Which type of loop requires a BREAK statement to stop executing?
Show Answer
Answer:
B
Explanation:
The LOOP type of loop in Snowflake Scripting does not have a built-in termination condition and requires a BREAK statement to stop executing.
What does a masking policy consist of in Snowflake?
Options:
A.
A single data type, with one or more conditions, and one or more masking functions
B.
A single data type, with only one condition, and only one masking function
C.
Multiple data types, with only one condition, and one or more masking functions
D.
Multiple data types, with one or more conditions, and one or more masking functions
Show Answer
Answer:
A
Explanation:
A masking policy in Snowflake consists of a single data type, with one or more conditions, and one or more masking functions. These components define how the data is masked based on the specified conditions.
Which Snowflake object does not consume any storage costs?
Show Answer
Answer:
C
Explanation:
Temporary tables do not consume any storage costs in Snowflake because they only exist for the duration of the session that created them and are automatically dropped when the session ends, thus incurring no long-term storage charges.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which commands are restricted in owner's rights stored procedures? (Select TWO).
Show Answer
Answer:
A, E
Explanation:
In owner’s rights stored procedures, certain commands are restricted to maintain security and integrity. The SHOW and DESCRIBE commands are limited because they can reveal metadata and structure information that may not be intended for all roles.
How can a Snowflake administrator determine which user has accessed a database object that contains sensitive information?
Options:
A.
Review the granted privileges to the database object.
B.
Review the row access policy for the database object.
C.
Query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema.
D.
Query the REPLICATION_USAGE_HISTORY view in the ORGANIZATION_USAGE schema.
Show Answer
Answer:
C
Explanation:
To determine which user has accessed a database object containing sensitive information, a Snowflake administrator can query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema, which provides information about access to database objects.
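For example (the seven-day window is illustrative):
SELECT user_name,
       query_start_time,
       direct_objects_accessed
FROM snowflake.account_usage.access_history
WHERE query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP());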
Which solution improves the performance of point lookup queries that return a small number of rows from large tables using highly selective filters?
Options:
C.
Query acceleration service
D.
Search optimization service
Show Answer
Answer:
D
Explanation:
The search optimization service improves the performance of point lookup queries on large tables by using selective filters to quickly return a small number of rows. It creates an optimized data structure that helps in pruning the micro-partitions that do not contain the queried values.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the purpose of a Query Profile?
Options:
A.
To profile how many times a particular query was executed and analyze its usage statistics over time.
B.
To profile a particular query to understand the mechanics of the query, its behavior, and performance.
C.
To profile the user and/or executing role of a query and all privileges and policies applied on the objects within the query.
D.
To profile which queries are running in each warehouse and identify proper warehouse utilization and sizing for better performance and cost balancing.
Show Answer
Answer:
B
Explanation:
The purpose of a Query Profile is to provide a detailed analysis of a particular query’s execution plan, including the mechanics, behavior, and performance. It helps in identifying potential performance bottlenecks and areas for optimization.
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
Show Answer
Answer:
D
Explanation:
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake.
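For example (stage, path, and expiration are hypothetical):
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/file1.pdf', 3600);  -- URL expires after 3600 seconds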
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is a directory table in Snowflake?
Options:
A.
A separate database object that is used to store file-level metadata
B.
An object layered on a stage that is used to store file-level metadata
C.
A database object with grantable privileges for unstructured data tasks
D.
A Snowflake table specifically designed for storing unstructured files
Show Answer
Answer:
B
Explanation:
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage.
A column named "Data" contains VARIANT data and stores values as follows:
How will Snowflake extract the employee's name from the column data?
Show Answer
Answer:
D
Explanation:
In Snowflake, to extract a specific value from a VARIANT column, you use the column name followed by a colon and then the key. The keys are case-sensitive. Therefore, to extract the employee’s name from the “Data” column, the correct syntax is data:employee.name.
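For example, assuming the column stores an object such as {"employee": {"name": "Alice"}} (the shape is inferred, since the sample values are not shown above):
SELECT data:employee.name::STRING FROM my_table;  -- keys are case-sensitive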
What will prevent unauthorized access to a Snowflake account from an unknown source?
Options:
C.
Multi-Factor Authentication (MFA)
D.
Role-Based Access Control (RBAC)
Show Answer
Answer:
A
Explanation:
A network policy in Snowflake is used to restrict access to the Snowflake account from unauthorized or unknown sources. It allows administrators to specify allowed IP address ranges, thus preventing access from any IP addresses not listed in the policy.
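A minimal sketch (the IP ranges are documentation placeholders):
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('203.0.113.0/24')
  BLOCKED_IP_LIST = ('203.0.113.99');
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;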
What factors impact storage costs in Snowflake? (Select TWO).
Options:
B.
The storage file format
C.
The cloud region used by the account
D.
The type of data being stored
E.
The cloud platform being used
Show Answer
Answer:
A, C
Explanation:
The factors that impact storage costs in Snowflake include the account type (Capacity or On Demand) and the cloud region used by the account. These factors determine the rate at which storage is billed, with different regions potentially having different rates.
A user with which privileges can create or manage other users in a Snowflake account? (Select TWO).
Show Answer
Answer:
D, E
Explanation:
A user with the OWNERSHIP privilege on a user object or the CREATE USER privilege on the account can create or manage other users in a Snowflake account.
Which privilege must be granted by one role to another role, and cannot be revoked?
Show Answer
Answer:
C
Explanation:
The OWNERSHIP privilege is unique in that it must be granted by one role to another and cannot be revoked. This ensures that the transfer of ownership is deliberate and permanent, reflecting the importance of ownership in managing access and permissions.
How is unstructured data retrieved from data storage?
Options:
A.
SQL functions like the GET command can be used to copy the unstructured data to a location on the client.
B.
SQL functions can be used to create different types of URLs pointing to the unstructured data. These URLs can be used to download the data to a client.
C.
SQL functions can be used to retrieve the data from the query results cache. When the query results are output to a client, the unstructured data will be output to the client as files.
D.
SQL functions can call on different web extensions designed to display different types of files as a web page. The web extensions will allow the files to be downloaded to the client.
Show Answer
Answer:
B
Explanation:
Unstructured data stored in Snowflake can be retrieved by using SQL functions to generate URLs that point to the data. These URLs can then be used to download the data directly to a client.
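For example (stage and path are hypothetical), the different URL types are generated with:
SELECT BUILD_STAGE_FILE_URL(@my_stage, '/reports/file1.pdf');   -- permanent file URL
SELECT BUILD_SCOPED_FILE_URL(@my_stage, '/reports/file1.pdf');  -- temporary scoped URL
SELECT GET_PRESIGNED_URL(@my_stage, 'reports/file1.pdf', 3600); -- pre-signed URL with expiration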
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes select * from TBL1 ;?
Options:
A.
The temporary table will take precedence over the permanent table.
B.
The permanent table will take precedence over the temporary table.
C.
An error will say there cannot be two tables with the same name in a schema.
D.
The table that was created most recently will take precedence over the older table.
Show Answer
Answer:
A
Explanation:
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence over the permanent table within the session where the temporary table was created.
Which data types can be used in Snowflake to store semi-structured data? (Select TWO)
Show Answer
Answer:
A, E
Explanation:
Snowflake supports the storage of semi-structured data using the ARRAY and VARIANT data types. The ARRAY data type can directly contain VARIANT, and thus indirectly contain any other data type, including itself. The VARIANT data type can store a value of any other type, including OBJECT and ARRAY, and is often used to represent semi-structured data formats like JSON, Avro, ORC, Parquet, or XML.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What function can be used with the recursive argument to return a list of distinct key names in all nested elements in an object?
Show Answer
Answer:
A
Explanation:
The FLATTEN function can be used with the recursive argument to return a list of distinct key names in all nested elements within an object. This function is particularly useful for working with semi-structured data in Snowflake
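For example (table and column names are hypothetical; array elements have NULL keys, hence the filter):
SELECT DISTINCT f.key
FROM my_table t,
     LATERAL FLATTEN(INPUT => t.v, RECURSIVE => TRUE) f
WHERE f.key IS NOT NULL;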
Who can grant object privileges in a regular schema?
Show Answer
Answer:
A
Explanation:
In a regular schema within Snowflake, the object owner has the privilege to grant object privileges. The object owner is typically the role that created the object or to whom the ownership of the object has been transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What objects in Snowflake are supported by Dynamic Data Masking? (Select TWO).
Show Answer
Answer:
A, C
Explanation:
Dynamic Data Masking in Snowflake supports tables and views. These objects can have masking policies applied to their columns to dynamically mask data at query time.
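A minimal sketch (policy, table, view, and role names are illustrative):
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('ANALYST') THEN val ELSE '*** MASKED ***' END;
ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;
ALTER VIEW user_view MODIFY COLUMN email SET MASKING POLICY email_mask;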
Which Snowflake function will parse a JSON-null into a SQL-null?
Show Answer
Answer:
D
Explanation:
The STRIP_NULL_VALUE function in Snowflake is used to convert a JSON null value into a SQL NULL value.