What type of account can be used to share data with a consumer who does not have a Snowflake account?
Data provider
Data consumer
Reader
Organization
A Reader account in Snowflake can be used to share data with a consumer who does not have a Snowflake account. Reader accounts are managed accounts created by a data provider for external data consumers, allowing them to access and query the provider's shared data without needing their own Snowflake account. The provider owns the reader account and is responsible for its compute costs.
References:
Snowflake Documentation: Reader Accounts
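As a sketch, a provider can create a reader account and attach it to an existing share roughly as follows (account, share, and credential names are placeholders):

```sql
-- Create a reader account managed by the provider
CREATE MANAGED ACCOUNT reader_acct1
  ADMIN_NAME = 'consumer_admin',
  ADMIN_PASSWORD = 'Str0ngPassw0rd!',
  TYPE = READER;

-- Add the new reader account to an existing share
ALTER SHARE my_share ADD ACCOUNTS = <reader_account_locator>;
```

The locator for the new reader account is returned by CREATE MANAGED ACCOUNT and can also be found with SHOW MANAGED ACCOUNTS.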
What happens when a table or schema with a standard retention period is dropped?
The object is immediately removed from the system.
The object is instantaneously moved to Fail-safe.
The object is retained but all associated data is immediately purged.
The object is retained for the data retention period.
In Snowflake, when a table or schema is dropped, it is not immediately deleted but retained for the configured data retention period, also known as "Time Travel." During this period, users can use commands like UNDROP to recover the dropped object if needed. After the retention period expires, the object is then moved to Fail-safe (if applicable) for an additional seven days before being permanently removed. This feature is intended to provide data protection and recovery options in case of accidental deletions.
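The recovery behavior described above can be illustrated with a short sequence (the table name is a placeholder):

```sql
-- Drop the table; it is retained for its data retention period
DROP TABLE orders;

-- While still within Time Travel, the drop can be reversed
UNDROP TABLE orders;

-- Dropped objects still in Time Travel appear with a DROPPED_ON timestamp
SHOW TABLES HISTORY LIKE 'orders';
```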
Which task is supported by the use of Access History in Snowflake?
Data backups
Cost monitoring
Compliance auditing
Performance optimization
Access History in Snowflake is primarily utilized for compliance auditing. The Access History feature provides detailed logs that track data access and modifications, including queries that read from or write to database objects. This information is crucial for organizations to meet regulatory requirements and to perform audits related to data access and usage.
Role of Access History: Access History logs are designed to help organizations understand who accessed what data and when. This is particularly important for compliance with various regulations that require detailed auditing capabilities.
How Access History Supports Compliance Auditing:
By providing a detailed log of access events, organizations can trace data access patterns, identify unauthorized access, and ensure that data handling complies with relevant data protection laws and regulations.
Access History can be queried to extract specific events, users, time frames, and accessed objects, making it an invaluable tool for compliance officers and auditors.
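A minimal audit query against the Account Usage ACCESS_HISTORY view might look like the following (assumes the querying role has been granted access to the shared SNOWFLAKE database; the seven-day window is illustrative):

```sql
-- Who accessed which objects over the last 7 days
SELECT query_start_time,
       user_name,
       direct_objects_accessed
FROM   snowflake.account_usage.access_history
WHERE  query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY query_start_time DESC;
```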
How can an administrator check for updates (for example, SCIM API requests) sent to Snowflake by the identity provider?
ACCESS_HISTORY
LOAD_HISTORY
QUERY_HISTORY
REST EVENT HISTORY
To monitor updates, such as SCIM API requests sent to Snowflake by the identity provider, an administrator can use the REST EVENT HISTORY feature. This feature allows administrators to query historical data about REST API calls made to Snowflake, including those related to user and role management through SCIM (System for Cross-domain Identity Management).
The REST EVENT HISTORY table function returns information about REST API calls made over a specified period. It is particularly useful for auditing and monitoring purposes, especially when integrating Snowflake with third-party identity providers that use SCIM for automated user provisioning and deprovisioning.
An example query to check for SCIM API requests might look like this:
SELECT * FROM TABLE(information_schema.rest_event_history('scim', DATEADD('hours', -1, CURRENT_TIMESTAMP()), CURRENT_TIMESTAMP(), 200));
This query returns details on SCIM API requests made in the last hour, including the endpoint called, the HTTP method, the actor, and the outcome of each request. (REST_EVENT_HISTORY takes the REST service type as its first argument, rather than exposing a request_type column to filter on.)
Which views are included in the data_sharing_usage schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
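The DATA_SHARING_USAGE schema (in the shared SNOWFLAKE database) contains listing-related views, including MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY; the other options listed belong to the ACCOUNT_USAGE schema. For example:

```sql
-- Inspect recent listing telemetry (requires access to the SNOWFLAKE database)
SELECT *
FROM   snowflake.data_sharing_usage.listing_telemetry_daily
LIMIT  10;
```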
How should a Snowflake user configure a virtual warehouse to be in Maximized mode?
Set the WAREHOUSE_SIZE to 6XL.
Set the STATEMENT_TIMEOUT_IN_SECONDS to 0.
Set the MAX_CONCURRENCY_LEVEL to a value of 12 or larger.
Set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT.
In Snowflake, configuring a virtual warehouse to be in a "Maximized" mode implies maximizing the resources allocated to the warehouse for its duration. This is done to ensure that the warehouse has a consistent amount of compute resources available, enhancing performance for workloads that require a high level of parallel processing or for handling high query volumes.
To configure a virtual warehouse in maximized mode, you should set the same value for both MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT. This configuration ensures that the warehouse operates with a fixed number of clusters, thereby providing a stable and maximized level of compute resources.
Reference to Snowflake documentation on warehouse sizing and scaling:
Warehouse Sizing and Scaling
Understanding Warehouses
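A multi-cluster warehouse runs in Maximized mode when the minimum and maximum cluster counts are equal, for example (warehouse name and size are placeholders):

```sql
CREATE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 3
  MAX_CLUSTER_COUNT = 3;  -- equal values: all 3 clusters run whenever the warehouse is started
```

Setting the two values differently would instead put the warehouse in Auto-scale mode, where clusters start and stop based on load.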
Which command will unload data from a table into an external stage?
PUT
INSERT
COPY INTO
GET
In Snowflake, the COPY INTO <location> command is used to unload (export) data from a Snowflake table to an external stage, such as an S3 bucket, Azure Blob Storage, or Google Cloud Storage. This command allows users to specify the format, file size, and other options for the data being unloaded, making it a flexible solution for exporting data from Snowflake to external storage solutions for further use or analysis. References: Snowflake Documentation on Data Unloading
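A sketch of an unload to a named external stage (stage, table, and path names are placeholders):

```sql
COPY INTO @my_ext_stage/unload/orders_
FROM   orders
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
HEADER = TRUE;
```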
How does Snowflake define its approach to Discretionary Access Control (DAC)?
A defined level of access to an object
An entity in which access can be granted
Each object has an owner, who can in turn grant access to that object.
Access privileges are assigned to roles, which are in turn assigned to users.
Snowflake defines Discretionary Access Control (DAC) as a model in which each object has an owner, and that owner (the role that created the object) can in turn grant access to the object. This is distinct from, and works alongside, Snowflake's Role-based Access Control (RBAC) model, in which access privileges are assigned to roles, which are in turn assigned to users. Together, the two models allow granular control over database access while keeping permissions manageable at scale. References: Snowflake Documentation on Access Control
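Under DAC, the owning role controls who can use its objects; role and object names below are placeholders:

```sql
-- The owner grants access to another role
GRANT SELECT ON TABLE sales.public.orders TO ROLE analyst;

-- Ownership itself can be transferred, handing over that control
GRANT OWNERSHIP ON TABLE sales.public.orders TO ROLE data_engineer;
```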
When sharing data in Snowflake, what privileges does a Provider need to grant along with a share? (Select TWO).
USAGE on the specific tables in the database.
SELECT on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share.
OPERATE on the database and the schema containing the tables to share.
When sharing data in Snowflake, the provider needs to grant the following privileges to the share:
SELECT on the specific tables in the database: this privilege allows consumers of the share to query the tables included in the share. (Note that tables do not have a USAGE privilege; SELECT is the privilege granted on tables to a share.)
USAGE on the database and the schema containing the tables to share: consumers need USAGE at the database and schema levels before they can reach the tables within those schemas.
These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only authorized users can access the specified resources.
Reference to Snowflake documentation on sharing data and managing access:
Data Sharing Overview
Privileges Required for Sharing Data
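A typical grant sequence for a provider looks like the following (share, database, schema, and table names are placeholders):

```sql
CREATE SHARE sales_share;
GRANT USAGE  ON DATABASE sales_db               TO SHARE sales_share;
GRANT USAGE  ON SCHEMA   sales_db.public        TO SHARE sales_share;
GRANT SELECT ON TABLE    sales_db.public.orders TO SHARE sales_share;
```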
Which command is used to remove staged files from a Snowflake stage after a successful data ingestion?
DELETE
DROP
REMOVE
TRUNCATE
The REMOVE command is used in Snowflake to delete files from a stage after they have been successfully ingested into Snowflake tables. This command helps manage storage by allowing users to clean up staged files that are no longer needed, ensuring that the stage does not accumulate unnecessary data over time. Unlike DELETE, DROP, or TRUNCATE commands, which are used for managing data within Snowflake tables or dropping objects, REMOVE specifically targets the management of files in stages. References: Snowflake Documentation on Stages and File Management
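A sketch of cleaning up ingested files from a named stage (stage path and pattern are placeholders):

```sql
-- Delete only the compressed CSV files under the loaded/ path
REMOVE @my_stage/loaded/ PATTERN = '.*\\.csv\\.gz';
```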
What is the purpose of the use of the VALIDATE command?
To view any queries that encountered an error
To verify that a SELECT query will run without error
To prevent a put statement from running if an error occurs
To see all errors from a previously run COPY INTO
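VALIDATE is a table function that returns the errors encountered by a previous COPY INTO load. A minimal example (the table name is a placeholder; '_last' refers to the most recent COPY INTO executed in the current session):

```sql
SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));
```

A specific query ID from the load can also be passed as JOB_ID instead of '_last'.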