
Page: 11 / 15
Total 207 questions

AWS Certified Data Analytics - Specialty Questions and Answers

Question 41

A data analyst notices the following error message while loading data into an Amazon Redshift cluster:

"The bucket you are attempting to access must be addressed using the specified endpoint."

What should the data analyst do to resolve this issue?

Options:

A.

Specify the correct AWS Region for the Amazon S3 bucket by using the REGION option with the COPY command.

B.

Change the Amazon S3 object's ACL to grant the S3 bucket owner full control of the object.

C.

Launch the Redshift cluster in a VPC.

D.

Configure the timeout settings according to the operating system used to connect to the Redshift cluster.
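For study purposes, the REGION clause named in option A tells the COPY command which Region the source bucket lives in. The sketch below builds such a statement; the table, bucket, and IAM role names are hypothetical placeholders, not values from the question.

```python
# Illustrative sketch of a Redshift COPY statement that names the S3
# bucket's Region explicitly via the REGION option (option A's syntax).
# All identifiers below are hypothetical.

def build_copy_statement(table: str, bucket: str, prefix: str,
                         iam_role: str, region: str) -> str:
    """Assemble a COPY statement with an explicit REGION clause."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"REGION '{region}';"
    )

stmt = build_copy_statement(
    table="sales",
    bucket="example-logs-bucket",
    prefix="2023/",
    iam_role="arn:aws:iam::123456789012:role/RedshiftCopyRole",
    region="us-west-2",
)
print(stmt)
```

Without the REGION clause, COPY assumes the bucket is in the same Region as the cluster, which is what produces the endpoint error in the question.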

Question 42

A media company has been performing analytics on log data generated by its applications. The number of concurrent analytics jobs has recently increased, and the performance of existing jobs degrades as new jobs are added. The partitioned data is stored in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA), and the analytic processing is performed on Amazon EMR clusters using the EMR File System (EMRFS) with consistent view enabled. A data analyst has determined that it is taking longer for the EMR task nodes to list objects in Amazon S3.

Which action would MOST likely increase the performance of accessing log data in Amazon S3?

Options:

A.

Use a hash function to create a random string and add that to the beginning of the object prefixes when storing the log data in Amazon S3.

B.

Use a lifecycle policy to change the S3 storage class to S3 Standard for the log data.

C.

Increase the read capacity units (RCUs) for the shared Amazon DynamoDB table.

D.

Redeploy the EMR clusters that are running slowly to a different Availability Zone.
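To make option A concrete: it describes prepending a short hash to each object prefix so that keys no longer share one hot lexical prefix. A minimal sketch of that naming scheme, with a hypothetical log path:

```python
# Sketch of the key-naming scheme described in option A: derive a short
# deterministic hash from the object path and prepend it to the key.
# The path below is a hypothetical example.
import hashlib

def hashed_key(log_path: str, num_chars: int = 4) -> str:
    """Return the key with a short hex-hash prefix, spreading keys
    across many distinct prefixes instead of one."""
    digest = hashlib.md5(log_path.encode()).hexdigest()[:num_chars]
    return f"{digest}/{log_path}"

print(hashed_key("logs/2023/01/15/app-a.log"))
```

Note the trade-off this scheme implies: randomized prefixes defeat date-based partition pruning, which is worth weighing against the other options when studying this question.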

Question 43

A marketing company is storing its campaign response data in Amazon S3. A consistent set of sources has generated the data for each campaign. The data is saved into Amazon S3 as .csv files. A business analyst will use Amazon Athena to analyze each campaign’s data. The company needs the cost of ongoing data analysis with Athena to be minimized.

Which combination of actions should a data analytics specialist take to meet these requirements? (Choose two.)

Options:

A.

Convert the .csv files to Apache Parquet.

B.

Convert the .csv files to Apache Avro.

C.

Partition the data by campaign.

D.

Partition the data by source.

E.

Compress the .csv files.
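Two of the cost levers the options describe, columnar Parquet output and partitioning, can both be applied in a single Athena CTAS statement. The sketch below assembles one; the table names, bucket, and output location are hypothetical.

```python
# Sketch of an Athena CTAS statement that writes Parquet output
# partitioned by a chosen column. All identifiers are hypothetical.

def build_ctas(source_table: str, target_table: str,
               output_location: str, partition_col: str) -> str:
    """Assemble a CREATE TABLE AS SELECT statement with Parquet
    format and a partition column."""
    return (
        f"CREATE TABLE {target_table} "
        f"WITH (format = 'PARQUET', "
        f"external_location = '{output_location}', "
        f"partitioned_by = ARRAY['{partition_col}']) AS "
        f"SELECT * FROM {source_table};"
    )

ctas = build_ctas(
    source_table="raw_responses",
    target_table="responses_parquet",
    output_location="s3://example-analytics-bucket/parquet/",
    partition_col="campaign",
)
print(ctas)
```

Athena bills by data scanned, so a columnar format plus partition pruning directly reduces per-query cost, which is the requirement the question emphasizes.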

Question 44

A company uses Amazon Redshift for its data warehousing needs. ETL jobs run every night to load data, apply business rules, and create aggregate tables for reporting. The company's data analysis, data science, and business intelligence teams use the data warehouse during regular business hours. The workload management is set to auto, and separate queues exist for each team with the priority set to NORMAL.

Recently, sudden spikes of read queries from the data analysis team have been occurring at least twice a day, and queries wait in line for cluster resources. The company needs a solution that enables the data analysis team to avoid query queuing without impacting the latency or query times of the other teams.

Which solution meets these requirements?

Options:

A.

Increase the query priority to HIGHEST for the data analysis queue.

B.

Configure the data analysis queue to enable concurrency scaling.

C.

Create a query monitoring rule to add more cluster capacity for the data analysis queue when queries are waiting for resources.

D.

Use workload management query queue hopping to route the query to the next matching queue.
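For reference, option B's mechanism is configured per WLM queue. The sketch below is a simplified illustration of what such a queue definition might look like; the queue name is hypothetical and the JSON shape is a loose approximation of Redshift's `wlm_json_configuration` parameter, not its exact schema.

```python
# Simplified, illustrative WLM queue definition with concurrency
# scaling enabled for one queue. Queue name is hypothetical; this is
# an approximation of the wlm_json_configuration shape, not the
# authoritative schema.
import json

wlm_config = [
    {
        "name": "data_analysis",
        "priority": "normal",
        # Burst eligible queries to transient capacity when the
        # main cluster's queue backs up:
        "concurrency_scaling": "auto",
    },
    {"auto_wlm": True},
]

print(json.dumps(wlm_config, indent=2))
```

Because the extra capacity is transient and separate from the main cluster, the other teams' queues are unaffected, which matches the constraint stated in the question.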
