
SAA-C03 Question Bank


AWS Certified Solutions Architect - Associate (SAA-C03) Questions and Answers

Question 169

A company's application runs on Amazon EC2 instances that are in multiple Availability Zones. The application needs to ingest real-time data from third-party applications.

The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket.

Which solution will meet these requirements?

Options:

A.

Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams. Specify the S3 bucket as the destination of the delivery streams.

B.

Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the target endpoint. Set the migration type to migrate existing data and replicate ongoing changes.

C.

Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from the EC2 instances to the S3 bucket.

D.

Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from the application. Specify the S3 bucket as the destination of the delivery streams.
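For reference, the sketch below shows how the Kinesis Data Streams → Kinesis Data Firehose → S3 pipeline described in option A could be wired up with boto3. The stream name, role ARNs, account ID, and bucket name are hypothetical placeholders, and the IAM roles are assumed to exist already.

```python
# Minimal sketch of option A's ingestion pipeline (hypothetical names).
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

# Create a Kinesis data stream that the third-party applications write to.
kinesis.create_stream(StreamName="ingest-stream", ShardCount=2)

# Create a Firehose delivery stream that consumes the data stream and
# delivers the raw records to the S3 bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="ingest-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/ingest-stream",
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-read-role",
    },
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-write-role",
        "BucketARN": "arn:aws:s3:::example-raw-data-bucket",
    },
)
```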

Question 170

A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user.

Which solution provides the appropriate user access MOST cost-effectively?

Options:

A.

Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.

B.

Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.

C.

Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.

D.

Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
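As an illustration of the pattern in option B, the following boto3 sketch uploads a photo to the S3 Intelligent-Tiering storage class and records its metadata and S3 location in DynamoDB. The bucket name, table name, and key schema are assumptions.

```python
# Minimal sketch of option B's pattern (hypothetical bucket and table names).
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PhotoMetadata")  # assumed to exist with "photo_id" as the partition key

def upload_photo(photo_id: str, file_path: str, uploader: str) -> None:
    key = f"photos/{photo_id}.jpg"
    # INTELLIGENT_TIERING lets S3 move each photo between access tiers
    # automatically based on how often it is actually viewed.
    s3.upload_file(
        file_path,
        "example-photo-bucket",
        key,
        ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
    )
    # Keep the metadata and the object's S3 location in DynamoDB so the
    # application can decide which photos to display without reading the
    # objects themselves.
    table.put_item(
        Item={"photo_id": photo_id, "uploader": uploader, "s3_key": key}
    )
```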

Question 171

A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier.

The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization.

Which scaling strategy should a solutions architect recommend to meet these requirements?

Options:

A.

Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.

B.

Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.

C.

Create an automated scheduled scaling action based on the traffic patterns of the web application.

D.

Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
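For context on the combination named in option B, the sketch below creates both a predictive scaling policy and a target tracking policy on the same Auto Scaling group with boto3. The group name, policy names, and 50% CPU target are hypothetical.

```python
# Minimal sketch of predictive scaling plus target tracking (hypothetical names).
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling learns daily and weekly patterns and scales ahead of them.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="forecast-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking reacts to live changes by holding average CPU near the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="live-utilization-scaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```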

Question 172

A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company occasionally needs to use SQL to analyze the log files.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.

B.

Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the S3 bucket.

C.

Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on the data in the S3 bucket.

D.

Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3 bucket.
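To make the moving parts of option C concrete, the sketch below uses boto3 to create and start a Glue crawler over the Parquet prefix and then run an Athena query against the resulting Data Catalog table. The crawler name, IAM role, database, table name, and bucket paths are all hypothetical assumptions.

```python
# Minimal sketch of option C: Glue crawler + Athena (hypothetical names).
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the bucket so the table schema lands in the Glue Data Catalog.
# The "logs_db" database and the crawler IAM role are assumed to exist.
glue.create_crawler(
    Name="log-files-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",
    DatabaseName="logs_db",
    Targets={"S3Targets": [{"Path": "s3://example-log-bucket/parquet/"}]},
)
glue.start_crawler(Name="log-files-crawler")

# Query the cataloged table with standard SQL; Athena reads the Parquet
# files in place, so there is no cluster or data migration to manage.
# "log_files" is the table name the crawler is assumed to have created.
athena.start_query_execution(
    QueryString="SELECT status_code, COUNT(*) FROM logs_db.log_files GROUP BY status_code",
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```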
