
Selected 1z0-1110-23 Oracle Cloud Infrastructure Certification Questions and Answers


Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 17

The Accelerated Data Science (ADS) model evaluation classes support different types of machine learning modeling techniques. Which three types of modeling techniques are supported by ADS Evaluators?

Options:

A. Principal Component Analysis
B. Multiclass Classification
C. K-means Clustering
D. Recurrent Neural Network
E. Binary Classification
F. Regression Analysis
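
The three supported techniques are binary classification, multiclass classification, and regression (options B, E, and F). Below is a minimal sketch of the typical evaluation flow, assuming the ADSEvaluator, ADSModel, and ADSData classes from the oracle-ads SDK; the data setup is illustrative only:

```python
# Minimal sketch of the ADS evaluation flow, assuming the ADSEvaluator,
# ADSModel, and ADSData classes from the oracle-ads SDK
# (pip install oracle-ads). The data setup below is illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from ads.common.data import ADSData
from ads.common.model import ADSModel
from ads.evaluations.evaluator import ADSEvaluator

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Wrap a fitted scikit-learn estimator so ADS can evaluate it.
clf = LogisticRegression().fit(X_train, y_train)
model = ADSModel.from_estimator(clf)

# The evaluator infers the problem type (binary classification here);
# multiclass classification and regression follow the same pattern.
evaluator = ADSEvaluator(ADSData(X_test, y_test), models=[model])
print(evaluator.metrics)
```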

Question 18

You have an embarrassingly parallel or distributed batch job over a large amount of data that you are considering running with Data Science Jobs. What would be the best approach to run the workload?

Options:

A. Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required.
B. Create the job in Data Science Jobs and then start the number of simultaneous job runs required for your workload.
C. Reconfigure the job run, because Data Science Jobs does not support embarrassingly parallel workloads.
D. Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job.
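
The recommended pattern is option B: a single job definition can back many simultaneous job runs. A minimal sketch of fanning one job out across data shards with the oci Python SDK; the OCIDs and the SHARD_INDEX environment variable are hypothetical placeholders:

```python
# Sketch: fan one Data Science Job out into many simultaneous job runs,
# one per shard of an embarrassingly parallel batch workload. Assumes
# the oci Python SDK; the OCIDs and the SHARD_INDEX environment
# variable are hypothetical placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
client = oci.data_science.DataScienceClient(config)

N_SHARDS = 10  # number of parallel runs to launch

for shard in range(N_SHARDS):
    details = oci.data_science.models.CreateJobRunDetails(
        project_id="ocid1.datascienceproject.oc1..example",
        compartment_id="ocid1.compartment.oc1..example",
        job_id="ocid1.datasciencejob.oc1..example",
        display_name=f"batch-shard-{shard}",
        # Tell each run which slice of the data it owns.
        job_configuration_override_details=oci.data_science.models.DefaultJobConfigurationDetails(
            environment_variables={"SHARD_INDEX": str(shard)}
        ),
    )
    run = client.create_job_run(details).data
    print(run.id, run.lifecycle_state)
```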

Question 19

You have a complex Python code project that could benefit from using Data Science Jobs, as it is a repeatable machine learning model training task. The project contains many subfolders and classes. What is the best way to run this project as a Job?

Options:

A. ZIP the entire code project folder and upload it as a Job artifact. Jobs automatically identifies the __main__ top level where the code is run.
B. Rewrite your code so that it is a single executable Python or Bash/Shell script file.
C. ZIP the entire code project folder and upload it as a Job artifact on job creation. Jobs identifies the main executable file automatically.
D. ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file.
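
Option D reflects the documented approach: upload the zipped project as the job artifact and point JOB_RUN_ENTRYPOINT at the main script inside the archive. A hedged sketch with the oci Python SDK; the OCIDs, shape, subnet, and file paths are placeholders:

```python
# Sketch: create a Job from a zipped multi-file project and point
# JOB_RUN_ENTRYPOINT at the main script inside the archive. Assumes
# the oci Python SDK; OCIDs, shape, subnet, and paths are placeholders.
import oci

config = oci.config.from_file()
client = oci.data_science.DataScienceClient(config)

job = client.create_job(
    oci.data_science.models.CreateJobDetails(
        project_id="ocid1.datascienceproject.oc1..example",
        compartment_id="ocid1.compartment.oc1..example",
        display_name="training-project",
        job_configuration_details=oci.data_science.models.DefaultJobConfigurationDetails(
            # The run executes this file from inside the uploaded ZIP.
            environment_variables={"JOB_RUN_ENTRYPOINT": "project/train.py"},
        ),
        job_infrastructure_configuration_details=(
            oci.data_science.models.StandaloneJobInfrastructureConfigurationDetails(
                shape_name="VM.Standard2.1",
                subnet_id="ocid1.subnet.oc1..example",
                block_storage_size_in_gbs=50,
            )
        ),
    )
).data

# Upload the zipped project folder as the job artifact.
with open("project.zip", "rb") as f:
    client.create_job_artifact(
        job.id, f, content_disposition="attachment; filename=project.zip"
    )
```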

Question 20

As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?

Options:

A. Create a new job every time you need to run your code and pass the parameters as environment variables.
B. Set the required parameters in your code and create a new job for every code change.
C. Write your code to expect different parameters either as environment variables or as command line arguments, which are set on every job run with different values.
D. Write your code to expect different parameters as command line arguments and create a new job every time you run the code.
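
Option C keeps a single job reusable across runs. A minimal sketch of an entrypoint that accepts hyperparameters through either channel; the parameter names are illustrative, not part of any OCI contract:

```python
# Sketch: a training entrypoint that reads hyperparameters from either
# command line arguments or environment variables, so one Job can be
# reused across runs with different values. The parameter names are
# illustrative, not part of any OCI contract.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument(
    "--learning-rate", type=float,
    default=float(os.environ.get("LEARNING_RATE", "0.01")),
)
parser.add_argument(
    "--epochs", type=int,
    default=int(os.environ.get("EPOCHS", "10")),
)
args = parser.parse_args()

print(f"Training with lr={args.learning_rate}, epochs={args.epochs}")
```

On each job run, the values can then be supplied through the run's configuration override (environment variables or command line arguments) without changing the job definition or the code.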
