Free and Premium Oracle 1z0-1110-23 Dumps Questions Answers

Page: 1 / 6
Total 80 questions

Oracle Cloud Infrastructure Data Science 2023 Professional Questions and Answers

Question 1

You are a data scientist trying to load data into your notebook session. You understand that the Accelerated Data Science (ADS) SDK supports loading various data formats. Which THREE of the following are ADS-supported data formats?

Options:

A.

DOCX

B.

Pandas DataFrame

C.

JSON

D.

Raw Images

E.

XML

Question 2

You have created a conda environment in your notebook session. This is the first time you are working with published conda environments. You have also created an Object Storage bucket with permission to manage the bucket.

Which two commands are required to publish the conda environment?

Options:

A.

odsc conda publish --slug

B.

odsc conda list --override

C.

odsc conda init --bucket_namespace <namespace> --bucket_name

D.

odsc conda create --file manifest.yaml

E.

conda activate /home/datascience/conda/

Question 3

You want to evaluate the relationship between feature values and target variables. You have a large number of observations having a near uniform distribution and the features are highly correlated.

Which model explanation technique should you choose?

Options:

A.

Feature Permutation Importance Explanations

B.

Local Interpretable Model-Agnostic Explanations

C.

Feature Dependence Explanations

D.

Accumulated Local Effects

Question 4

Six months ago, you created and deployed a model that predicts customer churn for a call centre. Initially, it was yielding quality predictions. However, over the last two months, users are questioning the credibility of the predictions.

Which two methods would you employ to verify the accuracy of the model?

Options:

A.

Retrain the model

B.

Validate the model using recent data

C.

Drift monitoring

D.

Redeploy the model

E.

Operational monitoring

Question 5

You have trained a machine learning model on Oracle Cloud Infrastructure (OCI) Data Science, and you want to save the code and associated pickle file in a Git repository. To do this, you have to create a new SSH key pair to use for authentication. Which SSH command would you use to create the public/private key pair in the notebook session?

Options:

A.

ssh-agent

B.

ssh-copy-id

C.

ssh-add

D.

ssh-keygen

Question 6

As a data scientist, you create models for cancer prediction based on mammographic images. Correct identification is crucial in this case. After evaluating two models, you arrive at the following confusion matrix.

• Model 1 has a test accuracy of 80% and a recall of 70%.
• Model 2 has a test accuracy of 75% and a recall of 85%.

Which model would you prefer, and why?

Options:

A.

Model 2, because recall is high.

B.

Model 1, because the test accuracy is high.

C.

Model 2, because recall has more impact on predictions in this use case.

D.

Model 1, because recall has a lesser impact on predictions in this use case.
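For intuition, both metrics can be computed directly from confusion-matrix counts. The counts below are hypothetical (the question does not give raw counts) but are chosen to reproduce the stated percentages:

```python
def accuracy(tp, fp, fn, tn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + fp + fn + tn)

def recall(tp, fn):
    """Fraction of actual positives the model catches."""
    return tp / (tp + fn)

# Hypothetical counts (200 patients, 100 with cancer) matching the stated metrics
m1 = dict(tp=70, fp=10, fn=30, tn=90)   # Model 1: accuracy 0.80, recall 0.70
m2 = dict(tp=85, fp=35, fn=15, tn=65)   # Model 2: accuracy 0.75, recall 0.85

print(accuracy(**m1), recall(m1["tp"], m1["fn"]))  # 0.8 0.7
print(accuracy(**m2), recall(m2["tp"], m2["fn"]))  # 0.75 0.85
```

With these numbers, Model 1 misses 30 of 100 cancers while Model 2 misses only 15, which is why recall outweighs accuracy for this use case.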

Question 7

For your next data science project, you need access to public geospatial images.

Which Oracle Cloud service provides free access to those images?

Options:

A.

Oracle Open Data

B.

Oracle Big Data Service

C.

Oracle Cloud Infrastructure Data Science

D.

Oracle Analytics Cloud

Question 8

As a data scientist, you are working on a global health data set that has data from more than 50 countries. You want to encode three features, such as 'countries', 'race', and 'body organ' as categories. Which option would you use to encode the categorical feature?

Options:

A.

DataFrameLabelEncoder()

B.

auto_transform()

C.

OneHotEncoder()

D.

show_in_notebook()
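One-hot encoding expands each categorical value into its own 0/1 indicator column. A minimal pure-Python sketch of the idea (library implementations such as scikit-learn's OneHotEncoder add many more options; the country names below are illustrative):

```python
def one_hot(values):
    """Map each distinct category to a 0/1 indicator vector."""
    categories = sorted(set(values))                 # fixed column order
    index = {c: i for i, c in enumerate(categories)}
    rows = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1                            # flag this value's column
        rows.append(row)
    return categories, rows

cols, encoded = one_hot(["India", "Kenya", "India", "Brazil"])
print(cols)     # ['Brazil', 'India', 'Kenya']
print(encoded)  # [[0, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]]
```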

Question 9

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

Options:

A.

Define the compute scaling strategy.

B.

Configure the deployment infrastructure.

C.

Define the inference server dependencies.

D.

Execute the inference logic code.
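The convention is that score.py exposes the functions the deployment calls at inference time. A minimal sketch (the pickle filename and return shape are assumptions for illustration):

```python
# score.py -- minimal sketch of the inference entry points
import pickle

def load_model(path="model.pkl"):
    """Deserialize the trained model shipped inside the artifact."""
    with open(path, "rb") as f:
        return pickle.load(f)

def predict(data, model=None):
    """Run inference on incoming data; return JSON-serializable output."""
    model = model or load_model()
    return {"prediction": model.predict(data)}
```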

Question 10

As a data scientist, you use the Oracle Cloud Infrastructure (OCI) Language service to train custom models. Which types of custom models can be trained?

Options:

A.

Image classification, Named Entity Recognition (NER)

B.

Text classification, Named Entity Recognition (NER)

C.

Sentiment Analysis, Named Entity Recognition (NER)

D.

Object detection, Text classification

Question 11

You are a data scientist designing an air traffic control model, and you choose to leverage Oracle AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML pipeline?

Options:

A.

Algorithm selection, Feature selection, Adaptive sampling, Hyperparameter tuning


B.

Adaptive sampling, Algorithm selection, Feature selection, Hyperparameter tuning

C.

Adaptive sampling, Feature selection, Algorithm selection, Hyperparameter tuning

D.

Algorithm selection, Adaptive sampling, Feature selection, Hyperparameter tuning

Question 12

You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a model and need some additional Python libraries for processing genome sequencing data. Which THREE of the following statements are correct with respect to installing additional Python libraries to process the data?

Options:

A.

You can only install libraries using yum and pip as a normal user.

B.

You can install private or custom libraries from your own internal repositories.

C.

OCI Data Science allows root privileges in notebook sessions.

D.

You can install any open source package available on a publicly accessible Python Package Index (PyPI) repository.

E.

You cannot install a library that's not preinstalled in the provided image.

Question 13

You have trained three different models on your data set using Oracle AutoML. You want to visualize the behavior of each of the models, including the baseline model, on the test set. Which class should be used from the Accelerated Data Science (ADS) SDK to visually compare the models?

Options:

A.

EvaluationMetrics

B.

ADSEvaluator

C.

ADSExplainer

D.

ADSTuner

Question 14

You are a data scientist working inside a notebook session and you attempt to pip install a package from a public repository that is not included in your conda environment. After running this command, you get a network timeout error.

What might be missing from your networking configuration?

Options:

A.

FastConnect to an on-premises network.

B.

Primary Virtual Network Interface Card (VNIC).

C.

NAT Gateway with public internet access.

D.

Service Gateway with private subnet access

Question 15

You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog. How would you design the authentication mechanism for the job?

Options:

A.

Package your personal OCI config file and keys in the job artifact.

B.

Use the resource principal of the job run as the signer in the job code, ensuring there is a dynamic group for this job run with appropriate access to Object Storage and the model catalog.

C.

Store your personal OCI config file and keys in the Vault, and access the Vault through the job run resource principal.

D.

Create a pre-authenticated request (PAR) for the Object Storage bucket, and use that in the job code.

Question 16

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

Options:

A.

Create and upload score.py and runtime.yaml.

B.

Create and upload the API signing key and config file.

C.

Import the REST API.

D.

Call the ADS command to enable AI integration

Question 17

The Accelerated Data Science (ADS) model evaluation classes support different types of machine learning modeling techniques. Which three types of modeling techniques are supported by ADS Evaluators?

Options:

A.

Principal Component Analysis

B.

Multiclass Classification

C.

K-means Clustering

D.

Recurrent Neural Network

E.

Binary Classification

F.

Regression Analysis

Question 18

You have an embarrassingly parallel or distributed batch job on a large amount of data that you consider running using Data Science Jobs. What would be the best approach to run the workload?

Options:

A.

Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required.

B.

Create the job in Data Science Jobs and then start the number of simultaneous job runs required for your workload.

C.

Reconfigure the job run, because Data Science Jobs does not support embarrassingly parallel workloads.

D.

Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job.
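The "one job definition, many simultaneous runs" pattern can be sketched locally with a thread pool standing in for the Jobs service; the shard IDs and per-shard work below are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def job_run(shard):
    """One job run: process an independent shard of the data."""
    return shard, sum(range(shard * 10, (shard + 1) * 10))

# One job definition, several simultaneous runs over disjoint shards
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(job_run, range(4)))
print(results)  # {0: 45, 1: 145, 2: 245, 3: 345}
```

Because each shard is independent, the runs never coordinate with each other, which is exactly what makes the workload embarrassingly parallel.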

Question 19

You have a complex Python code project that could benefit from using Data Science Jobs, as it is a repeatable machine learning model training task. The project contains many subfolders and classes. What is the best way to run this project as a Job?

Options:

A.

ZIP the entire code project folder and upload it as a Job artifact. Jobs automatically identifies the __main__ top-level file where the code is run.

B.

Rewrite your code so that it is a single executable Python or Bash/Shell script file.

C.

ZIP the entire code project folder and upload it as a Job artifact on job creation. Jobs identifies the main executable file automatically.

D.

ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file.

Question 20

As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?

Options:

A.

Create a new job every time you need to run your code and pass the parameters as environment variables.

B.

Create a new job by setting the required parameters in your code and create a new job for every code change.

C.

Create your code to expect different parameters either as environment variables or as command line arguments, which are set on every job run with different values.

D.

Create your code to expect different parameters as command line arguments and create a new job every time you run the code.
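Accepting parameters both as environment variables and as command line arguments keeps a single job reusable across runs. A minimal sketch (the variable names LEARNING_RATE and EPOCHS are illustrative, not a Jobs convention):

```python
import argparse
import os

def get_params(argv=None):
    """Resolve hyperparameters: CLI arguments override environment variables."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning-rate", type=float,
                        default=float(os.environ.get("LEARNING_RATE", "0.01")))
    parser.add_argument("--epochs", type=int,
                        default=int(os.environ.get("EPOCHS", "10")))
    return parser.parse_args(argv)

params = get_params([])          # no CLI args: fall back to env vars / defaults
print(params.learning_rate, params.epochs)
```

On each job run you would then supply different values via the run's environment variables or command line arguments, without touching the job definition or the code.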

Question 21

Which two statements are true about published conda environments?

Options:

A.

They are curated by Oracle Cloud Infrastructure (OCI) Data Science.

B.

The odsc conda init command is used to configure the location of published conda environments.

C.

Your notebook session acts as the source to share published conda environments with team members.

D.

You can only create a published conda environment by modifying a Data Science conda environment.

E.

In addition to service job run environment variables, conda environment variables can be used in Data Science Jobs.

Question 22

As a data scientist, you are working on a global health data set that has data from more than 50 countries. You want to encode three features such as 'countries', 'race' and 'body organ' as categories.

Which option would you use to encode the categorical feature?

Options:

A.

OneHotEncoder()

B.

DataFrameLabelEncoder()

C.

show_in_notebook()

D.

auto_transform()

Question 23

Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from reference libraries and index websites such as scikit-learn?

Options:

A.

DataLabeling

B.

DatasetBrowser

C.

SecretKeeper

D.

ADSTuner

Question 24

While reviewing your data, you discover that your data set has a class imbalance. You are aware that the Accelerated Data Science (ADS) SDK provides multiple built-in automatic transformation tools for data set transformation. Which would be the right tool to correct any imbalance between the classes?

Options:

A.

visualize_transforms()

B.

auto_transform()

C.

sample()

D.

suggest_recommendations()
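Random up-sampling of the minority class is one common way to correct class imbalance. A pure-Python sketch of the idea (this illustrates the technique only; it does not reproduce the ADS transformation API):

```python
import random

def upsample(rows, label_index=-1, seed=42):
    """Duplicate minority-class rows at random until all classes are balanced."""
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_index], []).append(row)
    target = max(len(group) for group in by_class.values())
    rng = random.Random(seed)                 # fixed seed for reproducibility
    balanced = []
    for group in by_class.values():
        balanced.extend(group)                # keep every original row
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

data = [[1.0, "pos"]] * 2 + [[0.0, "neg"]] * 8
labels = [row[-1] for row in upsample(data)]
print(labels.count("pos"), labels.count("neg"))  # 8 8
```

Down-sampling the majority class is the mirror-image option; which one is appropriate depends on how much data you can afford to discard.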
