Oracle Cloud Infrastructure 2025 Generative AI Professional Questions and Answers

Question 1

Given the following code block:

history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)

Which statement is NOT true about StreamlitChatMessageHistory?

Options:

A.

StreamlitChatMessageHistory will store messages in Streamlit session state at the specified key.

B.

A given StreamlitChatMessageHistory will NOT be persisted.

C.

A given StreamlitChatMessageHistory will not be shared across user sessions.

D.

StreamlitChatMessageHistory can be used in any type of LLM application.
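For reference, a minimal sketch of how these two classes fit together inside a Streamlit app. The import paths are assumptions and have moved between LangChain releases, so check your installed version:

# Minimal sketch of wiring StreamlitChatMessageHistory into
# ConversationBufferMemory. Import paths are assumptions; run this
# inside a Streamlit app (streamlit run app.py), since the history
# lives in st.session_state.
import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain.memory import ConversationBufferMemory

# Messages are kept in st.session_state under the given key, so the
# history exists only for the current user session and is not persisted.
history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)

history.add_user_message("Hello!")
history.add_ai_message("Hi! How can I help?")
st.write([m.content for m in history.messages])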

Question 2

In which scenario is soft prompting especially appropriate compared to other training styles?

Options:

A.

When there is a significant amount of labeled, task-specific data available.

B.

When the model needs to be adapted to perform well in a different domain it was not originally trained on.

C.

When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training.

D.

When the model requires continued pre-training on unlabeled data.
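For context, soft prompting freezes the base model and trains only a small set of continuous "virtual token" embeddings that are prepended to the input. A toy PyTorch sketch of the idea; all names and shapes are illustrative, not any specific library's API:

# Toy sketch of soft prompting: the base model stays frozen and only a
# small matrix of "virtual token" embeddings is trained.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # The only trainable parameters: one vector per virtual token.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every sequence in the batch.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

soft = SoftPrompt(num_virtual_tokens=20, hidden_size=768)
embeds = torch.randn(2, 10, 768)   # stand-in for frozen embedding output
print(soft(embeds).shape)          # torch.Size([2, 30, 768])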

Question 3

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

Options:

A.

To increase the accuracy of the most likely word in the vocabulary

B.

To determine the number of words to generate in a single decoding step

C.

To decide to which part of speech the next word should belong

D.

To adjust the sharpness of probability distribution over vocabulary when selecting the next word
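A small worked demo of the concept: temperature divides the logits before the softmax, so lower values sharpen the distribution toward the most likely token and higher values flatten it (numbers invented):

# Demo of how temperature reshapes the next-token distribution:
# softmax(logits / T).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
for t in (0.2, 1.0, 2.0):
    print(f"T={t}: {np.round(softmax(logits / t), 3)}")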

Question 4

How are chains traditionally created in LangChain?

Options:

A.

By using machine learning algorithms

B.

Declaratively, with no coding required

C.

Using Python classes, such as LLMChain and others

D.

Exclusively through third-party software integrations
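For reference, a sketch of the traditional class-based style using LLMChain. FakeListLLM is a test stub from langchain_community (import paths assumed for recent versions) so the example runs without a real model:

# Sketch of the traditional, class-based way to build a chain.
from langchain.chains import LLMChain
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import PromptTemplate

llm = FakeListLLM(responses=["LangChain chains LLM calls together."])
prompt = PromptTemplate.from_template("Summarize in one line: {text}")

chain = LLMChain(llm=llm, prompt=prompt)   # classic Python-class style
print(chain.invoke({"text": "LangChain composes prompts and models."}))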

Question 5

How does the structure of vector databases differ from traditional relational databases?

Options:

A.

A vector database stores data in a linear or tabular format.

B.

It is not optimized for high-dimensional spaces.

C.

It is based on distances and similarities in a vector space.

D.

It uses simple row-based data storage.
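A toy illustration of the core difference: a vector store ranks items by distance or similarity in a vector space rather than matching rows by exact column values:

# Brute-force sketch of what a vector database does at its core.
# Vectors are toy 3-D examples.
import numpy as np

docs = {
    "cats":    np.array([0.9, 0.1, 0.0]),
    "dogs":    np.array([0.8, 0.2, 0.1]),
    "finance": np.array([0.0, 0.1, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(docs, key=lambda k: cosine_sim(query, docs[k]), reverse=True)
print(ranked)  # ['dogs', 'cats', 'finance'] -- nearest in vector space first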

Question 6

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

Options:

A.

Selecting a random word from the entire vocabulary at each step

B.

Picking a word based on its position in a sentence structure

C.

Choosing the word with the highest probability at each step of decoding

D.

Using a weighted random selection based on a modulated distribution
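Greedy decoding in miniature: at every step, take the argmax of the next-token distribution, with no sampling involved (probabilities invented):

# Greedy decoding in one line per step: always take the argmax token.
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
step_probs = np.array([0.1, 0.6, 0.2, 0.1])     # model output for one step

next_token = vocab[int(np.argmax(step_probs))]  # deterministic choice
print(next_token)  # "cat" -- highest-probability token at this step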

Question 7

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?

Options:

A.

Updates the weights of the base model during the fine-tuning process

B.

Serves as a designated point for user requests and model responses

C.

Evaluates the performance metrics of the custom models

D.

Hosts the training data for fine-tuning custom models

Question 8

What is the primary purpose of LangSmith Tracing?

Options:

A.

To generate test cases for language models

B.

To analyze the reasoning process of language models

C.

To debug issues in language model outputs

D.

To monitor the performance of language models
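For context, LangSmith tracing is typically switched on through environment variables before the chain runs. The variable names below follow the LangSmith docs at the time of writing; verify them against the current documentation:

# Typical way to enable LangSmith tracing for a LangChain app.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-chatbot"  # optional grouping

# Any chain invoked after this point is traced, letting you inspect
# each intermediate step (prompt, model call, parser) in LangSmith.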

Question 9

What is the function of the Generator in a text generation system?

Options:

A.

To collect user queries and convert them into database search terms

B.

To rank the information based on its relevance to the user's query

C.

To generate human-like text using the information retrieved and ranked, along with the user's original query

D.

To store the generated responses for future use

Question 10

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

Options:

A.

The model's ability to generate imaginative and creative content

B.

A technique used to enhance the model's performance on specific tasks

C.

The process by which the model visualizes and describes images in detail

D.

The phenomenon where the model generates factually incorrect information or unrelated content as if it were true

Question 11

What does accuracy measure in the context of fine-tuning results for a generative model?

Options:

A.

The number of predictions a model makes, regardless of whether they are correct or incorrect

B.

The proportion of incorrect predictions made by the model during an evaluation

C.

How many predictions the model made correctly out of all the predictions in an evaluation

D.

The depth of the neural network layers used in the model

Question 12

What does the Ranker do in a text generation system?

Options:

A.

It generates the final text based on the user's query.

B.

It sources information from databases to use in text generation.

C.

It evaluates and prioritizes the information retrieved by the Retriever.

D.

It interacts with the user to understand the query better.
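Questions 9 and 12 describe the Retriever, Ranker, and Generator roles of a retrieval-augmented generation pipeline. A toy end-to-end sketch; every function here is hypothetical and only illustrates which stage owns which responsibility:

def retrieve(query: str, corpus: list[str]) -> list[str]:
    # Retriever: source candidate passages (here: naive keyword match).
    return [doc for doc in corpus if any(w in doc.lower() for w in query.lower().split())]

def rank(query: str, candidates: list[str]) -> list[str]:
    # Ranker: evaluate and prioritize what the Retriever found
    # (here: by word overlap with the query).
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(candidates, key=overlap, reverse=True)

def generate(query: str, context: list[str]) -> str:
    # Generator: produce the final human-like text from the ranked
    # context plus the original query (an LLM call in a real system).
    return f"Answer to {query!r} based on: {context[0]}"

corpus = ["Vector databases index embeddings.", "Cats sleep a lot."]
ranked = rank("what do vector databases do", retrieve("vector databases", corpus))
print(generate("what do vector databases do", ranked))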

Question 13

What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)?

Options:

A.

It allows the LLM to access a larger dataset.

B.

It eliminates the need for any training or computational resources.

C.

It provides examples in the prompt to guide the LLM to better performance with no training cost.

D.

It significantly reduces the latency for each model request.
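A minimal illustration of the technique: the worked examples live in the prompt itself, so no weights are updated and no training cost is incurred (examples invented):

# Few-shot prompting in its simplest form: in-context examples guide
# the model toward the desired behavior and output format.
few_shot_prompt = """Classify the sentiment of each review.

Review: "Loved it, would buy again." -> positive
Review: "Broke after two days." -> negative
Review: "Does exactly what it says on the box." ->"""

# Sending few_shot_prompt to any LLM steers it using only the two
# in-context examples above.
print(few_shot_prompt)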

Question 14

Consider the training approaches Fine-tuning, Parameter Efficient Fine-Tuning (PEFT), Soft Prompting, and continuous pretraining. Which statement accurately reflects the differences between these approaches in terms of the number of parameters modified and the type of data used?

Options:

A.

Fine-tuning and continuous pretraining both modify all parameters and use labeled, task-specific data.

B.

Parameter Efficient Fine-Tuning and Soft Prompting modify all parameters of the model using unlabeled data.

C.

Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter Efficient Fine-Tuning updates only a few new parameters, also with labeled, task-specific data.

D.

Soft Prompting and continuous pretraining are both methods that require no modification to the original parameters of the model.

Question 15

Given the following code:

chain = prompt | llm

Which statement is true about LangChain Expression Language (LCEL)?

Options:

A.

LCEL is a programming language used to write documentation for LangChain.

B.

LCEL is a legacy method for creating chains in LangChain.

C.

LCEL is a declarative and preferred way to compose chains together.

D.

LCEL is an older Python library for building Large Language Models.
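For reference, a runnable sketch of LCEL composition; FakeListLLM (import path assumed) stands in for a real model:

# LCEL in action: the | operator composes runnables declaratively.
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a fact about {topic}.")
llm = FakeListLLM(responses=["Jupiter is the largest planet."])

chain = prompt | llm                 # declarative composition (LCEL)
print(chain.invoke({"topic": "Jupiter"}))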

Question 16

Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?

Options:

A.

Reduced model complexity

B.

Enhanced generalization to unseen data

C.

Increased model interpretability

D.

Faster training time and lower cost

Question 17

What is the purpose of frequency penalties in language model outputs?

Options:

A.

To ensure that tokens that appear frequently are used more often

B.

To penalize tokens that have already appeared, based on the number of times they have been used

C.

To reward the tokens that have never appeared in the text

D.

To randomly penalize some tokens to increase the diversity of the text
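A toy sketch of the mechanism: each candidate token's logit is reduced in proportion to how often it has already appeared, which discourages repetition. The formula and numbers are illustrative; real implementations differ in detail:

# Sketch of a frequency penalty applied to next-token logits.
import numpy as np

logits = np.array([3.0, 2.5, 1.0])   # candidate tokens: "cat", "dog", "fox"
counts = np.array([2, 0, 1])         # times each already appeared
penalty = 0.7

adjusted = logits - penalty * counts # repeated tokens lose probability mass
print(adjusted)                      # [1.6 2.5 0.3] -- "dog" now leads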

Question 18

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

Options:

A.

25 unit hours

B.

40 unit hours

C.

20 unit hours

D.

30 unit hours
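The arithmetic behind questions like this: unit hours = number of units multiplied by hours the cluster is active. The OCI documentation (at the time of writing) provisions a fine-tuning dedicated AI cluster with 2 units; treat that figure as an assumption and verify against current docs:

# Unit hours = units x hours active.
units = 2          # assumed fine-tuning cluster size per OCI docs
hours_active = 10
print(units * hours_active)   # 20 unit hours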

Question 19

What do prompt templates use for templating in language model applications?

Options:

A.

Python's list comprehension syntax

B.

Python's str.format syntax

C.

Python's lambda functions

D.

Python's class and object structures
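A quick demonstration: the same {placeholder} syntax works with plain str.format and with LangChain's PromptTemplate (import path assumed for recent versions):

# Prompt templates use Python str.format-style {placeholders}.
from langchain_core.prompts import PromptTemplate

template = "Translate {text} into {language}."
print(template.format(text="hello", language="French"))

prompt = PromptTemplate.from_template(template)
print(prompt.format(text="hello", language="French"))  # same syntax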

Question 20

Which statement best describes the role of encoder and decoder models in natural language processing?

Options:

A.

Encoder models and decoder models both convert sequences of words into vector representations without generating new text.

B.

Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.

C.

Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.

D.

Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text.

Question 21

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

Options:

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and is typically used when no training data exists.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.
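For context, a sketch of PEFT in practice using LoRA via Hugging Face's peft library: the base weights stay frozen and only small adapter matrices are trained on labeled, task-specific data. The model name and target modules are assumptions chosen for GPT-2:

# PEFT via LoRA: only the injected adapter parameters are trainable.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")   # example base model
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])

model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the full model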

Question 22

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

Options:

A.

Document Loaders

B.

Vector Stores

C.

LangChain Application

D.

LLMs

Question 23

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

Options:

A.

"Top k" and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.

B.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

C.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

D.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

Question 24

What does a cosine distance of 0 indicate about the relationship between two embeddings?

Options:

A.

They are completely dissimilar

B.

They are unrelated

C.

They are similar in direction

D.

They have the same magnitude
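A quick check of the definition: cosine distance is 1 - cosine similarity, so two vectors pointing the same way have distance 0 regardless of their magnitudes:

# Cosine distance of two same-direction vectors is 0.
import numpy as np

def cosine_distance(a, b):
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = 10 * a                               # same direction, larger magnitude
print(round(cosine_distance(a, b), 6))   # 0.0 -- similar in direction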

Question 25

Which statement is true about string prompt templates and their capability regarding variables?

Options:

A.

They can only support a single variable at a time.

B.

They are unable to use any variables.

C.

They support any number of variables, including the possibility of having none.

D.

They require a minimum of two variables to function properly.
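A quick demonstration with zero variables and with two (import path assumed for recent LangChain versions):

# String prompt templates accept any number of variables, including none.
from langchain_core.prompts import PromptTemplate

no_vars = PromptTemplate.from_template("Tell me a joke.")
two_vars = PromptTemplate.from_template("Write a {tone} note to {name}.")

print(no_vars.format())                              # works with no variables
print(two_vars.format(tone="friendly", name="Ada"))  # works with several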

Question 26

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

Options:

A.

GPUs are shared with other customers to maximize resource utilization.

B.

The GPUs allocated for a customer’s generative AI tasks are isolated from other GPUs.

C.

GPUs are used exclusively for storing large datasets, not for computation.

D.

Each customer's GPUs are connected via a public Internet network for ease of access.
