
Pass Databricks-Generative-AI-Engineer-Associate Exam Guide

Databricks Certified Generative AI Engineer Associate Questions and Answers

Question 5

A Generative AI Engineer has developed an LLM application to answer questions about internal company policies. The Generative AI Engineer must ensure that the application doesn't hallucinate or leak confidential data.

Which approach should NOT be used to mitigate hallucination or confidential data leakage?

Options:

A.

Add guardrails to filter outputs from the LLM before it is shown to the user

B.

Fine-tune the model on your data, hoping it will learn what is appropriate and what is not

C.

Limit the data available based on the user’s access level

D.

Use a strong system prompt to ensure the model aligns with your needs.
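
For context on option A, here is a minimal sketch of an output guardrail, assuming a simple pattern-based filter; the patterns, refusal message, and function name below are illustrative assumptions, not part of any Databricks or guardrail library API:

import re

# Illustrative confidentiality patterns; a real deployment would use the
# organization's own classifiers or a dedicated guardrail framework.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like identifiers
    re.compile(r"(?i)\bproject\s+nightingale\b"),  # hypothetical internal codename
]
REFUSAL = "I can't share that information. Please contact HR for details."

def guard_output(llm_response: str) -> str:
    # Filter the LLM output before it is shown to the user (option A).
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(llm_response):
            return REFUSAL
    return llm_response

print(guard_output("The policy allows 20 vacation days per year."))
print(guard_output("Employee SSN on file: 123-45-6789"))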

Question 6

When developing an LLM application, it’s crucial to ensure that the data used for training the model complies with licensing requirements to avoid legal risks.

Which action is NOT appropriate to avoid legal risks?

Options:

A.

Before you start using the trained model, reach out to the data curators directly to let them know.

B.

Use any data you personally created that is completely original, since you can decide what license to apply to it.

C.

Only use data explicitly labeled with an open license and ensure the license terms are followed.

D.

After you have started using the trained model, reach out to the data curators directly to let them know.

Question 7

A Generative AI Engineer is designing a RAG application to answer user questions about the technical regulations of a new sport they are learning.

What are the steps needed to build this RAG application and deploy it?

Options:

A.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> Evaluate model –> LLM generates a response –> Deploy it using Model Serving

B.

Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving

C.

Ingest documents from a source –> Index the documents and save to Vector Search –> Evaluate model –> Deploy it using Model Serving

D.

User submits queries against an LLM –> Ingest documents from a source –> Index the documents and save to Vector Search –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
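
To make the step ordering concrete, here is a minimal Python sketch of the flow described in option B; every helper name below (ingest_documents, build_vector_index, retrieve, generate_answer, evaluate_chain, deploy_with_model_serving) is a hypothetical placeholder standing in for the corresponding Databricks component, not a real API call:

def ingest_documents(source: str) -> list[str]:
    # Step 1: pull the regulation documents from their source.
    return ["Rule 1: offside ...", "Rule 2: scoring ..."]

def build_vector_index(docs: list[str]) -> dict:
    # Step 2: embed and index the documents (Vector Search in a Databricks setup).
    return {"docs": docs}

def retrieve(index: dict, query: str, k: int = 3) -> list[str]:
    # Step 4: fetch the documents most relevant to the user's query.
    return index["docs"][:k]

def generate_answer(query: str, context: list[str]) -> str:
    # Step 5: the LLM generates a response grounded in the retrieved context.
    return f"Answer to '{query}' based on {len(context)} retrieved passages."

def evaluate_chain(answer: str) -> float:
    # Step 6: score the chain (relevance, groundedness) before release.
    return 1.0

def deploy_with_model_serving(app) -> None:
    # Step 7: serve the evaluated application (Model Serving in a Databricks setup).
    print("Application deployed.")

index = build_vector_index(ingest_documents("regulations/"))
query = "What counts as offside?"  # Step 3: a user submits a query
answer = generate_answer(query, retrieve(index, query))
assert evaluate_chain(answer) >= 0.0
deploy_with_model_serving(generate_answer)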

Question 8

A Generative AI Engineer is tasked with developing an application that is based on an open-source large language model (LLM). They need a foundation LLM with a large context window.

Which model fits this need?

Options:

A.

DistilBERT

B.

MPT-30B

C.

Llama2-70B

D.

DBRX
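
As a quick sanity check on context windows, the sketch below reads each model's configuration from the Hugging Face Hub. It assumes transformers is installed and that you have access to the listed repositories (some are gated or ship custom code); the attribute holding the context length differs by architecture, hence the fallbacks:

from transformers import AutoConfig

# Repository IDs assumed from the option names; gated repos require accepting
# the model license on the Hugging Face Hub before the config can be fetched.
MODELS = [
    "distilbert-base-uncased",
    "mosaicml/mpt-30b",
    "meta-llama/Llama-2-70b-hf",
    "databricks/dbrx-instruct",
]

for name in MODELS:
    cfg = AutoConfig.from_pretrained(name, trust_remote_code=True)
    # Different architectures store the context length under different keys.
    context = (
        getattr(cfg, "max_position_embeddings", None)
        or getattr(cfg, "max_seq_len", None)
        or getattr(cfg, "n_positions", None)
    )
    print(f"{name}: context window = {context}")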