# Interview Questions on Large Language Models (LLMs)

โ“ Question โ€” What are some of the strategies to ๐—ฟ๐—ฒ๐—ฑ๐˜‚๐—ฐ๐—ฒ ๐—ต๐—ฎ๐—น๐—น๐˜‚๐—ฐ๐—ถ๐—ป๐—ฎ๐˜๐—ถ๐—ผ๐—ป in large language models (LLMs)?


Answer:

Hallucinations can be mitigated in LLMs at different levels:
1️⃣ Prompt Level
2️⃣ Model Level
3️⃣ Self-check


Prompt Level:

  1. Include instructions to the model: Instruct the model not to make things up on its own, e.g. "Do not make up stuff outside of the given context." A minimal prompt sketch follows this list.
  2. Repeat: Repeat the most important instructions multiple times.
  3. Position: Place the most important instruction at the end of the prompt, making use of the recency effect.
  4. Parameters: Set the temperature to 0.
  5. Restrict: Restrict the output to a confined list of options instead of free-form text.
  6. Add CoT-style instructions: For GPT models, "Let's think step by step" works well on reasoning tasks; for PaLM, "Take a deep breath and work on this problem step by step" outperforms it.
  7. Use few-shot examples: Use domain- and use-case-specific few-shot examples. A recent study also proposes a technique called "Analogical Prompting", in which the model generates its own examples internally, which outperforms few-shot prompting.
  8. In-context learning: Use in-context learning to provide better context to the model.
  9. Self-consistency/voting: Generate multiple answers from the model and select the most frequent one; see the voting sketch after this list.
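Several of these prompt-level controls can be combined in one call. The sketch below is a minimal illustration assuming the `openai` Python client; the model name, context, and question are placeholders, not part of the original post:

```python
# Minimal sketch combining several prompt-level controls:
# grounding instruction, repetition, end-of-prompt position
# (recency effect), temperature 0, and a restricted output.
# The model name and data below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

context = "Acme Corp was founded in 2001 and sells industrial sensors."
question = "When was Acme Corp founded?"

prompt = (
    f"Context:\n{context}\n\n"
    f"Question: {question}\n\n"
    "Answer using ONLY the context above. If the answer is not in the "
    "context, reply exactly: UNKNOWN.\n"
    # Repeat the key instruction and position it last.
    "Remember: do not make up stuff outside of the given context."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic decoding, less creative drift
)
print(response.choices[0].message.content)
```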
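Self-consistency (item 9) is straightforward to implement: sample several answers at a nonzero temperature and keep the most frequent one. `ask_model` below is a hypothetical stand-in for any single LLM call:

```python
# Self-consistency / voting sketch. `ask_model` is a hypothetical
# stand-in for a single LLM call returning a short answer string.
from collections import Counter

def ask_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical single LLM call; wire this to your client."""
    raise NotImplementedError

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    # Sample several answers, normalize lightly, and majority-vote.
    answers = [ask_model(prompt).strip().lower() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```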

Model Level:

  1. DoLa (Decoding by Contrasting Layers): the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" proposes a simple decoding strategy for pre-trained LLMs that reduces hallucinations; a simplified sketch follows.
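The core idea can be sketched in a few lines with Hugging Face transformers: apply the LM head to an earlier layer's hidden state and contrast its next-token log-probs with the final layer's. This is a simplified illustration, not the paper's full algorithm (which selects the premature layer dynamically and filters implausible tokens); the model and layer index are arbitrary choices:

```python
# Simplified DoLa-style sketch: contrast final-layer next-token
# log-probs against an earlier ("premature") layer's log-probs.
# Model and layer index are arbitrary illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

    # Final-layer logits for the next token.
    final_logits = out.logits[0, -1]

    # Early exit: run an intermediate hidden state through the final
    # layer norm and the LM head to get the premature logits.
    premature_hidden = out.hidden_states[6][0, -1]
    premature_logits = model.lm_head(model.transformer.ln_f(premature_hidden))

    # Contrast: prefer tokens whose probability grows in later layers,
    # which the paper associates with factual knowledge.
    contrast = torch.log_softmax(final_logits, dim=-1) - torch.log_softmax(
        premature_logits, dim=-1
    )
print(tok.decode(int(contrast.argmax())))
```

Recent versions of Hugging Face transformers also expose DoLa natively via a `dola_layers` argument to `generate()`, though availability depends on your library version.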

  2. Fine-tune on good-quality data: Fine-tuning a small LLM on good-quality data has shown promising results and helps reduce hallucinations; a minimal sketch follows below.
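As a hedged illustration of the fine-tuning route, the sketch below uses the TRL library's `SFTTrainer`; the dataset id, model id, and hyperparameters are hypothetical placeholders, and the exact API varies across TRL versions:

```python
# Minimal supervised fine-tuning (SFT) sketch with TRL. Dataset id,
# model id, and hyperparameters are illustrative placeholders; the
# exact SFTTrainer/SFTConfig API varies across TRL versions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset of curated, high-quality instruction data
# with a "text" column.
dataset = load_dataset("your-org/high-quality-instructions", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # a small base model, illustrative
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-out", max_steps=1000),
)
trainer.train()
```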

Self-check:

Methods like Chain-of-Verification (CoVe) can reduce hallucinations to a great extent; the sketch below outlines the approach.
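Chain-of-Verification follows a draft, verify, revise loop. A minimal sketch, with `ask_model` again a hypothetical stand-in for a single LLM call:

```python
# Chain-of-Verification (CoVe) sketch: draft an answer, plan
# fact-checking questions, answer them independently, then revise.
# `ask_model` is a hypothetical stand-in for a single LLM call.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

def chain_of_verification(question: str) -> str:
    # 1. Draft an initial answer.
    draft = ask_model(f"Answer the question:\n{question}")
    # 2. Plan verification questions probing the draft's factual claims.
    plan = ask_model(
        "List short fact-checking questions for this answer, one per line.\n"
        f"Question: {question}\nDraft answer: {draft}"
    )
    # 3. Answer each verification question independently, without the
    #    draft in context, so its errors are not simply repeated.
    checks = [f"Q: {q}\nA: {ask_model(q)}"
              for q in plan.splitlines() if q.strip()]
    # 4. Revise the draft in light of the verification results.
    return ask_model(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Verification results:\n" + "\n".join(checks) + "\n"
        "Write a final, corrected answer using only verified facts."
    )
```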
