# Interview Questions on Large Language Models (LLMs)
**Question:** What are some of the strategies to **reduce hallucination** in large language models (LLMs)?
Answer:
Hallucinations can be reduced in LLMs at three levels:
1. Prompt Level
2. Model Level
3. Self-check
A new course launched for interview preparation
We have launched a new course, the "Interview Questions and Answers on Large Language Models (LLMs)" series.
This program is designed to bridge the job gap in the global AI industry. It includes 100+ questions and answers asked at top companies such as FAANG and Fortune 500 firms, plus 100+ self-assessment questions.
The course offers regular updates, self-assessment questions, community support, and a comprehensive curriculum covering everything from Prompt Engineering and LLM basics to Supervised Fine-Tuning (SFT), Deployment, Hallucination, Evaluation, and Agents.
Detailed curriculum (get 50% off using coupon code MED50 for the first 10 users)
Free self-assessment on LLMs (30 MCQs in 30 minutes)
Prompt Level:
- Include instructions to the model: Explicitly instruct the model not to make things up on its own, e.g. "Do not make up stuff outside of the given context."
- Repeat: Repeat the most important instructions multiple times.
- Position: Place the most important instruction at the end of the prompt, making use of the recency effect.
- Parameter: Keep the temperature at 0.
- Restrict: Restrict the output to a confined list of options instead of free-form text.
- Add CoT-type instructions: For GPT models, "Let's think step by step" works well for reasoning tasks; for PaLM, "Take a deep breath and work on this problem step by step" performs even better.
- Use few-shot examples: Use domain/use-case-specific few-shot examples. There is also a recent study on a technique called **Analogical Prompting**, where the model generates its own examples internally, which outperforms few-shot prompting.
- In-context learning: Use in-context learning to provide better context to the model.
- Self-consistency/Voting: Generate multiple answers from the model and select the most frequent one (several of these tactics are combined in the sketch after this list).
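Below is a minimal sketch of how several of these prompt-level tactics can be combined, assuming the OpenAI Python client; the model name, system prompt, and few-shot example are illustrative placeholders, not a prescribed template.

```python
# A minimal sketch combining several prompt-level tactics, assuming the
# OpenAI Python client (pip install openai). The model name, system prompt,
# and few-shot example below are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer ONLY from the provided context. "
    "Do not make up stuff outside of the given context. "        # explicit instruction
    "If the answer is not in the context, reply exactly: UNKNOWN. "
    "Answer with one of: YES, NO, UNKNOWN. "                     # restrict output to a confined list
    "Let's think step by step before giving the final label. "   # CoT-type instruction
    "Again: do not make up stuff outside of the given context."  # repeat + position at the end
)

FEW_SHOT = [  # domain-specific few-shot example (illustrative)
    {"role": "user", "content": "Context: The invoice was paid on 2024-03-01.\nQuestion: Was the invoice paid?"},
    {"role": "assistant", "content": "The context says the invoice was paid on 2024-03-01. Final answer: YES"},
]

def ask(context: str, question: str, temperature: float = 0.0) -> str:
    """Single call with the guardrails baked into the prompt (temperature 0 by default)."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Context: {context}\nQuestion: {question}"}]
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",        # illustrative model name
        messages=messages,
        temperature=temperature,
    )
    return resp.choices[0].message.content

def ask_with_voting(context: str, question: str, n: int = 5) -> str:
    """Self-consistency: sample several answers and keep the most frequent final label."""
    labels = []
    for _ in range(n):
        answer = ask(context, question, temperature=0.7)  # voting needs some sampling diversity
        labels.append(answer.strip().split()[-1])          # naive: take the last token as the label
    return Counter(labels).most_common(1)[0][0]
```

Note that self-consistency voting needs a non-zero temperature so that the sampled answers actually differ; the single-answer path keeps the temperature at 0 as recommended above.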
Model Level:
- DoLa (Decoding by Contrasting Layers Improves Factuality in Large Language Models): a simple decoding strategy for large pre-trained LLMs that contrasts the output distributions of later and earlier layers to reduce hallucinations (see the sketch below).
- Fine-tune on good-quality data: Fine-tuning a small LLM on good-quality data has shown promising results and also helps reduce hallucinations.
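To make the DoLa idea concrete, here is a heavily simplified sketch assuming a Hugging Face causal LM: the next-token distribution from the final (mature) layer is contrasted with that of an earlier (premature) layer, and tokens whose probability grows the most across layers are preferred. The real method additionally selects the premature layer dynamically and applies a plausibility constraint, both omitted here; `gpt2` and layer 6 are illustrative placeholders.

```python
# A heavily simplified sketch of DoLa-style contrastive decoding, assuming a
# Hugging Face causal LM. "gpt2" and premature_layer=6 are illustrative; the
# real method picks the premature layer dynamically and masks implausible tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; DoLa targets larger pre-trained LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def dola_generate(prompt: str, premature_layer: int = 6, max_new_tokens: int = 30) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    lm_head = model.get_output_embeddings()
    for _ in range(max_new_tokens):
        out = model(ids, output_hidden_states=True)
        # Next-token log-probs from the final (mature) layer.
        mature = torch.log_softmax(out.logits[0, -1], dim=-1)
        # Early-exit log-probs from an earlier (premature) layer, via the same LM head
        # (ln_f is the GPT-2-specific final layer norm).
        early_hidden = out.hidden_states[premature_layer][0, -1]
        premature = torch.log_softmax(lm_head(model.transformer.ln_f(early_hidden)), dim=-1)
        # Prefer tokens whose probability grows the most between the two layers:
        # factual knowledge tends to emerge in the later layers.
        next_id = (mature - premature).argmax().view(1, 1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(dola_generate("The capital of France is"))
```

Recent versions of the transformers library also expose DoLa decoding directly through a `dola_layers` argument to `generate()`, if your installed version supports it.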
Self-check:
Methods like Chain-of-Verification (CoVe), where the model drafts an answer, generates verification questions, answers them independently, and then revises the draft, can reduce hallucinations to a great extent; a minimal sketch follows.
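The sketch below assumes a generic `llm(prompt: str) -> str` helper (for example, a wrapper around the `ask()` function above); the prompts are illustrative and not the exact templates from the CoVe paper.

```python
# A minimal sketch of the Chain-of-Verification (CoVe) loop, assuming a generic
# `llm(prompt: str) -> str` helper. The prompts are illustrative only.
def chain_of_verification(question: str, llm) -> str:
    # 1. Draft a baseline answer.
    baseline = llm(f"Answer the question concisely.\nQuestion: {question}")
    # 2. Plan verification questions that would fact-check the draft.
    plan = llm(
        "List 3 short verification questions that fact-check this answer, one per line.\n"
        f"Question: {question}\nDraft answer: {baseline}"
    )
    # 3. Answer each verification question independently, WITHOUT showing the draft,
    #    so errors in the draft are not simply repeated.
    checks = [f"Q: {q.strip()}\nA: {llm(q.strip())}" for q in plan.splitlines() if q.strip()]
    # 4. Produce a final answer that is consistent with the verified facts.
    return llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        "Verified facts:\n" + "\n".join(checks) + "\n"
        "Rewrite the answer so it is fully consistent with the verified facts."
    )
```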
Start your interview journey from Question 1:
Your feedback as comments and claps encourages us to create better content for the community.