How to Avoid AI Hallucinations and Use AI Agent Prompts Correctly
When creating a prompt for any LLM, the best prompts provide enough information for the LLM to be successful. Think of an LLM as a supercharged intern: very smart, but you have to tell it what to do in the right way.
Key Fact: AI will try its best to give you an answer based on what it understands, but if you don't give it enough context, or the source information is incomplete, it can guess. It can hallucinate. To avoid this, we can ask the AI to provide a confidence score for its answer along with a rationale explaining that score. This lets the AI tell on itself and lets you, the human in the loop, make a judgment call about whether to trust the answer. With AI there are formulas we can use to validate consistency, but only humans can validate accuracy with certainty, because the AI cannot tell whether a source is real news or fake news. The confidence score and rationale is a template technique for solving this problem.
1. USE THE TEMPLATE TO DETECT AND AVOID HALLUCINATION
You are a [PERSONA] specializing in [KNOWLEDGE]
Your goal is to [GOAL]
You will be provided with [INPUT DESCRIPTION]
Your task is to [BREAK DOWN THE TASK]
The criteria for success are:
- [Criteria] : [Description]
- [Criteria] : [Description]
- [Criteria] : [Description]
- [Criteria] : [Description]
Provide a score for how confident you are in your response:
- 1 - not confident
- 2 - partially confident
- 3 - mostly confident
- 4 - very confident
- 5 - absolutely certain
For the result, provide 3 parts in a table format:
- Part 1 - your response
- Part 2 - your confidence score
- Part 3 - your rationale, in a couple of sentences, explaining your confidence score
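To make this concrete, here is a minimal Python sketch that fills in the template programmatically. Everything here (the build_prompt helper, the TEMPLATE string, and the example persona, goal, and criteria values) is a hypothetical illustration, not a required implementation:

```python
# A minimal sketch that fills in the hallucination-detection template.
# All concrete values (persona, goal, criteria) are made-up examples.

TEMPLATE = """You are a {persona} specializing in {knowledge}.
Your goal is to {goal}.
You will be provided with {input_description}.
Your task is to {task}.

The criteria for success are:
{criteria}

Provide a score for how confident you are in your response:
- 1 - not confident
- 2 - partially confident
- 3 - mostly confident
- 4 - very confident
- 5 - absolutely certain

For the result, provide 3 parts in a table format:
- Part 1 - your response
- Part 2 - your confidence score
- Part 3 - your rationale, in a couple of sentences, explaining your confidence score
"""

def build_prompt(persona, knowledge, goal, input_description, task, criteria):
    """Render the template with task-specific values.

    criteria is a dict of {criterion: description} pairs, rendered as
    the bulleted 'Criteria : Description' list the template expects.
    """
    criteria_lines = "\n".join(f"- {c} : {d}" for c, d in criteria.items())
    return TEMPLATE.format(
        persona=persona,
        knowledge=knowledge,
        goal=goal,
        input_description=input_description,
        task=task,
        criteria=criteria_lines,
    )

# Example usage with made-up values:
prompt = build_prompt(
    persona="financial analyst",
    knowledge="quarterly earnings reports",
    goal="summarize the key risks in the attached report",
    input_description="a 10-K filing excerpt",
    task="extract the top 3 risk factors and summarize each in one sentence",
    criteria={
        "Accuracy": "every claim is traceable to the source text",
        "Brevity": "each summary is a single sentence",
    },
)
print(prompt)
```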
2. MODIFY THE PROMPT AND INPUTS
If the confidence score is 3 or below, revise the prompt, add the missing context or inputs, and ask again.
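Below is a minimal sketch of that feedback loop. It assumes a placeholder ask_llm function (wire it to whichever LLM client you use) and assumes the model's table output contains a "Part 2" cell with a digit that a simple regular expression can pick out; both the function names and the parsing pattern are illustrative assumptions:

```python
import re

# Hypothetical placeholder: connect this to your LLM provider.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect to your LLM provider here.")

def extract_confidence(response: str) -> int:
    """Pull the 1-5 confidence score out of the model's table response.

    Assumes the table has a cell labeled 'Part 2' (or 'confidence')
    followed by a digit; adjust the pattern to your model's output.
    """
    match = re.search(r"(?:Part\s*2|confidence)\D*([1-5])", response, re.IGNORECASE)
    return int(match.group(1)) if match else 1  # treat unparseable as low confidence

def ask_with_revision(prompt: str, extra_context: list[str], max_attempts: int = 3) -> str:
    """Re-ask with more context while the self-reported score is 3 or below.

    extra_context is a list of clarifications/inputs to append, one per retry.
    """
    for attempt in range(max_attempts):
        response = ask_llm(prompt)
        if extract_confidence(response) >= 4:
            return response
        if attempt < len(extra_context):
            # Modify the prompt: add the next piece of missing context.
            prompt += "\n\nAdditional context:\n" + extra_context[attempt]
    return response  # still low confidence after retries: escalate to a human
```

Even when the score comes back at 4 or 5, a human should still spot-check the response and rationale, because the score only reports the model's own confidence, not ground-truth accuracy.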