When the Model Hallucinates

Sometimes the model generates output that is incorrect, incomplete, or not grounded in your actual project state. This is called a hallucination. Hallucinations usually appear as:
  • Referencing files, APIs, or features that don’t exist
  • Claiming something was “fixed” when nothing changed
  • Repeating solutions that don’t resolve the issue
  • Providing confident but incorrect instructions
This is expected behavior in AI systems and does not indicate a broken project.

How to Resolve Hallucinations

1. Ground the Model With Facts

Provide real, concrete inputs:
  • Error logs
  • File contents
  • Build output
  • Exact error messages
Avoid vague prompts like:
“It’s broken, fix it.”
Instead use:
“Here is the full Expo prebuild error output. Fix only what’s causing this error.”
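One way to capture the complete, untruncated output so it can be pasted verbatim is to pipe both stdout and stderr through `tee`. This is a minimal sketch: the `echo` line is a stand-in for your real build command (e.g. `npx expo prebuild`), and the error text and `build.log` filename are illustrative.

```shell
# Capture a command's full stdout and stderr into a log file while
# still printing it to the terminal, so nothing is lost or truncated.
# Replace the echo with your real build command, e.g. `npx expo prebuild`.
echo "CommandError: example prebuild failure" 2>&1 | tee build.log

# The saved log can now be pasted into the prompt exactly as produced.
cat build.log
```

Pasting the saved log, rather than a paraphrase of it, is what grounds the model.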

2. Constrain the Scope

Tell the model exactly what it can and cannot change. Examples:
  • “Only modify app.json.”
  • “Do not add new libraries.”
  • “Do not assume files that don’t exist.”
Clear constraints significantly reduce hallucinations.

3. Force Verification

If the model claims something is fixed, require confirmation. Use prompts like:
  • “Show me exactly what changed.”
  • “Which file was modified and why?”
  • “Quote the line that fixes the error.”
If it cannot point to a real change, assume the fix is invalid.
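If the project is under version control, `git` gives a ground-truth answer independent of the model's claims. The sketch below sets up a throwaway repo purely for demonstration (the `app.json` contents are illustrative); in a real project you would just run the `git diff` commands at the end.

```shell
# Throwaway repo for demonstration only; in your own project,
# skip straight to the `git diff` calls at the bottom.
dir=$(mktemp -d)
cd "$dir"
git init -q
echo '{"expo": {}}' > app.json
git add app.json
git -c user.email=demo@example.com -c user.name=demo commit -qm "init"

# Simulate an edit the model claims to have made.
echo '{"expo": {"name": "demo"}}' > app.json

# Ground truth: did anything actually change, and where?
git diff --stat      # lists which files were modified
git diff app.json    # shows the exact changed lines
```

If `git diff` shows no changes, the claimed fix did not happen, regardless of what the model says.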

4. Reset the Context

If hallucinations persist:
  • Start a new message
  • Paste only the relevant files or logs
  • Restate the goal clearly
Long conversations increase the chance of drift.

5. Switch Models (If Available)

Different models reason differently. If one is looping or hallucinating:
  • Switch models
  • Re-paste the same grounded inputs
  • Retry with tighter constraints

When to Stop Iterating

Stop prompting the model if:
  • It repeats the same fix multiple times
  • It references non-existent files
  • It contradicts itself across responses
At that point, move to:
  • Manual inspection
  • Logs-first debugging
  • Native tools (Xcode, Expo logs)

Best Practices to Avoid Hallucinations

  • Always paste real logs
  • Keep prompts short and specific
  • Verify every claimed fix
  • Treat AI output as a suggestion, not ground truth

Key Principle

The model is strongest when it is reacting to real inputs, not guessing at missing context.
Hallucinations are a signal that the model needs more constraints or better grounding, not that your app is broken.