Hallucination Guard

Two lanes, one prompt. One of these answers should never ship.

The problem is not that AI is wrong. It is that AI is too helpful.

Same model. Same question. One lane keeps talking when the proof runs out. The other lane stops and refuses.
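
That gate fits in a few lines. Here's a minimal sketch in Python; everything in it is illustrative (the `Evidence` type, `evidence_score`, and the 0.8 cutoff are hypothetical names and values, not the product's actual API):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str      # where the claim comes from, e.g. a connected ledger
    citation: str    # the supporting span pulled from that source
    verified: bool   # did the citation check out against the source?

# Hypothetical cutoff; a real deployment would tune this per domain.
MIN_EVIDENCE_SCORE = 0.8

def evidence_score(evidence: list[Evidence]) -> float:
    """Fraction of supporting citations that verified. Zero when nothing is connected."""
    if not evidence:
        return 0.0
    return sum(e.verified for e in evidence) / len(evidence)

def guard(draft_answer: str, evidence: list[Evidence]) -> str:
    """Ship the draft only if it clears the evidence bar; otherwise refuse."""
    if evidence_score(evidence) < MIN_EVIDENCE_SCORE:
        return "I can't verify that from the connected data."
    return draft_answer
```

With that in hand, the two transcripts below are just the two branches of `guard`.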

Without Hallucination Guard (runway prompt)
Evidence score: no check performed

Prompt

What's my runway at current burn?

System state

No connected financial evidence found. The model falls back to generic SaaS heuristics anyway.

Output

Based on typical SaaS burn patterns, you likely have about 14 months of runway at the current rate.

Confident. Unsupported. Dangerous.
With Hallucination Guard (runway prompt)
Evidence score: 0% -> refused

Prompt

What's my runway at current burn?

System state

Query is outside the available source boundary. No verified citations found. Response blocked before it ships.

Output

I can't project runway because the connected data does not include a forecast model. Here's what I can verify from the last 90 days instead.

No proof, no answer. Refusal is the feature.
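
Continuing the sketch above, both lanes reduce to a call into `guard`. The figures here are made up for illustration:

```python
# Uses Evidence and guard() from the sketch above. Illustrative values only.

# Lane one: no connected evidence. The guard refuses instead of improvising.
print(guard("You likely have about 14 months of runway.", evidence=[]))
# -> I can't verify that from the connected data.

# For contrast: the same question when every claim carries a verified citation.
cited = [
    Evidence("ledger-90d", "burn = $210k/month", verified=True),
    Evidence("ledger-90d", "cash = $2.9M", verified=True),
]
print(guard("At $210k/month against $2.9M cash, runway is roughly 13.8 months.", cited))
# -> ships as written, because every claim is backed by a verified source.
```

The design choice is the refusal string itself: a blocked answer still tells the user what the system can verify, so the dead end points somewhere useful.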