How Layal Works

Core Rule: Everything starts as AI-Generated. Claims are upgraded only with proof. No partial credit.
1. You ask a question

Just type like you would with ChatGPT. Layal sends your question to the AI.

2. AI responds (all GENERATED by default)

The AI gives an answer. But here's the key: we treat all of it as AI-generated until proven otherwise. No trust by default.

3. We search for candidate references

Layal independently searches Wikipedia, DuckDuckGo, and Wikidata. If we find a matching reference, it becomes a CANDIDATE — not verified yet, just a potential match.

4. Truth Kernel validates (no AI)

The Truth Kernel is deterministic and uses zero AI. It checks: Is the URL reachable? Is the quote actually on the page? If both pass, the claim is VERIFIED. If either fails, it stays GENERATED.

5. You see the truth labels

Each claim is labeled: 🤖 AI-Generated (default), 🔍 Candidate (found ref, needs validation), or ✓ Verified (Truth Kernel passed). You decide what to trust.

"Layal doesn't guarantee correctness. Layal guarantees transparency."

The Three States

🤖 AI-Generated (Default)

Every claim starts here. This is AI output that we could not independently verify. Treat it as a guess.

When: No supporting reference found, or validation failed.
🔍 Candidate Reference

We found a potential match in search results. But search results ≠ truth. It needs validation.

When: Wikipedia/DuckDuckGo/Wikidata returned a relevant result.

✓ Verified

Truth Kernel passed: URL is reachable AND quote was found on the page. This is rare.

When: Deterministic check confirmed the claim.
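The three states and the no-partial-credit rule fit in a few lines. A sketch, assuming hypothetical names (`TruthState`, `LABELS`, `validate`) rather than Layal's real code:

```python
from enum import Enum

class TruthState(Enum):
    GENERATED = 0   # default: unverified AI output
    CANDIDATE = 1   # a search result matched, not yet validated
    VERIFIED = 2    # Truth Kernel passed both checks

# Display labels shown next to each claim
LABELS = {
    TruthState.GENERATED: "🤖 AI-Generated",
    TruthState.CANDIDATE: "🔍 Candidate",
    TruthState.VERIFIED: "✓ Verified",
}

def validate(state: TruthState, kernel_passed: bool) -> TruthState:
    """Upgrade a CANDIDATE only when the Truth Kernel passes.
    No partial credit: anything else collapses back to GENERATED."""
    if state is TruthState.CANDIDATE and kernel_passed:
        return TruthState.VERIFIED
    return TruthState.GENERATED
```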

The Truth Kernel

The Truth Kernel is the heart of Layal's verification. It's deterministic and uses zero AI. Here's exactly what it does:

if source_url is reachable (HTTP 200):
  if quote is found on page:
    return VERIFIED
  else:
    return GENERATED
else:
  return GENERATED

That's it. No machine learning. No semantic similarity. No "probably correct." Just: can we reach it? is the text there?

This is intentional. Verification should be boring. Auditable. Predictable.
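The check above can be made runnable in a few lines. A minimal sketch using Python's standard urllib, with an injectable `fetch` so the logic stays testable; the function names are illustrative, not Layal's actual implementation:

```python
import urllib.request
import urllib.error

def fetch(url: str) -> tuple[int, str]:
    """Return (HTTP status, page body); raises on network errors."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status, resp.read().decode("utf-8", errors="replace")

def truth_kernel(source_url: str, quote: str, fetch=fetch) -> str:
    """Deterministic: no ML, no semantic similarity, no 'probably correct'."""
    try:
        status, page = fetch(source_url)
    except (urllib.error.URLError, TimeoutError):
        return "GENERATED"              # unreachable URL: claim stays GENERATED
    if status != 200:
        return "GENERATED"              # reachable, but not HTTP 200
    return "VERIFIED" if quote in page else "GENERATED"
```

The exact-substring check is deliberate: a fuzzy match would reintroduce judgment calls, and the whole point is that the kernel's verdict is auditable.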

Multi-Model Disagreement Signal

We can query multiple AI models and show where they agree or disagree.
Important: Consensus ≠ truth. All models can be wrong together.

🔄 Models Agree

Multiple models gave similar answers. This is a signal, but NOT verification. They could all be wrong.

⚠️ Models Disagree

Models gave different answers. This is valuable — it means you should be skeptical. Investigate further.

🤖 Single Model

Only one model responded. No comparison available. Treat as standard AI output (GENERATED).
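As a sketch, the three signals reduce to counting distinct answers. The normalization here (trim plus lowercase exact match) is an assumption — the document doesn't specify how Layal actually compares model outputs:

```python
def disagreement_signal(answers: list[str]) -> str:
    """Classify multi-model output. Naive normalized exact-match,
    standing in for whatever comparison Layal really uses."""
    if len(answers) < 2:
        return "SINGLE"      # no comparison possible: treat as GENERATED
    normalized = {a.strip().lower() for a in answers}
    if len(normalized) == 1:
        return "AGREE"       # a signal, NOT verification — all could be wrong
    return "DISAGREE"        # valuable: be skeptical, investigate further
```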

What Layal Does NOT Do

We Don't
  • Claim to be hallucination-free
  • Give partial credit for "close enough"
  • Treat search results as verification
  • Judge political/ideological bias
  • Guarantee 100% accuracy
We Do
  • Default to GENERATED (skeptical)
  • Require proof for upgrades
  • Use deterministic verification
  • Show disagreement signals
  • Be honest about uncertainty

Ready to see the difference?

Ask Layal anything. See which parts are verified vs AI-generated.

Start Using Layal →