MaxtDesignAI Studios

AI Judgment and Verification

Calibrate trust, catch hallucination, check sources, and build an eval mindset.

The track that turns AI output from "looks confident" to "actually correct enough to ship". Four modules on calibrating trust, catching specific hallucinations, verifying citations, and building lightweight quality tests, plus a capstone quiz.

~155 minutes · 5 modules · 29 lessons

Take the diagnostic first.

Academy progress is tied to your AI IQ participant token. The report that surfaces this track includes a one-click link that brings you back here, signed in.

Start the diagnostic

By the end of this track, you will

  • Match verification effort to task stakes, neither over- nor under-verifying.
  • Scan AI output for the five signatures of hallucination in under 30 seconds.
  • Run a four-step source check that catches 90% of fabricated citations.
  • Build a five-example monthly eval that catches drift before it ships.

Syllabus

  1. Module 1

    When to trust, when to verify

    A calibration layer. Three tiers of trust, matched to three tiers of verification effort. The instinct that makes the rest of the track possible.

    • Lesson: Trust is calibrated, not granted (~5 min)
    • Lesson: The verification effort math (~5 min)
    • Drill: triage the stakes (~3 min)
    • Lesson: Trust earned, trust rescinded (~5 min)
    • Drill: find the trust misfits (~5 min)
    • Lesson: Calibrate your own week (~7 min)
    • Drill: defend the override (~5 min)
  2. Module 2

    Catching hallucination

    Practical techniques. How to spot fabricated claims when the surrounding prose is fluent, and how to structure AI prompts so hallucinations are easier to catch.

    • Lesson: The signatures of hallucination (~5 min)
    • Lesson: Structure prompts so hallucinations are easier to catch (~5 min)
    • Drill: spot the signature (~3 min)
    • Lesson: The minimum-effort verification techniques (~5 min)
    • Drill: audit the briefing (~5 min)
    • Lesson: Build your hallucination checklist (~7 min)
    • Drill: handle the catch (~5 min)
  3. Module 3

    Citation and source checking

    The specific discipline for verifying AI-cited sources. The traps that show up for AI references, and the two-minute process that catches most of them.

    • Lesson: The two-minute source check (~5 min)
    • Lesson: Traps specific to AI-cited sources (~5 min)
    • Drill: which trap is it (~3 min)
    • Lesson: Prompt the model for checkable citations (~5 min)
    • Drill: find the bad citations (~5 min)
    • Lesson: Refine your source-check for your domain (~6 min)
    • Drill: handle the exposure (~5 min)
  4. Module 4

    The eval mindset

    Lightweight quality tests for your AI workflows. The shift from "it looked fine" to "I measured it against a set of known-good examples".

    • Lesson: Why spot-checks are not enough (~5 min)
    • Lesson: Build a five-example eval (~6 min)
    • Drill: what the eval tells you (~3 min)
    • Lesson: When to grow the eval (~5 min)
    • Drill: spot the weak eval (~5 min)
    • Lesson: Build yours this week (~8 min)
    • Drill: respond to a drift alert (~5 min)
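
    The idea behind Module 4 can be sketched in a few lines of Python. This is a minimal illustration, not the course's own tooling: the function under test (`first_sentence`) is a hypothetical stand-in for whatever AI step you want to measure, and the five examples are placeholders for your own known-good pairs.

    ```python
    # Hedged sketch of a five-example eval for an AI workflow.
    # `first_sentence` is a placeholder for the AI step under test;
    # swap in your own call and your own known-good examples.

    def first_sentence(text: str) -> str:
        """Placeholder for the AI step being evaluated."""
        return text.split(". ")[0].rstrip(".")

    # Five known-good examples: (input, expected output).
    EXAMPLES = [
        ("Evals catch drift. Spot-checks do not.", "Evals catch drift"),
        ("Trust is calibrated. Verification has tiers.", "Trust is calibrated"),
        ("Citations can be fabricated. Check the source.", "Citations can be fabricated"),
        ("Ship what you measured. Not what looked fine.", "Ship what you measured"),
        ("Prompts shape output. Structure them deliberately.", "Prompts shape output"),
    ]

    def run_eval(fn, examples) -> float:
        """Return the pass rate of `fn` against the known-good examples."""
        passed = sum(1 for inp, want in examples if fn(inp) == want)
        return passed / len(examples)

    if __name__ == "__main__":
        print(f"pass rate: {run_eval(first_sentence, EXAMPLES):.0%}")
    ```

    Rerunning the same five examples on a schedule is the whole trick: a pass rate that was 100% last month and 60% today is drift you can see, instead of a vague sense that output "looked fine".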
  5. Module 5

    Capstone

    Eight questions across the four modules. Clear 70% to earn the certificate.

    • Quiz: Final quiz (~14 min)