Building AI that knows what it knows - self-monitoring, uncertainty awareness, and epistemic humility.
Expressing confidence levels that accurately reflect the probability of being correct (calibration).
Distinguishing what it knows confidently from what it's uncertain about.
Recognizing when a question is outside its training or capabilities.
Catching potential mistakes in its own reasoning before producing an answer.
Distinguishing learned knowledge and sound reasoning from speculation.
Declining to answer when the system cannot be reliably helpful.
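The last capability, declining when the system cannot be reliably helpful, can be sketched as selective prediction: answer only when confidence clears a threshold, otherwise abstain. This is a minimal illustration, not a production design; the class probabilities and the threshold value are invented for the example, and a real system would obtain calibrated probabilities from a model.

```python
def predict_or_abstain(class_probs, threshold=0.75):
    """Return (label, confidence) if the top class probability clears
    the threshold, otherwise ("ABSTAIN", confidence).

    class_probs: dict mapping label -> probability (hypothetical values).
    """
    label = max(class_probs, key=class_probs.get)
    confidence = class_probs[label]
    if confidence < threshold:
        # Not confident enough to be reliably helpful: decline.
        return "ABSTAIN", confidence
    return label, confidence

# High-confidence case: the system answers.
print(predict_or_abstain({"cat": 0.92, "dog": 0.08}))  # ('cat', 0.92)

# Near-coin-flip case: the system abstains rather than guess.
print(predict_or_abstain({"cat": 0.55, "dog": 0.45}))  # ('ABSTAIN', 0.55)
```

The threshold trades coverage for reliability: raising it makes the system answer less often but be wrong less often when it does.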
Metacognition is "thinking about thinking" - the ability to monitor, evaluate, and regulate one's own cognitive processes. In AI, this means systems that can assess their own confidence, recognize when they don't know something, and adjust their approach accordingly.
AI systems without metacognition can be confidently wrong, fabricating plausible-sounding answers. Metacognitive AI knows its limitations, expresses appropriate uncertainty, and can decline tasks beyond its competence - essential qualities for trustworthy AI.
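Whether a system's stated confidence matches how often it is actually right can be quantified with expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. A well-calibrated system is right about 80% of the time when it claims 80% confidence. The sketch below uses tiny invented data purely to illustrate the computation.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between stated confidence and accuracy.

    confidences: list of stated confidence values in [0, 1].
    correct: parallel list of 1 (answer was right) or 0 (wrong).
    """
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # keep conf=1.0 in last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        # Each bin contributes its |confidence - accuracy| gap,
        # weighted by how many predictions fell in it.
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Overconfident toy model: ~90% stated confidence, 60% actual accuracy.
confs = [0.9, 0.95, 0.9, 0.85, 0.9]
hits  = [1,   0,    1,   0,    1]
print(round(expected_calibration_error(confs, hits), 3))  # 0.3
```

A metacognitive system aims to drive this gap toward zero, so its expressed uncertainty can be trusted at face value.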