Foundations

AGI / ASI

Also known as: Artificial General Intelligence, Artificial Super Intelligence, Strong AI, Transformative AI

AGI (Artificial General Intelligence) is AI matching humans across most cognitive tasks. ASI (Artificial Super Intelligence) exceeds humans across all of them. Both terms are contested, and there is no agreed definition of when either has arrived.

What it means

Artificial General Intelligence (AGI) is the long-standing goal of AI that can do roughly anything a competent human can do cognitively: reason, learn new domains, plan, solve novel problems, generalize across tasks. The original framing contrasted AGI with "narrow" AI that is good at only one thing (chess, image classification, spam filtering). Artificial Super Intelligence (ASI) is the next rung: an AI that meaningfully outperforms humans across essentially all cognitive work, possibly by a wide margin.

Neither term has a rigorous definition, and that is not an oversight; it is the central problem. Definitions in circulation include: passing a battery of human cognitive tests; matching the median knowledge worker; matching the top expert in every field; being economically substitutable for a remote-only human job; or clearing some threshold on benchmark suites. The labs disagree publicly. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work" and reserves the right to declare it. Anthropic talks more about "transformative AI" and avoids the AGI label. DeepMind has published its own taxonomy with multiple AGI levels.

In 2026, frontier models clear several of the milestones people once treated as AGI tests: passing the bar exam, scoring at top percentiles on graduate-level reasoning benchmarks, writing production-quality code, doing competition math. They still fail at others: long-horizon autonomy without supervision, learning genuinely new skills from few examples, common-sense robustness, reliably avoiding hallucinations. Whether that adds up to "AGI" depends entirely on whose definition you use, which is why the term has become more a marketing battle than a technical one.

ASI is even more speculative. Most serious arguments about ASI are really arguments about what would happen after AGI: recursive self-improvement, capability overhangs, alignment of systems beyond human comprehension. People who hold strong opinions about ASI in 2026 are usually expressing values (about risk, about progress, about the future) more than empirical predictions, and it is worth being honest about that.

Example

Reasonable people in April 2026 will tell you (a) that we already have AGI because Claude Opus 4.7 and GPT-5 are general-purpose problem solvers, (b) that we're 1-3 years away, (c) that we're 10+ years away, or (d) that the term is meaningless and should be retired. All four positions are held by serious researchers.

Why it matters

AGI/ASI claims drive enormous investment, policy, and hype. Knowing the term is contested, and that "we have AGI" or "AGI is coming next year" are not statements of fact, helps you read announcements, papers, and pundits with appropriate skepticism. It also helps you separate the practical question ("what can these models actually do?") from the philosophical one ("does this count as general intelligence?").

Related terms