What Is AGI and How Close Are We?

By Ravi Kinha · 13 min read
ai agi artificial-general-intelligence machine-learning future-of-work ai-safety

What is AGI? Is it already here? Expert timelines from Davos 2026, the long-horizon agent breakthrough, and what AGI means for work and society.

Updated: March 2, 2026

AGI used to be a “someday” question. In 2026 it’s not. For a growing number of people—including some of the biggest names in AI—the conversation has shifted from “if” to “when.” For a few, “when” is basically now. So what is AGI, who thinks it’s here, and what does that actually mean for the rest of us?


What Is AGI? The Core Definition

Artificial General Intelligence (AGI) is the idea of AI that can do any intellectual job a human can: reason, learn, plan, solve problems, and switch between domains without being retrained for each one. Human-like flexibility, in other words.

AGI vs. Today’s AI: Three Levels of Intelligence

A simple way to see where AGI fits:

| Level | Name | Capability | Example |
| --- | --- | --- | --- |
| ANI | Artificial Narrow Intelligence | Excels at one specific task | AlphaGo, ChatGPT, DALL-E |
| AGI | Artificial General Intelligence | Performs any intellectual task a human can | Human-level reasoning across domains |
| ASI | Artificial Superintelligence | Far surpasses the brightest humans | Hypothetical future system |

Right now we’re still in ANI land. ChatGPT is remarkable, but it’s still basically a language machine. AGI would be the thing that actually generalizes.


The Debate: Has AGI Already Arrived?

Definitions here drive the answer. I lean functional: if “AGI” means systems that can figure things out across open-ended tasks without being retrained every time, we’re already in that regime for a growing slice of work.

The “Yes, It’s Already Here” Camp

A group of UC San Diego professors from philosophy, data science, and linguistics recently published a controversial conclusion in Nature: by reasonable standards, current large language models already constitute AGI.

Their reasoning cuts through common misconceptions:

  • AGI doesn’t require perfection – Humans make mistakes, hallucinate false memories, and hold cognitive biases—yet we’re still considered intelligent.
  • AGI doesn’t need a body – Stephen Hawking communicated through text and synthesized speech; his physical limitations didn’t diminish his intelligence.
  • AGI doesn’t require human-like cognition – The human brain is just one form of cognitive architecture.

Lead author Eddy Keming Chen explains: “There is a common misconception that AGI must be perfect—knowing everything, solving every problem—but no individual human can do that. The real question is whether LLMs display the flexible, general competence characteristic of human thought. Our conclusion: insofar as individual humans possess general intelligence, current LLMs do too.”

They argue that frontier models already meet two key thresholds:

  • Turing-test level: Basic literacy and adequate conversation
  • Expert tier: Gold-medal Olympiad performance, PhD-level problem-solving in multiple domains

The Functional Definition: “Figure Things Out”

Venture capital firm Sequoia takes a pragmatic, investor-focused view: AGI is simply the “ability to figure things out.”

For them, AGI requires three core capabilities working together:

  1. Baseline knowledge (pre-training) – The ChatGPT moment (2022)
  2. Reasoning ability (inference-time computation) – OpenAI o1 (2024), DeepSeek R1 (2025)
  3. Iterative capability (long-horizon agents) – The breakthrough now emerging (2026)

By that functional definition, AGI has already arrived in 2026. The disagreement is whether that definition is too loose—Hassabis and others would reserve “AGI” for human-level generality including scientific creativity. Either way, the capability jump is real.

Two things most AGI explainers skip: (1) The bottleneck for “true” AGI might not be scale but evaluation—we don’t have good tests for “figuring out novel problems” in the wild, so we keep optimizing for benchmarks that understate or overstate progress. (2) Long-horizon agents shift the unit of value from “one smart answer” to “ownership of a process”; that’s why “hire the AI” is the right mental model—you’re buying outcome, not a chat.

I care less about the label than about what’s actually getting built. Teams are already handing multi-step, open-ended work to agents—recruiting, code review, research synthesis. The systems aren’t perfect. They’re good enough to change how work gets done. That’s the transition we’re in. Whether we call it AGI or “advanced automation” doesn’t change the fact that the center of gravity is shifting from “ask the model” to “assign the agent.”


The Timeline Debate: 1 Year or 10 Years?

The top AI CEOs don’t agree on the timeline—their answers don’t even overlap.

The Davos 2026 Showdown

At the World Economic Forum in Davos, two of AI’s most influential figures went head-to-head:

| Executive | AGI Timeline | Key Argument |
| --- | --- | --- |
| Dario Amodei (Anthropic CEO) | 2026-2027 (1-2 years) | Self-improvement loop already forming; models writing models |
| Demis Hassabis (Google DeepMind CEO) | 5-10 years (50% chance by 2030) | True scientific creativity remains unsolved; natural sciences require hypothesis generation, not just solution-finding |

Amodei’s conviction stems from what’s happening inside Anthropic today: “I have engineers within Anthropic who say, ‘I don’t write any code anymore’”—the models write the code while the engineers focus on editing. He suggests we might be only six to twelve months away from AI systems performing most or all software engineering tasks end-to-end.

Hassabis counters that coding and mathematics are “verifiable domains” where progress is naturally faster. True general intelligence requires “coming up with the question in the first place”—formulating novel hypotheses in natural sciences, which remains unsolved.

Other Expert Voices

The timeline spans a wide spectrum:

| Expert | Timeline | When Stated |
| --- | --- | --- |
| Sam Altman (OpenAI CEO) | “Soft singularity may have already arrived” | 2025 |
| Elon Musk (xAI CEO) | Superhuman AI by end of 2026 | 2026 |
| Ha Jung-woo (South Korea presidential AI aide) | AI surpassing humans in science by 2030 | 2026 |
| Geoffrey Hinton (Turing winner) | 5-20 years (revised down from 30-50) | 2025 |
| Yann LeCun (Meta Chief AI Scientist) | Not with current LLMs; need new approaches | 2025 |
| Lee Ki-min (KAIST professor) | At least 1-2 years | 2026 |
| Cheon Hyeon-deuk (Seoul National University) | 5-10 years or more | 2026 |

Lee Ki-min identifies the true inflection point: “The emergence of self-evolving AI. If AI can learn and improve on its own without human intervention, the pace of its development will become uncontrollably fast.”


The Breakthrough: Long-Horizon Agents

Whether AGI is “already here” or “years away” depends largely on how you interpret the latest capability leap: long-horizon agents.

What Changed in 2026

The big shift: we’re not just talking to AI anymore. We’re giving it a job and letting it run—for half an hour, an hour, sometimes longer. It corrects itself, adapts, decides what to do next.

Sequoia frames it as three steps that had to fall into place:

  • 2022: ChatGPT proved foundational knowledge from pre-training
  • 2024: OpenAI o1 added genuine reasoning capability
  • Early 2026: Claude Code and other coding agents crossed the threshold into autonomous iteration

The result: AI that can work for 30 minutes, hours, or eventually days—correcting its own mistakes, adapting to new information, and deciding next steps without human instruction.
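That loop—act, observe, self-correct, decide the next step—can be sketched in a few lines. This is an illustrative toy, not the implementation of Claude Code or any real agent: `propose` and `evaluate` are hypothetical stand-ins for an LLM choosing the next action and a tool reporting the outcome.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, feedback) pairs

def run_agent(state, propose, evaluate, max_steps=20):
    """Generic long-horizon loop: act, observe the outcome, and feed
    failures back in as context for the next attempt."""
    for _ in range(max_steps):
        action = propose(state)           # decide the next step from history
        ok, feedback = evaluate(action)   # observe what happened
        state.history.append((action, feedback))
        if ok:
            return action                 # goal reached
    return None                           # budget exhausted without success

# Toy task: find the smallest power of 2 >= 1000 by doubling a guess.
state = AgentState(goal="power of 2 >= 1000")
propose = lambda s: s.history[-1][0] * 2 if s.history else 1
evaluate = lambda x: (x >= 1000, "ok" if x >= 1000 else "too small")
print(run_agent(state, propose, evaluate))  # → 1024
```

The point of the sketch is the shape, not the task: the agent keeps its own history, judges its own output, and chooses what to do next without a human in the loop.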

The 31-Minute Recruiter: A Concrete Example

Sequoia illustrates with a scenario that would have seemed like science fiction months ago: a founder asks an agent to “Find me a Head of Developer Relations—technically strong enough to earn engineers’ respect, genuinely loves Twitter. We sell to platform teams.”

The agent runs a full recruiting workflow, not just keyword search:

  1. 0-5 min: Searches LinkedIn for Developer Relations at competitors (Datadog, Temporal, Langchain); quickly sees titles alone are unreliable.

  2. 5-15 min: Pivots to YouTube conference speakers, filters by audience engagement, cross-references Twitter—half the accounts are inactive or just retweets.

  3. 15-25 min: Scores remaining candidates on real influence: opinions, engagement, content quality.

  4. 25-31 min: Identifies one candidate whose posting frequency dropped (possible job dissatisfaction), whose company just cut marketing, and whose technical focus matches the startup—signals the candidate’s LinkedIn profile, two months out of date, doesn’t yet show.

Output: A single personalized outreach email to that candidate. 31 minutes total.

“The agent has the skills of an excellent recruiter, but it never gets tired and doesn’t need to be told specific methods.”
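The signature move in that workflow is the pivot: try a search strategy, notice it’s producing weak candidates, and switch to another before ranking what survives. A minimal sketch of that pattern, with hypothetical strategy names and toy data standing in for LinkedIn titles and conference speakers (none of this is Sequoia’s or any product’s actual logic):

```python
def search_with_pivots(strategies, min_quality, top_k=1):
    """Run search strategies in order, pivoting whenever the current one
    yields no candidates above the quality bar, then rank the survivors."""
    for name, run, score in strategies:
        candidates = run()
        scored = [(score(c), c) for c in candidates]
        good = [sc for sc in scored if sc[0] >= min_quality]
        if good:                                      # strategy worked: rank and stop
            good.sort(key=lambda sc: sc[0], reverse=True)
            return name, [c for _, c in good[:top_k]]
    return None, []                                   # every strategy came up empty

# Hypothetical data: job titles alone score poorly, so the search pivots.
linkedin = lambda: [{"name": "A", "influence": 0.2}]
speakers = lambda: [{"name": "B", "influence": 0.9},
                    {"name": "C", "influence": 0.6}]
by_influence = lambda c: c["influence"]

strategy, picks = search_with_pivots(
    [("linkedin-titles", linkedin, by_influence),
     ("conference-speakers", speakers, by_influence)],
    min_quality=0.5)
print(strategy, [p["name"] for p in picks])  # → conference-speakers ['B']
```

A real agent would generate and score these strategies on the fly; the fixed list here just makes the pivot visible.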

The Exponential Curve

This isn’t a one-off—it’s a trend. METR, an independent evaluation organization, has tracked AI’s ability to complete long-horizon tasks:

Capability doubles approximately every 7 months.

Projecting forward:

  • 2028: AI reliably completes a human expert’s full day of work
  • 2030-2034: AI completes a human expert’s full year of work
  • 2037: AI completes tasks requiring 100 human years

One hundred years of work could mean analyzing all historical clinical trial data, finding patterns in millions of customer service records, or rewriting complex regulatory frameworks. Caveat: these doubling curves assume continued stability—no major hardware bottlenecks, regulatory shocks, or evaluation-set drift. If benchmarks are gamed or task distributions shift, the timeline could stretch or compress; treat the numbers as a direction, not a guarantee.
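The projection arithmetic is easy to reproduce. Two assumptions here are mine, not METR’s published figures: a starting task horizon of roughly one hour in early 2026, and a work-year of about 2,000 hours.

```python
import math

DOUBLING_MONTHS = 7    # METR's observed doubling time (via Sequoia)
START_YEAR = 2026.0    # assumed starting point
START_HOURS = 1.0      # assumed ~1-hour task horizon in early 2026

def year_reached(target_hours):
    """Year the task horizon reaches target_hours, given steady doubling."""
    doublings = math.log2(target_hours / START_HOURS)
    return START_YEAR + doublings * DOUBLING_MONTHS / 12

print(round(year_reached(8)))           # full expert workday → 2028
print(round(year_reached(2000)))        # full work-year → 2032
print(round(year_reached(100 * 2000)))  # 100 work-years → 2036
```

With these assumptions the milestones land near the article’s 2028 / early-2030s / mid-2030s figures; shifting the assumed starting horizon or doubling time moves every date, which is exactly why the caveat above matters.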

Counterweight: Reasons for Skepticism

Caveats: Models overfit to the tasks they’re evaluated on; crushing coding or recruiting doesn’t guarantee the same on other long-horizon work. A slick demo isn’t a production system. I still think the functional claim—“systems that can figure things out”—holds for 2026. The nuance is how far that generalizes.

Bold prediction (next 5 years): By 2030, at least one major company will run a business unit where the majority of “knowledge work” (analysis, drafting, coordination) is done by agentic AI with humans in review and exception-handling roles. The debate will shift from “is it AGI?” to “how do we structure organizations around it?”


The Three Pillars of Modern AGI

Sequoia’s framing—what had to fall into place:

| Pillar | What It Provides | When It Arrived |
| --- | --- | --- |
| Pre-training (Knowledge) | Baseline understanding of the world | ChatGPT (2022) |
| Inference-time computation (Reasoning) | Ability to think through problems step-by-step | o1, DeepSeek R1 (2024-2025) |
| Long-horizon agents (Iteration) | Autonomous work over extended periods, self-correction | Claude Code, Manus, etc. (Early 2026) |

Combine all three, and you have an AI that can:

  • Work autonomously for extended periods
  • Make mistakes and self-correct
  • Decide next steps without instructions
  • Solve problems in ambiguous, real-world conditions

What AGI Means for Work and Society

The Employment Question

Both Amodei and Hassabis agree on one thing: AI is already affecting jobs, particularly entry-level positions.

Amodei’s prediction: “In the next one to five years, 50% of entry-level white-collar jobs will disappear. It’s not that companies will cut more—it’s that companies will stop hiring.”

Hassabis adds nuance: “This year, we’re already seeing internships and junior positions affected. What’s being replaced first are repetitive, rules-based tasks that take time but don’t require experience.”

The emerging picture:

  • Experienced roles still exist
  • Entry pathways are narrowing
  • Mid-career advancement becomes compressed

The paradoxical result: Companies operate normally, but career ladders are fracturing. Organizations haven’t adapted their hiring, training, or promotion mechanisms to this new reality.

The “Hiring AI” Era

Sequoia frames the transformation in practical terms: The litmus test for AGI is whether you can “hire” it.

Already, you can “hire” specialized AI agents across industries:

  • Medical: OpenEvidence’s Deep Consult as specialist physician
  • Legal: Harvey agent as junior lawyer
  • Cybersecurity: XBOW agent as penetration tester
  • DevOps: Traversal agent as Site Reliability Engineer
  • GTM: Day AI agent as BDR, SE, and RevOps combined
  • Recruiting: Juicebox agent as recruiter
  • Mathematics: Harmonic’s Aristotle as mathematician
  • Chip design: Ricursive agent as chip designer
  • AI research: GPT-5.2 and Claude as research assistants

This shifts the paradigm entirely:

  • 2023-2024 AI applications: “Talkers”—conversational partners
  • 2026-2027 AI applications: “Doers”—colleagues working alongside you

Usage patterns change from a few queries daily to multiple agents running continuously. The user’s role shifts from individual contributor to manager of an AI team.

The Deeper Questions

Reflection: The employment numbers (50% of entry-level jobs at risk, etc.) are the headline, but the harder issue is what happens to meaning when machines can do most of the “thinking” work. Hassabis is right to flag it:

“Then there are even bigger questions than that—to do with meaning and purpose. A lot of the things we get from our jobs, not just economically—that’s one question—but what happens to the human condition and humanity as a whole?”

Economic redistribution might be solvable. The crisis of human purpose when machines can perform most cognitive tasks may be the deeper challenge.


Risks and Safeguards

The Danger of Self-Improvement Loops

Lee Ki-min pinpoints the core risk: “If AI can learn and improve on its own without human intervention, the pace of its development will become uncontrollably fast.”

Once AI can improve AI without human input, the trajectory moves beyond our ability to predict or control it.

Three Categories of AI Risk

Kim Kyung-hoon, AI Safety Leader at Kakao, categorizes the threats:

| Risk Type | Description |
| --- | --- |
| Malicious use | Bad actors weaponizing AI |
| Systemic risk | Widespread failures or cascading impacts |
| Malfunction risk | AI smarter than humans no longer follows commands |

The last—loss of human control—is the existential concern.

Technical Safeguards Under Development

Amodei describes Anthropic’s work on “mechanistic interpretability”—studying model decision-making like neuroscientists study brains, understanding why outputs occur, then intervening and retraining.

Hassabis calls for global scientific collaboration, comparing the need to CERN—open, transparent, international cooperation on AGI safety.

Both advocate for:

  • International coordination on safety standards
  • Restrictions on advanced chip sales to maintain strategic advantage
  • Slower, more deliberate development pacing to allow societal preparation

As Amodei puts it: despite predicting faster progress, he prefers Hassabis’s longer timeline—“it would be better for the world.”


The Bottom Line

My read: On a functional definition—“ability to figure things out” across open-ended tasks—AGI is already here in 2026. Long-horizon agents are doing it. If you reserve “AGI” for human-level generality including scientific creativity, we’re not there yet; Hassabis puts that at ~50% by 2030. METR’s curves suggest a full expert day by ~2028 and a full year’s work in the early 2030s.

The 1–2 vs. 5–10 year split is less important than the consensus: self-improvement loops are forming, entry-level work is shifting now, and most organizations and policy are behind. The question isn’t “will AGI show up?”—it’s whether we’ll have institutions and guardrails that keep up. Stop thinking of AI as a chatbot; think of it as something you assign work to. That’s already the reality—a sprawl of agents figuring things out, not a single “AGI launch.”


Quick Reference: Key AGI Data Points (2026)

| Question | Answer | Source |
| --- | --- | --- |
| Has AGI arrived? | Yes (functional definition) | Sequoia, UC San Diego professors |
| Fastest timeline | 1-2 years (Amodei) | Davos 2026 |
| Conservative timeline | 5-10 years (Hassabis, 50% by 2030) | Davos 2026 |
| Capability doubling time | Every 7 months | METR via Sequoia |
| Entry jobs at risk | 50% in 1-5 years | Amodei, Davos 2026 |
| Full-day work by AI | 2028 | METR projection |
| Full-year work by AI | 2030-2034 | METR projection |
| 100-year work by AI | 2037 | METR projection |

Sources: UC San Diego, World Economic Forum Davos 2026, Sequoia Capital, METR via Sequoia, Science Donga. All data current as of February-March 2026.

About the author

Ravi Kinha

Technology enthusiast and developer with experience in AI, automation, cloud, and mobile development.
