OpenAI Sets Fully Automated AI Researcher as Multi-Year North Star
Summary
- OpenAI's new North Star: a fully automated AI research agent by 2028
- An autonomous AI research intern targeted for deployment by September 2026
- The system aims to tackle problems too large or complex for humans alone
- Chief scientist Jakub Pachocki says OpenAI now has most of what it needs to get there
Details
OpenAI names autonomous AI researcher its primary research goal for the next several years
The company is framing this as its 'North Star,' pulling together previously separate research tracks — reasoning models, agents, and interpretability — into one unified objective. It is a significant organizational and directional signal from one of the industry's most influential labs.
Autonomous AI research intern targeted for release by September 2026
This first milestone will be a system capable of independently handling a small number of specific research problems. It is conceived as the precursor architecture to the full multi-agent system, allowing OpenAI to iterate on agent reliability, coherence, and task scoping before scaling up.
Fully automated multi-agent AI research system planned for 2028
The 2028 system is intended to tackle problems too large or complex for human researchers — spanning math, physics, biology, chemistry, business, and policy. Any problem formulated in text, code, or diagrams would theoretically be in scope.
Reasoning models, agents, and interpretability research converging under one agenda
Rather than pursuing these as parallel tracks, OpenAI is positioning them as converging inputs toward the AI researcher goal. This suggests resource and talent prioritization will shift accordingly across the organization.
Chief scientist Pachocki envisions a 'whole research lab in a data center'
Pachocki, who helped develop GPT-4 and OpenAI's reasoning models, stated that the company is close to models that can sustain long-horizon coherent work — with humans still setting goals but automated systems executing research end-to-end.
OpenAI's Codex agent, released January 2026, represents an early step in this direction
Codex is an agent-based app that generates and executes code on the fly, analyzes documents, and creates daily digests. It demonstrates OpenAI's current agent capabilities as a foundation for more autonomous research-oriented systems.
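To make "generates and executes code on the fly" concrete, below is a minimal sketch of that generate-run-retry loop using the OpenAI Python SDK. It is a conceptual illustration, not Codex's actual implementation; the model choice, prompts, and retry budget are assumptions for the example.

```python
# Conceptual sketch of a generate-and-execute agent loop.
# Not Codex's implementation; model, prompts, and retry budget are
# illustrative. Requires the `openai` package and OPENAI_API_KEY set.
import subprocess
import sys

from openai import OpenAI

client = OpenAI()


def solve(task: str, max_attempts: int = 3) -> str:
    """Ask a model for a Python script, run it, and feed errors back."""
    feedback = ""
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any code-capable model
            messages=[
                {"role": "system",
                 "content": "Reply with only a runnable Python script."},
                {"role": "user", "content": task + feedback},
            ],
        )
        code = response.choices[0].message.content.strip()
        if code.startswith("```"):
            # Strip markdown fences the model may add anyway.
            code = code.strip("`").removeprefix("python").strip()
        # Run in a separate interpreter with a timeout. A production
        # agent would use a real sandbox, not a bare host subprocess.
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode == 0:
            return result.stdout
        feedback = f"\n\nYour previous script failed with:\n{result.stderr}"
    raise RuntimeError("no working script within the attempt budget")


if __name__ == "__main__":
    print(solve("Compute the 20th Fibonacci number and print it."))
```

The essential agent ingredient is the feedback edge: execution errors are appended to the next prompt, so the model can iterate toward a script that runs without a human in the loop.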
Strategic pivot comes amid intensifying competition from Anthropic and Google DeepMind
The competitive pressure from rival frontier labs — each advancing their own long-horizon agent and research capabilities — makes this strategic direction critical not just for OpenAI's market position but for determining whose approach to autonomous AI research shapes the field.
Anthropic CEO Dario Amodei has described his own lab's goal as 'a country of geniuses in a data center'
The parallel framing from Anthropic underscores that automated scientific research is now a stated goal across multiple leading AI labs. The race to build credible autonomous research systems is accelerating across the frontier.
What This Means
OpenAI is making a definitive strategic bet that the next frontier of AI value lies not in better chat products but in systems that can autonomously conduct scientific and technical research at scale. If the 2028 timeline holds, it would represent a qualitative shift in what AI can do — moving from tool to collaborator to independent researcher. This matters for anyone in science, medicine, engineering, or policy, as the question becomes not just whether such a system works, but who controls it, what problems it prioritizes, and how its outputs are validated. The September 2026 intern milestone will be the first real test of whether this vision is an engineering roadmap or merely an aspiration.
Sentiment
Broadly excited about the ambition and timelines, with a contrasting minority raising concerns about risk and feasibility
“"Anything made before 2028 is going to be valuable." — an OpenAI employee implicitly discloses their timetable”
“Anthropic co-founder Jared Kaplan believes that FULLY automated AI research could be as little as ONE YEAR away. As a reminder, OpenAI's stated goal is to reach fully automated AI research in 2 years (by March 2028). I now wonder whether this was too conservative.”
“This AI Scouting Report is for folks who know the @METR_Evals chart, but don't know that @OpenAI plans to have a fully automated AI researcher in 2028. 90 slides in 1 hour at @UCLaw_SF @LexLabSF's Law & AI Certificate Program.”
“OpenAI thinks they are on the verge of automating AI R&D, creating an entity of untold power and autonomy. Almost everyone who thinks this is risky has left. Those that remain are a bunch of idiot-geniuses, speed-running the creation of a god they refuse to contemplate.”
The final quote notably highlights safety and control risks.
Split
~70/30 excited optimists vs risk skeptics.
Sources
- OpenAI is throwing everything into building a fully automated researcher (MIT Technology Review)
