OpenAI is building an AI researcher that needs no human help
What's the story
OpenAI, the leading artificial intelligence (AI) company, has announced its next big project: a fully automated, agent-based AI researcher. The ambitious initiative aims to create an autonomous system that can independently tackle complex problems without step-by-step human guidance. The company's chief scientist, Jakub Pachocki, told MIT Technology Review that this project is now OpenAI's long-term goal.
Project evolution
The new AI system will plan its own work
The new AI system being developed by OpenAI will not just answer prompts but also plan its own work, analyze information, and test different ideas. The company hopes to combine various research strands, such as reasoning models, autonomous agents, and interpretability, into a single system. This unified system would be able to solve large, complex problems in fields like mathematics, physics, and the life sciences with little human intervention.
Initial phase
Starting with an 'AI research intern'
The first step in this ambitious project is to create an "AI research intern": an autonomous agent able to take on smaller research tasks that would usually take a human several days. Over time, OpenAI plans to scale this into a larger multi-agent system in which multiple AI programs, running in data centers, work together on bigger and more complex projects.
Industry race
Competing with other tech giants
OpenAI's plan to build an autonomous researcher comes as the race to develop autonomous AI systems heats up. Companies like Google DeepMind and Anthropic are also working on advanced reasoning models and AI agents. However, OpenAI believes recent advancements in coding agents and reasoning-based AI show that machines are already getting better at working longer without human help.
Risk management
Addressing safety concerns
The project also raises major safety concerns: systems that run independently for long periods could make mistakes, misinterpret instructions, or be misused if not properly controlled. To mitigate these risks, OpenAI is already experimenting with ways to monitor how an AI reasons while it works. The company may also keep highly capable systems in restricted environments to limit the risks of autonomous operation.