OpenAI's New ChatGPT Tool Revolutionizes Research, But Human Experts Still Crucial
February 12, 2025
OpenAI has introduced a new 'deep research' tool integrated into ChatGPT Pro, designed to complete in minutes research tasks that would take human experts far longer.
This tool acts as a research assistant, autonomously searching the web, compiling sources, and delivering structured reports.
Despite its advanced capabilities, deep research can miss key details and sometimes invents facts outright. Early evaluations have also highlighted a lack of context, missed recent developments, and an inability to distinguish reliable sources from unreliable ones.
These shortcomings emphasize that, despite advancements in AI, skilled human researchers who can synthesize information and think critically remain irreplaceable.
To use AI responsibly, it is crucial to verify sources and apply critical thinking, especially in high-stakes areas like health and justice.
The emergence of similar AI tools, including a free alternative from Hugging Face, heightens the risk that users will overestimate AI's capabilities.
Currently, deep research is available only to ChatGPT Pro users in the U.S. for $200 per month, with plans to expand access to other user tiers soon.
The tool is primarily targeted at professionals in fields such as finance, science, law, and engineering, as well as academics and business strategists.
The research process involves five steps: submitting a request, clarifying the task, searching the web, synthesizing findings, and delivering a report within five to thirty minutes.
In early tests, deep research achieved a score of 26.6% on Humanity's Last Exam, outperforming many other AI models.
However, OpenAI acknowledges that the tool has limitations, including the potential to 'hallucinate' facts and make incorrect inferences, though at a lower rate than previous models.
Summary based on 2 sources
Sources

Mirage News • Feb 12, 2025
OpenAI's New 'Deep Research' Agent Is Still Just a Fallible Tool, Not a Human-Level Expert