Google DeepMind, once celebrated for freely sharing groundbreaking AI research, has shifted gears. The renowned AI lab now carefully guards its research findings as competition heats up in the artificial intelligence industry.
Several current and former research scientists say DeepMind has implemented stricter review processes, making it significantly harder to publish studies about its work.
The organization is particularly reluctant to share papers that might benefit competitors or present Google’s Gemini AI model in an unfavorable light compared to rival systems.
This represents a major change for DeepMind, which built its reputation on publishing revolutionary research and attracting top scientific talent. In 2017, Google researchers released the landmark transformer paper, “Attention Is All You Need,” which provided the foundation for today’s large language models and directly contributed to the current generative AI boom.
“I cannot imagine us putting out the transformer papers for general use now,” admitted one current researcher.
DeepMind Tightens AI Research Rules, Restricting Publication of Strategic Papers
Among the new restrictions is a six-month embargo on “strategic” papers related to generative AI. Researchers must now win approval from multiple staff members before a paper can be published, according to insiders familiar with the matter.
A source close to DeepMind defended these changes, saying they aim to spare researchers from wasting time on work unlikely to receive approval for strategic or competitive reasons. They emphasized that the company still publishes hundreds of papers annually and remains a major contributor to important AI conferences.
The shift coincides with investor concerns that Google was losing ground to competitors like OpenAI, the maker of ChatGPT. These worries contributed to the 2023 merger of London-based DeepMind with Google’s California-based Brain AI unit. Since then, the company has accelerated the release of various AI-enhanced products.
“The company has shifted to one that cares more about product and less about getting research results out for the general public good,” explained a former DeepMind research scientist. “It’s not what I signed up for.”
In response, DeepMind stated it has “always been committed to advancing AI research” and is “instituting updates to our policies that preserve the ability for our teams to publish and contribute to the broader research ecosystem.”
Former staff members suggest the new procedures have restricted the sharing of commercially sensitive research to protect potential innovations. One claimed that publishing papers on generative AI had become “almost impossible.”
In one notable instance, DeepMind reportedly blocked the publication of research showing Google’s Gemini language model underperforming compared to competitors, especially OpenAI’s GPT-4. Interestingly, it also prevented the release of a paper revealing vulnerabilities in OpenAI’s ChatGPT, fearing it might appear as competitive retaliation.
DeepMind’s Balancing Act: Research Freedom vs. Google’s AI Ambitions
A DeepMind insider countered that the company doesn’t block papers discussing security vulnerabilities, noting that such work is routinely published under a “responsible disclosure policy” requiring researchers to give companies time to fix flaws before going public.
The publication restrictions have unsettled some employees, particularly in an environment where career advancement has traditionally been tied to appearances in prestigious scientific journals. “If you can’t publish, it’s a career killer if you’re a researcher,” noted a former staff member.
Former employees also mentioned that projects focused on enhancing Google’s Gemini products increasingly receive priority when competing for valuable resources like datasets and computing power.
Despite these internal tensions, Google has successfully launched various AI-powered products that have impressed the market. These include improved AI-generated search summaries and an “Astra” AI agent capable of answering real-time queries across multiple formats.
The company’s stock price rose by as much as a third over the past year, though recent concerns about US tariffs have trimmed some of those gains.
DeepMind’s leader, Nobel Prize-winner Sir Demis Hassabis, continues balancing Google’s commercial ambitions with his personal mission to develop artificial general intelligence—AI systems matching or exceeding human capabilities.
“Anything that gets in the way of that he will remove,” said one current employee. “He tells people this is a company, not a university campus; if you want to work at a place like that, then leave.”