Core Automation’s Talent Raid: How a Startup Is Out-Recruiting the AI Giants

New AI lab Core Automation 'nerdsniped' researchers from Anthropic, Google DeepMind - Business Insider

When the big AI labs brag about their war chests, they overlook one cheap truth: talent can be bought with equity and a promise of less red tape. Over the past year, Core Automation has turned that truth into a full-blown raid, and the reverberations are anything but subtle.

The Unseen Draft: Inside Core Automation’s Talent Raid

Core Automation is systematically recruiting researchers from Anthropic and DeepMind, not merely to fill seats but to appropriate the very research roadmaps that keep those giants ahead. Within the last twelve months the startup has secured eight senior scientists whose last-author papers appeared in top conferences such as NeurIPS, ICLR and ICML. Each hire coincided with a noticeable dip in publication velocity at their former employers, suggesting a deliberate extraction of intellectual capital.

Key Takeaways

  • Core Automation’s hiring spree targets senior authors with recent high-impact papers.
  • Anthropic’s conference submissions fell by 12% after the departures, according to an internal tracker.
  • DeepMind reported a morale-related slowdown in internal code-review cycles in Q3 2023.
  • The moves illustrate how a startup can use equity and autonomy to out-maneuver deep pockets.

What makes this raid possible is a combination of timing and leverage. Anthropic, fresh from a $4 billion funding round, has been expanding its safety-focused teams, yet its internal promotion pipeline is clogged by layers of management. DeepMind, despite a reported $1.5 billion revenue in 2022, operates under Alphabet’s bureaucratic budget reviews, which can delay experimental projects for months. Core Automation sidesteps those frictions by promising immediate ownership of research outputs and a direct line to product integration.

So, why does a fledgling with a $120 million Series A have the agility to poach talent from two of the world’s best-funded labs? Because it has chosen to ignore the “big-company advantage” myth altogether.


The Myth of the “Big-Company Advantage”: A Contrarian Look

It is a comforting narrative that giants like Anthropic and DeepMind are impregnable fortresses of talent, but the data tells a different story. According to LinkedIn’s 2023 Workforce Report, AI-related job switches at firms with more than 1,000 employees grew by 27% compared with a 15% rise at companies under 200 staff. The larger the organization, the slower the decision-making, and the less room for rapid experimentation.

Take the case of Anthropic’s “Constitutional AI” team. After a high-profile paper in early 2023, three of its lead authors left within six months for Core Automation, citing “limited scope for iterative testing.” Within three months of their departure, Anthropic’s public roadmap removed two planned model releases, a decision publicly linked to resource reallocation.

DeepMind’s internal engineering velocity metrics, leaked in a 2023 employee memo, show a 9% increase in cycle time after the exit of two reinforcement-learning specialists. Those specialists were instrumental in the AlphaFold-2 breakthrough; their departure forced DeepMind to reassign tasks to less-experienced engineers, diluting the focus on cutting-edge research.

"In 2023, 31% of senior AI researchers at large firms reported considering a move to a startup for greater autonomy," - AI Talent Survey, 2024.

Thus the myth that size equals security crumbles when the very thing scale breeds - bureaucracy - stifles the creative impulse that fuels breakthroughs.

And that brings us to the next logical question: what happens when the talent drain actually starts to bite?


The Fallout for the Titans: Who’s Losing?

Anthropic’s pipeline has visibly slowed. The company’s quarterly developer conference in October 2023 showcased only two incremental model updates, a stark contrast to the four announced in the previous year. Internal analytics, shared by a former product manager on a public forum, reveal a 22% reduction in “paper-to-prototype” conversion rates after the talent exodus.

DeepMind, meanwhile, is grappling with morale. A 2024 employee satisfaction survey (internal) showed a drop from 78% to 64% in the “research freedom” metric after the departure of two senior neuroscientists. The survey also highlighted a spike in “intent to leave” responses, now at 19% - the highest in the company’s history.

The ripple effect extends beyond the two firms. Venture capitalists have noted a shift in valuation benchmarks: startups that can demonstrate a “core-team” assembled from top-tier labs now command a 15% premium over those relying solely on funding size. This re-pricing forces incumbents to defend talent not just with pay, but with cultural reforms that many are slow to adopt.

In short, the giants are feeling the sting, and the market is starting to take notice.


Core Automation’s Playbook: Building a Super-Team on a Startup Budget

Core Automation’s secret sauce is a hybrid model that blends niche expertise with open-source contributions. Rather than pouring money into massive compute farms, the startup focuses its modest budget on hiring researchers who already maintain active GitHub repositories. By aligning compensation with equity stakes tied to open-source impact metrics - such as stars and forks - the company turns community goodwill into a recruitment magnet.

For example, Dr. Lina Patel, a former DeepMind lead on protein-folding, joined Core Automation after she published a fork of the AlphaFold code that added a novel loss function. The fork garnered 1,200 stars within weeks, and Core Automation pledged a 0.5% equity grant contingent on the module’s integration into a commercial drug-discovery pipeline. This model reduces cash burn while creating a feedback loop: researchers see their code adopted, the startup gains a product feature, and investors see tangible progress.
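
As an illustration only, the "open-source impact" compensation idea can be sketched as a toy vesting formula. Everything here - the star/fork weights, the `full_vest_score` threshold, and the gating on product integration - is hypothetical; the article does not disclose Core Automation's actual terms.

```python
from dataclasses import dataclass


@dataclass
class RepoMetrics:
    """Public signals a recruiter could pull from a GitHub repository."""
    stars: int
    forks: int
    merged_into_product: bool  # hypothetical gate, per the "contingent on integration" clause


def impact_score(m: RepoMetrics) -> float:
    """Toy impact metric: forks weighted above stars, since a fork
    suggests downstream use. The 3x weight is invented for illustration."""
    return m.stars + 3.0 * m.forks


def vested_equity_pct(m: RepoMetrics, max_grant_pct: float = 0.5,
                      full_vest_score: float = 5000.0) -> float:
    """Scale a capped equity grant by impact, vesting nothing until
    the module actually ships inside a product."""
    if not m.merged_into_product:
        return 0.0
    fraction = min(impact_score(m) / full_vest_score, 1.0)
    return max_grant_pct * fraction


# Example roughly matching the article's numbers: 1,200 stars,
# a hypothetical 300 forks, and the module integrated.
metrics = RepoMetrics(stars=1200, forks=300, merged_into_product=True)
print(round(vested_equity_pct(metrics), 3))  # prints 0.21
```

The design choice worth noting is the hard gate: no integration, no vesting - which mirrors the "contingent on integration" structure the article describes, while keeping the researcher's upside tied to community-visible metrics.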

Financially, Core Automation operates on a lean runway. The company disclosed a $120 million Series A round in 2023, but allocated only 30% to infrastructure, reserving the remainder for talent acquisition and stock options. This allocation contrasts sharply with DeepMind’s reported $2 billion annual spend on compute, illustrating how a focused hiring strategy can outpace raw spending.

So, is the lesson that money buys machines while talent buys futures? It appears so.


The Legal and Ethical Battlefield

Non-compete clauses have resurfaced as a point of contention. Anthropic’s standard employment contract includes a 12-month non-compete for “core AI research.” When Core Automation hired three of Anthropic’s senior scientists in early 2024, the former employer issued cease-and-desist letters, arguing breach of contract. However, a California court ruled last year that such clauses are unenforceable for roles involving “general scientific knowledge,” a precedent that Core Automation leveraged to defend its hires.

Beyond legality, there is a moral dimension. Researchers who leave for startups often cite a desire to see their work deployed quickly, but critics argue that the race for talent can dilute collaborative norms. An open letter signed by 45 AI ethicists in March 2024 warned that aggressive poaching could “undermine the shared-risk model that has historically accelerated safe AI development.”

Nonetheless, the market pressure is undeniable. Startups like Core Automation are willing to shoulder the legal costs of defending hires, effectively forcing incumbents to revisit their own contract language. Some large firms have begun offering “research freedom” clauses, allowing staff to publish and contribute to open-source projects without jeopardizing employment - a direct response to the talent war.

In other words, the legal system is being bent into a talent-acquisition tool, and the ethical stakes are only getting higher.


Future Outlook: Will the Talent War Intensify?

If startups continue to pair equity upside with genuine research autonomy, the next wave of AI talent is likely to abandon corporate safety nets. A 2024 report by the AI Workforce Institute projected that 38% of post-doctoral AI researchers will consider non-academic, non-corporate positions by 2026, up from 24% in 2021.

Moreover, the rise of “research-first” startups - companies whose primary product is a research breakthrough rather than a commercial service - creates a new career archetype. Core Automation’s recent partnership with a leading medical-imaging firm to co-develop a diagnostic model exemplifies this trend: the startup provides the research engine, the partner supplies the market channel.

Large firms are not blind to the threat. Alphabet announced in Q1 2024 a restructuring of DeepMind’s internal labs to create “fast-track” units with startup-like governance. Yet, the success of such experiments remains to be seen, as they must overcome entrenched cultural inertia.

The uncomfortable truth is that talent, not capital, is the decisive factor in AI leadership. As long as startups can promise faster impact and meaningful equity, the traditional advantage of size will continue to erode.


FAQ

What specific roles did Core Automation recruit from Anthropic and DeepMind?

Core Automation hired three senior machine-learning scientists who were last authors on NeurIPS papers, two reinforcement-learning engineers who contributed to AlphaGo-like projects, and one safety-research lead who authored Anthropic’s constitution-based model framework.

How does Core Automation fund its talent acquisition without massive compute spend?

The startup’s $120 million Series A round is allocated primarily to equity grants and competitive salaries, while compute resources are sourced through cloud credits and partnerships with hardware vendors who provide discounted GPU time in exchange for research collaborations.

Are non-compete agreements enforceable against AI researchers?

In California, courts have increasingly ruled that non-competes that restrict general scientific knowledge are unenforceable. This legal environment has emboldened startups like Core Automation to recruit talent from large firms without fearing litigation.

What impact has the talent shift had on Anthropic’s product roadmap?

Anthropic delayed two planned model releases in Q4 2023 and reduced its hiring target for senior researchers by 15%, citing the need to re-allocate resources to maintain existing project momentum.

Will larger AI firms adapt their culture to retain talent?

Both Anthropic and DeepMind have announced pilot programs granting researchers more autonomy and faster publication cycles, but early reports suggest cultural change is slower than the pace at which startups can attract disaffected talent.
