When Everyone Has LLMs, Who Do Quants Still Hire?

The democratisation of alternative data has been a recurring theme at industry conferences for several years. The arrival of capable large language models has accelerated it. Datasets that once required specialist teams to ingest and structure can now be parsed by anyone with a reasonable prompt; signals that once took weeks to extract can be generated in an afternoon. So what, in this environment, still creates edge?

That question framed the closing panel at the recent A-Team/Eagle Alpha Alternative Data Conference in New York. The answer that emerged was less about data and more about people: who quant firms still need to hire, and who they don’t. The session, titled “Democratisation of alternative data – when everyone has LLMs, what still creates edge?”, was moderated by Kathryn Zhao, Adjunct Professor at Cornell University, with panellists Gordon Ritter, Chief Investment Officer at Ritter Alpha; Tony Berkman, formerly Managing Director at Two Sigma; Evan Reich, formerly Head of Data at Verition Fund Management; Duncan Robinson, Director of AI & Optimization and Principal Research Scientist at Franklin Templeton; and Ellison Kandler, SVP and Head of Equity Solutions and Classification at Syntax.

The roles that no longer get hired

One portfolio manager on the panel described two job categories he used to fill regularly, and no longer does. The first was the pure quantitative developer, the bright young coder who could take a specified model and turn it into production-ready code. The second was the algorithms specialist who could answer any question about computational complexity off the top of their head.

Both roles, he said, have been absorbed by AI coding assistants. A quant who used to spend a day on a Kalman filter implementation can now do it in an hour. Questions about the fastest algorithm for a given problem are answered by the model, not by an in-house expert.
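
For readers outside the day-to-day workflow, the scale of the task is worth illustrating. A one-dimensional Kalman filter of the kind the panellist mentioned can be sketched in a few lines of Python. This is an illustrative sketch only, not the panellist’s implementation, and the parameter values are arbitrary:

```python
import numpy as np

def kalman_filter_1d(observations, process_var=1e-5, obs_var=1e-2):
    """Minimal 1-D Kalman filter: track a latent level through noisy data.

    Illustrative sketch -- the kind of routine implementation the
    panellist said an AI assistant now produces in minutes.
    """
    x_est = observations[0]   # initial state estimate
    p_est = 1.0               # initial estimate variance
    estimates = []
    for z in observations:
        # Predict: state persists; uncertainty grows by the process variance
        p_pred = p_est + process_var
        # Update: blend prediction and observation via the Kalman gain
        k = p_pred / (p_pred + obs_var)
        x_est = x_est + k * (z - x_est)
        p_est = (1.0 - k) * p_pred
        estimates.append(x_est)
    return np.array(estimates)

# Example: smooth a noisy random-walk "price" series
prices = np.cumsum(np.random.normal(0, 0.1, 250)) + 100
smoothed = kalman_filter_1d(prices)
```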

This isn’t a marginal observation. If implementation is no longer a bottleneck, headcount and budget shift toward the parts of the workflow that still need humans. The panel was relatively united on what those are.

What’s harder to automate

Several panellists circled the same idea: the bottleneck has moved from doing the work to framing the problem, validating the output, and knowing what’s worth asking in the first place.

One speaker, only half-joking, suggested that parents weighing university options for their children should steer them toward English and philosophy. The serious version is that the skills that matter for engaging with AI productively – clear communication, creative framing, orthogonal thinking – are not what a computer science degree optimises for. He noted that an excellent engineer he had previously worked with had told him the AI now codes better than he does.

A vendor representative offered a concrete example. A first-year analyst with an English degree had vibe-coded a point-in-time prototype that became the basis of the firm’s extraction methodology. The prototype itself wasn’t production-grade, and no one suggested it should ship. But the insight – recognising that a manual process could be handed to an LLM with a discrete directive and tested iteratively – came from someone with no formal technical background.
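
The panel did not describe the prototype’s internals, but the pattern is straightforward to sketch. In the sketch below, call_llm is a hypothetical helper standing in for whatever chat-completion API a firm uses; the directive and schema are likewise invented for illustration:

```python
import json

DIRECTIVE = """You are given the raw text of one historical filing.
Extract only facts stated as of the filing date (point-in-time; no hindsight).
Return JSON with keys: company, period_end, metric, value, unit."""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("wire up your provider of choice here")

def extract_point_in_time(raw_filing_text: str) -> dict:
    # One discrete directive per document; output is structured and testable
    response = call_llm(DIRECTIVE + "\n\n" + raw_filing_text)
    record = json.loads(response)
    # Iterative testing happens here: check the schema before anything ships
    assert {"company", "period_end", "metric", "value", "unit"} <= record.keys()
    return record
```

The point of the anecdote survives the hedging: the value was not in the code, which any competent engineer could harden, but in spotting that the manual process decomposed into directives like this at all.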

A counterpoint came from another panellist who pushed back on the liberal-arts framing. Reading AI-generated code and judging whether it’s doing the right thing is much faster if you have a coding background yourself, he argued. Without that grounding, you’re left testing the final application – a long way from understanding what your system is actually doing. Few quant traders would be comfortable operating at that remove.

The synthesis came from a buy-side panellist: the most valuable hires combine agency and logical thinking with genuine domain understanding. Pure technical skill without business context has always been limited. Pure liberal-arts thinking without methodological grounding has the same problem. What’s changed is that the technical-skill-only profile has lost most of its remaining value.

Why this matters for spend

The hiring shift maps directly onto where panellists said incremental budget should go. Asked where they would put marginal dollars, the panel pointed away from the parts of the stack that LLMs have already absorbed.

Data engineering, in the view of one panellist, is roughly 98% solvable and likely to disappear as a discrete function within three years, replaced by exception-handling and vendor coordination. Execution, said another, is a problem the major investment banks have largely solved. Trying to beat them is no longer a sensible use of resources. Portfolio optimisation, the same panellist argued, is similarly mature. AI should be used to determine the inputs to an optimiser, not to replace the optimiser itself.
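
That division of labour – AI upstream of the optimiser, the optimiser itself classical – is concrete enough to sketch. In the toy Python example below, mu_from_ai is a hypothetical stand-in for model-derived return forecasts; the weights come from a standard closed-form mean-variance step, deliberately left untouched:

```python
import numpy as np

def mean_variance_weights(mu, cov):
    """Classical unconstrained mean-variance weights, normalised to sum to 1.

    The optimiser is deliberately standard; on the panellist's argument,
    the AI's job is to produce a better `mu`, not to replace this step.
    """
    w = np.linalg.solve(cov, mu)   # Sigma^{-1} mu
    return w / w.sum()

# Toy data: 500 days of returns for 4 assets; cov from an ordinary estimator
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 4))
cov = np.cov(returns, rowvar=False)

# mu_from_ai: hypothetical model-derived expected-return forecasts
mu_from_ai = np.array([0.0008, 0.0004, 0.0006, 0.0002])
weights = mean_variance_weights(mu_from_ai, cov)
```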

What’s left, by elimination, is the front end of the research process: feature engineering on the quant side, ideation on the discretionary side. One panellist argued this is the inflection point, the place where AI can produce a profound change in the scale and quality of what a research team can generate, and therefore where marginal spend has the highest return.

A vendor panellist made a related point about proprietary data. If an LLM can recreate your dataset, it wasn’t proprietary to begin with. The defensible work is in the systematic, step-by-step refinement that turns raw inputs into something an outside model can’t reproduce. That, too, requires human judgment at the flagging and validation stages.

The orthogonality question

One observation, raised in passing by a former multi-strat panellist, didn’t get picked up by the others but is worth flagging. Multi-manager and pod shops rely on running portfolios of orthogonal strategies that can be levered five, six, seven times notional. If LLMs allow quant firms to produce ten times the number of models, and if those models are increasingly drawn from the same pool of publicly accessible data and similar foundation models, do they remain orthogonal? And if they don’t, does the leverage that underpins the multi-manager economic model become harder to sustain?
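
The arithmetic behind the concern can be made explicit. For N strategies of equal volatility and average pairwise correlation rho, an equal-weight portfolio has volatility sigma * sqrt((1 + (N - 1) * rho) / N), so the same leverage produces sharply more risk as correlation rises. A toy calculation, with illustrative numbers only:

```python
import numpy as np

def portfolio_vol(sigma, n, rho):
    """Volatility of an equal-weight portfolio of n strategies,
    each with volatility sigma and average pairwise correlation rho."""
    return sigma * np.sqrt((1 + (n - 1) * rho) / n)

# Toy numbers: 5% strategy vol, 10 pods, 6x leverage
sigma, n, leverage = 0.05, 10, 6
for rho in (0.0, 0.2, 0.5):
    print(f"rho={rho:.1f}: levered portfolio vol = "
          f"{leverage * portfolio_vol(sigma, n, rho):.1%}")
# rho=0.0 -> ~9.5%; rho=0.2 -> ~15.9%; rho=0.5 -> ~22.2%
```

If correlation across pods drifts from zero toward 0.5, levered portfolio risk more than doubles at the same notional, which is the mechanism behind the panellist’s question.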

No one on the panel had an answer. But it sits underneath the rest of the discussion as a structural question the industry will need to confront. The democratisation of alternative data and LLMs may turn out to have lowered the barrier to producing models while raising the barrier to producing genuinely differentiated ones. Which is, in the end, the only kind that matters.
