
From Quiet Practice to Explicit Disclosure: How the Fed’s Alt-Data Use is Reshaping the Macro Stack


At the October 2025 FOMC press conference, during the federal government shutdown that had interrupted the flow of official statistics, Chair Jerome Powell did something the Federal Reserve has rarely done in public: he cited the alternative datasets from PriceStats, Adobe and ADP that the central bank was leaning on. For many observers of the alt-data industry, it was a moment when a quiet practice crossed into explicit disclosure.

Julia Meigh, an economist and independent consultant who spent eight years at alternative data firm Neudata and now runs Alternatus Intelligence, has been arguing that the shift is more than a shutdown workaround. In a March piece on the Fed’s data playbook, Meigh set out the case that central banks have been incorporating alternative datasets into monetary policy decisions since the pandemic, that the eroding quality of official statistics has accelerated the trend, and that a handful of commercial datasets are positioning themselves as benchmark monthly references.

The argument has direct consequences for institutional data teams. If the Fed and other central banks are increasingly looking at the same alt-data sources, the buy-side has reason to track them too, at least defensively, if not as a primary signal.

Meigh will be moderating the keynote Leaders Panel: Alternative Data in Production at the A-Team/Eagle Alpha Alternative Data Conference London on 19 May. Ahead of the event, she speaks with Market & Alt Data Insight about the practitioner consequences of the shift she describes.

MADI: Your piece treats Chair Powell’s October FOMC reference to PriceStats, Adobe and ADP as a watershed – alt data crossing from quiet practice into explicit disclosure. How significant a shift do you think it represents? Is this a one-off forced by the shutdown, or the formalisation of something the Fed has been quietly building toward?

JM: Probably the latter – a shift they’ve been making for a while. It seems to have begun during COVID-19, when sharp changes in the economy weren’t being picked up by official statistics, because they’re very lagged. It’s hard to shape monetary policy decisions on lagged economic data – these figures can sometimes be based on data that’s 3-9 months old. The way official statistics are constructed, you’re looking at the economic past rather than the present. Alternative data is much more timely and higher-frequency, and much better at giving a current picture of what’s happening in the economy. So it seems the Fed started using alternative data to inform interest-rate decisions around the pandemic, and they’ve continued to use it since.
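Meigh’s timeliness point is, in effect, a nowcasting argument: use a high-frequency commercial series to estimate the official print before it arrives. As a rough illustration only – synthetic numbers, not any vendor’s actual feed, with numpy as the sole dependency – here is a minimal sketch of regressing published official prints on an aggregated alt-data signal and projecting the not-yet-published month:

```python
# Illustrative nowcast of a lagged official statistic from a
# higher-frequency alternative-data signal. All numbers are synthetic;
# PriceStats, Adobe and ADP feeds are licensed and look nothing like this.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily alt-data price index: 8 months of 30-day blocks.
daily_alt = 100 * np.cumprod(1 + rng.normal(0.0001, 0.001, 240))
monthly_alt = daily_alt.reshape(8, 30).mean(axis=1)
alt_mom = np.diff(monthly_alt) / monthly_alt[:-1]        # 7 month-on-month changes

# Official prints lag: only the first 6 of those months are published.
official_mom = alt_mom[:6] + rng.normal(0, 0.0005, 6)    # synthetic "truth"

# Fit official ~ a + b * alt on the overlapping history, then
# nowcast the month for which no official figure exists yet.
X = np.column_stack([np.ones(6), alt_mom[:6]])
(a, b), *_ = np.linalg.lstsq(X, official_mom, rcond=None)
print(f"nowcast for the unpublished month: {a + b * alt_mom[-1]:.4%}")
```

Providers use far richer models in practice, but the structure – fit on the overlap of official and alternative history, extrapolate into the publication lag – is the core of the timeliness advantage Meigh describes.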

MADI: A central plank of your argument is that official statistics are degrading – falling response rates, statistical-agency capacity issues, methodology lag. How much of the Fed’s move toward alternative datasets is opportunistic adoption of better signal, and how much is defensive substitution for indicators that are eroding underneath them?

JM: Definitely a combination of the two. The catalyst was the eroding quality of official statistics – I saw a figure recently that response rates on some surveys are down to around 30%, from around 70% a few years ago. So the catalyst was defensive. But even if the official statistics had held up, the Fed would probably have increased its use of alternative data anyway, because of its advantages in timeliness and breadth.

And this isn’t just the Fed. The ECB has mentioned using QuantCube to track prices in Russia and supply-side shocks during the invasion of Ukraine. The Bank of England has referred to alt data in its meetings, and so has the ONS – which started its faster economic indicators series before the pandemic, as experimental statistics. So even before the quality issues became more prominent, official agencies were experimenting with alternative data.

MADI: You make a striking point about commercial datasets – Revelio’s RPLS, for instance – potentially establishing themselves as benchmark monthly references. What does that path actually look like? What does a private dataset have to demonstrate, and over what timeframe, before it’s treated as a benchmark rather than a complement?

JM: It’s a difficult question, because a lot of these datasets rely on official BLS statistics to construct their methodology. They’ll always be used in a complementary way to some extent. Take inflation: it’s broken down into spending categories – food at home, apparel, transport, energy – and each category makes up a portion of the headline indicator. A lot of alternative data providers, including PriceStats, use the weights in the official figures to construct their alternative index. Some of them aim to track the official figures as closely as possible ahead of the release. That’s where the benefit lies.
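The weighting mechanics Meigh describes are simple to make concrete. A minimal sketch, with invented categories, weights and price changes rather than the actual BLS relative-importance figures:

```python
# Illustrative headline index built from category price changes and
# official-style basket weights. Categories, weights and changes are
# invented, not actual BLS relative-importance figures.
category_mom = {            # month-on-month % change observed per category
    "food at home": 0.4,
    "apparel":     -0.1,
    "transport":    0.6,
    "energy":       1.2,
}
weights = {                 # each category's share of the basket (sums to 1.0)
    "food at home": 0.30,
    "apparel":      0.10,
    "transport":    0.35,
    "energy":       0.25,
}

headline = sum(weights[c] * category_mom[c] for c in weights)
print(f"weighted headline change: {headline:.2f}%")   # -> 0.62%
```

The provider’s edge sits in the category changes – observing each basket component’s prices at far higher frequency than a survey can – while the weights remain borrowed from the official methodology, which is exactly the dependence Meigh flags.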

So there’s a structural dependence that will always make them a complement. Whether it’s possible to be an entirely independent benchmark – something completely different from what’s being published – is another question. We may move further in that direction if problems with official figures deepen, because we’re now in a position where you’re effectively guessing what the survey response rate is going to be. It might be 30% one month, 35% the next, 25% the month after. There’s no way of really knowing.

For the moment, being a complement, published as an alternative-data benchmark alongside the official figures, is where we’re heading. In the longer term, independent alternative benchmarks are quite likely as well.

On what providers need to do to position themselves as benchmark candidates: broadly, a good panel size and wide capture of the economy. Some labour market datasets cover more of the services and consumer sector; others have more blue-collar jobs. A representative panel and a long history both matter. There’s also a back-filling issue. With official statistics, you get the first, second and third release of an economic figure, and markets have the biggest reaction to the first release. Some vendors don’t keep those historical vintages, and that’s something users need to look at.
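That back-filling point has a concrete engineering consequence: a backtest should only ever see the estimate that existed on its knowledge date. A minimal point-in-time sketch – the schema and figures here are hypothetical, not any vendor’s actual format:

```python
# Minimal point-in-time store for release vintages, so a backtest sees
# only the figure that was known on a given date. Field names and
# numbers are hypothetical, not any vendor's actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Vintage:
    reference_month: str   # the month the figure describes, e.g. "2025-01"
    published: date        # when this estimate was released
    value: float           # first, second or third estimate

vintages = [
    Vintage("2025-01", date(2025, 2, 7), 150_000),   # first release
    Vintage("2025-01", date(2025, 3, 7), 142_000),   # second release
    Vintage("2025-01", date(2025, 4, 4), 138_000),   # third release
]

def as_of(series: list[Vintage], ref: str, knowledge_date: date) -> float:
    """Latest estimate for `ref` published on or before `knowledge_date`."""
    known = [v for v in series
             if v.reference_month == ref and v.published <= knowledge_date]
    return max(known, key=lambda v: v.published).value

# Mid-February, only the first release exists – that is what markets traded on.
print(as_of(vintages, "2025-01", date(2025, 2, 15)))   # -> 150000
```

Vendors that overwrite history with the latest revision make the first-release market reaction unrecoverable, which is why Meigh advises users to check whether historical vintages are preserved.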

MADI: If we take your thesis at face value, what changes operationally for a buy-side macro team? Are they now obliged to track the same alt-data sources the Fed is citing, defensively, even if their own models don’t weight them heavily?

JM: There are two ways of looking at this. The prices of macro-linked instruments are driven by official statistical releases, so buy-side teams want to be as close as possible to what those agencies are going to announce for employment, inflation, GDP and so on. But because the Fed and other central banks are increasingly using alternative data to shape monetary policy decisions, there’s also a benefit in knowing which datasets they’re looking at. That gives you a gauge of the likely direction of interest rate changes.

I started suspecting central banks were already using alternative data back in September 2024, when Jerome Powell cut the interest rate by 50 basis points. He’s known to be very cautious, and that seemed like a steep move – everyone had been criticising him for being too slow to respond to the official figures. A cut of that size suggested he and others at the Fed were looking at other datasets, and that those datasets likely prompted a larger cut than you’d otherwise have expected.

So yes – the prices of many financial instruments are still driven by the official releases, but it’s useful to know which alternative datasets the central banks are using. If you monitor those, you can gauge where the thinking is heading on interest rates.

Julia Meigh will be moderating the keynote Leaders Panel: Alternative Data in Production at the A-Team/Eagle Alpha Alternative Data Conference London on 19 May 2026. The panel brings together senior buy-side, sell-side and vendor voices on where alternative data is moving from experiment into infrastructure. To register, visit the conference page.

