About a-team Marketing Services
The knowledge platform for the financial technology industry

A-Team Insight Blogs

Knowledge Graphs – the Future of Data Management?

Knowledge graphs are becoming an increasingly popular way of thinking about and organising data within financial services firms. The industry is turning to knowledge graphs as a methodology for making data more accessible, and for use in artificial intelligence (AI) solutions, for example.
Edgar Zalite, global head of metadata management within the chief data and innovation office at Deutsche Bank, presented a ‘Case study: The Practicalities of building an enterprise knowledge graph’ at the recent A-Team Group Data Management Summit in New York.

Answering the question

Knowledge graphs are perhaps best known as the basis on which Google presents certain kinds of search engine results – particularly the infobox that appears on the right-hand side panel of some searches. This infobox grew out of Google’s realisation that most people are not searching for a bunch of related links when they use the search engine – rather, they are
looking for the answer to a question, such as ‘Where is Latvia?’ or ‘Who is Thomas Jefferson?’. In other words, Google wants to give its users ‘things, not strings’.

As it was seeking to create the infobox, Google discovered that there are relationships between pieces of data, and began studying this within a discipline that is now called ‘data ontology’. The word ‘ontology’ means the study of the nature of being; data ontology, by extension, studies what entities exist within data and how they relate to one another.

This new way of understanding the relatedness of data is being explored as a way to make it easier to use data in artificial intelligence solutions, for example. It is an alternative to
the static data models that have dominated so much of the history of technology.

The infobox attempts to present users with a complete semantic understanding of the answer to the original question, and this is what knowledge graphs attempt to do with data. In addition, an approach based on knowledge graphs enables inferences – additional sources of information can be added because a knowledge graph scales linearly and extensibly as new sources contribute new relationships. This differs from the data warehouse, where it can be more difficult to add new sources over time, after the initial build is complete.
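The extensibility point above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (all entity and relationship names are invented): facts are stored as subject–predicate–object triples, a later source adds triples without any schema change, and a simple traversal infers indirect relationships.

```python
# Minimal sketch of a triple-based knowledge graph. All entity and
# predicate names are hypothetical illustrations, not real data.

triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Initial source: legal-entity data.
add("AcmeBank", "subsidiary_of", "AcmeHoldings")
add("AcmeHoldings", "incorporated_in", "UK")

# A later source extends the graph without touching existing data --
# no table redesign or schema migration, unlike a fixed warehouse model.
add("AcmeBroker", "subsidiary_of", "AcmeBank")

def infer_parents(entity):
    """Follow 'subsidiary_of' edges transitively -- a simple inference."""
    parents, frontier = [], [entity]
    while frontier:
        current = frontier.pop()
        for s, p, o in triples:
            if s == current and p == "subsidiary_of":
                parents.append(o)
                frontier.append(o)
    return parents

print(infer_parents("AcmeBroker"))  # ['AcmeBank', 'AcmeHoldings']
```

The inference that AcmeBroker ultimately sits under AcmeHoldings was never stated directly; it falls out of chaining the triples, which is the kind of derived knowledge the article describes.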

A knowledge graph approach to data is also contextual. It is able to bring in data that is relevant to the user. So, for example, on an equities trading desk, a query about position information could also bring back information about risk metrics, the employees on the desk, and key performance indicators. A knowledge graph approach delivers users a broader context for the information they have asked for, such as where it came from, how valid it is, and what it should be used for.
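The equities-desk example can be sketched as a neighbourhood query: starting from a position node, walk out a couple of hops and return everything connected – risk metrics, people, lineage, and intended use. All node and edge names below are hypothetical illustrations.

```python
# Hedged sketch of contextual retrieval on a graph: a query about a
# position also surfaces connected context. Names are invented examples.

edges = [
    ("PositionsEU", "has_risk_metric", "VaR_1day"),
    ("PositionsEU", "owned_by_desk", "EquitiesDesk"),
    ("EquitiesDesk", "staffed_by", "TraderA"),
    ("PositionsEU", "sourced_from", "SettlementSystem"),  # where it came from
    ("PositionsEU", "valid_for", "EOD_Reporting"),        # what it may be used for
]

def context(node, depth=2):
    """Return facts reachable from `node` within `depth` hops."""
    seen, results, frontier = {node}, [], [node]
    for _ in range(depth):
        nxt = []
        for n in frontier:
            for s, p, o in edges:
                if s == n and o not in seen:
                    results.append((s, p, o))
                    seen.add(o)
                    nxt.append(o)
        frontier = nxt
    return results

for fact in context("PositionsEU"):
    print(fact)
```

A query for positions alone would return one record in a relational view; here the same query also surfaces the desk's staff and the data's lineage, because they are reachable in the graph.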

Keeping the focus
To prevent a knowledge graph from turning into a data swamp, one approach to use is a standardised namespace. This involves creating a standard template for the data – other people can add to this, but the standard template remains at the core. An example of this is
schema.org, which provides standard internet schemas. For example, there is a schema for recipes, so that if a user wanted to create a recipe website, they could use this standard schema and their site would be discoverable through the semantic searches of Google and other organisations.
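As an illustration of what such a standardised namespace looks like in practice, the snippet below builds a schema.org Recipe as JSON-LD. The "@context", "@type", "name", "recipeIngredient" and "recipeInstructions" keys are genuine schema.org/JSON-LD vocabulary; the recipe itself is an invented example.

```python
import json

# A schema.org "Recipe" serialised as JSON-LD -- the standard template
# at the core of the namespace. Publishers may add their own fields,
# but search engines key off these shared, standardised properties.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Pancakes",
    "recipeIngredient": ["flour", "milk", "eggs"],
    "recipeInstructions": "Mix the ingredients and fry in a hot pan.",
}

print(json.dumps(recipe, indent=2))
```

Because every recipe site uses the same property names, a crawler can treat thousands of independent websites as one coherent graph of recipes – the ‘standard template at the core’ the article describes.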

As with all data management and governance projects, it is best to start small. Find a use case or a group of stakeholders who are willing to work with the data team on a knowledge graph approach. Get a win out there, and let interest build in this way of approaching engagement with data.
