The leading knowledge platform for the financial technology industry

A-Team Insight Blogs

Knowledge Graphs – the Future of Data Management?


Knowledge graphs are becoming an increasingly popular way of thinking about and organising data within financial services firms. The industry is turning to knowledge graphs as a methodology for making data more accessible, and for use in artificial intelligence (AI) solutions, for example.
Edgar Zalite, global head of metadata management within the chief data and innovation office at Deutsche Bank, presented a ‘Case study: The Practicalities of building an enterprise knowledge graph’ at the recent A-Team Group Data Management Summit in New York.

Answering the question

Knowledge graphs are perhaps best known as the basis on which Google presents certain kinds of search engine results – particularly the infobox that appears in the right-hand panel of some searches. This infobox grew out of Google's realisation that most people are not searching for a list of related links when they use the search engine – rather, they are looking for the answer to a question, such as 'Where is Latvia?' or 'Who is Thomas Jefferson?'. In other words, Google wants to give its users 'things, not strings'.

As it was seeking to create the infobox, Google recognised the importance of the relationships between pieces of data, and began formalising these relationships within a discipline now often referred to as 'data ontology'. The word 'ontology' originally means the study of the nature of being, so a data ontology is a formal description of what data entities exist and how they relate to one another.

This new way of understanding the relatedness of data is being explored as a way to make it easier to use data in artificial intelligence solutions, for example. It is an alternative to
the static data models that have dominated so much of the history of technology.

The infobox attempts to present users with a complete semantic understanding of the answer to the original question, and this is what knowledge graphs attempt to do with data. In addition, a knowledge graph approach enables inferences – new facts can be derived from existing ones – and additional sources of information can be added incrementally because the model scales linearly and is extensible. This differs from the data warehouse, where it can be much harder to add new sources once the initial build is complete.
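The idea can be illustrated with a minimal sketch (not any particular firm's implementation): a knowledge graph stored as subject-predicate-object triples, where adding a new source is simply adding more triples, and a simple rule infers facts that were never stated explicitly. All entity and predicate names here are invented for illustration.

```python
# A toy knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("Riga", "locatedIn", "Latvia"),
    ("Latvia", "locatedIn", "Europe"),
}

# Adding a new source is just adding more triples -- no schema migration.
triples.add(("Latvia", "capital", "Riga"))

def infer_located_in(graph):
    """Infer transitive 'locatedIn' facts until no new ones appear."""
    inferred = set(graph)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (c, p2, d) in list(inferred):
                if p1 == p2 == "locatedIn" and b == c:
                    new_fact = (a, "locatedIn", d)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

facts = infer_located_in(triples)
assert ("Riga", "locatedIn", "Europe") in facts  # inferred, never stated
```

In production this role is typically played by an RDF triple store and a reasoner, but the principle is the same: the graph grows by accretion rather than by redesign.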

A knowledge graph approach to data is also contextual. It is able to bring in data that is relevant to the user. So, for example, on an equities trading desk, a query about position information could also bring back information about risk metrics, the employees on the desk, and key performance indicators. A knowledge graph approach delivers users a broader context for the information they have asked for, such as where it came from, how valid it is, and what it should be used for.
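The trading-desk example above can be sketched as a short graph traversal: starting from one node, walk the graph a few hops to pull in related context such as risk metrics, staff and provenance. The entity and predicate names below are assumptions made up for this sketch.

```python
# Toy graph for an equities trading desk (illustrative names only).
triples = [
    ("EquitiesDesk", "hasPosition", "AAPL:10000"),
    ("EquitiesDesk", "hasRiskMetric", "VaR:1.2M"),
    ("EquitiesDesk", "staffedBy", "TraderA"),
    ("AAPL:10000", "sourcedFrom", "PositionFeed"),
]

def context_of(node, graph, hops=2):
    """Collect every triple reachable from `node` within `hops` hops."""
    frontier, seen = {node}, set()
    for _ in range(hops):
        next_frontier = set()
        for (s, p, o) in graph:
            if s in frontier and (s, p, o) not in seen:
                seen.add((s, p, o))
                next_frontier.add(o)
        frontier = next_frontier
    return seen

ctx = context_of("EquitiesDesk", triples)
# A position query also surfaces provenance two hops away.
assert ("AAPL:10000", "sourcedFrom", "PositionFeed") in ctx
```

The point is that one query returns the fact plus its surrounding context, rather than a single row from a single table.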

Keeping the focus
To prevent a knowledge graph from turning into a data swamp, one useful approach is a standardised namespace. This involves creating a standard template for the data – other people can add to this, but the standard template remains at the core. An example of this is schema.org, which provides standard schemas for the web. For example, there is a schema for recipes: if a user wanted to create a recipe website, they could use this standard schema and their site would be discoverable through semantic searches by Google and other organisations.
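The schema.org recipe schema is usually embedded in a page as JSON-LD. Below is a minimal sketch of such a document; `@context` and `@type` are standard JSON-LD keys and `Recipe`, `name`, `recipeIngredient` and `recipeInstructions` are real schema.org terms, while the field values are made up.

```python
import json

# A minimal JSON-LD document using the public schema.org Recipe schema --
# the kind of standardised namespace described above.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Rye Bread",
    "recipeIngredient": ["rye flour", "water", "salt", "yeast"],
    "recipeInstructions": "Mix, prove, and bake.",
}

doc = json.dumps(recipe, indent=2)
print(doc)
```

Because the template is shared, any consumer that understands schema.org – a search engine or an internal catalogue alike – can interpret the data without custom integration work.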

As with all data management and governance projects, it is best to start small. Find a use case or a group of stakeholders who are willing to work with the data team on a knowledge graph approach. Get a win out there, and let interest build in this way of approaching engagement with data.

