By John Bantleman, RainStor
www.rainstor.com
We recently hosted a dinner in New York City with 20 technology executives focused on big data in banking and financial services. I found the event insightful, so I thought I would share some of the perspectives from those who attended.
The first (and the one closest to my heart) is the separation of the big data business problem from the available technology and solution architectures. I am not trying to detract from the applicability and relevance of the Hadoop technology stack, or from its attractive, low-cost scalability being central to solving big data challenges for today's enterprise. Yes, there is an elephant charging straight towards us, but I do feel that as an industry we spend far more time focused on the technology itself than on the business problems it solves.
Financial services and banking have seen themselves as living big data for many years, with very high SLAs and stringent availability and security requirements. That was true long before the big data market took off, with start-ups now forming almost every day on millions of dollars in venture funding and public technology providers pouring in serious investment of their own. If you look at your classic Wall Street financial services organization or bank, there is a big problem. When you have to manage on the order of 50-200 petabytes of data, growing at 40% to 100%, under increasing pressure and scrutiny from outside regulators while trying to reduce IT spend, the problem is not just big, it's staggering!
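To put those growth figures in perspective, here is a minimal sketch of the compounding involved, assuming (my assumption, not a figure from the dinner) that the 40% to 100% rates are annual:

```python
# Compound growth of a data archive. The annual-growth assumption
# and the three-year horizon are illustrative, not from the source.

def projected_size_pb(start_pb: float, annual_growth: float, years: int) -> float:
    """Archive size in petabytes after `years` of compound growth."""
    return start_pb * (1 + annual_growth) ** years

for start_pb, rate in [(50, 0.40), (200, 1.00)]:
    print(f"{start_pb} PB at {rate:.0%}/year -> "
          f"{projected_size_pb(start_pb, rate, 3):,.0f} PB after 3 years")

# Output:
# 50 PB at 40%/year -> 137 PB after 3 years
# 200 PB at 100%/year -> 1,600 PB after 3 years
```

Even at the low end, the archive nearly triples in three years; at the high end it grows eightfold, which goes a long way toward explaining why the cost conversation dominates.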
The days of consistent double-digit margins, even for large investment banks, are gone, and with little to no power to increase volume at the top end, you have to figure out a way to take a third of your embedded cost out. As one executive put it: "If you are not focused on reducing IT costs right now, you are just not paying attention."
So is big data viewed as opportunity or cost? An informal mini-poll of the room came back 80% cost vs. 20% opportunity. The regulatory environment on Wall Street is such that compliance now costs the industry $30 billion a year, and there is simply no avoiding it. By contrast, when we discussed this with retail banks, the balance shifted: the business opportunity to better understand customer behavior across a broad range of products and services (i.e., data sources) is a very compelling proposition for executives and line-of-business owners. The technology enabler then becomes "how can I keep all the history of customer transactions and clicks for longer without increasing infrastructure spend," or in other words "cheap and deep," which I covered in a previous blog a few weeks back.
Interestingly, the big data solution architectures varied too. Most banks have a Hadoop cluster in a sandbox where a few technical resources are playing with it; some are investigating broader usage, and nearly all are interested in it for its low-cost storage (even in the cloud, where you can easily provision virtual servers) and, in some cases, for Content Addressable Stores to comply with WORM requirements. By contrast, the classic financial services or investment firm is certainly paying attention to new technology innovation but does not appear to be taking Hadoop as seriously right now, at least in terms of rolling out production clusters that the business will come to rely upon.
There is no question that big data is a big deal in financial services. The problem already exists, as opposed to being "anticipated" or "coming." The requirements are largely set and, in terms of enterprise-grade capabilities, very high. For a market sector known to be an early technology adopter, it will be very interesting to see how the big data technology ecosystem plays out in the coming months and years.