IBM’s Cognitive Bank: Big Data, bigger problems
21 July 2015
Joan McGowan
Last Wednesday I attended IBM's analyst presentation on Transforming Banking and Financial Markets with Data. The crux of the presentation was the benefits of big data and cognitive analytics for financial markets. The returns from better understanding the desires of an individual bank customer are well understood, and IBM did a good job of illustrating the uplift. But what was not discussed were the daunting challenges and complexities a bank will face in implementing and managing a big data project. The implementation and ongoing management of data will make or break the success of cognitive computing.

What I would like to see is an open discussion of the successes and failures of big data implementation programs by the banks, IBM, and the other vendors working in this space. How smooth was the implementation process (time, budget, resourcing, and so on)? Were your expectations set correctly? Did you get the required support from management? What were the lessons learned? What value do you see from your big data program?

It's not easy

Structured data tends to sit in multiple databases housed in siloed legacy systems; it is customized, lacks consistency, has incomplete fields, is often latent in nature, and is prone to human error. All of this compounds the complexity of managing the data. Add to structured data the volume, variety, and velocity (the "3 Vs") of unstructured data, and the challenge of implementing and managing information becomes even greater. And the larger and more complex the bank, the more likely its data architecture and governance processes will hinder data-based implementation projects. Automating the management of data is time consuming and laborious, and scope creep is significant, adding months to implementation projects as well as extra expense and frustration. Resourcing such projects can be taxing, as the pool of big data expertise is limited and expensive.
To perform cognitive analytics, massive parallel processing power is required, and the most cost-effective operating environment is the cloud. If you get the data right, cognitive analytics can be very powerful.

Cognitive analytics

Cognitive analytics (also referred to as cognitive computing) is a super-charged power tool that allows data scientists to crunch vast amounts of structured and unstructured data and to codify the instincts and learnings found in that data in order to develop hypotheses and recommendations. Recommendations are ranked based on the confidence the computer has in the accuracy of the answer. How you rate confidence was not made clear by IBM, and I would argue that this can only come after the fact, when you can use KPIs to validate the scoring and criteria. The modeling techniques include artificial intelligence, machine learning, and natural language processing, and, unlike with us mere mortals, the more data you feed the computer, the higher the quality of its insight.

If you do get it right, the rewards are significant

We continue to leave behind mind-boggling amounts of digital information about our lifestyles, personalities, and desires. A sample of the sites where I know I have left a hefty footprint includes Facebook, Reddit, LinkedIn, Twitter, YouTube, iTunes, blogs, career sites, industry associations, search history patterns, buying patterns, geolocations, and content libraries. IBM Watson offers banks a cost-effective way, through the cloud, of scouring such data to build up clues that provide a more in-depth view of what their customers desire. Current analytic segmentation is requirements-based and is modeled on past behavior to determine and influence future behavior. The segmentation buckets are broad, and everyone within them is treated the same.
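As an aside, the after-the-fact validation argued for above can be illustrated in a few lines. This is a minimal, hypothetical sketch, not IBM's actual scoring method: all function names, recommendations, and numbers are made up for illustration. It simply ranks recommendations by a stated confidence score, then compares the average stated confidence against an observed KPI (here, offer acceptance rate) to check whether the scoring is calibrated.

```python
# Hypothetical sketch only: illustrative names and data, not IBM Watson's method.

def rank_recommendations(recs):
    """Sort (recommendation, confidence) pairs, highest confidence first."""
    return sorted(recs, key=lambda r: r[1], reverse=True)

def validate_confidence(scored, outcomes):
    """Compare stated confidence with an observed KPI after the fact.

    scored:   list of (recommendation, confidence) pairs
    outcomes: dict mapping recommendation -> 1 (accepted) or 0 (rejected)
    Returns (average stated confidence, observed acceptance rate).
    """
    avg_conf = sum(c for _, c in scored) / len(scored)
    accept_rate = sum(outcomes[r] for r, _ in scored) / len(scored)
    return avg_conf, accept_rate

# Illustrative data: three next-best-offer recommendations for one customer.
recs = [("mortgage refinance", 0.82), ("travel card", 0.41), ("savings plan", 0.67)]
ranked = rank_recommendations(recs)

# Observed outcomes collected later, once the offers have been made.
outcomes = {"mortgage refinance": 1, "travel card": 0, "savings plan": 1}
avg_conf, accept_rate = validate_confidence(ranked, outcomes)
```

A large gap between `avg_conf` and `accept_rate` would suggest the confidence scores are miscalibrated, which is exactly the kind of KPI-driven check that can only be run after deployment.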
Cognitive analytics allows a much more precise and immediate analysis of behavioral characteristics in different environments and, therefore, a more personalized and satisfying experience for the customer.

I'd welcome any feedback from those of you who have been involved in implementing, or are in the process of implementing, big data in banking. And, if you're interested, take a look at Celent's Dan Latimore's blog, Implementing Watson is Hard.

On a side note, IBM introduced the term Cognitive Bank, and it is not a phrase that works for me. It is disconcerting to describe a bank as having the mental processes of perception, memory, judgment, and reasoning. Looking forward to hearing from you.