The New Technical Debt in Banking: Generative AI
Counting the human capital cost
Is that a shocking headline? Clickbait?
Last month I had the pleasure of attending the AWS Financial Services Analyst Summit, followed by the Financial Services Symposium. As always, I appreciate the invitation from AWS to keep us at Celent informed and engaged on relevant product, partner, and client news. Although there were updates on AWS AI products, the event was framed primarily around client success stories, and it was a pleasure to meet several bank executives who shared their journeys to the AWS cloud and, by extension, into AI.
A major theme was how to scale AI investments and use cases, across all types of AI. News headlines can barely keep pace with the rapid rate of GenAI developments. Products that launched to fanfare in 2023 are now either obsolete or have been subsumed into new solutions. Amazon CodeWhisperer, a copilot for developers, was a standalone product in 2023 but is now integrated into Amazon Q Developer. Model makers Anthropic and OpenAI regularly launch new versions of Claude and ChatGPT.
Several of the banks and insurance companies at the AWS event spoke about their efforts to scale AI. Typically, this involved prioritizing a series of experimental use cases to trial the technology and understand the possibilities and risks, then being very selective about rolling out solutions to employees. As confidence grew, these solutions were often embedded into a workflow. For example, GenAI models might be integrated into a relationship manager or contact center workflow. Pretty exciting stuff!
However, two key challenges stood out to me, especially when developing GenAI solutions at scale:
The cost of human capital. Yes, the technology designed to deliver so many advances in human productivity is actually quite human-intensive (and expensive) to build. We have all heard about the cost of compute resources, but prompt engineering is a primary component and a rare skill, and there is also high demand for subject matter experts who understand the business operations and the data. These resources are needed extensively in scoping and defining requirements for prompt engineers. Once models are trained, the same SMEs are required to test for accuracy and validate that hallucinations fall within agreed tolerances. When these models are integrated into a business workflow, the variables and complexity of end-to-end testing grow significantly. Humans are required to be in the loop because testing GenAI output is not a binary pass/fail exercise.
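To make that concrete, here is a minimal, purely illustrative Python sketch (not any bank's actual pipeline, and the thresholds are hypothetical): an automated evaluator scores each GenAI answer on a continuum, and anything between the acceptance and rejection thresholds is routed to an SME review queue rather than simply passed or failed.

```python
# Illustrative sketch only: non-binary triage of GenAI output with a human
# review queue for borderline cases. Thresholds and scores are hypothetical.
from dataclasses import dataclass


@dataclass
class EvalResult:
    score: float        # 0.0-1.0 groundedness/accuracy score from an automated evaluator
    disposition: str    # "accept", "human_review", or "reject"


# Hypothetical tolerances; in practice SMEs set these per use case.
ACCEPT_THRESHOLD = 0.90
REVIEW_THRESHOLD = 0.70


def triage(score: float) -> EvalResult:
    """Route output on a continuum: borderline answers go to an SME, not to pass/fail."""
    if score >= ACCEPT_THRESHOLD:
        return EvalResult(score, "accept")
    if score >= REVIEW_THRESHOLD:
        return EvalResult(score, "human_review")
    return EvalResult(score, "reject")


if __name__ == "__main__":
    # Scores would come from an evaluator (e.g., checks against source documents);
    # these numbers are illustrative only.
    for s in (0.95, 0.82, 0.40):
        print(triage(s))
```

The point of the sketch is the middle band: the wider the tolerance for uncertainty, the more SME review time the workflow consumes.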
The immediacy of technical debt. Experimental use cases should be kept light: enough to validate a concept, but without "industrialization." Once completed, they are typically discarded, rescoped, and reengineered for production with the latest versions of the technology. However, once deployed to a production environment, change becomes more challenging. Financial institutions at the AWS event found that extensive reengineering was required as models changed. And models change frequently, at least for now. There is little cross-pollination of prompts across different models, or even between different versions of the same model, such as Anthropic's Claude. Again, once models are integrated into a business workflow, the cost of change grows. All of this increases demand for the expertise noted above.
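One way to picture why that reengineering recurs is a prompt registry that pins each validated prompt to a specific model version. This is a hypothetical sketch, not an AWS or Anthropic facility: when the underlying model is upgraded, the lookup fails until SMEs have re-tested and re-registered the prompt, which is exactly where the human capital cost resurfaces.

```python
# Hypothetical prompt registry: prompts are pinned to a model version, so a model
# upgrade forces re-validation rather than silent reuse. Identifiers are invented.
from typing import NamedTuple


class PromptSpec(NamedTuple):
    model_id: str       # e.g., "claude-v1" (illustrative identifier)
    template: str
    validated: bool     # set True only after SMEs re-test output quality


REGISTRY: dict[tuple[str, str], PromptSpec] = {}


def register(use_case: str, spec: PromptSpec) -> None:
    """Store a prompt for a (use case, model version) pair."""
    REGISTRY[(use_case, spec.model_id)] = spec


def get_prompt(use_case: str, model_id: str) -> PromptSpec:
    """Raise if no validated prompt exists for this use case on this model version."""
    spec = REGISTRY.get((use_case, model_id))
    if spec is None or not spec.validated:
        raise LookupError(
            f"No validated prompt for {use_case!r} on {model_id!r}; "
            "re-engineering and SME re-testing required."
        )
    return spec


if __name__ == "__main__":
    register("kyc_summary", PromptSpec("claude-v1", "Summarize the KYC file...", validated=True))
    print(get_prompt("kyc_summary", "claude-v1").template)
    # get_prompt("kyc_summary", "claude-v2")  # would raise until the prompt is re-validated
```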
Does that mean a bank should hold back on GenAI investments? No – it is vital that banks are experimenting with and deploying GenAI use cases. However, banks must be selective in prioritizing use cases for production, recognize the implications for human capital costs, and commit to managing the new short-term technical debt.
Interested in exploring this in more detail? Meet us in New York on September 18th, 2024 at Celent's GenAI Symposium for financial services executives.