Using Gen AI - A Snap Poll for the Celent Executive Panel
Available Only for Members of the NA Celent Insurance Executive Panel
Abstract
Snap polls reflect questions posed by members of the Celent Executive Panel, a group of C-level executives at insurance carriers. This question comes from an insurer looking for insights on OpenAI. The snap poll was fielded to select members of the Celent Executive Panel October 29 - October 31, 2024. Twenty insurers responded to the survey. This deck provides a summary of the responses.
If you are an insurer and are interested in participating and receiving these snap polls, please email kcarnahan@celent.com to verify eligibility.
The question posed was:
Background:
This insurer has an instance of OpenAI via Microsoft, intended to deliver the benefits of Gen AI in a safe way. They have found that the user interface that comes with it is very rudimentary, and even when working with Microsoft directly, it doesn't seem very extensible (for example, they can't easily add an "Upload Document" button/capability for analysis or learning). They are looking to see what others are doing.
Questions:
Has your organization sanctioned the use of a Generative AI chat application?
- If so, are you using a public solution or have you set up a private Gen AI solution?
Which generative AI chat application has your organization sanctioned?
- OpenAI
- Copilot (Microsoft)
- Gemini (Google)
- Claude (Anthropic)
- Llama (Meta)
- HuggingChat (Hugging Face)
- Other
What benefits are you finding with that solution versus others?
We find the user interface that comes with OpenAI to be very limited. Are you extending your solution's out-of-the-box functionality? (yes/no)
How are you extending the out-of-the-box functionality?
- Internal development
- Commercial application
- Add-ons
- Other (please describe)
For those using OpenAI or another competing LLM internally, what functionality is your company likely to use the most?
- Generate new content
- Analyze content
- Internal knowledge base
- Other (please describe)
Are there any other insights or advice you'd offer for deploying a robust internal Gen AI application?