GPT-4 and Other AI News: Harbingers of the Future
Celent is endeavoring to keep you up to speed on generative AI news (including links with details) and the exceptional speed at which it is advancing. AI watersheds are coming fast and furious, but we humans remain critical to the progress: the watershed fine-tuning of large language models relies on reinforcement learning from human feedback (RLHF).
OpenAI is setting a furious pace, launching GPT-4 on March 13, barely four months after the launch of ChatGPT. GPT-4 was trained on Microsoft Azure AI supercomputers, and Azure’s AI-optimized infrastructure is being used to deliver GPT-4 to users around the world via ChatGPT Plus and an API.
Improved performance:
- 25,000-word limit, up from 3,000 for ChatGPT
- “40% more likely to produce factual responses than GPT-3.5 on our internal evaluations”
- GPT-4 scores in the 90th percentile on the Uniform Bar Exam, compared to ChatGPT’s 10th percentile.
- OpenAI is underscoring its focus on building guardrails to prevent adversarial use and unwanted content and to ensure privacy. For example, “GPT-4 is 82% less likely to respond to requests for disallowed content.”
Coding power: GPT-4 created a working website from a drawing.
Few details: Very little has been revealed about the technical underpinnings: “this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
Microsoft is pursuing three paths.
- Search: last week it opened up access to its “new Bing,” which uses GPT-4.
- Cloud: it announced that GPT-4 is available in preview in Azure OpenAI Service, with pricing based on tokens (a token is roughly 0.75 words).
- Microsoft 365 Copilot: stay tuned; it is in a test phase with 20 customers, with release likely in a few months.
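The token-based pricing above can be turned into a back-of-the-envelope cost estimate using the ~0.75-words-per-token rule of thumb. The sketch below is a minimal illustration; the per-1,000-token price used in the example is a hypothetical placeholder, not an actual Azure OpenAI rate.

```python
# Rough cost estimator for token-based LLM pricing.
# The 0.75 words-per-token ratio is the rule of thumb cited above;
# the price in the example is HYPOTHETICAL, not a real Azure rate.
WORDS_PER_TOKEN = 0.75

def estimate_tokens(word_count: int) -> int:
    """Approximate the token count for a given word count."""
    return round(word_count / WORDS_PER_TOKEN)

def estimate_cost(word_count: int, price_per_1k_tokens: float) -> float:
    """Approximate dollar cost of processing `word_count` words."""
    return estimate_tokens(word_count) / 1000 * price_per_1k_tokens

# Example: a 3,000-word document at a hypothetical $0.03 per 1,000 tokens
tokens = estimate_tokens(3000)        # ~4,000 tokens
cost = estimate_cost(3000, 0.03)      # ~$0.12
```

Actual rates vary by model and deployment, so any real estimate should plug in the current published prices.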
Google released Bard on March 21, 2023, and announced its Pathways Language Model (PaLM) API.
- Google is cautiously releasing Bard to a limited number of consumers in the US and UK (it had previously released it only to approved testers). Like OpenAI, it is not revealing technical details.
- Google announced its PaLM API, an entry point to Google’s large language models, released alongside a new tool called MakerSuite.
- Stay tuned for its release of generative AI features embedded in its Workspace apps (e.g., Google Docs, Gmail, Sheets, and Slides).
Meta let its “light” LLM genie out of the bottle and can’t get it back in, raising concerns about malicious use of the model.
- In late February, Meta unveiled a new “light” LLM, LLaMA-13B, which can run on a single GPU; that is, one doesn’t need a supercomputer to run the model.
- AI researchers interested in testing this model have to fill out a form to receive the full code and the "learned" training data.
- The model was leaked on 4chan and is now available for anyone to download and run on their own computer, without any guardrails.
Anthropic introduced Claude.
- One of the generative AI unicorns ($1.3B in funding), Anthropic is a Google Cloud partner with a strong focus on AI safety.
- Quora, a leading partner, is offering Claude to users through Poe, its AI chat app.
Tomorrow is happening today in the adoption of generative AI in financial services. Hence, FIs should keep an eye on this rapidly evolving landscape.
- A major fintech is exploring coding use cases such as code snippet generation, error resolution (e.g., an API call not working), and code debugging. It is also exploring how to use generative AI to expedite product recommendations (e.g., the best API) for developers.
- A leading cloud provider is testing call center use cases with at least one financial institution, comparing the performance of a control group with that of a group using generative AI.
- Stripe is using GPT-4 to streamline developers’ experience and mitigate fraud.
- Morgan Stanley is using GPT-4 to help wealth managers locate relevant information across its internal sites (including pdfs).
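The error-resolution use case described above typically amounts to packaging code and its error message into a prompt for a model such as GPT-4. The sketch below is hypothetical (the actual fintech tooling and prompt wording are not public) and only assembles the prompt a developer tool might send; no API call is made.

```python
def build_debug_prompt(code: str, error: str) -> str:
    """Assemble an error-resolution prompt for an LLM.

    Hypothetical sketch of the use case described above; real
    prompt wording and tooling are assumptions, not public details.
    """
    return (
        "You are a coding assistant. Given the code below and the error "
        "it raises, explain the likely cause and propose a fix.\n\n"
        f"Code:\n{code}\n\n"
        f"Error:\n{error}\n"
    )

# Example: an API call that fails to return JSON
prompt = build_debug_prompt(
    "resp = requests.get(url)\nresp.json()",
    "requests.exceptions.JSONDecodeError: Expecting value",
)
```

In practice the returned string would be sent to the model via the provider's chat API, with the response surfaced in the developer's workflow.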