Responsible Innovation: The Implications of AI Regulation for Tech and Insurance Companies
Celent has been extensively covering the topic of Artificial Intelligence (AI) and data usage, as it has emerged as a game-changing technology with the potential to transform industries, including technology and insurance. Our research and insights cover industry trends, benchmarks, and case studies from insurers around the world participating in Celent's Model Insurer Award Program. As AI continues to advance, though, concerns about ethics, privacy, and fairness have prompted the need for regulation. For example, on October 30, 2023, President Biden issued an Executive Order aimed at ensuring that the United States leads in developing and managing the risks inherent to artificial intelligence (AI). Other regulations, frameworks, and guidance are being provided by governments, industry regulators, private consortiums, industry associations, consulting companies, and even tech companies. All of these have practical implications for tech and insurance companies. Let's take a quick (non-exhaustive) look.
AI specific regulation
As of now, there is no comprehensive global regulation specifically tailored for artificial intelligence (AI). However, various countries and regions have started to develop and implement regulations and guidelines to address specific aspects of AI. Here are some examples:
- European Union: The EU has taken significant steps towards AI regulation. The European Commission released the "Ethics Guidelines for Trustworthy AI" in 2019, which provides a framework for ethical AI development. The EU is also working on the proposed "Regulation on a European approach for Artificial Intelligence," commonly known as the AI Act, which aims to establish legal requirements for AI systems' transparency, accountability, and safety.
- United States: While there is no comprehensive federal AI regulation in the U.S., several states have introduced AI-related legislation. For example, California passed the California Consumer Privacy Act (CCPA), which includes provisions related to AI and data privacy. Additionally, the National Institute of Standards and Technology (NIST) has been developing AI-related standards and guidelines, and at the federal level the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was issued in October 2023.
- Canada: The Canadian government has released the "Directive on Automated Decision-Making," which provides guidelines for the responsible use of AI in the federal government. The government has also established the Canadian Institute for Advanced Research (CIFAR) AI and Society program to explore the societal implications of AI.
- Singapore: The Singapore government has developed the "Model AI Governance Framework," which provides guidance for organizations to deploy AI responsibly. It covers areas such as fairness, transparency, accountability, and human-centricity.
It's important to note that these examples represent a fraction of the regulatory efforts around AI globally. The field of AI regulation is still evolving, and more countries and regions are likely to develop their own frameworks and guidelines in the coming years.
AI and insurance regulation
There are no specific regulations solely focused on AI in the insurance industry, though some guidance is emerging. For example, the Monetary Authority of Singapore (MAS) is leading a consortium of companies in the Veritas Initiative. Veritas aims to enable financial institutions to evaluate their Artificial Intelligence and Data Analytics (AIDA)-driven solutions against the principles of fairness, ethics, accountability, and transparency (FEAT) that MAS co-created with the financial industry in late 2018 to strengthen internal governance around the application of AI and the management and use of data. In June 2023, MAS announced the release of an open-source toolkit to enable the responsible use of AI in the financial industry.
The Geneva Association issued a report on the regulation of AI in insurance in September 2023, stating that even in the absence of AI-specific regulation, AI in insurance does not operate in a vacuum: its use is subject to existing regulations that govern various aspects of the industry, such as data protection, consumer protection, and fairness in underwriting and pricing. Here are some key areas where existing regulations may apply to AI in insurance (more detail on this in the Geneva Association's report):
- Data Protection: Insurance companies must comply with data protection regulations, such as the European Union's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) in the United States. These regulations govern the collection, use, and storage of personal data, including data used in AI algorithms.
- Fairness and Non-Discrimination: Insurance companies must adhere to regulations that prohibit unfair discrimination in underwriting and pricing. For example, in the United States, the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) regulate fair lending practices and prohibit discrimination based on protected characteristics.
- Consumer Protection: Insurance companies are subject to regulations that protect consumers, such as disclosure requirements, claims handling procedures, and fair treatment of policyholders. These regulations aim to ensure transparency, fairness, and accountability in insurance practices.
- Risk Management and Solvency: Insurance regulators often require companies to have robust risk management and solvency frameworks in place. While not specific to AI, these regulations ensure that insurers have appropriate systems and controls to manage risks associated with their operations, including the use of AI technologies.
AI regulation is crucial to ensure the responsible development and deployment of AI technologies. For tech companies, it means addressing algorithmic bias, transparency, and accountability to build trust with users. Insurance companies, on the other hand, must navigate data protection, fairness, and consumer protection to ensure ethical and equitable practices. By embracing regulation, both sectors can foster public trust and unlock the full potential of AI.
It's worth noting that as AI continues to advance and its use in insurance becomes more prevalent, regulators may develop specific guidelines or regulations tailored to AI applications in the industry. Insurance industry associations and regulatory bodies are also actively exploring the implications of AI and discussing potential guidelines or best practices for its responsible use.
The Biden Administration's Executive Order on AI
The Executive Order covers a wide range of areas, including:
- Developing standards for AI safety and security
- Protecting against the risks of using AI to engineer dangerous biological materials
- Protecting Americans from AI-enabled fraud and deception
- Strengthening privacy-preserving research and technologies
- Ensuring fairness throughout the criminal justice system
- Standing up for consumers, patients, and students
- Promoting innovation and competition
- Advancing American leadership abroad
- Ensuring responsible and effective government use of AI
Practical Implications
The Executive Order emphasizes the need for ethical considerations, international collaboration, governance and oversight, data protection, fairness, and consumer protection in the development and deployment of AI technologies. As such, it has significant implications for tech and insurance companies. Here are a few that come to mind (please jot yours in the comments section of this blog post).
Tech Companies:
- Ethical Considerations: The Executive Order emphasizes the need to address algorithmic bias, transparency, and accountability. Tech companies must prioritize fairness and responsible data usage to mitigate biases and ensure AI systems align with societal values. This will require investing in research and development to develop more ethical AI algorithms and frameworks. Additionally, companies should establish clear guidelines and policies for AI development and use, ensuring that ethical considerations are integrated into the entire AI lifecycle.
- International Collaboration: The order highlights the importance of collaborating with international partners to develop and implement AI standards. Tech companies need to align their practices with global standards, ensuring interoperability and trustworthiness of AI systems across borders. This collaboration will foster innovation and enable the exchange of best practices. Companies can actively participate in international forums, contribute to the development of common standards, and share insights and experiences to shape responsible AI practices globally.
- Governance and Oversight: While the Executive Order does not explicitly address governance and oversight mechanisms, it opens the door for potential future regulations. Tech companies should proactively establish internal governance structures to ensure responsible development, deployment, and monitoring of AI technologies. This includes implementing mechanisms for public input, transparency, and accountability. Companies can establish AI ethics committees or advisory boards to provide guidance and oversight, conduct regular audits of AI systems, and ensure compliance with emerging regulatory frameworks.
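To make the governance point above more concrete: an internal audit trail for AI systems can start as simply as a structured record per model version, capturing intended use, data provenance, and review sign-off. The sketch below is purely illustrative; the field names, the `claims-triage` model, and the `ai-ethics-committee` reviewer are hypothetical, not drawn from any regulation or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    """One entry in an internal AI audit trail (illustrative schema only)."""
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    fairness_checked: bool
    reviewed_by: str
    # Timestamp is recorded automatically at review time (UTC).
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry for a claims-triage model.
record = ModelAuditRecord(
    model_name="claims-triage",
    version="1.4.0",
    intended_use="Prioritize incoming claims for human review",
    training_data_summary="2019-2023 closed claims, PII removed",
    fairness_checked=True,
    reviewed_by="ai-ethics-committee",
)
```

Even this minimal structure gives an ethics committee or external auditor something concrete to inspect, and it can grow into a full model registry as regulatory expectations firm up.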
Insurance Companies:
- Data Protection: The Executive Order recognizes the risks to privacy posed by AI and emphasizes the use of privacy-preserving techniques. Insurance companies must ensure compliance with data protection regulations, such as GDPR or CCPA. Robust data protection measures, explicit consent, and secure data handling practices are essential to safeguard customer information. Companies should implement comprehensive data governance frameworks, conduct privacy impact assessments, and prioritize data minimization and anonymization techniques.
- Fairness and Non-Discrimination: The order addresses algorithmic discrimination, requiring insurance companies to ensure fairness in underwriting and pricing. Companies must adhere to regulations that prohibit unfair discrimination and ensure AI systems do not perpetuate biases. Regular monitoring and auditing of AI systems are necessary to identify and rectify any biases that may arise. Companies can invest in explainable AI models, conduct fairness assessments, and establish clear guidelines for underwriting and pricing practices to ensure fairness and non-discrimination.
- Consumer Protection: The Executive Order focuses on protecting consumers from AI-enabled fraud and deception. Insurance companies need to establish standards and best practices for detecting AI-generated content and authenticating official content. Transparent explanations of AI-driven decisions and accessible human support are crucial to ensure fair treatment and build consumer trust. Companies can develop robust verification processes, provide clear and understandable explanations of AI-driven decisions, and establish channels for policyholders to seek human assistance when needed.
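Two of the data-protection techniques mentioned above, data minimization and pseudonymization, can be sketched in a few lines. This is a minimal illustration using the Python standard library, not a compliance recipe; the field names and the secret key are placeholders, and real deployments would use managed key storage and a documented lawful basis for processing.

```python
import hashlib
import hmac

# Placeholder only: in practice the key lives in a managed secret store.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash resists dictionary attacks as long
    as the key stays secret. Note this is pseudonymization, not full
    anonymization: whoever holds the key can still link records.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimize(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields needed for the stated purpose (data minimization)."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Hypothetical policyholder record.
policyholder = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "postcode": "10115",
    "claim_amount": 1200.50,
}
# Strip direct identifiers, keep only what the analysis needs,
# and attach a pseudonymous subject ID for linkage.
safe = minimize(policyholder, {"postcode", "claim_amount"})
safe["subject_id"] = pseudonymize(policyholder["email"])
```

The design choice worth noting is the split between minimization (don't collect or retain what you don't need) and pseudonymization (make what you do retain harder to re-identify); GDPR-style frameworks treat these as complementary, not interchangeable.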
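The fairness monitoring described above also has a simple quantitative starting point: comparing outcome rates across groups. The sketch below computes a demographic parity ratio over hypothetical underwriting decisions; the groups, data, and the 0.8 review threshold mentioned in the comment are illustrative assumptions, not regulatory requirements, and a single metric is never sufficient on its own.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_ratio(decisions):
    """Minimum approval rate divided by maximum approval rate across groups.

    1.0 means identical rates; values well below 1.0 (e.g. under an
    illustrative 0.8 threshold) suggest the model warrants closer review.
    """
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions: group A approved 2 of 3, group B 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio = demographic_parity_ratio(decisions)
```

Run periodically against live decisions, a check like this turns the "regular monitoring and auditing" recommendation into a concrete dashboard number that can trigger a human review.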
These are interesting and fast-evolving times in the realm of AI, and this blog post may well be outdated a week from now. In the meantime, AI regulation presents both challenges and opportunities for tech and insurance companies. Embracing ethical considerations, collaborating internationally, establishing governance mechanisms, prioritizing data protection, ensuring fairness, and enhancing consumer protection are key steps toward responsible AI adoption. Navigating the evolving regulatory landscape is not only a legal requirement but also a strategic imperative: companies that proactively integrate responsible AI practices into their operations can build trust, drive innovation, and position themselves as leaders in the AI era while upholding ethical standards.
If you would like to discuss AI in financial services further, please reach out. Our experts and data-driven insights can help you capture the full value of data and technology.