On February 12, 2025, the European Insurance and Occupational Pensions Authority ("EIOPA") published a consultation on its draft opinion on artificial intelligence ("AI") governance and risk management (the "Opinion").
The Opinion is addressed to supervisory authorities and covers the activities of both insurance undertakings and intermediaries (hereafter collectively referred to as "Undertakings"), insofar as they may use AI systems in the insurance value chain.
The Opinion aims to clarify the main principles and requirements in insurance sectoral legislation for insurance AI systems that are neither prohibited nor deemed high-risk under Regulation (EU) 2024/1689 (the "AI Act"). The Opinion provides guidance on how to apply insurance sectoral legislation to AI systems that were not common or available when the AI Act was passed. The approach sets high-level supervisory expectations for the governance and risk management principles that Undertakings should follow in order to use AI systems responsibly, taking into account the risks and proportionality of each case.
A summary of the key points from the Opinion is as follows:
- The Opinion posits that Undertakings should assess the risk of AI in its various use cases (noting there are varying levels of risk among those AI use cases that are not prohibited or considered high-risk under the AI Act). As part of this assessment, Undertakings should consider their risk and develop governance and risk management measures, taking into account the following:
  - processing data on a large scale;
  - the sensitivity of the data;
  - the extent to which the AI system can act autonomously;
  - the potential impact an AI system may have on the right to non-discrimination;
  - the extent to which an AI system is used in a line of business that is important for the financial inclusion of consumers or that is mandatory by law; and
  - the extent to which an AI system is used in critical activities that may affect the business continuity of an Undertaking.
- The Opinion further explains that, following the assessment, Undertakings should develop proportionate measures to ensure the responsible use of AI. This means that governance and risk management measures may be tailored to the specific AI use case to achieve the desired outcome.
- In line with Art. 41 of Directive 2009/138/EC, Art. 25 of the Insurance Distribution Directive ("IDD") and Arts. 4, 5 and 6 of the Digital Operational Resilience Act, Undertakings should develop proportionate governance and risk management systems, with particular focus on the following:
  - fairness and ethics;
  - data governance;
  - documentation and record keeping;
  - transparency and explainability;
  - human oversight; and
  - accuracy, robustness, and cybersecurity.
- The Opinion recommends that Undertakings using AI systems define and document in a policy (e.g., an IT strategy, data strategy or a specific AI strategy) the approach to the use of AI within the organisation. This approach should be regularly reviewed.
- Undertakings are also recommended to implement accountability frameworks, regardless of whether the AI system was developed in-house or by third parties.
- EIOPA encourages a customer-centric approach to AI governance to ensure that customers are treated fairly and in accordance with their best interest, in line with Art. 17 of the IDD and EIOPA's 2023 Supervisory Statement on Differential Pricing Practices. This includes developing a culture that incorporates ethics and fairness guidance and trainings.
- The Opinion includes a focus on data used to train AI, explaining that it must be complete, accurate and free of bias, and that the outputs of AI systems should be meaningfully explainable in order to identify and mitigate potential bias. The results should be regularly monitored and audited. It also states that risk management systems should include a data governance policy, as well as policies addressing the sufficiency and quality of relevant data for underwriting and reserving processes.
- Aligning with a number of other governance frameworks, the Opinion emphasizes that adequate redress mechanisms should be in place to allow customers to seek redress when negatively affected by an AI system.
- It further emphasizes that appropriate records should be kept of the training/testing data used and the modelling methodologies to ensure reproducibility and traceability.
- EIOPA further explains that measures should be adopted to ensure the outcomes of AI systems can be meaningfully explained, with such explanations adapted to the relevant AI use case and the recipient stakeholders.
- The Opinion also highlights internal controls for an effective compliance and risk management program, including:
  - administrative, management or supervisory body members responsible for the use of AI systems in the organisation, with such members having sufficient knowledge of its use and the potential risks to the organisation;
  - compliance and audit functions to ensure compliance with applicable laws and regulations;
  - a data protection officer to ensure that all data processed by AI systems complies with applicable data protection rules, including the potential creation of an AI officer role; and
  - sufficient training for staff to ensure effective human oversight of the AI systems.
- The AI system should perform consistently throughout its lifecycle with respect to its levels of accuracy, robustness, and cybersecurity (proportionate to its use case). The Opinion recommends that metrics be used to measure the performance of the AI system, and that such systems be resilient to unauthorised third parties attempting to alter their use, outputs, or performance.
The Opinion does not propose any additional legislation or amendments to existing law. Rather, the Opinion makes clear that EIOPA merely seeks to provide insurance sector-specific guidance on the operation of AI systems under existing EU statutes, applying a principle-based approach. It should be noted that EIOPA is engaging with the European Commission's AI Office, and additional commentary may be published in the future.
Responses to the Opinion must be submitted by May 12, 2025. EIOPA will then consider the feedback received and revise the Opinion accordingly.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.