Artificial intelligence and the insurance industry
An ethical approach to the design, development and use of AI, data and algorithms is fundamental to maximising sustainable value creation and minimising risks to individuals and society. It is a priority in Generali's strategy to reinforce its Lifetime Partner ambition
Since its inception, artificial intelligence (AI) has had a growing impact on our increasingly interconnected society and can be applied across many sectors and industries. In healthcare, for example, it is used to calibrate drug dosages, tailor treatments to individual patients, and support surgical procedures in the operating room.
Artificial intelligence is also used in the financial sector to detect and report suspicious activity, such as the unusual use of payment and investment tools, thus facilitating the work of anti-fraud departments. AI applications also support trading operations, making it easier, for instance, to estimate the supply, demand and price of securities.
In short, artificial intelligence promises numerous advantages - e.g., in terms of speed, efficiency and accuracy - thanks to the advanced use of data and algorithms. Nowadays, the public is increasingly aware of this; but it is also more and more aware of the potential risks associated with the use of these technologies in ever more areas of work and everyday life, in particular the excessive pervasiveness of data and algorithms and the distortions that can result.
An example is so-called machine bias: systematic errors that arise from incorrect assumptions in an algorithm's learning process. Such biases reflect problems in how data is collected or used, leading systems to draw improper conclusions from their training data, whether through human intervention or through insufficient critical evaluation of the data.
A case in point is a December 2020 study entitled “Investigating Bias in Image Classification using Model Explanations”, which shows how biased data collection can lead to equally biased classification results. The authors collected a set of images to distinguish 'doctors' from 'nurses', testing different percentages of men and women in the two categories to measure the impact on the algorithm's performance. A biased model predicts 'nurse' when the subject is a woman and 'doctor' when it is a man, without considering the characteristics that actually matter (e.g., the type of coat or the presence of a stethoscope).
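The mechanism behind this kind of bias can be illustrated with a toy sketch (this is a hypothetical example, not the cited study's code): when nurses in the training data are mostly women and doctors mostly men, a naive model that keys on gender looks accurate on the skewed data but fails on a balanced test set, while a model that uses the genuinely relevant feature (here, a stethoscope flag standing in for visual cues) does not.

```python
from collections import Counter

# Toy, hypothetical illustration of data-collection bias.
# Each example is a triple: (gender, has_stethoscope, label).
# The true signal is the stethoscope; gender is incidental.

def biased_training_set():
    # Skewed collection: nurses are mostly women, doctors mostly men.
    data = []
    data += [("F", 0, "nurse")] * 90 + [("M", 0, "nurse")] * 10
    data += [("M", 1, "doctor")] * 90 + [("F", 1, "doctor")] * 10
    return data

def majority_label_by(data, key_index):
    # "Train" a one-feature classifier: majority label per feature value.
    counts = {}
    for row in data:
        counts.setdefault(row[key_index], Counter())[row[2]] += 1
    return {k: c.most_common(1)[0][0] for k, c in counts.items()}

def accuracy(model, key_index, test):
    hits = sum(model[row[key_index]] == row[2] for row in test)
    return hits / len(test)

train = biased_training_set()
# Balanced test set: gender no longer predicts the profession.
test = [("F", 0, "nurse"), ("M", 0, "nurse"),
        ("F", 1, "doctor"), ("M", 1, "doctor")] * 25

gender_model = majority_label_by(train, 0)  # looks only at gender
steth_model = majority_label_by(train, 1)   # looks only at the stethoscope

print(accuracy(gender_model, 0, test))  # 0.5 -> the learned shortcut fails
print(accuracy(steth_model, 1, test))   # 1.0 -> the real signal generalises
```

The gender-only model scores 50% on the balanced test set, no better than chance, which is exactly the failure mode the study describes: the classifier has learned the demographic skew of the collection process rather than the task itself.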
That is why it is so important to adopt an ethical approach to AI and to the use and sharing of data and algorithms that goes beyond legal obligations, in order to maximise sustainable value creation and minimise risks to individuals and society.
AI in the insurance sector
Artificial intelligence systems are being used with increasing frequency in the insurance industry to provide ever more personalised, accurate, and competitively priced products and services: from health and life protection to premium setting, underwriting and claims assessment. If not properly designed, developed and used, however, AI systems can pose significant risks to people’s lives, including financial exclusion and discrimination. For instance, can AI correctly decide who should or should not receive insurance products and services, and can its decision-making process be made transparent for end customers? These are specific ethical dilemmas that Generali has long addressed by maximising the transparency of its algorithms and pairing them with human supervision for the most sensitive applications.
Meanwhile, regulatory authorities are starting to define standards and controls for the use of AI, with significant impacts on insurance as well: this is the case, for instance, of the European Regulation on Artificial Intelligence (‘AI Act’), proposed by the European Commission in April 2021 and expected to be approved in 2023. The AI Act package includes new obligations concerning:
- the evaluation of risks deriving from AI use
- algorithm control and monitoring
- transparency towards customers
The aim is to promote the responsible development of AI and strengthen Europe’s competitive potential globally, maximising resources and coordinating investments. Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest 1 billion euros per year in AI, mobilising further investment from the private sector and Member States to reach an annual investment volume of 20 billion euros over the digital decade.
Generali's strategy and the Trustworthy AI initiative
Generali welcomes this historic change as an opportunity to strengthen its Lifetime Partner 24: Driving Growth strategy by integrating its core ethical values into its new digital programmes. Through this strategy, Generali aims to:
- increase its digital trustworthiness, thus creating a distinctive competitive advantage;
- strengthen its Lifetime Partner ambition;
- mitigate ethical, reputational, and financial risks, such as a lack of transparency and interpretability in AI systems, or unintentional bias and discrimination in their decision-making processes.
Generali also supports the adoption of the new “trustworthy AI” model: a combination of legislative frameworks, techno-ethical principles, advanced analytical techniques, and organisational transformations that will profoundly change the current way of developing and applying AI. The Group also works with regulators and industry associations to improve legislative proposals on digital technologies, leveraging its technical expertise and business experience.
This includes, for example, the Trustworthy AI initiative, which aims to ensure the responsible use of data and algorithms to earn full digital trust from customers, leading to a sustainable competitive advantage and a stronger Lifetime Partner ambition. Generali’s strategy is also based on the idea of providing guidelines and developing algorithms capable of avoiding risks and ensuring transparency in all processes, so that human control is guaranteed over the most delicate activities and decisions.