Feedback:
It might be useful to consider the video in two parts. Firstly, the human part: actuaries have to abide by the Actuaries’ Code (the Code), and much of what happens in the scenario would happen irrespective of the technology and tools being used. Secondly, the effect of the AI and whether it adds extra risks to the actuarial world.
- What principles of the Actuaries’ Code come into play?
The Communication principle of the Code states that “Members must communicate appropriately.” and “Members must communicate in a timely manner; clearly, and in a way that takes account of the users.”
Clearly, communication is a fundamental part of the problem in the scenario. Veronique will be communicating to Senior Management who will then have to communicate to the market, which in turn will mean wider public knowledge. Whatever she does will have consequences.
- If Veronique sticks with the results that she's been working on all day, to some extent, she would be misleading the market. She would not be adhering to the Integrity principle of the Code where it states that “Members must act honestly and with integrity.” and this could potentially be a problem for her professionally. There’s a risk if original results are retained, with no communication on issues, then it will eventually get out anyway (with social media implications too).
- If Veronique explains the problems that Simon has identified and the fact that 1-Gen is unsure of its results, this could have a significant effect on her employer and could also panic the market. If 1-Gen is suspected of being financially inaccurate, what about the other insurance companies doing the same type of activity?
As there are specific issues around professional requirements it is important to take into account the Compliance principle of the Code which states that “Members must comply with all relevant legal, regulatory and professional requirements.” It's a fundamental part of all actuarial work that actuaries use reliable data, and if there's any question about the data not being reliable, that it is disclosed.
The Impartiality principle of the Code also comes into play; it states that “Members must ensure that their professional judgement is not compromised, and cannot reasonably be seen to be compromised, by bias, conflict of interest, or the undue influence of others.” The scenario touches on the staff Share Option Scheme vesting the following week and Simon not wanting to lose out. Simon appears, however, to have accepted Veronique's comment that the unreliable data is a bigger issue than the Share Option Scheme. The conflict of interest stretches beyond the Share Option Scheme to the potential impact on the market, and there will be implications whatever decision they reach.
Actuaries need to be mindful of the Speaking up principle, which states that “Members should speak up if they believe, or have reasonable cause to believe, that a course of action is unethical or is unlawful.” Simon identified issues and raised them with Veronique, but if Veronique does nothing with that information, he has an obligation to speak up, perhaps to the Financial Director or the CEO, and perhaps to the IFoA if the Code is not being followed.
Veronique is in a very difficult position. She has no option but to tell her superiors and whatever they decide will affect what she does. If her superiors decide to sweep the issue under the carpet, then as an actuary, she's got the problem of deciding whether she continues to be involved, and perhaps having to go above them to the regulator.
- Technical Actuarial Standards (TASs)
If Veronique discloses that the data is unreliable, she will in effect be acknowledging that the work has not complied with TAS 100, which states that “Practitioners carrying out technical actuarial work must seek to ensure data is sufficiently accurate, complete and appropriate, so that the intended user can rely on the resulting actuarial information.”
The systems that they're using will likely have already been deemed compliant with the TAS, but the systems learn and evolve over time, and actuaries need to consider whether checking TAS compliance at a single point in time is sufficient to claim ongoing compliance. AI is going to challenge actuaries in terms of meeting those professional requirements.
- Does AI add extra risks to the actuarial world?
The IFoA published a risk alert that may prove helpful in thinking about the issues in this example. There is a consumer angle that actuaries need to be aware of in this scenario: the different systems are feeding off the underwriting and risk budget system, so this could quickly become a pricing issue. There is also the wider market, and confidence in that market, to consider.
It’s important to consider human-in-the-loop controls to mitigate some of the issues that can arise with AI. In this example, the system is set up in a way that would allow extra controls to be introduced, especially when information is passed from one part of the system to the next. Human-in-the-loop controls are something that a lot of emerging AI standards and principles are quite strong on, and in both the testing environment and the live environment, additional controls of this kind could be very beneficial.
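As a minimal sketch of what such a control could look like in practice (all names here are hypothetical illustrations of the general pattern, not anything taken from the scenario), a pipeline can be made to halt whenever a stage's output has not been signed off by a human reviewer before it is passed to the next stage:

```python
from dataclasses import dataclass

@dataclass
class StageResult:
    """Output of one pipeline stage, plus its sign-off status."""
    stage: str
    payload: dict
    approved: bool = False

def human_review(result: StageResult, reviewer) -> StageResult:
    """Gate: a human reviewer must sign off before data moves on."""
    result.approved = reviewer(result)
    return result

def run_pipeline(stages, reviewer):
    """Run (name, stage_fn) pairs, gating each hand-off on human approval."""
    data = {}
    for name, stage_fn in stages:
        result = StageResult(stage=name, payload=stage_fn(data))
        result = human_review(result, reviewer)
        if not result.approved:
            # Stop the pipeline rather than let unapproved output propagate
            raise RuntimeError(f"Stage '{name}' blocked pending human review")
        data = result.payload
    return data
```

The design point is that the gate sits between stages, so questionable output from one system (such as the underwriting and risk budget system in the scenario) cannot silently propagate downstream.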
Communication is fundamental and is an essential element of the process, both in relation to explaining assumptions, methods and results, and the risks and limitations associated with data and models.
There's a lot of information available about developing validation solutions around these systems, and actuaries should use their skills to put better controls in place once these systems go live.
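One simple example of a post-deployment control (a hypothetical sketch under assumed names, not a prescribed approach) is a drift check that flags any run whose output moves too far from the previous validated run, prompting human investigation before the results are used:

```python
def within_drift_tolerance(estimate: float,
                           prior_estimate: float,
                           tolerance: float = 0.10) -> bool:
    """Return True if the new estimate is within a relative `tolerance`
    of the prior validated run; False means it should be flagged for
    human review before use."""
    if prior_estimate == 0:
        # No baseline to compare against: only an unchanged zero passes
        return estimate == 0
    drift = abs(estimate - prior_estimate) / abs(prior_estimate)
    return drift <= tolerance
```

The tolerance here is illustrative; in practice it would be set by the actuary based on the volatility expected between runs.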
- How sceptical should actuaries be about AI?
It would be prudent for actuaries to apply professional and constructive scepticism in their work when dealing with newer techniques, particularly techniques in which not everyone will be an expert.
Hallucination is the technical term for AI systems generating misleading or fabricated output. In some of the large language models, a reasonable proportion of outcomes may not look fully accurate, with some reports indicating that possibly 25% of output may contain hallucinations. Actuaries need to be aware of what type of AI system they’re working with and consider the extent to which hallucination may be a problem, both when testing and when implementing these systems.
No question is a daft question, and people who are involved in this type of work should feel able to speak up and challenge where they have concerns about AI implementations.