This article pulls together findings from our latest Top Risk Review, in which participants were asked to share their personal views on material risk concerns relating to the rapid development and adoption of artificial intelligence (AI).
Adopting artificial intelligence brings a range of opportunities…
While the survey focused primarily on risks and threats, respondents acknowledged the importance of exploring the opportunities and advantages of new and emerging technologies. They noted the following key motivators behind adopting artificial intelligence:
- Efficiencies and improved processes
- New and enhanced modelling capabilities
- Improved decision-making
- Improved data analytics and trend observations, e.g. vulnerability or threat detection for operational resilience
- Control environment optimisation, automation and enhancements, e.g. new detective cybersecurity controls
In addition to the above, maintaining competitive advantage and the risk of ‘falling behind’ other industry players (including competitors) by failing to adopt emerging technology were also key themes.
…But with those opportunities come a list of threats impacting across the risk profile
Chart details: Survey respondents were asked to select up to five risks from the ORX Reference Taxonomy that they believed will be most adversely impacted by AI risk factors over the next six months.
Whether it relates to regulation, the overall pace and direction of technological advancement or the difficulty of assessing and measuring the associated risks, it is clear that AI is driving significant uncertainty. As a result, AI is now considered a central driver of material and emerging risks. A number of specific threats emerged from the survey.
Exploitation of new and developing technology by criminals and bad actors
Widespread access to AI is creating new opportunities for bad actors to exploit, lowering the barriers to entry for committing cybercrime. This is resulting in an attack landscape that is not only larger in terms of the number of potential perpetrators but also more sophisticated overall. The development of new, harder-to-detect attack types and advancing capabilities to circumvent controls are significant concerns to risk professionals, who frequently describe the cyber risk landscape as a continuous arms race.
Specific methodologies and threat types listed include advanced impersonation attacks, the use of deepfake technology and the use of AI to write malware scripts.
In addition to using AI maliciously themselves, bad actors may also seek to create or exploit vulnerabilities in the AI and machine learning models that organisations use. This is typically done in order to manipulate or steal data or to cause disruption, e.g. through adversarial attacks such as data poisoning or inference attacks.
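To make the data-poisoning threat concrete, the following is a minimal, illustrative sketch (not drawn from the survey; all names, values and the toy classifier are assumptions). It shows how an attacker who can inject mislabelled records into a training set can shift a simple model's decision boundary so that malicious activity is classified as benign.

```python
# Toy data-poisoning illustration: a nearest-centroid classifier over
# one-dimensional "activity scores". Purely hypothetical values.

def train(samples):
    """Compute the mean (centroid) of the values seen for each label."""
    sums, counts = {}, {}
    for label, x in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: 'benign' activity clusters near 0, 'malicious' near 11.
clean = [("benign", 0), ("benign", 1), ("benign", 2),
         ("malicious", 10), ("malicious", 11), ("malicious", 12)]

# The attacker injects high-scoring records deliberately mislabelled 'benign',
# dragging the benign centroid towards the malicious cluster.
poison = [("benign", 10), ("benign", 12), ("benign", 14)]

clean_model = train(clean)
poisoned_model = train(clean + poison)

# A borderline observation at x = 7 is flagged by the clean model
# but slips past the poisoned one.
print(classify(clean_model, 7))     # -> malicious
print(classify(poisoned_model, 7))  # -> benign
```

The same mechanism scales up: in a production pipeline the "injected records" might be crafted inputs fed into a model that retrains on user-supplied data, which is why provenance and integrity controls over training data matter.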
Quality, management and protection of data
The use of AI is furthering existing opportunities and threats relating to the growing strategic importance and value of data, something which has been a central theme in recent ORX risk landscape research.
Ongoing challenges associated with data quality, compatibility and availability could inhibit or degrade the quality of AI model outputs, leading to biased outcomes as well as inappropriate or poor (strategic) decision-making, potentially driving up Conduct and Model risk exposure.
Other data concerns associated with using AI technology include potential data theft or breaches (e.g. through housing sensitive data on AI platforms) and the risk of infringing intellectual property rights (particularly associated with the use of generative AI).
Impact on people and culture
With frequent media and other coverage of AI often highlighting the threat of job displacement, risk managers worry about the impact of such concerns on the welfare and overall atmosphere of the workforce. Increased levels of discontent, anxiety and general uncertainty could translate into cultural challenges such as increased levels of misconduct and a lack of risk management discipline.
Undesirable customer and stakeholder outcomes
Relying on AI-enabled automation and AI models to perform tasks and make decisions for, or relating to, customers and other external stakeholders is driving significant conduct and ethical concerns. This includes risks resulting from inappropriate, unfair and/or biased outcomes. Examples may include product mis-selling, discrimination and biased or unethical recruitment practices.
The risk may be further compounded or perpetuated as lack of model transparency and inherent data risks make it harder to detect and correct model-driven biases or flaws.
Because AI models rely heavily on data, ensuring the secure, transparent, ethical and responsible use of customer and other sensitive data is a key priority.
Other notable concerns
Business continuity and resilience
The use of AI across businesses may drive new and emerging business continuity threats, e.g. where automated processes are not adequately backed up by continuity plans or as a result of new and emerging cyber threats.
Despite sounding like a dystopian, futuristic concern, scenarios where control of AI is, to some degree, lost or compromised present a legitimate risk associated with increased use of the emerging technology. While such scenarios are considered improbable, entrusting AI with significant responsibilities without adequate oversight, or using irresponsibly developed technology, could increase the risk of them materialising.
Third party use of AI
Lack of transparency and openness around the use of AI at third parties could lead to downstream privacy, conduct and security risks.