Despite the hype, many consumers don't trust artificial intelligence, or claims about the benefits it brings.
Research conducted by Herbert Smith Freehills reveals that just 5 percent of UK consumers are unconcerned about the growing presence of AI in everyday life. Only 20 percent say they have a high level of faith that AI systems are trustworthy.
Undertaken to mark the launch of the firm's Emerging Tech Academy, the research surveyed a representative sample of 1,000 UK consumers between the ages of 18 and 80. Respondents were asked about the type of AI systems they use today, expectations about future usage, and comfort levels with the way machines gather data and operate. Key findings include:
- manipulative machines: a significant proportion (56 percent) of those who trust AI do not accept that AI can be impartial. Amongst those who do not trust AI, more than one third (37 percent) fear the outputs of AI systems could be biased against specific groups, and over half (53 percent) fear AI will make decisions that directly affect them based on incorrect information
- responsive, but not responsible: whilst 60 percent accept that AI will make the world run more efficiently by offering solutions quickly, just over half of those who do not trust AI (53 percent) are concerned about a lack of accountability in AI systems. Almost one third (31 percent) also see AI tools failing to meet ethical expectations as a problem
- modern, yet outdated: although a significant proportion (44 percent) accept that AI can help reduce human error, just 16 percent believe AI tools give accurate information. More than one third (38 percent) also fear that AI systems use out-of-date information.
"Artificial intelligence can undoubtedly benefit consumers, but there is clearly still work to do to win their trust and overcome cynicism. The AI market risks being seen as the 'wild west' so, as policymakers define their strategies to address the risks of AI, they must ensure they are creating a system that delivers certainty and confidence now, while being flexible enough to promote and account for future innovations," says Alexander Amato-Cravero, regional head of Herbert Smith Freehills' Emerging Technology Group.
Based on the findings and ahead of the UK hosting the first major global summit on AI safety, Herbert Smith Freehills' Emerging Tech Academy has identified three steps which, taken together, can foster an environment in which consumer and business confidence in AI will improve. These are:
- accelerating the development and implementation of legally binding AI rules: the sooner policymakers plug the gaps in the current patchwork of AI rules with laws, regulations, guidance, and principles that are fit for purpose and carry the force of law, the sooner consumers and businesses will feel comfortable engaging with AI systems
- increasing alignment among domestic and global policymakers on AI: the risks associated with AI are overseen by multiple regulators and authorities. A harmonised approach is needed to address gaps in the existing collection of laws and regulations. With consumers engaging with businesses around the world, this discourse must go beyond domestic policy and address global alignment and interoperability as well.
- improving dialogue and better educating consumers and markets on AI risks: despite excitement about the possibilities of AI systems now and in the future, consumers' fear and distrust will only be reduced through balanced dialogue about the benefits, the risks, and the protections in place.
Amato-Cravero concludes: "The key to long-term success is dialogue rather than fanfare. It's easy to get caught up in the hype, but building confidence in AI requires cutting through the noise with a sharp focus on the risks as well as the opportunities. At the same time, policymakers must deliver certainty to consumers and businesses by clarifying the patchwork of existing laws and regulations."
The research was conducted during May and June 2023 and is based on 1,000 respondents, including individuals in full-time employment and education, drawn from across 11 UK regions.