On 17 January 2024, the Australian Government published its much-anticipated interim response to the Safe and Responsible AI consultation held in 2023. The interim response outlines how the Government intends to ensure that AI is designed, developed and deployed safely and responsibly in this market, while also considering the adequacy of existing technology-neutral laws in relation to AI-specific risks as part of parallel law reform reviews.
Although we do not yet know the final outcomes, the general approach of combining targeted obligations on high-risk AI with lighter-touch voluntary “soft law” for less risky uses strikes a good balance between encouraging the uptake of AI in Australia and protecting consumers.
In the international context, the Government recognises the need to consider specific obligations for the development, deployment and use of high-powered general-purpose AI, often referred to as ‘frontier AI’, and will seek to align with, and influence, international developments in this area. While we recognise the benefit of coherence with other jurisdictions’ approaches to regulating AI, Australia’s economic and policy objectives may differ from those of other major AI jurisdictions, for example in the balance struck between developers and users of AI.
The headline proposal in the interim response is a consultation on options to introduce regulations establishing safety guardrails for high-risk AI use cases. The rationale is that in high-risk use cases it can be difficult or impossible to reverse any harm caused by the use of AI. As such, safeguards will apply to the development, deployment and ongoing use of AI in high-risk areas to identify, measure and mitigate risks to the community. The proposed guardrails will focus on testing, transparency and accountability, and the Government has set out some initial proposals (see below). The Government also proposes to establish a temporary expert advisory body on AI to assess options for AI guardrails.
The practical impact of this proposal remains unclear given the current uncertainty over the scope of the definition of high-risk AI uses. In the earlier consultation paper, two examples of high-risk uses were given: robots used in surgery and autonomous vehicles. The scope of high-risk AI will be considered during the Government’s consultation, together with the development of guidelines setting out when AI use is high risk. Accordingly, businesses contemplating the use of AI in potentially high-risk areas should actively consider participating in the upcoming consultation that will shape these future regulations.
These proposals come on top of, and are intended to dovetail AI considerations into, existing regulatory reform work the Government is doing across a number of other areas, in particular reforms to the Privacy Act, a review of the Online Safety Act 2021 (including new laws to address misinformation), and efforts to increase the transparency and integrity of automated decisions in the wake of the Robodebt Royal Commission report. The interim response does not provide much detail on the progress of these reviews, although we anticipate that amendments will take account of feedback provided during last year’s consultation. Also notable in the interim response is that the temporary expert advisory body’s remit does not expressly include updates to existing laws.
Partner, Head of TMT & Digital Australia, Melbourne
Regional Head, Emerging Technology (APAC), Brisbane
Senior Associate, Melbourne
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024