A large number of Australia's AI industry participants responded to the Australian Government's consultation on the Safe and Responsible Development and Deployment of AI, revealing a divergence of views on the future governance of AI.

In June 2023, the Department of Industry, Science and Resources sought public consultation on a discussion paper addressing the risks and responsible use of Artificial Intelligence (AI) (Discussion Paper). The Discussion Paper, informed by the March 2023 Rapid Response Information Report, focused on potential legal and practical mechanisms to support the responsible development and deployment of AI in Australia. Given the diverse application of AI across a range of industries and use cases, determining the appropriate governance and regulatory response is challenging: different use cases of similar AI technology can give rise to very different risks. The extensive public consultation should help identify the optimal mix of governance tools, balancing the key objectives of fostering innovation and ensuring safe and responsible AI. The Discussion Paper received a strong industry response with over 500 submissions, 448 of which have been published. The published submissions demonstrate the wide range of views across the AI industry.

Defining AI

Opinions varied even on the basic question of how to define AI and key related terms, including machine learning, generative AI, large language models, multimodal foundation models, and automated decision making. The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) highlights the risk that any definition will quickly become outdated given the constantly evolving AI landscape. Canva recommends that definitions of AI, Machine Learning (ML), and algorithms be aligned with global definitions to achieve consistency across sectors and nations. Canva also stresses the importance of distinguishing between generative and non-generative AI systems because of their very different risk profiles (e.g. generative AI being more unpredictable due to the 'black box' phenomenon). Amazon Web Services (AWS) suggests a definition of AI that focuses on the technology's unique attributes, and Microsoft suggests making the definitions of foundation models more specific, to acknowledge that they are a narrow class of AI models and are not deployed directly to users. National Australia Bank (NAB) recommends strengthening the definitions framework by defining the characteristics inherent in 'high-risk' applications of AI. While responses to the proposed definitions vary significantly, a common theme emerges: the need for future-proof definitions which accurately describe the technology and enable its appropriate treatment and/or regulation. Herbert Smith Freehills' submission shares this view, suggesting that definitions should strike an appropriate balance and not be so broad as to result in overregulation.

Risk-based approach

Across a sample of the public submissions, there was general support for a risk-based approach to assessing and mitigating the potential consequences of high-risk applications of AI. Herbert Smith Freehills notes that the advantage of a risk-based approach is flexible regulation that does not stifle innovation, but suggests that risk assessment should be an ongoing conversation because it is impossible to predict all outcomes and unforeseen risks. At this early stage, Atlassian suggests that the most effective approach is likely to be a mix of 'hard' and 'soft' law to address AI risks and harms while also building the foundation for a culture of safe and responsible AI. Google and the Human Technology Institute recommend that regulation be risk-weighted according to severity and likelihood, while ADM+S queries who will classify the risk profiles of different systems, noting that some parties may be incentivised to underestimate risk. The Gradient Institute suggests that a risk-based approach would fail to target context-specific risks effectively, whereas the Commonwealth Scientific and Industrial Research Organisation (CSIRO) notes the approach's particular adeptness at dealing with unexpected events. Google, AWS, and OpenAI all express support for a risk-based approach, with Google emphasising the need for a 'human in the loop' and notices for high-risk practices. However, ADM+S notes that human oversight of AI can often obscure problems with AI systems rather than make them more transparent.

CSIRO suggests that a risk-based approach could be complemented by other approaches focused on rights, quality, principles, and outcomes. Atlassian notes that a 'traffic-light' classification system may help prioritise present and imminent harms not currently captured by the regulatory regime, supporting the desired outcome of adequately regulating high-risk applications of AI. The University of Melbourne's School of Computing and Information Systems and Centre for Artificial Intelligence and Digital Ethics (CAIDE) calls for an adaptive and flexible approach to regulation, which it describes as a principles-based approach, to be compatible with Australia's current regulatory regime. ADM+S agrees, suggesting that the success of a risk-based framework depends on addressing gaps in our legislation and enforcement capacities.

Addressing gaps in existing laws

The Government is assessing the preferred approach to the governance of AI, querying the need for new AI-specific laws and reforms to existing sector-specific or general laws. OpenAI notes that while the legal landscape is rapidly changing, existing legislation is being interpreted and adapted to address new risks arising from AI. As it may take time for government regulation to put an adequate framework in place, the AI developer encourages the adoption of voluntary commitments and recommends the development of registration and licensing requirements for future generations of foundation models. CSIRO proposes seven non-regulatory initiatives to facilitate responsible AI practices, ranging from developing industry best practices and trustworthiness metrics to setting up a 'national sandbox platform' to support experimentation with responsible AI approaches.

Many submissions identified aspects of the current legal framework as inadequate to ensure the safe and responsible development of AI. The Human Technology Institute advocates uplifting existing regulation in addition to taking a risk-based approach to AI-specific regulation. NAB favours a regulatory approach that harmonises existing legislation to regulate AI as the landscape continues to change rapidly, noting that privacy law and financial services regulation can be used to address emerging risks posed by AI in those sectors. As CAIDE notes, the risk AI poses to personal privacy places increased urgency on long-awaited privacy law reforms. Because uplifting existing regulation takes time, NAB suggests that it is important to 'future proof' the governance approach by raising awareness of how existing laws can apply to risks arising from AI use. Where material risks are not covered by existing legislation, NAB supports specific and targeted AI regulation, but only to the extent needed to close the gap, so as not to duplicate regulation or hinder innovation.

ADM+S suggests that for a risk-based approach to AI regulation to be effective in Australia, it will require adequate enforcement mechanisms and consequences for harms caused by an organisation's irresponsible development of AI. ADM+S draws attention to the fact that some harms arising from AI systems currently have no legal remedy, suggesting that more enforcement pathways are needed for interest groups to drive transparency and accountability in AI regulation. ADM+S and CAIDE both stress the importance of requiring ex-ante risk assessment and mitigation during the design and development of AI systems, before any harm has occurred.

Across the submissions sampled, the varied responses shared a common thread: advocating both the uplift of existing legislation to regulate AI effectively and the need for thoughtful, targeted, AI-specific legislation.

The takeaway

The high volume of industry response to the Discussion Paper highlights both the broad interest in the technology and the complexity of the challenge of regulating AI. The submissions vary on nuanced aspects of the Discussion Paper, including how to define key terms, the merits of a risk-based approach to AI, and the importance of addressing gaps in existing laws. However, there is also commonality and coherence amongst industry participants and stakeholders. This demonstrates the need for ongoing dialogue and adaptive strategies to ensure appropriate governance models are developed for the safe and responsible development and deployment of AI. The Australian Government now faces the challenge of synthesising these diverse perspectives into a coherent and effective regulatory framework. From this consultation, it is likely there will be an uplift of existing regulation and the development of new, specific and targeted AI legislation for high-risk applications. However, as many submissions noted, any governance framework emerging from this consultation should be subject to ongoing review and improvement. The real challenge will, as always, be whether the law can keep pace with a runaway AI boom.

Key contacts

Susannah Wilkinson, Director, Emerging Technology (Advisory), Brisbane
Julian Lincoln, Partner, Head of TMT & Digital Australia, Melbourne
Kwok Tang, Partner, Sydney
