The Australian Government is calling for industry consultation to inform the appropriate regulatory and policy responses to mitigate the potential risks of AI and support safe and responsible AI practices in Australia. 

On 1 June 2023, the Australian Government released a discussion paper titled Safe and Responsible AI in Australia. The Australian Government recognises the potential for Australia to become a global leader in responsible AI and is seeking submissions on whether further governance mechanisms (including regulatory and voluntary mechanisms) are required to mitigate AI risks and increase public trust and confidence in its use.

The discussion paper builds upon and was released concurrently with the National Science and Technology Council’s Rapid Research Report on Generative AI and is open for consultation until 26 July 2023.

Hon. Ed Husic, Minister for Industry and Science, stated: ‘using AI safely and responsibly is a balancing act the whole world is grappling with at the moment…there needs to be appropriate safeguards to ensure the safe and responsible use of AI’. Given the potential risks of AI, it is appropriate that Australia consider whether further regulation is required. The paper notes that Australia’s governance framework should harmonise with those of its major trading partners to help bolster its economy. Governance measures should aim to ensure appropriate safeguards are in place and give businesses confidence when investing in AI technologies.

As well as seeking industry consultation, the discussion paper outlines:

  • opportunities and challenges with AI technologies;
  • the domestic and international landscape on AI regulation; and
  • managing the potential risks of AI.

The high-level discussion paper proposes a risk management approach that draws heavily from the European Union’s proposed AI Act and the Canadian Directive on Automated Decision-Making. Under this approach, an organisation assesses the risk level of the AI application being considered: the higher the risk level, the more onerous the risk management requirements that apply. The paper asserts this approach best caters to context-specific risks, allowing for less onerous obligations where appropriate while permitting AI to be used in high-risk settings where justified.

While there are no real surprises in the discussion paper, the level of activity and attention being given to AI, both domestically and globally, points to a serious appetite for reform within Australia.

INTERNATIONAL CONTEXT

This consultation takes place against a backdrop of recent announcements by other jurisdictions grappling with the same challenge in differing ways. Some jurisdictions continue to rely on voluntary self-regulation and frameworks, while others are pushing for more targeted risk-based regulations. As summarised in the paper, “[s]ome countries like Singapore favour voluntary approaches to promote responsible AI governance. Others like the EU and Canada are pursuing regulatory approaches with proposed new AI laws. The US has so far relied on voluntary approaches and is consulting on how to ensure AI systems work as claimed, and the UK has released principles for regulators supported by system-wide coordination functions. G7 countries in May 2023 agreed to prioritise collaborations on AI governance, emphasising the importance of forward-looking, risk-based approaches to AI development and deployment.”1

Despite the varying approaches, a common theme in public statements from senior officials across the globe is that the cross-border application of AI as an emerging technology requires international convergence on governance approaches, both to lay appropriate guardrails and to foster innovation that will unlock the associated economic benefits.

The G7 countries have announced their intention to develop guardrails on AI. In April this year, ministers for digital and technology issues met in Japan and agreed broad recommendations for AI ahead of the May G7 summit. In their communiqué, they reaffirmed that “AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data”.2

European Union AI Act

The EU is pressing ahead with the proposed European law on artificial intelligence (the AI Act), which would be the first general law on AI by a major regulator anywhere. As a result, the AI Act is expected to exhibit the ‘Brussels effect’ experienced with the General Data Protection Regulation (GDPR), which took effect in 2018. The GDPR harmonised data privacy laws across Europe and went on to set the international benchmark for data privacy for businesses worldwide.

The approach taken in the AI Act recognises that it is impractical to regulate the technology itself, and so focuses on regulating the use of the technology where AI applications and systems present a high or unacceptable risk. The AI Act adopts a risk-based approach, imposing obligations on providers, developers and users based on the level of risk the AI system can generate.

“AI systems with an unacceptable level of risk to people’s safety and intrusive and discriminatory uses would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”3

High-risk applications, being those that pose harm to people’s health, safety, fundamental rights or the environment, are subject to specific legal requirements (for example, systems used to influence voters in political campaigns or CV-scanning tools that rank job applicants).

The significant number of AI applications that are neither banned as unacceptable nor regulated as high-risk will likely fall outside the regulatory perimeter of the regime.

The AI Act sensibly recognises that compliance with legal principles is often challenging in the case of rapidly evolving technology. As a result, the AI Act will lean heavily on international technical standards as evidence of compliance. To support industry, the AI Act seeks to place the burden of evaluation on the regulator, not the individual company.

The current hype around generative AI has resulted in updates to the draft bill since its first iteration. Generative foundation models (the language model engines underpinning chatbots like ChatGPT, Microsoft Bing, and Google Bard) will be required to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database. Transparency obligations will require disclosure that content was generated by AI, design features to prevent the model from generating illegal content, and transparency regarding training data that is subject to copyright.4

After an extended period of consultation and debate, earlier this month the Internal Market Committee and the Civil Liberties Committee of the European Parliament adopted a draft negotiating mandate for the text by a resounding majority. If Parliament votes to accept the text at the upcoming plenary vote scheduled for mid-June, this will provide the mandate for subsequent trilogue negotiations with the Council and the Commission. The proposed EU AI Act will become law once both the Council, representing the 27 EU Member States, and the European Parliament agree on a final version of the text, expected by the end of 2023 or early 2024.5

U.S. Approach

The U.S. has a well-established tech industry and strong motivation to promote investment and economic activity in AI. Historically, the U.S. has relied on voluntary standards, self-regulation and the patchwork of existing laws and regulations that apply to AI systems. A high-profile Senate hearing in mid-May signalled a potential shift in approach towards considering domestic AI regulation, with lawmakers questioning OpenAI CEO Sam Altman on how AI should be regulated.6 While the U.S. has released the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Blueprint for an AI Bill of Rights, there is no genuine indication yet of a unified federal law on AI. The U.S. is, however, playing a key role in the harmonisation and convergence of international standards, a vital part of the international AI governance response. At the fourth ministerial meeting of the EU-US Trade and Technology Council on 12 May 2023, representatives resolved to strengthen transatlantic co-operation on emerging technologies (including AI).

China

China was one of the first countries to introduce specific legislation directed at particular use cases of AI models and systems, including recommendation algorithms and deep synthesis technology (the technology behind deepfakes). We expect China to continue to adopt this targeted approach.

STATE OF PLAY IN AUSTRALIA

Currently, there is no AI-specific legislation in place in Australia. The current consultation process follows attempts by the previous government, including the Department of the Prime Minister and Cabinet’s consultation in March 2022. At that time, the Digital Technology Taskforce was exploring how regulatory settings and systems could maximise opportunities and facilitate the responsible use of AI and automated decision making. The intended discussion paper identifying possible reforms and actions was never published.

Until any AI-specific law reform is introduced in Australia (if at all), businesses will need to continue to navigate the legal frameworks enforced by various regulators: general laws (e.g. data protection and privacy, the Australian Consumer Law, discrimination law, copyright law), sector-specific laws (e.g. motor vehicles, therapeutic goods) and current law reform processes that may impact AI applications and their uses, such as the Privacy Act Review Report.

In addition to legal requirements, companies should consider voluntary frameworks and guidance for self-regulation. Such initiatives include:

  • Australia’s AI Ethics Principles. Australia was one of the first countries in the world to adopt AI ethics principles. These voluntary principles aim to help:
    • achieve safer, more reliable and fairer outcomes for all Australians
    • reduce the risk of negative impact on those affected by AI applications
    • businesses and governments practise the highest ethical standards when designing, developing and implementing AI.
  • The work by Standards Australia and other international standards bodies, including, for example, SA TR ISO/IEC 24027:2022, Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making (which adopts ISO/IEC TR 24027:2021 and specifies measurement techniques and methods to address bias-related vulnerabilities), and ISO/IEC TR 24028:2020, Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence (which outlines approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security and privacy of AI systems).
  • The Human Technology Institute’s recently released State of AI Governance in Australia report, which provides an overview of proposed corporate governance structures surrounding AI. The report highlights the need for AI-focused corporate governance that Australian organisations should consider implementing. It also notes that, while Australia does not have AI-specific laws, companies are subject to many technology-neutral laws of general application (including IP laws, anti-discrimination legislation and company director duties).
  • The Responsible AI Network, a cross-ecosystem program aimed at supporting Australian companies to use AI both ethically and safely. The Network is centred on six pillars: Law, Standards, Principles, Governance, Leadership and Technology.

  1. Safe and Responsible AI in Australia discussion paper, p 3.
  2. https://www.ft.com/content/1b9d1e21-ebc1-494d-9cce-97e0afd30c2d
  3. https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
  4. European Parliament News, ‘AI Act: a step closer to the first rules on Artificial Intelligence’, press release, 11 May 2023.
  5. https://artificialintelligenceact.eu/developments/
  6. https://www.nytimes.com/2023/05/17/business/openai-altman-congress-ai-regulation.html

Key contacts

Susannah Wilkinson, Regional Head, Emerging Technology (APAC), Brisbane
Julian Lincoln, Partner, Head of TMT & Digital Australia, Melbourne
