Bryony Evans, Kendra Fouracre and Ella Hall look at how AI is regulated globally – and the path that Australia should take. They explain why AI-specific regulation should build on the existing laws we have, rather than rushing to introduce new laws.
With the meteoric rise in popularity of AI chatbots and the productivity potential of generative AI promising to transform the way we work, artificial intelligence (AI) is the new “must-have” for individuals and companies. But as all things AI continue to proliferate, so do concerns over the potential risks that using AI entails – sparking urgency among countries to consider how to regulate it.
In July, Australia’s Department of Industry, Science and Resources called for public submissions on how Australia should regulate AI. As summarised by Minister Ed Husic:
“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment … Today is about what we do next to build trust and public confidence in these critical technologies.”
Australia is already behind other countries in regulating AI. The European Union is more than two years into developing (and will soon enact) its first-of-its-kind Artificial Intelligence Act, which tackles the harms of AI across all sectors of the economy by imposing prescriptive rules that AI systems must comply with based on the risk they pose (an approach clearly inspired by product liability legislation). Canada and Brazil are considering similar approaches to regulation.
Other countries are taking contrasting approaches.
- The United Kingdom has proposed a sector-based approach where a select number of regulators target sector-specific risks presented by AI.
- China has enacted provisional rules specific to public-facing generative AI services.
- Singapore is encouraging companies to voluntarily manage the risks of AI.
This poses the question: should Australia directly adopt any of these international approaches to regulating AI?
We think the answer is no.
The limitations of current approaches
Prescriptively listing what counts as high-risk AI across the economy (as the EU has done) produces regulation that will fail to keep pace with the evolution of AI.
As it is, the European Parliament has already had to propose changes to the draft Artificial Intelligence Act to introduce rules on how companies can deploy generative AI such as ChatGPT (a topic missing in the initial draft released in 2021).
In contrast, the UK’s approach of regulating AI only in certain “high-risk” sectors is already drawing criticism for its potential to create inconsistency and inefficiency, and to leave gaps wherever AI is used in an unregulated sector. After all, a substantial proportion of the AI being developed will not be unique to a single sector of the economy.
The alternative of “no regulation” and a purely voluntary approach would leave Australia where it is at the moment – with a patchwork of existing laws and only some organisations doing the right thing to manage the risks of AI and maintain public safety and trust.
However, when other organisations do the wrong thing, the public’s trust in AI and how it is used by Australian organisations is at risk.
The need for a balanced approach in Australia
Ultimately, the regulatory approach for Australia will be about balance. Balancing the need to regulate the risks posed by AI without stifling innovation. Balancing the need for regulation that keeps pace with emerging AI against the need for regulation that supports a strong economy. Balancing the need to reflect the Australian legislative context without ending up with different rules operating in different jurisdictions. Balancing innovation with the need for the Australian public to trust that all Australian companies will do the right thing and will actively consider, and mitigate, the harms that AI presents to individuals, society and the environment.
To reach this balance in Australia, we think AI-specific regulation should build on the existing laws we have, rather than rushing to introduce new laws.
We already have laws that apply (in part) to AI – let’s review them and, if we need to, amend them to better reflect the potential impacts of AI on individuals, society and the environment. The use of personal information in automated decision-making is best regulated through the Privacy Act. The potential for bias in AI-generated decisions is best regulated through anti-discrimination legislation. Managing the rights of creators when AI is used is best dealt with through the Copyright Act.
Building principles of ethics – and trust in AI
Australia also already has in place a robust set of national AI Ethics Principles, released by the Department of Industry, Science and Resources – let’s build upon them. Let’s encourage public trust in AI by building on principles such as benefiting individuals, society and the environment; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.
Furthermore, some organisations are already assessing the impact of AI before they deploy it – let’s encourage that practice by having organisations undertake AI impact assessments before deploying AI for certain use cases, including to work out how best to mitigate the risks they identify. Because those risks will differ depending on what AI is being used for and how, it is companies (and not the government) that are best placed to consider how to mitigate the risks to individuals, society and the environment.
With submissions to the federal government’s consultation process having officially closed on August 4, only time will tell how Australia will navigate this emerging AI regulatory landscape.
This opinion piece was originally published in The Australian on August 15, 2023.