
In Competition

True or False?

10 October 2024

UPDATE: On 24 November 2024, the Minister for Communications (the Hon Michelle Rowland MP) announced as follows: “Based on public statements and engagements with Senators, it is clear that there is no pathway to legislate this proposal through the Senate. The Government will not proceed with the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.”

Rebecca Maher and Zareen Qayyum explain how ACMA is set to get new powers to regulate content on digital platforms for misinformation.

On 12 September 2024, the Hon Michelle Rowland MP, Minister for Communications, unveiled a long-anticipated proposal to combat misinformation and disinformation on digital communications platforms (think: social media, instant messaging, online forums, search engines, and more).

The Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (Bill) was introduced following a consultation process on an exposure draft in 2023. The Bill has been referred to the Senate Environment and Communications Legislation Committee, with a report due on 25 November 2024.

The Bill proposes to empower the Australian Communications and Media Authority (ACMA) with enhanced regulatory capabilities and mandatory information-gathering powers aimed at holding digital platforms accountable – designed to facilitate a safer and more transparent online environment for all Australians.

In this article, we:

  • explore the history of attempts to regulate misinformation and disinformation
  • detail the obligations imposed on providers under the Bill and their interaction with the Australian Consumer Law, and
  • outline the penalties for non-compliance.

Moving from voluntary to mandatory

In December 2019, as part of the Government’s response to the Australian Competition and Consumer Commission’s Digital Platforms Inquiry, ACMA was asked to oversee the development of a voluntary code by industry on disinformation and news quality.

In February 2021, the Digital Industry Group Inc launched the voluntary Australian Code of Practice on Disinformation and Misinformation, which requires its signatories to commit to a number of measures to address misinformation and disinformation on their services.

In its June 2021 and July 2023 reports to the Australian Government, ACMA identified a number of shortcomings with existing self-regulatory requirements, including in particular a lack of consistent data available from digital communications platform providers on their measures and actions taken to address misinformation and disinformation in the Australian market.

The proposed Bill was therefore crafted with the following key objectives:

  • Empowering ACMA: The legislation will enable ACMA to require digital communications platform providers to take proactive steps in managing the risks associated with misinformation and disinformation on their platforms.
  • Increasing transparency: It aims to enhance transparency in how these providers manage published content, allowing users and ACMA alike to better understand the measures being taken to address misinformation and disinformation.
  • Empowering users: The Bill is designed to empower users of digital communications platforms to identify and respond to misinformation and disinformation on those platforms.

Cracking the (Misinformation) code

If passed, the Bill will insert a new Schedule 9 into the Broadcasting Services Act 1992 (BSA).

Schedule 9 generally applies to providers of digital communications platforms, which include a broad range of digital services such as search engines, news aggregators, instant messaging services, social media, web-forums, dating sites and podcasts with an interactive feature, but not SMS and MMS, email services or media sharing services without an interactive feature. The Minister also has the power to exclude certain digital services by legislative instrument.

1. Essential legislative obligations

If passed, the legislation will impose essential obligations on digital communications platform providers, which include:

  • Risk Assessments: Providers will be required to conduct assessments of the risks associated with misinformation and publish their findings.[1]
  • Policy Transparency: They must outline their approach to managing misinformation and how they plan to address it on their platforms.[2]
  • Media Literacy Plans: Providers will need to develop and publish media literacy plans aimed at educating users about identifying misinformation.[3]
  • Information gathering: Providers will be required to comply with ACMA’s new compulsory information-gathering powers.[4]

2. Digital platforms rules

The Bill proposes to empower ACMA to, by legislative instrument, make “digital platforms rules” in relation to risk management, media literacy plans, and complaints and dispute handling processes.[6] The digital platforms rules are proposed to apply to all digital communications platform providers.

3. Mandatory misinformation code

Importantly, the Bill also empowers ACMA:

  • to approve a mandatory ‘misinformation code’ developed by the industry, or
  • in the absence of a code being developed that satisfies the relevant requirements, to develop mandatory misinformation standards requiring digital communications platform providers to take steps to manage the risk of misinformation and disinformation.[7]

Examples of matters that may be included in a misinformation code include:

  • Preventing or responding to misinformation or disinformation: Codes or standards may set objectives for platforms to take actions to minimise the spread of misinformation, e.g., altering algorithms or limiting the reach of accounts spreading false information.
  • Using technology to address misinformation: This includes requiring digital platforms to utilise automated processes to detect and manage misinformation or disinformation, such as prompting users to reflect before sharing.
  • Handling misinformation as foreign interference: Codes may require platforms to identify and manage instances of misinformation that constitute foreign interference differently from other types of content.
  • Preventing advertising with misinformation: Digital platforms could be required to reject advertisements containing misinformation or disinformation, especially where the content could harm public health or undermine preventative health measures.

Crucially, the focus of Schedule 9 is on enhancing systems and processes for managing the risk of misinformation and disinformation, rather than regulating individual pieces of content. This means that while platforms will be required to take proactive, preventative action against misinformation, they will not be compelled to remove specific posts unless they involve “inauthentic behaviour”,[8] which encompasses tactics such as using fake accounts, automated bots, impersonation and other manipulative strategies to amplify misinformation or disinformation.

However, this does not prevent other legislation from regulating content that involves false, misleading or deceptive conduct or representations. For example, the Bill does not limit existing obligations that may apply to digital communications platform providers under the Australian Consumer Law s 18 (which prohibits conduct, in trade or commerce, that is misleading or deceptive or is likely to mislead or deceive) and s 29 (which prohibits false or misleading representations in trade or commerce in connection with the supply or possible supply of goods or services, or in connection with the promotion by any means of the supply or use of goods or services).

Diving deeper into the interaction with the Australian Consumer Law, it is notable that misinformation and disinformation are defined as content that is ‘false, misleading or deceptive’. However, content covered by the Australian Consumer Law is limited to conduct or representations made ‘in trade or commerce’. On the other hand, as set out in the next section, the scope of misinformation and disinformation is limited to content that is likely to cause ‘serious harm’. Therefore, while the Australian Consumer Law does not cover a wide range of misinformation or disinformation, it may provide an alternative source of relief for the ACCC or consumers in cases where content is false, misleading or deceptive but does not meet the threshold of ‘serious harm’ required to be captured by a misinformation code.

Misinformation, disinformation, what’s the difference?

A critical aspect of the Bill is the clear differentiation between misinformation and disinformation. Here’s how these terms are defined:

  • Misinformation refers to content that is verifiably false, misleading or deceptive and is likely to cause serious harm, but is disseminated without the intent to deceive.
  • Disinformation refers to content that is verifiably false, misleading or deceptive and is likely to cause serious harm, and is disseminated with the intent to deceive.

The Bill establishes a high threshold for what constitutes these terms, requiring the content to be likely to cause, or to have caused, “serious harm”. Serious harm can manifest as:[9]

  • harm to the operation or integrity of a Commonwealth, State, Territory, or local government electoral or referendum process
  • harm to public health in Australia, including the efficacy of preventative health measures
  • vilification of a group distinguished by race, religion, sex, sexual orientation, gender identity, intersex status, disability, nationality, or national or ethnic origin
  • intentionally inflicted physical injury to an individual in Australia
  • imminent damage to critical infrastructure or disruption of emergency services in Australia
  • imminent harm to the Australian economy, including harm to public confidence in the banking system or financial markets.

The definition further states that this harm must have significant and far-reaching consequences for the Australian community or severe consequences for an individual in Australia.

In addition to the high threshold for what constitutes misinformation and disinformation that is likely to cause ‘serious harm’, the Bill specifically excludes content deemed to be parody or satire, along with professional news content and reasonable dissemination for academic, artistic, or scientific purposes. This delineation is intended to ensure that legitimate forms of expression remain protected while targeting harmful misinformation.

What will compl(AI)nce look like?

In addition to the possible development of a misinformation code or standard, the legislation will require a broad range of digital platforms to implement new processes for compliance.

For example, digital communications platform providers may need to consider what processes they can employ to differentiate between legitimate information and misinformation or disinformation in order to comply with the new legislation. An approved misinformation code may require digital platforms to use technology to detect and act on misinformation and disinformation or to support fact-checking by users – including potentially by employing artificial intelligence (AI).

As to misinformation and disinformation generated by artificial intelligence, the Bill does not currently contemplate AI-generated content as a separate area for regulation. The Explanatory Memorandum reads:

An example of a new kind of digital service that could be determined to be a digital communications platform is a generative artificial intelligence service. At the time of the drafting of the Bill and this Explanatory Memorandum, the scope and nature of generative artificial intelligence services is still evolving, and it is unclear if there will be a future need to determine generative artificial intelligence services as a distinct kind of digital communications platform for the purposes of Schedule 9 to the BSA. Currently, generative artificial intelligence is starting to be incorporated into existing digital services.

Therefore, digital communications platform providers will need to consider how to account for AI-generated content on their platforms in their risk assessments, policies and media literacy plans. We can expect a greater level of regulatory action by ACMA against digital communications platform providers in respect of these obligations.

What’s the harm in a little white lie?

Remedial powers 

ACMA can issue a remedial direction requiring a digital communications platform provider to take specified actions to ensure compliance. Failure to comply with these directions can also result in civil penalties. ACMA may issue formal warnings for contraventions, which serve as a prelude to potential penalties.

This reflects a graduated approach to compliance and enforcement, allowing ACMA to respond proportionately to different levels of non-compliance.

Penalties 

The proposed penalties for a breach of the new Schedule 9 to the BSA will vary depending on the nature of the contravention, including:

  • For non-compliance by providers with their obligations to make certain information regarding misinformation and disinformation publicly available: the maximum penalty is 5,000 penalty units for a body corporate (currently $1,565,000).[10]
  • For contraventions of the information-gathering powers: the maximum penalty is 40 penalty units for a body corporate (currently $12,520).[11]
  • For contraventions of an approved misinformation code or a remedial direction to comply with such a code: the maximum penalty for a body corporate is 10,000 penalty units (currently $3,130,000) or 2% of the body corporate’s annual turnover during the turnover period, whichever is greater.[12]
  • For contraventions of a misinformation standard or a remedial direction to comply with a misinformation standard: the maximum penalty for a body corporate is 25,000 penalty units (currently $7,825,000) or 5% of the body corporate’s annual turnover during the turnover period, whichever is greater.[13]

ACMA can also issue infringement notices for certain contraventions, with specified penalties (a note on the penalty unit conversion follows this list):

  • 8 penalty units (currently $2,504) for a body corporate in relation to the information-gathering powers, and
  • 60 penalty units (currently $18,780) for a body corporate in relation to other provisions of Schedule 9.
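
For context, the dollar figures above are based on a Commonwealth penalty unit value of $313, the rate applicable when the Bill was introduced (penalty unit values are indexed periodically, so these amounts will change over time). The conversion works as follows:

  • 40 penalty units × $313 = $12,520 (information-gathering contraventions)
  • 5,000 penalty units × $313 = $1,565,000 (transparency obligations)
  • 10,000 penalty units × $313 = $3,130,000, or 2% of annual turnover if greater (misinformation codes)
  • 25,000 penalty units × $313 = $7,825,000, or 5% of annual turnover if greater (misinformation standards)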

For completeness, the Bill does not provide any avenue for private actions for damages against digital communications platform providers, though, as flagged above, it does not prevent private actions under the Australian Consumer Law or other legislation governing false, misleading or deceptive content. Depending on who publishes the content, however, such actions may need to be brought against a third party rather than the platform itself.

[1] Bill s 17.

[2] Ibid.

[3] Bill ss 17 and 22.

[4] Bill ss 33 and 34.

[5] Bill s 82.

[6] Bill ss 19, 22, 25.

[7] Bill Division 4.

[8] Defined in the Bill as any dissemination that misleads users regarding the identity or intent behind the content. This includes the use of automated systems to manipulate perceptions about the source or popularity of the information: see Explanatory Memorandum to the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024, page 61.

[9] Bill s 14.

[10] Bill s 17(6) (and 1,000 penalty units for a person who is not a body corporate).

[11] Bill s 205F(5F) (and 30 penalty units for a person who is not a body corporate).

[12] Bill s 205F(5G) (and 2,000 penalty units for individuals).

[13] Bill s 205F(5H) (and 5,000 penalty units for individuals).
