
IP Whiteboard

It’s alive! Safe and responsible AI in therapeutic goods

3 December 2024

Imagine a future where medical devices, powered by artificial intelligence, are seamlessly integrated into our lives.

  • Wearable devices monitor our vital signs in real-time, alerting us to potential health risks before they escalate.
  • Smart insulin pumps automatically adjust dosage based on blood sugar levels, minimising the risk of hypoglycaemia.
  • AI-powered diagnostic tools analyse medical images with unparalleled accuracy, leading to earlier and more precise diagnoses.
  • A robotic pill roams your digestive system, taking selfies of your ulcers and sending them directly to your doctor.
  • If we can invent driverless cars, why not driverless doctors?

This future, while still emerging, holds the promise of revolutionising healthcare, improving patient outcomes, and extending lives — but, as specialists in the regulation of therapeutic goods, we ask: how will all this be regulated?

The Australian Government recently committed $39.9 million over 5 years to the development of policy and capability across government to support the adoption and use of artificial intelligence (AI) in a safe and responsible manner. The budget measure for ‘Safe and Responsible AI’ aims to clarify and strengthen existing laws to address risks and harms from AI through an immediate review of priority areas, which includes the health and aged care sector.

As part of this work, the Therapeutic Goods Administration (TGA) is leading a priority review into the implications of AI for the regulation of therapeutic goods.

What are therapeutic goods?

‘Therapeutic goods’ are broadly defined as products that are used to prevent, diagnose, treat, or monitor disease, and include products that are marketed as having a therapeutic effect (whether or not they are used for that purpose). They generally fit within three main categories: medicines, biologicals, and medical devices. Medical devices include instruments, implants and appliances, such as pacemakers and sterile bandages. They can also include wearable devices and apps, although many consumer health devices are excluded from therapeutic goods regulation.

For example, a simple fitness tracker that monitors heart rate and steps may not be considered a medical device. However, a device that can detect irregular heart rhythms or predict seizures would likely fall under the category of medical devices.

This sector-specific priority review is being conducted in parallel with further economy-wide consultation by the Department of Industry, Science and Resources (DISR), which is leading the whole-of-government agenda for Safe and Responsible AI. DISR recently released a proposals paper to introduce mandatory guardrails for AI in high-risk settings, including principles for designating settings as ‘high-risk’.

As adverse events associated with therapeutic goods are likely to have a harmful impact on physical or mental health, they will generally be considered ‘high-risk’ under the principles for designation proposed by DISR.[1] The TGA’s review of the current legislative framework has been conducted on the basis that therapeutic goods that use AI will need to demonstrate compliance with the proposed guardrails for high-risk AI models and systems.

Consultation by the TGA

The TGA released its consultation paper ‘Clarifying and strengthening the regulation of Artificial Intelligence (AI)’ on 12 September 2024, and the consultation period closed on 20 October 2024. While the TGA is yet to release its report to the Australian Government, the content of the consultation paper and the accompanying commentary by the TGA offer insight into potential areas of change, and what that change might look like.

Language and definitions

The TGA identifies two key potential changes to the language and definitions currently used in the Therapeutic Goods Act 1989 (TG Act) and subordinate legislation:

  1. changes to definitions to further clarify regulatory responsibility where software products that are, or incorporate, AI models and systems are therapeutic goods, and
  2. changes to language about activities previously performed by human beings that are now performed by engineered systems, including AI models and systems.

The TGA provides the following examples of definitions that could be recommended for change:

  1. ‘Supply’ — to include language about the availability of software products from virtual platforms, for example, websites or app stores,
  2. ‘Manufacturer’ — to include the legal entity that is appropriately responsible for the development and deployment of software products that are medical devices, and
  3. ‘Sponsor’ — to include a person who provides, hosts, or facilitates access to software products that are medical devices, particularly where they are accessible through data transfer or online platforms.

The TGA acknowledges that clarity about who is responsible and liable for the outputs of AI systems will be important, particularly where activities may constitute a breach of the TG Act or other laws. In addition to a review of definitions to ensure clarity for stakeholders, the TGA proposes to review current language that defines responsibility for meeting legal obligations under the TG Act to ensure penalties for civil and criminal offences are applied to the most appropriate legal entity associated with the use of an AI model or system.

Medical devices

The regulatory framework for medical devices, like all therapeutic goods, is largely principles-based. As such, it is well placed to adapt and expand to meet the challenges and opportunities that AI presents. The TGA accepts, however, that there may be scenarios where a more targeted response to identified risk will be required.

Three key areas that could be recommended for change include:

  1. Classification rules: the TGA acknowledges that AI-enabled devices are increasingly being used to predict clinical outcomes or provide prognostic information about a particular disease or treatment. It is recognised that existing classification rules do not adequately provide for predictive or prognostic tools, such that the default risk classification of Class I currently applies. The TGA proposes new classifications so that the classification reflects the seriousness of the disease the device is intended to address, and whether the device provides information to a clinician or directly to a patient.
  2. Essential principles: the TGA identifies essential principles 12.1 and 13 as being most likely to require amendment to address risks associated with medical devices that are, or incorporate, AI models and systems, and poses a number of questions to stakeholders as to the suitability and sufficiency of the essential principles in the context of AI. These questions include whether there should be a requirement to identify when AI is incorporated in a medical device, and whether risks associated with the use of AI could be mitigated by introducing additional labelling requirements.
  3. Software exclusions: a number of software-based products were excluded from TGA oversight in 2021 on the basis they presented a very low risk to users, were not medical devices to begin with, or were subject to existing oversight measures under other regulatory frameworks. The TGA accepts that, in light of the increasing use of AI, these exclusions may no longer be appropriate for consumer health products (eg wearable fitness monitors and fitness tracking apps), digital mental health tools, software that is not a calculator, and laboratory information management systems. The TGA proposes three potential options for change, which include:
    1. removing the exclusion for some of these products so that they are regulated by the TGA
    2. removing the exclusion completely and introducing an exemption for individual products, and
    3. changing the current conditions of exclusion for these products.

While not specifically flagged, there are a variety of other ways the TGA might seek to regulate the inclusion of AI in medical devices.

  • Transparency and accountability: The TGA could require manufacturers to provide detailed information about the development, testing, and validation of underlying software. This might include transparency regarding the data used to train the AI, the algorithms themselves, and performance metrics.
  • Post-market surveillance: The TGA could strengthen post-market surveillance requirements to monitor the performance of AI-powered devices in real-world settings. This would involve collecting and analysing a greater range and volume of data on device performance, adverse events, and unintended consequences.
  • Ethical considerations: Guidelines could be developed to address ethical considerations related to AI in healthcare, such as bias, fairness, and privacy. This could involve ensuring that AI algorithms are designed to avoid discriminatory outcomes and protect patient data.
  • International collaboration: The TGA could collaborate with international regulatory bodies to develop harmonised standards for AI-powered medical devices. This would facilitate the global adoption of innovative technologies while maintaining high safety and efficacy standards.

Stakeholder submissions

The Australian Medical Association (AMA) published its submission to the TGA on 18 October 2024. The AMA agrees that the TGA should seek to align the legislative and regulatory framework for therapeutic goods with DISR’s proposed mandatory guardrails for AI, and is broadly supportive of the TGA’s proposed approach, particularly in relation to changes to language and definitions in the TG Act and subordinate legislation.

The AMA goes further than the TGA with respect to proposed changes to the classification rules and recommends that a specific classification be developed for medical devices that are, or incorporate, AI systems and models. To this end, the AMA suggests that the TGA’s existing regulatory structure could be relied upon to support a tiered, application-based approach to governing the use of AI. The AMA submits that AI could be classified by two major risk criteria: its intended purpose and its inherent processes.

In relation to software exclusions, perhaps unsurprisingly given its role in the industry, the AMA opposes any broad exclusions for software-based products on the basis that even low-risk software can present a risk of harm. The AMA supports the scope of therapeutic goods regulation being broadened to ensure all AI applications used in clinical contexts are captured under the medical device framework.

The AMA also recommends that the essential principles should require products to clearly disclose the use of AI and state when AI is responsible for generating outputs.

Next steps

We are some way off robot doctors and implantable AI, but the future of healthcare is automated, and sensible regulation will be needed. Now that the consultation period has closed and stakeholder submissions have been received, the TGA will prepare a report to government identifying areas of the legislative and regulatory framework where changes could be considered to mitigate risks associated with AI models and systems.

Whether additional areas for reform will be identified in this consultation process, and how the TGA’s report will sit against the backdrop of DISR’s economy-wide consultation, remains to be seen.

Feature image: Dietmar Rabich / Wikimedia Commons / “Dülmen, Münsterstraße, Viktorkirmes, Riesenrad — 2023 — 9053 (kreativ)” / CC BY-SA 4.0.

[1] For more information on the proposed mandatory guardrails and principles for designating settings as ‘high-risk’, please refer to the DISR proposals paper and KWM’s insights.
