
23/04/2025 – Generative AI, liability and insurance: Who bears the risk?

Key points

  • Practices using generative AI, which can produce new work based on acquired learning, are likely to bear the risk of errors in the AI’s work, just as they would for the work of an employee.
  • Professional indemnity insurance does not typically exclude cover for claims arising out of work performed by generative AI, but individual policy terms may vary, and the position could change at any time.
  • The risks of breaching confidentiality obligations and intellectual property rights need to be considered before allowing AI to learn from your data or your clients’ data.

Our national webinar in February, Records You Can Rely On, generated a number of questions about the role of AI. In this article, we’d like to take the opportunity to recap some fundamental points.

Insurance cover for claims arising out of using generative AI

“Generative” AI describes artificial intelligence models that are capable of producing new content based on learning acquired from training data. ChatGPT, Co-Pilot, Llama and Grok are commonly cited examples.

A typical professional indemnity policy covers claims arising out of work within the insured profession that is stated on the policy schedule, and does not dictate which programs or systems can be used to produce that work. In many ways, generative AI is simply a tool like any other system or software used by consultants, albeit one with enhanced capabilities, and professional indemnity insurance policies do not usually mandate or prohibit the use of any specific tools.

Generally, we would expect that if a consultant uses generative AI to produce any part of its professional services, and there are alleged mistakes in those services, the consultant’s professional indemnity insurance ought to cover resulting claims just as it would if one of the consultant’s human staff had produced the allegedly defective work. However, you would need to check with your insurance broker what your individual policy provides. Policy terms can differ, insurers can add exclusions to individual policies or to their standard policy terms at each annual renewal, and AI is a new field in which capabilities and risks are being rapidly developed and re-evaluated.

Liability exposure from using generative AI

Certainly at this point in its evolution, generative AI is highly fallible. Reports have already emerged of lawyers in Australia and overseas facing disciplinary action for submitting to a court an AI-generated list of legal precedents in which some of the cases had been entirely fabricated, or “hallucinated”, by the AI. With that in mind, it’s alarming to contemplate how much mischief AI could get up to if asked to produce architectural or engineering design documents.

If a flaw in AI causes errors in your work, you will very likely bear liability to the party who contracted you to perform that work. If you sought to shift blame and liability to the AI provider, you would have to overcome a number of hurdles. The terms and conditions you accepted when purchasing or using the AI probably contain strong disclaimers and releases, including an express obligation on you to check the AI’s work, and the AI provider may be based in an overseas jurisdiction where it would be costly and/or difficult to institute any legal action against it.

For this reason, AI is not like a sub-consultant to which you could delegate risks and responsibilities by means of a sub-consultancy agreement, and which might have its own professional indemnity insurance to cover the cost of errors and claims.

Instead, as a rule of thumb, generative AI should be thought of as being akin to a junior employee, in the sense that the risk and responsibility for its work is carried by your business and is, practically, very difficult to delegate. For that reason, the AI’s work needs to be thoroughly checked.

A second liability risk is that a generative AI program that learns from uploaded data may end up incorporating aspects of that data into its library of knowledge, thereby sharing it with other users. Allowing AI to learn from data that belongs to your clients is likely to breach obligations of confidentiality you owe to those clients, except in the unlikely event that the client has given consent.

Other considerations

For the same reason, consultants who wish to protect their bespoke design solutions and intellectual property rights should be reluctant to share data with any generative AI that will learn from it and share elements of it with others. Providers of generative AI for the legal profession – for which maintaining confidentiality is a very high priority – are currently developing and marketing “closed” AI systems that are said to learn only from the law practice’s own data without sharing those learnings externally. Similar closed systems may be or become available in other industries, and we understand some providers offer technologies that create a closed system within an open AI environment.

A final consideration is that clients may impose restrictions on your ability to use AI, by including specific prohibitions in consultancy agreements or tender conditions. For example, government clients with understandable concerns about maintaining confidentiality over their information may prohibit the use of generative AI, or may require you to disclose where and how you use AI.

AI at informed by Planned Cover

How does this thinking apply to our work at informed by Planned Cover? Based on what we are hearing about AI capabilities at this point, we would expect that generative AI could conduct reviews of consultancy agreements with a fair to moderate level of accuracy. However, in order to verify the AI’s output and correct omissions, a human risk manager with legal qualifications would need to both review the contract and check the accuracy of the AI’s output. The time this would take would largely negate the benefit of using AI in the first place.

We have no imminent plans to hand off our workload to AI at this point. All our content at informed by Planned Cover, from news articles to webinars and contract reviews, is still 100% human produced, and you can meet the delightful humans in our risk management team here, invite us into your offices to present training for larger teams, or join us at one of our upcoming national webinars.

Wendy Poulton
Manager Risk Services

Disclaimer

This article is only general advice in respect of risk management. It is not tailored to your individual needs or those of your business, nor is it intended to be relied upon as legal or insurance advice. For such assistance you should approach your legal and/or insurance advisors.
