Ready or not: AI in litigation and legal practice – do you know the rules?

Jul 16, 2024

Catherine Anderson and Danijela Malesevic

The use of AI, and in particular ‘generative AI’, in litigation and other areas of legal practice is evolving rapidly. As a result, we are seeing the introduction of guidance documents, both from courts and from leading legal industry bodies, on the appropriate use of AI in this space.

In this article, we consider the recent guidance documents, and outline the key takeaways that practitioners, both in private practice and in-house, should be aware of in relation to the use of AI in litigation and legal practice more generally.

Clients are likely to expect, at a minimum, that where their legal providers are adopting AI in the provision of legal services, they are adhering to the latest guidance and regulations.

Guidance materials available

The Supreme Court of Victoria (VSC) and the Supreme Court of Queensland (QSC) both issued guidelines on the use of AI in litigation in May 2024. The VSC AI guidelines are for all litigants, while the QSC guidelines are for “non-lawyers (including self-represented litigants, McKenzie friends, lay advocates and employment advocates)”. This is in addition to an array of industry body publications dealing with the use of AI in legal practice, most notably documents from the law societies of Queensland and New South Wales.

The Victorian Law Reform Commission has also been asked to provide principles and guidelines to assess the suitability of new AI programs in Victorian tribunals, with a report due by 31 October 2025.

The materials published to date on the use of AI in litigation, as well as in practice management, set out important considerations for practitioners to be aware of and, where appropriate, take steps to address.

Key takeaways on use of AI in litigation – court guidance and recent case law

Both the VSC and the QSC direct litigants to:

  • obtain a sufficient understanding of how AI programs work and what their limitations are;
  • ensure their use of AI does not involve any possibility of compromising the confidentiality or privacy of client data; and
  • check over, revise and take responsibility for any AI-generated outputs that are used.

Specific training is likely to be necessary to ensure practitioners meet this standard when integrating AI into their work. In practice, this may mean ensuring a firm has its own policies setting out the way in which AI is to be used, as well as training programs for its staff on the specific AI tool adopted.

The QSC adds a further caution in relation to possible copyright issues involved in using AI-generated text.

The VSC’s directions go further, noting that:

  • practitioners must ensure the use of AI does not indirectly mislead other participants in litigation (including the Court) regarding work or content produced by AI;
  • accordingly, practitioners must disclose to the other party “the assistance provided by AI programs to the legal task undertaken”;
  • practitioners are subject to the usual obligations of candour and proper basis, and practitioners signing a document are responsible for any errors it contains;
  • if a document contains errors or omissions, reliance on the fact that it was prepared with the assistance of generative AI is “unlikely to be an adequate response”; and
  • practitioners must exercise “particular caution” in using generative AI to assist the preparation of affidavits, lay witness statements or expert reports. Such documents must be “sworn/affirmed or finalised in a manner that reflects the person’s own knowledge and words”, and if necessary, be compliant with the Expert Witness Code of Conduct – this would require consideration of any AI tools used to compile data in the preparation of an expert report.

The latter point is particularly relevant to experts routinely engaged in litigation, since practitioners may not have direct visibility over an expert’s internal working processes. Practitioners must be aware of any use of AI in either the compilation of expert data or the drafting of an expert report, and work with their experts to ensure any such use complies with the Expert Witness Code of Conduct.

Accordingly, these concerns may need to be addressed in the experts’ engagement letters, setting out the engaging party’s expectations regarding the acceptability of AI use, and where it is acceptable, the duty to disclose the use of AI in generating any reports under the engagement. In addition, such engagement letters may include a requirement for an accompanying attestation that the work has been verified, is the expert’s own, and complies with the Expert Witness Code of Conduct.

Two judgments that mention the use of AI have been handed down, and both appear to substantiate the VSC’s point that disclosure is crucial.

In DPP v Khan [2024] ACTSC 19 (a criminal sentencing proceeding), the offender did not disclose the use of an AI tool in documents tendered to the court. The ACT Supreme Court made its own inference that such use had likely occurred, in respect of a character reference tendered by the offender, noting “the use of language within the document is consistent with an artificial intelligence generated document”. On that basis, the court had serious reservations about the probative value of the resulting document.

In Youssef v Eckersley & Anor [2024] QSC 35, a self-represented plaintiff disclosed that his submissions were prepared with the assistance of the AI tool ChatGPT. The plaintiff vouched for the accuracy of the submissions, while noting that “the platform assisted in the organisational structure and added a flourish to his submissions”. The QSC appeared to take no issue with the use of an AI tool in this instance.

Technology Assisted Review (TAR), a specific type of AI tool already widely used in large discovery processes, has the endorsement of both the VSC and the Federal Court of Australia. MolinoCahill was one of the first firms to adopt TAR as part of discovery processes in the Federal Court, in 2015. At the time, significant justification was required for the use of that technology, which is now well accepted. The introduction of ‘generative AI’ technology may follow a similar path: while a slow and cautious approach is currently required (and expected), in time its use may become ‘business as usual’ in legal practice.

Key takeaways on use of AI in legal practice – what you should expect from your legal service providers

Looking to additional guidance from industry bodies, the Law Society Journal of New South Wales and the Queensland Law Society (QLS) generally take similar stances to the courts, highlighting the need for lawyer competence in any AI use, protection of confidentiality, and supervision of, and accountability for, all AI outputs. Their documents both highlight a series of Australian Solicitors’ Conduct Rules[1] of which practitioners should be particularly mindful when using AI:

  • rule 4 – Competence, Integrity and Honesty;
  • rule 5 – Dishonest or Disreputable Conduct;
  • rule 9 – Confidentiality;
  • rule 17 – Independence (avoidance of personal bias);
  • rule 19 – Duty to the Court; and
  • rule 37 – Supervision of Legal Services.

In line with these principles, the QLS considers that there is an obligation, wherever possible, to advise clients if an AI tool will be used on their matter. It may be sufficient to do so in the retainer agreement or letter of engagement; firms should also be prepared to outline, if asked, what controls they have in place to ensure the accuracy of AI-generated materials. In this process, firms should keep in mind that a client may have its own policy or view concerning AI use.

The Queensland Law Society’s Guidance Statement (QLS Guidance) raises particularly interesting questions for practice management and the use of AI. The QLS Guidance does not contain anything that is inherently state-specific, and accordingly it is reasonable to expect its suggestions around practice management to be adopted by legal professionals as ‘best practice’. Accordingly, clients should expect their legal services providers to consider:

  • the impact of the use of an AI tool on their billing practices, noting that where time billing is used, time for AI-assisted work cannot be adjusted upwards to reflect the time a task ‘would have’ taken if performed by a human;
  • how the licensing costs of an AI tool will be reflected in their cost structure; and
  • whether their Professional Indemnity (PI) insurance covers their planned AI work.

Whether AI licensing costs can properly be charged as disbursements is one question; whether such a pass-through would be commercially acceptable is another. If use of AI tools becomes widespread in the legal industry, their use may become ‘business as usual’ for a legal practice, akin to a subscription to a research database, and accordingly may be treated similarly from a cost perspective.

The insurance issue may turn on the type of AI tool employed by a practice. For example, a commercial product, such as those offered by LexisNexis and Thomson Reuters, may pose less of an issue in respect of PI insurance coverage compared to a bespoke AI tool developed in-house.

Additionally, the NSW Bar Association has published a document intended to assist barristers to understand their duties in relation to AI. The main points of guidance include:

  • to check, revise, and be able to independently explain and support any AI-drafted work they submit;
  • to be transparent with clients about all use of AI, including an explanation of the nature of the AI tool and acknowledgment of its limitations;
  • not to include any confidential or sensitive data in prompts to generative AI, and otherwise to take careful note of one’s obligations before using any case information; and
  • to carefully review any AI drafts for biased or discriminatory language.

Conclusion

It is increasingly possible for lawyers to integrate AI into their practice, but the implementation must be considered carefully. There must be a clear idea of the work for which the tool will be approved, and all staff must be trained on both its functions and the policies governing its use. If AI-assisted work is to be used in litigation, practitioners must carefully consider the VSC and QSC guidance in relation to such use, the required safeguards and the required disclosure. Practitioners must also have regard to the QLS Guidance when reflecting an AI tool in their cost structure.

Finally, even in a practice that has no intention of using AI, lawyers are increasingly expected to understand its basic functions and limitations, so that they can give an informed opinion on its use. This is particularly relevant in relation to the engagement of external experts in litigation: practitioners must be aware of any AI use by their experts and ensure that it fits within their obligations.

 

[1] Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (NSW); Legal Profession (Australian Solicitors’ Conduct Rules) Notice 2012 (Qld).
