Why lawyers in Zimbabwe should be careful about AI use in legal practice

TAKUDZWA HILLARY CHIWANZA

Just a few years ago, it was not conceivable that Artificial Intelligence (AI) would command so much dominance over our lives as it does now. The ubiquity of AI use (in particular, generative AI) in almost all facets of daily life, across many professions/disciplines, has taken the world by storm. Basically, no profession has been left untouched by the gigantic influence of AI, which has massively revolutionized how we approach and interact with work. It goes without saying that this has also come with a litany of advantages and disadvantages. The desirable and the undesirable. The legal sector has not been spared in this. And recent events in Zimbabwe’s legal spaces show just how easily technology can outpace our caution and our professional judgment.


The issue of legal practitioners in Zimbabwe using AI


There was shock in Zimbabwe's legal realm when it emerged a few months ago that Professor Welshman Ncube had filed legal submissions to the Supreme Court generated with the assistance of AI - submissions in which 12 fictitious judgments were cited in heads of argument that have since been declared defective and a nullity.

At the time, Prof Ncube was quick to proffer a sincere apology to the Supreme Court, in which he acknowledged the grave error, explaining that it emanated from a graduate researcher whose AI-generated research work was not verified. (We covered the story here.) Just last week, the Supreme Court ruled that the AI-generated legal arguments submitted by Prof Ncube were invalid and must be treated as a nullity, after the opposing counsel had challenged the submissions (rightly so).

“The court has the power to erase these documents as of no consequence,” the court ruled emphatically, bringing into sharp focus the ethical risks that AI has wrought on the legal profession as legal professionals increasingly turn to AI for assistance in their work. 

This episode should be a wake-up call for lawyers, and the major takeaway is this: using AI is not inherently harmful (it greatly expands access to information, makes research easier, automates routine tasks, refines written content, and helps with case analysis, summarising concepts, and drafting outlines), but the fundamental point is that the human should always be in control. We should not lose the human.

There are certain things that AI can do. And in the same breath, there are tasks that can only be performed by human beings. In the first instance, the underlying premise we have to acknowledge is that technology in the 21st century is moving at break-neck speed - AI is here, and we cannot circumvent it. The obvious logical thing to do is to mindfully incorporate it into our work, treating it as the assistance tool it is. In the second instance, it would be remiss of us to treat it passively and uncritically; we should always remember it is a machine trained on (historically biased) datasets, which is what makes it 'hallucinate'. AI can thus not become the human; it cannot supplant essential tasks performed by humans.

At the core of legal practice lie three critical components: critical thinking, evaluative judgment, and human imagination. These are traits that can never be replicated by AI. Passively assigning these traits to AI risks putting the legal profession into grave disrepute. AI cannot independently verify what it has produced, and this is where these critical components come into play. Legal practitioners have an immutable duty to verify every authority they rely on. Yet as AI becomes more accessible, its alluring convenience can easily lull legal professionals into complacency. The fact that AI hallucinates - inventing cases or quotations that simply don't exist - demands that the human exercise critical thinking, evaluative judgment, and human imagination more than ever before.

Accuracy is not optional in the practice of law. It is the bedrock of credibility that the profession is built on. A single false authority can mislead the court, embarrass counsel, and damage public trust in the legal profession. The import of this is simple: lawyers should be very careful with how they use AI. 

AI can assist with research, but it cannot replace a lawyer's duty to verify, interpret, and apply the law responsibly. Its inability to judge and evaluate its own output sometimes leads to misleading or fabricated information, as it produces inaccurate content to 'please' the user (AI never says 'hey, I could be wrong on this'). So while the information may appear plausible, the human should always verify it with sufficient intellectual rigour. As legal professionals, we should not accept AI-generated information uncritically.

In the final analysis, it is imperative for legal professionals to strike a balance between mindfully using technology and preserving the unique human cognitive faculties that are central to the practice of law. The Welshman Ncube case points to an ineluctable realization - AI has permeated all aspects of professional life, and this necessitates the development of unequivocal guidelines on AI use in practice. Perhaps the Law Society of Zimbabwe could formulate directives on ethical boundaries, disclosure requirements, and acceptable tools. There should also be training for both senior and junior counsel on how to safely and responsibly use these technologies.

The bottom line is that AI is just an assistant; it cannot transcend that designation. It can make us faster, but not wiser. It can draft, but it cannot discern. And this is why we should always be careful when using these technologies. 
