AI is a critical ally for in-house counsel

What important truth do very few people agree with you on? In his celebrated book Zero to One, Peter Thiel opens with this provocative question. My important truth is this: if you aren't already using AI, you are already obsolete in the legal profession.


Technology, and in particular AI, has already disrupted the future of law, redefining the skills lawyers need to stay relevant. There is no avoiding the fact that as technological development progresses, our reality shifts further away from the everyday practice of law. It is imperative that the legal system adapts to this unprecedented rate of change.

Entirely new professions have burgeoned thanks to technology, from computer engineering to data science and social media. The legal profession, on the other hand, has made comparatively little structural change during this time. The recent proliferation of legal ops is the obvious exception, but it is still generally seen as a separate discipline instead of a fundamental skillset that every lawyer should adopt.

Dare I say that, for some of us, it may have taken the hype around ChatGPT to get us thinking about the true impact of AI on the law.

Technology has already changed the legal profession (a little bit): The Golden Age of legal workflow

As in-house lawyers, we shouldn’t be surprised: we have witnessed the intrinsic value of legal workflow systems over the last decade. Previously, you didn't need any technological expertise to be an effective in-house lawyer; now there is simply no choice but to have some semblance of technological savvy. If you want to make any efficiency gains or implement structural risk mitigants using technology, you're forced to upskill or hire a lawyer with the necessary skillset.

The skillset I'm referring to not only involves familiarity with tech but encompasses a wide variety of other areas. For instance, competency in change management is essential for a lawyer implementing new technology, as they must lead their organization through accepting and adopting a new way of working. This is not an easy task when other functions are used to lobbing requests over via email and essentially having the legal team do their admin for them.

Business analysis and data analytics are also crucial to a technologically savvy lawyer's skillset. As we gain access to more data about our legal services, we will be expected to report effectively on our delivery and demonstrate our value. The fact that legal reporting is a trending topic heightens this demand, as there is greater external visibility of, and higher expectations around, reporting capabilities.

Very recently, ChatGPT gave us an AI companion to use at our discretion in our straightforward, business-as-usual work. For example, within seconds it can generate a starting point for simple templates or legal training. It can help with researching and summarizing countless sources in an instant. While by no means a finished product, its output is certainly a starting point for review. ChatGPT has also introduced an entirely new skillset that lawyers must adopt: understanding how the system works and which prompts yield the most effective results. Prompt engineering will only become more important as systems grow more complex and pervasive.

Just like that, the skills required for in-house legal professionals to be successful have changed right in front of our eyes, without giving us much opportunity to intervene.

The few who saw the truth before the many

Let's look at some of the insightful few who advocated for legal solutions and ideas that may have seemed far-fetched at the time but have since proved otherwise. Richard Susskind has rightly claimed for decades that AI would transform the law, and he is a veritable leader in this space. Since the 1990s, he has been talking about how technology will more effectively and efficiently distribute human knowledge and expertise, and delegate lower-value tasks.

David Sherman, a Canadian tax law expert, began to combine computer science practices with tax law services as far back as the 1980s. He modelled a computer program intended to generate documentation for a tax-based transaction and advise a lawyer on what decisions should be made and what the tax effects would be. In 1989, Sherman was designing systems that would still be considered innovative today.

From as early as 2010, Gillian Hadfield has argued that our legal infrastructure (i.e., the legal resources available to individuals, organizations, and regulators to help govern relationships) is insufficient and needs to change to account for the flattening of the globe through globalization and the proliferation of information technology. She has continued to write on this topic with an increasing focus on AI.

And these are only a few of the many examples.

Inertia or inaccuracy?

Many legal commentators now agree that AI will have a revolutionary impact on the law. So why have so many opportunities been missed? My own observation is that lawyers tend to be insular in their focus, given the narrowness and complexity of our subject matter coupled with high workloads, which can be all-consuming. This has led to a professional inertia that seemingly rational critiques have served to obscure.

A standard critique of AI in the legal context is that it lacks accuracy. As lawyers, we have a particular sensitivity to accuracy and apportionment of liability for AI outputs. These are issues that regulators worldwide are starting to consider. For example, in September 2022, the European Commission published a proposal for a directive on adapting non-contractual civil liability rules to AI.

Regarding the recent darling of AI, ChatGPT, Sam Altman referred to it as a preview of progress, with much work to do on robustness and truthfulness. But equally, he isn't reserved about AI's potential impact, saying that he thinks it's going to be "the greatest force for economic empowerment". It's unequivocal that more ambitious AI tech still has work to do. But, we have many examples of the robustness and accuracy of more tried and tested AI that's been widely adopted.

The evidence from established technologies suggests that inaccuracy concerns may be a red herring. For example, Lex Machina is a legal tool developed at Stanford University and bought by LexisNexis in 2015 that is said to predict the outcome of US patent disputes more accurately than lawyers. Lex Machina uses AI to mine millions of litigation documents to discover insights about judges, lawyers, relevant parties and case subject matter.

Companies such as LawGeex, the enterprise brand of Superlegal, use AI to review contracts more accurately and efficiently than humans. The AI is taught to recognize and understand legalese, while machine- and deep-learning technology is trained on tens of thousands of contracts to detect various contractual issues. A landmark study conducted by LawGeex and published in 2018 pitted 20 highly experienced lawyers against AI in reviewing five NDAs for problems. The results are startling: the lawyers took an average of 92 minutes to finish reviewing the five contracts, while the AI took 26 seconds. The lawyers had an average accuracy rate of 85 percent across the contracts; the AI, 94 percent. These results clearly show the merits of AI for routine contract review.

As lawyers, we baulk at liability limits within contracts for software incorporating AI that performs critical legal or commercial services such as contract review and dispute management. Yet is this so different from accepting legal advice from external counsel who cap their liability? The difference is that there is a professional consensus that the latter scenario is acceptable because we trust our external providers. However, as humans, they are just as likely to make mistakes as we are. There is no comparable consensus that we trust AI. Yet AI can also enable significant efficiencies of output, allowing us to do work that might otherwise not be attempted. I urge lawyers not to avoid technology without broadly analyzing its risks and rewards.


With opportunity comes room for error

For the legal profession, the nexus of law and technology presents boundless possibilities. Our work will become more efficient and streamlined with the aid of AI. Yet, while it is a no-brainer that we must embrace technology, we must do so cognisant of new legal developments, pre-empting them for ourselves and our clients.

There is a burgeoning area of law around AI use and regulation; or, more accurately, a new application of existing areas of law, including intellectual property, ethics and privacy. Privacy is particularly topical, as AI raises concerns over whether modern privacy laws (which many countries have only just spent years developing) accommodate more progressive AI with sufficient certainty in their current form. The recent suspension of the ChatGPT service in Italy in response to the privacy watchdog's concerns is a powerful example, although the service's subsequent reintroduction also foreshadows a regulatory and corporate willingness to work through these issues.

The issues spill into criminal law too. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an example of legal tech that has already raised potential ethical red flags. COMPAS is an AI tool used by some criminal judges in the United States to determine the recidivism risk of defendants or convicted people when making decisions regarding sentencing. Of particular concern is the inherent bias in such AI programs, which are predominantly trained on historical data, especially when they are used to inform decisions which can drastically change the course of someone’s life.


Where to from here?

Technology will completely change the way we practice law. Whether we like it or not, it is forcing a reset of the legal profession. The extent of the reset is up to us. If you aren't already using AI, your skillset is obsolete.

The obsolescence will compound as technology progresses. Routine, simple tasks will increasingly be trusted to technology, while human-centered, strategic legal tasks will become a focus for legal professionals. The corollary is that fewer lawyers will be required in the short to medium term for routine, simple tasks. However, we will also see new opportunities for those who evolve their skillset, such as in the legal operations and data analytics space as well as emerging areas of law and policy.

It's impossible for us to all agree on precisely how or when AI will evolve. We don't need to. But I urge each of us to seize the opportunity to adopt technology as a critical ally in our day-to-day work lives and to take a role in not only pre-empting but also shaping a workable and valuable legal regime in the future.
