Positively Legal: Bridging Legal Operations and Ethical Tech Deployments

Banner artwork by DannyOliva / Shutterstock.com

Last month, I spoke with ex-lawyers and legal operations professionals Michael Thompson and Deb Hook about the skills you need to move into legal operations. This month, I speak with them about one of the most talked-about topics in law: the use of technology and its intersection with ethics. As legal operations departments are key proponents of technology, we discussed implementing technology and the ethics behind it.

Have you implemented technology? If so, what skills do you need?

Thompson said, “Yes, I have. Project management skills are beneficial. More important, from my perspective, is to know what you want and to become an expert in the tool yourself. Don’t rely too much on the advice of the vendor or service provider; instead, be certain about your own requirements. Otherwise, you will not be able to understand what you get and whether it matches what you need. Be ready to fail and be able to fail (meaning you should be able to afford to fail and have a plan B).”

Hook has also implemented technology: “We have had a few technology rollouts. My philosophy is to use what you have where you can (as cybersecurity and integrations can catch you out) and to add only technology that improves the human experience for those using it, or it won’t be well adopted (and ‘those using it’ can be a range of different stakeholders; you can’t just focus on one). My approach is the design-thinking one above: trial everything (often myself) from each stakeholder’s perspective and pre-configure everything to make their lives easier; nothing is best straight out of the box. I then do a small pilot with ‘friendlies’, followed by a soft or beta launch inviting feedback (technology never lands perfectly on launch, so this manages expectations and invites feedback instead of grumbling), before refining and handing over to BAU. Overcommunicate, and try to find systems that require no training, or build things on existing systems people already understand.”

What are your thoughts on using AI in the business and the legal function? What are the challenges? 

“In the future, I think it will be indispensable to use AI,” said Thompson, “especially in the legal function — actually, the future is already here.” However, Thompson thinks lawyers can be slow to adapt because they “tend to know everything and know everything better. They are so busy with day-to-day business that they don’t have the time, or don’t see it as a priority, to work on processes. Since they do not deal with the relevant possibilities, they sometimes do not know what is possible.”

He predicts that “customers will expect that lawyers use AI to work more efficiently. But at the same time (and that’s the challenge), they will expect the outcomes of a lawyer’s work to be true (no hallucination) and accurate. It will be necessary to take the middle course between using AI to work more efficiently and making sure that the results you provide are watertight.”

Hook thinks “AI is already essential to business function, although still less so in the day-to-day legal function. It’s just that with the release of ChatGPT, more of us became aware of it, and more people have ready access to it without necessarily having the skill set or experience to use it responsibly.”

“In the immediate term,” says Hook, “I am of the strong view that legal teams need to focus on understanding how AI works and pilot a ‘sandbox’ project for their own work to understand the risks, trade-offs, and complexities that can arise from even the simplest project.”

Hook advises that legal teams must champion two streams of governance programs:

“(1) An AI governance program, run by a cross-functional team that includes more than just IT and privacy experts, with an AI register and risk-based review of AI projects being built, purchased (or added on to existing purchases), or used ‘in the shadows.’ The challenge here is that we don’t yet know where regulation will land, so we don’t want to invest too much time and effort only to have to redesign and retrain our people when that becomes clear.

“(2) An information governance program (or GPT-readiness project) to get your business data structured, so that when products like Microsoft Copilot become available, you can actually use them to scour your existing information without inadvertently giving people access to things they shouldn’t see, or to outdated or conflicting information. The challenge here is that it is a huge and unappealing project to undertake. I am speaking about this in more detail at the ACC National Conference in Canberra in October, if anyone wants to spend an hour with me getting a clearer roadmap to achieve this.”

How do you grapple with the ethics of using AI for Ops and for the business? 

Thompson says, “When I think of ‘ethics’ in terms of AI, I mostly think of ‘biases.’ Every other ‘problem’ when it comes to using AI already has a different name, from my perspective (e.g., confidentiality, data protection, IP rights, etc.). And to make sure that results provided by AI are not biased, there will always have to be a human factor involved. It will not be possible, and should not be the aim, to shirk responsibility when using AI, especially, but not only, when it comes to ‘ethics.’ Responsible use is the most important thing here, in my view, and more important than any legal regulation.”

Hook says, “We’re so lucky in that the University has some of the world’s best AI ethicists on tap. Personally, I see AI as inevitable, so it’s just about engaging transparently and minimizing social impacts like bias, unfair treatment, and data breaches. The University is guided by Australia’s AI Ethics Principles, which I think are a good start for any business.”

Which future technologies can elevate our legal deliverables? 

Thompson says, “From my perspective, it is the combination of process automation and generative AI, because with those tools you can enable legal laymen to answer certain legal questions themselves (and such solutions already exist on the market). Without too much sugar-coating: those tools will, at least from my perspective, make lawyers and paralegals redundant and replace them in certain areas, if whoever uses those tools is willing to take the risk that the answers one gets are wrong, made up, or hallucinated. But to be honest, you already face the same risks when asking a human lawyer for advice. The difference is that, with a person, you will theoretically be able to seek recourse. But since ‘it always depends’ when getting a legal opinion, it might also in this case be hard to prove that the advice you got was ‘wrong.’ So, there are not many arguments left for why AI should not replace lawyers to a certain degree. And it’s cheaper and faster.”

Hook says, “Maybe I’m old school, but just focus on getting the basics right first. You need a good solution for matter management, document management, template management, and knowledge sharing (internally and to the business). I think lawyers need to improve their Word, PowerPoint, and SharePoint skills (or the equivalent in other operating environments) so they can have greater impact and work more efficiently. A lot of that is boring but important stuff that is essential to later capitalise on promising new technologies.

“Looking to the near future, I think there is great promise in these technologies, and I am monitoring their journeys:

  • Microsoft Copilot is obviously at the top of the list, enabling you to ask it things like ‘Turn this Word advice into a PowerPoint presentation using the firm’s template deck.’ It will also effectively enable ChatGPT in your own secure environment, so you can be less concerned about sharing confidential information, and it can prepare first drafts of lots of things for you.
  • A contract lifecycle management (CLM) system that allows contract managers in the business to track the progress of their contracts without having to leave the Word/Outlook environment (that is, without logging into a separate software system to send and receive emails in order to track them). I think that is critical to the success of a CLM rollout, at least in our organisation.
  • Software solutions that review first drafts of third-party contracts and highlight key issues or non-market terms.

“Longer term, things will get interesting, and we may well see an entire industry disruption, which is why I think lawyers need to focus on broadening their skill sets to prepare for wherever that takes us.”

What role should legal operations play in the ethics of AI usage? 

Thompson says, “I think ops should be involved to answer the questions of where and how to utilize those technologies in the most efficient way. And legal should be involved in assessing the risks and suggesting where a human should still be involved. But, as always, it is vital that both (legal AND ops) work together and not against each other. The conversation should not be about the ‘if’ but about the ‘how’ to use AI, and how best to deal with questions of ethics, risks, and responsible behaviour in this regard.”

For Hook, involvement would mean someone in “a leading role, at the GC level. The legal team usually works across the whole business and is respected as independent. We also tend to be quite good at picking apart the risks and opportunities of things to help decision makers make an informed choice. These skills are crucial in bringing together the cross-functional team needed to tackle AI in line with public expectation and (soon) regulation. If you have no idea where to start, give your GC or Board a copy of The State of AI Governance in Australia from the Human Technology Institute (HTI) at the University of Technology Sydney (UTS), which should get the conversation started; it is excellent.”

As interest in technology grows, so too should considerations around ethics. It is clear that legal teams are well placed to advise their businesses and broader industries in these areas.