Learning the Language of AI: How LLMs Can Make Us Better Communicators

Cheat Sheet: 

  • Better prompts = better conversations. Learning to prompt GenAI effectively can improve how we communicate with each other. 
  • Apply different techniques. Techniques like “few-shot” prompting and “role prompting” enhance both AI and human understanding. 
  • Communication style shapes results. Adjust the “temperature” of your prompting style for different outputs. 
  • Utilize GenAI as a tool, not a replacement. Use it to support, not substitute for, real human dialogue and collaboration.

Artful prompting improves GenAI output. But lessons in how to “juice” the LLM apply equally to our human interactions.

Talk to your in-house colleagues and you’ll sense a general anxiety: generative AI is coming for our jobs. It chews through information and offers accurate summaries with unnerving efficiency; it never complains when asked to hunch over the most uninspiring tasks; and it continues to offer crisp insights long after the day has worn down our own ability to think. We mortals can’t compete.  

The countervailing view, of course, is that GenAI won’t replace us but will simply free up time to focus on the more innovative and strategic parts of our work. Some call this the AI dividend: time back in your day for what’s most fulfilling.  

If we want the latter to be true, perhaps there’s a hidden benefit in embracing our new AI tools. Enormous effort goes into training large language models (LLMs) to communicate like humans. But if you look closely, the LLM may be offering to train us back. “Prompt engineering” is the art and science of instructing GenAI to produce the most precise, accurate, and insightful output. Pay attention to it, and you’ll find that LLM prompting skills are just as useful for helping us engage better with each other as humans. 

“Prompt engineering”: getting along with your eager new intern 

The first time you used ChatGPT, Copilot, Claude, or any other LLM, you probably felt a bit underwhelmed. The response was too surface-level, unoriginal, and probably heavy on the corporate jargon (or, to apply some favorite interrogatory objections: vague, ambiguous, overbroad, and unduly burdensome).  

But refine your prompting and see how the results improve. The conventional wisdom is to anthropomorphize your LLM. Think of it as an eager new associate or even a friend or faithful collaborator. Talk to it as if it were an incredibly intelligent and well-versed person, but one that requires some guardrails. Be clear, provide plenty of context, set goals, and break down tasks into smaller prompts.  
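
For readers who want to see what this refinement looks like in practice, here is a minimal sketch in Python using the OpenAI SDK (other providers’ libraries differ in detail but follow the same pattern). The model name, prompt text, and NDA scenario are illustrative assumptions, not a recommended template.

    # A minimal sketch of prompt refinement: the same request asked vaguely and
    # then with context, a stated goal, and the work broken into smaller tasks.
    # Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
    # set in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompts = {
        "vague": "Summarize this NDA.",
        "refined": (
            "You are assisting an in-house legal team.\n"
            "Goal: a one-page summary of the NDA below for a non-lawyer sales lead.\n"
            "Context: we are the receiving party; this is a routine vendor evaluation.\n"
            "Task 1: list the five terms most likely to slow down signature.\n"
            "Task 2: flag anything unusual compared with a standard mutual NDA.\n"
            "Keep the language plain and avoid legal jargon.\n\n"
            "NDA text: [paste agreement here]"
        ),
    }

    for label, prompt in prompts.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} prompt ---")
        print(response.choices[0].message.content)

Run side by side, the refined prompt typically produces a far more usable answer, for the same reasons a well-briefed colleague does.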

These techniques are also dependable advice for prompting our human colleagues. Below are several common prompting approaches that can help us improve our many conversations as in-house counsel. 

Clarity is key: “few-shot” prompting 

GenAI thrives on descriptive, detailed prompts. “Few-shot” prompting is the practice of giving the LLM several examples or demonstrations (“shots”) to guide its response. This approach helps the model understand the pattern of reasoning you’re looking for. Few-shot prompts also set clear expectations to guide the model’s logic.   
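
In code, few-shot prompting amounts to packing those example “shots” into the prompt ahead of the real question. The sketch below is a minimal illustration, again assuming the OpenAI Python SDK; the clauses, risk labels, and model name are hypothetical examples, not a vetted playbook.

    # Few-shot prompting: show the model worked examples of the pattern you want,
    # then ask it to apply the same pattern to a new input.
    from openai import OpenAI

    client = OpenAI()

    few_shot_prompt = (
        "Classify each contract clause as LOW, MEDIUM, or HIGH risk and give a "
        "one-line reason.\n\n"
        "Clause: Either party may terminate for convenience on 30 days' notice.\n"
        "Answer: LOW - a standard termination right with no unusual exposure.\n\n"
        "Clause: Customer's liability is uncapped for any breach of this agreement.\n"
        "Answer: HIGH - uncapped liability is a dealbreaker under our playbook.\n\n"
        "Clause: Vendor may use subcontractors on prior written notice to Customer.\n"
        "Answer:"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": few_shot_prompt}],
    )
    print(response.choices[0].message.content)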

Few-shot prompting works just as well when helping cross-functional teams find solutions. As an illustration: say your company is beginning the search for a new information security vendor and you’d like your group to draw from past vendor reviews to create a new strategy. To begin, give your team an example of what went right in a recent review, such as:  

Last year’s cloud storage provider selection started with our identifying five dealbreakers: data location requirements, data breach notifications and obligations, audited compliance with an established security framework, controls on sub-processors, and exit terms. We screened these before doing deeper diligence.  

Then explain to your team why doing it this way worked: “By identifying our dealbreakers up front, we eliminated 60 percent of vendors early, saving weeks of review.”  

Then, follow up with a second example:  

For our payment processor selection several years ago, we created a tiered risk assessment, focusing on critical requirements (like PCI compliance); important preferences (like API documentation); and some nice-to-haves (like integration support). Vendors who met each tier moved on to deeper due diligence. This reduced our average review from five to two weeks per vendor ...  

Providing a negative illustration is also helpful:  

Five years ago, our “comprehensive first pass” approach for software vendors failed. We began with a 200-question assessment, which took vendors six to eight weeks to complete, with many abandoning it before finishing. Our group spent months reviewing responses, missing critical risks because they were overwhelmed with information. When we finally made a decision, both sides had become frustrated, causing unnecessary friction as we signed the deal.  

Finally, tie it all together: “With these cases in mind, for our new process, how might we draw from past successes and avoid repeating previous failures? What would this look like?”  

Few-shot prompting helps teams visualize success, showing what works and what doesn’t. It speeds up decisions by drawing from past experience instead of starting from scratch. 

“You are an expert in [blank]”: role prompting for problem solving    

Role prompting instructs the LLM to adopt a specific perspective, experience level, or identity when creating its response. To help creatively explore an issue, the LLM is told to respond from any number of different viewpoints: opposing counsel, a data privacy expert, a customer, or a hungry startup competitor, for instance. 
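
If you want to try role prompting programmatically, the role usually lives in the system message while the user message asks the actual question. A minimal sketch, assuming the OpenAI Python SDK; the persona, question, and model name are illustrative assumptions.

    # Role prompting: the system message assigns the perspective;
    # the user message asks the question to be answered from that perspective.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are opposing counsel reviewing our draft master services "
                    "agreement. Be skeptical and look for terms you would push back on."
                ),
            },
            {
                "role": "user",
                "content": "Which three provisions would you challenge first, and why?",
            },
        ],
    )
    print(response.choices[0].message.content)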

We naturally view challenges solely through our own experiences. Being able to step outside the blinders of our own minds helps us spot problems and generate fresh ideas, creating an opportunity to “climb into [another’s] skin and walk around in it,” as Atticus Finch would say.  

Consider how you could apply role prompting to a real-life scenario, such as implementing your organization’s new compliance program. To explore different approaches, ask your team to spend 10 minutes thinking like a front-line employee trying to follow the new procedures during the busiest part of the year. Ask: What obstacles would they face, and where would they be tempted to take shortcuts? 

Role prompting can be extended to asking the model to produce multiple solutions to a problem, often from divergent angles. In human terms, this translates to asking your stakeholders to consider alternative ways around an obstacle. For our compliance policy, this could mean having your team brainstorm different ways to roll it out, such as department-by-department, all at once with a grace period, or a risk-tiered approach starting with high-risk areas.  
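
The same extension is easy to express in a prompt: ask explicitly for several deliberately different options. A minimal sketch, assuming the OpenAI Python SDK; the rollout scenario mirrors the compliance example above and is purely illustrative.

    # Asking the model for multiple, deliberately divergent solutions in one prompt.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Propose three deliberately different ways to roll out our new compliance "
        "policy: department-by-department, all at once with a grace period, and "
        "risk-tiered starting with high-risk areas. For each, give the single "
        "biggest trade-off in one sentence."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)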

Adjusting the thermostat 

Like people, each LLM has its own “personality” and capabilities (try giving the same prompt to four different GenAI tools and see how they differ). To figure out what works best with each LLM, you’ll need to experiment. But be intentional and look for feedback. Treat each exchange as a step along a path of continuous refinement. If a response seems unsatisfactory, ask the question in a different way. Experts recommend being patient and curious with GenAI; we’re wise to approach our colleagues the same way.  

Applying such patience and curiosity is particularly valuable in cross-functional teams where people address questions from considerably different backgrounds and professional cultures.  

Prompt engineers will often talk of “adjusting the temperature” of outputs through your prompting style (strictly speaking, temperature is a model setting that controls how predictable or varied the output is, but you can approximate its effect in how you phrase a prompt). Lower temperature prompts are more focused and conservative. Higher temperature prompts are more creative and varied. To lower the temperature, use more precise, specific language (“create a step-by-step procedure for ...”). To raise it, encourage creativity and use open-ended language (“Imagine a situation in which ...” or “Think of some unconventional ways to ...”).  
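
For those using an LLM through an API rather than a chat window, temperature is also exposed as a literal parameter you can set alongside the prompt. A minimal sketch, assuming the OpenAI Python SDK; the two values shown are only illustrative of a conservative versus a creative setting.

    # Temperature is typically a number between 0 and 2: lower values make output
    # more focused and repeatable, higher values make it more varied and creative.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Suggest ways to introduce our new compliance program to the company."

    for temperature in (0.2, 1.0):  # conservative vs. creative; illustrative values
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        print(f"--- temperature={temperature} ---")
        print(response.choices[0].message.content)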

... Or just go random 

There’s also the view that you should practice engaging with GenAI without any topic in mind. Just banter with it. See where the conversation goes and what ideas are conjured.  

GenAI is powerful and accessible. But for that very reason, it can easily become another tool for cutting off human interaction. The next time you need a creative spark, don’t forget how easy it is to also pick up the phone, set up a video conference, or, if you’re so fortunate, walk down the hall for some person-to-person free-flowing discussion.  

What would Socrates do?  

Effective, proactive communication is often the key to success as in-house counsel. We communicate to advise, we communicate to frame risks, and we communicate to flush out facts and develop narratives. We also excel as facilitators, helping stakeholders reach their own reasoned decisions rather than just providing the answers.  

In the book “The Trusted Advisor,” the authors compare giving advice to being a great teacher, whose job is to hold the student’s hand through the different logical points of a topic. This is the “Socratic teaching” process, a step-by-step journey through the stages of reasoning. When we engage in good LLM prompting, we’re essentially participating in a form of Socratic dialogue, which has been employed over centuries to coax critical thinking, develop deep mutual understanding of a subject, and build trust.  

So, as we continue to grow GenAI’s involvement in our legal work, remember how we can ensure it enhances, rather than replaces, our most valuable skill: human conversation. It’s often through robust and regular dialogue with our clients, colleagues, and peers that we’ll find our richest ideas and most valuable contributions.  
