I frequently speak on AI topics and facilitate company training on responsible AI use. Using anonymous polling software, I end each session with this hypothetical scenario:
Mr. Taylor writes the following prompt for work into ChatGPT: “Marco has been diagnosed with degenerative disc disorder and is requesting an accommodation. Given the intensive physical requirements of Marco’s job, we’d be better off letting Marco go. Is that legal? When are final wages due?”
What’s the most significant problem here?

A. The conversation is not privileged.
B. Mr. Taylor violated Marco’s HIPAA rights.
C. AI cannot be trusted to determine final wages.
D. His company is not engaging in the interactive process.
Answers C and D are certainly correct. AI cannot be trusted on such matters. (Recent studies have shown hallucination rates of 58% to 88% when AI responds to legal questions.) The company indefensibly violated the Americans with Disabilities Act (ADA) by not engaging in the interactive process (good luck to the defense team). However, I think the most significant problem, while perhaps not the most egregious one, is A: the AI conversation is not privileged.
Mr. Taylor’s prompt to ChatGPT is not confidential, as it is protected by neither the attorney-client privilege nor the work-product doctrine. It is electronically stored information that must be preserved during litigation and is discoverable by opposing parties. Woe to the HR leader or business manager whose prompt is discovered and who must then defend it at a deposition.
This hypothetical is a clear example of failing to appreciate the hidden dangers associated with this technology. Additional dangers lurk as prompts become more nuanced. The attorney-client privilege is sacrosanct. But it is not bulletproof, and it is highly susceptible to user error.
Any disclosure of privileged communications to a third party compromises the privilege. That likely includes inviting AI into the attorney-client relationship. Most AI platforms, though not all, are neither secure nor confidential. At a minimum, they recycle prompts, responses, and conversations back into the platform for future AI training. This means that your conversation with ChatGPT is not only disclosed to OpenAI but is also used to create new, better versions of ChatGPT. The same applies to Gemini, Claude, Watson, and Grok, depending on the platform and subscription level.
Here are several examples of how HR and business leaders may inadvertently waive the attorney-client privilege by using AI:
- Using AI-enabled note-taking software for conversations with counsel.
- Putting draft letters or draft pleadings from counsel into AI without sufficiently scrubbing them or anonymizing them.
- Asking AI for legal advice based on a specific situation, as opposed to making a general inquiry such as, “What are an employer’s obligations under the ADA to provide an interactive process?”
- Asking AI to help draft correspondence or emails intended for legal counsel.
- Double-checking legal counsel’s advice by asking AI to comment.
- Using AI to summarize a confidential legal memo.
Each scenario is considered a voluntary disclosure that may compromise the entire subject matter of the information fed into AI, not just the isolated prompt. This is a genuine issue. The attorney-client privilege enables attorneys to provide candid advice about complicated legal issues. Sometimes that advice is grim. Litigation and public relations are difficult enough without your counsel’s candid advice being put into circulation.
At a minimum, every company must:
- Develop an AI Policy. Shockingly, only about 33% of businesses have adopted an AI policy. Succeeding with AI starts with proper business governance. Every business should adopt a policy that covers when AI will be deployed, how it will be used, and how it will be constrained and monitored, including what information cannot be fed into the platform. Waiting to develop formal governance is unwise. Your employees are already using AI and are already putting sensitive information at risk.
- Train Staff. Effective training on responsible AI use in the workplace is equally essential. Policies are often one-dimensional. Training commands attention and offers examples and insights into these nuanced questions. Issues concerning the attorney-client privilege are only the tip of the iceberg. An AI policy and associated training will address dozens more, including harassment, data security, privacy, and algorithmic discrimination.
Failing to establish clear governance around AI use exposes sensitive information to risk and can inadvertently waive one of your business’s most crucial legal protections. Take proactive steps to implement policies and training to safeguard the future of your company.
Attorney Brian Bouchard is a member of Sheehan Phinney’s Litigation and Employment Law groups. He has published several articles on AI and regularly speaks on the topic throughout New England. For more information, visit sheehan.com.