
Why You Need a Corporate AI Policy

Published Friday, June 28, 2024

Author Brian Bouchard


When it comes to workplace policies, less is more. Bloated handbooks cannot replace civility, common sense, and careful hiring practices. Companies often have workplace policies that they do not enforce (or worse, enforce inconsistently) and that eventually become outdated. However, one policy all businesses need is an AI policy.

As the use of artificial intelligence in professional settings becomes increasingly prevalent, with approximately 56% of service employees regularly incorporating AI into their work, the need for a comprehensive AI policy is more pressing than ever. This trend is set to continue as AI is integrated into everyday programs like Adobe, Microsoft Word, and Google Search, and as companies develop industry-specific AI applications, such as tools for recruiting or algorithmic financial advising.

Despite the increasing deployment of AI at work, only 22% of companies have AI policies. This can have serious consequences. Last year, Samsung famously adopted an AI policy prohibiting external generative AI programs, like ChatGPT, after an employee uploaded proprietary code to ChatGPT, an open platform.

Legal Risks of AI at Work
Most people know that AI comes with great promise and great peril. In the workplace specifically, AI poses three principal risks, not counting the potential for mass displacement:

1. Unauthorized use of copyrighted material. The risk of using generative AI is that a business may inadvertently use work derived from copyrighted material. In December 2023, the New York Times sued OpenAI, the maker of ChatGPT, alleging billions in damages for copyright infringement.

The Times alleged that ChatGPT had been trained on its articles so that users could request the day’s news in the style of the New York Times.   

2. Disclosure of confidential/proprietary information. This is similar to the Samsung example. The risk is that employees may share confidential, proprietary, HIPAA-protected, or other private information with an AI tool, resulting in its unauthorized disclosure.

3. Algorithmic discrimination. The U.S. Equal Employment Opportunity Commission calls this the next discrimination frontier. The fear is that AI, without careful monitoring, will replicate existing workplace demographics, creating a disparate impact on historically disenfranchised groups.

Essential Policy Provisions
Having a comprehensive AI policy is critical for leveraging AI’s strengths while minimizing its risks and legal downsides. At a minimum, every business should have an AI policy that does the following:

Identifies accepted AI tools and uses. An AI policy should identify which AI tools, providers, and uses are permitted. Welcoming AI into your business need not be all-or-nothing: a company may allow employees to use ChatGPT for basic research but prohibit using AI for employee management, and it may decide that some tasks are off-limits entirely.

Identifies allowed users. Just as businesses should intentionally select the AI tools they deploy, they should also limit which employees use those tools. Higher-risk roles, such as marketing and recruiting, may have limited AI access or receive additional scrutiny.

Protects data. Every AI policy must establish categorical rules about avoiding unauthorized disclosures and protecting trade secrets, personally identifiable information, and other confidential information.

Includes an audit procedure. Perhaps the most important provision of any AI policy is its audit procedure. This provision must identify how and when the business will monitor AI use in the workplace to ensure legal risks are avoided, mitigated, and contained. This includes receiving periodic audit reports from vendors about data usage and bias. 

Reinforces human responsibility. Trusting AI, particularly generative models, to perform correctly is frighteningly easy. But the technology is not infallible. Users must know they are ultimately responsible for whatever the AI tool creates and must carefully verify all results. Saying “the algorithm did it” is never a defense.

Provides for mandatory reporting. Users must be obligated to report any known violation of the policy, including instances of discrimination and the unauthorized disclosure of confidential information.

Addresses additional training. The risks of corporate AI use are nuanced, complex, and far-reaching. Employees, even non-approved users, should receive training about the legal implications of AI use in the workplace.

States consequences for a violation. Like all good policies, an AI policy should advise employees about the consequences of violating the policy.

Have some grace here. AI is a new, burgeoning technology. It will take time for businesses to find their groove and for the corporate AI policy to capture what is and is not tolerated.

An effective AI policy starts with stakeholders, including legal, HR, information technology, finance, and business interests. The goal is to identify business opportunities for AI, price out the cost of associated AI tools, and then develop a policy that balances risk and opportunity. Even if a business decides to prohibit AI use, it should have a workplace policy making that prohibition clear. 

Attorney Brian Bouchard is a member of Sheehan Phinney’s Labor and Employment Law Group. For more information, visit sheehan.com.
