

What should your corporate AI policy contain?

A lawyer offers best practices for employers

Published on September 25, 2023

Note: This post was written by a human.

Writing anything about AI is sort of like building a plane while flying it. Except imagine that the plane suddenly morphs into something that doesn’t even exist yet.  

That is, as soon as you put your virtual pen to paper, most of what you’re going to write is likely already out of date. As April Gougeon, an associate in the Privacy and Data Protection group at Toronto-based firm Fogler, Rubinoff LLP, says, “AI is chaotically evolutionary.”  

But the good news for employers, she adds, is that unlike an editorial piece, an AI policy is supposed to be regularly updated to keep it current.

We asked April for some best practices for employers when it comes to drafting a corporate AI Acceptable Use policy and have summarized her tips and insights below—food for thought when developing your AI policy.

Note: This post is intended for general information purposes only; it should not be relied upon as legal advice or step-by-step guidance for creating your policy.

Tip 1: Do your research

Every organization is different and will use AI-powered tools in different ways and to varying degrees—from not at all to all the time. And those tools will change over time, as new tools and capabilities come to market.

This means there’s no such thing as a one-size-fits-all AI Acceptable Use policy.  

However, regardless of your organization, the first step before writing your policy is to understand what AI is, what its risks are, and how you intend to mitigate those risks.

Tips from April:  

  • It’s a good idea to highlight where your organization stands on things like the use of ChatGPT, and whether open-source AI tools are embraced, banned, or permitted only in specific circumstances.
  • It’s better to be transparent and communicate expectations about how these tools are used in your workplace rather than reacting when they’re used improperly. After all, “Nobody wants to be the next meme,” she says.

Tip 2: Acknowledge the risks and benefits of using AI

“Deploying AI tools means assessing the tension between the risks and benefits of using AI,” says April. And once you’ve made that assessment, your policy should outline which levels of risk are acceptable and which are not.

  • Example – risks: When it comes to a tool like OpenAI’s ChatGPT, acknowledge that there is a risk of the program delivering inaccurate information and explain why this could be problematic. For example, the information delivered by a generative AI program like ChatGPT could be false or out of date.

    It is acceptable for employees to query a tool like ChatGPT for the latest salary trends in a given industry. It is unacceptable for them to release the generated content without first fact-checking it against a secondary source (other than ChatGPT) to ensure it is current.
  • Example – benefits: Highlight that you can use a program like ChatGPT in your workplace to create an outline of an email, which could reduce writing time and increase creativity and productivity.

Tip 3: Outline the acceptable use of AI at your organization

AI tools can be used in many ways by all teams across an organization. You should know how you intend to use AI in your workplace, why you’re using it, and what the benefits are.

Tip from April: Consider adding an appendix to your policy with detailed use cases outlining in plain language how each team in your organization can use AI-powered tools responsibly. Always avoid vague language because it leaves you open to regulatory risk.

  • Example – use case for the Sales team: Sellers can use a generative AI program like ChatGPT to help prepare for calls or meetings. In a prompt, it’s acceptable to enter general information about the prospect or client to have the system suggest relevant discussion points, potential pain points and consultative questions to ask during the conversation. It’s unacceptable to enter the name of your client in a prompt because names are confidential.

Tip 4: Treat your AI policy as an entry point to your organization’s broader AI governance

Your AI policy is only one piece of the puzzle when it comes to your “culture of AI management,” as April puts it.

That is, in addition to outlining how you can use AI in your organization, and the risks and benefits of doing so, your policy could outline the governing principles for AI and the roles different people play in enacting those principles at your company. For example, consider outlining which team members can answer questions about the policy or provide training on your company’s use of AI.

Tips from April:

  • An AI policy requires training: employees need to know how to apply the policy so they can perform their duties accordingly.
  • This training should be ongoing and mandatory, ideally occurring biannually. And it should encompass the basics, such as what constitutes private or confidential information.  

Tip 5: Keep your policy up to date and compliant with legislation

Another best practice is to include built-in, regular checks (ideally, quarterly) to ensure your policy stays as current and effective as possible.

These checks run the gamut from staying current with relevant legislation to monitoring the terms of service of all tools you currently use to see if they’ve changed.

AI legislation in Canada

At the time of writing, there is no regulatory framework in Canada specific to AI, and no approach to ensure that AI systems address systemic risks such as bias or discrimination during their design and development. However, the Artificial Intelligence and Data Act (AIDA), which was tabled in June 2022 as part of Bill C-27, has gone through two readings so far.

If passed, AIDA would:

  • Set the foundation for the responsible design, development and deployment of "high-impact" AI systems in Canada, that is, systems that could result in bias or discrimination. For example, some AI-powered recruitment tools have been known to discriminate against candidates from certain backgrounds.
  • (Important for employers) Hold businesses accountable for how they develop and use these technologies. That is, employers would need to ensure that any third-party AI tools they acquire and use in their own organization are safe, free from bias, and cause no harm. To do so, employers would need to establish measures to identify, assess and mitigate the risk of harm or biased output from the use of these "high-impact" systems.

While AIDA likely won't become law until at least 2025, in the interim, the government has issued a voluntary generative AI Code of Practice aimed at ensuring "developers, deployers, and operators of generative AI systems are able to avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada's forthcoming regulatory regime."

Tips from April:

  • Since the AI tools your organization uses can change (for example, terms of service can be unilaterally updated, which means what was once deemed an acceptable risk may need to be re-evaluated), assign a group of people responsible for scanning the tools you use and any regulations that affect your jurisdiction and industry. Then, schedule regular meetings to discuss any pertinent updates and how they could affect the policy.
  • Stay up to date with any regulatory frameworks or legislation in your jurisdiction and industry that could have an impact on how you use AI in your workplace. And consider monitoring legislation in other jurisdictions, like Europe and the US, to get a sense of what could be coming to Canada. Examples include the US National Institute of Standards and Technology’s AI Risk Management Framework and the EU AI Act.

Tip 6: Be transparent with your clients and customers

Be prepared to explain to clients and customers how you use AI at your organization. Consider drafting a public-facing statement like ours, highlighting which tools you use, and how you use them in an ethical, responsible way.  

“I think people would appreciate knowing that a company is using AI in a trustworthy and responsible manner,” says April. However, the key with a public statement is to keep it current, just like your policy, updating it whenever your organization adopts new AI tools or practices, or as new legislation comes into effect that applies to your industry.  

  • Tip from April: To avoid risks when drafting a public statement, refrain from using absolutes. For example, rather than saying you will only ever use a particular tool, say that you will evaluate tools as they become available and that, for now, you use tools X, Y, Z.

High-level AI policy checklist

✓ Do your research to understand what AI is and how it applies to your business.

✓ Create an AI taskforce or working group.

✓ Know your AI tools—which ones you can use, what they are and how you will use them.

✓ Develop an AI risk assessment for your organization.

✓ Develop a transparent, clear and concise policy, outlining the acceptable use of AI at your organization.

✓ Develop use cases for teams across your organization.

✓ Develop specific training that empowers people to know how to use AI in the workplace.

✓ Monitor the tools and legislation and build in periodic reviews of your AI policy.

Learn more about April Gougeon

April Gougeon is an associate in the Privacy and Data Protection group at Toronto-based firm Fogler, Rubinoff LLP, where she focuses her practice on assisting clients with preparing and implementing appropriate privacy policies and privacy management programs, conducting privacy audits, and responding to privacy complaints and investigations.

Prior to joining Foglers, April was a Senior Advisor and Senior Strategic Policy and Research Analyst with the Office of the Privacy Commissioner of Canada (OPC), where she advised on the Personal Information Protection and Electronic Documents Act (PIPEDA) and other international and domestic privacy-related policy and legislation.
