Employee Use of AI: Why You Need a Policy and What to Include

Mar 11, 2026

Employees naturally gravitate towards tools that enable faster, more effective work. And generally, increased efficiency is good for business. But unauthorized use of generative AI (genAI) can expose your organization to significant risk — primarily the leakage of confidential and proprietary information.

The Risk of Ignoring the AI Elephant in the Room

Since ChatGPT launched in November 2022, low-cost and free genAI tools have flooded the market. Without policies governing these tools, employees may use them for a range of tasks from drafting a simple email to reviewing contracts and creating bids.

Your board or insurers may already be asking about your AI use and risk policies. You also might have certain regulatory requirements. For example, a 2025 California law requires health facilities that use genAI in patient communications to disclose such use, unless the communication is reviewed by a licensed provider.

Silence on the issue is not a strategy. But neither is a heavy-handed ban on all things AI (which would be doomed to fail, anyway). The middle ground here — putting in place a policy on the proper use of AI in the workplace — is rooted in effective governance and risk awareness. 

What Are Key Elements of an Employee AI Use Policy?

Effective AI policies are practical, not alarmist, and they acknowledge AI’s role as a productivity tool while emphasizing risk management. Partnering with HR and legal ensures your policy fits organizational needs. Consider the following field-tested structure, suited to most small and midsized organizations.

Purpose and Scope

Clarify the types of AI tools that are in scope, as well as who the policy applies to. In addition to all employees, contractors, and temporary staff, organizations with boards of directors and volunteers should make clear that the policy also applies to them.

Approved and Prohibited Use

State what’s allowed and what’s prohibited. For example, explicitly ban employees from using genAI tools to:

  • Upload confidential data.
  • Make legal or financial decisions.
  • Commit the company contractually.
  • Replace professional judgment.

Uses that are typically allowed (with conditions) include using genAI to:

  • Draft generic emails or summaries.
  • Brainstorm ideas or outlines.
  • Learn about concepts or best practices, as long as the content is not based on confidential or proprietary information.

Confidential and Proprietary Information Protection

This provision is crucial for companies subject to privacy and security regulations (e.g., HIPAA), as well as those that have confidentiality or trade secret agreements.

Specify which information is off-limits, such as contracts, bid documents, customer data, financial statements, internal reports, and proprietary methods. State clearly that company information is confidential regardless of format, and that copying, pasting, dictating, or uploading it into AI tools all violate the policy.

Make sure employees understand how AI tools use the information they input. For example, public AI platforms may retain and use input data to train their algorithms; as a result, confidential and proprietary information could be exposed to and accessed by other users.

Data Ownership and Intellectual Property (IP) Considerations

Employees may not realize that AI-generated output isn’t exclusively theirs. IP ownership of AI-generated content is often unclear, and incorporating it into external deliverables can lead to disputes. Address who owns work produced with AI assistance and whether AI output may appear in client-facing materials.

Accuracy, Bias, and Human Accountability

Require human review of all deliverables that use AI output. Given AI's tendency to “hallucinate” (i.e., to generate confident falsehoods), make clear that employees are fully accountable for accuracy, compliance, and safety. This is critical in safety-driven sectors like construction and healthcare.

Security and IT Oversight

Align the AI policy with existing IT governance. For example:

  • All browser extensions and plugins require approval.
  • Only approved tools may be integrated with company systems.
  • The IT or information security department may monitor usage patterns.

Training and Awareness

Integrating training and awareness into policy sends the message that you prioritize education over punishment. Include concrete examples and scenarios, and mandate ongoing training.

Violations and Enforcement

Keep your disciplinary approach measured. Focus on deliberate misuse rather than experimentation. Escalate only when there is negligence or intentional abuse, and reference existing disciplinary processes where possible.

Acknowledgment and Review Cycle

Use a defined timeline and process for updating and communicating changes. Set a recurring annual review and require written acknowledgment of the policy from all relevant individuals during each review cycle.

Tackle AI Risk Head-On

AI risk demands confident leadership — not avoidance or panic. A written policy that encourages responsible AI use establishes sturdy guardrails to protect your organization. Contact your CRI advisor if you’d like to discuss governance policies to keep your organization moving in the right direction.
