Why is AI governance so important for organizational security and compliance?
In support of Cybersecurity Awareness Month, we are spotlighting the critical areas where organizations must elevate their cybersecurity strategies for 2025 and beyond. The focus of this article is on AI governance.
Organizations everywhere are rushing to implement some form of AI. Some are integrating AI into their service offerings. Others are allowing employees to use subscription-based tools, such as Claude and ChatGPT. A few are simply enabling employees to use built-in AI capabilities, like Microsoft's Copilot or Google's Gemini.
Regardless of the approach your organization is taking to integrate AI into its business processes, there are established standards and frameworks you can utilize to help you address governance concerns. One of the most comprehensive AI governance playbooks to date is the AI Risk Management Framework (AI RMF), released by the National Institute of Standards and Technology (NIST).
The NIST AI RMF defines four core functions: Govern, Map, Measure, and Manage. Govern addresses the policies and practices surrounding AI in an organization. Map establishes the context for AI in an organization, including its purpose, expectations, operations, and use. Measure covers the assessment and measurement of AI risks. Manage addresses the analysis of risk versus reward, aiming to answer the question, “Are the risks worth the reward with this AI deployment?”
In alignment with the NIST AI RMF, the Schneider Downs Cybersecurity team suggests three key AI Governance strategies that can be used to protect your business and stay compliant:
- Data classification and AI usage policies
- AI-specific access controls and monitoring
- AI incident response plan
1. Data Classification and AI Usage Policies
Organizations should have a data classification policy that defines the different types of data used in their day-to-day business processes. Examples of classification types include public (website content), internal (policies and training materials), confidential (employee records and financial reports), and restricted (proprietary business materials, protected health information, and payment card information).
Your organization’s data classification and AI usage policies should specifically address the use of data with AI tools. Beyond the expected “prohibited from use with AI” list, your organization should also create “allowed to use” AI tool lists with specific use cases. Consider creating different approval levels based on data sensitivity and AI tool type. AI technology is constantly evolving, so be sure to perform regular reviews of any AI policy. Data classification and AI usage policies align this strategy with the NIST AI RMF Map function (category 2).
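The approval-level idea above can be sketched as a simple policy check. This is an illustrative example only: the classification labels, tool names, and approval matrix are hypothetical placeholders, not prescribed values.

```python
# Minimal sketch: gate AI tool usage by data classification.
# The tool names, labels, and matrix below are hypothetical examples.

# Hypothetical approval matrix: which classifications each AI tool may receive.
AI_TOOL_POLICY = {
    "corporate-copilot": {"public", "internal"},  # corporate-managed tool
    "public-chatbot": {"public"},                 # personal subscription tool
}

def is_use_permitted(tool: str, classification: str) -> bool:
    """Return True only if the tool is on the allow list AND the data
    classification is approved for that specific tool."""
    allowed = AI_TOOL_POLICY.get(tool)
    if allowed is None:  # tool not on the allow list at all
        return False
    return classification in allowed

# Example checks:
print(is_use_permitted("corporate-copilot", "internal"))   # True
print(is_use_permitted("public-chatbot", "confidential"))  # False
print(is_use_permitted("unapproved-tool", "public"))       # False
```

In practice this matrix would live in policy documents and be enforced by tooling (proxies, DLP), but the lookup structure captures the idea of pairing each approved tool with the data sensitivity levels it may handle.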
2. AI-Specific Access Controls and Monitoring
Access controls and monitoring should specify which AI tools are allowable. Perhaps the organization only wants employees to use corporate-managed AI accounts rather than personal subscription accounts. AI tool usage can be monitored via logs and endpoint detection; be sure to create alerts for unauthorized AI tool access or anomalous data transfer patterns (calibrated to your business expectations and baselines). Your organization should also schedule regular AI usage audits that document who has access and why. Continuous monitoring of AI risk levels, documented allow lists, and managed account types align this strategy with the NIST AI RMF Govern (category 4), Measure, and Manage functions.
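The alerting described above can be sketched against log data. This is a minimal illustration, assuming hypothetical proxy-log fields; the allowed domain, field names, and byte threshold are placeholders to be tuned to your own baselines.

```python
# Minimal sketch: flag unauthorized AI tool access and anomalous upload
# volumes from (hypothetical) proxy log records. Field names, the allow
# list, and the threshold below are assumptions, not standards.

ALLOWED_AI_DOMAINS = {"copilot.example-corp.com"}  # corporate-managed tools only
UPLOAD_BYTES_THRESHOLD = 5_000_000                 # example baseline for this sketch

def review_log(records):
    """Return alert strings for unauthorized AI destinations or
    uploads exceeding the expected baseline."""
    alerts = []
    for rec in records:
        if rec["domain"] not in ALLOWED_AI_DOMAINS:
            alerts.append(f"unauthorized AI tool: {rec['user']} -> {rec['domain']}")
        elif rec["bytes_out"] > UPLOAD_BYTES_THRESHOLD:
            alerts.append(f"anomalous upload volume: {rec['user']} ({rec['bytes_out']} bytes)")
    return alerts

sample = [
    {"user": "alice", "domain": "copilot.example-corp.com", "bytes_out": 12_000},
    {"user": "bob",   "domain": "chat.personal-ai.example", "bytes_out": 800},
    {"user": "carol", "domain": "copilot.example-corp.com", "bytes_out": 9_000_000},
]
for alert in review_log(sample):
    print(alert)
```

A real deployment would feed this logic from a SIEM or endpoint detection platform rather than a static list, but the two checks mirror the alerts recommended above: unapproved destinations and transfers outside your baseline.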
3. AI Incident Response Plan (AI-IRP)
At a minimum, AI should be included in your incident response plan. It’s crucial to be prepared for the moments following a breach that involves AI. The AI-IRP should include response steps for data leaks through AI prompts, model poisoning attempts (insider threat or otherwise), and prompt injections (via phishing or other vectors). Make sure your cybersecurity team members are properly trained in AI tool forensics and AI log analysis, and that they are familiar with your AI vendors’ security incident cooperation capabilities and policies. Finally, create communication templates for notifying clients, stakeholders, and vendors of AI-related security incidents. Managing AI risks and regularly documenting and refining response and recovery techniques align this strategy with the NIST AI RMF Manage function (categories 3 and 4).
This article is part of a series highlighting the critical areas where organizations must elevate their cybersecurity strategies for 2025 and beyond. Additional articles include:
- 5 Things Companies Wish They Did Before a Breach
- Did You Use a Password to Login Today? You’re Set Up for Failure!
About Cybersecurity Awareness Month
Since 2004, the President of the United States and Congress have recognized October as Cybersecurity Awareness Month to raise awareness about the importance of cybersecurity across the public and private sectors and tribal communities. With a focus on securing our world, Cybersecurity Awareness Month recognizes the importance of taking daily action to reduce risks when online and connected to devices.
Related Resources
- CISA Cybersecurity Awareness Month Resource Center
- CISA Cybersecurity Awareness Month 2025 Toolkit
- Schneider Downs Cybersecurity Resource Library
About Schneider Downs Cybersecurity
The Schneider Downs cybersecurity practice consists of experts offering a comprehensive set of information technology security services, including penetration testing, intrusion prevention/detection review, ransomware security, vulnerability assessments and a robust digital forensics and incident response team. In addition, our Digital Forensics and Incident Response teams are available 24x7x365 at 1-800-993-8937 if you suspect or are experiencing a network incident of any kind.
To learn more, visit our dedicated Cybersecurity page.
Want to be in the know? Subscribe to our bi-weekly newsletter, Focus on Cybersecurity.