As your company prepares for the FY2026 Sarbanes‑Oxley (SOX) compliance cycle, now is an ideal time for Internal Audit teams to identify opportunities to improve efficiency, strengthen control precision, and enhance audit readiness.
This article, the eighth of a focused series, guides you through next steps so you can approach SOX compliance in 2026 with clarity and confidence.
Artificial intelligence (AI) is no longer a future consideration; it is already embedded in financial reporting, process automation, compliance monitoring, and a growing range of operational and decision-support functions across virtually every industry.
Why AI Governance has Become a SOX-Level Concern
As AI adoption accelerates, organizations are being pressed to answer a critical governance question that goes beyond whether to use AI: Do we have sufficient controls and oversight in place to manage the risks that AI introduces?
For publicly traded companies and those subject to regulatory compliance obligations, there is an additional layer of urgency: How does AI affect our internal controls over financial reporting (ICFR) and our Sarbanes-Oxley (SOX) compliance posture?
Understanding AI Risk Materiality in a SOX Environment
At its core, AI risk materiality refers to the degree to which an AI system's output, if inaccurate, biased, or otherwise flawed, could meaningfully impact business operations, financial reporting integrity, regulatory compliance, or stakeholder trust. Not all AI use is created equal. A tool that automates meeting summaries carries far less risk than one that influences revenue recognition, automates journal entries, or flags exceptions in financial controls. Under SOX, any technology that plays a role in the preparation, review, or approval of financial statements is subject to scrutiny. When AI is woven into those processes, organizations must assess whether existing control frameworks are designed to account for AI-specific failure modes including model error, data quality gaps, and lack of auditability.
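To make the tiering idea concrete, the contrast between a meeting-summary tool and one that automates journal entries can be sketched as a simple classification. The factors, tier names, and logic below are illustrative assumptions for this article, not a prescribed assessment methodology; a real program would weigh many more dimensions.

```python
# Illustrative sketch only: a coarse materiality tier for an AI use case.
# The inputs and thresholds are hypothetical, not an audit standard.

def assess_materiality(touches_financial_reporting: bool,
                       automates_decisions: bool,
                       human_review_required: bool) -> str:
    """Return a coarse risk tier ("high", "medium", or "low")."""
    if touches_financial_reporting and not human_review_required:
        # e.g., automated journal entries posted without human review
        return "high"
    if touches_financial_reporting or automates_decisions:
        # e.g., exception flagging that a controller still reviews
        return "medium"
    # e.g., meeting summarization with no financial reporting role
    return "low"

print(assess_materiality(True, True, False))   # automated journal entries
print(assess_materiality(False, False, True))  # meeting summaries
```

Even a rough tiering like this helps focus SOX scoping conversations on the use cases where an AI failure could actually reach the financial statements.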
Key Risk Dimensions to Evaluate in an AI Risk Materiality Assessment
When conducting an AI risk materiality assessment, organizations should evaluate AI systems against the following key risk dimensions:
- Model and Performance Risk – Is the AI producing accurate, reliable, and consistent outputs? Are there mechanisms to test and detect model drift and inaccuracies over time? For SOX-relevant processes, model errors that affect financial calculations or reporting outputs may constitute a material weakness if not identified and controlled.
- Data Governance – Is the data used to train and operate the AI model high-quality, representative, and free from harmful bias? Do you know where AI data inputs are stored and processed? Poor data governance upstream can undermine the reliability of AI-driven outputs downstream, which is a direct concern for financial reporting integrity.
- Transparency and Explainability – Can the model’s decisions and outputs be explained to auditors, regulators, and management? Under SOX, control owners must be able to demonstrate that they understand and can validate the outputs of systems used in financial reporting. “Black box” AI outputs introduce auditability challenges that traditional control frameworks were not designed to address.
- Fairness and Bias – Does the AI system produce outcomes that are equitable across customer segments, demographics, and protected classes? Biased outputs in areas like credit decisioning, hiring, or customer service can result in regulatory violations and reputational harm. Organizations should establish clear definitions of fairness for each AI use case and implement ongoing controls to detect and mitigate discriminatory patterns.
- Human Oversight – Where does a human need to remain in the loop, and are those thresholds and responsibilities clearly defined and enforced? SOX compliance relies heavily on human review and approval as a compensating control, so organizations must be intentional about where AI replaces, assists, or requires human validation.
- Third-Party and Vendor Risk – When AI tools are sourced externally, do contractual agreements provide adequate visibility, audit rights, and control expectations? You can’t outsource AI risk.
- Business Continuity and Change Management – What happens when an AI system fails, is retrained, or is replaced? Organizations must ensure that AI-dependent processes have documented fallback procedures, that model updates follow a structured change management process with appropriate testing and approval gates, and that operational resilience is not compromised by over-reliance on any single AI system or vendor. For SOX, changes to systems that support financial reporting controls require formal change management documentation and may require updated control testing.
- Cybersecurity and Model Security – Are AI systems protected against adversarial threats like data poisoning or prompt injection attacks? Do you have the right user access restrictions in place? Unauthorized manipulation of an AI model that influences financial reporting could have direct SOX implications.
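The model drift monitoring described in the first dimension above can be made operational with a distribution-comparison metric. The sketch below uses the Population Stability Index (PSI) to compare a model's baseline outputs against current production outputs; PSI is a common drift metric in model risk management, but the 0.2 alert threshold is a rule of thumb, not a regulatory requirement, and the sample data is invented for illustration.

```python
# Minimal drift-monitoring sketch for a SOX-relevant model's outputs.
# PSI compares a baseline output distribution to current outputs;
# larger values indicate a bigger shift. Thresholds are illustrative.
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against constant data

    def dist(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)  # clamp overflow
            counts[max(idx, 0)] += 1                    # clamp underflow
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]       # avoid log(0)

    b, c = dist(baseline), dist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.1 * i for i in range(100)]       # stand-in historical outputs
stable   = [0.1 * i for i in range(100)]       # same distribution
shifted  = [0.1 * i + 4.0 for i in range(100)]  # distribution has moved

print(psi(baseline, stable) < 0.1)   # True: no drift detected
print(psi(baseline, shifted) > 0.2)  # True: drift; escalate per change mgmt
```

In a SOX context, a drift alert like this would feed the change management and human-oversight controls above: the model is pulled for review, retested, and re-approved before its outputs continue to support financial reporting.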
AI Governance Frameworks That Align Well With SOX
Fortunately, several well-established frameworks exist to guide organizations through AI governance, and many map well to existing compliance obligations like SOX. Such frameworks include:
- NIST AI Risk Management Framework (AI RMF) provides a comprehensive structure for identifying, measuring, and managing AI risk across the full model lifecycle. Its “Govern, Map, Measure, Manage” structure aligns well with how SOX control frameworks are organized.
- ISO 42001, the international standard for AI management systems, offers a certifiable governance structure similar in spirit to ISO 27001 for information security, providing a repeatable, auditable approach to AI risk management.
- COBIT and ISACA’s AI governance resources provide IT-focused control objectives that complement SOX ITGC (IT General Controls) frameworks, helping organizations extend their existing control environments to cover AI-specific risks.
When applied together, these frameworks give organizations a common language and a structured approach to building AI governance programs that satisfy both internal risk appetite and external regulatory scrutiny, including the heightened expectations that come with SOX compliance.
AI Risk Materiality is an Ongoing Process, not a One-Time Exercise
Assessing AI risk materiality is not a one-time exercise. It requires ongoing monitoring, periodic re-evaluation as models evolve, and a governance culture that treats AI risk with the same rigor applied to any other financial or operational risk.
Explore the rest of the series for more actionable insights:
- Strengthen SOX Compliance: FY2025 SOX Close‑Out and Lessons Learned
- Strengthen SOX Compliance: FY2026 SOX Scope and Risk Assessment
- Strengthen SOX Compliance: External Auditor Alignment
- Strengthen SOX Compliance: Balancing a Risk-Based SOX Program with External Auditor Needs
- Strengthen SOX Compliance: SOX IT General Controls and System-Dependent Controls
- Strengthen SOX Compliance: Third-Party Service Providers and SOC Reports
- SOX Compliance: Implementing Continuous Auditing
If you have questions about refining your SOX approach or want to discuss how to strengthen your internal processes, reach out to the Schneider Downs team at [email protected].
About Schneider Downs Risk Advisory
Our team of experienced risk advisory professionals focuses on collaborating with your organization to identify and effectively mitigate risks. Our goal is not only to understand the risks related to potential loss to the organization, but also to drive solutions that add value and advise on opportunities to ensure minimal disruption to your business.
Explore our full Risk Advisory Service offerings or contact the team at [email protected].