It’s Data Privacy Week 2026, and the theme is “Take Control of Your Data.”
Recognized by the National Cybersecurity Alliance (NCA), Data Privacy Week runs from January 26 to 30 this year and aims to empower individuals and organizations to respect privacy, safeguard information, and build trust in data use.
At its core, I believe data privacy is the fundamental right to control your personal information and to determine who may collect, store, and use it.
Today, significant volumes of sensitive data are stored across platforms, often without users’ full awareness. While this presents challenges, actionable steps can improve both personal and organizational data protection.
AI provides a timely example: models may learn from the inputs you provide, but many AI tools offer settings to disable training on your inputs, helping you control how your information is used. Regularly reviewing the types of information shared with AI helps mitigate risks to both individuals and organizations.
Here are five practical steps organizations can take in 2026 to protect sensitive data and reduce AI-related risks:
1. Data Governance, Classification, and AI Boundaries
As organizations rapidly adopt AI tools, one of the most overlooked risks is uncontrolled data exposure. AI systems do not inherently understand sensitivity, regulatory obligations, or confidentiality. If sensitive data enters an AI tool, whether through prompts, integrations, or background indexing, it may be logged, replicated, or retained in ways that fall outside traditional application risk models. Without defined AI boundaries through proper data governance and classification, organizations lose visibility and control over how their data is used.
Organizations should clearly define what data AI is allowed to access and what is strictly prohibited. Establishing these boundaries early prevents downstream privacy incidents that are far more difficult to contain.
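To make this concrete, here is a minimal sketch of what a default-deny AI boundary could look like in code. Everything in it is illustrative: the classification labels, the Record type, and the send_to_ai_tool placeholder are assumptions standing in for your organization's own data classification scheme and AI integrations.

    # Hypothetical sketch: enforce an AI data boundary using classification labels.
    # The labels, policy, and downstream AI call are illustrative assumptions,
    # not a specific product's API.
    from dataclasses import dataclass

    # Classification labels assumed for this sketch; substitute your own scheme.
    ALLOWED_FOR_AI = {"public", "internal"}

    @dataclass
    class Record:
        content: str
        classification: str  # assigned by your data governance process

    def send_to_ai_tool(text: str) -> None:
        # Placeholder for a real AI integration (API call, plugin, indexer, etc.).
        print(f"Sent to AI tool: {text[:40]}...")

    def submit_with_boundary(record: Record) -> bool:
        """Only forward data whose classification is explicitly approved for AI."""
        if record.classification not in ALLOWED_FOR_AI:
            # Default-deny: unknown or prohibited labels never reach the AI tool.
            print(f"Blocked: '{record.classification}' data is not approved for AI use.")
            return False
        send_to_ai_tool(record.content)
        return True

    if __name__ == "__main__":
        submit_with_boundary(Record("Q3 marketing copy draft", "internal"))      # allowed
        submit_with_boundary(Record("Customer SSNs and balances", "restricted"))  # blocked

The design choice worth copying is the default-deny posture: data carrying an unknown or prohibited label never reaches the AI tool, rather than relying on someone to flag it after the fact.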
2. Treat AI Prompts and Outputs as Sensitive Data
AI risk does not stop at data inputs. Prompts and AI-generated outputs can contain sensitive information, inferred insights, or regulated conclusions. In many cases, AI outputs are a new form of “derived” data that can unintentionally expose customer information, business strategy, or protected attributes—even when the original prompt seemed harmless.
Organizations should treat prompts and outputs with the same discipline applied to sensitive data. Clear governance over outputs reduces the risk of over-reliance, hallucinations, and privacy-impacting errors.
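As a rough illustration of that discipline, the sketch below runs both prompts and model outputs through the same redaction step before they are logged. The regex patterns and function names are simplified assumptions; production environments typically rely on dedicated DLP or PII-detection tooling rather than hand-rolled patterns.

    # Hypothetical sketch: redact common PII patterns from prompts and outputs
    # before logging. The regexes are simplified illustrations, not a complete
    # PII detector.
    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace recognizable PII with labeled placeholders."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    def log_ai_interaction(prompt: str, output: str) -> None:
        # Apply the same discipline to both sides of the exchange:
        # the prompt and the model's output are each treated as sensitive.
        print("PROMPT:", redact(prompt))
        print("OUTPUT:", redact(output))

    if __name__ == "__main__":
        log_ai_interaction(
            "Summarize the account notes for jane.doe@example.com, SSN 123-45-6789.",
            "Jane Doe (123-45-6789) has a past-due balance; contact 412-555-0123.",
        )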
3. Expand Privacy Impact Assessments to Include AI-Specific Risks
Traditional Privacy Impact Assessments (PIAs) were not designed for systems that learn, infer, and evolve over time. AI introduces unique privacy challenges, including training data reuse, model memory, cross-border data processing, and the potential to infer sensitive characteristics from non-sensitive inputs. Applying legacy PIAs without modification leaves material risk unaddressed.
Organizations should expand their assessment processes to explicitly evaluate AI-specific risks before deployment. Aligning assessments to recognized frameworks such as the NIST AI Risk Management Framework or ISO 42001 helps ensure privacy considerations are embedded into AI design rather than addressed after issues arise.
4. Third-Party Oversight for AI-Enabled Vendors
A growing portion of AI risk sits outside organizational boundaries, embedded within third-party platforms, SaaS tools, and service providers. Many vendors now include AI features by default, sometimes using customer data to train models or relying on complex chains of sub-processors. Without enhanced oversight, organizations may unknowingly expose sensitive data through trusted partners.
Vendor risk management programs should be updated to explicitly address AI usage. Contracts should clearly define AI-specific data protections and opt-out provisions. Treating AI-enabled vendors as data processors, not just technology providers, helps organizations maintain accountability and regulatory compliance.
5. Responsible AI: Employee Awareness and Education
Despite advanced technical controls, human behavior remains the most common source of AI-related privacy incidents. Employees often use AI tools with good intentions, not realizing the privacy implications of the data they share. Policies alone are not enough to change this behavior.
Organizations should provide targeted, scenario-based training that clearly explains what data is appropriate for AI use and what is not. This reinforcement helps empower employees to use these tools responsibly while protecting sensitive data.
How Can Schneider Downs Help?
Have a data privacy question or concern? Schneider Downs is uniquely experienced with the evolving data privacy landscape and supports organizations in addressing the risks associated with AI, starting with data governance.
Contact us to help you assess and enhance your privacy and security posture through a tailored assessment specific to your needs.
About Data Privacy Week
Recognized by the National Cybersecurity Alliance (NCA) since 2021, Data Privacy Week is an international effort to empower individuals and organizations to respect privacy, safeguard data, and enable trust. Check out the resources on the Data Privacy Week website to learn how to manage your personal information more effectively and to understand, from an organizational perspective, why it is important to respect user data.
About Schneider Downs IT Risk Advisory
Schneider Downs’ team of experienced risk advisory professionals focuses on collaborating with your organization to identify and effectively mitigate risks. Our goal is not only to understand the risks of potential loss to your organization, but also to drive solutions that add value and to advise on opportunities that minimize disruption to your business.
To learn more, visit our dedicated IT Risk Advisory page.