The recent release of Snapchat’s new in-app AI friend, MyAI, shows just how advanced artificial intelligence (AI) and chatbots have become. However, with these incredible features comes uncertainty about how users’ data is collected and how well their privacy is protected.
As internet usage has increased over the years, so has the importance of data privacy. Data privacy can be defined as an individual’s ability to govern how his or her personal information is collected, disclosed, and communicated to others. Examples of personal information include a user’s name, location, contact information, and online or real-world behavior.
Oftentimes, websites and other applications need to collect and store personal data about users in order to provide services. However, these applications often exceed users’ expectations for data collection and usage, leaving users with far less privacy than they realized. Additionally, user data that is not adequately secured by those platforms opens the possibility of a data breach – ultimately compromising the user’s data without his or her knowledge.
In many jurisdictions, privacy is considered a fundamental human right, and data protection laws exist to guard that right. Governments around the world and in certain U.S. states have begun passing laws and regulations governing what kinds of data can be collected about users and how that data must be stored. However, with the introduction of extremely advanced chatbots like ChatGPT and Snapchat’s new MyAI, which operate with little regulation over data collection, users may find themselves leery of these tools and unaware of exactly what they’re revealing to their “online friends.”
In mid-April 2023, Snapchat released MyAI to all of Snapchat’s 750 million monthly users for free, after originally making it available in February to the app’s more than three million paid subscribers. The OpenAI-powered bot was created to improve the in-app experience and help users navigate the app through recommendations and chat. However, MyAI’s raw computing capabilities allow it to do much more than just that.
MyAI offers personal recommendations for everyday problems, answers both interpretive and factual questions, helps users make plans, writes articles on requested topics, and does almost everything in between, in just seconds. The possible applications for MyAI are endless. However, this comes with a drawback: users share information with the AI, often without knowing what personal data the chatbot stores or how that data is secured.
As stated on Snapchat’s Support forum: “Content shared with MyAI, including your location if you’ve shared that with Snapchat, will be used by MyAI to provide relevant and useful responses to your requests, including nearby place recommendations. Your data may also be used by Snap to improve Snap’s products and personalize your experience, including ads.”
Like similar statements from most tech companies, this disclaimer is intentionally vague, designed to make the user hastily disregard it and click “agree.” Slightly more detail can be found in the Snapchat Privacy Policy, where the company outlines what types of data are collected and how they are stored. Collected data includes usage information, content information, device information, the device phonebook, camera, photos and audio, information collected by cookies and other technologies, and log information.
Though none of the data collection described in the privacy policy is especially unusual, given the rapid advancement of AI, users should question what lengths these companies will go to in pursuit of maximum user experience. Users should also consider what types of information will be stored about them in the future if MyAI begins to alter its personality based on each user’s personalized interests.
As for the storage of the data Snapchat collects, there are two main categories of security issues in data protection: security threats and system vulnerabilities. A security threat is any risk of possible danger to a computer system caused by intentional acts meant to harm the system or company, achieve financial gain, or otherwise obtain tangible or intangible value. A system vulnerability, by contrast, is a weakness in the system itself that an attacker can exploit. Many chatbots, including Snapchat’s MyAI, store customer data with cloud computing services, which are generally well-equipped to manage both threats and vulnerabilities.
That may leave users believing their data is impenetrable. However, in March 2023, the same OpenAI that powers Snapchat’s MyAI suffered a security breach that exposed payment-related information for 1.2% of ChatGPT Plus subscribers who were active during a specific nine-hour window. The bug exposed affected users’ first and last names, email addresses, payment addresses, the last four digits of their credit card numbers, and their credit card expiration dates. Breaches like this serve as a reminder that even with the most robust security measures in place, data breaches remain a risk.
With the continuous possibility of data breaches and security vulnerabilities, companies like Snapchat can only implement best practices when it comes to data management and data storage. Some of these include:
- Implementing least privilege in identity and access management (illustrated in the sketch after this list)
- Providing up-to-date information security awareness training and role-specific training for all employees
- Protecting communications through end-to-end encryption
- Providing vulnerability and patch management
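To make the first of these practices more concrete, the sketch below shows what a default-deny, least-privilege permission check can look like in application code: each role is granted only the permissions its job requires, and anything not explicitly granted is refused. This is a minimal, hypothetical illustration, not Snapchat’s implementation; the role and permission names are invented for the example.

```python
# A minimal sketch of default-deny, least-privilege access control.
# The role names and permissions below are hypothetical, for illustration only.

ROLE_PERMISSIONS = {
    # Each role receives only the permissions its job requires.
    "support_agent": {"read_ticket", "update_ticket"},
    "data_analyst": {"read_aggregated_metrics"},
    "ml_engineer": {"read_anonymized_chat_logs", "train_model"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default deny: access is granted only if the permission was
    explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# An analyst can read aggregate metrics, but not anonymized chat logs.
assert is_allowed("data_analyst", "read_aggregated_metrics")
assert not is_allowed("data_analyst", "read_anonymized_chat_logs")
assert not is_allowed("unknown_role", "read_ticket")
```

The design choice that matters here is the default-deny posture: a missing entry means no access, so new data stores stay protected until someone deliberately grants a permission.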
Even with these practices in place, however, the risk of a data breach never fully disappears. Users must assume some of the responsibility and avoid sharing personal information with chatbots that could be harmful if inappropriately accessed. Some best practices for users of these chatbots include:
- Keeping personally identifiable information (PII) private.
- Do not share anything with MyAI that would expose you if it were leaked. Credit card information, financial account information, Social Security numbers, health records, and other private documents should only be shared with well-encrypted and trusted resources (one way to screen messages is sketched after this list).
- Avoiding discussion of private company information.
- Many users have adopted the chatbot Slack GPT as a note-taking tool for business meetings. Users should be leery of these sorts of AI tools and conscious of the types of company information they share with AI.
- Abiding by the community guidelines for AI usage.
- Understand that these AI tools are not perfect and that loopholes can be found in the safeguards meant to keep them unbiased and appropriate. The best practice for all users is to abide by the company’s acceptable use guidelines, which are readily available with a quick search.
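For users who want a programmatic guardrail around the first of these practices, here is a minimal sketch, in Python, of screening a message for a few common PII patterns before it ever reaches a chatbot. The regular expressions and the send_to_chatbot stub are hypothetical and deliberately simplified; treat this as an illustration of the idea, not a complete filter.

```python
import re

# Hypothetical, simplified patterns for a few common PII formats.
# Real PII detection is far more involved than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(message: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[REDACTED {label.upper()}]", message)
    return message

def send_to_chatbot(message: str) -> None:
    # Stand-in for a real chatbot call; here we just print what
    # would actually leave the device after redaction.
    print(redact_pii(message))

send_to_chatbot("My email is jane.doe@example.com and my SSN is 123-45-6789.")
# Prints: My email is [REDACTED EMAIL] and my SSN is [REDACTED SSN].
```

Even a rough screen like this reinforces the underlying habit: assume anything typed into a chatbot could be stored, and strip out what you would not want leaked.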
It is equally important to understand that, because of the rapid expansion of AI and AI-related services, many of these processes are new and far from perfect. At this stage of development, the best you can do is be judicious about what information you provide to chatbots and follow the best practices on the user side of the conversation.
If you would like more information about AI and AI chatbots, contact a member of Schneider Downs’ IT Risk Advisory Services team at [email protected].
About Schneider Downs IT Risk Advisory
Our IT Risk Advisory practice helps ensure that your organization is risk-focused, promotes sound IT controls, drives the timely resolution of audit deficiencies, and keeps the board of directors informed of the effectiveness of risk management practices. We will partner with you to provide comprehensive IT audits and compliance reviews that ensure your organization has effective and efficient technology controls that better align the technology function with your business and risk strategies.
To learn more, visit our IT Risk Advisory page or contact us at [email protected].