It's happening more and more: employees using AI chatbots to quickly draft an e-mail or write a summary. AI chatbots can be a great tool for working more efficiently, but there are serious risks involved. The Dutch Data Protection Authority (AP) has recently received several reports of data breaches in which employees shared medical data or addresses with a chatbot that uses artificial intelligence (AI). In this article, we explain exactly how this happens and how you can prevent a data breach.
What is an AI chatbot?
An AI chatbot is a computer program that communicates with humans in written language. It is powered by artificial intelligence, so no human assistance is needed. Its answers are based on the data it was trained on and, in some cases, on questions asked earlier.

How a data breach arises from the use of AI chatbots
Imagine sitting in a crowded café where everyone around you can listen in and use the information you share. In that situation, you are careful about what you say. You should treat AI chatbots the same way: data you enter into an AI chatbot, such as ChatGPT, can be stored and potentially accessed by others, and can therefore cause a data breach.
A data breach occurs when personal data is accessed without authorization or unintentionally exposed. The AP sees many people in the workplace using digital assistants such as ChatGPT. But most of the companies behind these chatbots store all the data that is entered. As a result, that data ends up on the servers of these tech companies, often without the person who entered it realizing this and without knowing exactly what the company will do with the data. Employees often use chatbots on their own initiative and contrary to agreements with their employer: if personal data is entered in the process, that is a data breach. Sometimes the use of AI chatbots is part of an organization's policy: then it is not a data breach, but pay close attention to which data you are legally permitted to share with chatbots. Organizations should guard against both situations.
When can you use AI chatbots?
AI can provide efficient and creative solutions at work. If you still want to use it, choose a variant that is "done" learning and no longer uses input data to get smarter, or one that is properly secured. ChatGPT, while compliant with basic rules such as the GDPR, can use entered data to train itself further. It therefore stores entered data, which poses a risk to your organization.
Microsoft 365 Copilot provides a secure environment in which your data stays within the organization. When a user logs into Copilot, the data is encrypted, so no one other than that user can see it. After the session, all prompts and responses are immediately deleted. Copilot works with the Microsoft tools you use every day, such as Word, Excel and Teams, and it makes smart use of the data within your organization. Still, we always recommend replacing business-sensitive information with dummy text and restoring the original data after the chatbot has done its job. This way, the chatbot can still generate the correct output, but you avoid the risk of a data breach.
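To make that dummy-text approach concrete, below is a minimal Python sketch. It assumes a hypothetical ask_chatbot function standing in for whatever chatbot your organization uses; the function names, regular expressions and placeholder format are our own illustration, not part of any real chatbot API. Sensitive values are swapped for placeholders before the prompt leaves your organization and swapped back into the answer afterwards.

```python
# A minimal sketch of the "dummy text" approach described above.
# All names here (mask_sensitive, restore_sensitive, ask_chatbot)
# are our own illustration, not part of any chatbot's API.
import re

def mask_sensitive(text: str) -> tuple[str, dict[str, str]]:
    """Replace e-mail addresses and phone numbers with numbered
    placeholders; return the masked text and a lookup table."""
    patterns = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "PHONE": r"\+?\d[\d \-]{8,}\d",
    }
    mapping: dict[str, str] = {}
    for label, pattern in patterns.items():
        for i, match in enumerate(re.findall(pattern, text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def restore_sensitive(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the chatbot's output."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

def ask_chatbot(prompt: str) -> str:
    """Stand-in for a real chatbot call; it simply echoes the prompt."""
    return f"Draft reply to: {prompt}"

original = "Please e-mail jan.jansen@example.nl or call +31 6 12345678."
masked, mapping = mask_sensitive(original)
answer = ask_chatbot(masked)   # only placeholders leave the organization
print(restore_sensitive(answer, mapping))
```

In practice you would extend the patterns to names, addresses or medical terms, or use a dedicated pseudonymization tool, but the principle stays the same: the chatbot only ever sees the placeholders.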
Reducing risks when using AI chatbots
Using AI chatbots is attractive and can genuinely make your employees more efficient. It is not always necessary to ban the use of AI completely, but do try to reduce the risks in several ways:
- Make agreements
It is important to make clear agreements with your employees about the use of AI chatbots in their work. Decide whether employees are allowed to use AI chatbots. If they are, provide clear guidelines on which tasks they may use the tools for and which data they may and may not enter.
- Create awareness
It is also important that your employees are aware of the risks of using AI chatbots. Therefore, provide awareness training for your staff on the use of AI. That way, you teach employees how to use AI tools safely and ethically. Interested in awareness training? Nestor Security offers it.
So what if things do go wrong?
Has an employee leaked personal data by using an AI chatbot contrary to the agreements made? In many cases, you are then required to report this to the AP and to the people affected. Would you like advice on the responsible use of AI chatbots in your organization? Our consultants are ready to help you.

This article was written by Wendy Sikkema. Do you need help or have any questions? Please feel free to contact her without obligation.