Bernard Sonnenschein
January 29, 2026

Data security and AI: What companies should clarify before the first project


Many companies are faced with a paradox: They know that AI can secure their competitiveness. At the same time, they are reluctant to get started, not out of a lack of interest, but out of uncertainty. One of the most common hurdles concerns data protection and data security. What happens to our data when we use AI tools? What are the risks? And what do we need to clarify internally before we even talk to a provider?

This article provides guidance. It explains what data security means in an AI context, how it differs from data protection, and which questions companies should answer before the first project starts.

Data security vs. data protection: What does what mean?

The terms data security and data protection are often used interchangeably, yet they describe different things. Anyone who wants to introduce AI into a company should understand both concepts and be able to clearly separate them from each other.

Data protection relates to the legal framework: Which personal data may be collected? How must they be processed? What rights do data subjects have? In Europe, the GDPR (General Data Protection Regulation) regulates these issues. Violations can be punished with fines of up to four percent of annual worldwide turnover.

Data security, on the other hand, describes the technical and organizational measures that protect data from unauthorized access, loss, or manipulation. The three classic protection goals are confidentiality, integrity and availability. It's about encryption, access controls, backup strategies, and emergency plans. Data security concerns all data in a company — not just personal data.

For AI use, this means that even if a company does not feed any personal data into an AI system, massive risks can still arise. Trade secrets, strategic plans, production data, or email communication can fall into the wrong hands or inadvertently be incorporated into public AI models.

Why data security is particularly critical when using AI

AI systems, in particular generative AI such as ChatGPT, Gemini or Claude, work differently than traditional software. They learn from data, process inputs, and may store them or use them for training purposes. This creates new threat and risk dimensions for protecting data.

83 percent of companies do not have automated security controls for AI tools. Instead, they rely on training for employees, simple warnings — or nothing at all. This means that sensitive data can continuously enter public AI systems without this being documented or controlled. This risk is particularly high when using generative AI tools.

Unlike traditional data breaches that occur at a specific point in time, the data flow with AI is gradual. Employees use AI tools in their everyday work, copy texts into them, have emails summarized or reports analyzed. As soon as this data ends up in public systems, companies can no longer track, retrieve, or delete it.

There is also regulatory momentum: In Europe, the NIS2 Directive is already in force, and the EU AI Act imposes additional requirements. Anyone who does not document and control the use of AI in a company risks compliance violations, often without realizing it. For more detail, see our article on GDPR and AI regulation.

The three biggest risks associated with a lack of data security

Uncontrolled data flow

The most obvious risk: Confidential information is leaked to the outside world. This can happen when employees upload sensitive documents to public AI tools in order to have them summarized or translated. Once entered, this data can become part of the training material — and, in the worst case, appear in answers for other users. The consequences range from data loss to unintentional data theft by third parties.

It is particularly critical when it comes to trade secrets. If product strategies, calculations or contract details flow into public systems, the competitive advantage can be lost for years. And unlike a classic hack by cyber criminals, there is no attacker here who could be identified — the damage is caused by everyday use.

Compliance violations

Companies that use AI tools without tracking them are violating key regulatory requirements. Article 30 of the GDPR requires that all processing activities be documented. Any untracked AI interaction involving personal data is a potential breach.
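To make the documentation requirement concrete, here is a minimal, purely illustrative sketch in Python of how individual AI interactions involving personal data could be logged. The field names and the `log_ai_processing` helper are assumptions, not a legal template, and a real record of processing activities needs legal review.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_processing_records.csv")  # hypothetical location of the record

def log_ai_processing(tool: str, purpose: str, data_categories: list[str],
                      legal_basis: str, responsible: str) -> None:
    """Append one entry to a simple record of AI-related processing activities.

    Only a sketch of the kind of information Article 30 GDPR expects to be
    documented; not a substitute for a reviewed record of processing activities.
    """
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "tool", "purpose",
                             "data_categories", "legal_basis", "responsible"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), tool, purpose,
                         "; ".join(data_categories), legal_basis, responsible])

# Example: an employee has a customer email summarized by an approved internal tool.
log_ai_processing(
    tool="internal LLM gateway",
    purpose="summarize customer support email",
    data_categories=["customer contact data"],
    legal_basis="legitimate interest (Art. 6(1)(f) GDPR)",
    responsible="support team lead",
)
```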

Industry-specific regulations also apply: in the healthcare sector, breaches of data protection rules can be particularly serious. Regulated industries such as the financial sector have strict requirements for the documentation and traceability of decision-making processes, even when AI is involved.

Reputational damage

Data breaches are expensive — not only because of potential fines, but above all because of loss of trust. Customers, partners, and employees expect their data to be handled securely. If it becomes known that sensitive information has been leaked to the outside world as a result of negligent use of AI, the damage to the image may exceed the financial damage.

What companies should clarify before the first AI project

Before companies enter into talks with external providers or evaluate tools, they should establish internal clarity. The following questions help to determine your own position and make well-founded decisions.

Which data is particularly worth protecting?

Not all data is equally sensitive. Classification helps to set priorities. Typical categories include personal data from customers and employees, trade secrets such as product developments or pricing strategies, financial data, and confidential communication.

For each category, it should be defined: Can this type of data be entered into AI systems at all? If so, under which conditions? Which tools are approved for this?
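A minimal sketch of what such a classification policy could look like in code; the categories, tool names, and the `may_submit` helper below are illustrative assumptions, not a recommendation.

```python
# Illustrative data-classification policy: which category may go to which tool.
POLICY = {
    "public":        {"allowed_tools": {"public_chatbot", "internal_llm"}},
    "internal":      {"allowed_tools": {"internal_llm"}},
    "personal_data": {"allowed_tools": {"internal_llm"},
                      "condition": "only with pseudonymization"},
    "trade_secret":  {"allowed_tools": set()},  # never leaves the company
}

def may_submit(category: str, tool: str) -> tuple[bool, str]:
    """Check whether data of a given category may be entered into a given tool."""
    rule = POLICY.get(category)
    if rule is None:
        return False, f"unknown category '{category}': default deny"
    if tool not in rule["allowed_tools"]:
        return False, f"'{tool}' is not approved for '{category}' data"
    return True, rule.get("condition", "allowed without further conditions")

print(may_submit("personal_data", "internal_llm"))   # (True, 'only with pseudonymization')
print(may_submit("trade_secret", "public_chatbot"))  # (False, ...)
```

Even a simple default-deny rule like this forces the organization to make its classification decisions explicit instead of leaving them to individual employees.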

Where is our data processed and stored?

The location of data processing is relevant from a legal and strategic perspective. Where does the processing take place and who has access to the data? 45 percent of German companies prefer data centers located in Germany. This is understandable: European data protection standards are considered stricter and legal enforceability is easier.

When choosing AI providers, it should be checked: Where are the servers located? Which countries is data transferred to? Do comparable data protection standards apply there? Are there guarantees regarding the use of data for training purposes? This audit is essential, especially for cloud solutions and SaaS applications.

Who is allowed to use which tools — and how is this controlled?

Without clear guidelines, employees decide for themselves which AI tools they use. This leads to shadow IT: applications that are used without the knowledge or approval of the IT department. AI often finds its way into companies through the backdoor because employees independently use chatbots or other AI services without official approval or control.

Companies should therefore define which tools are approved for which use cases. Just as important: technical controls that prevent sensitive data from entering unauthorized systems, for example through network policies or restricted access rights. If you create clear structures right from the start, you save yourself time-consuming rework later on.
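As one hedged sketch of such a technical control, the following snippet checks a request against an allowlist of approved AI endpoints and scans the text for obviously sensitive patterns before it may be sent. The host name, the patterns, and the `check_submission` function are assumptions for illustration; real DLP tooling and network policies are far more thorough.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of approved AI endpoints (assumption for illustration).
APPROVED_AI_HOSTS = {"internal-llm.example.com"}

# Deliberately coarse patterns for obviously sensitive content.
SENSITIVE_PATTERNS = {
    "an email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "an IBAN":          re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def check_submission(text: str, target_url: str) -> list[str]:
    """Return a list of policy violations; an empty list means the request may proceed."""
    violations = []
    host = urlparse(target_url).hostname or ""
    if host not in APPROVED_AI_HOSTS:
        violations.append(f"endpoint '{host}' is not an approved AI tool")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            violations.append(f"text appears to contain {label}")
    return violations

# Example: a prompt containing a customer email address, sent to an unapproved chatbot.
print(check_submission("Please summarize: contact max@example.org",
                       "https://public-chatbot.example.net/api"))
```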

You can find out how other SMEs have successfully built up AI governance on the Mittelstands Stage at the data:unplugged festival 2026 on March 26 & 27 in Münster. You can find an overview of all speakers here.

How do we document the use of AI?

Documentation is not only relevant for compliance, but also for internal management. Which AI systems are in use? For which processes? What data flows in? Who is responsible?

Most companies have not yet established an AI governance framework. This means that when asked, they cannot prove how they use AI and which data security measures apply. This is a problem for audits, certifications or in the event of a data breach.
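A lightweight register can already answer most of these questions. The following is a minimal sketch, assuming a simple dataclass-based structure; the fields and the example entry are illustrative, not a complete governance framework.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """One entry in a lightweight register of AI systems in use."""
    name: str
    use_case: str
    data_categories: list[str]
    data_location: str
    owner: str
    approved: bool

register = [
    AISystemRecord(
        name="internal LLM gateway",
        use_case="summarizing internal reports",
        data_categories=["internal documents"],
        data_location="EU data center",
        owner="IT department",
        approved=True,
    ),
]

# Export the register, e.g. for an audit, a certification, or an incident review.
print(json.dumps([asdict(r) for r in register], indent=2, ensure_ascii=False))
```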

What happens in an emergency?

No system is 100 percent secure. Companies should therefore also play through the emergency scenario: What do we do if sensitive data is released to the outside world through the use of AI? Who is informed? Which measures take effect? How do we communicate internally and externally?

A contingency plan for AI-related data breaches should be part of the overall IT security strategy. It ensures that everyone involved knows what to do in an emergency. The earlier these questions are resolved, the faster and more confidently the company can react and protect sensitive information.
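As one possible starting point (the steps, owners, and the reference to the 72-hour notification duty under Article 33 GDPR are illustrative, not legal advice), such a contingency plan can begin as a simple, ordered checklist:

```python
# Illustrative AI incident-response checklist; steps, owners, and deadlines are assumptions.
AI_INCIDENT_PLAYBOOK = [
    {"step": "Contain", "owner": "IT security",
     "action": "revoke access to the affected AI tool and block further uploads"},
    {"step": "Assess",  "owner": "data protection officer",
     "action": "identify which data categories were exposed and to which system"},
    {"step": "Report",  "owner": "data protection officer",
     "action": "check whether notification to the supervisory authority is required "
               "(Art. 33 GDPR: within 72 hours)"},
    {"step": "Inform",  "owner": "management / communications",
     "action": "notify affected customers, partners, and employees"},
    {"step": "Learn",   "owner": "IT and specialist departments",
     "action": "update guidelines, technical controls, and training"},
]

for item in AI_INCIDENT_PLAYBOOK:
    print(f"{item['step']:>8}: {item['action']} (owner: {item['owner']})")
```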

From theory to practice: The next steps

The good news: Data security when using AI is not an unsolvable problem. It requires conscious decisions and clear structures. With the right solutions and a well-thought-out approach, the risk can be significantly reduced. The first step is often the most important one: putting the issue on the agenda in the first place.

Many companies start with an internal workshop in which IT, management and specialist departments jointly analyse the current situation. Which AI tools are already in use? Which data flows where? Where are there blind spots? Specific measures and resources can then be derived from this inventory.

Anyone who approaches the topic in a structured manner gains not only security, but also the ability to act: with a clear framework for securing sensitive information, companies can start AI projects faster because the basic questions have already been clarified.

In the master classes at the data:unplugged festival, experts explain concrete security concepts for using AI — from initial risk analysis to successful implementation.

Conclusion: Data security as a prerequisite for successful use of AI

Data security is not a brake on AI projects; it is the prerequisite for these projects to be successful in the long term. Companies that ask the right questions before taking the first step avoid expensive rework, compliance problems, and reputational damage later on.

The key is balance: being open to the possibilities of AI, but at the same time dealing with the risks responsibly. Anyone who finds this balance can see AI not as a threat but as an opportunity. You can find out how to get started in practice in our guide to AI implementation in SMEs.

You can find out how other SMEs successfully combine data security and the use of AI at the data:unplugged festival 2026 on March 26 & 27 in Münster. On the SME stage and in interactive master classes, companies share their experiences — from the initial risk analysis to the established governance framework.

Data security is not an issue for lone wolves. For successful implementation, it is important to involve IT, specialist departments and management together. data:unplugged stands for practical transfer of knowledge — from which the entire team benefits. Get your ticket now!


We can’t wait to see you and your team!
March 26–27, 2026
MCC Halle Münsterland