Bernard Sonnenschein
24.11.2025

GDPR & AI Regulation: Using ChatGPT in a legally secure manner

Illustration of a floating laptop with safety shield on the screen.

Whether it's ChatGPT for customer inquiries, Microsoft Copilot for meeting minutes, or other AI tools in medium-sized companies: anyone using AI operates within a complex legal framework of GDPR and EU AI Act and must understand both.

The good news first: Using AI in a legally secure manner is possible. We provide an overview of the most important regulations and concrete implementation steps.

Why you need to deal with AI law

Since February 2, 2025, the first provisions of the EU AI Regulation (EU AI Act) have been in force. Further requirements will follow gradually until 2027. At the same time, the General Data Protection Regulation (GDPR) remains fully applicable. The two sets of rules intertwine when using AI systems.

The EU AI Act is the world's first comprehensive set of rules for artificial intelligence and is intended to create trust in the technology while enabling innovation. For companies, this means in practice: anyone who uses AI must understand which laws apply, when, and how. Otherwise, not only do significant fines loom; the company's reputation is also at stake if data breaches or legal violations become public.

GDPR vs. AI Act: What's the difference?

Many decision makers confuse GDPR and AI Act or think that one replaces the other. However, both sets of rules exist in parallel and have different focal points.

The GDPR: The focus is on data protection

The General Data Protection Regulation has applied since 2018 and covers all processing of personal data. This includes all information that makes a person identifiable, such as names, email addresses, and IP addresses, as well as employee and customer data.

When you enter names into ChatGPT, have emails analyzed via Microsoft Copilot, or use AI for applicant management, personal data is processed. That's when the GDPR comes into play with its core requirements:

  • A legal basis is required (consent, contract fulfilment or legitimate interest).
  • Only as much data may be collected as is actually required (data minimization).
  • What happens with the data must be transparent (duty to provide information).
  • Data subjects have rights of access, deletion and objection.

The GDPR has no exception for AI systems. On the contrary, particular care should be taken with AI applications, as they often process large amounts of data and work in a non-transparent manner.

The AI Act: Risk-based AI Regulation

The EU AI Act has been in force since August 2024 and is being implemented step by step. Its focus is not primarily on data protection, but on the security and trustworthiness of AI systems as a whole.

The AI Act takes a risk-based approach and divides AI systems into categories:

Prohibited AI systems: Social scoring, real-time biometric remote identification in public spaces, or manipulative AI practices have been banned since February 2025. This also includes recognition of emotions in the workplace.

High-risk AI systems: These include AI applications in critical areas such as applicant management, lending, critical infrastructure, or education. From August 2026, strict requirements for transparency, documentation, human supervision and risk management will apply here.

General-purpose AI models (GPAI): Since August 2025, tools such as ChatGPT, Gemini or Claude have been subject to special transparency and copyright obligations. Providers must disclose which data was used to train their models.

Low or minimal risk: Most other AI applications are subject only to transparency requirements. You must disclose when users interact with a chatbot or when content is AI-generated.

The AI Act does not replace the GDPR, but supplements it. Both regulations must be observed at the same time.

What does that mean specifically for ChatGPT & Co.?

ChatGPT: Not automatically GDPR-compliant

The free version of ChatGPT is, for practical purposes, not GDPR-compliant for companies processing personal data. OpenAI uses inputs to train its models, stores data on servers outside the EU, and does not provide sufficient data protection guarantees for the free version.

The situation is different with ChatGPT Enterprise or ChatGPT Team. OpenAI has been offering EU Data Residency here since 2025 — which means that data from European customers is guaranteed to remain in the EU. In addition, companies can contractually exclude the use of their inputs for model training.

Nevertheless, even with the Enterprise version, a data processing agreement (German: AVV) must be concluded, a data protection impact assessment carried out, and employees trained. Sensitive data such as health information, financial data or special categories of personal data should generally not be entered into cloud AI tools.

Microsoft Copilot: integration with Office 365

Microsoft 365 Copilot is deeply integrated into the Microsoft infrastructure and offers companies a comparatively secure option. Microsoft has entered into comprehensive data protection commitments and provides standard contractual clauses for data transfers.

Nevertheless, the full set of GDPR compliance obligations also applies here. In addition, it must be examined whether the works council must be involved in the rollout, especially if the AI is suitable for monitoring the performance or behavior of employees.

European alternatives: Mistral, Aleph Alpha & Co.

European AI providers are increasingly recommended for companies with high data protection requirements. Mistral AI from France or Aleph Alpha from Germany offer powerful language models with guaranteed EU data processing and clear GDPR guarantees.

These tools are specifically developed for the European market and avoid many legal grey areas that exist with US providers. The downside: They are often less well-known and have smaller developer communities than ChatGPT or Gemini.

Specific practical examples help more than abstract compliance checklists. At the data:unplugged festival, you'll meet companies that have already overcome these challenges — from legally compliant implementation to practical implementation in everyday life.

The 7 most important steps for legally compliant AI use

So how do companies use AI in a legally secure manner? These seven steps provide guidance:

1. Inventory: Which AI is already in use?

Create an AI registry for your company. Record all AI tools that are already in use or are being planned. In doing so, document:

  • Which tools are used? (ChatGPT, Copilot, image generators, etc.)
  • For what purposes? (text generation, data analysis, customer service, etc.)
  • Which data is processed? (personal, intra-business, public)
  • Who uses the tools? (departments, individual employees)
  • Where is data stored? (EU, USA, other third countries)

This inventory is the basis for all further steps and shows where legal risks lurk.
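The registry fields listed above can also be captured in a simple, machine-readable structure so that risky entries surface automatically. A minimal sketch in Python; the field names, example tool, and the "personal data outside the EU" rule of thumb are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the company's AI registry (illustrative fields)."""
    tool: str                    # e.g. "ChatGPT Enterprise"
    purposes: list               # e.g. ["text generation", "customer service"]
    data_categories: list        # "personal", "internal", "public"
    users: list                  # departments or roles using the tool
    storage_location: str        # "EU", "USA", "other third country"

registry = [
    AIToolRecord(
        tool="ChatGPT Enterprise",
        purposes=["text generation", "customer service"],
        data_categories=["personal"],
        users=["Marketing", "Support"],
        storage_location="EU",
    ),
]

# Flag entries that process personal data outside the EU,
# since those need the closest legal review.
risky = [
    r for r in registry
    if "personal" in r.data_categories and r.storage_location != "EU"
]
```

Even a spreadsheet with these columns serves the same purpose; the point is that every tool, data category, and storage location is recorded in one place.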

2. Risk assessment under AI Act

For each AI system, check which risk group it falls into. Most enterprise AI tools are rated as low or minimal risk. But beware: As soon as AI is used to screen applicants, evaluate employees or make credit decisions, it can be considered a high-risk system.

The risk-based approach of the AI Act means: The higher the risk, the stricter the requirements. From August 2026, high-risk systems must be comprehensively documented, regularly tested and monitored by people.
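As a rough first-pass check, the risk tiers described above can be sketched in a few lines. This is a simplification for illustration only, not legal advice; the category lists are examples drawn from this article, and a real assessment needs legal review:

```python
# Rough first-pass mapping of AI use cases to AI Act risk tiers.
# The keyword sets are illustrative examples, not an authoritative list.
PROHIBITED = {"social scoring", "emotion recognition in the workplace"}
HIGH_RISK = {"applicant management", "lending", "critical infrastructure", "education"}

def risk_tier(use_case: str) -> str:
    """Return a provisional AI Act risk tier for a named use case."""
    uc = use_case.strip().lower()
    if uc in PROHIBITED:
        return "prohibited"
    if uc in HIGH_RISK:
        return "high-risk"
    return "low/minimal risk"
```

Such a screening function only triages which systems need a full assessment first; it never replaces one.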

3. Create an internal AI policy

Employees need clear rules for using AI. An internal AI policy should specify:

  • Which AI tools are allowed?
  • What purposes can they be used for?
  • Which data must not be entered?
  • How are AI outputs checked?
  • Who is responsible for problems?

Important: Employees should only use company accounts with deactivated history storage. This prevents sensitive data from flowing uncontrollably into cloud systems.
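A policy like this can also be expressed as data, so that a planned AI use can be checked automatically before anyone types sensitive information into a tool. The tool names and data categories below are purely illustrative assumptions:

```python
# Illustrative sketch of an AI usage policy as machine-checkable data.
# Tool names and data categories are examples, not recommendations.
POLICY = {
    "allowed_tools": {"ChatGPT Enterprise", "Microsoft 365 Copilot"},
    "forbidden_data": {"health data", "financial data", "special categories"},
}

def check_usage(tool: str, data_types: set) -> list:
    """Return policy violations for a planned AI use (empty list = allowed)."""
    violations = []
    if tool not in POLICY["allowed_tools"]:
        violations.append(f"tool not approved: {tool}")
    for banned in sorted(data_types & POLICY["forbidden_data"]):
        violations.append(f"forbidden data category: {banned}")
    return violations
```

Keeping the policy in one structured file also makes step 7, the regular review, easier: updating the allowed-tools list is a one-line change.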

4. Carry out a data protection impact assessment

According to Article 35 GDPR, AI systems require a data protection impact assessment (DPIA, German: DSFA) in most cases. Regulators stress that AI applications often involve a high risk for data subjects.

A DPIA documents:

  • What are the risks for those affected
  • What protection measures are being taken
  • Whether the processing is proportionate

The DPIA is not a bureaucratic obstacle; it helps you identify risks at an early stage and act in a legally secure manner.

5. Conclude a data processing agreement

When external AI providers are used, the company remains responsible for data protection as the controller; the provider acts as a processor. This requires a data processing agreement (German: AVV). The agreement regulates:

  • Purpose and duration of data processing
  • Type of data processed
  • Duties and rights of both parties
  • Technical and organizational measures
  • Subcontractors and their locations

Most professional AI providers offer standard data processing agreements. These should be carefully reviewed and adjusted as necessary. It is particularly important to rule out the use of your data for model training.

6. Train employees

Employees are the key to legally compliant use of AI. Article 4 of the AI Regulation expressly obliges companies to ensure that employees have the necessary AI skills.

Teams should be trained in the following areas:

  • Which data can be entered into AI tools — and which not?
  • How do you recognize AI hallucinations and misinformation?
  • How are copyrights protected for AI-generated content?
  • What to do in case of data breaches or security incidents?

In exchange with other companies, you will find valuable approaches to successfully qualify teams for the use of AI. At the data:unplugged festival, this transfer of knowledge is specifically promoted — practical and feasible.

7. Continuous monitoring and adjustment

AI law is developing rapidly. What is legally secure today may already be outdated tomorrow. Continuous monitoring is therefore essential:

  • Check at least once a year whether the AI systems used still meet the current requirements
  • Follow government guidelines and court rulings
  • Adjust internal policies as needed
  • Document all changes carefully

The European Commission regularly publishes updates and guidelines on the implementation of the AI Act. Data protection authorities also provide ongoing guidance.

Further special features for SMEs

In addition to the general requirements, there are a number of challenges that affect SMEs in particular:

Works Council participation

In Germany, if AI tools are suitable for monitoring employees or affect their workplaces, the works council has co-determination rights. This also applies to productivity tools such as Microsoft Copilot when they can analyze work processes.

A works agreement provides clarity for all parties and prevents subsequent conflicts.

Unclear responsibilities

In Germany, it is still unclear which authority will be responsible for enforcing the AI Act. The Federal Network Agency, the Data Protection Conference and the BSI are under discussion. This uncertainty does not make things easier, but it does not release companies from the obligation to comply with the law.

Lack of resources

Many medium-sized companies do not have their own data protection officers or IT security experts. Nevertheless, you must meet the legal requirements. External know-how helps here: Specialized law firms, data protection consultants or AI compliance service providers can help you implement it. Competence platforms also offer free initial consultations for companies.

Conclusion: Legally secure with AI — possible and feasible

The legal requirements for AI tools are complex, but by no means insurmountable. What matters is a structured approach that takes both the General Data Protection Regulation (GDPR) and the AI Act into account, as the two regulations apply in parallel and must be observed together. Early and careful planning helps to avoid expensive rework and fines. This requires no in-depth legal expertise, but rather a solid basic understanding along with clear processes and responsibilities within the company.

AI offers enormous opportunities, especially for SMEs, through efficiency gains, cost reductions and better decision-making bases. In order to use this potential securely and in a legally compliant manner, investment in AI expertise and continuous training is essential. This allows your company to take full advantage of the technology while ensuring compliance with legal requirements.

You can find out how other SMEs are using AI in a legally secure and successful manner at the data:unplugged festival 2027 on April 13 & 14 in Münster. Here, companies from the logistics, mechanical engineering, manufacturing and retail sectors share their implemented use cases on GDPR compliance, AI Act implementation and AI governance: from data processing agreements to data protection impact assessments to employee training. The SME Stage and four further stages create space for exchange on well-founded practical examples of AI law and compliance, helping attendees understand the benefits of the technology and deploy it in a legally secure manner.

AI compliance affects all areas of the company. For effective implementation, it is crucial to involve key people in your company, train them, and prepare them positively for the rollout. data:unplugged stands for a broad and well-founded transfer of knowledge from which the entire business team benefits. Get a ticket for yourself and your core team now!

d:u Events 2027:
SECURE YOUR TICKETS NOW
The data:unplugged Festival will take place again in Münster on April 13 & 14, 2027.