
Artificial intelligence has arrived in SMEs: 67 percent of German employees already regularly use generative AI tools. The question is therefore no longer whether companies use artificial intelligence — but how they do it.
This is where it gets interesting: as usage increases, trust is falling behind. Only 32 percent of Germans trust AI-generated content, yet two thirds use it regularly. This tension between intensive use and growing skepticism is not a purely technical problem but a central challenge for trust in AI systems. Trust is not created by better algorithms alone; it requires responsible behavior from everyone involved, from developers to companies to society as a whole.
Ethics in AI is therefore an indispensable field: it concerns the responsible use of artificial intelligence and its effects on people and society.
The legal framework for artificial intelligence is now clearly defined. The EU AI Act has been in force since August 2024, the first bans are taking effect, and further obligations will follow in stages up to 2027. Anyone interested in the specific legal requirements will find a comprehensive overview in our article Data protection & AI regulation.
But compliance alone does not create trust. An AI system can meet every legal requirement and still make decisions that people feel are wrong. It can operate in accordance with the law and yet change a corporate culture for the worse. That is exactly why, in addition to the legal framework, an ethical stance is needed: a conscious decision about how artificial intelligence should be used in the company.
Such a stance answers questions that no law regulates.
The Federal Association of the Digital Economy (BVDW) has formulated six principles that serve as guidelines for the ethical use of AI: fairness, transparency, explainability, data protection, security and accountability. These principles are not law, but they offer a helpful framework for companies that want to use artificial intelligence consciously.
AI systems should treat all people equally, regardless of gender, origin or other characteristics. That sounds obvious, but it is technically demanding. Artificial intelligence learns from historical data, and when this data reflects social prejudices, the system reproduces them. Fairness means actively identifying and correcting these patterns.
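What "actively identifying" can mean in practice is illustrated by the following minimal sketch: it compares selection rates across groups and flags a conspicuous gap. The data, the field names and the 80-percent threshold (a common rule of thumb in fairness auditing) are illustrative assumptions, not part of the BVDW principles.

```python
from collections import defaultdict

def selection_rates(decisions, group_key="gender", outcome_key="selected"):
    """Compute the share of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        positives[d[group_key]] += int(d[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    Values below ~0.8 are a common warning sign for bias."""
    return min(rates.values()) / max(rates.values())

# Invented example data: model decisions on applications
decisions = [
    {"gender": "f", "selected": True}, {"gender": "f", "selected": False},
    {"gender": "f", "selected": False}, {"gender": "m", "selected": True},
    {"gender": "m", "selected": True}, {"gender": "m", "selected": False},
]

rates = selection_rates(decisions)
print(rates)                    # {'f': 0.33..., 'm': 0.66...}
print(disparate_impact(rates))  # 0.5: well below 0.8, worth investigating
```

In practice such a check would run regularly on real decision logs; the point is that fairness becomes measurable instead of remaining an intention.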
Employees and customers should know when they are interacting with artificial intelligence. This applies to obvious cases such as chatbots, but also to more subtle applications, such as when AI systems help pre-select applications or generate recommendations for business decisions. Transparency means actively providing this information.
Why did the artificial intelligence make this recommendation? That question should be answerable. Complete explainability is not always possible with complex models, but the principle behind it remains important: people should be able to understand how a decision that affects them came about.
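What a first, rough answer can look like is shown in the sketch below. It uses permutation importance from scikit-learn, a model-agnostic technique that measures how much a model's accuracy drops when one input is shuffled. The data and feature names are invented, and the result is a global view of the model rather than an explanation of a single decision; per-decision methods such as SHAP follow a similar idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                # columns: experience, test_score, noise
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)  # outcome depends only on the first two

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["experience", "test_score", "noise"], result.importances_mean):
    print(f"{name:10s} importance: {imp:.3f}")
# test_score should rank highest and noise near zero: a first,
# rough answer to "what drove the model's decisions?"
```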
What data flows into the artificial intelligence? How is it processed? Where is it stored? These questions are not only legally relevant but also a trust factor. Employees who don't know what happens to their information will remain skeptical of AI systems.
AI systems must be protected against manipulation. The risks concern technical aspects such as cybersecurity, but also the questions of who has access to which systems and how misuse is prevented. Robustness means that the system works reliably even under unexpected conditions.
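One building block of robustness is never trusting a model's output blindly. The following sketch validates everything an AI system returns and escalates anything unexpected to a human instead of failing silently; the JSON response format and the 0.8 confidence threshold are assumptions made for illustration.

```python
import json

ALLOWED_ACTIONS = {"approve", "reject", "escalate"}

def safe_decision(raw_model_output: str) -> str:
    """Validate a model's output instead of trusting it blindly.
    Anything unexpected goes to a human; the system degrades
    gracefully rather than failing silently."""
    try:
        data = json.loads(raw_model_output)
        action = data["action"]
        confidence = float(data["confidence"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return "escalate"  # malformed output: never guess
    if action not in ALLOWED_ACTIONS or not 0.0 <= confidence <= 1.0:
        return "escalate"  # out-of-range values are treated as failures
    if confidence < 0.8:
        return "escalate"  # low confidence goes to a human
    return action

print(safe_decision('{"action": "approve", "confidence": 0.95}'))   # approve
print(safe_decision('not json at all'))                             # escalate
print(safe_decision('{"action": "delete_db", "confidence": 0.99}')) # escalate
```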
If an artificial intelligence decision is wrong, who is responsible? This question must be resolved before deployment, not afterwards. Responsibility also means that there is always someone who can intervene.
The BCG study AI at Work 2025 shows an interesting pattern: 67 percent of German employees use artificial intelligence, but only 36 percent feel sufficiently prepared for it. In companies with particularly intensive AI use, 46 percent of employees worry about their jobs, significantly more than in companies with a lower affinity for AI.
This is not a contradiction but an understandable development. As artificial intelligence becomes more present in everyday work, the fears that come with it also become more noticeable for many people. These fears cannot be eliminated by ignoring them; they require open communication, clear guidelines and the involvement of those affected.
Employees who are actively supported by their management show a significantly more positive attitude towards artificial intelligence and their own career prospects. When managers support their employees, take fears seriously and offer assistance, a constructive environment emerges in which the potential of AI technologies can be better exploited. That increases not only the acceptance of AI but also employees' motivation and commitment in the long term.
On the Mittelstands Stage at the data:unplugged festival on March 26 & 27, you can find out how SMEs manage not only to inform their teams but to genuinely get them on board. The answers come from managers who have already taken this path.
Ethical use of artificial intelligence is not a one-off project, but a continuous process. The following five steps help to establish an appropriate culture in the company.
Which AI systems are already in use? 43 percent of employees use AI tools without critically examining the results, and almost half share AI-generated content as their own work. Before guidelines can take effect, it must be clear what is actually happening.
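For a first inventory, a spreadsheet is enough; what matters is that it exists. As a sketch, the same inventory could look like this in code, with all tools, fields and entries invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    purpose: str
    data_entered: list[str]  # what information flows into the tool
    external: bool           # does data leave the company?
    owner: str               # who is responsible for this tool?

# Invented entries; the point is to make actual usage visible
inventory = [
    AITool("ChatGPT", "drafting emails", ["customer names", "project details"],
           external=True, owner="unassigned"),
    AITool("internal chatbot", "HR FAQ", ["employee questions"],
           external=False, owner="HR"),
]

# A first audit question: which tools send data outside and have no owner?
for tool in inventory:
    if tool.external and tool.owner == "unassigned":
        print(f"Review needed: {tool.name} ({tool.purpose})")
```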
What can be done with artificial intelligence and what can't? Which data may be entered? How are results marked? These questions need clear answers, not as a list of bans, but as a framework.
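A guideline only takes effect if it can be checked. The following sketch encodes an invented data policy as simple rules rather than a list of bans; the categories and notes are assumptions and would need to match a company's actual classification scheme.

```python
# Invented policy: which data categories may be entered into AI tools
POLICY = {
    "public":       {"allowed": True,  "note": "no restrictions"},
    "internal":     {"allowed": True,  "note": "only in approved tools"},
    "personal":     {"allowed": False, "note": "requires anonymization first"},
    "confidential": {"allowed": False, "note": "never enter into external AI tools"},
}

def check_input(data_category: str) -> str:
    """Answer 'may I enter this?' with a reason, not just a ban."""
    rule = POLICY.get(data_category)
    if rule is None:
        return "unknown category: ask the AI contact person"
    verdict = "allowed" if rule["allowed"] else "not allowed"
    return f"{verdict} ({rule['note']})"

print(check_input("public"))        # allowed (no restrictions)
print(check_input("confidential"))  # not allowed (never enter into external AI tools)
```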
Who decides on the use of new AI tools? Who checks the results? Who is the contact person for questions or problems? Without clear responsibilities, ethical use of AI remains lip service.
Only a comparatively small proportion of people have completed training on artificial intelligence so far — there is still a lot of catching up to do worldwide. This deficit poses risks, because anyone who does not understand AI systems can neither use them effectively nor critically question them. Training therefore not only provides technical knowledge, but also the ability to correctly classify results and recognize limits.
Ethical issues often only arise in everyday life when an AI result seems strange, when someone feels uncomfortable, or when something doesn't work as expected. These observations need a channel. Regular feedback not only creates better processes, but also more trust.
The master classes at the data:unplugged festival offer the opportunity to exchange ideas with experts about concrete guidelines for the ethical use of AI - practical and interactive.
Companies that invest in ethical AI practices now gain more than compliance: they gain the trust of employees, customers and partners. With a technology that much of society does not yet fully understand, that trust is a real competitive advantage.
Consumers who perceive a company's use of artificial intelligence as ethical are more willing to trust that company, recommend it and remain loyal in the long term. Conversely, negative experiences with AI systems lead to complaints, demands for explanations and, in the worst case, to the termination of the business relationship.
This is a particular opportunity for SMEs. While large corporations often struggle with complex legacy systems and rigid structures, medium-sized companies can establish a consistent culture for the use of artificial intelligence more quickly. Personal relationships with employees and customers make it easier to build trust.
Ethical principles sound abstract until they meet concrete situations. Three examples show where companies face decisions in everyday life.
In recruiting, many companies already use artificial intelligence to pre-select applications. The efficiency gains are real, but so are the risks. If the system has been trained on historical hiring data, it may reproduce patterns from the past: a company that has so far had few women in management positions could end up with a system that systematically rates female applicants worse. The ethical question is: how do we ensure that artificial intelligence does not reinforce existing prejudices? The answer lies in regular audits, diverse training data and human control over critical decisions.
AI chatbots have long been standard in customer service, and many customers accept them as long as they know what they're interacting with. It becomes problematic when companies try to pass the bot off as a person: short-term efficiency is paid for with long-term loss of trust. Transparency is not only ethically required here; it also makes business sense.
When using AI tools internally, questions arise that many companies don't even have on their radar yet. When employees use ChatGPT to formulate emails or create presentations, what company data flows into external systems? Who is liable if AI-generated content is faulty? These questions need answers before they become problems.
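A pragmatic first safeguard is to strip obvious identifiers before a prompt leaves the company. The sketch below does this with a few regular expressions; the patterns and the internal names are invented examples, and a real deployment would rely on a proper data-loss-prevention tool rather than hand-written rules.

```python
import re

# Minimal sketch: remove obvious identifiers before text leaves the company.
# Patterns and internal names are illustrative assumptions.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d /-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b(Acme GmbH|Project Falcon)\b"), "[INTERNAL]"),
]

def redact(text: str) -> str:
    """Replace identifiers with placeholders before calling an external tool."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

draft = ("Please summarize: Project Falcon status for anna.schmidt@acme.de, "
         "call +49 251 1234567 with questions.")
print(redact(draft))
# Please summarize: [INTERNAL] status for [EMAIL], call [PHONE] with questions.
```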
Developing an ethical AI culture doesn't work without leadership. Managers don't have to be AI experts themselves, but they do have to set the direction: communicate openly, take fears seriously and give their teams support.
Ethical concerns about artificial intelligence are justified. But they are an argument not against use, but for conscious use. Companies that take ethical questions seriously make better decisions, earn more trust and are more successful in the long term.
Responsible use of artificial intelligence is achievable. It does not require perfect systems, but a clear attitude, transparent communication and the willingness to keep developing. And it requires places where questions of AI ethics are discussed objectively and openly, beyond hype and fear.
Find out how other SMEs are successfully taking the path to an ethical AI culture at the data:unplugged festival 2026 on March 26 & 27 in Münster. Among other things, the Mittelstands Stage is dedicated to the question of how companies not only introduce AI systems but anchor them sustainably. Concrete guidelines are developed in the master classes, and in exchanges with other decision-makers, ideas emerge that make the difference in your own company.
AI ethics affects every area of the company. For effective implementation, it is crucial to involve key people in your company, train them and prepare them well for the rollout. data:unplugged stands for broad, well-founded knowledge transfer from which the entire team benefits. Get a ticket for yourself and your core team now!