
According to the Randstad Workmonitor 2026, demand for prompting expertise has increased by 403 percent — not as a niche skill for developers, but as a basic skill across all industries and functions. At the same time, practice shows that most teams already use AI tools such as ChatGPT, Copilot or Claude in everyday work. But the results vary significantly: some receive precise answers in seconds, others receive generic texts with no discernible added value.
The reason is almost always the same: the problem is not the tool, but the prompt. If you understand how a good AI prompt is structured, you get much more out of the same AI tools. This article provides tips, examples and instructions on what matters when prompting, which mistakes to avoid and how prompts make the difference between a merely useful and a genuinely good result in a corporate context.
The simplest definition: An AI prompt is the input that a user sends to an AI system. It is the instruction, question, or description on the basis of which a language model such as ChatGPT, Claude or Gemini generates an answer. Basically, the prompt works like a briefing: The clearer the task, the better the output.
Prompts can look very different. Sometimes it's a single sentence, sometimes a multi-part text with context, goal, style, and examples. It is not the length that is decisive, but the precision. Large language models work on the basis of probabilities: every word in the prompt influences the direction in which the model develops its answer. A vague input produces vague results. A well-thought-out prompt produces answers that can actually be used in everyday work.
The principle applies equally to all AI tools:
Whether text, images or code: The quality of the output always depends on the quality of the prompt.
A prompt isn't a Google search. With a search engine, keywords are enough. An AI model needs more: context, role, task and the desired outcome. Anyone who understands this difference uses artificial intelligence much more effectively.
Dealing with AI is often reduced to choosing the right tool. The real lever lies in the way employees interact with AI tools such as ChatGPT.
Anyone who asks ChatGPT “Write me a text about digitization” gets a superficial answer. The answers remain generic, the style interchangeable, the information too general.
Anyone who writes instead: “You are a strategy editor for medium-sized companies. Create a draft internal newsletter about AI in purchasing. Target group: managers without a technical background. Style: factual, concrete, maximum 300 words,” gets results that can be used directly. The same AI, the same tool — but a fundamentally different output.
The difference is not in the AI. It lies in the prompt. And this is exactly where it is decided whether a company uses artificial intelligence as a tool or just as a search engine. How companies can take this step strategically is shown in the d:u blog article on AI strategy in SMEs.
A good prompt is no accident. It follows a structure. The following five elements help to formulate prompts in such a way that AI tools deliver significantly better results.
Anyone who gives the AI model a clear role controls the perspective of the answer. “You are an experienced personnel consultant” provides different results than “You are a lawyer in the area of employment law.” The role influences the language, depth and content focus of the output.
Without context, the model lacks classification. A prompt like “Summarize this text” can work. “Summarize this text. It is a strategy paper for the Executive Board, the summary is used as a draft decision,” provides a much more precise result. The more relevant context there is in the prompt, the better the answers.
Many prompts fail because the task is ambiguous. “Help me with marketing” is too unspecific. “Create three suggestions for LinkedIn posts on the topic of AI in retail, a maximum of 150 words each, factually inspiring style” is a clear instruction with measurable output. The more precise the topic and instructions, the better the results — regardless of the tool.
Language models can reproduce almost any style — but only if they know which style to use. Tonality, text length, target group, structure and language belong in the prompt. This applies to texts as well as analyses, tables and descriptions. Without a style specification, the AI model delivers a generic average that rarely fits your own use case.
Few techniques improve quality as reliably as a specific example in the prompt. This so-called few-shot prompting shows the AI model what is desired and significantly reduces the scope for interpretation. An example of the desired result makes the request comprehensible for the AI and the output significantly better.
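The five elements described above can be combined mechanically. As a minimal illustration in Python — the function name, field labels and sample values here are invented for this sketch, not a fixed standard:

```python
def build_prompt(role, context, task, style, examples=None):
    """Assemble a structured prompt from the five elements:
    role, context, task, style/format, and optional few-shot examples."""
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Style and format: {style}",
    ]
    if examples:
        # Few-shot examples show the model what the desired output looks like.
        parts.append("Examples of the desired output:")
        parts.extend(f"- {ex}" for ex in examples)
    return "\n".join(parts)

prompt = build_prompt(
    role="You are a strategy editor for medium-sized companies.",
    context="Internal newsletter about AI in purchasing; "
            "audience: managers without a technical background.",
    task="Create a draft of maximum 300 words.",
    style="Factual, concrete, no buzzwords.",
    examples=["Short, active sentences; one key message per paragraph."],
)
```

The resulting string can then be pasted into ChatGPT, Claude or any other chat interface; the structure, not the tooling, is what carries the quality.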
Improving individual prompts is the first step. AI prompt engineering goes further: it means working systematically on prompts, documenting them, testing them, and making them available as reusable templates. The approach transforms the use of artificial intelligence from improvised individual use into scalable expertise.
In many companies, the productive use of AI is not failing because of the technology, but because of the lack of systematics. The Boston Consulting Group distinguishes between “Deploy” and “Reshape”: Companies that only use artificial intelligence achieve moderate results. Those who adapt their processes and understand prompts as part of their workflows achieve significantly more.
Prompt engineering is not an IT issue, but a question of work organization. When you standardize and optimize prompts for various tasks, you create a basis on which teams work faster and more consistently. Prompt engineering means developing interaction with AI models such as ChatGPT or Claude from improvisation to structured competence.
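Standardizing prompts can start very simply, for instance with a shared collection of named templates with placeholders. A minimal sketch in Python — the template names and wording are hypothetical examples, not prescribed by any tool:

```python
# A tiny registry of reusable prompt templates. Placeholders in braces
# are filled in per use case with str.format.
PROMPT_TEMPLATES = {
    "linkedin_post": (
        "You are a social media editor. Create {count} LinkedIn post "
        "suggestions on the topic of {topic}, a maximum of {max_words} "
        "words each. Style: {style}."
    ),
    "meeting_summary": (
        "Summarize the following meeting notes for {audience}. "
        "Format: bullet points, maximum {max_words} words.\n\n{notes}"
    ),
}

def render_prompt(name, **values):
    """Look up a template by name and fill in its placeholders."""
    return PROMPT_TEMPLATES[name].format(**values)

prompt = render_prompt(
    "linkedin_post",
    count=3,
    topic="AI in retail",
    max_words=150,
    style="factual but inspiring",
)
```

Even a plain shared document with tested prompt templates achieves the same effect; the point is that the team reuses what has proven to work instead of improvising each time.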
At d:u26 on March 26 & 27 in Münster, practitioners will show how companies integrate prompting workflows into their processes. You can find an overview of all speakers here.
Prompts are not just a topic for technology enthusiasts. In everyday work, they make it possible to solve specific tasks faster and better.
With a structured prompt, three variants of a social media post can be created within minutes, tailored to the target group, platform and tone. Images can also be created via prompt: AI tools such as DALL-E or Midjourney generate images for campaigns, newsletters or presentations from a detailed description. Good prompts save time not only for texts, but also when creating visual content.
Discussion guidelines, complaint handling and follow-up emails can be generated via prompt — adapted to industry, company size and phase in the decision-making process. A specific example: from just a few pieces of information, ChatGPT can create an industry-specific follow-up email that can be used directly.
AI prompts help with formulating job advertisements, summarizing applicant profiles, and creating onboarding materials. HR in particular shows how much quality good context in a prompt brings: anyone who gives the AI not only the job title, but also team size, corporate culture and desired style gets significantly more appropriate results.
Contracts, supplier comparisons and market research can be pre-structured using AI tools. Anyone who provides the model with the right information as input receives analyses that are suitable as a basis for decision-making. The technology is only the tool — the added content is created by the quality of the prompts.
With just a few instructions, minutes, summaries, and presentation drafts can be created that teams can use directly. The content comes from people, and artificial intelligence structures and processes the information.
A deeper insight into how the language models behind these applications work is provided by our blog articles about LLMs and language models.
Whether it's text work, data analysis or automation — at d:u26, AI workflows can be experienced hands-on in over 40 master classes. Find out why d:u26 is the right event for you and your team.
There are typical mistakes that even experienced AI users regularly make. Anyone who knows them achieves better results with less effort.
With targeted adjustments, these mistakes can be fixed immediately. The following approaches can be applied directly to ChatGPT, Claude, and other AI tools.
The difference between disappointing and convincing AI results rarely lies in the tool. It is due to the quality of the prompts. Anyone who provides context, formulates clear tasks, sets style and format and works iteratively uses artificial intelligence not only superficially, but as a tool in everyday working life. The tips and examples in this article show that no special technical knowledge is required to achieve better results with ChatGPT and other AI tools.
Good prompting is a skill that can be learned. With the right tips, clear instructions and well-thought-out wording, results are noticeably improved within a very short period of time — across all industries and departments, from marketing to HR to purchasing. Developing prompt engineering as a team's expertise is an investment that pays off.
You can find out how other SMEs move from occasional use of AI to structured prompt engineering at the data:unplugged festival 2026 on March 26 & 27 in Münster. At the Mittelstand Blazers Stage, companies share their experiences, and in the master classes it gets concrete: How do you build up AI expertise in a team? Which prompt strategies work in practice?
AI prompting affects all areas of the company — from marketing to HR to purchasing. For successful implementation, it is important to involve and qualify key people. data:unplugged stands for practical transfer of knowledge — from which the entire team benefits. Get your ticket now!