
Where is the data, and where does the artificial intelligence that processes it run? Decision-makers in SMEs are increasingly concerned with this question, because while AI projects are inspiring in theory, they often fail in practice over a fundamental problem: the data sits in one place, the models run in another, and a governance gap lies in between.
In our podcast episode #200, Bernhard talks to Dr. Fabian Gampfer, Principal AI Specialist at Snowflake, about solving this problem: data and artificial intelligence must come together where the data resides, with European data sovereignty and clear data governance.
Snowflake Inc. is a cloud-based software-as-a-service company that brings data agents, AI models, and structured data together in the cloud.
The most important findings from the discussion are summarized here. If you want to go deeper, you can listen to the full episode at this link.
Anyone who works with classic databases runs into limits at some point: when one database is full, a second is set up next to it. AI models run on separate systems, often with external providers. As a result, data has to be moved back and forth, governance becomes complicated, and for sensitive information the question arises of who actually has access.
Modern data platforms solve this problem by separating storage and compute. Scalability is almost unlimited, and AI applications can run where the data already resides. For companies that want to develop an AI strategy, this means that data and models no longer have to run on different systems.
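What the separation of storage and compute looks like in practice can be sketched in a few lines. The following Python snippet is a minimal illustration, not a reference implementation: it assumes the snowflake-connector-python package, placeholder credentials, and a hypothetical SALES table, and shows two independently sized compute clusters reading the same stored data.

```python
# Minimal sketch: two independent compute clusters ("warehouses") reading
# the same stored data. Credentials, warehouse names, and the SALES table
# are hypothetical placeholders for your own environment.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Compute is provisioned independently of storage: each warehouse is just
# a resizable cluster that can be created or dropped at will.
cur.execute("CREATE WAREHOUSE IF NOT EXISTS REPORTING_WH WITH WAREHOUSE_SIZE = 'XSMALL'")
cur.execute("CREATE WAREHOUSE IF NOT EXISTS DS_WH WITH WAREHOUSE_SIZE = 'LARGE'")

# Both warehouses query the same underlying data; no copy or migration
# is needed to give a second team its own compute.
cur.execute("USE WAREHOUSE REPORTING_WH")
cur.execute("SELECT COUNT(*) FROM SALES")
print("rows:", cur.fetchone()[0])

cur.execute("USE WAREHOUSE DS_WH")
cur.execute("SELECT region, SUM(amount) FROM SALES GROUP BY region")
print(cur.fetchall())
```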
One topic runs through almost every customer conversation, Fabian reports in the podcast: Where do the AI models actually run? The question has a strategic dimension. While many language models are hosted in the USA, European companies have legitimate reservations about sending sensitive business data across the Atlantic.
This is a decisive factor, particularly for companies in regulated industries such as financial services or healthcare. GDPR compliance is difficult to ensure when it is unclear where data is being processed and who has access to it.
The development is therefore moving towards European hosting options. Snowflake, for example, now also hosts larger AI models, such as those from Anthropic, directly in European regions. This allows companies to run AI applications on their data without it leaving the EU. For many, this is the prerequisite for moving from pilot projects into production.
Control over access rights, data quality, and compliance requirements remains centrally managed, without data silos and without loss of control. Especially in medium-sized companies, where system landscapes have often grown organically over the years, a central platform provides an overview and reduces risk.
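As a rough illustration of what centrally managed access rights can look like, here is a minimal sketch using standard Snowflake role grants; the role, database, and table names are hypothetical placeholders.

```python
# Minimal sketch of central access control: one role model for all
# consumers, whether dashboards, data scientists, or AI agents.
# Role, database, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD"
)
cur = conn.cursor()

for stmt in [
    "CREATE ROLE IF NOT EXISTS SUPPORT_ANALYST",
    # The role may read customer inquiries ...
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE SUPPORT_ANALYST",
    "GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE SUPPORT_ANALYST",
    "GRANT SELECT ON TABLE ANALYTICS.PUBLIC.CUSTOMER_INQUIRIES TO ROLE SUPPORT_ANALYST",
    # ... but there is deliberately no grant on, say, HR data: one central
    # rule set instead of permissions scattered across silos.
]:
    cur.execute(stmt)
```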
Not every AI application fits every problem. In the podcast, Fabian distinguishes between three areas that address different challenges.
Only 10 to 20 percent of company data is structured in tables. The rest — emails, documents, images — often remains unused. With the right technologies, this data can be accessed without setting up months-long projects.
A practical example: A financial service provider processes around 50,000 customer inquiries per month. In the past, this meant manual evaluation. Today, a simple prompt can be used to answer questions such as: What are the most common topics? Which problems were not resolved?
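As a rough sketch of how such a prompt-based evaluation could look on the platform, the snippet below calls Snowflake's SQL function SNOWFLAKE.CORTEX.COMPLETE over a hypothetical customer_inquiries table; the credentials, table and column names, and the chosen model are placeholder assumptions.

```python
# Sketch: classifying free-text customer inquiries with a single prompt,
# run where the data already lives. Table and column names are
# hypothetical; SNOWFLAKE.CORTEX.COMPLETE is Snowflake's SQL function
# for calling a hosted language model.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    warehouse="REPORTING_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

cur.execute("""
    SELECT
        inquiry_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'mistral-large2',  -- model availability depends on your region
            'Assign this customer inquiry to one topic '
            || '(billing, delivery, product defect, other) and state '
            || 'whether it was resolved. Inquiry: ' || inquiry_text
        ) AS assessment
    FROM customer_inquiries
    LIMIT 100
""")
for inquiry_id, assessment in cur.fetchall():
    print(inquiry_id, assessment)
```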
The key point: a business analyst can use such applications without writing any code themselves. The ability to analyze data is shifting from IT into the specialist departments.
The second area is Agentic AI: AI agents that can make decisions, retrieve data, and carry out actions on their own. They recognize patterns, suggest measures, and can perform tasks independently under human supervision.
One example: Siemens Energy has indexed 800,000 R&D documents via Snowflake. Users can ask questions ad-hoc via a chatbot or have structured reports generated.
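To make the pattern concrete, here is a deliberately simplified, library-agnostic Python sketch of such an agent loop; call_llm, the tools, and the approval step are hypothetical illustrations, not Siemens Energy's or Snowflake's implementation.

```python
# Deliberately simplified agent loop: the model chooses a tool, the tool
# fetches data, and a human approves any action with side effects.
# call_llm(), the tools, and the plan format are hypothetical stand-ins.

def call_llm(prompt: str) -> dict:
    """Placeholder for a hosted LLM call. A real implementation would
    return the model's tool choice; here we return a canned plan so the
    sketch runs end to end."""
    if "top passages" in prompt:
        return {"tool": None, "answer": "Summary based on retrieved passages."}
    return {"tool": "search_docs", "input": "turbine maintenance intervals"}

def search_docs(query: str) -> str:
    """Placeholder for retrieval over an indexed document corpus."""
    return f"(top passages for: {query})"

def create_ticket(summary: str) -> str:
    """An action with side effects; kept behind human approval below."""
    return f"ticket created: {summary}"

TOOLS = {"search_docs": search_docs, "create_ticket": create_ticket}

def run_agent(task: str, max_steps: int = 5) -> None:
    context = task
    for _ in range(max_steps):
        plan = call_llm(f"Task: {context}\nPick a tool from {list(TOOLS)} or answer.")
        if plan.get("tool") is None:          # the agent decides it is done
            print(plan.get("answer"))
            return
        if plan["tool"] == "create_ticket":   # human supervision gate
            if input(f"Approve action {plan['input']!r}? [y/n] ") != "y":
                continue
        context += "\n" + TOOLS[plan["tool"]](plan["input"])

run_agent("What maintenance intervals do our documents recommend?")
```

The essential design choice is the approval gate: the agent retrieves data freely, but anything that changes state stays behind a human decision.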
In the podcast episode, Fabian explains in detail how such agents are used in day-to-day business and which pitfalls need to be avoided.
The third area is classic data science. Forecasting, classification, time series analyses — these methods are by no means obsolete. On the contrary: For forecasts and numerical analyses, they often work better than generative AI.
Fabian puts it in a nutshell: a GenAI chatbot is not particularly good at arithmetic. If you need sales forecasts or risk models, there is no way around classic data science methods.
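To illustrate the kind of method meant here, the following sketch fits a classic Holt-Winters model to synthetic monthly revenue; it assumes pandas and statsmodels are installed, and the data is invented for demonstration.

```python
# Sketch: a classic statistical sales forecast (Holt-Winters exponential
# smoothing) of the kind a generative chatbot cannot reliably replace.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Three years of synthetic monthly revenue with trend and seasonality.
months = pd.date_range("2023-01-01", periods=36, freq="MS")
revenue = pd.Series(
    100 + np.arange(36) * 2 + 15 * np.sin(np.arange(36) * 2 * np.pi / 12),
    index=months,
)

model = ExponentialSmoothing(
    revenue, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(6))  # a numeric 6-month forecast, not generated text
```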
The data:unplugged 2026 festival offers an exchange with decision-makers who are already running such use cases in production. On the SME Stage, they share their experiences, from pilot projects to scaling.
Costs are a relevant factor for SMEs. Classic licensing models often mean large upfront investments, of which only a fraction is ultimately used.
Pay-per-use models offer an alternative: companies pay only for what they actually use. Scalability works in both directions: scale up when needed, scale down when the peak is over. Especially when budgets are approved on a project-by-project basis, this enables a gradual start.
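A minimal sketch of what this looks like in configuration terms, assuming Snowflake's warehouse model and placeholder names: the warehouse suspends itself when idle and resumes on the next query, so compute costs track actual usage.

```python
# Sketch: pay-per-use in practice. Credentials and the warehouse name
# are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD"
)
cur = conn.cursor()

cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS PROJECT_WH WITH
        WAREHOUSE_SIZE = 'XSMALL'   -- start small for the pilot
        AUTO_SUSPEND   = 60         -- stop billing after 60 idle seconds
        AUTO_RESUME    = TRUE       -- wake up on the next query
""")
# Scale up for a temporary peak, back down when it is over.
cur.execute("ALTER WAREHOUSE PROJECT_WH SET WAREHOUSE_SIZE = 'MEDIUM'")
cur.execute("ALTER WAREHOUSE PROJECT_WH SET WAREHOUSE_SIZE = 'XSMALL'")
```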
A question that concerns many managers: Is programming knowledge still needed when AI agents take on more and more tasks?
For simple applications — such as evaluating customer communication — prompts are now sufficient. A business analyst can analyze data without writing code. However, complex system integrations still require people who understand what the technologies are doing in the background.
The more important competence lies elsewhere: Understanding what artificial intelligence can and cannot do. This applies not only to IT departments, but to all areas. Whether marketing, HR or sales — AI applications are everywhere. If you want to use them effectively, you need a basic understanding.
A topic that is particularly important to Fabian: the social dimension. Do we want agents to evaluate people? Questions like this have no technical answer; they require conscious decisions.
Data and artificial intelligence can no longer be thought of separately. If you want to successfully implement AI projects, you need a platform that brings both together — with clear data governance, digital sovereignty and the ability to scale.
For decision makers in SMEs, this means that the question is not whether artificial intelligence will become relevant, but how quickly their own company can act. From analyzing unstructured data to autonomous AI agents to precise forecasting — all applications depend on the quality and availability of the data.
How other SMEs are taking this path can be seen at the data:unplugged 2026 festival on March 26 & 27 in Münster. Decision-makers and data scientists share their experiences on five stages, practically and on an equal footing. Tickets are available here.