*This post was created with a self-made CustomGPT based on Sophie’s latest podcast. You can find the link to the podcast at the end of this article.
Welcome to the year 2025! Sophie starts her first podcast year with an exciting guest: Sven Kohlmeier, a lawyer specializing in IT law and an expert in digital regulation. The episode focuses on AI regulation, data protection, and the impact of new laws on companies in the DACH region. Here you will find a summary of the most important findings.
Why AI regulation is important
Generative AI has become an integral part of our everyday lives. However, with its growing importance come legal challenges. The EU AI Act in particular is a topic of discussion. Sven Kohlmeier explains that the Act covers machine-based systems that operate with a degree of autonomy to make predictions or derive decisions. The aim is to regulate high-risk applications – such as AI in the healthcare sector.
EU regulation: What do Swiss companies need to consider?
A central point of the discussion: Do Swiss companies have to take EU regulation into account? The answer: It depends. There are currently no specific AI regulations in Switzerland, which leaves plenty of room for experimentation. However, as soon as a product is exported to the EU, the relevant laws apply. Sven therefore recommends taking the EU requirements into account as early as the development phase.
Switzerland as a center of innovation
Switzerland stands out for its innovation-friendly legislation. According to Sven, it is therefore attractive for German companies to experiment here. But here too, regulation will come sooner or later. An “interpretative framework” for AI is currently being discussed and is due to be published soon. Companies should therefore remain vigilant.
AI definition: Where does artificial intelligence begin?
Not everything that is called AI is actually AI. Sven explains that many systems are simply automated processes. The decisive factor is whether a system makes autonomous predictions and optimizes itself. Companies should therefore carefully check whether their applications fall under the AI Act in order to avoid legal consequences.
The EU AI Act: timetable and requirements
The regulation comes into force in stages:
- February 2, 2025: First regulations, such as bans on certain surveillance techniques.
- August 2025 to 2026: Full implementation. The risk-based approach distinguishes between low and high risks, with the latter being subject to strict transparency and certification obligations.
For companies, this means providing information in good time, assessing risks, and adapting processes.
Special features in the healthcare sector
Sven emphasizes that particular caution is required in the healthcare sector. Working with sensitive data requires the highest standards of data protection. For example, medical data must not be allowed to enter large AI systems in an uncontrolled manner. An example from Sophie’s everyday life: her dentist stored sensitive data on a private iPhone – a potential risk that can be avoided through training and clear processes.
Training and education: a must
Both emphasize how important it is to train employees. Those who understand AI systems can better assess risks and take targeted action. The AI Act also requires such training from February 2025. Companies should take this seriously, not only for legal reasons, but also for reputational ones.
Conclusion: pragmatism instead of panic
Sven Kohlmeier conveys a clear message: regulation is necessary, but no reason to panic. Companies should take a pragmatic approach, plan in good time and consult experts such as lawyers or AI consultants. Switzerland in particular currently offers attractive conditions for innovative developments.
Any further questions?
Do you have any questions? I will be happy to support you, act as a sparring partner and answer your questions. I am always happy to receive your messages, preferably by WhatsApp message or e-mail.