
Critical thinking and human-AI collaboration

A contribution from Sophie Hundertmark

Sophie Hundertmark is an expert in the practical use of artificial intelligence with a focus on chatbots, AI strategies and responsible technology integration. She is a researcher and lecturer at the Lucerne University of Applied Sciences and Arts and is currently writing her dissertation in the field of Conversational AI at the University of Fribourg. As a consultant, she supports companies, administrations and educational institutions in the introduction of effective AI solutions. More about Sophie Hundertmark on LinkedIn.

A CustomGPT was used for linguistic and stylistic editing, as well as for translation. It is based on OpenAI's GPT-5 language model and was developed by Sophie Hundertmark herself.


Over the last few months, I’ve been having the same discussion in my projects time and again: will artificial intelligence soon replace us humans – or will it remain a tool that supports us? Many are fascinated by the possibilities, but at the same time unsettled by headlines about hallucinations, manipulation and ethical risks.

One thing is clear to me: critical thinking remains the most important skill in the age of AI. We can only make informed and responsible decisions if we scrutinize the results of a machine. At the same time, it is becoming apparent that the future does not lie in pure automation, but in collaboration between humans and machines. AI will become our sparring partner, coach and accelerator – but it should never act completely detached from human control.

In this article, I will show you why critical thinking remains irreplaceable, how human-AI collaboration will develop and why this approach is also the only ethically justifiable solution.


Why critical thinking is more important than ever

Artificial intelligence impresses with its speed and precision. But no matter how powerful a system becomes: AI hallucinates, distorts data and reflects the biases of its training data. This means that we can never blindly rely on its answers.

AI hallucinations are unavoidable

Studies show that hallucinations in Large Language Models (LLMs) are inherent to how these systems work. They have no real grounding in reality and do not distinguish between true and false. This is precisely why we humans need the ability to review results, assess them critically and, if necessary, correct them.

Critical thinking as a counterweight

In workshops and consulting projects, I often see how companies are fascinated by AI outputs – without questioning how valid these results really are. Critical thinking ensures that we stay in control and don't fall into the trap of mistaking AI answers for objective facts.

Practical tip: Consciously build reflection loops into your teams. Always ask the questions:

  • What data is the result based on?
  • Could bias or manipulation be involved?
  • What would be the consequences of a wrong decision?

The human-AI combination as a sustainable solution

It is already clear that the most exciting use cases are not created through complete automation, but through clever combinations.

Models of cooperation

  • Human-in-the-Loop (HITL): A human checks every decision – indispensable in areas such as medicine or law.
  • Human-on-the-Loop (HOTL): Humans monitor but only intervene when necessary – for example in traffic control.
  • Human-out-of-the-Loop (HOOTL): Fully autonomous systems – only useful for clearly defined routine tasks.
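The three oversight models above can be sketched as a simple routing function. This is a minimal illustration of the idea, not any specific framework; the names, modes and confidence threshold are hypothetical assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    """A decision proposed by an AI system, with its self-reported confidence."""
    proposal: str
    confidence: float  # between 0.0 and 1.0

def resolve(decision: AIDecision, mode: str, review_threshold: float = 0.9) -> str:
    """Route an AI proposal according to the oversight model in use.

    mode: "HITL" (a human checks everything), "HOTL" (a human intervenes
    only for low-confidence cases), or "HOOTL" (fully autonomous).
    """
    if mode == "HITL":
        # Human-in-the-Loop: every decision goes to a human reviewer.
        return "escalate_to_human"
    if mode == "HOTL":
        # Human-on-the-Loop: confident decisions pass automatically,
        # uncertain ones are escalated for human intervention.
        if decision.confidence >= review_threshold:
            return "auto_approve"
        return "escalate_to_human"
    # Human-out-of-the-Loop: the system acts autonomously.
    return "auto_approve"

decision = AIDecision(proposal="approve loan", confidence=0.95)
print(resolve(decision, "HITL"))  # escalate_to_human: always reviewed
print(resolve(decision, "HOTL"))  # auto_approve: confidence above threshold
```

Note that in the HOTL branch the real design question is where to set the threshold – exactly the kind of judgment that, as argued above, must remain with humans.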

The greatest opportunity lies in seeing AI as an extension of our intelligence. It can analyze data, recognize patterns and make suggestions – we humans bring context, ethics and values to the table.

AI as a sparring partner instead of a replacement

My projects with banks and in the education sector have shown that the best results are achieved when employees perceive AI not as a threat, but as a coach or assistant. The machine provides input; the human makes the final decision.


The ethical dimension of collaboration

The history of AI ethics shows how difficult it is to maintain a balance between innovation and responsibility. Too often, economic interests have superseded debates about fairness and transparency.

From idealism to pragmatism

Originally, there were high hopes that strict ethical guidelines would regulate the use of AI. In practice, however, it is often the faster and more profitable solutions that prevail under market pressure.

However, this does not mean that ethics are superfluous. On the contrary: only human-AI collaboration with clear rules can create trust in the long term.

Responsibility remains human

Whether in medicine, journalism or in a military context – responsibility must never be delegated to a machine. AI systems can prepare decisions, but humans must always remain the final authority.


Education as a key competence

We need AI literacy – education that combines critical thinking and technical understanding – so that we can exploit the opportunities of AI and at the same time control the risks.

What companies can do

  • Regularly invite employees to AI skills training.
  • Integrate reflection on and discussion of AI outputs into everyday work.
  • Bring together expertise in ethics, law and technology.

Future only through interaction

When we look at current developments, it becomes clear that the future does not belong to machines alone, but to collaboration between humans and AI.

  • Critical thinking remains indispensable.
  • AI is most valuable as a sparring partner, not as a replacement.
  • Ethics must be lived in practice and not just discussed.

I am convinced that only in this combination can we exploit the potential of AI and at the same time ensure that it works in the interests of our society.


Any further questions?

I would be happy to support you in the development of your own custom GPTs or with the question of how you can cleverly integrate context into your AI systems – taking data protection into account, of course. I am always happy to receive your messages.

Preferably by WhatsApp message or e-mail.

Frequently asked questions (FAQ)

1. Why is critical thinking so important when dealing with AI?
Critical thinking helps us to question AI outputs, recognize errors and make informed decisions. Without this ability, there is a risk of accepting incorrect or distorted information without checking it.

2. Can AI ever replace critical thinking?
No. AI can analyze data and recognize patterns, but critical thinking is a human skill based on values, experience and context.

3. What does human-AI collaboration mean?
Human-AI collaboration describes the cooperation between humans and artificial intelligence. Humans use the strengths of AI – speed, data analysis – but contribute their own judgment and ethical thinking.

4. What risks arise if we use AI in an uncontrolled manner?
Uncontrolled use of AI can lead to wrong decisions, hallucinations, manipulation or ethical conflicts. Without human supervision, responsibility remains unclear.

5. What is the difference between human-in-the-loop and human-on-the-loop?

  • Human-in-the-loop: The human checks every AI decision.
  • Human-on-the-loop: The human monitors, but only intervenes when necessary.
    Both models ensure that responsibility remains with humans.

6. How can companies promote critical thinking when dealing with AI?
Through training, workshops and reflection loops. Employees should learn to question AI results and not just blindly accept them.

7. What is AI literacy?
AI literacy is competence in dealing with AI. It encompasses technical knowledge, critical thinking and ethical awareness in working with AI systems.

8. Why is a human-AI combination more sustainable than pure automation?
Pure automation ignores ethical and social values. Only the combination of technical efficiency (AI) and human responsibility is sustainable in the long term.

9. What role does ethics play in the use of AI?
Ethics ensures that AI systems are used fairly, transparently and responsibly. It protects against misuse and builds trust in the technology.

10. How should I best view AI in everyday life – as a tool or as a partner?
It makes the most sense to view AI as a sparring partner or coach. It provides valuable support, but never replaces the responsibility of the individual.

Book now
Your personal consultation

Do you need support or have questions? Then simply make an appointment with me and get a personal consultation. I look forward to hearing from you!

