Chatbots are becoming an increasingly important means of communication in most industries, including the financial sector. They are no longer just a piece of technology; psychological aspects matter as well. If a conversational agent is to represent a company, or to replace or supplement communication with a human representative, the bot's personality and tonality play an increasingly important role alongside its technical features. When people communicate with each other, emotions and a certain degree of empathy can rarely be ruled out, and the same is increasingly true for chatbots. In practice as well as in research, the question keeps coming up of which emotions, and how many, a chatbot should actually be allowed to show.
Sophie Hundertmark, a doctoral student at the University of Fribourg and research assistant at the Institute of Financial Services Zug (IFZ), developed three hypotheses based on an extensive literature review and tested them in an experiment with insurance customers. The results largely confirm the hypotheses.
When taking out supplementary insurance or requesting information about it, emotions on the part of the chatbot are generally well received and even desired. A chatbot that shows friendly emotions in the context of supplementary insurance is recommended more often than one that shows no emotions. Moreover, a chatbot without any emotions makes users ask for a human advisor more quickly.
For “reporting damage”, users generally prefer a chatbot without emotions. They simply want their request handled quickly, along with confirmation that the bot has registered it.
For the “change address” use case, hardly any differences were found between the two chatbot versions. In principle, both were rated as useful and suitable.
The following three statements can be summarized from the experiment:
- When users are in a negative mood, they don’t want the chatbot to show any exaggerated emotions. Example: damage report (insurance), lost card (bank), complaint (retail).
- When users are in a positive mood or are in a consultation process, emotions are desired from the chatbot. Example: premium advice (insurance), account advice (bank), general advice (retail).
- For routine situations in which the user feels little emotion and simply wants the case completed quickly, the chatbot’s emotions hardly matter. They can therefore be omitted, but they also do little harm if present. Example: change address (insurance), request bank statement (bank), request invoice (retail). A simple decision rule along these lines is sketched below.
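
The decision rule implied by these three statements can be illustrated with a small sketch. The code below is purely illustrative; the use-case names, mood categories, and tonality labels are assumptions made for this example and are not part of the study.

```python
# Illustrative only: hypothetical mapping of use cases to a chatbot tonality,
# reflecting the three findings above. All names are assumptions.

# Use cases grouped by the emotional situation they typically put the user in.
USE_CASE_MOOD = {
    "report_claim": "negative",    # e.g. damage report, lost card, complaint
    "premium_advice": "positive",  # e.g. consultation processes
    "change_address": "neutral",   # routine, transactional requests
}

# Suggested bot tonality per mood, following the findings above.
MOOD_TO_TONALITY = {
    "negative": "factual",     # no exaggerated emotions, confirm receipt quickly
    "positive": "empathetic",  # friendly, emotional wording is welcomed
    "neutral": "either",       # emotions optional, little effect either way
}

def choose_tonality(use_case: str) -> str:
    """Return a suggested tonality for a given use case (default: neutral case)."""
    mood = USE_CASE_MOOD.get(use_case, "neutral")
    return MOOD_TO_TONALITY[mood]

print(choose_tonality("report_claim"))    # -> "factual"
print(choose_tonality("premium_advice"))  # -> "empathetic"
```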
These findings open up considerable potential for further research. Follow-up studies will now investigate and specify the relationships between the user’s situation and emotional state and the bot’s tonality. At the same time, according to Sophie Hundertmark, algorithms must be developed that allow the bot to work out for itself what emotional state the user is currently in, so that it can respond appropriately. In practice it will not be enough to classify the emotional state on the basis of the use case alone; the individual user, who may also bring a particular history, must be taken into account. The disciplines of fuzzy logic and computing with words can be expected to play a role here; both are closely related and enable a software system to process imprecise terms and natural language.
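
As a rough illustration of how fuzzy logic could be used to select a tonality, the sketch below fuzzifies a sentiment score for the user’s last message into mood memberships and picks the tonality with the strongest membership. This is a minimal sketch under assumed thresholds and labels, not the algorithm the research proposes; the sentiment score itself is assumed to come from some upstream analysis.

```python
# Minimal fuzzy-logic sketch (plain Python, no libraries), assuming the bot
# already has a sentiment score in [-1, 1] for the user's last message.
# Thresholds and labels are illustrative assumptions, not from the study.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_mood(sentiment: float) -> dict:
    """Map a sentiment score in [-1, 1] to fuzzy membership degrees per mood."""
    return {
        "negative": triangular(sentiment, -1.5, -1.0, 0.0),
        "neutral":  triangular(sentiment, -0.5,  0.0, 0.5),
        "positive": triangular(sentiment,  0.0,  1.0, 1.5),
    }

def select_tonality(sentiment: float) -> str:
    """Pick the tonality whose mood has the strongest membership."""
    memberships = fuzzify_mood(sentiment)
    mood = max(memberships, key=memberships.get)
    return {"negative": "factual", "neutral": "either", "positive": "empathetic"}[mood]

print(select_tonality(-0.8))  # -> "factual"
print(select_tonality(0.7))   # -> "empathetic"
```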
The complete results of the experiment can be downloaded here.

About the author Sophie Hundertmark
Sophie Hundertmark is a doctoral student at the University of Fribourg and works as a research assistant at the Institute of Financial Services Zug (IFZ) at Lucerne University of Applied Sciences and Arts. She also works as a freelance chatbot consultant, primarily advising banks and insurance companies across the DACH region.

Contact: