Over the past few months, I have had the opportunity to support various chatbot projects that use Generative AI in general or the ChatGPT API in particular. Along the way, I gained numerous insights that I would like to share with you in this article. These findings are based on practical experience, exchanges with other experts and scientific research. That said, I gather new experiences almost every day and will continue to share them with you. If you don’t want to miss any news in the future, I would like to invite you to join my WhatsApp group. Just click on the link and join.
What is Generative AI?
Generative AI is a type of artificial intelligence (AI) that aims to create something new, such as text, images, music or videos. Unlike other types of AI, which are designed to analyze existing information or perform specific tasks, generative AI attempts to create something new based on what it has learned from a large amount of data. It uses patterns and structures in the training data to generate new content that is similar to what it has seen before. It is, so to speak, a creative machine that can produce content in a way that was previously reserved for humans. Generative AI is used in many areas, from art and entertainment to research and development.
What are Large Language Models?
A Large Language Model (LLM) is a special type of generative AI model: a computerized system trained to generate human-like text in natural language and respond to questions. It uses a large amount of training data to learn language patterns and generate new texts that appear human-like. LLMs are used for a wide range of tasks, such as writing articles, answering questions, creating dialogs and much more. Generative AI, however, also includes other models and approaches that generate content in other media such as images, music or videos.
What is a Generative AI chatbot?
Generative AI chatbots, often simply called ChatGPT chatbots, are chatbots that use LLMs to understand and process the user’s request and generate an appropriate response. To create the answer, the chatbot combines the language model on which it is based with individual knowledge databases. Companies that use the chatbot can integrate these knowledge databases into the bot system themselves. Advanced generative AI bots also contain relevant prompts that give the chatbot additional “rules” for answering user queries. You can read more about prompts for chatbots in one of my recent posts. By the way, I recently published a whole article about generative AI chatbots.
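To make this concrete, here is a minimal, vendor-neutral sketch of how such a chatbot could assemble a request: the prompt “rules”, the retrieved knowledge snippets and the user’s question are combined into one message list for the LLM. The function, the rules text and the sample knowledge are illustrative assumptions, not any provider’s actual API.

```python
# Illustrative sketch: combining prompt rules, company knowledge and the
# user's question into a chat-style message payload for an LLM.

SYSTEM_RULES = (
    "You are a support chatbot. Answer only from the provided knowledge. "
    "If the answer is not covered, say so and offer a human contact."
)

def build_messages(knowledge_snippets, user_question):
    """Combine rules, knowledge-base snippets and the user query."""
    knowledge_block = "\n".join(f"- {s}" for s in knowledge_snippets)
    return [
        {"role": "system",
         "content": f"{SYSTEM_RULES}\n\nKnowledge:\n{knowledge_block}"},
        {"role": "user", "content": user_question},
    ]

messages = build_messages(
    ["Opening hours: Mon-Fri 9-17", "Returns accepted within 30 days"],
    "When are you open?",
)
```

The resulting list is the typical shape a chat-completion endpoint expects; the actual API call would take it from here.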
Best practices and experiences with generative AI chatbots
- Start with a simple use case: Choosing the right use case is more important than ever. You want to gain your first experience, but you also don’t want to overwhelm yourself and your colleagues. My experience has shown that use cases around FAQs – i.e. chatbots that answer frequent, repetitive questions – are very good starting cases for a first Generative AI chatbot.
- Don’t do everything on your own: There are now some good chatbot providers that have integrated the use of Generative AI into their tool. This means that you no longer have to connect to an LLM yourself, but can benefit from the experience of others. I have even created a provider overview.
- Don’t forget prompt engineering: If you want to have more than just natural conversations with your bot, don’t forget prompt engineering. With the right prompts or commands, you can significantly increase the quality of your bot.
- Not everything has to be generative AI: Of course, it is tempting to have every dialog created with the help of an LLM. But every request to the ChatGPT API costs money. So think about which requests can benefit from an LLM and which requests can be dealt with without one.
- Data protection and sensitive data: In principle, it is also possible to process sensitive data with the help of Generative AI. However, this places higher demands on data protection and IT architecture. It is advisable to launch the first MVP without sensitive data if possible.
- Quality over quantity for training data: The well-known principle of “shit in – shit out” applies to training data and connected knowledge databases. It therefore makes no sense to simply feed the LLM as much content as possible without having checked its quality. For example, it is often not useful to connect the entire website as a knowledge source, but rather only the pages that are truly relevant.
- Nobody’s chatbot is perfect: Even though generative AI and LLMs have taken the quality of chatbots to a whole new level, the fact remains that no chatbot will ever be perfect. It is advisable to test the chatbot before launching it and eliminate any major errors. But then you should dare to publish the “Chatbot Experiment” and continuously improve it over the course of its use.
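The point that not everything has to be generative AI can be sketched in a few lines: frequent, exact-match questions get a canned answer for free, and only unmatched requests would be forwarded to a paid LLM call. The FAQ table and the stubbed `call_llm` function below are made-up examples, not a real routing library.

```python
# Sketch of cost-aware routing: cheap FAQ lookup first, LLM only as fallback.

FAQ_ANSWERS = {
    "what are your opening hours": "We are open Mon-Fri, 9-17.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def call_llm(question: str) -> str:
    # Placeholder for a real (paid) ChatGPT API call.
    return f"[LLM answer for: {question}]"

def answer(user_question: str) -> tuple[str, bool]:
    """Return (answer, used_llm). Check the free FAQ table before the LLM."""
    key = user_question.strip().lower().rstrip("?")
    if key in FAQ_ANSWERS:
        return FAQ_ANSWERS[key], False   # answered without API cost
    return call_llm(user_question), True  # would incur API cost
```

In a real bot the lookup would be fuzzier (intent matching instead of exact strings), but the cost logic stays the same: only route to the LLM what actually benefits from it.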
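The “quality over quantity” advice for knowledge sources can likewise be sketched as a simple filter: rather than ingesting a whole website, keep only pages that pass basic relevance checks. The page records, thresholds and section names here are invented for illustration.

```python
# Sketch of knowledge-source curation: filter crawled pages before they
# ever reach the chatbot's knowledge database.

def select_knowledge_pages(pages, min_words=50,
                           allowed_sections=("faq", "support")):
    """Keep pages that are substantial enough and live in a relevant section."""
    return [
        p for p in pages
        if p["word_count"] >= min_words and p["section"] in allowed_sections
    ]

pages = [
    {"url": "/faq/returns", "section": "faq", "word_count": 220},
    {"url": "/press/2019-gala", "section": "press", "word_count": 400},
    {"url": "/support/login", "section": "support", "word_count": 30},
]
relevant = select_knowledge_pages(pages)
```

Only the substantial FAQ page survives the filter; the press release and the near-empty support stub are dropped before they can dilute the bot’s answers.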
Conclusion: The most important learning
The learning curve continues to rise and the learning process will never be finished. We all have to deal – some more, some less – with the topics of ChatGPT, Generative AI, AI, chatbots, voicebots and the like, and we will constantly gather new insights.
I’m looking forward to it!
Even more insights?
Have these first examples whetted your appetite for more? No problem, just send me your questions via WhatsApp or e-mail and I will reply promptly.
Or come directly to my WhatsApp group – where I regularly post use cases, news, best practices, events and much more about chatbots, ChatGPT and co.