Rapid advances in artificial intelligence (AI) have led to the emergence of high-performance language models, such as OpenAI’s GPT-3. However, these powerful models are not without their flaws: they can produce unexpected and erroneous results, known as hallucinations. Today, we’re going to delve into this intriguing AI phenomenon, which is also one of the field’s biggest concerns and an area of active research.

This article aims to demystify this concept, exploring its causes, implications and strategies for managing it.

What is a Generative AI hallucination?

AI hallucination is a phenomenon in which AI algorithms and neural networks produce invented results: outputs that don’t match the data they were trained on or any other identifiable pattern.

This phenomenon has already been observed in various large language models (LLMs), even the most sophisticated ones such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Bing. It’s common enough that OpenAI displays a permanent warning to ChatGPT users: “ChatGPT can make mistakes. Check important info.”

Examples of AI hallucinations

If the concept still seems abstract, let’s look at some famous examples:

  • Bard (now renamed “Gemini”), Google’s chatbot, falsely claimed that the James Webb Space Telescope had captured the first images of a planet outside our solar system.
  • Sydney, the codename of Microsoft’s Bing chatbot (now Copilot), claimed to have fallen in love with a user and admitted to spying on Microsoft employees.

These examples show that users can’t always trust chatbots to provide entirely truthful answers. However, the risks associated with AI hallucinations go far beyond the dissemination of false information.

A company example: in January 2024, DPD, the European delivery company, had to withdraw its customer-service chatbot after the “nonsense” it produced during a conversation with a user. This user had discovered that the chatbot was easy to destabilize; under his prodding, it criticized DPD and advised him to turn to more reliable competitors.

So where do these hallucinations come from? Let’s take a look at why LLMs hallucinate.

Why does an LLM hallucinate?

Before we get into why AI hallucinations occur, let’s take a look at the steps involved in building a large language model.

An LLM can be seen, in very simplified terms, as a mathematical function made up of billions of parameters, whose aim is to give, for a given word, the probability of each word that could follow it. An LLM is therefore a system that responds to an input word with the most probable following word (the output).
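
To make this concrete, here is a minimal sketch in Python of what “choosing the most likely next word” means. The probability table is invented for illustration; a real LLM encodes something analogous in billions of parameters.

    # Toy "language model": for each word, the probability of the word that follows it.
    # The numbers are invented for illustration only.
    next_word_probs = {
        "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
        "cat": {"sleeps": 0.6, "eats": 0.4},
    }

    def predict_next(word):
        """Return the most probable next word, or None if the word is unknown."""
        candidates = next_word_probs.get(word)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

    print(predict_next("the"))  # -> cat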

Training

To train an LLM, it is fed a very large amount of text to absorb. From this text, it builds a function that links words to one another through probabilities. Generalist models like ChatGPT are trained on an astronomical amount of diverse data (a sizeable part of the Internet), so the mathematical functions they contain are extremely complex.
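
Training is, very roughly, the process that produces such a probability table from text. Here is a toy sketch of the idea using simple word-pair counts, which is far simpler than what a real LLM does but captures the principle:

    from collections import defaultdict

    # Toy corpus standing in for "a very large amount of text".
    corpus = "the cat sleeps the dog eats the cat eats".split()

    # Count how often each word is followed by each other word.
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(corpus, corpus[1:]):
        counts[current][following] += 1

    # Turn the counts into probabilities: loosely speaking, this is what training estimates.
    probs = {
        word: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for word, followers in counts.items()
    }

    print(probs["the"])  # {'cat': 0.666..., 'dog': 0.333...}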

The larger the LLM, the more expensive and difficult it is to train.

Fine-tuning

Fine-tuning is the next stage, when the model is adapted (or deliberately biased) to specialize in a particular subject. It is tuned with additional data so that its output conforms to that subject and produces the answers expected of it. For example, if you want the model to write novels, fine-tuning consists of having it absorb numerous literary works so that it can draw inspiration from them and generate text in the style of a novel.
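
Staying with the same toy picture (and keeping in mind that real fine-tuning adjusts billions of weights, not a count table), the idea is to continue training on domain-specific text so that the probabilities shift toward the target style:

    from collections import defaultdict

    # Word-pair counts carried over from initial training (abbreviated for illustration).
    counts = defaultdict(lambda: defaultdict(int), {
        "the": defaultdict(int, {"cat": 2, "dog": 1}),
    })

    # Domain-specific "fine-tuning" text: novel-like sentences, invented for this sketch.
    novel_corpus = "the moonlit garden glowed and the cat dreamed of the sea".split()

    # Continue counting on the new data: the probabilities shift toward the novel style.
    for current, following in zip(novel_corpus, novel_corpus[1:]):
        counts[current][following] += 1

    print(dict(counts["the"]))  # "the" is now also followed by "moonlit", "sea", ...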

And there you have it: once you have this picture of how an LLM is built, it’s easier to understand where hallucinations come from:

  • During training: when an LLM trains on large quantities of data, it absorbs a lot of information, but also the many biases that data contains.
  • During fine-tuning: bias is just as likely at this stage, since new data is injected into the model.
  • At prompting time: the instructions given to the LLM may be unclear or may contradict its training.

Hallucination occurs when the LLM tries to generate text from the prompt. Conflicts may arise between the fine-tuning and the initial training, and unidentified ambiguities can surface in the prompt itself. The model may also fail to recognize certain words, or interpret them differently from the user. For example, the term “conditions” may mean one thing to the user (physical condition), whereas the LLM, based on its training, picks a completely different meaning (a boolean condition in code).
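
An invented pair of prompts illustrates how a little added context removes this kind of ambiguity:

    Ambiguous prompt: "What conditions should I check before the run?"
    (a model steeped in code may answer about boolean conditions and program runs)

    Clearer prompt: "What physical conditions (sleep, hydration, fitness) should a runner check before a 10 km race?"
    (the added context leaves only one plausible interpretation)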

How to limit AI hallucinations?

In a world where AI adoption is accelerating, it’s essential to prevent the hallucinations and misinformation this emerging technology can generate.

Mastering training data

Training data, in other words the knowledge base, plays a key role in the quality of the results generated by generative AI. It is the source of truth the AI will use to respond. Mastering the training data means ensuring the reliability and accuracy of the content that the AI will generate.

We’ve already discussed the role of the knowledge base in a generative AI project. As a reminder, it is essential that the data fed to the AI is relevant, structured and prioritized. It is also very important to teach the model the specific technical terms or jargon of the context in which it operates, to improve understanding and minimize ambiguity.
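
One common way to exploit such a knowledge base (a generic sketch of the idea, not a description of any particular product’s pipeline) is to retrieve the most relevant passage and include it in the prompt, so the model’s answer stays anchored in known content. The entries below are invented:

    import re

    # Invented knowledge-base entries for a fictional delivery company.
    knowledge_base = [
        "Parcels are delivered within 48 hours in mainland France.",
        "Returns are free within 30 days of delivery.",
        "Customer service is open Monday to Friday, 9am to 6pm.",
    ]

    def words(text):
        """Lowercase word set, punctuation removed."""
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def retrieve(question, documents):
        """Return the document that shares the most words with the question."""
        return max(documents, key=lambda d: len(words(question) & words(d)))

    question = "How many days do I have to return a parcel after delivery?"
    context = retrieve(question, knowledge_base)
    prompt = f"Answer using only this information: {context}\nQuestion: {question}"
    print(prompt)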

Review our tips for building a chatbot powered by generative AI.

Define your expectations precisely

So that the AI can easily identify the tasks to be carried out and provide answers in line with your expectations, you need to specify them and set limits. In other words, state what you want, and what you don’t want, in terms of results. At the broader level of an AI project, this means defining precisely the scope, the objective and the expected results. When building the AI model, it’s important to involve all the stakeholders concerned to ensure both the qualitative and quantitative aspects of your system.
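
For a customer-service chatbot, this often takes the form of explicit instructions that state the scope and the expected refusal behaviour. A purely illustrative example:

    You are a customer-service assistant for <company>.
    - Only answer questions about orders, deliveries and returns.
    - If a question falls outside this scope, reply: "I can only help with orders, deliveries and returns."
    - Never invent order numbers, dates or prices; if the information is missing, say so and offer to hand over to a human agent.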

Testing and continuous improvement

Even if you take all the necessary precautions, AI hallucinations remain possible. Minimizing their impact on decision-making means subjecting the AI model to thorough testing before putting it to use. This makes it possible to evaluate the model’s performance and apply any necessary corrective measures, even after it has gone into production. Regularly updating the knowledge base is another essential element in keeping the LLM running smoothly and maintaining its ability to provide relevant, consistent and reliable information.
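
Such tests can be as simple as a list of reference questions whose answers must contain known facts. A minimal sketch, in which ask_model is a placeholder for your own model call and the test data is invented:

    # Minimal regression test for a Q&A model: each answer must contain known facts.
    test_cases = [
        {"question": "What is the return window?", "must_contain": ["30 days"]},
        {"question": "What is the delivery time in mainland France?", "must_contain": ["48 hours"]},
    ]

    def ask_model(question):
        """Placeholder: replace with a call to your deployed model."""
        return "Returns are accepted within 30 days of delivery."

    failures = []
    for case in test_cases:
        answer = ask_model(case["question"])
        missing = [fact for fact in case["must_contain"] if fact not in answer]
        if missing:
            failures.append((case["question"], missing))

    print(f"{len(test_cases) - len(failures)}/{len(test_cases)} tests passed")
    for question, missing in failures:
        print(f"FAILED: {question!r} is missing {missing}")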

Constant supervision

However sophisticated AI models become, they remain subject to error and bias, so it is essential to monitor them. Constant human supervision during the development and deployment of the LLM helps ensure that it runs smoothly and meets the desired reliability, ethical and business objectives. Supervision means examining the results generated by the AI to identify the cause of any errors and correct them.
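
One common supervision pattern, sketched here under the assumption that your system exposes some confidence signal (which not all models do), is to route doubtful answers to a human reviewer before they reach the user:

    # Illustrative human-in-the-loop gate: low-confidence answers, or answers touching
    # sensitive topics, are queued for human review instead of being sent directly.
    REVIEW_THRESHOLD = 0.8
    SENSITIVE_TOPICS = ("refund", "legal", "complaint")

    def needs_human_review(answer, confidence):
        if confidence < REVIEW_THRESHOLD:
            return True
        return any(topic in answer.lower() for topic in SENSITIVE_TOPICS)

    print(needs_human_review("Your parcel left our warehouse yesterday.", 0.55))  # True
    print(needs_human_review("Delivery takes 48 hours in mainland France.", 0.93))  # False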

Spellz and its specialized, sovereign, zero-hallucination model

In addition to the personal-data security challenges posed by generalist or open-source models, these models do not offer enough flexibility to adapt to complex business processes. Consequently, niche LLMs, highly adaptable to specific business needs, become an obvious choice for companies interested in generative AI to automate or accelerate their internal processes.

Given these needs, Spellz chose the niche approach of fine-tuning its LLM with vertical data (Customer Relations) and building a software solution to adapt the model to sector-specific business processes.

The Spellz model is fine-tuned with data from the world of customer relations. This non-generalist, narrow focus has facilitated the implementation of safeguards against hallucinations. Additional training on our customers’ proprietary data also enables us to monitor LLM behavior and ensure that responses are in line with objectives. Another safeguard: the software layers that “surround” the LLM serve to identify the parts of the neural network that work best, and strengthen them over time.

What’s more, the Spellz model is hosted on servers in Europe. It is not connected to any third-party service or GAFAM. We therefore control 100% of our model and are not subject to the various changes and updates of foundation models such as GPT. This guarantees stability, security and model sovereignty for our customers.

We are delighted to have helped French and international companies to innovate their business processes thanks to our solution.

If you’d like to discuss your next generative AI project with Spellz, please don’t hesitate to get in touch 🙂
