OpenAI says it is working to correct the chatbot’s “hallucinations.”
Although everyone is surprised by the technical skill ChatGPT displays in its responses, the software is not free of errors: on some occasions it presents bugs that show it is a technology still in progress, with a wide margin for improvement.
The way generative text AIs like ChatGPT work can sometimes lead to what is known in the jargon as a “hallucination,” which happens when a machine provides a convincing but completely made-up answer.
Thus, the ChatGPT-based version of Bing began to produce these hallucinations, even more than the original ChatGPT, which had already exhibited them, responding with detailed but false data to questions about the record for crossing the English Channel on foot or the last time the Golden Gate was transported through Egypt.
To make their chatbot technology more reliable, OpenAI engineers have revealed that they are currently focusing on improving their software to reduce and hopefully eliminate these problematic occurrences.
A group of AI researchers at OpenAI revealed the company’s plans to curb these hallucinations. They explain that “even the most advanced models are prone” to produce them, since they tend to “invent facts in times of uncertainty.”
“These hallucinations are particularly problematic in domains that require multi-step reasoning, as a single logical error is enough to derail a much larger solution,” they added.
In this sense, they also say that if the company’s goal is to eventually create artificial general intelligence (a concept commonly known as AGI), mitigating this type of hallucination is critical.
To address these chatbot bugs, OpenAI engineers are working on ways for their AI models to be rewarded for each correct step of reasoning on the way to an answer, instead of being rewarded only for the final result.
This model has been trained on a data set of more than 800,000 human-generated labels, and in early tests it has achieved notably better results than models based on supervising only the final outcome.
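The difference between the two reward schemes can be sketched in a few lines. This is an illustrative toy example, not OpenAI’s actual training code: it scores a multi-step solution under outcome supervision (only the final answer counts) and under process supervision (every correct intermediate step earns credit).

```python
# Toy sketch of the two reward schemes described above. The step data and
# function names are invented for illustration; they are not an OpenAI API.

def outcome_reward(steps):
    """Outcome supervision: reward depends only on the final answer."""
    return 1.0 if steps[-1]["correct"] else 0.0

def process_reward(steps):
    """Process supervision: each correct intermediate step earns reward,
    so a single logical error no longer zeroes out the whole solution."""
    return sum(1.0 for step in steps if step["correct"]) / len(steps)

# A four-step solution whose third step contains a logical error that
# propagates to the final answer.
solution = [
    {"text": "12 * 3 = 36", "correct": True},
    {"text": "36 + 4 = 40", "correct": True},
    {"text": "40 / 2 = 15", "correct": False},  # the single error
    {"text": "answer: 15", "correct": False},
]

print(outcome_reward(solution))  # 0.0: outcome supervision sees total failure
print(process_reward(solution))  # 0.5: process supervision still credits the valid steps
```

This illustrates why, as the researchers note, a single logical error is so costly under outcome-only supervision: the model gets no signal about which steps were actually sound.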
And while it is important that OpenAI is working to resolve this flaw, it could be a while before the changes are reflected. In the meantime, as OpenAI itself warns, ChatGPT can occasionally generate incorrect information, so it is key to verify its answers if they are part of any important task.
Refine questions to minimize error
ChatGPT, like its competitors, needs people to be clear and avoid rambling, because otherwise the results will be generic and imprecise.
At this point it is also important to direct the response: if a recommendation about a Renaissance painter is needed, it is essential to include the relevant period details in the request so the chatbot understands where the interest lies.
All this is key because AI easily fails when it lacks context; for example, creating a script for a YouTube video is not the same as creating a script for a television program.