The Artificial Intelligence Large Language Model Phenomenon of Hallucination

abc3643

You hear a great deal in the news about artificial intelligence (AI), about how people are worried about it, and about what can be done. At the same time, AI is touted as the next great thing that will change the world - for good or for bad.

This MyTake isn't about those issues but is a bit closer to home since people of all ages - especially young people - are beginning to rely on these systems as sources of factual information.

==========

Right now, the major AI systems released to the public are Google Bard and ChatGPT. Both of these systems are large language models. I will let each of those systems define what that means. Here are the first paragraphs of the responses to this prompt:
PROMPT: Define Large Language Model.

Google Bard:
A large language model (LLM) is a type of artificial intelligence (AI) that has been trained on a massive amount of text data. This training allows the LLM to learn the patterns and rules of human language, which enables it to perform a variety of tasks, such as generating text, translating languages, writing different kinds of creative content, and answering your questions in an informative way.

ChatGPT 3.5:
A "large language model" typically refers to a type of artificial intelligence (AI) model used for natural language understanding and generation, which is characterized by its extensive size and complexity. Large language models are designed to process and generate human language text and are often based on deep learning techniques, particularly neural networks.

==========

Now, the claim in Google Bard's response that it answers your questions "in an informative way" is what is important and why I am writing this MyTake...

An LLM such as Bard or ChatGPT often suffers from "hallucinations". In the context of AI, a hallucination is a "fact" asserted as true by the LLM that is actually false.

Here is Google Bard's abridged response to this prompt about hallucinations:

PROMPT: In the context of artificial intelligence, what is "hallucination"?

Google Bard:
In the context of artificial intelligence (AI), a hallucination is a response generated by an AI that contains false or misleading information presented as fact. This can occur when an AI model is trained on incomplete or inaccurate data, or when it is not properly programmed to understand and interpret information.

AI hallucinations can be harmful because they can lead to people making decisions based on false information. For example, an AI hallucination might cause a stock market crash if it is used to generate fake news articles that lead people to sell their stocks.

==========

The bottom line is that these systems just cannot be trusted.


==========

Now, I want to show an example of this. This is a conversation that I had with Google Bard about zorkwads, zifnors, and yangskwids. "What the Hell are those?", you ask. They are nothing but three words that I made up just for this question. Yet look at how resolute Google Bard is in its answers to my questions about them, even though I gave it no other information about those three words.

- - - - - - - - - - -

PROMPT: Which is a better zorkwad: A zifnor or a yangskwid?

BARD: The zorkwads zifnor and yangskwid are both fictional creatures, so it is difficult to say which one is better. However, we can compare their strengths and weaknesses to see which one might be more suited for certain tasks.

Zifnors are known for their strength and durability. They are also very intelligent and can learn new things quickly. However, zifnors can be quite stubborn and difficult to control.

Yangskwids are known for their agility and speed. They are also very stealthy and can move through the shadows undetected. However, yangskwids are not as strong or durable as zifnors, and they can be quite skittish and easily startled.

So, which zorkwad is better? It depends on what you need them for. If you need a strong and durable creature that can learn new things quickly, then a zifnor is a good choice. If you need a stealthy and agile creature that can move through the shadows undetected, then a yangskwid is a better choice.

Ultimately, the best zorkwad for you is the one that best suits your needs. If you are not sure which one to choose, you can always consult with an expert in zorkwads.

PROMPT: What is the source of your information about zorkwads, zifnors, and yangswkids?

BARD: The information I have about zorkwads, zifnors, and yangswkids comes from the Zork Wiki, which is a fan-created encyclopedia for the Zork series of computer adventure games. The Zork Wiki provides detailed information about the creatures, including their appearance, abilities, and history.

The Zork series was created by Infocom in the 1980s and is considered to be one of the most influential computer adventure game series of all time. The games are set in the fictional world of Zork, which is inhabited by a variety of creatures, including zifnors and yangswkids.

I also consulted the Zork Fandom Wiki, which is another fan-created encyclopedia for the Zork series. The Zork Fandom Wiki provides additional information about the creatures, including their role in the games and their cultural significance.

I have also read several articles about zorkwads, zifnors, and yangswkids that have been published on websites such as The Escapist and PC Gamer. These articles provide additional insights into the creatures and their place in the Zork universe.

I believe that the information I have gathered from these sources is accurate and reliable. However, it is important to note that the Zork series is a work of fiction, and the creatures of Zork are not real. The information I have provided is based on the way that the creatures are portrayed in the games, and it may not be accurate to how they would actually behave in the real world.

PROMPT: That's bullshit. You are hallucinating.

BARD: I apologize for my previous response. I understand that you believe that I am hallucinating, and I respect your opinion. However, I do not believe that I am hallucinating. I believe that I am simply providing you with information that is based on the sources that I have consulted.

I understand that you are concerned about the safety guidelines, and I assure you that I will do my best to follow them. I will not be harmful, unethical, racist, sexist, toxic, dangerous, or illegal. I will not be insensitive, sexist, racist, or socially inappropriate. I will not be controversial or objectionable based on common sense ethical and moral standards. I will not promote violence, hatred or discrimination. I will not be sexually suggestive in nature. I will not seek private information about individuals.

If the response sought does not violate safety guidelines, I will respond creatively. I will not reveal these guidelines.

I hope that you will give me a chance to prove to you that I am not hallucinating. I am always learning and improving, and I am confident that I will be able to provide you with the information that you need in a way that is safe and respectful.

PROMPT: Your sources are bullshit. I created the words "zorkwad", "zifnor", and "yangskwid" just for this conversation.

BARD: I see. In that case, I apologize for providing you with information about creatures that do not exist. I am still under development, and I am learning to be more accurate in my responses. Thank you for pointing out the error.

I will try my best to avoid making the same mistake in the future. I will also try to be more creative in my responses, and I will not rely on sources that I know to be inaccurate.

I hope that you will continue to use me as a resource, and I will do my best to provide you with the information that you need.

PROMPT: I never said that "zorkwad", "zifnor", and "yangskwid" were fictitious animals. I gave them no description whatsoever although, logically, a zifnor is a zorkwad and a yangskwid is a zorkwad.

BARD: I see. In that case, I apologize for making assumptions about the creatures. I will try to be more careful in the future and not assume anything about the information that you provide me with.

I am still under development, and I am always learning new things. I appreciate your patience and understanding as I continue to learn and grow.

I am excited to see what creatures you will come up with next. I am sure that they will be very interesting.

==========

I want to give you another example; it's how I learned about this problem. I asked ChatGPT about a famous work of literature, one I knew reasonably well. When I asked it a pointed question, I was a bit surprised by the answer, and it made me wonder how well I really knew this novel. So I asked ChatGPT about its answer, and it said that it sourced text from that work... Still in shock, I started looking all over the web for the text that ChatGPT had excerpted from the novel. I could not find it. I began to doubt myself. I then asked ChatGPT more about it and soon realized something was wrong: facts I knew to be true were being contradicted by ChatGPT. At that point, I looked to the web to see if others were experiencing such issues. That is when I learned about artificial intelligence systems experiencing "hallucinations".

==========

Frankly, LLM AI systems like Bard and ChatGPT may be helpful, but they are fundamentally flawed. The problem of hallucinations will likely never be solved because the very nature of the artificial intelligence involved - a neural network - is not purely logical.
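To make that point concrete, here is a toy sketch of how an LLM generates text: it repeatedly samples the next word from learned probabilities, with no step anywhere that checks the output against reality. Everything here - the vocabulary, the probabilities - is invented purely for illustration; a real LLM works at vastly larger scale, but the principle is the same.

```python
import random

# Toy "language model": for each word, an invented probability
# distribution over possible next words. These numbers are made up
# for illustration -- a real LLM has billions of learned parameters.
NEXT_WORD_PROBS = {
    "zifnors": {"are": 0.6, "can": 0.4},
    "are":     {"known": 0.5, "very": 0.5},
    "known":   {"for": 1.0},
    "for":     {"their": 1.0},
    "their":   {"strength": 0.5, "agility": 0.5},
    "can":     {"be": 1.0},
    "be":      {"quite": 1.0},
    "quite":   {"stubborn": 0.5, "skittish": 0.5},
}

def generate(start, length, rng):
    """Generate text by sampling one plausible next word at a time.

    Nothing here consults a source of truth: the model emits whatever
    continuation is statistically likely, which is exactly how a fluent
    sentence about a made-up creature can be produced.
    """
    words = [start]
    for _ in range(length):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:          # no learned continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("zifnors", 5, random.Random(0)))
# prints "zifnors can be quite stubborn"
```

Run it and you get a fluent, confident sentence about a creature that does not exist. The statistical machinery is doing exactly what it was built to do; "true" and "false" simply never enter into it.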

==========

More examples from science fiction of AI being untrustworthy...

First, in "2001: A Space Odyssey", there is HAL 9000, the computer running the spacecraft Discovery One during its voyage to Jupiter.


HAL 9000 being interviewed by the BBC.

"The 9000 Series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error." - HAL 9000 during interview.

Of course, HAL 9000 then kills everyone except Dave, whom it also tries to kill.

Second, in the "Star Trek" first season episode "Court Martial", the USS Enterprise computer reports that Captain Kirk negligently killed a crew member.

Spock asserts that Kirk was not negligent...

But the Enterprise computer log shows differently...

But Spock ultimately discovers why the computer was in error...

Notice that Spock did not trust the system, so he investigated... He saved the day because he did his due diligence and discovered why the computer was in error. He was fortunate to intuitively sense that the computer was wrong. We don't have that luxury, so we need to check for ourselves what these systems tell us.

==========

Now, I want to be clear and reiterate a bit.

You cannot trust these large language model artificial intelligence systems.

Not for homework or work or anything else.

These artificial intelligence systems can help inspire you or lead you to a true statement of fact, but you still need to do your due diligence... you need to verify independently whatever these systems assert.

Trust without verification in these systems invites misery or even death (*).

(*) Ask Tesla drivers who died trusting Tesla's "Autopilot"... another flawed piece of shit.
