Is Hallucination a Vehicle for Creativity?

Introduction

A few weeks ago, I had a realization: hallucination might be the key to creativity. When analyzing Large Language Models (LLMs), I noticed that we impose countless restrictions on their generation capabilities. Some are dictated by safety concerns, others by user expectations—but ultimately, we aim to make these models conform to our standards. This approach, though necessary, raises an important question: Are we stifling AI’s creative potential?

For instance, consider this LinkedIn post. The author questions why AI-generated code must adhere to bloated frameworks. Could an AI, left unrestricted, develop a more efficient coding method? If so, are we the bottleneck in AI’s creative evolution?

Here is another LinkedIn post discussing the evolution of AI-driven development.

This thought process leads to a broader shift: agentic-driven development. I explore this in-depth in this article. However, for now, let’s focus on hallucination and its role in creativity.

Let's Dissect Human Creativity

As mere mortals, we do not create from thin air—we rely on inspiration. If we are to claim that hallucination fuels creativity, we first need to establish a benchmark for creativity: human creativity itself.

What is Human Creativity?

Human creativity is the ability to generate original ideas, concepts, or solutions by combining knowledge, experience, and imagination in novel ways.

From birth, humans are like neural networks initialized with random weights. Children are naturally curious, undergoing what developmental psychologists call the sensitive period—or, as I like to call it, the supervised fine-tuning phase.

Some individuals excel in structured learning, memorizing vast amounts of information (overfitting), while others detect patterns and develop new ways of thinking (generalizing). It’s the latter group that we identify as creative.
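
To make the overfitting-versus-generalizing analogy concrete, here is a toy sketch (illustrative only, not a model of cognition): a "memorizer" that stores training examples verbatim, versus a "generalizer" that extracts the underlying pattern and can extrapolate to inputs it has never seen.

```python
# Toy illustration: memorization vs. generalization on data from y = 2x + 1.
train = [(x, 2 * x + 1) for x in range(5)]  # (0, 1) ... (4, 9)

# Memorizer: stores every example verbatim ("overfitting" to the letter).
lookup = dict(train)

# Generalizer: fits a line with closed-form least squares (pure Python).
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

x_new = 10  # an input neither has seen before
print(lookup.get(x_new))          # None: memorization fails off-distribution
print(slope * x_new + intercept)  # 21.0: the learned pattern extrapolates
```

The memorizer is flawless on the training set and helpless beyond it; the generalizer recovers the rule and keeps working on novel inputs, which is the behavior we label "creative" in the analogy above.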

The environment, upbringing, and exposure to novel ideas play crucial roles in fostering creativity. For example:

  • In some cultures, academic performance is prioritized, leading students to memorize facts rather than explore deeper patterns.
  • Others encourage trial and error, risk-taking, and curiosity, fostering innovation.
  • Institutions like Harvard and MIT often select students who thrive in unsupervised learning environments, reinforcing the idea that creativity flourishes under open-ended exploration.

Hallucination as a Catalyst for Creativity

The idea that hallucination fuels creativity is not new. Many researchers have explored this connection, leading me to investigate several key resources:

What Research Says About Hallucination & Creativity

Studies have found that hallucination in AI resembles human creativity in how it generates novel outputs. Research on human cognition suggests that creative breakthroughs often stem from errors, misinterpretations, and re-imagining existing knowledge. Hallucination in LLMs functions similarly—it generates unexpected associations that may lead to innovation.

For example:

  • A study on AI-generated drug discovery found that hallucination led to the identification of new molecular structures beyond what was available in training data.
  • In art and design, AI hallucinations have produced unique, surreal compositions that human artists later refined into meaningful pieces.
  • Mathematical proofs and scientific discovery: Some AI models have generated new proofs by making seemingly illogical leaps that, upon investigation, turned out to be useful.

What is Hallucination in LLMs?

Hallucination in AI refers to the generation of text or data that is factually incorrect, nonsensical, or irrelevant to the prompt, often appearing plausible but ultimately misleading.
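
One concrete lever behind this behavior is sampling temperature. The sketch below (plain Python with hypothetical next-token logits, purely for illustration) shows how raising the temperature flattens the next-token distribution, giving unlikely tokens more probability mass. This is one reason higher-temperature decoding tends to produce more surprising, and more hallucination-prone, output.

```python
import math

def softmax(logits, temperature=1.0):
    # Divide logits by the temperature before normalizing: T > 1 flattens
    # the distribution (unlikely tokens gain mass), while T -> 0 approaches
    # greedy decoding (the top token takes nearly all the mass).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits, for illustration only.
tokens = ["the", "a", "quantum", "banana"]
logits = [4.0, 3.0, 1.0, 0.5]

for t in (0.5, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(t, [round(p, 3) for p in probs])
```

At temperature 0.5 the model almost always picks "the"; at 2.0 the long-shot tokens become live options. That same widening of the distribution is what lets a model wander off the factual path.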

Why is Hallucination Undesirable?

Many industries rely on LLMs for critical applications, such as customer support and legal assistance. Erroneous outputs can be costly—just ask Air Canada, which was ordered to compensate a customer after its chatbot gave misleading fare information. Efforts like Vectara’s hallucination leaderboard aim to measure and mitigate this issue.

The Role of Reinforcement Learning in Hallucination

How RL Can Guide AI’s Hallucinations

Random hallucination is not inherently useful—it must be guided. Unchecked hallucinations can snowball into an incoherent mess, making debugging nearly impossible. We need a structured approach, which Reinforcement Learning (RL) offers.

The DeepSeek-R1 paper explores this concept in depth. It introduces DeepSeek-R1-Zero, a model trained purely through RL, which naturally develops reasoning chains at the cost of readability. The researchers later introduced DeepSeek-R1, which incorporates supervised fine-tuning for better clarity and usability. If you are short on time, I highly recommend watching this YC Decoded episode by Diana Hu, where she breaks down the paper in detail here.

This research paper introduces a two-phase structured approach to harnessing hallucination for creativity using reinforcement learning:

  • Divergent Phase: Encourages LLMs to generate loosely connected, novel ideas (akin to brainstorming).
  • Convergent Phase: Filters and refines hallucinations to ensure useful and meaningful creative outputs.
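
A minimal sketch of that two-phase loop, using toy stand-ins (a real pipeline would use an LLM for the divergent generation and a learned reward model for the convergent filter; every name and scoring rule here is illustrative, not from the paper):

```python
import random

random.seed(0)

# Toy idea space: a real system would sample free-form text from an LLM.
WORDS = ["solar", "kite", "battery", "origami", "drone", "sail"]

def generate_idea():
    # Divergent phase: sample loosely connected word pairs,
    # the brainstorming analogue of high-temperature decoding.
    return tuple(random.sample(WORDS, 2))

def score(idea):
    # Convergent phase: a crude proxy reward standing in for a learned
    # reward model. Here we simply reward pairings whose words share
    # fewer letters, treating low overlap as "more surprising".
    a, b = idea
    return -len(set(a) & set(b))

# Phase 1: diverge -- over-generate candidates.
candidates = [generate_idea() for _ in range(20)]

# Phase 2: converge -- keep only the top-scoring ideas.
best = sorted(candidates, key=score, reverse=True)[:3]
print(best)
```

The pattern is generate-many-then-filter: the divergent phase deliberately tolerates noise, and the convergent phase supplies the selection pressure that turns raw hallucination into usable output.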

Examples of RL Encouraging Creativity

  • AlphaGo’s Move 37: A strategic move predicted to have a 1-in-10,000 chance of being played by a human, yet proved highly effective. (Watch here)
  • AI in scientific research: AI models trained using RL have proposed novel physics theories that were later validated by researchers.
  • OpenAI Five Defeats Dota 2 Champions: AI outperformed world champions through RL-driven strategies. (Learn more)
  • OpenAI’s Rubik’s Cube robot: A neural network trained with RL learned to manipulate and solve a Rubik’s cube with a robotic hand. You can read more about it here.

The Future of Reinforcement Learning

Reinforcement learning applied to LLMs is a young but rapidly growing area. Andrej Karpathy highlights RL as an emerging frontier: a lot of research is underway, and a lot is still unknown.

Conclusion

Hallucination is not a flaw to be eradicated—it’s a phenomenon to be harnessed. If guided systematically, it can become a powerful tool for creativity and problem-solving.

I am personally fascinated by completely unrestricted RL environments, where models learn purely through trial and error. What if we gave AI true free rein? The possibilities are endless.

Let's continue exploring this fascinating frontier together! If you have any suggestions, feedback or questions, please reach out to me on X or email.

Credits

I would like to thank Nathan Lu and Larry Qin for their insightful comments, feedback and suggestions.