
What a Museum Exhibition Taught Me About How We Decide What’s True in the Age of AI

In my opinion by Amelia Bentley

I recently attended an exhibition at the State Library Victoria, Make Believe, which reminded me that misinformation isn’t a problem of the digital age but a feature of human thinking. The exhibition explored how psychology, cognitive shortcuts, and emotion shape what we accept as truth.

Walking through the exhibition made it clear that misinformation spread long before the digital age; modern technologies simply amplify its reach and impact. Those same technologies, however, can also amplify accurate knowledge, broaden access to learning, and support people who were previously shut out of it.

As new technologies flood us with information, fact and fiction increasingly arrive dressed with the same confidence. The question is how to harness the benefits of these tools while minimising the harm.

Understanding hallucinations without fear

A hallucination occurs when a generative AI system produces information that sounds plausible but isn’t based on verified sources. Misinformation is when false information is believed, shared, or acted on as if it were true.

The link between the two is human. Hallucinations only become misinformation when people treat AI outputs as fact without verifying them. 

It’s easy to think of hallucinations as technical bugs, but they aren’t. They’re a feature of how generative systems work, producing statistically likely answers rather than verified truth. That design is what allows these tools to be fast and accessible.
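That "statistically likely rather than verified" distinction can be made concrete with a toy sketch. The probabilities and the prompt below are invented purely for illustration; real models work over tokens and billions of parameters, but the core move is the same: pick a continuation by likelihood, with no fact-checking step anywhere in the loop.

```python
import random

# Invented, illustrative probabilities -- not from any real model.
# A generative system weights continuations by how likely they are
# to follow the prompt, not by whether they are true.
next_word_probs = {
    "The institute was founded in": {"1952": 0.55, "1961": 0.30, "1848": 0.15},
}

def complete(prompt: str, rng: random.Random) -> str:
    """Sample one continuation, weighted by statistical likelihood alone."""
    probs = next_word_probs[prompt]
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
# Whatever comes out reads as fluent and confident, but nothing in
# this loop ever consulted a source -- plausible, not verified.
print(complete("The institute was founded in", rng))
```

Every candidate answer here is delivered with the same confidence; only one (at most) would be correct, which is exactly why the verification step has to live with the human reader.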

A $440,000 report prepared by Deloitte for the Australian government was found to contain AI-generated fabrications. These errors were so believable that they bypassed internal review within a major global firm before academics uncovered them.

If hallucinations can slip past even highly resourced organisations and be believed as fact, that underscores the importance of human judgement and critical thinking in stopping hallucinations from becoming misinformation.

How human cognition shapes what we trust

As the exhibition highlighted, humans are not neutral processors of information, and we never have been. AI hallucinations expose this vulnerability: it falls to human judgment to distinguish plausibility from truth.

One reason our understanding of information is often subjective is confirmation bias: our tendency to seek out and believe information that aligns with what we already think. As cognitive psychologist Raymond Nickerson notes, people rarely approach information neutrally; we evaluate new facts through the lens of our existing beliefs, not the other way around (Nickerson, 1998). This means that when misinformation is spread, it is not always because someone believes it to be true, but because they want it to be.

Another reason is cognitive ease. Daniel Kahneman, a behavioural psychologist, describes how information that is simpler and easier to process is more likely to be believed. These shortcuts evolved to help us make fast decisions under uncertainty, not to navigate infinite, high-confidence information environments.

These are just two examples of how our thinking can be influenced and how tools like generative AI can magnify those effects.

Balancing risk and reward in generative AI

Misinformation may be amplified by technology, but it is not just a technological problem. It is a human one. Hallucinations are not defects in generative AI; they are a feature of systems designed to produce fast, accessible approximations of human knowledge. Misinformation emerges when generative AI outputs are trusted or acted on without context or verification.

That does not mean we should stop using these tools. Generative AI offers real rewards, increasing accessibility to information, learning, and creative ideas. But those benefits depend on people being equipped to use the technology critically.

Our task is not to eliminate hallucinations. It is to invest in human capability by building the judgment, literacy, and confidence people need to decide when approximation is enough and when accuracy truly matters. That is how we reduce harm while unlocking the full value these tools can offer.

Author’s note: This piece is grounded in my own experience and existing research. It was written by me, a human, with a little help from ChatGPT as an editing helper.

Sources

https://www.slv.vic.gov.au/exhibitions/make-believe/ 

https://www.1news.co.nz/2025/10/16/nz-makes-first-deepfake-porn-prosecution-but-are-we-equipped-for-ai-onslaught/ 

https://ia800603.us.archive.org/10/items/DanielKahnemanThinkingFastAndSlow/Daniel%20Kahneman-Thinking%2C%20Fast%20and%20Slow%20%20.pdf 

https://psycnet.apa.org/record/2018-70006-003

About Amelia

Amelia Bentley is a contributing author for the AI Assembly and a Data & AI Scientist at HazardCo. Standing at the intersection of data science and psychology, Amelia is redefining how we interact with machines. Her work focuses on bridging the gap between complex code and human behavior, ensuring Kiwi businesses adopt new technologies with a people-first mindset.

A champion for diversity in tech, Amelia is a founding member of our HerAIStory community. She recently shared her expertise as a panelist at the Wellington HerAIStory event, helping to shape the narrative for women in AI.
