What a Museum Exhibition Taught Me About How We Decide What’s True in the Age of AI

In my opinion by Amelia Bentley

I recently attended an exhibition at the State Library Victoria, Make Believe, which reminded me that misinformation isn’t a problem of the digital age but a feature of human thinking. The exhibition explored how psychology, cognitive shortcuts, and emotion shape what we accept as truth.

Walking through the exhibition made it clear that misinformation was spreading long before the digital age; modern technologies simply amplify its reach and impact. Those same technologies, however, can also amplify accurate knowledge, widen access to learning, and support people who were previously unable to access that knowledge.

As new technologies flood us with information, fact and fiction increasingly arrive dressed with the same confidence. The question is how we harness the benefits of these tools while minimising the harm.

Understanding hallucinations without fear

A hallucination occurs when a generative AI system produces information that sounds plausible but isn’t based on verified sources. Misinformation occurs when false information is believed, shared, or acted on as if it were true.

The link between the two is human. Hallucinations only become misinformation when people treat AI outputs as fact without verifying them. 

It’s easy to think of hallucinations as technical bugs, but they aren’t. They’re a feature of how generative systems work, producing statistically likely answers rather than verified truth. That design is what allows these tools to be fast and accessible.
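That design can be sketched with a toy model. Everything below (the vocabulary, the probabilities, the invented “Atlantis” facts) is made up for illustration; real models learn distributions over billions of tokens, but the principle is the same: the sampler ranks continuations by statistical likelihood, not by truth.

```python
import random

# Toy "language model": for each context, a distribution over likely next
# words learned from co-occurrence statistics, not from verified facts.
# All entries and probabilities here are invented for illustration.
next_word_probs = {
    "france is": {"paris": 0.9, "lyon": 0.1},
    "atlantis is": {"poseidonia": 0.6, "atlas city": 0.4},  # pure invention
}

def generate(context: str) -> str:
    """Sample a statistically likely-looking continuation.

    The model has no notion of truth: "atlantis is poseidonia" is produced
    with the same fluency and confidence as "france is paris".
    """
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

# Both calls return confident-sounding answers; only one is checkable.
print("france is", generate("france is"))
print("atlantis is", generate("atlantis is"))
```

The point of the sketch is that nothing in the sampling step distinguishes the verifiable answer from the fabricated one; that judgement has to come from the human reading the output.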

A $440,000 report prepared by Deloitte for the Australian government was found to contain AI-generated fabrications. These errors were so believable that they bypassed internal review within a major global firm before academics uncovered them.

That hallucinations can pass through even highly resourced organisations and be believed as fact underscores the importance of human judgement and critical thinking in stopping them from becoming misinformation.

How human cognition shapes what we trust

As the exhibition highlighted, humans are not neutral processors of information, and we never have been. AI hallucinations expose this vulnerability, because distinguishing plausibility from truth falls entirely to human judgement.

One reason our understanding of information is often subjective is confirmation bias: our tendency to seek out and believe information that aligns with what we already think. As cognitive psychologist Raymond Nickerson notes, people rarely approach information neutrally; we evaluate new facts through the lens of our existing beliefs, not the other way around (Nickerson, 1998). This means that when misinformation is spread, it’s not always because someone believes it to be true, but because they want it to be.

Another reason is cognitive ease. Daniel Kahneman, a behavioural psychologist, describes how information that is simpler and easier to process is more likely to be believed. These shortcuts evolved to help us make fast decisions under uncertainty, not to navigate infinite, high-confidence information environments.

These are just two examples of how our thinking can be influenced and how tools like generative AI can magnify those effects.

Balancing risk and reward in generative AI

Misinformation may be amplified by technology, but it is not just a technological problem. It is a human one. Hallucinations are not defects in generative AI; they are a feature of systems designed to produce fast, accessible approximations of human knowledge. Misinformation emerges when generative AI outputs are trusted or acted on without context or verification.

That does not mean we should stop using these tools. Generative AI offers real rewards: increased access to information, learning, and creative ideas. But those benefits depend on people being equipped to use the technology critically.

Our task is not to eliminate hallucinations. It is to invest in human capability by building the judgment, literacy, and confidence people need to decide when approximation is enough and when accuracy truly matters. That is how we reduce harm while unlocking the full value these tools can offer.

Author’s note: This piece is grounded in my own experience and existing research. It was written by me, a human, with a little help from ChatGPT as an editing helper.

Sources

https://www.slv.vic.gov.au/exhibitions/make-believe/ 

https://www.1news.co.nz/2025/10/16/nz-makes-first-deepfake-porn-prosecution-but-are-we-equipped-for-ai-onslaught/ 

https://ia800603.us.archive.org/10/items/DanielKahnemanThinkingFastAndSlow/Daniel%20Kahneman-Thinking%2C%20Fast%20and%20Slow%20%20.pdf 

https://psycnet.apa.org/record/2018-70006-003

About Amelia

Amelia Bentley is a contributing author for the AI Assembly and a Data & AI Scientist at HazardCo. Standing at the intersection of data science and psychology, Amelia is redefining how we interact with machines. Her work focuses on bridging the gap between complex code and human behavior, ensuring Kiwi businesses adopt new technologies with a people-first mindset.

A champion for diversity in tech, Amelia is a founding member of our HerAIStory community. She recently shared her expertise as a panelist at the Wellington HerAIStory event, helping to shape the narrative for women in AI.

Amelia’s latest posts

How to Build the Future With Your People, Not At Them


In my opinion by Karrina Mountfort

December 2, 2025

Leaders everywhere can feel it — the future of work is shifting underneath us.
AI is part of that shift, but the real transformation is happening with people.

Inside NZ organisations, I see the same pattern:

Leaders want clarity, safety and alignment.
Teams want capability, flexibility, speed and tools that genuinely help them do their best work.

When the future is designed in isolation, those realities clash.
People disengage.
Shadow AI grows.
And leadership loses visibility of what’s really happening.

The answer isn’t to tighten control.
And it isn’t to hand over the steering wheel.

The answer is simple:

Build the future with the people who will actually live in it.

Here’s what works on the ground.

1. Ask your people before you tell your people

Most “future of work” plans are just announcements wrapped in slides.

But your people already know:

  • what slows work

  • where AI is already helping

  • where it’s risky

  • which tools they trust

  • what they need to perform at their best

Ask them early — not as a formality, but because their reality is your roadmap.

These conversations surface the truth faster than any strategy workshop.

2. Build the future in the open, not behind closed doors

This doesn’t mean co-creating everything.
It means involving the people who actually understand the work.

Co-design the parts that matter:

  • workflow redesign

  • where AI belongs (and doesn’t)

  • what “good” looks like

  • how work should move

  • where risk lives

  • where opportunity lives

This is not diluting leadership.
This is grounding leadership in what’s real.

3. Co-create the guardrails people actually live with

Leaders set the non-negotiables.
Teams shape the practical reality.

Leaders decide:

  • data safety

  • risk thresholds

  • privacy boundaries

  • compliance

  • ethics

  • direction

Teams shape:

  • how tools fit their workflow

  • what good use looks like

  • which options are workable

  • how to escalate risk

  • which rules need clarity

  • what’s slowing them down

People follow guardrails they helped define.

They break guardrails written in a vacuum.

4. Treat the future as a first draft

The future is not a 50-page PDF.
It’s a living document — shaped, improved, and refined over time.

Tell your people:

“This is where we’re heading, and you will help shape how we get there.”

Why this works:

  1. No one feels locked into something unrealistic.

  2. People contribute rather than resist.

A living approach keeps your organisation adaptive and reduces shadow AI because people see space for their ideas.

5. Measure alignment through behaviour, not agreement

Nodding in a meeting isn’t alignment.
Here’s what alignment actually looks like:

  • people using the agreed tools

  • less hidden AI use

  • more surfaced risks

  • shared wins

  • workflow improvements

  • stronger guardrails

  • people teaching people

Alignment isn’t verbal.
It’s behavioural.

Governance without visibility drives shadow AI.
Governance with involvement builds trust.

Where It All Comes Together

People don’t commit to a strategy because it’s smart, correct or well-written.

They commit because they were part of building the future it leads to.

When teams co-shape the parts they actually live with, co-design the guardrails they use, and co-own the path forward:

They don’t nod politely and return to their old ways.

They turn up differently:

  • with clarity

  • with capability

  • with confidence

  • with genuine commitment

And that’s when the magic happens:

a governed, aligned, people-powered future of work — one your organisation can actually grow into.

About Karrina



Karrina Mountfort is a driving force in Aotearoa’s emerging AI ecosystem, serving as the founder of The AI Assembly™ and creator of HerAIStory™, two initiatives reshaping how New Zealanders engage with artificial intelligence.

With a vision rooted in accessibility, collaboration, and community-building, Karrina works at the intersection of technology, culture, and human empowerment. Her mission is to ensure that AI in Aotearoa is not just advanced—but inclusive, ethical, and reflective of the people it serves.

A champion for representation and future-focused leadership, Karrina leads national conversations around AI literacy, equity, and innovation. Through HerAIStory™, she has amplified the voices of women and under-represented groups in tech, providing platforms for connection, visibility, and impact. Her events bring together industry leaders, creators, and communities to inspire action and shape the narrative of AI in New Zealand.

Whether convening cross-sector dialogues, guiding organisations through the AI landscape, or elevating diverse perspectives, Karrina continues to influence how Aotearoa prepares for an AI-enabled future—one that is collaborative, culturally grounded, and centered on people.

Karrina’s latest posts

Join HerAIStory, our community

The AI Assembly™ – your pathway to AI The Right Way™ for all, through events, community, a skills hub & peer learning 🤝


The Toddler That Changed the World: Reflections on ChatGPT at Three


In my opinion by Karrina Mountfort

December 1, 2025

It’s hard to believe it has only been three years.

On November 30, 2022, there was no global countdown. No keynote delivered by a tech CEO in a turtleneck. No cinematic launch video with futuristic background music. There was just a quiet tweet from OpenAI announcing a “research preview” called ChatGPT.

I remember sitting at my laptop that first week, not quite sure what I was about to witness. Like many of you, I opened with a deliberately ridiculous prompt – something about a sea shanty inspired by complex spreadsheets – expecting it to fail spectacularly. Instead, I watched that grey cursor blink once… and then stream out something so unexpectedly clever that I actually laughed out loud.

That was the click felt around the world.

The Evolution of a Species (2022–2025)

To understand where we are today, it’s worth looking back at how rapidly the “biology” of this technology has evolved. In just three short years, ChatGPT transformed from a text-bound brain in a jar into a fully sensing digital partner.

1. The Text-Only Era (2022)

“The Brain in a Box”

When ChatGPT launched, running on GPT-3.5, it was brilliant but painfully limited. It was blind, deaf, and mute – a purely textual intelligence. Describe your broken sink? You had to explain the leak in excruciating detail because it couldn’t see. It was a powerful linguistic tool, but it had no access to the physical world around us. 

A screenshot of the original 2022 ChatGPT interface: simplistic, just text.
📸 Image generated with the help of Gemini Pro 3

2. The Awakening of Senses (2023–2024)

“Eyes, Ears, and a Voice”

Late 2023 hinted at change, but May 2024 marked the true leap. GPT-4o (“Omni”) erased the distinction between “AI interaction” and “conversation.”

Suddenly, latency disappeared. The model could hear tone. It could see emotion. It could respond while you interrupted it. We snapped photos of our fridges for recipes, asked it questions on our walks, and started treating it less like a search box and more like a companion in dialogue.

It stopped waiting for prompts.
It started participating.

3. The Agentic Era (2025)

“The Active Partner”

This year brought the most dramatic shift yet: AIs that don’t just answer – they act. With real-time video analysis, reasoning engines, and contextual memory, ChatGPT stepped out of the realm of tool and into the realm of partner.

 

2025: A multimodal partner that can see, hear, and reason in real time.
📸 Image generated with the help of Gemini Pro 3

The Human Element: Growing Pains

At The AI Assembly™, we focus on the human experience, not just the technical increments – and the last three years have put us all through emotional whiplash.

  • 2022: The honeymoon phase – awe, experimentation, delight.

  • 2023: The fear phase – “Will this replace me?”

  • 2024: The frustration phase – the rise of low-effort AI content flooding our feeds.

Now, in late 2025, something has shifted. Panic has settled into pragmatism. ChatGPT has grown to 800 million weekly active users, but what people are doing with it has fundamentally changed.

Then: 80% of prompts were for code, content, and productivity.
Now: Over 70% of conversations are personal – learning, reflection, creativity, and advice.

We’re starting to understand that AI isn’t here to displace human ingenuity. It’s here to multiply it.

Looking Ahead: The Terrible Threes?

ChatGPT is only three years old. In human terms, it’s just out of nappies – walking confidently, talking constantly, occasionally throwing tantrums.

As we look toward 2028, our collective responsibility remains the same:
We must be the adults in the room.

We need to guide this technology toward outcomes that centre humanity – authenticity, connection, and clarity – rather than letting the technology set the terms for us.

Happy birthday, ChatGPT.
Three years in, and the journey has only just begun.

✨ Crafted by me as a human, polished by Gemini. I use AI to sharpen my writing, but the opinions are 100% mine.



Why Aotearoa’s AI Future Needs Women


In my opinion by Amelia Bentley

TIME’s recent list of the 100 most influential people in AI included only 27 women. Of the 24 leaders named, just two were women. Several of the most influential men in AI also appear on the Forbes 400 list, highlighting the obvious: men overwhelmingly hold the power shaping both AI and the global economy.

When one group dominates the design of a transformative technology, their values and experiences inevitably become embedded in the systems we all depend on. Even with the best intentions, no group can design for experiences they’ve never lived. That’s how biases are reinforced, communities are overlooked, and technology ends up serving only a few.

For many women and minorities, navigating systems not built for us is familiar, and AI will be no different unless a diverse range of voices shape its development and direction.

Navigating the Future of AI as a Young Woman

My awareness of these gaps began long before I entered the workforce.

As a young girl, I loved maths and STEM subjects, but my enthusiasm faded as I repeatedly felt unwelcome or dismissed for showing interest. I still remember my first maths competition in Year 7. I’d earned my spot on the team, but when I asked how I could contribute, one of the boys told me to “just sit there and look pretty.”

Experiences like that didn’t push me away; they fuelled my determination to help shape a more inclusive future.

Studying data science and psychology helped me understand the social responsibility that comes with building transformative technology. Entering the field just as generative AI accelerated meant I was learning alongside the technology itself, constantly questioning its assumptions and implications.

At times during my studies, those same feelings of dismissal resurfaced. But finding a community of like-minded women at university helped me feel seen and supported. It reminded me of the importance of connection and belonging in encouraging diverse voices to step forward and shape technology.

My experience is just one example of a wider challenge facing the world and Aotearoa.

Diversity in Technology Across Aotearoa

It’s easy to view TIME’s list as a reflection of global power structures far from Aotearoa. But the truth is that the same disparities exist here, too.

In New Zealand, only 27% of the technology workforce is female, and just 2.8% are Māori or Pasifika. These numbers show that we’re at risk of building AI systems that benefit only those already overrepresented in the room, while overlooking the communities that make Aotearoa unique.

As someone early in my career, I’ve often looked around offices or industry events and noticed how little diversity there is. Those moments make those numbers real, but they also highlight an opportunity. AI in Aotearoa is still taking shape, which means we still have time to shape it differently.

A Future We Can Shape Together

Being part of The AI Assembly reminds me that the future of AI is strengthened when more of us are in the room. HerAIStory gives women the chance to learn, connect, and lead in shaping the future of AI. By sharing my story, I hope more young women feel the confidence to step into these spaces and influence what comes next.

When we show up, the future of AI becomes more human, more inclusive, and more reflective of Aotearoa.

Author Notes

This is an opinion piece grounded in personal experience, aligning with The AI Assembly’s focus on human-centred, inclusive AI education.

References:

https://nztech.org.nz/2022/06/15/diversity-critical-for-nz-tech-sectors-future/
https://www.womentech.net/women-in-tech-stats
https://www.unesco.org/en/articles/girl-trouble-breaking-through-bias-ai-0 
https://time.com/collections/time100-ai-2025/



Welcome to The AI Assembly™ Tool Directory Hub

Welcome to The AI Assembly™ Tool Directory Hub, your trusted source for practical insights and guidance on navigating the world of AI tools. This hub is meticulously curated to provide you with valuable resources and learnings designed to empower your AI journey.

Here, you’ll find a living database where we actively test and evaluate new tools. Our goal is to help you make informed decisions, ensuring you can harness the power of AI responsibly and ethically. We are dedicated to providing clear, team-vetted information, including our own ratings and a detailed look at each tool’s use case.

This hub is designed to be a collaborative space, constantly evolving with the latest discoveries. We add new tools and resources weekly, so keep an eye out for new additions as this hub continues to grow!

Made in NZ

Disclaimer: The tools featured on this platform are submitted and tested as our team has the time and capability. Our ratings and evaluations are our personal opinions based on the elements we have tested and are not a recommendation or promotion in any way. We are not liable for the use of any tools listed.

Important Disclaimer:

The field of technology, and Artificial Intelligence in particular, is evolving at an unprecedented pace. The information, resources, and guidance provided within this hub are current at the time of publication and are intended for general educational and informational purposes only. Due to the rapid advancements and continuous changes in AI tools, best practices, and regulatory landscapes, we strongly advise you to check back frequently for the latest updates. We cannot guarantee that all information will remain entirely current at all times. Always exercise your own due diligence and consult with relevant professionals (e.g., legal, cybersecurity experts) for specific advice tailored to your unique situation before implementing any AI solutions, especially those involving sensitive data. Your responsibility is to stay informed and make decisions that align with the most current understanding of AI’s capabilities and risks.