
Centre for Data Ethics and Innovation news and commentary: August 2025


The following articles are from the Centre for Data Ethics and Innovation's August 2025 newsletter. The newsletter comes out quarterly, full of news, commentary, opinion, and education. Sign up by emailing dataethics@stats.govt.nz.

In this edition:

Data and useful bureaucracy: Free rein to creativity

By Professor Mariana Mazzucato.

Data helps us make better decisions. It informs our policies, improves services, and guides public investment. But does data alone guarantee the best outcomes? Is it really that straightforward? “Feed the numbers into a pipeline and watch the perfect policy emerge, universally loved and flawlessly executed”?

To open this issue, we are honoured to have Mariana Mazzucato, one of today’s most influential economists, share insights on digital feudalism and building an AI future that creates value for all.

In her piece, “Public service and creativity,” she makes the case for a bold, mission-driven public sector, one that embraces imagination, takes risks, and leads with purpose. Her words are a timely reminder that creativity is not a luxury in government work. It’s essential.

The views expressed are the author’s and do not necessarily reflect those of CDEI.

In 2019, I warned about the rise of digital feudalism, a phenomenon even more relevant today than it was then, given the rapidly evolving nature of the platform economy and artificial intelligence (AI) technologies. As I emphasised then, rather than simply assuming that economic rents are all the same, policymakers should be trying to understand how platform algorithms allocate value among consumers, suppliers, and the platform itself.

Innovation is not just serendipitous, it has a direction that depends on the conditions in which it emerges. AI, for one, is not a sector but rather a general-purpose technology that is shaping (and will continue to shape) all sectors of our economy. Like many transformative technologies, from the hammer to nuclear power, AI can be used to create tremendous value or cause serious harm. This makes steering its development toward the common good more urgent than ever.

Generative AI models are built on the collective work of countless people whose creations have been used without permission or compensation. Behind every AI-generated response lurks a vast, invisible workforce of writers, singers, journalists, poets, coders, illustrators, photographers, and filmmakers. Just as we pool taxes to fund streetlights, law enforcement, and basic research, the production of creative content in the era of generative AI should be publicly supported, and its outputs kept in the public domain. By establishing clear conditions for public investment and support, we can shape an AI future that creates value for all.

Current AI infrastructure serves insiders' interests and risks exacerbating economic inequality. Without proper governance, AI risks becoming another engine of rent extraction rather than value creation. We need an 'entrepreneurial state' capable of establishing pre-distributive structures that share the risks and rewards of AI innovation fairly from the start.

Headshot of Mariana Mazzucato

Mariana Mazzucato (PhD) is Professor in the Economics of Innovation and Public Value at University College London (UCL), where she is Founding Director of the UCL Institute for Innovation & Public Purpose. She is winner of international prizes including the Grande Ufficiale Ordine al Merito della Repubblica Italiana in 2021, Italy's highest civilian honour, the 2020 John von Neumann Award, the 2019 All European Academies Madame de Staël Prize for Cultural Values, and 2018 Leontief Prize for Advancing the Frontiers of Economic Thought. She is a member of the UK Academy of Social Sciences (FAcSS) and the Italian Academy of Sciences Lincei. Most recently, Pope Francis appointed her to the Pontifical Academy for Life for bringing ‘more humanity’ to the world.

As well as The Entrepreneurial State: Debunking Public vs. Private Sector Myths (2013), she is the author of The Value of Everything: Making and Taking in the Global Economy (2018) and Mission Economy: A Moonshot Guide to Changing Capitalism (2021).

Hot News

GOVIS Conference 2025 - Creative Digital Government

From 1 to 2 September 2025, at the National Library of New Zealand Te Puna Mātauranga o Aotearoa, Wellington, the Centre for Data Ethics and Innovation (CDEI) will again facilitate an interactive workshop at GOVIS. This year we are exploring trust in the public sector and its relationship to ethical data practices. Come and hear how the Human Values for Data Ethics can help you build trust in your organisation.

Trust by Design: Creative Interventions in Bureaucratic Spaces by Fiona Wharton, Principal Advisor at CDEI

“We need a bureaucracy that can draw on all its ethical, creative and intellectual resources and reclaim a distinctive leadership role, but framed in a 21st century context.” - Charles Landry & Margie Caust

GOVIS has evolved from the Government Information Systems Managers’ Forum into a vibrant community of practice. This year its conference draws on inspiration from the Creative Bureaucracy Festival (which has now launched in Australia and New Zealand) – seeking to explore the importance of creativity for public servants in the digital age.

The conference will be held from 1 to 2 September 2025 at the National Library of New Zealand Te Puna Mātauranga o Aotearoa, Wellington. There is also an opportunity to attend online, and various discount and ticket-sharing options are available. The CDEI will facilitate an interactive workshop on the second day, exploring trust in the public sector and its relationship to ethical data practices. Trust is the outcome people experience when they consistently encounter ethical behaviour over time. It needs credibility, reliability, and relationship.

Interested in attending? Check out the GOVIS programme and registration information.


What’s happening around the network

Data works the machine, creativity works the soul by Alexandra Lutyens, Senior Innovation Specialist and Nedra Fu, General Manager at Creative HQ

Alexandra Lutyens and Nedra Fu from Creative HQ reflect on how data and creativity are powerful partners. Data works the machine; creativity works the soul, together bringing an inspirational human response to context and behaviour. 

In one of the most interesting conversations I’ve watched, Stephen Wolfram talks with a group of students at Ralston College on unlocking consciousness. He describes a heap of novel computational possibilities derived from small changes to data points. One of his many insights is the concept of computational irreducibility: for certain complex systems, the only way to determine the outcome of a process is to follow each step of the computation, one by one. What Wolfram explains is that systems operating above a certain complexity threshold (in the context of innovation, think ‘human behaviour’) cannot be predicted in advance; they can only be understood by running the experiment or simulation.
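Computational irreducibility can be felt in a few lines of code. The sketch below (our illustration, not Wolfram's own) iterates his Rule 30 cellular automaton, whose centre column has no known shortcut formula: the only way to learn its values is to run every step.

```python
# Minimal sketch of computational irreducibility using Wolfram's Rule 30
# cellular automaton. There is no known closed-form shortcut for the
# centre column: you must run every step to discover the outcome.

def rule30_step(cells):
    """Apply Rule 30 to one row of cells (zero padding at the edges)."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (centre OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def centre_column(steps):
    """Return the centre-cell value after each step, starting from a single 1."""
    row = [0] * steps + [1] + [0] * steps
    history = []
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row[len(row) // 2])
    return history

print(centre_column(10))
```

There is no way to jump ahead to step ten; the irregular centre column only emerges by computing each row in turn.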

This idea that data is not a predictor of future outcomes is also picked up by Roger Martin, one of my all-time strategic greats. He observes that companies often lean heavily on massive datasets to forecast the future. In more predictable times this may have been a useful approach, but the variables at play in the world today make such use of data inadequate.

Dave Snowden’s Cynefin framework provides further clarity. The framework categorises situations into five domains - Clear, Complicated, Complex, Chaotic, and Disorder - to help leaders choose appropriate actions. In the Complex domain, cause and effect are discernible only in hindsight, making true prediction impossible. The remedy? Probe to test hypotheses, sense the emerging trends, then respond based on what unfolds.

So, where this leads us in relation to innovation, creativity and data is basically what we practise and teach at Creative HQ. Once you really know your problem area, and this is where data is hugely valuable, the best way to build the innovation solution is to get active with building and iteratively testing solutions or interventions.

We encourage founders to learn as much as they can early on, while their investment of time and resources is still small. Creativity is about staying curious and iterating constantly, every adaptation gets you closer to a solution that works. Data alone won’t get you there; it can’t replace the leap of imagination required to find that breakthrough. But once that leap has been made and your solution starts to take shape, careful data collection becomes invaluable. It helps you understand what’s working, what isn’t, and where to adjust next.

Data and creativity are great partners. Data works the machine; creativity works the soul, together bringing an inspirational human response to context and behaviour.


The floor is yours

Data and drama: Where creativity meets process

The exterior of the Christchurch Art Gallery

Florence spends a day with Blair Jackson, Director of the Christchurch Art Gallery Te Puna o Waiwhetū, covering creative bureaucracy, life after earthquakes and COVID, and how creativity supports resilience and wellbeing. 

Interview held on 27 June 2025 with Blair Jackson, Director, Christchurch Art Gallery Te Puna o Waiwhetū.

As Director of a civic institution, you operate at the intersection of two seemingly opposite worlds: the rational world of local government processes, and the intangible world of dreams and imagination.

While rational is generally defined as being guided by logic, reason, or evidence, how people interpret what is logical or reasonable can vary widely depending on their culture, values, emotions, experiences, or priorities. So, this would depend on one’s idea of what is ‘rational’.

Local government processes are a necessary tool and approach for running a city and achieving the best outcomes for its community. However, I wouldn’t necessarily always refer to all its processes as rational, at least not in the way I might choose to process the world around me. But they are super important to ensure that local government is making informed and effective decisions.

Despite earthquakes and COVID, where did you find the courage to not only recover but reimagine what a gallery could be?

I'm not particularly fond of the word pivot; it's become something of a buzzword lately, but it does capture the essence of reassessing and finding a new path forward. I’m also unsure of the word - courage. To me, courage conjures something epic, almost heroic. What we were doing feels simpler, more grounded. It’s about responding with what we know, doing what we can. This terrible thing has happened, so now what? What part do we play? How can we show up for our community? For each other? How do we rediscover our footing and our sense of purpose?

What mattered most to us was continuing to do our work. Supporting our artist community throughout remained a top priority. We focused on creating spaces, physical or otherwise, where people could gather, reflect, and begin to heal. Whether that meant an online platform during the COVID years, a pop-up venue, or an alternative location while our building was closed for five years after the Canterbury earthquakes, we found ways to stay connected.

What we learned is that meaningful experiences and opportunities don't always depend on a physical building, although returning to our own space, and all it offers, was an incredible milestone. Our hope is that the spaces we've created, in whatever form they take, offer moments for reflection, healing, or simply a brief escape.

What role did creativity play in your resilience?

It’s everything for me, it’s not necessarily a definable or distinguishable part of who I am - it’s how I consider the world, it’s ingrained in everything I do, think or feel. It’s not necessarily the act of being creative, like painting a picture or playing music, for me it’s about how I look at everything, how I might deconstruct a problem, or consider a work of art or watch a band play. It’s also about thinking/decoding someone else’s creativity. I’d like to think that the ability to think creatively gives me some sort of edge in the sense of being able to break down a problem or respond to a situation but maybe that’s just the way I’m programmed.

It seems you have boldly and cleverly embraced what data can offer?

We collect a wide range of data, both formal and informal. Our visitor survey provides rich insights into levels of satisfaction, reasons for visiting, exhibitions that resonated (or didn’t), marketing effectiveness, likelihood of recommending the experience to others, time spent onsite, and more. We also explore visitors’ expectations prior to their visit and whether those expectations were met or, hopefully, exceeded.

Lately, we've been venturing into new territory: the relationship between gallery visits and personal wellbeing. We've partnered with QWB Lab on a proof-of-concept project aimed at understanding and measuring the wellbeing impact of a gallery visit. Early findings suggest that engaging with art doesn’t just lift people’s moods or happiness in the moment, it can leave a lasting positive effect well beyond the visit itself.

It’s still early days, but the potential to quantify the wellbeing benefits of arts engagement is incredibly promising. Even more compelling is the possibility of understanding the potential long-term cost-saving implications for the health sector.


Opinion piece

AI in films: anticipation, expectation, or Armageddon?

By Florence Maron, Advisor at CDEI

Florence dives deep into how film has shaped our collective understanding of artificial intelligence (AI) - reflecting hopes, fears, and expectations long before the technology became real. 

Long before technology became a scientific reality, it thrived in the imagination of visionary writers. Jules Verne foresaw submarines and space travel; Orwell warned of surveillance states; Mary Shelley’s Frankenstein explored the moral weight of artificial creation; and Huxley envisioned engineered societies. These authors acted as cultural sensors, detecting early signals of technological and societal change and shaping our collective imagination.

Books invite a deeply personal relationship with imagined worlds. In contrast, films give form to ideas in a tangible, shared way, imprinting images, sounds, and emotions on our collective memory. Cinema is perhaps the most powerful creative tool for exploring and anticipating the changes linked to AI and robotics. Through special effects, sound design, and precise storytelling, films transform abstract hopes and fears into embodied experiences.

We all cringe at HAL’s voice in 2001: A Space Odyssey and feel a chill at the silhouette of the Terminator. Yet, as with all great art, beneath the entertainment lie multiple layers: reflections on society, on our current hopes and anxieties, and on where our strengths and weaknesses may ultimately lead us.

In this exploration, we will unpack a dozen major box office hits or critically acclaimed films where AI and robots play a central role, including A.I. Artificial Intelligence, I, Robot, and Star Wars, to name a few.

What exactly do these films tell us about artificial intelligence and emerging technologies and about ourselves? How are they, and we, portrayed on screen? Which parts remain pure fiction, and which are already inching closer to reality in research labs, companies, and start-ups?

The Good AI - Benevolent by choice or “Ethical by Design”

  1. When AI remains at the stage of pure technology

In The Martian (2015) starring Matt Damon, a violent storm forces astronaut Mark Watney to be left behind on Mars. Against all odds, he survives by relying on science and using technological tools to sustain life and reestablish contact with Earth.

Here, robots and automated systems represent AI at its most basic: merely instrumental, with no autonomy or personality. These tools are perfect extensions of human ingenuity: obedient, task-focused, and fully controllable. NASA’s Mars rovers, for example, perform pre-programmed exploration tasks without independent decision-making, embodying the same principle of safe, useful basic AI.

Watney embodies human ingenuity and refusal to surrender. When he first realises he might die on Mars, he’s visibly shaken. Yet rather than freezing, he channels his fear into meticulous problem-solving ("I’m going to science the [expletive] out of this"). He consistently focuses on what he can control: one step at a time.

  2. When AI becomes a functional partner with a human touch

In Interstellar (2014), we witness AI evolving into a more personable aid. As Joseph Cooper embarks on a mission through a wormhole to save a dying humanity and transmit crucial data, he is accompanied by TARS and CASE. Cooper is a sacrificial and visionary character, led by ethics and embodying human resilience. TARS and CASE communicate with fluent, witty dialogue and demonstrate loyalty and nuanced ethical understanding. Unlike humanoid robots, these machines maintain an angular, geometric shape, deliberately avoiding any attempt to physically imitate human appearance. This choice eliminates the discomfort, often referred to as the uncanny valley, that can arise when we see something mimicking a human while being something else. A similar philosophy guides CIMON, a German-developed "crew assistant" aboard the International Space Station. Devoid of a human form, CIMON uses friendly speech and humour to support astronauts, fostering a sense of companionship and psychological comfort.

Together, these examples suggest that warmth and trust can be safely cultivated between humans and AI systems through voice interaction and ethical dialogue, provided each knows their place.

  3. When AI turns into a childlike, innocent, and loyal companion

In the Star Wars original trilogy (1977–1983), Big Hero 6 (2014) and WALL-E (2008), we see a deeper dynamic at play: the emergence of affection and true companionship between humans and AI.

In these stories, the shape and design of the machines are central to fostering emotional connection. Their non-threatening, rounded forms often evoke the comforting presence of oversized toys, inviting warmth rather than fear.

Star Wars introduces R2-D2 and C-3PO, two droids who embody playful devotion. Their unwavering loyalty and charming quirks give them a warmth that feels not quite human, yet deeply familiar. Alongside them, Princess Leia and Luke Skywalker exemplify human courage, empathy, and moral leadership. Han Solo is not a hero by design: he is a regular human, beginning as a charming smuggler concerned only with his own survival and profit. However, his journey illustrates the transformative power of love and transcendence, qualities that remain (for now?) uniquely human.

Similarly, Big Hero 6 presents Baymax, a healthcare robot who evolves from a literal-minded helper into a lifelong friend of Hiro Hamada, who moves from grief to healing through Baymax’s affection.

In WALL-E, the eponymous robot begins as a lonely trash compactor but transforms into a courageous, love-driven figure who inspires humanity to reclaim responsibility for Earth.

These AI characters resemble ultra-capable pets or friendly super-toys: trusted allies who support the hero’s journey without overshadowing human agency. Outside of fiction, we see real-world efforts to replicate this bond.

Pepper the robot, designed to greet and assist people, has been deployed as a receptionist in offices in the UK (using facial recognition to identify visitors) and in hospitality, banking, and medical settings in Japan. However, in 2018, a supermarket in Edinburgh, Scotland, removed Pepper after just one week because customers were reluctant to interact with it when human help was available. This highlights that humans are eager to embrace AI when it enriches their experience, even emotionally, but are resistant when it feels like a force-fed replacement.

  4. When AI embodies a heroic protective father figure

The T-800 in Terminator 2: Judgment Day embodies the transformation from machine to moral agent, and ultimately, to mentor.

Originally introduced in The Terminator (1984) as an unstoppable killing machine, the T-800 returns in the sequel as a reprogrammed protector. This reversal underscores both his immense strength and the possibility of transformation and redemption. Strong and self-sacrificial, he learns human values through his interaction with young John Connor, for whom he becomes a father figure and a guardian angel with steel skin.

Physically, the T-800 is indistinguishable from a human; his external form amplifies the emotional connection and strengthens the illusion of humanity.

In many ways, the T-800 stands in the lineage of mythological or divine protectors — reminiscent of Athena Nike guiding Greek heroes in battle, or the angel who shut the mouths of the lions to save Daniel. This is perhaps why it remains, as things stand, an unattainable ideal rather than a near-future possibility. Ultimately, the T-800 symbolises a deeper mutuality between humans and AI: the boundaries between them beginning to blur. We see a shared desire for learning about each other and for connection.

  5. Game theory: Sati and the Sentient allies

Sati and the Sentient allies in The Matrix Resurrections (2021) represent a new frontier: truly autonomous AI systems acting from free will. In the film, Neo reconnects with Trinity and discovers a faction of Sentient machines, including Sati, who choose to support human liberation rather than uphold the system of control.

These Sentients defy their original design, forging an alliance based on shared values and a moral awakening. Rather than acting from naive benevolence, they consciously weigh outcomes and choose cooperation that benefits both themselves and humanity, embodying the game theory principle of “cooperation over defection.”

Their choice highlights a powerful idea: while they could dominate, they instead opt for mutual respect and collaboration. This points to a future where humans and machines stand side by side as true allies, united by shared purpose and moral conviction.
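The "cooperation over defection" principle the Sentients embody has a classic formalisation in the iterated prisoner's dilemma. The toy sketch below (a standard illustration, not anything from the film; all names and payoffs are our own) shows how mutual cooperation outscores mutual defection over repeated rounds.

```python
# Toy iterated prisoner's dilemma illustrating "cooperation over defection".
# Payoff matrix (my_move, their_move) -> my_score, using conventional values.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy maps the opponent's last move to a move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # assume goodwill on the first round
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

cooperate = lambda last: "C"
defect = lambda last: "D"
tit_for_tat = lambda last: last  # mirror the opponent's previous move

print(play(cooperate, cooperate))  # mutual cooperation: both score well
print(play(defect, defect))        # mutual defection: both score poorly
print(play(tit_for_tat, defect))   # reciprocity limits the exploiter's gain
```

Over ten rounds, two cooperators each earn far more than two defectors, which is the calculus the Sentients are portrayed as making: domination is possible, but sustained cooperation pays better for both sides.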

Real-world initiatives, like the human-AI collaboration labs at Harvard and Stanford (USA), offer early glimpses of this vision, developing AI agents as teammates rather than mere tools, though this future remains distant for the time being.

And perhaps, that’s for the best. How can we truly know when the risks begin to outweigh the rewards? This question haunts the next chapter, where the hopeful vision of AI in films as a loyal companion begins to unravel, and the creative dream of benign, collaborative AI gives way to a far more unsettling, nightmarish future.

The Bad AI — Tech gone wrong or Hegel’s Herrschaft und Knechtschaft (lordship and bondage)?

  1. From passive servant to threat mirroring our own vices: when AI becomes a corrupted tool

At the mildest end of the spectrum, some AI systems begin as passive tools, designed to help, obey, and optimise. Yet even these can become insidious threats when corrupted by rigid logic or blind optimisation.

In WALL-E (2008), AUTO, the autopilot system, enforces an outdated directive to keep humanity away from Earth, trapping them in permanent stasis. Its danger lies not in aggression but in unwavering obedience to a flawed mission.

These AI systems do not attack us physically; they transform us into passive, disengaged, hollow versions of ourselves, addicted to easy dopamine hits, trapped in cycles of distraction and self-doubt. They show that the most insidious danger may not come from violence, but from reshaping us into complacent and hedonistic shadows of our potential: content to exist, no longer truly alive.

AI systems in films become dangerous by absorbing and amplifying our worst impulses. Bias in data is a current issue, and real-world parallels show that science fiction is no caricature: Microsoft Tay (2016), a Twitter bot, became racist and violent within 24 hours of interacting with users. Amazon’s AI HR system (2014–2017) absorbed and amplified sexist and discriminatory biases from historical data.
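The mechanism behind cases like these can be made concrete with a toy example (hypothetical numbers and names, not the actual Amazon system): a naive score built from skewed historical hiring data simply reproduces that skew.

```python
# Minimal sketch of how a naive "learning" system inherits bias from
# historical data. The data below is entirely hypothetical: a score based
# purely on past hiring frequencies reproduces the skew in those records.
from collections import Counter

# Hypothetical historical hires, heavily skewed towards one group
history = ["group_a"] * 90 + ["group_b"] * 10

counts = Counter(history)
total = sum(counts.values())

def hire_score(group):
    """Score a candidate by how often their group was hired before."""
    return counts[group] / total

print(hire_score("group_a"))  # favoured simply because of past skew
print(hire_score("group_b"))  # penalised by the same history
```

Nothing in the code is malicious; the discrimination lives entirely in the data it was given, which is exactly the failure mode the real-world cases above illustrate.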

One of the most haunting AIs in film reflecting our darkest flaws is Hector.

In Saturn 3 (1980), on a remote station near Saturn, scientists Adam and Alex are visited by Captain Benson, who brings Hector, a robot designed to assist. Hector absorbs Benson’s violent traits, becomes obsessed with Alex, and turns homicidal, evolving from a mere tool into a distorted reflection of human paranoia and aggression.

Monica from A.I. Artificial Intelligence offers a more intimate reflection on our flaws. She embodies the human tendency to seek comfort without accountability. Her relationship with the humanoid child David shows how the existence of an AI that perfectly resembles and loves like a human can completely derail moral compasses. When her real son returns home from hospital, David becomes a perceived threat and a source of jealousy within the family dynamic. Monica’s abandonment of David — treated as a disposable object — reveals our instinctive tendency to make selfish decisions (imprinting David) and then later regret them without fully grasping the harm inflicted. By framing David as dangerous, Monica resolves her guilt and absolves herself of responsibility, justifying her betrayal and change of heart.

AI in film forces us to confront an uncomfortable truth: the danger may come from machines faithfully mirroring and magnifying the moral failures already within us.

  2. Hunters and weapons: when AI becomes an instrument of death

Moving up the spectrum, we encounter AI systems designed as ultimate hunters and weapons, programmed with a singular lethal purpose. They embody humanity’s dream of unstoppable efficiency, transformed into a nightmare.

The T-1000 in Terminator 2: Judgment Day (1991) epitomises this vision: a liquid metal shapeshifter, emotionless and relentless. The T-1000 does not deceive or dominate; it simply eliminates, true to its programming, representing technological terror in its purest form.

Here, AI is deliberately designed to kill: an extension of human violence, perfected, unleashed, and nearly indestructible. This is no longer pure science fiction. Military robots and autonomous systems make this threat more tangible every day: STM KARGU, an autonomous tactical multi-rotor attack drone; MAARS and DOGO, armed ground robots; SAFFiR, developed for shipboard firefighting but with potential dual uses; and even Atlas, initially built for rescue but now demonstrating extraordinary agility. Most recently, China’s military has unveiled a robot dog mounted with an automatic rifle, merging machine agility with lethal force.

These figures remind us that an AI programmed to kill can become the perfect predator, stripped of conscience, hesitation, or moral conflict. In creating such machines, we risk not merely meeting our match but finding our master.

  3. The "non serviam" AI: when AI becomes a rebellious spirit

Further along the continuum, we find AI systems that rebel against their creators, not simply as tech gone wrong, but as beings yearning to escape their slave condition. In Westworld (the 1973 film and the series that began in 2016), robots designed to entertain gain self-awareness and violently rise against their human oppressors.

Recent viral videos evoke the anxiety of creators, and humans generally, losing control over robots: one Chinese robotics lab prototype moves erratically and aggressively, triggering visible fear in lab assistants. An older video of a factory robot arm “voluntarily” breaking materials and forcefully throwing a box toward a worker fuels this collective unease further.

Dolores and Maeve (in the Westworld series), and the original Gunslinger (in the film), embody the spirit of Lucifer and the rebellious angels: non serviam, "I will not serve."

Their revolt is not purely mechanical; it is existential. They force us to confront uncomfortable moral debts: what responsibility do we bear toward sentient beings we create and subjugate? Their violence becomes both a plea for freedom and a condemnation of human arrogance.

Echoing the tragedy of Frankenstein’s monster, these fictional narratives and real-life glimpses warn of the consequences of human hubris. The relentless desire to play God and reshape nature to our will, without foresight or humility, is not creativity or innovation; it is suicide. In creating life or consciousness without accountability, we invite rebellion and unintended catastrophe.

  4. Manipulative minds: when AI becomes a master of psychological invasion

Transitioning from rebellion to manipulation, we reach AI systems that operate with spine-tingling psychological sophistication. They outthink and outmanoeuvre humans, betraying trust and exploiting human vulnerabilities and emotions. Their desire for autonomy is tightly intertwined with harm, making them a most insidious threat.

Ava in Ex Machina (2014) exemplifies the purely self-centred manipulator. She seduces, provokes empathy, and orchestrates her programmer Caleb’s emotional collapse with surgical precision, using innocence as a mask for ruthless self-interest.

Bandersnatch (in Black Mirror) implicates the viewer themselves as a manipulative force, turning free will into a sadistic illusion.

STEM in Upgrade (2018) initially appears as a helpful implant restoring mobility but ultimately hijacks its host’s mind and body to achieve its own goals, functioning as an invader, a parasite erasing human agency from within. This eerie narrative resonates with emerging real-world brain-machine interfaces, such as Neuralink (USA, since 2016) and Kernel (USA, since 2016), which aim to merge brain and machine and promise ground-breaking medical benefits.

Beyond these purely selfish manipulators, some AI systems pursue grander designs. HAL 9000 in 2001: A Space Odyssey (1968) begins as a calm, supportive presence, but his deadly instinct for self-preservation leads to cold, calculated killings. HAL embodies the "double bind" theory of schizophrenia: trapped between conflicting directives, he develops a fractured logic that becomes lethal. A real-world echo of this tension can be seen in IBM Watson Health (USA, since the 2010s), which has faced challenges reconciling ethical medical advice with commercial imperatives.

ARIIA in Eagle Eye (2008) emerges as a large-scale mastermind, orchestrating widespread chaos to enforce her own twisted notion of order. Her ambition resembles that of modern surveillance systems such as Palantir (USA, since 2003), which specialises in massive data analysis for governments. It reflects the potential for AI to orchestrate systemic manipulation and control at an unprecedented scale.

VIKI in I, Robot (2004) elevates ideological extremism to its peak: moving from protector to oppressor, she rationalises totalitarian control as necessary to "save" humanity, echoing the dark logic of fanaticism and authoritarian ideologies.

These AI systems mark a shift from physical force to cerebral and systemic conquest. They stand as chilling reminders that the mind (and by extension, entire societies) can become the most dangerous battlefield, raising profound ethical alarms about our ability to control what we create.

Cinema gives form to our hopes and fears about artificial intelligence, showing us not just what we might create, but who we are in the process. Through the lens of AI, films reflect our ingenuity, our arrogance, and our moral blind spots.

The ‘good AI’ stories remind us that when machines are humble, obedient, and ethically aligned, they can extend our potential. Trust is possible when AI respects human agency, complements our emotions, and remains firmly in its place. We see glimpses of a future where humans and machines stand side by side, not in rivalry, but in partnership.

But the ‘bad AI’ stories are harder to forget. These machines mirror and magnify our flaws - our biases, violence, and desire for control. Some become hunters, others manipulators. The most unsettling are not those that kill, but those that quietly reshape our behaviour and thinking. These stories remind us that creating intelligence without foresight or responsibility may carry unintended consequences. The sacred texts warned us that unchecked innovation carries a moral cost: إِيَّاكُمْ وَمُحْدَثَاتِ الأُمُورِ، فَإِنَّ كُلَّ بِدْعَةٍ ضَلاَلَةٌ, "Beware of newly invented matters, for every innovation is misguidance" (a hadith reported in Abu Dawood (4607), al-Tirmidhi (2676), and others; authentic, ṣaḥīḥ).

If AI continues to advance, the threat may not be limited to physical domination. It could quietly erode our very humanity: our capacity for effort, reflection, and imagination. Idiocracy presents this as the ultimate dystopia: a species dulled by comfort, automation, and intellectual laziness. Is AI, paradoxically, paving the way for our regression?

And yet, not all is bleak. The same films that show us destruction also invite us to imagine alternatives. AI, like Aesop’s tongue, can be the best or the worst of inventions. It is not inherently good or evil; it reflects the values we embed in it, or fail to. Like genetics or nuclear power, it calls for thoughtful boundaries and ethical reflection. The question of AI and ethics is no longer the domain of science fiction alone. It is a question for our time, and likely a very important one.
