Discussions of artificial intelligence (AI) are inseparable from questions of ethics. In 1942, Isaac Asimov famously popularised the idea that intelligent machines should adhere to a moral code set by humans: his fictional Three Laws of Robotics have been influential not just in science fiction but in the real-world research and development of technologies such as robotics and AI.
Today, the increased use of AI impacts everything from employment to the environment, and the ethical considerations are wide-reaching.
At the Bots vs Beings panel discussion on 13 June 2023, experts from the University of Waikato considered the question: How will AI impact your life and work? This article includes short videos of the experts and explores some of the themes raised during their panel discussion.
Jobs for machines
Robots and AI doing work once done by humans isn’t science fiction, points out Professor Mike Duke. It’s already happening and will only become more common. It could be said that much of the work now done by robots and AI is work that humans prefer not to do.
While many jobs will be lost to AI, another aspect to consider is how many jobs will be created in a fast-growing industry. Jobs such as prompt engineers, AI auditors and of course AI ethicists were unheard of a few years ago but are becoming common. How this work is sourced and remunerated, though, is still an open question as we’ll see.
“AI won’t replace humans. People using AI will replace people not using AI.” – Dr Amanda Williamson
“You may keep your job, but you are not going to be well paid for it. So your job may remain, but expect to be poor.” – Professor Nick Agar
The need for human understanding
Dr Amanda Williamson is a Senior Lecturer in Innovation and Strategy and a Manager in AI & Data Consultancy. Amanda says it’s important to teach and learn about the limitations of AI. The industry term for AI-generated material that sounds confident and plausible but is not true to reality is ‘hallucination’.
Dr Williamson points out that generative AIs are also trained on data containing implicit human biases. Image generators will often assume that anyone doing a powerful or complex job should be represented by a middle-aged Pākehā male, for instance.
In 2016, Microsoft released an AI named Tay that was designed to learn from its interactions with humans on social media. The company soon took the AI offline after those interactions taught it to make hateful and bigoted remarks.
Another growing problem is the use of AI image generators to misinform and harass. False stories can be given credence through the AI-generated endorsement of celebrities or journalists. Faked images of individuals, including schoolchildren and young people, are spread online to harass and extort.
These are just a few examples of why generative AI needs to be designed and monitored so that it doesn’t replicate humans’ worst biases or facilitate antisocial behaviour. Human overseers of AI need to be deliberate about what data is and isn’t included in training datasets – and development teams need to be diverse enough to highlight potential misuses before they become real-world issues.
Environmental impacts of AI
Everything we do on computers comes with an energy cost, and AI is no exception. The servers needed to train and maintain online generative tools like ChatGPT consume significant amounts of power.
It’s estimated that training a model of GPT-3’s scale generates, over a few months, a carbon footprint comparable to that of five cars across their entire lifetimes. The size and complexity of our AI projects will only increase – but climate change isn’t going away either.
Microsoft has announced plans to power its AI network with a complementary network of small, next-generation nuclear reactors, arguing that this is the cleanest and most sustainable way of maintaining an AI infrastructure of the planned scale.
Whose data trains the AIs?
Dr Te Taka Keegan is a computer scientist and Māori language expert. He calls the new large language models’ proficiency in te reo Māori “scarily good”, teasing out its implications while pointing out that the language data used to train the models belongs to Māori and was used without permission.
The call for Māori data sovereignty is part of a growing worldwide discussion about indigenous data governance. The CARE Principles for Indigenous Data Governance were first drafted in 2018 by a panel that included experts from Aotearoa, Australia, Africa and the Americas.
As Dr Keegan points out, huge overseas operations like ChatGPT scraping te reo Māori data from social media is the opposite of Māori data sovereignty. Risks include international companies profiting from the Māori language and the possibility that control of the use and evolution of te reo Māori is gradually transferred from iwi to AIs.
Ethics and regulation: where to from here?
Philosopher Nick Agar has written widely on the role of technology in our human future. In discussing the ethical ramifications of the AI boom, Professor Agar and Dr Williamson draw parallels with another technology that’s changed our world: the rise and prevalence of social media.
Generative AI has made its public debut at a time when the regulation of data usage is a huge international issue. While many tech workers fight for a living wage, the hard mahi of training tomorrow’s AI tools is outsourced to developing countries where workplace oversight is largely absent and pay often amounts to single figures.
The work can be physically and psychologically gruelling as well as precarious. A huge amount of the human work required for AI to function is outsourced to countries like the Philippines, where climate change is already having a disastrous effect.
Dr Keegan stresses the importance, in an AI-connected world, of meeting and working kanohi ki te kanohi (face to face). It will be hard to pre-emptively regulate against the potential threats posed by AI, he argues, pointing out that traffic laws weren’t drafted until automobile fatalities demanded it.
“Question what you’re seeing and hearing and believing,” he says, advising that fostering in-person connections can counteract AI’s threats to social cohesiveness.
What do you think?
Professor Mike Duke showed a video of a robot pruner named Archie, to which he’d added a simulated voice of the robot describing its job and joking about its superiority to humans. As Mike is careful to note, AIs don’t really “think” in this way at all. Is AI easier to understand if we personify it as Professor Duke has done? What are the advantages – or risks – of doing this?
Dr Amanda Williamson discussed some of the environmental costs of large-scale AI work, and we saw how one potential solution involves networks of small-scale nuclear reactors. Can you think of other possible ways of offsetting or minimising AI’s carbon footprint?
We’ve seen some of the potential misuses of AI such as reinforcing human biases, spreading misinformation and enabling antisocial behaviour. How might education, diversity and in-person connections counteract these dangerous effects as Dr Williamson and Dr Te Taka Keegan advise?
Nature of science and technology
AI represents a massive commercial application of advanced STEM research. Discussing these technologies and their ramifications helps us to explore the impact science and technology have on our world and lives.
The article Artificial intelligence provides a primer on how we’re starting to see AI explored and employed today.
Professor Albert Bifet’s article ChatGPT – generating text and ethical concerns goes into more depth about some of the ethical questions raised by large language models (LLMs) such as ChatGPT.
The article ChatGPT and Māori data sovereignty explores some of the cautions and promises that Dr Te Taka Keegan sees in the future of LLMs.
The Connected article Emotional robots asks us to consider what constitutes intelligence and what it might mean to attribute it to a machine or computer program.
The citizen science project AI4Mars offers students an opportunity to help train AI for scientific mahi that humans currently can’t do.
Dr Karaitiana Taiuru explains his role as a Māori Data and Emerging Technology Ethicist.
Explore the resources in the Artificial intelligence section on the Office of the Prime Minister’s Chief Science Advisor website.
Download the Royal Society of New Zealand Te Apārangi summary The Age of Artificial Intelligence in Aotearoa. This 2019 report looks at what artificial intelligence is, how it is or could be used in New Zealand and the risks that need to be managed so that all New Zealanders can prosper in an AI world.
ChatGPT and other LLMs require significant input from humans and rely on our feedback to improve the technology. This article looks at LLMs from a sociological perspective.
Better Images of AI is a non-profit collaboration that examines clichéd images used to illustrate AI concepts and how these hinder our understanding of AI. A common example of an inaccurate image used to illustrate AI is the ‘thinking humanoid robot’.
The Royal Society Te Apārangi Mana Raraunga Data Sovereignty 2023 report outlines what data sovereignty is and why it matters in Aotearoa New Zealand. Listen to this RadioNZ interview with Professor Tahu Kukutai as she breaks down concepts like Big Data and Māori data sovereignty.
Professor Mike Duke is the Dean of Engineering and Dr John Gallagher Chair in Engineering at the University of Waikato. Mike is a founding member of Waikato Robotics Automation and Sensing (WaiRAS) research group.
Dr Amanda Williamson is a Senior Lecturer in Innovation and Strategy at the University of Waikato and a Manager in AI & Data Consultancy at Deloitte.
Professor Nick Agar is a Philosopher and Professor of Ethics at the University of Waikato.
Associate Professor Te Taka Keegan (Waikato-Maniapoto, Ngāti Porou, Ngāti Whakaue) is an Associate Professor of Computer Science, the Associate Dean Māori for Te Wānanga Pūtaiao (Division of HECS) and a co-director of Te Ipu Mahara (University of Waikato’s AI Institute).