
Kia Pākiki Canterbury – AI in education

In this Kia Pākiki Canterbury podcast, Science Communicator Tom Goulter and co-host Associate Professor Adrian Paterson from Lincoln University interview Dr Selena Chan and Dr Amit Sarkar from Ara Institute of Canterbury.

They discuss issues related to artificial intelligence as it pertains to the education sector, particularly at the tertiary level.

Kia Pākiki Canterbury is a monthly podcast presented by the Canterbury branch of the Royal Society Te Apārangi.

Transcript

Tom Goulter

Amit Sarkar is a senior lecturer in computing at Ara’s Digital Technologies Department. He’s worked with organisations in New Zealand, Australia, Europe and North America. You may recognise him from our first episode. Dr Selena Chan is an educational developer at Ara and an Ako Prime Minister’s Supreme Award winner for excellence in tertiary teaching. She’s editor of the new book, AI in Vocational Education and Training, in which Amit was also published. Amit and Selena, welcome. 

Dr Selena Chan 

Thank you. 

Dr Amit Sarkar 

Thank you very much. 

Tom Goulter

Selena, you’ve written in your book that the advent of generative AI represents a tipping point in the way humans and digital technologies interact. Do you want to talk about that statement? 

Dr Selena Chan

In the normal scheme of things, we have used digital technology and we have been able to manage it, like, you know, we are the boss, yes. But with the advent of generative AI, it talks back to us, and sometimes it actually comes up with quite interesting feedback, which is not something that we are used to from digital technology. Usually, when we use it, we tell it to do whatever it’s supposed to do, and it does it. But now, if you are wanting something there that will actually challenge you a bit, it is available. The technology can be used to augment what we are thinking, and it can sometimes, not always, provide a sort of trigger, I suppose, when you’re thinking things through, to say, “Hmm, I didn’t think of that.”

Tom Goulter

ChatGPT doesn’t have a brain, a mind as we understand it. It’s a sophisticated mechanism for predicting what words are going to come next, but it’s very easy to look at that and do the natural human thing of ascribing a mind to it. 

Dr Selena Chan

That’s right, yes, and that’s what we always need to remember, that it is very much a pattern recognition algorithm behind it. And all it really does is follow through the conversation you are having with it and mirror back what it thinks you want. You know, for people who have spent a lot of time gaining and learning and obtaining expertise in a discipline, we can use it to challenge us and to push us a bit further. We need to be very careful when AI is used with novices, because often it appears to be very knowledgeable, but the knowledge that it actually produces might not be reliable. I think that’s a challenge for us in education. When do we introduce AI? At what age? In what ways to support teaching and learning without replacing the effort that’s required for novices to learn skills or knowledge? One of the things we as educators need to learn now is to be very careful not to just introduce it without providing our learners with some good literacies on what AI is, what it can do and what it cannot do, so that our learners become much more competent at working out how, when and why to use AI.

Tom Goulter

We need education to be turning out thinkers who can think critically and creatively. And a lot of the stereotypes around AI really show it removing those faculties, but there are ways to use it that enhance those faculties of thinking.

Dr Selena Chan

Oh, yes, but again, you know, we need to be very careful how we introduce it, how we structure our learning activities to integrate AI. In the book, there is a chapter on graphic design, and we piloted Midjourney, which is an AI that produces images in seconds if you give it the right prompt. Usually, our first-year graphic design ākonga take a little bit of time to learn something that is actually quite difficult to pin down, and that is: what is a good image? It’s trying to help them learn to judge what a good image means. There is a lot of knowledge in experts’ minds that is quite difficult to articulate, and a lot of it is really about the feel of it. When the ākonga use AI, we bring AI in as an ideas generator, and with the prompts we try to keep AI in that role. So when the ākonga are given a brief, they have to come up with – 

Dr Amit Sarkar

Design ideas. 

Dr Selena Chan

– the writing, yeah, the ideas, the writing, and then from there, they need to produce pictures that might go into a brochure or something like that. And so the ākonga can ask Midjourney to produce an image, you know, a dog with a young person walking in the hills, something like that. And Midjourney has no problems producing dozens and dozens of images. So for the ākonga, it’s all a bit, “Ooh, all these images, what can I do with them?” to start with. And then they start to learn how to filter. They look at these images and try to work out how those images might fit what they’re actually picturing in their mind’s eye. It helps them look through and think, “Why is this a better picture than that one? Is it because of the colours, the light, the shapes? Is it because it tells a story?” All of these things take quite a bit of time for novices to attain, but when we use something that can generate lots and lots of examples very quickly, it helps people skip part of that long road to attaining those skills. When we started the project, that wasn’t the key thing that we were looking for, but after we had piloted a couple of times with groups, we realised that what AI was basically doing was accelerating the learners’ ability to judge what was the right picture. And by the time we’d finished the project, after the end of semester, 16 weeks or so, the better students had decided that AI was not for them. AI could help them ideate what could be something useful, but they thought that they could actually produce something even better. 

Associate Professor Adrian Paterson 

So Selena, I was just wondering, like how did you get into this space? 

Dr Selena Chan

Well, I’ve always had an interest in technology from when I was young. I left school very early and did an apprenticeship as a pastry cook, and then I came to teach here. When I started teaching here, I found that there was very little literature out there about helping students learn in a practice-based environment. One of the things that got me into technology in the late 1980s and 1990s was that our apprentices were scattered through the whole of New Zealand, and to have access to them, to send a letter to them, it takes a week to get to them, and they might or might not answer. So I always looked at technology as a way of communicating and keeping engagement with learners. That’s basically how I started in the e-learning space. I did quite a bit of work on mobile learning, because apprentices tend to own a phone but not a PC or a laptop, and that’s basically just morphed into AI, because it’s all part of using technology to support learning, not to replace learning. 

Associate Professor Adrian Paterson

So in terms of increasing AI literacy essentially, like how do we go about that? 

Dr Selena Chan

I think in the long run, AI literacy needs to be part of the curriculum for everyone, because AI will be pervasive and part of how society gets on with life really. I think it needs to be taught in context, because that’s how humans learn best. The other thing obviously is to work out at what level of AI literacy you would prepare your ākonga. Kathryn MacCallum and her team at the University of Canterbury have put out a scaffolded AI literacy framework called SAIL, S-A-I-L. It has four levels. The first level is to know what AI is, what it does, why it’s there, you know, the basics. The second level is to be able to evaluate and use AI tools. The third one is similar, to select and use them. And then there’s a fourth one, which is about being able to create AI agents. So in my work with my team here, the educational developers here at Ara, we have looked at the framework, and when we do the learning design on our programmes, we try to work out how industry is using AI, and that feeds back into the level of AI literacies required by the ākonga in the programme. That also gets connected to our kaiako, our teachers, because our teachers always have to be at least one level above our learners, our ākonga. AI literacy is like an academic literacy, and we do the academic literacies anyway. When our learners start, we teach them how to form an argument, write an essay, do all the usual citations and so on, and so AI literacy is interwoven with the academic literacies. 

Dr Amit Sarkar

The important thing is learning by doing. That is kind of the theme of our research as well. It is not only about having a theoretical understanding of AI. The doing is very important, because otherwise you cannot fine-tune the AI. That is absolutely pivotal. 

Associate Professor Adrian Paterson

How ethical is AI at this point, given things like training on datasets that might not belong to the AI, through to when it’s appropriate to use it? How does that affect what you two are doing? 

Dr Amit Sarkar 

Ethics is quite important, because that is where our research started. We saw that if you give a question (and most of the time, some of the senior members can write very, very nice questions, which are almost like prompts) to the generative AI, it will produce some answer, and the students will start thinking, “Oh, this is exactly what I want to write”, and they will submit it. That is completely unethical, and that leads to plagiarism, and then you have to bring them back to the exam hall, because that’s the only way to examine them under exam conditions to make sure that they’ve learned something. But that is kind of going backwards. It’s very reactive to a disruptive technology, so we try to approach it in a very different way. We try to make it as ethical as possible by training our AI in such a way that it will never give you an answer. It will always be working in the mindset of chain-of-thought prompting, and it will try to work with you relentlessly, tirelessly, so that together we can find an answer rather than straight away giving you an answer. But when you get the right answer, it will not hallucinate. It’ll tell you, “Correct. You got the answer. Do you want to practise more?” Because it can see from the conversation how long it took you to get the right answer, so it can actually prep you up. Where it is fundamentally important, I think, is this: when you have a classroom, we almost always think it is a uniform distribution. But all the students in the classroom are almost like a quantum state: they are at different levels of competency within a subject. 

Associate Professor Adrian Paterson

They certainly disappear when you observe them too closely. 

Dr Amit Sarkar 

Yes, of course. So that’s why, if you are maintaining the distance as an observer but you introduce AI as an instrument, and they interact with the AI specifically based on what support they need, that is kind of the experimentation that we are doing. That’s exactly the kind of tools that we are thinking of creating. 

Dr Selena Chan

I think we’re aiming for where the AI would supplement and support the learner 24/7. Ākonga also need to learn how to get the best out of a tool like that, but it would be something that a learner can call on when they need to revise something or work on something that they have had difficulties understanding. As Amit has said, we need to learn by doing, but we don’t only need to learn by doing, we also need to learn by doing in a very deliberate and mindful way. In many cases, humans need repetition, so we can’t learn something if we just do it once. We need to do it more than once, sometimes many, many times. Our brains work when we exercise those connections, and in order to do that, we actually have to put in the hard yards. So AI can be used to help students do that revision, to repeat things, to give them problems that might start as something they can cope with but push them further and further. We can make customised AI agents to do that, which is what Amit’s students’ projects have been about. But currently, the latest ChatGPT and also a lot of the others now have a study mode. If you turn the study mode on, it changes to being a bit more of a tutor, more a ‘guide on the side’ than a ‘sage on the stage’ type of thing. We are the ones that need to help our learners attain this sort of attitude of using AI like you would use maybe a calculator, rather than using AI as a font of all knowledge. I think that’s very much part of the educational process, working out how AI could be more useful to your life rather than just giving you answers, so that it can be something that is there to help you get better at what you do. 

Tom Goulter

You have that notion in education of the zone of proximal development, which is where a student will have trouble getting something but they can get there, particularly if they tap on the tutor’s shoulder for help. 

Dr Selena Chan

That’s right. 

Tom Goulter

And you’re talking about using AI in the same role as the tutor in that sort of scenario. 

Dr Selena Chan

That’s right, yes, and the AI needs to be prompted and trained to actually be able to work out that distance in the zone of proximal development. You know, how can I help this user or learner close the gap? 

Dr Amit Sarkar 

Also that individualisation. I would like to emphasise that word. It’s almost like a customised personal AI buddy or tutor. 

Dr Selena Chan

That’s right. 

Dr Amit Sarkar 

So it is working on the level or on the aspects where it makes a difference. And that call is going to come from the bandmaster of the orchestra, which is the educator. The symphony is only harmonious because the conductor is conducting it well. 

Dr Selena Chan

That’s right. 

Tom Goulter

What a lovely metaphor. 

Dr Selena Chan

Yes. 

Associate Professor Adrian Paterson

So diversity, where does that come in? So sometimes there is one answer. Like you want the structure of benzene. Everywhere in the world, you want that same answer. But to come back to one of the things you were talking about before, with the images, you know, if you want to generate an image that shows, you know, someone walking in the hills, then you don’t want to have a generic kind of Rocky Mountains sort of couple of Americans walking along sort of thing. Not that there’s anything wrong with Rocky Mountains or Americans. But, you know, if you’re in New Zealand or you’re in, you know, Japan or wherever, you probably want something context – 

Dr Amit Sarkar 

Contextual. 

Dr Selena Chan

That’s right. 

Associate Professor Adrian Paterson

And so where’s the trade-off with that at the moment when we’re thinking about how we train these models? I guess that’s coming back to: is there a default view of diversity within the AI model, essentially depending on what it’s been trained on? And is that an issue or is it not really an issue? 

Dr Amit Sarkar 

I think it is an issue at this moment. The data that it is getting trained and tested on is very much North American, so we really have to be mindful about that. And that’s why I think context-specific knowledge sources are important, which will look into our curriculum, into our learning objectives and outcomes and the topics that we are covering. So it is really not an off-the-shelf type of tool at this moment. It can go very wrong if you think of it like that. A lot of work goes in there to customise it and fine-tune it and make it relevant and acceptable for the classroom type of scenario. 

Associate Professor Adrian Paterson

So is that the kind of medium-term future for AI models to be kind of more bespoke and niche, do you think? 

Dr Selena Chan

Yes, yes, and I think in New Zealand, we have quite a strong emphasis on diversity and on biculturalism. There was a recent meme about where ChatGPT and Grok, I think, got their database from, and it came out as something like 40% from Reddit and another 20 or 30% from Wikipedia and YouTube. So basically, the database is very crowdsourced, and that’s the other avenue where I think we can help our ākonga, to help them realise the difference between something that’s peer-reviewed and something that comes off the internet, and for them to actually understand where an AI chatbot like ChatGPT obtains its knowledge base from. In Aotearoa, we also need to understand these issues around indigenous data sovereignty. Different cultures have different ways of looking at what happens to the information and knowledge that is part of their culture. Māori view their language, te reo, as a taonga, a treasure, and they don’t really want it to be out there somewhere where it will be mangled by some unknown entity. These are things that our learners need to learn and carry with them going forward, because we are going to be in a world where things like these move so rapidly that organisations don’t often have the time to go through a very structured way of looking at their use of AI. All they look at, maybe, is the bottom line, and they say, “Oh, if we use AI, we can replace 20 people, so why don’t we use AI?” 

Dr Amit Sarkar

Two other things that are augmenting AI are the Internet of Things and our high-speed telecommunication networks. 

Dr Selena Chan

That’s right. 

Dr Amit Sarkar 

So in that case, what is happening, or going to happen in my positive mindset, is this. We are using one of the methodologies called Q method. Q method is all about prioritising. So suppose you have 10 problems, but within your resources and budget, you can only solve three of them. That can happen with organisations, that can happen with government. But if we do not fire the people, if we retain them, now maybe with the help of AI, we can look into all 10 problems and try to solve them. And that is quite impactful and meaningful, rather than thinking about it in a very reactive way. 

Tom Goulter

Another lens that you’ve talked about, Selena, is the digital divide, or digital equity. When we live in a society where capitalism rules who gets access to technology, how do we ensure that everyone is on the same page with this technology and that some people aren’t being left behind? 

Dr Selena Chan

Yes, I suppose one of the things that we discussed, Amit and I, was that to get the most out of a lot of the AI platforms, you need to pay for them. Otherwise you get the standard basics, but if you want all the bells and whistles, you have to pay, and that is a great challenge really from the equity point of view. Because we know even in Aotearoa that there are large proportions of our population who are on the other side of the digital divide. You know, the pandemic showed us that we had large numbers of ākonga who did not have access to their own laptop. 

Dr Amit Sarkar

Also, not everybody has unlimited data. Data costs us as well, so that is another important thing, I think. 

Dr Selena Chan

That’s right. We heard stories of ākonga sitting outside the public library using the wi-fi with their phone so that they could get into their lesson, which we were broadcasting on Zoom. These are all challenges which are not insurmountable but are things that we need to bear in mind when we think of all these shiny tools like AI. 

Associate Professor Adrian Paterson

It does kind of raise one question, though, that’s been worrying me a bit, and that’s like how do we educate the educators? 

Dr Amit Sarkar

Yeah, from the academic world, you know that creating materials and then making them presentable takes such a long time. So internally there is a lot of work happening, but I think it is waiting a little bit more time to be a bit more prepared so that it can be public facing. From our hands-on vocational education perspective, we are always trying to apply something, so definitely we are working on that, I can assure you. Because it is a revolution. It is really going to change things. So let us face it upfront: it is going to disrupt. It is going to disrupt everything, and because education is also fundamental, it will disrupt the education space as well. 

Dr Selena Chan

Yeah, we can’t sit back and do nothing. 

Dr Amit Sarkar

Yeah, we can get into that space to talk to our colleagues, to give them the idea, not any false assurance, but more pragmatic, academic ideas about how it can be dealt with, so that they can actually go back to the classroom with full confidence. And that is important. At the tertiary or higher education level, we will be tackling it, but much before that, it will creep into the schools. So the practice and the values need to be created upfront at the school level as well, so that there is no conflict when students decide to come to the university or to the polytechnic: “We have done this at school and it was completely acceptable, so why are you trying to teach us something different?” So the ethos needs to be created, and that comes from a universal AI framework, and that is what I’m hoping to see: the government creating a common platform, a framework for what can be done. 

Associate Professor Adrian Paterson

So thinking of 2030, what would be, at this stage, your best guess about the preferred outcome 5 years down the track? 

Dr Amit Sarkar

I would not like to see something where people are getting fired from their jobs or becoming jobless and we are getting a sub-optimal product or service through AI, because we completely thought that the world is binary, zero or one, so we will switch off something and switch on another thing and make it a zero-sum game. What I’d love to see is that we are addressing all the problems that we thought didn’t make sense in the usual economic scenarios but have a valid reason to be addressed, so that we become more human in 2030 by using AI. 

Dr Selena Chan

Yes, and going back to the teacher perspective, one of the selling points we have used with our colleagues is that AI can help make your work lighter. Producing good resources, with you as the arbiter of quality, is much faster. You can update and customise your course resources more quickly using some AI. But again, it is important, I think, in 5 years’ time, to evaluate where we’ve got to and to measure that against some good frameworks, especially in regard to ethics. Are we actually using it ethically? Is it actually helping our students learn or not? Are our students still able to do the hard yards and use AI as a supplement? All of these things I think need to be resolved along the way. We can’t just carry on using it without really thinking through the implications each time we integrate some tool into a course. 

Tom Goulter

Selena, I wanted to ask you the question that we always ask here at Kia Pākiki, which is what’s the last thing that made you curious?  

Dr Selena Chan

Ah, yes. I was cycling into work, this morning. I’m interested in plants. And I saw one of the hebes out in the block out there, and it had a slightly different leaf pattern, and I thought, “Oh.” So I picked it and, it’s on my desk, and I’ll bring it home and see what’s happened to it. 

Tom Goulter

What a great answer. Thank you so much for your time, Selena. 

Dr Selena Chan

Thank you. 

Tom Goulter

And Amit, again. 

Dr Amit Sarkar 

Thank you so very much.

Associate Professor Adrian Paterson 

Yeah, thanks for coming along. 

Dr Selena Chan 

That’s alright. 

Acknowledgements 

Tom Goulter, Kia Pākiki Canterbury 

Associate Professor Adrian Paterson, Lincoln University 

Dr Selena Chan, Ara Institute of Canterbury 

Dr Amit Sarkar, Ara Institute of Canterbury

Kia Pākiki Canterbury logo, © Plains Media/Royal Society Te Apārangi (Canterbury Branch) 

Images of Tom Goulter and Adrian Paterson, © Royal Society Te Apārangi (Canterbury Branch) 

Images of Selena Chan and Amit Sarkar, © Ara Institute of Canterbury

AI-generated painting, rights: The University of Waikato Te Whare Wānanga o Waikato


Rights: © Royal Society Te Apārangi, Canterbury branch
Published: 31 March 2026


The Science Learning Hub Pokapū Akoranga Pūtaiao is funded through the Ministry of Business, Innovation and Employment's Science in Society Initiative.

Science Learning Hub Pokapū Akoranga Pūtaiao © 2007-2026 The University of Waikato Te Whare Wānanga o Waikato