In the first episode of Un-redacted, the Sask IPC Podcast, Commissioner Grace Hession David discusses an extremely important topic: children’s privacy and generative artificial intelligence (AI).
Generative Artificial Intelligence and Children’s Privacy Concerns
Diane: So my name is Diane Aldridge and I am the Deputy Commissioner for the Office of the Saskatchewan Information and Privacy Commissioner, and I’m joined by the Commissioner, Grace Hession David. Our topic today is Artificial Intelligence and Children’s Privacy. This is a huge topic and we may need several installments of this podcast to fully explore it. But Grace, I wanted to ask you about the history of AI, and to really start off this podcast: can you give us a definition of what Artificial Intelligence is?
Grace: At its most basic level, Artificial Intelligence is best thought of as a computer tool that can handle a large amount of data. It is a tool that tries to simulate a process that matches or even exceeds human intelligence. And what do we mean when we refer to “human intelligence”? I think we would find common agreement in defining “intelligence” as one’s ability to learn; one’s ability to infer, that is, to draw conclusions on the basis of the facts before us; and, of course, the ability to reason. The ability to reason means the ability to think logically, to process ideas, thoughts and emotions, and to interact with our fellow human beings in a peaceful, orderly and co-operative fashion.
Diane: Before we even begin to try to describe how Artificial Intelligence can interfere with the privacy of young children, can you give us a brief history of AI and where we are today in terms of its interaction with children?
Grace: AI first started as a science project amongst select brainiacs in university research pods back in the 1950s, just after we had emerged from the atomic age. At that time artificial intelligence was really nothing more than a science fiction dream, but some thought it was possible because of the work that scientific pioneers like Alan Turing and John von Neumann had done during the Second World War to take coding and codebreaking to a whole new level. By the 1980s, engineering and computer science students were using programming languages like LISP and PROLOG. These languages treated code like data, which enabled powerful metaprogramming: programs could interact with the host and modify themselves at the host’s instruction. By the late 1980s, research into AI coding hit a critical mass and became an established field in universities.
Then in the 1990s, researchers began to explore Machine Learning (ML). As the name indicates, the machine learns on its own without having to be explicitly programmed by a human. It does so using predictive logic applied to the information it has already been fed: a machine learning algorithm analyzes patterns in data, and the more data it is given, the more confident its predictions become. And because that logic is so fundamental to the ability to predict, the system can spot an outlier by the way it stands out from the rest of the data. Spotting outliers became extremely useful as cybersecurity emerged as a practical application for AI. Machine Learning exploded in the 2010s.
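To make the outlier idea concrete, here is a toy Python sketch of that kind of pattern-learning. The login-hour data and the three-standard-deviation threshold are invented for illustration, not taken from any real system.

```python
import statistics

# "Training data": the hours (on a 24-hour clock) at which an account
# normally logs in. The model "learns" the normal pattern from these.
normal_login_hours = [9.1, 9.3, 8.8, 9.0, 9.2, 8.9, 9.4, 9.1]

mean = statistics.mean(normal_login_hours)
stdev = statistics.stdev(normal_login_hours)

def is_outlier(hour: float, threshold: float = 3.0) -> bool:
    """Flag a login that sits far outside the learned pattern."""
    return abs(hour - mean) / stdev > threshold

print(is_outlier(9.2))  # False: fits the usual pattern
print(is_outlier(3.0))  # True: a 3 a.m. login stands out as an outlier
```

The more examples the model sees, the better its estimate of “normal” becomes, which is exactly why more data makes the predictions more confident.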
Diane: So where are we today with all of this?
Grace: Machine Learning morphed into Deep Learning. Deep Learning uses “neural networks” that simulate in a computer the way a human brain works, to the extent that we understand how the brain works. It’s called “deep” because the networks are stacked in multiple layers, loosely mirroring the layered structure of the brain. Deep Learning is very unpredictable: you can put the same question to it twice and not get the same result, both because there are many layers to the neural network and because there is a deliberate element of randomness in how the program produces its output.
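A toy Python sketch can show where that run-to-run variation comes from: a generative model samples its output from a probability distribution rather than picking one fixed answer. The candidate words and scores below are made up for illustration.

```python
import math
import random

# Hypothetical raw scores (logits) a model might assign to candidate
# next words; these numbers are invented for illustration.
candidates = {"dog": 2.0, "cat": 1.5, "bird": 0.5}

def sample_next_word(temperature: float = 1.0) -> str:
    # Softmax turns raw scores into probabilities; the output is then
    # *sampled*, which is why repeated runs can differ.
    weights = [math.exp(score / temperature) for score in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

# Ask the "same question" five times: the answers can differ each run.
print([sample_next_word() for _ in range(5)])
```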
In the late 2010s, Deep Learning led to the development of Generative AI, and advances in cloud computing propelled the commercial viability of generative AI programs. Generative AI is going to do to our reality what the industrial revolution did to the reality of people in the mid-1800s when the loom and the engine were developed. Our lives are going to be radically altered, and this is going to have serious implications for the everyday privacy concerns of adults and children.
Diane: So what does the term “generative” mean in terms of AI, as opposed to agentic AI?
Grace: Generative AI uses creative models to produce text, images, videos, or a combination of all three. It works from the basis of a foundation model, a Large Language Model (LLM). Language is modelled and predictions are made from that language: the essence of predictive machine learning. A helpful, very simplified comparison is the autocomplete feature when you are typing in Word: the program predicts the word you are typing and even the next few words you may use. But a Large Language Model can predict the next sentence or the next paragraph, and today there are programs like Claude by Anthropic or ChatGPT by OpenAI that will write an entire document or essay based on the instruction prompt you supply. So this is AI, and it’s called “generative” because it generates new content based on its use of logic and the large language model it works from.
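Here is a deliberately tiny Python sketch of that autocomplete idea: count which word follows which in some text, then predict the most frequent follower. A real large language model does the same kind of next-word prediction with a neural network trained on vast amounts of text; the training sentence below is just a stand-in.

```python
from collections import Counter, defaultdict

# A stand-in "training corpus"; real models learn from vast text collections.
training_text = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
words = training_text.split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Predict the word most often seen after `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most common follower of "the"
print(predict_next("sat"))  # 'on'
```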
Generative AI works from audio models, language models and video models. The real concern in terms of children is that these models can be used to create the deepfakes that are becoming very common in the cyber world at present. This is where a person’s voice or persona can be poached from an available social media platform and re-created so that they seem to say something they never said, in a situation that never was. Now, this technology can have a valuable application as well: those who have lost the ability to speak can harness it to type and actually speak with a voice. That is a miraculous development. Still, deepfakes use the audio and video sides of generative AI. The caution with young children is that generative AI can either generate new content or summarize large amounts of existing content into something bite-sized and manageable for a young and untrained mind. And of course, children who are allowed unrestricted access to the internet are not prepared, mentally and intellectually, to discern a deepfake from reality. To be honest, it’s getting harder and harder for adults to make this discernment. With an eye to this, we all noticed a video that went viral not too long ago of bunnies enjoying a trampoline in someone’s backyard at night. That was a video created using generative AI, and many people thought it was real, especially children. But the real distinction with generative AI is that a human is constantly curating or engaging with it.
Then there are agentic AI systems, which are not reactive like generative AI but proactive. Like generative AI, an agentic system starts with a prompt from an instructor and then works from a Large Language Model. The first prompt leads to a series of actions: the system perceives the environment it is working in, decides on the next action, executes it, and moves on to the next decision and action. An agentic system is different from a generative system because it learns from its previous action and refines subsequent actions based on the original prompt it was given. The agentic system works on its own and seeks input from the human only when it absolutely needs instruction. It uses chain-of-thought reasoning, breaking a problem down into a series of simple steps.
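As a rough illustration of that perceive-decide-act loop, here is a minimal Python sketch. The inbox-cleaning task, its rules, and the ask-the-human step are invented for illustration; a real agentic system would drive each decision with a large language model rather than hand-written rules.

```python
# Goal given by the initial prompt: "clean up the inbox".
inbox = ["spam: win a prize", "mom: dinner sunday?", "spam: free money"]

def perceive(state):
    """Look at the environment: return the next unhandled message, if any."""
    return state[0] if state else None

def decide(message):
    """Choose the next action based on what was perceived."""
    if message.startswith("spam"):
        return "delete"
    if "?" in message:
        return "ask_human"  # uncertain, so seek human input only when needed
    return "keep"

# The loop: perceive, decide, execute, then go back to perceiving.
while (message := perceive(inbox)) is not None:
    action = decide(message)
    if action == "ask_human":
        print(f"Asking for instructions about: {message}")
    elif action == "delete":
        print(f"Deleting: {message}")
    else:
        print(f"Keeping: {message}")
    inbox.remove(message)  # mark handled and move to the next decision
```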
Diane: In the time we have left, can you just give us a hint as to what issues parents, teachers and guardians of young people need to be concerned with in terms of privacy issues for young children who cannot help but be exposed to generative AI in the cyberworld today?
Grace: So I’m going to give you an example of how we as adults must really supervise young children and their use of AI on their phones and laptops. Children’s use of AI really starts with how we as adults use Google: to ask questions. Children are always asking questions, and the internet will now comply. There are many chatbots out there that kids like to use, and Gemini, Google’s AI chatbot, is now available to children.
Diane: Can you define “chatbot” for us?
Grace: So a chatbot is a computer program designed specifically to simulate conversation with a human; it’s a generative AI program, and Gemini is Google’s. Google has made Gemini available to adults and to children. Children use it to help with homework, spark creative ideas and answer questions. It is popular with children because it uses a lot of images in its responses and it is designed to interact with children under the age of 13. Kids can activate Gemini on their own, and parents tend to look at it cautiously, even though Google has said that it designed Gemini to block harmful or inappropriate content. But once again, that will depend on what a given parent’s concept of inappropriate content is, and even then, creative children can still get around the block.
One of the problems with the large language models in most generative AI programs is that whatever data you and I feed into the search engine or data model becomes, for practical purposes, public. Most people don’t fully understand that, because they do not understand the concept of the large language model that generative AI learns and feeds from. Children certainly do not know this. Google warrants that Gemini does not learn from the data supplied by children, but can we really be sure of that? On top of this, there are chatbots out there that interact with the child using video and sound. The child may think it is interacting with a cute talking kitten, grow fond of the chatbot, come to trust it and see it as a friend. Do not forget that generative AI mimics human logic and the ability to reason; children have not yet reached that stage. Parents must understand that the sheer volume of data a child feeds into a search engine, in the form of questions and then interaction with the AI model, can present a real hazard of an invasion and breach of privacy. This is true for adults, but it is a special concern for children because they have no concept of what they are dealing with.
Diane: Can you give us some real guidance in how to protect our children?
Grace: I will use Gemini again as an example because it provides for, and allows for, direct parental intervention. Gemini requires the child to have an account ending in @gmail.com in order for the parent to set up parental supervision; the parent sets this up through Google’s Family Link. Once the child is registered in Family Link, Gemini will send the parent a notification when the child has accessed the program. You as a parent have to be an acknowledged user yourself, and having done that, you can turn off the child’s access to the chatbot entirely at any point in time. You can also set up screen controls that manage screen time, require permission to download apps, and let you review the child’s activity and interaction with the chatbot.
Bottom line: parents and guardians need to know that there are no AI-specific controls or visibility options to oversee or control how your child interacts with Gemini or most chatbots, so you need to have a direct conversation with your child about the need to keep personal information private. That should be the overarching priority of any parent or guardian. Children need to be told that their interactions with chatbots are not private. Gemini informs its users in the small print that human reviewers will review “a small subset” of chats to help improve the Google service. Google indicates that the humans are reviewing to ensure that the AI responses are not low quality, inaccurate or harmful. But this means there is a human element in the equation that may be reviewing the child’s interaction with the chatbot, and I do not have to go further to extrapolate the danger that could be associated with this, especially if the child’s interaction is unsupervised.
So Diane, we are going to leave it at that for now, and we will be back shortly to continue our discussion of generative and agentic AI and their pros and cons in the privacy world, so everyone stay tuned! Our transcript has links to all the sources we used to research this podcast; please feel free to access the transcript and the links in it to learn more.
Sources:
Interview with Sam Altman on GPT-5, Cleo Abram, August 7, 2025: https://www.youtube.com/watch?v=hmtuvNfytjM
“AI, Machine Learning, Deep Learning and Generative AI Explained,” Jeff Crume, IBM Technology, August 5, 2024: https://www.youtube.com/watch?v=qYNweeDHiyU
“Generative vs. Agentic AI,” Martin Keen, IBM Technology, April 21, 2025: https://www.youtube.com/watch?v=EDb37y_MhRw
Google Gemini for Children, Parental Supervision of the App: https://support.google.com/gemini/answer/16109150?hl=en