Meet Buddy: Your AI-Powered Lifelong Learning Companion
Education is a profoundly human enterprise. At its foundation is the transference of knowledge, understanding and skills from one human mind to another, either directly (through instruction) or indirectly (through books and videos, for example). There are many other ways to gain understanding of the world through lived experience, but if we want to be fully educated we need to be taught.
To teach is to pass on what you have learnt and to help others build on prior knowledge. To be educated is to be willing and able to receive teaching in one shape or another. This is why schools are so often ineffective: they do not create the conditions in which students want to learn. Students whose bodies are in a classroom but whose minds are elsewhere are ineffective learners.
Education is social because it is about the connection of minds. The best teachers open up their students’ minds because they connect with them at the most fundamental, human level and enable this transference to occur. But that is not all. It is also about opening up to the possibilities inherent in learning, creating learners who are keen to know more, to do their own research, follow their passions. We don’t stop learning when we leave school unless we choose to. And a human who isn’t always learning and in search of their truth is only part alive.
However, many barriers stand in the way of effective learning. In some parts of the world there is simply no access to the teachers or materials that would enable learning to occur. Many children therefore suffer from a lack of opportunities to learn, however much they might wish to. In many parts of the developing world class sizes are also too large for children to have the sort of one-to-one attention that can create truly powerful learning.
Another challenge is a lack of motivation on the part of the student (and indeed the teacher). How often do we see bored children in a class? Ken Robinson, in his seminal Changing Education Paradigms RSA talk back in 2010, argued that schools remain stuck in the industrial model of education, with children sitting in rows being talked at by a teacher. This is outdated, ineffective and must change. Children no longer learn well in this model (if they ever did). There are exceptions, of course, with some classrooms being places of lively enquiry, challenge and risk-taking. But for the most part, children are bored by how they are taught, and teachers are bored by how they are expected to teach. In a ‘just in time’ society, where we learn new skills and take on board new ideas when required, perpetuating this ‘just in case’ model of learning makes little sense.
But if Robinson brought this to the world’s attention more than ten years ago, why has the pace of change been so slow? In 2011, I led the first one-to-one iPad implementation in a UK state sixth form. On the back of that, and the myriad mistakes we made (because none of us had a clue what we were doing with ed tech back then), I worked with Apple, supporting other schools’ digital implementation strategies. At the time we thought that mobile technology was the change we were hoping for: finally, an easy-to-use device that would eliminate much of the need for carrying heavy school bags, that would create more opportunities for just-in-time learning, and that would change the role of the teacher from unidirectional transmitter of knowledge to curator of learning pathways and guide.
However, after more than ten years there has been little fundamental transformation. Schools have played with iPads, BYOD and Chromebooks, but none of them has disrupted the teaching and learning paradigm in the ways we might have initially hoped. They may have led to less paper, or to more efficient administration, but the big changes have not occurred. In fact, when forced into an unchanged model which privileges direct instruction over independent learning and collaboration, these devices can distract as much as they support. Try teaching a class with laptops when every screen faces away from the teacher and you’ll understand what I mean.
We had the same cautious excitement when interactive whiteboards found their way from boardrooms into schools in the late 90s, and again a few years later when virtual learning environments like Moodle moved from universities into classrooms. The UK’s annual BETT show is filled with vendors trying to sell Heads and Bursars the latest gizmo or gadget that promises to improve outcomes, save money and make their school more efficient. Many magic bullets are still being fired; few of them ever hit their target.
Why have none of these innovations caused the revolution Ken Robinson was crying out for back in 2010? It comes down to two words.
Relationship, and trust.
The importance of trust
At its most simplistic, when we begin learning something for the first time we come up against barriers. Say we wish to learn a new language: we are immediately confronted by sounds and written symbols that we cannot make sense of. These symbolic barriers block us from understanding the new language until we begin the process of learning vocabulary, grammar and pronunciation, a stepped process that begins with the basics and builds stage by stage.
At the point we find ourselves blocked we need help, like approaching a locked door and needing a key to pass through, or hitting a wall and needing a ladder to climb over. That help could be a teacher in a classroom, a book, a website or a video. We cannot learn French without a guide who shows us how to move from our mother tongue into the new language. There are exceptions: my children have grown up speaking both English and Russian, because I speak to them in English and my wife in Russian. They have never had a lesson, or even looked at a textbook; they have naturally absorbed two languages and therefore speak them both. Aside from this kind of immersion, however, most of us need more explicit support when we wish to learn something new.
At the very centre of this effective ‘unblocking’ is trust, and it is trust that is built over time between teacher and student. In the same way that we trust books written by reputable authors, who have built credibility over time through their output, so teachers build that same credibility through their daily and weekly interactions with their class. This is why even the best teachers take time to settle into a new role. Students often take a while to trust their teachers, particularly if there has been high staff turnover, which sadly happens in so many schools. This lack of trust is one of the largest barriers to learning.
This is why conventional education technology solutions have not added the value we were hoping for. We cannot build a relationship with an iPad screen, laptop or phone, however wedded we are to these devices. They are functional and serve their purpose, but we do not have the sort of warm relationship with them that facilitates effective learning. It is the same with online courses, where students build no relationship with their instructor. Massive Open Online Courses (MOOCs) are a case in point: a 2014 University of Pennsylvania study found that completion rates averaged only around 4%, meaning the vast majority of students dropped out.
If we can agree on two premises, that trusted relationships sit at the centre of effective learning and that they can only be built over time through authentic and personalised interaction, then it follows that, in order for technology to genuinely impact learning, this relationship would somehow need to be replicated.
This is where AI comes in.
The Rise of AI in Education
AI has been around in one shape or another for more than fifty years, but it is only in the last five to ten years that we have seen it add real, if limited, value to the learning process, for example in the machine learning algorithms that adapt difficulty levels in software like Atom Learning and IXL.
However, it is only in recent months that there has been a quantum leap in the sophistication (and potential use cases) of AI. With the arrival of image generation engines like DALL-E, Midjourney and Stable Diffusion, language models like ChatGPT, and music generation software like Beatoven, a new world of possibility has opened up for their application in an educational context. OpenAI’s GPT model, which underpins ChatGPT, is now being integrated into Microsoft’s products, including an overhauled Bing search engine (Bing Chat). Google is soon to release Bard, its response to ChatGPT. There will be other entrants as the market heats up: we may in time look back on ChatGPT as the MySpace of the AI world. It is impossible to say.
On the cusp of a paradigm shift
To say that we are standing on the cusp of a paradigm shift is no overstatement. AI has the potential to be the great disruptor we have been waiting for. Even the internet at its most sophisticated only serves to present the world as its contributors envisage it. Whether gaming, blogging, recipes or social media, websites do little more than offer a window into the human minds and worlds of their creators. With AI like ChatGPT and Midjourney, this changes. These systems create things anew: unique outputs, produced by a process that is loosely analogous to the way the human brain works.
Because it is the capacity of these new AI language models to learn and create that is their most compelling feature. Systems like ChatGPT are built on enormous amounts of data: in OpenAI’s case, a vast swathe of the internet up to 2021. They don’t simply regurgitate this information, but rather use it as the basis on which to learn and therefore respond to prompts. This is the intelligence of AI: it is better, as Professor Rose Luckin of the UCL Institute of Education has suggested, to say that these AIs ‘experience’ this information and use it to form their own conclusions. OpenAI describes training its GPT models as akin to ‘training a dog’: the model absorbs vast amounts of text, picks up the rules of spelling, punctuation and grammar, and ends up, in essence, predicting the next word, like a dog learning to associate a certain command with a certain action.
You can engineer a GPT prompt to make the model act in a certain way. For example, by beginning a prompt ‘Take the role of an English to French translator…’ or ‘Take the role of a university philosophy teacher…’ you are telling it how to act, and every request you enter after this point will be influenced by that first prompt. The model therefore behaves according to how it has been instructed.
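As a concrete (and necessarily hypothetical) illustration, here is a minimal sketch of how such a role prompt might be sent programmatically, using the OpenAI Python client; the model name and the follow-up question are illustrative assumptions rather than anything specified above.

```python
# A minimal sketch of role-based prompting with the OpenAI Python client (v1.x).
# The model name, wording and follow-up question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

messages = [
    # The opening system message sets the role the model keeps for the whole conversation.
    {"role": "system", "content": "Take the role of an English to French translator."},
    {"role": "user", "content": "Where is the station?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)  # e.g. "Où est la gare ?"

# Every later request appended to the same history is shaped by that first instruction.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "I would like a coffee, please."})
follow_up = client.chat.completions.create(model="gpt-4", messages=messages)
print(follow_up.choices[0].message.content)
```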
The reason this is so important for what is proposed below is that, within an educational context, these models can now learn about the students they interact with. For example, if a student uploads a number of essays to ChatGPT and asks the model to analyse them against a given set of criteria, it will do so. It will then be able to write an exemplar response based on what it has learnt about the student’s writing style, academic level and so on.
Let’s take a student with an average grade. That student can copy and paste a number of paragraphs from a handful of literature essays and ask the model for recommendations on how the writing could be improved to target a top grade. And if the student continues to paste essays into the same conversation, the AI will continue to learn their strengths, weaknesses and style, ensuring that any suggestions it gives remain true to the student’s own voice.
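A sketch of how this might look under the hood: by keeping every essay and every reply in a single conversation history, the model’s later suggestions are shaped by what has already been pasted in. The tutor instruction, marking criteria and model name below are illustrative assumptions, not a prescribed workflow.

```python
# A sketch (assuming the OpenAI Python client v1.x, with invented criteria and model name)
# of how feeding several essays into one conversation lets the model pick up a student's
# style before suggesting improvements in that same style.
from openai import OpenAI

client = OpenAI()

CRITERIA = "AO1: informed personal response; AO2: analysis of language, form and structure."
essays = ["First paragraph of essay one...", "Opening of essay two..."]  # placeholders

messages = [{"role": "system",
             "content": "You are an English literature tutor. Assess writing against "
                        "the criteria given, then suggest improvements that keep the "
                        "student's own voice and style."}]

for essay in essays:
    messages.append({"role": "user",
                     "content": f"Criteria: {CRITERIA}\n\nEssay extract:\n{essay}\n\n"
                                "How could this be improved to target a top grade?"})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    feedback = reply.choices[0].message.content
    # Keeping the assistant's replies in the history is what lets later feedback
    # build on what the model has already 'learnt' about this student's writing.
    messages.append({"role": "assistant", "content": feedback})
    print(feedback)
```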
However, this still involves a manual interface between student and AI model. Whilst these models can increasingly be automated using workflow tools like Zapier, this remains beyond the reach of most users. The barriers to entry are still relatively high for the sort of continuous learning support that could make a genuine difference to the average school-age student. We are still playing with this technology and seeing what it is capable of.
It must be noted that these models are not yet flawless: ChatGPT, for example, is prone to giving occasional wrong or even absurd answers, though each new iteration becomes more accurate. The early versions of Bing Chat have shown the same tendency towards occasional weirdness. We are still in the very earliest stages of their development, and as innovative solutions are developed to combine with large language models, so they will make fewer mistakes.
But the foundations are there, and it is the very back-and-forth, iterative nature of these AI models that has such potential in an educational context.
Enter the Learning Buddy
Imagine an AI ‘buddy’ that joins a child on their learning journey from a young age. (We will call him Buddy for the purposes of this exercise.) Buddy will learn and grow with the child, becoming increasingly sophisticated and knowledgeable about the child’s strengths and weaknesses. It would be continuously connected to the internet, but able to filter what it draws on according to what the child requires, their learning level and the amount of content monitoring needed. It would be closely monitored by parents and school when the child is young, then increasingly free as the child grows into their teenage years and adulthood. It will be as if the child has the world’s very best teacher, mentor and guide alongside them whenever they need it. As the child converses with Buddy, so both learn and grow. A relationship develops.
This is important. Because, in the same way that ChatGPT ‘learns’ the more you interact with it, so Buddy would mature with the user. Imagine a student asking their AI buddy a question. Rather than coming back with a Siri-esque response culled from the internet, it could tease the answer from the student, asking the sorts of Socratic questions any good teacher would ask, getting them to break down the problem and consider different possible routes to a solution. Once the student can take their thinking no further, because they have hit a genuine ‘learning wall’ and cannot get over it, Buddy can share some ideas for further discussion. In this way, it is less about the AI simply giving the student the answer and more about getting the student to draw on their prior learning, take risks and try out different ideas. This is how we learn best: when we aren’t spoon-fed, but rather have the chance to draw on our pre-existing knowledge whilst always being pushed to the edge of what Vygotsky called the zone of proximal development, the area within which we feel comfortable enough to be able to add to our existing knowledge, understanding and skills.
Users might summon Buddy through simply saying ‘hey Buddy’, in a similar way to how we currently summon Siri or Alexa. An exchange between a student (let’s call her Jessie) and her AI buddy might look like this:
‘Hey Buddy.’
‘Hey Jessie, how can I help?’
‘I’m stuck and I’m stressed!’
‘Sure Jessie, what’s the problem?’
‘So I don’t know what a compound sentence is.’
‘Ok, let’s break it down. Do you know what the word compound means?’
‘It’s like somewhere you can keep stray dogs right?’
‘Haha — well yes, that’s one definition for sure. Impressed you know that!’
‘Thanks.’
‘In this context it means something different. When things build one on the other, they compound. Does that make sense?’
‘I think so.’
‘Like you could say ‘all my problems are compounding one on another.’’
‘I know how that feels.’
‘Great! Well, not great about the problems, but great you understand. So, if compounding means building one thing on another, what do you think a compound sentence is?’
‘Erm, a sentence that builds one thing on another?’
‘Getting warmer! What things might they be, in a sentence?’
‘Ah, what are they called, the different parts of a sentence! I can’t remember…’
‘Come on you can! You learnt this with Miss Jones a few days ago. Thiiiink….’
‘Clauses! They’re called clauses.’
‘Yay! Well done! So come on, think hard…. A compound sentence is therefore….’
‘A sentence that builds one clause onto another.’
‘Amazing! Now, think about how a sentence might do that. What do you need to build a house?’
‘Bricks I guess?’
‘Yup and what do the bricks need to stick them together?’
‘Oh now I can’t remember what the stuff is called but it’s sort of sticky and then dries.’
‘Ok, I won’t torture you on this. It’s called mortar. So, you need mortar to stick two bricks together. And to stick two clauses together you’re going to need the sort of sentence version of mortar. Any idea what sort of words they might be?’
‘Oh I have no idea. This is so hard! Why can’t you just tell me?’
‘Haha no. Come on Jessie. You should know me well enough by now!’
‘True. Ok, help me a bit more then.’
‘Let’s take these two clauses: I like dogs. I like cats. How would we stick them together into one compound sentence using our sentence mortar?’
‘Oh that’s easy! I like dogs and I like cats.’
‘Perfect! So a compound sentence needs what we call a connective like ‘and’. Aaaaand why do we call it a connective?’
‘Because it connects together the two clauses!’
‘Speechless…. Great work Jessie. You are now a compound sentence genius. Let’s try a few more examples and then we’ll talk about what you just said about problems compounding — if you’d like to of course!’
And so on….
Buddy doesn’t give Jessie the answer straight away. It teases the answer from her, staying within the zone of proximal development, pushing her to the edge of it so she has to think. It encourages, uses humour, shows it knows her and her prior knowledge, uses her language style, and is cognisant of any signs that Jessie may need to talk about something more personal. What may strike you about a dialogue of this nature is how human it sounds. It replicates the type of conversation a child might have with a parent or trusted teacher. It’s warm and friendly, but stays on track.
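For readers curious how such behaviour might be coaxed out of today’s language models, here is a hedged sketch of the kind of system prompt that could steer one towards this Socratic, ZPD-style conversation. The profile fields, wording and model name are invented for illustration and are not a specification of how Buddy would actually work.

```python
# A speculative sketch of a Socratic 'tutor' system prompt built from a student profile.
# Everything here (profile fields, wording, model name) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

student_profile = {          # in a real system this would come from Buddy's memory
    "name": "Jessie",
    "recent_topics": ["clauses (with Miss Jones)", "multiplication"],
    "style": "informal, responds well to humour and gentle encouragement",
}

system_prompt = f"""You are Buddy, {student_profile['name']}'s learning companion.
Never give the answer straight away. Ask one short Socratic question at a time,
building on what the student already knows: {', '.join(student_profile['recent_topics'])}.
Use warm, informal language ({student_profile['style']}). Only explain directly
once the student is genuinely stuck, and watch for signs they want to talk about
something personal."""

messages = [{"role": "system", "content": system_prompt},
            {"role": "user", "content": "I don't know what a compound sentence is."}]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```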
You can imagine its application in so many contexts. As well as learning support, Buddy could give the child advice about relationships, staying safe online (warning Jessie when she strays into an online area that could prove dangerous to her), healthy eating and so on. There are dangers in this, of course: we want children to have as many human relationships as possible, and not to become too reliant on a virtual friend and guide. This would have to be carefully considered, but its use could be ring-fenced to times when the child is engaged in school or homework tasks, or to a certain amount of free time in the evenings and at weekends. As the child matures, this limiting could be slowly withdrawn, giving the young adult more freedom to use their Buddy whenever they wish. More on this later.
Instantly leaping barriers
The power of Buddy is in enabling learners to leapfrog barriers to learning at the precise moment they are blocked from moving forwards. The problem with the current model of school-based learning is that students often don’t get the support they need to make this leap until some time after they have come up against the barrier.
A good example is something as simple as our student Jessie working through a maths homework task with ten multiplication problems. The problems increase in difficulty to the point where Jessie ends up having to guess the answers, and gets some wrong. What normally happens is that she has to wait until either the teacher marks the work and writes the right answers in her workbook, or the work is marked in class, with the teacher calling out the right answers. Often by the time this happens, Jessie can’t remember what she was stuck on in the first place. With her AI buddy, however, at the moment Jessie gets stuck she can say ‘hey Buddy, I have no idea how to answer question seven, can you help?’ And Buddy goes through a similar question-and-answer process to the one above, getting Jessie to dig down into her prior learning with the easier problems and building from there.
In my experience, being able to help students leap these barriers at the very moment they are blocked by them is the single most powerful interaction a teacher can have. An AI ‘unblocker’ that does not simply hand the student a ladder to get over the barrier, but explicitly shows them how to build their own, will enable them to do so independently in future. Because the AI works through problems step by step, it makes visible the underlying process by which new knowledge, understanding and skills are gained, and that process can then be transferred into other contexts. For example, Buddy may only need to go through this step-by-step process a few times with Jessie before she can do it herself, holding an internal, ‘meta’ dialogue and working out how to approach the answer on her own. This is the beginning of true independence.
From monitoring to freedom
In the early years of use, Buddy will be closely monitored by parents and the school. Time and access limits will be strictly enforced, and because these are the same across all children of the same age, there is none of the current problem of children feeling left out because their peers have online access until late at night and they don’t.
As the child matures, so Buddy’s role as monitor of their online usage reduces, until at age 18 it is removed altogether. Even once removed, Buddy would still recommend healthier choices, and could in theory continue to offer monitoring should the young adult wish it. But just as Buddy makes the learning process explicit, so, by making clear the impact of bad choices on physical and mental health, it may well have inculcated good habits in the young adult by the time the controls are removed.
Buddy could limit access to the internet at certain times of the day: during school and study time, for example, the internet could only be reached through the app, meaning that only material of use and relevance to the student would be accessible. This addresses one of the points made above: students are too easily distracted by screens during the learning day and during homework time.
Buddy as a Life Saver
Imagine a scenario where a child is in danger, perhaps from domestic abuse or bullying. Buddy would be able to detect sudden changes in its user’s circumstances, perhaps by picking up on audio cues, or on a suddenly raised heart rate if the user is wearing a smartwatch. The appropriate authorities could then be automatically alerted. And because Buddy would know the user’s calendar, it would know, for example, whether a raised heart rate was simply the result of sport. In this way, anything genuinely unusual could be detected and acted on.
For lower-level concerns, for example if the user is showing signs of depression, Buddy could offer support, guidance and coping strategies, and could also, with the user’s or parents’ permission, flag concerns to local health professionals.
The idea of life saving therefore has two meanings: Buddy saves data on the user’s life, becoming more and more useful and personalised as the user gets older; and it could potentially save a life by getting the user help at the right time.
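To make that reasoning concrete, here is a purely illustrative sketch of the kind of rule involved: a raised heart rate only escalates if the calendar cannot explain it. The thresholds, field names and escalation labels below are assumptions, not a real safeguarding protocol.

```python
# Illustrative only: flag an elevated heart rate unless scheduled physical activity explains it.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    timestamp: datetime
    heart_rate_bpm: int

def calendar_explains(timestamp: datetime, calendar: list[dict]) -> bool:
    """True if the user is scheduled for physical activity at this time."""
    return any(event["start"] <= timestamp <= event["end"] and event.get("physical")
               for event in calendar)

def assess(reading: Reading, calendar: list[dict], resting_bpm: int = 70) -> str:
    if reading.heart_rate_bpm < resting_bpm * 1.5:
        return "normal"
    if calendar_explains(reading.timestamp, calendar):
        return "normal"          # raised heart rate, but the user is doing sport
    return "flag_for_review"     # unexplained spike: escalate to a trusted adult

# Example: a spike at 15:00 during scheduled football practice is not flagged.
calendar = [{"start": datetime(2024, 5, 1, 14, 30),
             "end": datetime(2024, 5, 1, 16, 0), "physical": True}]
print(assess(Reading(datetime(2024, 5, 1, 15, 0), 150), calendar))  # "normal"
```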
The power of big data
It used to be said that knowledge is power, but with the wealth of the world’s knowledge at our fingertips online, this is perhaps less the case today. Data, however, is certainly power: Google’s enormous success comes down to the amount of data it harvests on its users and how valuable that data is to advertisers, who are able to aim their Google and YouTube ads at exactly the right target demographic. Facebook, Twitter and TikTok are the same: their business models rest on the data they collect on the users of their platforms. By making their social platforms free at the point of use they have drawn in millions of users, along with those users’ valuable spending habits and preferences.
In general, we use data poorly in schools. We might collect assessment results and enter them into a spreadsheet or management information system, and the better schools might analyse that data to work out how to improve teaching, support students better and raise attainment, but in general we are not harnessing nearly enough useful data about learners.
However, as Professor Luckin says, there is so much more to data than that:
Data in education might be about the physical learning environment, the virtual learning environment, the curriculum, the pedagogy, the use of resources, and much more besides. In addition, the connections that exist between these factors are also a form of data, as is the connections that exist between these factors and the people who are learning.
Buddy will change our approach to how we use data in schools. It will constantly collect data on our student Jessie’s approach to learning, understanding when she is engaged, when she has switched off, the times of day she learns best, and the subjects and instructional styles that suit her. By using the front camera of the mobile device or laptop, Buddy will be able to monitor facial expressions and body language, and assess Jessie’s mood in real time. It could even prompt her to talk about how she feels by saying things like ‘hey Jessie, you feeling ok? You look a little down. Want to talk about it?’ If Jessie says no, Buddy won’t press her. It is important this doesn’t feel intrusive: Buddy should stay in the background until needed. But this simple asking after Jessie’s welfare further builds the trusting relationship that, as we have seen above, is vital for powerful learning to occur.
In the classroom we could take things further. By creating smart classrooms, with cameras observing interactions and gathering data on engagement levels through the lesson, so much useful real time assessment could be done which could eliminate much of the need for teachers to spend hours marking books. For example, Professor Luckin suggests that ‘eye tracking can be used to capture the direction in which a student is looking, and from this we can identify whether a group of students are synchronised in their gaze. Being synchronised in this way gives us a small clue about the effectiveness with which these students are working together. Imagine that we have another dozen small clues from our data analysis, then we might create a rule-based system that uses this information to send different sorts of feedback to the teacher to help them optimise the support that they provide for each group of students.’
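A toy illustration of that rule-based idea follows: a handful of small ‘clues’ about a group are combined into a short message for the teacher. The clue names and thresholds are invented for illustration, not taken from Professor Luckin’s work.

```python
# A toy rule-based feedback function: combine several invented 'clues' (scores 0-1)
# into a brief suggestion for the teacher. Names and thresholds are assumptions.
def group_feedback(clues: dict[str, float]) -> str:
    messages = []
    if clues.get("gaze_synchrony", 0) < 0.4:
        messages.append("Group attention is scattered; a brief check-in may help.")
    if clues.get("turn_taking", 0) < 0.3:
        messages.append("One or two students are dominating the discussion.")
    if clues.get("time_on_task", 0) > 0.8 and clues.get("gaze_synchrony", 0) > 0.6:
        messages.append("Group is working well; consider a stretch question.")
    return " ".join(messages) or "No action suggested."

print(group_feedback({"gaze_synchrony": 0.3, "turn_taking": 0.7, "time_on_task": 0.5}))
```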
Because Buddy can assess attainment continuously, the majority of manual assessment could be eliminated, freeing up teachers to focus on supporting independent learning in the classroom.
But it could go further than this. When students are together, working in a group, their Buddies could also join together, creating a ‘hive mind’ where their collective intelligence is applied in the solving of problems. Near field communication would ensure that only those Buddies with the required permission levels could join together.
Changing school buildings
As soon as we move into a new paradigm for how learning is delivered and progress is assessed, we can finally break free from the notion of a school as a ‘box filled with boxes’. Buildings with different learning zones, cross age group learning, an end to the day broken into periods: schools can become communities of enquiry and teachers can be part of that community, supporting good learning behaviours and being one more trusted supporter of the child’s development. There will still be a place for the inspirational human learning mentor in this new model: the child should be surrounded by both the people and technology they can trust, creating a synergistic network where they feel supported to be able to take risks and push themselves outside their comfort zone.
This leads onto the final point, about how this application could reach into parts of the world currently significantly underserved by high quality education.
Truly democratised access
Buddy could be powerfully used in countries where there is limited access to quality teachers. In regions such as sub-Saharan Africa, where there is a severe shortage of teachers but where smartphones are increasingly ubiquitous, an app which enables students not only to access the world’s information and have it curated for them, but also to have a mentor who can guide them through this material and teach them to be independent learners, is highly attractive. There is significant investment flowing into this part of the world that could make such a tool accessible to all. The idea of a ‘dollar a day’ education starts to become realisable.
The Dangers of an AI Mentor
Before we go rushing headlong into this brave new utopia, there are several caveats; we have to go into this with our eyes open. An AI-supported future will happen: what I have outlined above already exists, in one form or another. It doesn’t take an enormous leap to gather the various AI engines and solutions together and combine them into an always-ready AI mentor that can guide us through our lives.
Let’s look at five potential pitfalls, and how we might address them.
1. A lack of human interaction
People need people, and children particularly so. We are social beings who need to see people, interact with them, and learn and grow with them. Spending all day talking to a laptop or phone can never give you that, nor should it. We must not see these AI models as replacing human interaction. Rather, they should augment it, becoming one more trusted relationship in the child’s life as they grow into adulthood.
By pairing children with human mentors, who can provide one-to-one and small-group learning support and also teach children how best to use their AI buddies, children can get the best of both worlds as they grow: learning mentors who are freed from much of the burden of planning and assessing work, so can focus on the intellectual and social development of the children in their care, and a digital guide who is always there and will never let them down.
2. An over-reliance on technology
We have enough screens in our lives. Do we really want our children to spend their days chatting to one? Becoming too reliant on a digital relationship could prove damaging to growing minds.
It will be vital for the AI buddy to promote independent thinking, decision-making skills and creativity outside of the digital space. Buddy could have down times when online access is removed, nudging children out of their digital worlds and into human spaces where they can enjoy time with friends and family. If buddies were synchronised, and every child knew their friends were not online, there would be none of the FOMO that causes anxiety in all of us, not only children. It would also reduce the extreme tiredness children experience from staying up too late chatting to friends online, or engaging in potentially harmful activity.
3. Models that reinforce biases
It has already been shown that AI models can perpetuate and reinforce biases. They are only as good as the data fed into them, and any biases inherent in that data can be reproduced by the AI. Examples include gender and race bias, where the input data leans towards one group or another.
The AI will need to be trained on a diverse and representative data set, and the interactions between AI buddy and user regularly monitored to ensure they are free from bias.
In addition, there will likely need to be a high degree of content moderation when children are young, with only certain areas of the internet designated as safe. In the earlier phases of Buddy’s development, the data could be hard-walled, with only certain data sets fed into the model and accessible to the AI buddy (much as the early versions of ChatGPT were disconnected from the live internet). In time, further algorithms could offer softer, digital walls around content depending on age.
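One way to picture those softer, digital walls is as an age-tiered allowlist that widens over time. The domains, tiers and age boundaries below are illustrative assumptions only.

```python
# A sketch of age-tiered content walls: the allowed sources widen as the child gets older.
AGE_TIERS = [
    (7,  {"school-library.example", "buddy-curated.example"}),                 # hard wall
    (12, {"school-library.example", "buddy-curated.example",
          "kids-encyclopedia.example"}),
    (16, None),  # None = open web behind Buddy's standard safety filters
]

def allowed(domain: str, age: int) -> bool:
    for max_age, allowlist in AGE_TIERS:
        if age <= max_age:
            return allowlist is None or domain in allowlist
    return True  # 17+: no domain allowlist applied

print(allowed("kids-encyclopedia.example", 10))  # True
print(allowed("social-video.example", 10))       # False
```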
4. A lack of emotional intelligence
As we grow, we naturally gravitate towards people who understand us, empathise with us and offer us unconditional love and support. The danger of an AI buddy is that it may simply never be able to replicate that level of emotional intelligence, leaving the child to rely on a model that always falls short, which could cause emotional scarring in later life. It would be akin to a child growing up in a loveless and cold family and being unable to form attachments when they are older.
However, by building emotional intelligence into the natural language processing (NLP) model, and emotional recognition technology to detect and respond to the child’s emotional state, Buddy could both guide children through their learning and offer pastoral support. It should never be seen as a full replacement for human interaction in all its messiness and warmth, but could at least offer something that many children simply do not have: a listening ear who is genuinely interested in their problems.
5. Data privacy and security concerns
There will be an enormous amount of data collected on students’ activity, learning styles, interactions, academic progress, emotional state and so on. This could be vulnerable to hacking, which raises potentially serious concerns. The AI buddy would need rigorous cross-border data protection protocols, including data encryption and biometric access controls, along with guarantees that the data collected is used only for educational and support purposes.
Conclusion
This future is not far away. The exponential speed at which AI models are entering the world is both exciting and terrifying, but we cannot ignore it. If we agree that at the heart of great learning is the trust that comes from a relationship built over time, and that this trust could potentially be replicated in the digital space, then we should not ignore the potential of AI to offer all of us the daily support we need to learn and grow in ways we have not seen before.
There are dangers, and it is not yet perfect. We are still in the very early stages. But what we have seen in recent months is a shift in how we perceive our relationship with these new intelligences. We are wary, of course: we have seen too many movies like 2001: A Space Odyssey and Terminator to feel fully comfortable with AI. But we cannot close Pandora’s box now: it is open, and we must examine what is inside and use it to our advantage. If we don’t, someone else will, and they may use its contents for ill rather than good.
We have been stuck with a model of education that was outdated long before our lifetimes. It is about time we did something about it, and AI promises to be able to do just that. It will take the combined efforts of day-to-day educators, technical experts, theorists and investors to make it work, but the more we collaborate, the more likely we are to come up with solutions that can truly make a difference, offering the kind of educational journey we never received: one that does not end as soon as the school gates close.
References:
- Luckin, Rose, “Machine Learning and Human Intelligence: The Future of Education for the 21st Century” (2018)
- Luckin, Rose, “Intelligence Unleashed: An Argument for AI in Education” (2016)
- https://openai.com/blog/how-should-ai-systems-behave/