The Age of AI in Education Begins with Emmanuel Ajanma, CETL

  • Sean

Hello. You’re listening to the K-12 Tech Podcast, bringing you insights into the world of education and technology. Stay tuned as we discuss the past, the present, and most importantly, the future of technology in our schools.

Hello and thank you again for joining us on our K-12 Tech Podcast. My name is Sean, and I’ll be your host today. We are also joined by Emmanuel. He is with the Barre Unified Union School District up in Vermont, where he is the Director of Technology and Information Services. We’re going to be talking a little bit about some of the nuances that are out in the AI world, how those could affect cybersecurity, and what to look out for.

So, Emmanuel, I’m going to let you introduce yourself. Tell us a little bit about your background, how you got to where you’re at now, and we’ll get the conversation going.


  • Emmanuel

Absolutely. Thank you, Sean, for inviting me to this podcast. And like Sean has already indicated, my name is Emmanuel Ajanma, and I am the Director of Technology and Information Services for the Barre Unified Union School District, based in Vermont. We are a small district; our student enrollment is about 2,500 students, with about 500 staff, and we are facing the same challenges that every school in the country is facing. So let’s jump into discussing some of these.


  • Sean

Yeah, absolutely. We talked before the recording just to get to know each other a little bit, and we kind of had a mini-podcast ourselves talking about AI and what was going on with ChatGPT. It seems like some sort of update is happening basically every day. Before we get to how that could affect things, what are some of the newer things you’ve found? I know you’ve been playing around with AI for a little bit, getting to know it. What have you uncovered just by being on these sites?


  • Emmanuel

Yeah. Thank you, Sean. So I just returned from a conference in Austin, Texas. This conference is organized by CoSN, the Consortium for School Networking, which is an international organization. We had technology leaders from across the country come together in Austin just last week, and we discussed all of the challenges and top issues facing education. And guess what? AI and cybersecurity are the number one and number two issues facing educational leaders in the country today. So these are the tough things going on. And like I told you, Sean, when we met before, I have been playing with these different types of generative AI, like ChatGPT and others, and they are really something.

So I’m trying to look at it and see what cybersecurity risks these generative AIs, like ChatGPT, are posing for us in the education sector. I had some conversations with other educational leaders, and we talked about a few of those. I’ll be happy to share them as we go on in this discussion.


  • Sean

Yeah, absolutely. Now, as someone who has never gone on to ChatGPT or tried to figure out any AI platform at all, I know very little about it. Basically, what I’ve heard is that students may be having it write papers and things like that for them, but I would say that’s probably a pretty surface-level application for those types of platforms. What do they do, or what can they do?


  • Emmanuel

All right. So these tools can do a lot of things. CoSN just released a note about this subject, so let me say a few things that you should know. With generative AI, we have the applications that create written content, content like text, sentences, and paragraphs, such as ChatGPT and Bing. There are also other generative AIs that create audio, sound files and other things. There are some that create images and videos. So pretty much you can use generative AI to generate any type of content, whether it’s written content, you know, text, paragraphs, and things like that, or audio, video, and pictures.

So you can use these tools to generate all kinds of content. It’s very interesting.


  • Sean

So I guess that’s something that’s fun to play around with. But obviously you can’t stop technology from evolving, right? It’s pretty much in hyperspeed right now. And I’m sure if this is what’s available to us, then there are other platforms and things that civilians just can’t get their hands on or have access to.

So what are some of the implications then, in the cybersecurity world, with AI? What could be negative, and what could be positive? I mean, there have to be two sides to the coin there.


  • Emmanuel

Absolutely. Like everything in this world, like every tool, there is good and bad, right? And like I told you, when we discussed the biggest challenges for us in the education field, it’s cybersecurity and AI, things like ChatGPT. So we’re looking at what the implications of AI are for cybersecurity. One of the issues that we pointed out in our discussion was that since you can use these tools to generate text, code, pictures, and whatever, someone could use ChatGPT or this type of AI to generate even more sophisticated and targeted malware. They can generate this malware and then target schools with it, and that could pose a huge, huge challenge, especially for small schools like ours, like mine, that don’t have the funding or the staff to fight off malware and cyber attacks. With these types of sophisticated malware, it would be almost impossible for us to fight back.

And anybody can just use it to generate these types of code that can be deployed on our network. So it is very, very concerning to us. This is the number one problem, the number one challenge that we see with AI and cybersecurity.


  • Sean

So then how do you combat something that you physically can’t see? I mean, it’s not just like someone walking into the building, right? There are plenty of security protocols for keeping people out of the building who shouldn’t be there. So how do you fight what is essentially a self-learning or self-teaching virus? I don’t know if that’s the right word for it, but how do you prepare for that?


  • Emmanuel

Yeah, it’s a challenge for sure. But the good news is that doing all of those cyber hygiene things keeps us safe: securing and monitoring our network all the time; ensuring that only authorized users have access to our sensitive information; educating our students, our staff, and our teachers about the potential risks and signs of AI-generated content crossing our network; implementing strong cybersecurity measures like firewalls and intrusion detection; and regularly updating software to keep it up to date.

These things will help protect our systems from these types of sophisticated attacks. And one more thing, if I may add: we can also use AI to fight back. We can partner with our vendors so they can help us in the cybersecurity world, using AI to generate strong defenses against AI itself. So we can also use AI to protect our systems.


  • Sean

How do you make sure that something as sophisticated as AI, which is only going to get more intricate, I’m sure, is not actually opening up more areas for bad to come in when you’re using it for good? It’s a pretty tricky thing to figure out whether something is on your side or not. I would assume it is.


  • Emmanuel

Again, as you indicated, it’s going to get more sophisticated, more advanced. It’s not going anywhere; it is here to stay, and it’s going to keep developing. So I would say the way to deal with this is for us to embrace it: learn it, not ban it in our schools, not lock it out of our schools, but embrace it, join the conversation, learn how it works and how we can harness it, and use it to enhance education, because there are so many good things about AI, number one. Number two, it’s not going anywhere; it is here to stay. So we might as well dip ourselves into it as leaders in education, understand it, understand the risks so that we can help teach the other staff in our schools, and then learn how we can use it for good.

There are a lot of good things that we can use this for. So emphasize the good aspects of it, and also train and create awareness among our staff and students about its negatives.


  • Sean

So relating it to cybersecurity is obviously incredibly important for you, being in the position that you’re in. But say, as a student or a teacher, I would assume that students can now use these programs to write reports; whatever information they put into it, it generates a paper for them. So is there any way to tell that a student did or did not write something? From a teacher’s standpoint, how would I know that the information I’ve been given actually came from a student? Or even as an administrator, how do I know that a teacher is actually writing their own lesson plans, or whatever else they could use it for? How do you know?


  • Emmanuel

This is really funny, because you just touched on a topic that I was just discussing with some of my colleagues. There is this debate, right? If a student uses generative AI to write an assignment or homework, what is it? Is it plagiarism? That’s a good question. The argument is that it is not plagiarism, because the content that was generated came from multiple sources; the AI just grabs things from the Internet and from databases that were already in existence. It is not quoting a particular source, so it couldn’t be called plagiarism. But it could still be cheating, because the student is not producing the text from their own imagination, from themselves.

But there is also a debate that if the AI just helps you to brainstorm a topic, and then you develop the topic yourself, there is nothing wrong with that. If it helps bring up some ideas, gives you a topic, gives you a list of things, and then you develop that list yourself, there is nothing wrong with it; it is actually intellectually acceptable. And there is one final point I want to make: a teacher can also use AI to help with generating ideas. If a teacher in the classroom brings up, say, history or whatever they are teaching, and then uses it to generate ideas for the students to discuss, they are using generative AI in a very ethical manner, and it should be supported and welcomed as part of learning.


  • Sean

Yeah. And I think, again, used the right way it’s a good thing. But I remember we used to have to find the smart kid in class and say, here’s 50 bucks, write my paper for me. And now you can just type a bunch of stuff into the Internet and it generates one for you. Look how far we’ve come just since I was in school. We used to have, I think it was called Encarta 99 or something, a set of discs that was basically an encyclopedia you could use on the computer. So we went from that to being able to use the Internet, which was great.

But you weren’t allowed to use Wikipedia because the sources aren’t legit and blah blah blah, and now you just add in a few topics and it generates a paper for you. I guess technically it could be cheating, because it’d be like paying a kid to do your homework for you. But how would that be plagiarism? It’s not like it’s just pulling an article from somewhere and submitting it; it’s pulling from thousands and thousands of sources.


  • Emmanuel

Yeah, you touched on something while I was listening to you. It is more important than ever today for us to teach critical thinking to our students. Just because AI generated the content doesn’t necessarily mean that content is true. Remember, it is just pulling from the Internet or from databases that were already out there, so it could generate misleading information, it could generate false information. We need to teach critical thinking to our kids so that they know how to verify the information that is being generated. It isn’t necessarily correct just because it came from an AI; the AI doesn’t know truth from falsehood, it is just giving you information that is out there in a database.

So critical thinking is very important today. Another thing is the questions you ask: you need to ask the right questions to get the right answer. This is a machine, so it responds based on the type of prompt you give it. So it is also important to ask the right questions in order to get the answer you’re looking for.


  • Sean

Okay, and never having used it, let’s say I had to write a speech about, I don’t know, how to make the perfect peanut butter and jelly sandwich. Is that something I can just type in there and it will make a speech for me? How does this all work? Do I need to put in how long it needs to be, how many words? Or is it just a very broad statement, and then it just does what you ask it to do?


  • Emmanuel

Exactly. This is a good question. Coming off what I just said earlier, if you type in just a broad question, it will give you a broad and general answer. But then, and let me add this too, to come back to what you asked earlier, in that same chat, and this is why it is still so mind-blowing, if you ask a question about, let’s say, peanut butter and jelly, it gives you a general answer about how to make a peanut butter and jelly sandwich. Then you come back and say, oh, what if I add milk and sugar? How would it taste? It will remember the first question you asked and then respond, adding in the second prompt that you’ve given it.

So it will give you a response based on all of your questions; it remembers the things you asked before. This is very interesting. That is how you can get specific: if you ask about a general topic, it gives you a general answer, but if you say, hey, there are particular things I want to know about this general topic, and you add those particulars, it will give you more specific answers based on the specific questions you are asking.
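To make that concrete, here is a minimal sketch of how a chat-style generative AI keeps this kind of conversational context. It assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative, not anything from the episode.

```python
# Minimal sketch: a chat model "remembers" earlier turns only because the
# application sends the whole conversation back with every request.
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # illustrative model name

# Turn 1: the broad question gets a broad, general answer.
messages = [
    {"role": "user",
     "content": "How do I make the perfect peanut butter and jelly sandwich?"}
]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# Turn 2: the follow-up is appended to the same history, so the model answers
# "what if I add milk and sugar?" in the context of the original sandwich.
messages.append({"role": "user",
                 "content": "What if I add milk and sugar? How would it taste?"})
followup = client.chat.completions.create(model=MODEL, messages=messages)
print(followup.choices[0].message.content)
```

The point is that the "memory" lives in the growing `messages` list the caller maintains, which is why adding particulars to a follow-up prompt narrows the answer.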


  • Sean

And I want to navigate back toward the cybersecurity part real quick, because I just thought of something. A lot of the security checkpoints when you’re on any type of website are things like click the box that says I’m not a robot, or which squares have traffic lights in them, and which ones include a bicycle, or whatever. Can you teach an AI to recognize and answer those questions, or is it not there yet?


  • Emmanuel

This is a very good question. It is not there yet, because even using these tools, you have to input something and then it generates content for you. When you get to a website that asks you to click to prove that you are not a robot, there is no way to use generative AI to click those, at least not yet. But let me tell you, nothing is impossible with the way this thing is developing. Who knows, tomorrow it could get there. I would not be surprised. So it’s not there yet, but we could get there.


  • Sean

So then, I don’t want to say the biggest risk factor, but what could potentially be the largest problem? How would AI be able to circumnavigate some of the security measures that schools have in place to protect information, like students’ personal information or any of the financial information that schools hold on to? How would it get into that or pose a threat in the first place, versus someone who’s just hacking in on their computer at home or something?


  • Emmanuel

Yeah. So I would say the same way that breaches have traditionally occurred. Say you use AI to generate a piece of code that someone can then send to a school official, and they click on it. If it is a very sophisticated piece of code that was generated by an AI, it could download itself onto someone’s device and bypass the firewall that is currently on that device, because you could program it to do that. So it makes things more complicated for us, and easier for an attacker to basically infect someone’s device with malware. And that is my greatest fear when it comes to AI and cybersecurity.


  • Sean

So a lot of this almost reminds me of the plot of a movie, where you’re digitally watching this virus dodge around firewalls, and it’s learning what the firewalls and antivirus are looking for, and it’s changing in a way that it can’t be seen. I mean, it sounds like movie stuff. Are we there now, or is that where we could potentially get to, where the AI itself, or the code, is evolving to get away from what’s trying to stop it?


  • Emmanuel

Yeah, right. So I don’t think we are there yet, because this is all still developing, and the companies that are making AI at this point in time are focusing on the good, for now. But here’s my fear, here’s my concern: at some point the bad guys will come into this market, and the bad guys will start building these tools. That’s my concern. And the bad guys will focus on things like what you’re describing: how can we make more sophisticated malware that will override your security defenses? The bad guys will come in and start to design AI that will generate content, or generate malware, that could actually dodge the security defenses that we have.

That’s my concern. Right now, at least, the companies building AI focus on the good. But I know, Sean, the bad guys are coming; it’s just a matter of time. They always hide in the shadows. We had better start preparing for it now. You prepare for the worst and hope for the best.


  • Sean

Pretending I’m someone who wants to infect a school with malware, which I’m not, I just want to make that very clear on the record: why would you target a school? It’s not as if schools are flush with money; they’re not. And the students, I’m sure, don’t have credit. What’s the point of going after a school versus something like a bank or a Fortune 500 company, someone who actually would have the ability to pay up if they decided to go that route, or wouldn’t even notice that you were skimming money from a thousand different accounts if you hit a bank? So why even go after a school? What’s the point?


  • Emmanuel

All right. So this is a very, very good question, and I love that you are bringing this up. If you go to the Microsoft security dashboard, which is updated regularly, practically live, the education sector is the biggest, the number one target of cyber attacks. Can you believe it? Way ahead of the health sector, the business sector, all of the other sectors. Education is the number one target: schools, districts, and so on. Why is this? Your guess is as good as mine, but we’ve tried to figure out why these hackers are coming after schools, and one of the things that came up was that we are a soft target.

Schools do not have big cybersecurity budgets to protect themselves. A lot of schools do not have dedicated cybersecurity staff on their payroll. We don’t have those skills in house the way a business would; a bank has the money to hire very skilled cybersecurity officers who are totally in charge of defending it from attacks. Schools don’t have that. We are institutions focused on learning, not on fighting crime and cyber attacks. We don’t have those skills, and we don’t have the money. That is why we are a soft target. We are sitting ducks; we are the lowest-hanging fruit. So these attackers, of course, target us.


  • Sean

And so what are they going after then? Obviously, not to make a pun, but it’s not like they’re stealing kids’ lunch money. What are they trying to get?


  • Emmanuel

All right, so data is gold. That’s how they describe it: data is gold. What they are going after is data. They want our student data, and when they get that student data, they force the school to pay them, or else they will release the student data online. And what parent, Sean, if you have kids, would want to see their kids’ records out there in public? Their names, their scores, their student records? No parent wants to see their nine-year-old’s or ten-year-old’s records out in the open for everybody to see. These hackers know that we cherish this data, this student data. So they come after it, and they get it. And depending on the type of school you are, they know your budget.

So they know what you can pay, and they make you pay what they feel you can pay. They’re not going to charge you a hundred million if you’re a small school, because they know you can’t afford it; they tell you what you need to pay. They can also target our business office, the payroll office, and things like that. If the staff members who work in the business offices for schools are not trained in cybersecurity and what to look for, they are also a soft target. The attackers get access and then take payroll money from teachers and school administrators. So these are the two major reasons why they target schools. Again, I say we are the lowest-hanging fruit.


  • Sean

And it’s sad. Obviously, yes, schools in that respect would be the lowest-hanging fruit. But what’s kind of funny, well, not funny, is that the skills these people are using they probably learned from being in school, and now they’re going back. It’s come full circle: you’re teaching them how to be criminals.


  • Emmanuel

It’s so unfortunate, like you just said. We teach these guys, we teach them, and we hope, right? We hope that the knowledge they acquire from school they will use for good, and the majority, 95, 96, even 97, 98 percent of folks, do use it for good. But there is a handful who decide to put the knowledge they learned in class to use for bad, for evil. And that is unfortunate, but it is true; we have to acknowledge it. Still, we will go on teaching our students the best we can, because we know that the majority of them will turn out to use this knowledge and these skills to improve and enhance the world we live in. So we’ll keep doing it.


  • Sean

Yeah. I mean, teaching will never end, learning will never end, and the growth of technology will not stop. Maybe at some point it’ll slow down, I don’t know, but I think we’re in a technology revolution period right now, and it’s getting crazy out there. It’s so easily accessible to the general public that people, even if they’re not techies designing those programs, are going to figure out ways to use them to create other things too, which, to your point, hopefully are all good.

But we all know, just from being alive, that everyone is not good all the time, and that’s just an unfortunate truth. But speaking of the evolution of tech, I know the last time we talked we got into some of the movies that are out there and how AI is portrayed in them. We talked about I, Robot and M3GAN, the newer movie that just came out. So do you see us moving toward that? And if so, what is that going to mean? Because then it’s not just AI, it’s a physical thing.


  • Emmanuel

Yes. Let me go back a little bit, and this might reveal my age, but remember when cell phones came out, and we started using those big old phones, and it was the best thing at the time? And now look: phones have become mini-computers. That’s what cell phones have become.


  • Sean

A phone in some cases.


  • Emmanuel

Exactly. So in the same vein, I’m sure that if you give it a few years, AI is going to blow up. And the thing that we have to be mindful of is how we can bring ethics into the development of AI, in the medical field, in education, and in all of these fields. We need ethics. We don’t need generative AI coming along and bringing all kinds of ethical problems into the world. That’s a concern, and that’s a conversation we should be having. But is it going to blow up? Yes, it will, I can tell you that.

What we should be mindful of, what we shouldn’t forget, is that there should be ethics behind this, weighing the good and the bad. We shouldn’t forget it, because if we do, then we might end up producing more evil than good.


  • Sean

Yeah, absolutely. And you bring up phones, which is funny, because that is definitely a technology that has changed even for me. I started on a flip phone where you had to pay to send a text message, and now it’s an entire computer in your pocket. Essentially, anyone who works in any field can have their entire office in their phone at all times. But thinking back to when we were talking about cybersecurity: people get spam texts and emails all the time, right? It just happens, and everyone says don’t open them, so you just click delete, whatever. But let’s say a student, a middle schooler, doesn’t really know any better than to not open up a message they just got.

So let’s say they open up the message, click the link, it downloads something onto their phone, and they walk into the school the next day and connect to the network. Now you’ve got this bridge between their phone and the network. How does the school combat that? I understand firewalls and things within the Wi-Fi network, but can it just kind of go right around them?


  • Emmanuel

Exactly. So this is a big, big concern for schools, right? Granted, like you just mentioned, we have firewall rules, and not just firewalls. We also segment our networks, so that the network where we allow staff and students to log in with their own personal devices is segregated; it can’t see the other things on the network, like student devices and school-owned devices. We make sure that we separate the two and put a very strong firewall between them. But as you said, you never know how technology could develop. Who knows whether AI could become so sophisticated that it could dodge, or in some cases break down, these firewalls?

It’s not there today, but it could be there tomorrow, and that is a concern. That is a danger we live with. Are we then going to lock everybody out and prevent people from logging into our network with their personal devices? If you do that, you also limit learning. So it’s always a problem of access: how much access do you give to your users? The more access you give to your network and Wi-Fi, the riskier it becomes. But we have to balance both needs, because students need access to Wi-Fi, they need access to the Internet in order to learn. That access comes with risk, though: the risk of breaches, and safety risks, period. So we have to strike a balance there.
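To make the segmentation idea concrete, here is a small, illustrative sketch of the default-deny logic a firewall enforces between network segments. The segment names and the policy table are assumptions for illustration, not the district’s actual configuration.

```python
# Illustrative model of network segmentation: traffic between segments is
# denied unless the policy explicitly allows it (default deny).
ALLOWED_FLOWS = {
    ("guest_byod", "internet"),          # personal devices may reach the Internet
    ("student_devices", "internet"),
    ("staff_admin", "internet"),
    ("staff_admin", "student_devices"),  # e.g. classroom management tools
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Return True only if the firewall policy explicitly permits this flow."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

# A phone compromised by a phishing link sits on the guest/BYOD segment, so it
# can still browse the Internet but cannot reach payroll or student systems.
print(is_allowed("guest_byod", "internet"))     # True
print(is_allowed("guest_byod", "staff_admin"))  # False: blocked by default
```

The design choice being modeled is the trade-off Emmanuel describes: every flow added to the allowed list grants more access and more risk, so each addition has to be justified by a learning need.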


  • Sean

So where do you see it going now? Unless you can predict the future, but you work with this a lot more than I do, or than some of the people listening. Where do you see it taking us, especially in education? Do you see teachers assigning projects that need to be completed with AI? Are we going to allow it to write papers and things for students? Is that going to be okay? Where do you see this going, or are we going to end up in a Terminator situation where Skynet shuts everything down?


  • Emmanuel

I can tell you right now that AI will disrupt education. It will basically change how we teach and learn; the teacher’s role will change significantly. Remember when Google came out? In those days, when we were preparing teachers, the advice was: don’t give assignments where students can just Google the answer. If you give an assignment where the answer can simply be Googled, that’s not a good assignment. Now we have to give assignments, if we even give assignments, where you cannot get the answer just by asking the AI.

So basically, it will change the role of teachers. Information is out there; any student can find out anything, right? So education, teaching, and learning will change so much that the way we do teaching today will shift into something completely different, for sure. The emphasis will be on demonstrating your knowledge rather than just writing long texts. You will have to show, to demonstrate, your knowledge and your skills. So demonstrating skills will be an essential part of teaching and learning moving forward. That’s what I think.


  • Sean

We’ve been working with AI for a while now without really classifying it as AI. Something like your phone, where you can ask it, and I don’t want to say the name because my phone’s going to go off, or you can ask Siri or a Google Home a question, and it remembers, or it plays something for you, or makes a shopping list. Is that considered AI, or is that more of a call-and-response type of program?


  • Emmanuel

That is a type of AI. Like I said, there’s nothing new about AI; AI has been around for years. The only thing new is generative AI, the kind that ChatGPT represents; that is something you could say is new, along with how much more advanced it is becoming compared to what it was. Voice assistants, looking things up on your phone, following GPS to your destination: how many years have those been around? Quite a long time. So AI has been with us; it is just now more advanced, and generative AI is what has taken the world by storm recently. But AI has been around for a long time.


  • Sean

I mean, even just listening to you: there’s text generating, there’s video and image generating, and then there’s audio generating. A few radio shows have gone in there and typed in, you know, report the news in this city on this date, and it sounds like someone is talking on the radio. It’s wild, but it’s actually kind of really cool. And I’m conflicted about how to feel about it, if I’m being honest, because it’s cool, but it’s scary.


  • Emmanuel

And I’ll add to that: one of my friends just posted a picture on Facebook. She used AI to generate an image of herself, and this picture looked exactly like her, but prettier, I mean, more pretty. It was generated, it wasn’t a real picture, but you wouldn’t believe it; it looks so real. So this is the problem, these are the challenges we’re getting into, just like you said: AI reporting the news as if it were a real person. You can also use AI to produce video that looks like real video. So at some point, are we going to be confused between AI-generated things and real-life things? These are issues that we should be having this conversation about, and I thank you, Sean, for using your platform to bring out these issues for everybody to hear.


  • Sean

Yeah, absolutely. And I was looking forward to this conversation a lot, because it’s something where I pretty much grew up watching Terminator, I, Robot, these crazy futuristic shows, and now the future’s here. Maybe not to the same extent, but the technology foundation for that is essentially being set now, and they’re just going to continue building on top of it. And that is, again, scary but also very exciting. I said I’m conflicted, but I was looking forward to this conversation because you are very knowledgeable about how those applications can be used for good and bad. Obviously, we look a lot at the technology and education side of things, but it reaches a lot more than just education.

And I think it can have a lot of implications down the road for every single person, business, company, whatever, implications that could be bad if it’s used the wrong way.


  • Emmanuel

Exactly. You said it right: the future is here. Remember, like you were saying, all those sci-fi movies that we watched years ago, when it seemed like those things would never happen? Guess what, those things are now possible. So tell me, in the next ten years, in the next twenty years, can you imagine what the world will look like with the continued advancement of AI? The future is here.

So what I would say is we will keep having this discussion, bringing it up, looking at the ethics that should be applied when we are developing AI, making sure that we keep an eye on it so that we can use it for good, because there are lots of good things that could happen with AI. But unfortunately, there are also lots of bad things that could happen with it.


  • Sean

Absolutely. Absolutely. Well, Emmanuel, I want to give you some time to plug any social media, any last comments or concerns about what we just talked about, any articles you may have written and posted, or whatever you’d like to do. The floor is yours. Go for it.


  • Emmanuel

Thank you. Well, I am active on social media. I’m active on Twitter, and I interact with other educational technology leaders there. We post a lot of articles and have a lot of discussions on AI and cybersecurity, so that’s where I pretty much put all of that information. You can find me on Twitter under my name, Emmanuel Ajanma; that’s my handle. A lot of the content is online.


  • Sean

Perfect. Well, get on Twitter and check him out. Emmanuel, thank you so much for joining us today on our K-12 Tech Podcast. It’s been an absolute pleasure talking to you about AI. Who knows, next time I talk to you it may not even be you, it may be an AI, and I won’t even know it. But to all of our listeners out there, please like and subscribe to the podcast.

If you’d like to be a guest on our podcast, or if you have a topic that you would like us to discuss, please visit https://www.k12techrepairs.com/, that’s K12 with the number, not spelled out, slash podcast, and reach out to us. We’d be more than happy to have you on or to cover a topic that you’re interested in. But that’s all for today.


  • Sean

Emmanuel, thank you again so much. I do appreciate it and I hope we get to talk to you again.

