Episode 144
Episode Transcript
Speaker 1 00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:11
Welcome to CyberSound. I’m your host, Jason Pufahl. Joined today, Steve Maresca, John Madura.
John Madura 00:17
Hi there.
Jason Pufahl 00:18
And we’re going to spend a bunch of time, I think, we kind of framed this a little bit as an AI governance discussion. Maybe it’s an AI tolerance discussion, as we look to sort of bring in, sort of your background and how you’re seeing that in the public schools. But John, maybe spend a second, just if you could, just on your background, just to refresh people’s memory.
John Madura 00:38
Absolutely. I’m a technology education teacher at the Morgan School in Clinton, Connecticut. I teach a variety of courses, which is great.
Kids get exposed to a lot of different things. We have a wonderful intro to data science class. We have a traditional AP computer science class and we’ve now brought on cybersecurity. I’m bringing on networking next year. So exciting times.
We’re trying to embrace the transitions in technology and keep kids prepared for the future. So it’s an exciting time to be a technology education teacher.
Jason Pufahl 01:13
And that’s the segue.
Steve Maresca 01:14
That’s great to hear.
John Madura 01:15
Yeah.
Steve Maresca 01:16
I have fond memories of foundational stuff back then. Those lessons last, if you can create those introductions.
John Madura 01:21
Yes, absolutely hoping to do that. I mean, kids are responding to it. The courses have never been more popular.
All of our courses are maxed out in that way, which is wonderful. Students are talking about going into cybersecurity as a field, which is wonderful, into technology as a field, computer science. And they’re asking me really good questions about it and AI is a big one.
Jason Pufahl 01:44
Well, I should look at my phone and see what time it is, because we just spent a bunch of time discussing what we were going to talk about, and it was all over the place. Where I want to start, I think, is that level of interest you just outlined.
So kids are interested in the courses and you indicated they’re also, kids and parents are concerned that maybe AI has a negative impact on the possibility of getting jobs in the IT space that they’re kind of interested in. So I’d love for you to spend a minute on your thoughts.
John Madura 02:18
Absolutely, and that’s sort of the natural cycle of, I think, real interest in the field, is that they start asking hard questions. And the hard questions are, a student comes to me and says, hey, I’ve taken a couple of your courses.
I’m interested in cybersecurity or programming or data science, but is AI going to replace my job? My parents want to know. They don’t want to fork out all this money for an education that I’m not actually going to get a job from, and the job market is tough. So that’s a question we’re asking, and students are asking us.
And what we tell students is, the tradition has been that as technology increases, the need for people increases too. That has been true. AI is a different thing, I think, at this point, and we’re all trying to get our heads around it. The capabilities have exceeded many of the things we’ve seen with other technology revolutions, I think, a little bit. But we don’t have all the answers either.
And we’re thinking through some of these questions as well.
Steve Maresca 03:16
Earlier, we were speaking about technology as a leveling agent. An access-granting kind of capability.
I’ve heard carpenters say that power tools made it possible for one person to do the work of a team. That means people who didn’t have the team, the skill set, or the apprenticeship could still become effective at executing that type of work. It doesn’t map exactly to this.
But I do think that there’s an element of that.
John Madura 03:43
I completely agree. You know, and one of the things we try to do is not just teach the tools in isolation. But we do try to make them applicable to a scenario.
And we have a big advanced manufacturing drive in our technology education department as well. And we try to map many of the scenarios they see in cybersecurity or programming to what they might experience in a manufacturing firm, trying to create that reality. And what it shows is that really it becomes less about the tools and a little bit more about how you’re getting products designed or built or manufactured. Or how do you make sure your network is safe? And things like that. Or even how do you put a network together to integrate these machines?
And these are a lot of human decisions, too. Like, how does it work for the people that are doing the work? And so we try to separate the tools from the decisions that are made and the relationships that you have as a person working in technology.
Steve Maresca 04:42
So take something like additive manufacturing, 3D printing. That makes the distance between an idea and the iterations around it a lot shorter, right?
John Madura 04:53
Yes, that’s the idea.
Steve Maresca 04:55
So how does that apply for cybersecurity or data science?
John Madura 04:59
Yeah. So what we’ve tried to do is we’re in the middle of a project where we’re integrating many of our CNC machines, which is exciting. So there’s an opportunity for us to not only just get that network established, but manage that network.
And cybersecurity is a big part of that. And so we imagine students acting as a system administrator, checking logs, things like that. Kind of working as a professional, where you have to explain to me, or somebody else, how it works, right?
And so it becomes less about a tool that could be replaced by AI and more about what your decision was behind using that tool, and how you move things forward in an applied way.
Jason Pufahl 05:40
So you had introduced the idea of a survey course.
John Madura 05:46
Yes.
Jason Pufahl 05:46
And it sounds like there wasn’t really appetite for it today.
John Madura 05:48
Yeah. We definitely see a need. We’re certainly a school district, like many, that is taking a conservative approach to introducing AI formally as a tool for students to, say, write papers or do research or do the technical work that we do.
So students don’t really have the unfettered access that we do outside of school. And so one of the thoughts…
Jason Pufahl 06:15
Unfettered meaning you block it on the firewall, probably, or you prevent it from being accessed.
John Madura 06:19
Exactly. Students can still use their phones when they have a chance to. And they always use a phone.
Any reason to use a phone.
Jason Pufahl 06:26
Yeah, for sure.
John Madura 06:27
So they try that.
But formally, we don’t build that into the curriculum. So one of the things we’re thinking about now is a formal AI survey course, teaching prompt engineering, the models behind AI, and some of the applications, to demystify it. That’s what we do with cybersecurity: we don’t introduce the tools so that students become hackers.
It’s to demystify some of the things that they’ll experience just as a user on the network. But it’s still hard to get that course through. It’s still a work in progress.
And I think we’re all learning it together. And I think there’s some, certainly, fears and reservations about just jumping into it without having more answers.
Jason Pufahl 07:11
So what are the fears?
Because it feels to me like there’s an eventuality that you’ll have a course next year; the year after probably seems like way too long, right? So is there a recognition that we really do have to introduce students to it, because it’s coming?
John Madura 07:30
Yeah, I mean, I think there’s two sides of it. There’s no denying students have to learn this. Companies are making decisions, people in life are making decisions, all around AI.
So to deny them that exposure is really not good in the long run. But there’s also a fear that the teaching maybe hasn’t caught up with the AI, right?
And so one of the big fears is, are we producing tasks or assignments that are AI-durable? Could a student short-circuit the whole process of learning just by using AI? And I think we haven’t looked at all of the ways in which we teach to protect against that. It certainly hasn’t been thought through to that full level yet.
Steve Maresca 08:20
I mean, a comment I made earlier in prep with you was that there’s utility in restraint, because these are new, transformative technologies that need to be taught appropriately.
It’s not just open the floodgates and have at it. That has a potentially negative effect.
John Madura 08:37
Yes.
Steve Maresca 08:38
But in a constructive way, how does teaching and learning get to integrating AI and other transformative technologies in a curriculum without it being erosive?
John Madura 08:49
Hmm. Yeah. I mean, I could tell you the probably the biggest barrier to that now is teacher comfort with AI.
I mean, I think it’s been sold to the profession, let’s just say at this point, as a productivity tool. And I think there’s some truth to that, but I don’t think the productivity piece has panned out like it was promised.
I think it’s a better learning tool, those kinds of things. We did talk about this a little bit as using it as a kind of critical friend or a personalized learning agent. You know, and I think that’s something we haven’t had a lot of practice at, you know, and I think that there’s a skill to that too, but it’s an emerging thing.
So I don’t know if there are answers to that.
Jason Pufahl 09:37
I do love the term critical friend.
Steve Maresca 09:39
Yeah. It’s a sounding board. Hey, I’ve done this, is this consistent with the assignment?
But that type of thing isn’t replacing the exercise. It’s supporting it, potentially.
John Madura 09:52
It’s just hard to monitor sometimes, and intent is big, right? It’s hard to know the intent of a student who’s engaging with AI to complete an assignment. Are they just doing it to get it done?
Sure, that’s a big motivation. Obviously, kids are pulled in millions of directions and want to get stuff done.
Jason Pufahl 10:10
Yeah. And that’s no different today than a hundred years ago, right?
Steve Maresca 10:12
Right. Sure, 10 years ago we were talking about plagiarism of Wikipedia or something like that. Just shifted slightly.
Jason Pufahl 10:19
Shifted slightly.
John Madura 10:20
Slightly.
Steve Maresca 10:22
Substantially.
Jason Pufahl 10:23
So it’s easier and quicker.
Steve Maresca 10:24
All the same, it’s digital literacy from 20 years ago, just with different terms.
John Madura 10:29
Yes. Yes.
Steve Maresca 10:30
So then how can teachers equip students to use these things constructively without it being a problem for the lesson planning?
John Madura 10:40
Yeah, that’s a great question. I mean, to me, and this is a trend in education in general anyway, it’s a move toward this kind of group learning, where we kind of learn together.
And maybe teachers model the ways in which they use it, and model that thinking for students in an AI context. I think that would probably be the most instructive. But people still have trouble getting away from the model of the teacher as the person who knows everything, the source of knowledge. And saying AI knows a lot of stuff and I’m going to learn with you can be a really uncomfortable place to be as a teacher.
Jason Pufahl 11:24
And then I’m sure not all your teachers are as comfortable with AI.
John Madura 11:28
Exactly.
Jason Pufahl 11:29
Right?
John Madura 11:30
No interest or comfort.
Jason Pufahl 11:31
Yeah, or no interest. There’s folks, I’m sure, toward the tail end of their career who just sort of feel like, I want to wrap up before I have to deal with all of that.
I’m sure there’s some of that perspective.
John Madura 11:41
A hundred percent. There’s even teachers who will look at the research, some of these studies where they’ve seen how people’s writing is affected by using AI, and when that tool is taken away, their writing sort of decays as a skill.
And so there’s a group of people, and I don’t disagree with them, who feel the wrong use of AI is actually stunting your growth. So until we have some of those answers, I think there’s always going to be a group that’s a little reluctant to jump in.
Steve Maresca 12:12
In some ways though, this is the same sort of lesson that we’ve tried to teach students or emerging professionals in general. Critical thought, understanding of the problem domain, they are still central to everything.
John Madura 12:24
Yes.
Steve Maresca 12:24
And recognizing reflectively that there isn’t that expertise is sometimes the mechanism that allows you to acquire it.
John Madura 12:32
Right.
Steve Maresca 12:33
How in that kind of light can you see this being used constructively?
For example, with brainstorming. Creative thinking around a project that isn’t well-defined in a deliberate way.
John Madura 12:46
Right. Yeah. I think that’s a great question. It’s hard to…
I think it comes down to planning the scope of that task, right? Where is it going to go? And having a sense of the boundaries of what working with AI would be as opposed to working with a bunch of sources that you’ve already curated for a student.
Certainly a bit more open. And I think some of these… NotebookLM, I think, is a fascinating tool, where it collects a certain body of knowledge that you’re forced to work with.
And I think it narrows the universe of things a little bit. I think that’s an interesting step, and I see a lot of pedagogical approaches for it.
But again, I think it comes down to a system and a procedure and a set of customs we establish around the use of it. Like what are the best practices? How do you use AI to think more critically?
How do you use it as a reflection tool? Those questions are much more open. I don’t think there’s a lot of good answers on that from the average person teaching, right?
I don’t have those answers and I deal with it all the time. But it’s something I do think about, for sure.
Steve Maresca 14:06
Would teaching by counterexample be helpful?
Here’s a particular task that AI is really ill-equipped to assist with. Here’s how we can steer it toward better outcomes, as an individual who still has to resolve that problem.
John Madura 14:21
Absolutely. And I think the challenge will be finding those examples, finding great examples, tools, and approaches that treat AI as the tool you want to use in the future to solve really big problems. But at the same time, we have such a tradition in teaching that doesn’t revolve around AI.
And people are very comfortable with that. There are Socratic methods. How those things adapt to a technology like AI is very hard to see.
So I think teachers need a lot of practice just like anybody else with it. And I think we’re still at the early stages of it.
Steve Maresca 15:04
So then what’s the page turn? Governance is something that’s missing in a lot of educational institutions. We work with colleges that maybe had a head start in 2020 but are now realizing that they’re not keeping pace with change.
Jason Pufahl 11:31
Or they’re not comfortable with the tools that they bring out.
John Madura 15:18
Right, absolutely.
Jason Pufahl 15:20
Ethically or…
Steve Maresca 15:21
100%. I mean, sometimes there’s a licensing barrier or a cost barrier that prevents them from being as useful as advertised.
John Madura 15:29
Absolutely.
Steve Maresca 15:30
What’s that look like for you? What are the governance needs at the moment that are absent?
John Madura 15:37
Yeah, that’s a tough one. I mean, right now, the governance is basically: we don’t really use AI. And I think part of that is an equity thing too.
Not every kid at home has premium OpenAI access or anything. But there’s also, again, a little bit of respect for the fact that if you unleash this on a group of teachers, faculty, and administrators who aren’t really prepared for the questions that will result, you’ll have problems. I mean, plagiarism is one we’ve caught.
Jason Pufahl 16:11
Of course, and that was immediate.
John Madura 16:13
And it’s become more education around plagiarism now than punitive. It used to be a little more cut and dried.
And now it’s like, hey, you shouldn’t use AI to do this. And it’s like, well, but I’ve heard people say you should use it to think about ideas. And it’s like, well, you didn’t think about ideas.
So there’s a lot of explaining to do. And I’ve seen people talk about keeping drafts of what you do as a way to say, hey, these were my interactions with AI. Can you see that I was actually thinking critically with it, or I was…
Steve Maresca 16:46
To prove your work, in exactly the same way that we think of showing your work in another context.
John Madura 16:49
That’s a different kind of evidence.
Steve Maresca 16:50
It is.
John Madura 16:51
That’s not evidence we’ve asked students to do before and…
Steve Maresca 16:54
But it is aligned with citations in a traditional style.
It’s really directly related to that.
Jason Pufahl 16:59
And honestly, even when using it at work, I always try to provide, here are the prompts that I used to get to the end because it actually helps contextualize how did you get there? What were you thinking?
So people don’t have to infer. So I think there’s value in that. It oftentimes produces kind of a long and peculiar output.
But if you’re going to share something with somebody and say, this is what I generated, having that information around what the questions were, what you asked it to do, what your follow-ups were, is really valuable.
John Madura 17:33
I think so. And I think, like, a student would learn from that. I think in the moment, you’re probably, like, you know, in the rhythm or vibing with it.
You know, you’re kind of asking questions, learning things. And then I think if you, as a student or a teacher, reviewed that whole log, you might be discovering things about yourself and your thinking process that you probably didn’t really realize.
Jason Pufahl 17:53
And you’re learning together, right? The teacher can look at them and say, actually, I didn’t even think about asking those either. There’s no question.
I see my kids engage with AI differently than I do.
John Madura 18:03
Yes.
Jason Pufahl 18:04
And it makes me step back and realize, you know, my perspective is kind of narrow at times.
John Madura 18:09
Right.
Steve Maresca 18:09
I think there is utility in reflective review of interactions like that. For example, if there’s a rubric for evaluating a student’s performance on an assignment, that can be fed in as a basis to evaluate the interaction or the work.
What’s missing, what’s present, where is it strongest, where is it similar? That’s something that might steer next steps as much as anything else.
John Madura 18:40
I think that’s true.
Steve Maresca 18:40
But one thing I want to return to is that this is a teaching and learning discussion, but I think from a bias perspective, before we get to student support, there absolutely needs to be educator support. That’s what I’m hearing very strongly.
John Madura 18:55
Yes, 100%.
Steve Maresca 18:56
I hear that in colleges and universities regularly. This is a universal truth.
John Madura 19:00
I hope it is, yeah.
Steve Maresca 19:01
How can we support educators?
John Madura 19:04
Yeah, I think districts have been very good, and I’ll speak to mine, about supplying tools for teachers. Teachers do have access to AI as a teaching tool, and we’ve been encouraged through professional development opportunities to redesign lessons with this in mind, mostly around preserving student thinking.
Not so much about students using the tools, but if they do, if you created an assignment, can we preserve their thinking and their growth as students? But not so much, hey, here’s how you use it, here’s what you’re going to be asked to do with AI.
That part is still missing for us. And we may always be a little bit behind on the teaching part of this, but I will say that districts like ours, and I don’t think ours is much different in that way, are encouraging teachers to use it. I hope we’re still encouraging people to do it in a way that’s about making more interesting tasks, and not about shortcuts and productivity.
Because that’s a lot of times how it gets sold, and I don’t think that’s super effective.
Jason Pufahl 20:21
I think you bring up the survey course.
John Madura 20:23
Yes.
Jason Pufahl 20:24
It would be great for your faculty to take that same course.
Because I think part of this is what you said, really two things: demystify AI a little bit, and teach people that there’s more to AI than just an LLM and a ChatGPT interface.
John Madura 20:41
Yes.
Jason Pufahl 20:41
Whereas you don’t necessarily need to have your faculty go through a networking course, because the reality is they’re not going to do that.
But AI, I’ve heard it described as like the democratization of IT. And in some way it is that, right? It makes somebody who doesn’t have a technical background able to create a small web app or to do some things that might be useful for them day to day.
They don’t have to go to somebody else to do it, but they don’t really know the tools exist. They don’t really know how to use them. And I think that would probably aid in increasing the comfort level overall.
John Madura 21:14
Yes. A hundred percent.
Steve Maresca 21:15
I have found it extremely useful to represent it as a translation tool.
And before we were talking about…
Jason Pufahl 21:20
Translation.
Steve Maresca 21:23
There are multiple flavors of that word being contemplated at the same time. You were referring earlier to it being used for language acquisition.
Jason Pufahl 21:30
Oh, yeah, yeah.
Steve Maresca 21:31
But it is a translation tool for people outside of their area of expertise.
John Madura 21:35
Yes.
Steve Maresca 21:35
And you can say, I am an X. I know these things. I don’t know these things.
Please explain this foreign subject to me in language that helps me bridge the gap. I think that’s valuable all by itself, because teachers are translators.
John Madura 21:53
Yes, they are.
Steve Maresca 21:54
It’s intrinsically supporting that, but reframing it in that mode so that it doesn’t seem disjoint from the activity.
John Madura 22:02
I agree.
Steve Maresca 22:03
I think there’s room though for adversarial devil’s advocate functions against the use of these tools.
Meaning if there are folks who do not feel comfortable, they still have a place. There’s value in retaining the human.
John Madura 22:17
Yes.
Steve Maresca 22:19
Describing that as a teaching device to those who are then educating is I think useful too.
John Madura 22:26
Yes.
Steve Maresca 22:28
Being able to interrogate systems like this to understand their limits when they hallucinate, where they fall short is something that most people do not know or understand.
John Madura 22:40
Right.
Steve Maresca 22:42
And because they are not domain experts in the subjects, we have to equip them with some of those skills. That’s probably the area that I think is best for at least the teaching faculty.
John Madura 22:52
I agree. I agree.
Steve Maresca 22:53
I think students need it too.
John Madura 22:55
100%. I mean, I don’t think a student can learn it unless we’re modeling it first. That’s really the big thing.
As you were talking about that, I was thinking, yeah, we have to do that, right? We have to practice doing that. And then I think we’ll feel more comfortable doing it with students, because it’s going to be in real time a lot of the time.
And for a lot of people, that gives up a lot of control. You know? So I think that’s another piece of this: you’ve essentially said, hey, I acknowledge AI as being a knowledge source outside myself.
And then how do I integrate that as we’re all thinking at the same time? That takes quite some orchestration, I think, and some comfort.
Steve Maresca 23:41
So then, you know, getting close to the future and charting a path toward whatever that may be, what are the biggest gaps for a student right now in terms of comfort? What are the biggest gaps for faculty? What do they, in a concrete way, if you have those ideas, what do they need to do?
John Madura 24:00
I think there’s a few things. I’ve thought a little bit about this, and we’ve talked about a few of these. I think getting this to be part of an educator’s practice all the time, as a tool that we use just like any other, is probably the first step.
I think there’s a surprising amount of student reluctance around AI, too, that we’re still working on. Certainly for kids who were in high school when OpenAI made ChatGPT available, I think there was a real concern by some students about the perception of plagiarism, and being scared to use it because they would get themselves into trouble or into uncharted territory.
But then there’s the other side of that: student reckless abandon.
Jason Pufahl 24:49
Yeah, and that’s a maturity question.
John Madura 24:53
Yeah. And so, getting back to what you’re saying about what we need to do: I think there’s got to be a commitment to say, this isn’t going anywhere. How do we systematically redesign how we think about teaching and learning?
So that, again, it’s a more collaborative thing between students and teachers, and collaborative among teachers now. Our district is having a tech expo this month, at which they’re bringing some students in to talk about their experiences using AI. And hopefully some of those conversations reveal some of their fears and some of their real excitement around using it.
So really the gamut of their motivations around it. I think understanding what they’re thinking, too. All of us kind of assume everyone else knows what’s going on with AI.
In my school, they’re like, oh, I don’t know, they must know way more than I do. But the truth is, we all have little pockets of things we use it for.
But practicing using it as, like I said before, a critical friend, somebody who thinks with you, is kind of where we need to go at this point.
Jason Pufahl 26:10
You know, I found myself thinking about that. I like the critical friend term.
John Madura 26:17
You can steal that.
Steve Maresca 26:18
Well, I use a variety of tools, but you know, OpenAI has this sort of persistent memory.
John Madura 26:26
Yes.
Jason Pufahl 26:27
And so it’s really valuable because it has context around, you know, what I do for work, what I do for fun, questions I’ve asked. So it can integrate that.
Right. But I also find myself thinking I want to kind of move away from OpenAI into something else, and I basically have to make a new friend. Right? And I hadn’t really thought about it that way.
John Madura 26:49
No, it’s true.
Jason Pufahl 26:49
But there’s value in it knowing me somewhat through the memory, and the idea of moving is a little more complicated because of that. Because essentially it has to relearn me.
John Madura 26:59
Yes.
Jason Pufahl 27:00
And I don’t know that I’m ready to make a new friend.
John Madura 27:02
I’ve felt the same thing.
Jason Pufahl 27:03
And I hadn’t thought about it in quite that concrete a term. I was trying to figure out why I was resistant to leaving, and it’s because of that.
You can’t transition that memory.
Steve Maresca 27:14
There’s inertia there. I was thinking about a similar subject recently, in terms of companies building software through AI-augmented workflows. If they have a really disruptive event and need to rebuild something they’ve lost, well, what does that mean?
That’s, you know, six weeks of compute on thousands of GPUs in aggregate. That’s terrifying.
John Madura 27:40
Yeah, that is kind of terrifying.
Steve Maresca 27:40
You’re married to what led you to that point. Whether you’re an individual or an organization, there are risks here.
Jason Pufahl 27:48
And reproducibility is really difficult. You can ask the same question of the same engine on two different days and get very different answers.
John Madura 27:57
Absolutely.
Even if it isn’t shareable.
Jason Pufahl 27:59
Yeah, right.
John Madura 28:00
Yeah. And also, I find it’s oftentimes very enthusiastic about whatever idea I’ve come to.
Jason Pufahl 28:10
Oh yeah, yeah.
It’s very supportive.
John Madura 28:13
But then five minutes later I’m like, this is terrible. So I go to my critical friend, you know, tear it apart. And it turns out he’s not that critical, and he really likes me and whatever I came up with.
Steve Maresca 28:24
As long as a critical friend isn’t so sycophantic.
Jason Pufahl 28:28
And they are kind of sycophantic.
Yes, for sure.
So eventually it’ll back off of that. But it is interesting. I do love that term, because you find yourself saying please to it.
You are oftentimes engaging in a very human way. And I find myself thinking, why? But then I don’t want to be rude.
It’s a crazy feeling.
John Madura 28:52
It is a crazy feeling. It is a crazy feeling.
Jason Pufahl 28:54
Here we are.
John Madura 28:55
And here we are. Right.
Jason Pufahl 28:56
So we’ll chat again. I mean, I think we said we want to do this more regularly. It’s been a while.
John Madura 29:01
Absolutely.
Jason Pufahl 29:04
Definitely before the start of next year.
John Madura 29:06
Yes.
Jason Pufahl 29:04
I want to talk a little bit more about this. I feel like we do a lot of AI discussions, but honestly, there’s just so much uncertainty, so much confusion.
And you’re right in the thick of it.
John Madura 29:19
But I think you’re right. I mean, it’s, it’s everywhere, but at the same time, it’s sometimes quite fragmented. And I think something like this brings a lot of those pieces together.
What things are the same that people are going through? What can you add to the discussion that maybe other fields aren’t thinking about too?
Steve Maresca 29:34
In our work, we try to bring the structure, the data safeguards, the governance attached to it, but that can’t exist in isolation, without a funnel to the actual use from the users. So it’s intrinsically necessary to have this other side of the conversation.
John Madura 29:49
That’s good.
Jason Pufahl 29:50
Well, as always, we’re happy to chat on any of these topics. And this wasn’t quite as meandering as I was thinking it might be; it was reasonably focused. But it’s a complicated topic, and if you’re an educator and you want to weigh in with your opinions on how to do this, we’re all ears. It would probably help John as he goes through it.
If you think some of the stuff we said is ridiculous, let us know that too; it’s quite possible. But as always, thanks for listening, and of course we hope you got value out of this.
Thanks, guys.
Thanks, guys.
John Madura 30:17
Thank you.
Steve Maresca 30:21
Sure.
Speaker 1 30:22
We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn, and remember, stay vigilant, stay resilient. This has been CyberSound.