Vancord CyberSound Podcast
Episode 80

How is AI Transforming Cybersecurity?

Artificial Intelligence (AI) advancements are shaping the digital world at a revolutionary pace. From ChatGPT and image generators to solving ransomware attacks and providing curated learning, AI has demonstrated many positive uses. However, some are pushing to slow this process down, citing concerns about the potential threats artificial intelligence may pose.

In this episode, the CyberSound team addresses incoming AI tools, reviews how the population will likely navigate them, and has an open dialogue on their feelings about this actively developing software.


Episode Transcript

00:01
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity, with your hosts, Jason Pufahl, Steven Maresca, and Matt Fusaro.

Jason Pufahl 00:14
Welcome to CyberSound. I'm your host, Jason Pufahl, joined as always by Steve Maresca and Matt Fusaro.

Matt Fusaro 00:19
Hey everyone.

Steven Maresca 00:20
Hi.

Jason Pufahl 00:20
So I feel like this is a complicated topic because it's top of mind for a lot of people. We put our heading down: AI, friend or foe? Maybe that's a little sensationalized, but there are definitely groups, and we could talk about this specifically, groups of people who are advocating for slowing down or halting development around AI. I think people are concerned about it advancing too quickly and posing a threat, right, the foe side of things. But I think we should also cover some of the really positive applications we're already seeing, ChatGPT, image recognition, some of the things Microsoft is doing, to talk about where it can be a friend. So it's developing, and it's developing quickly. We talked a little bit early on in our internal conversations about whether it's similar to blockchain. There was tons of hype around blockchain, and I don't want to say there have been no practical applications, that of course wouldn't be fair, but I don't think there have been applications that hit the mainstream the way some of the AI tools have.

Jason Pufahl 00:27
Yeah, and I think that's been the limiter for blockchain in a lot of ways: how does the normal person take advantage of it? I think the answer largely is that they can't. With AI, though, how many people logged into ChatGPT within three days and got utility out of it? Whether they got true value or it was just something unique they played with, they could access it, they could do something, and they could produce something. It was fascinating how quickly this emerged, and the hype around it.

Matt Fusaro 02:19
Yeah, I think it was the most heavily adopted application ever, on all platforms.

Jason Pufahl 02:25
Overnight, yeah, unbelievable. I mean, that was, what, GPT-4? I believe that's still the most current version, right. Huge investment by Microsoft; they certainly see where this can go. But there are people saying that we're developing it too fast, it's getting too intelligent too quickly, and it poses a threat.

Matt Fusaro 02:26
Yeah. If you're not aware, I believe it was somewhere around 1,000 different tech leaders, whatever that means, probably people with the most money in their pockets, but they put a petition together to slow down AI, whatever that means. I don't know how you enforce that, but they're looking to slow it down or halt it for a bit. I think mostly they're citing concerns about responsible development.

Steven Maresca 03:15
Yeah, I think what they were proposing was a brief pause on new training relative to datasets sourced from the internet and, basically, the creation of a charter of sorts to define responsible use of technologies like that.

Jason Pufahl 03:35
And anybody who's doing substantive development in this space will typically, on their informational web pages, talk about their adherence to responsible development practices, which I think largely means constraining the tool in some way so it can't run rampant. I mean, I think there's a real fear there.

Steven Maresca 03:58
The difficulty that's cited is that training models are somewhat desirable precisely when they behave in slightly unexpected, evolutionary, chaotic ways, right? Therefore, you can make that sort of assertion if you're marketing, but you cannot, with a technical underpinning, make that assertion in fact.

Jason Pufahl 04:12
That's the whole purpose. Yeah, you can't push those boundaries if you don't allow it to, I guess, evolve is the term.

Steven Maresca 04:32
It's sort of, well, it doesn't apply quite biologically, but the notion is the same anyway. And is there a risk? Probably, potentially. By the same token, a lot of the training models might be evolutionary dead ends, to extend the analogy; they may not function. That's just as likely as something useful being generated, and the probability of something harmful coming out is not really possible to ascertain.

Jason Pufahl 05:02
Right. So you spoke about the tech leaders who have asked to slow down development, and I think the way you framed it, probably your position a little bit, is that it's financially motivated; they want an opportunity to catch up, I suspect?

Matt Fusaro 05:02
Yeah, I mean, I'm sure there's a lot of that. Don't get me wrong, I think it's probably not a bad idea to slow it down and do some checks and balances on it, because, like we were just talking about, there's some danger to this. But yeah, like all things, there's probably a money aspect to it.

Jason Pufahl 05:42
Right.

Steven Maresca 05:43
I think there are other motivations articulated for slowing things down too, because there are vast, well-established workforces that are fearful of being displaced. We're talking about people's livelihoods, the perceived value of their contribution to society; essentially, the fear that well-established norms of interaction and our purpose will be disrupted by something that takes away what we see as fulfillment, or what we as people use as part of our identities. I think that's partly behind Italy's blocking of ChatGPT; I think it has to do with privacy as well. But ultimately, that's also in the background of these conversations: is it a tool that is useful, or will it fundamentally change the nature of human society in a way that's not a net positive? I think that's the thought process behind it. That's not something we can answer.

Matt Fusaro 06:54
Says no, so.

Jason Pufahl 06:57
But you're saying the blocking and the privacy implications are tied to whether it will improve society in a net-positive way, is that what you're saying?

Steven Maresca 07:06
In part, but I think that there are fears of disruption in the actual, you know, intellectual worker class.

Jason Pufahl 07:14
I mean, there absolutely will be.

Steven Maresca 07:16
Fears of displacement and, you know, fears of being obviated, I think, are the main crux.

Jason Pufahl 07:22
Yeah, honestly, I think you could see that in a couple of days of using ChatGPT just to write articles and collect data. We've talked about security education and the ability to do your own research and understand whether or not you're being presented with factual data; that's a huge risk with the tool as it stands today. As people gain more and more trust in it, they'll be more and more likely to just trust what it gives them, and I think there's a real risk there. But the fact is, it writes better than a lot of people. It writes better than a lot of the people whose writing I read; I get people drafting documents all the time that are considerably poorer than what ChatGPT can produce. And it's going to change things. Maybe development slows down, but it's certainly not going to halt. This ship has sailed, and there's enthusiasm in this space that we haven't seen before.

Steven Maresca 08:25
I think a likely outcome of technology like this is sort of a balkanization of both the tech and the international sphere as it pertains to national opinions about such technology. There will be some that completely avoid it, because they want to preserve the status quo for various reasons, forgoing the efficiency and all the things that might be associated with the benefits of having it, and there will be some that wholly endorse it and dive right in. So even if there is some sort of compact to cease development, those who aren't signatories are free to behave as they see fit. I don't think it will stop advancement in the way that's desired. What it may realistically produce is a compartmentalized world where you confine the riskiest applications of tools like this to air-gapped networks and things of that variety. That's likely a requirement, maybe in law, maybe in practice.

Matt Fusaro 09:39
Yeah, higher education has a real issue with it right now, for a lot of good reasons.

Jason Pufahl 09:44
And, you know, there are plagiarism detectors, or AI-generated-content detectors. I'm not sure how good they are.

Matt Fusaro 09:51
Supposedly they don't work very well against the newer models, and I can see why.
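
Many of these detectors lean on perplexity, meaning how predictable a passage is to a language model, treating very predictable text as a weak signal of machine generation. The episode doesn't name a specific detector, so here is a minimal sketch of the general idea in Python, assuming the Hugging Face transformers library and the small GPT-2 model:

    # pip install torch transformers
    # Perplexity sketch: lower perplexity means the text is more
    # predictable to the model, which some detectors treat as a weak
    # hint of machine-generated prose.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing labels=input_ids makes the model return the mean
            # cross-entropy loss over the sequence.
            loss = model(enc["input_ids"], labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    print(perplexity("The quick brown fox jumps over the lazy dog."))

As newer models produce text whose statistics look more human, this signal weakens, which is consistent with Matt's point.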

Jason Pufahl 09:56
As they develop, right; we said cat and mouse in one of our earlier podcasts, and I think it's similar here in that regard. So, let's segue a little bit, because what I don't want this to be is entirely a podcast around whether we should cease development and what the risks are. We're seeing some really interesting applications come out of it. Obviously everyone's familiar with ChatGPT; that's the thing that got all this started. There are the image generation tools, a whole variety of those. And one of the things that I know you're interested in following, Matt, is the Copilot tool that Microsoft has released. We're looking at it specifically because it's security related, in keeping with the intent of the podcast. Maybe spend a minute on that, if you would.

Matt Fusaro 10:43
Yeah, it was really interesting to see how Microsoft took the GPT-4 model and hooked it up to their security ecosystem. It was at one of their events where they unveil products, and they showed how, using basically a ChatGPT, it was able to do an entire forensic investigation into a ransomware attack without anyone being involved: it detected it, it built an attack story, it made a PowerPoint presentation for executives to look at, it isolated the host that had the issue, and then it remediated it. Soup to nuts. That's amazing, right? That's something we've been striving for for a long time in security, because having that type of rapid response is crucial, especially in a ransomware attack. But then you get back to: it's going to make mistakes sometimes, right? They even say during their presentations, this is not perfect, but it's a good tool to use right now, it's going to help you out. Which I agree with; having more tools available to get to answers quicker, or to summarize things, or to do slightly mundane tasks like making a PowerPoint presentation so everybody else knows what's going on, taking those things out of daily life is good, I think. But like we've said before, it's going to depend on an operator who is paying attention, knows what they're doing, and knows whether or not the results are what they should be.
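
Microsoft hasn't published how Copilot orchestrates this, but the loop Matt describes, investigate, contain, report, can be sketched generically. A purely hypothetical Python illustration follows; none of these names are real Microsoft or Security Copilot APIs, and the model call is stubbed out:

    # Hypothetical detect -> investigate -> isolate -> report loop.
    # Every name below is an illustrative stand-in.
    from dataclasses import dataclass, field

    @dataclass
    class Incident:
        host: str
        alerts: list
        timeline: list = field(default_factory=list)

    def llm_summarize(items) -> str:
        # Stand-in for a GPT-4-class model turning raw telemetry into
        # an "attack story" or an executive summary.
        return "Summary: " + "; ".join(items)

    def isolate_host(host: str) -> None:
        # Stand-in for an EDR call that cuts the machine off the network.
        print(f"[EDR] isolating {host}")

    def handle(incident: Incident) -> str:
        incident.timeline.append(llm_summarize(incident.alerts))  # investigate
        isolate_host(incident.host)                               # contain
        return llm_summarize(incident.timeline)                   # report for executives

    print(handle(Incident("ws-042", ["ransom note dropped", "mass file encryption"])))

The caveat Matt lands on still applies: an operator has to check each step's output before trusting the result.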

Jason Pufahl 12:16
So, the interesting thing, because Steve, you mentioned before the concern about reducing the need for knowledge workers: in a way, you can make the argument that this actually lets people get into a space that might otherwise be really difficult.

Steven Maresca 12:32
I'd agree. It could make the same demographics that are fearful of being displaced far more effective at applying their knowledge. I think that's a very reasonable argument to make, because it cuts out the inefficiencies of the mundane tasks and gets you to just making decisions with better information; that's the framing of what we're talking about more than anything else.

Jason Pufahl 13:00
I mean, if Copilot can do what you just described, essentially detect and provide you data on how an attack was executed, and we'll set the PowerPoint stuff aside, if you can produce that for a more junior analyst, now they've got something to look at that they can validate rather than try to assemble from scratch, which is hugely valuable. And I think it really does open the space up to people who might not have the experience you typically need to do that kind of work.

Matt Fusaro 13:30
Yeah, it's kind of a continuous learning thing for them, too. A lot of times a tier-one analyst may not know that they're supposed to go look at certain pieces of information, or how to access them at all, really. Having something there to coach you along is pretty valuable.

Jason Pufahl 13:45
Yeah, yeah. And of course, I'm assuming that when it did that investigation, it did it all within the Microsoft ecosystem, right? Probably using the Microsoft tools.

Matt Fusaro 13:53
Yeah, of course. It's a Microsoft product, so they're going to keep it in their ecosystem. But, I mean, there are ways to plug your data into things like that.

Steven Maresca 14:04
I mean, just for folks who don't know, OpenAI has a lot of other capabilities. One example, what I think of as an offshoot when we talk about Copilot, is their Whisper model. It's a speech recognition system trained on the better part of a million hours of multilingual speech. It is astonishingly good for transcription, for interpretation of speech. It's possible to put that model on a Raspberry Pi, for those who don't know, a very lightly powered, potentially battery-powered, single-board computer, something far less capable than your phone, and turn it into something that could be an assistant, like Hollywood's notion of what AI was 25 years ago. It is right over the horizon with low-powered equipment today. That's where we're heading. I mean, there's a lot of opportunity there for curated learning, for guided investigations, an extension of stuff like Copilot, to say,

Jason Pufahl 15:14
Yeah, I think that’s what I’m referring to exactly.

Steven Maresca 15:15
Like being able to verbally say, hey, I don't understand this finding, elaborate. All of that verbally, with no textual prompting. I mean, that's where we're headed, and that's a hugely interesting, encouraging path.
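
Whisper itself is open source, and its smaller checkpoints do run on very modest hardware. A minimal sketch using the openai-whisper Python package; the audio filename is made up:

    # pip install openai-whisper
    import whisper

    # "tiny" and "base" are the smallest checkpoints; they trade some
    # accuracy for running comfortably on low-powered devices.
    model = whisper.load_model("base")

    result = model.transcribe("meeting_audio.wav")  # hypothetical recording
    print(result["text"])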

Jason Pufahl 15:32
You know, the challenge, and now we're segueing a bit, but in my house we had a discussion around statistics, my wife, my son, and I. I won't get into why, it doesn't matter, but frankly, none of us are statistics people; we don't have the background. And we asked ChatGPT: can you develop a statistical model that meets the criteria we gave it? And it gave an incredibly detailed, I mean incredibly detailed, answer that probably was accurate. But the fact is, I have no way of validating it. That'll be a challenge. And jumping off what you just described, if you have no foundational understanding, because candidly, my background in statistics is pretty light and this was pretty complicated, I think what it told me was great, but I then have to find somebody to say, yes, what it gave you is actually accurate. I don't know how you get to the point where you can take the output and truly trust it. It'll be interesting to see how that develops. Maybe you can't; maybe you ultimately just have to have some trust in things that are straightforward like that, but.
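
One practical way to gain some trust in a statistical answer, short of finding an expert, is to cross-check it numerically. A small Python sketch of the idea, using a deliberately simple textbook claim rather than the model Jason actually asked about:

    # Sanity-check a statistical claim by simulation instead of trust.
    # Claim: the standard error of the mean of n samples with standard
    # deviation sigma is sigma / sqrt(n).
    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma, trials = 50, 2.0, 100_000

    # Draw many samples of size n and see how much the sample mean varies.
    means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)

    print("simulated standard error:", means.std())         # ~0.283
    print("claimed standard error:  ", sigma / np.sqrt(n))  # 0.2828...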

Steven Maresca 16:38
I don't really have any theories, but I think we'll return to some sort of proof-driven reality, because honestly, if you can point directly from the origin, the prompt you're giving a system like this, to the conclusion along a discernible path, even if it's contorted, as long as it's explained and you can get from point A to point B, that's a better place than having no understanding of the path. And that's kind of the place we're in right now.

Jason Pufahl 17:13
I'd say that was the exact outcome. You got a couple of good pages of really detailed notes that I felt were probably right. I think that answers our question, but I guess it's better than the starting place, which was no understanding at all.

Matt Fusaro 17:31
Yeah, I mean, my big takeaway from how all this is going is: it's not going anywhere. Find ways to use it properly, find ways to integrate it into what you're doing responsibly. Because, frankly, if you don't and the rest of the world does, you're going to be left behind. I know a lot of people are kind of shaking their fists at it right now, but it's not going anywhere.

Jason Pufahl 18:01
This is Matt's hot take section, for sure.

Steven Maresca 18:04
One of the most interesting things I've seen, kind of in reaction to what you're saying, is annotated models that produce sort of the back-end thought process, so to speak, for the content produced. Being able to say: this is a supposition; I don't have facts to back it up, but I'm assuming there is this fact that's not part of the prompting or the dataset I have, therefore I'm making an inference. That by itself may help massively to sidestep some of the problems we're talking about, because it's honesty about guesses being made versus things decided upon fact.
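
There's no standard "annotate your own reasoning" switch, but you can approximate what Steven describes by instructing the model to tag its claims. A sketch using the OpenAI Python library's pre-1.0 chat interface; the prompt wording and the example question are invented for illustration:

    # pip install openai  (pre-1.0 interface shown)
    import openai

    openai.api_key = "sk-..."  # your API key

    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Prefix every claim in your answer with [FACT] if it "
                        "is stated in the user's message, or [INFERENCE] if "
                        "you are assuming or guessing it."},
            {"role": "user",
             "content": "Our web server crashed at 2am after a traffic spike. "
                        "What likely happened?"},
        ],
    )
    print(resp.choices[0].message.content)

The [INFERENCE] tags make the guesses visible, which is the honesty about suppositions Steven is after.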

Jason Pufahl 18:50
It's such an interesting thing you're bringing up, though, because I don't think most people care about that. And I think that's the reason people are so nervous about it. The enthusiasm over the tools as they exist now is that people feel like they're conversing with you; they feel like they're getting accurate information with very, very little effort, and they trust it. And I don't think that's going to go away.

Steven Maresca 18:50
But people are very appreciative if I tell them I don't know. And if they hear me say that, it changes the interpretation of the conversation we're having in a way that makes it understood that there's a shared responsibility in navigating it.

Matt Fusaro 19:32
I agree and disagree with you at the same time. I think it would be helpful to have that annotated in there, but at the same time, I don't think people will interact with it the same way they do with a human, right? When it says "I don't know," you assume it's broken: yeah, this isn't working for me.

Steven Maresca 19:50
Yeah, it’s fair.

Matt Fusaro 19:51
That’s not what you think when you talk to a person though.

Steven Maresca 19:54
And the deficiencies of all of these platforms today can be benefits with the right framing and coaching. You know, if a generative AI platform emits something that it thinks is the appropriate output, maybe, if I know that, I can take it back to my prompt and say, ah, I can fix my question, I didn't ask it correctly. The tools need to be used appropriately; otherwise, honestly, you get garbage out if you put garbage in, and that's a truism for all of computing's history.

Jason Pufahl 20:29
So you've got these tools that are accessible to everybody, but not everybody can formulate their questions, or update their questions based on the answer they've just received. There are a lot of expectations built in there.

Steven Maresca 20:45
I can use a chainsaw, you do not want me cutting down your tree. You know what I mean?

Jason Pufahl 20:50
Yeah, fair enough. It'll be interesting to see how tools that are accessible to the entire population develop, and what the trust in them really is. I think the trust is higher than it should be right now, and I don't expect that to change, probably. I'm really enthusiastic about things like Copilot, and there's a whole ton of other projects out there using AI; there are some fascinating uses here. And coming back to blockchain for a second: we were all interested in crypto and blockchain, and we watched all that pretty closely, but it was always hard to get really enthusiastic because it felt so niche. It felt like people were just searching for ways to integrate blockchain, whereas with AI it feels like there are a million opportunities. And I think with blockchain they're still searching; medical records was one use case, but only a handful, I think, have really materialized. So I don't know if we've answered anybody's burning AI questions, or whether we've committed to it being a friend or a foe necessarily, but this is an interesting space, one we're going to watch develop. We're seeing it develop now; this is a forefront technology this minute. In fact, if we went back about a year, we had our discussion around AI in security projects.

Steven Maresca 22:30
It was machine learning at the time.

Jason Pufahl 22:32
So we've had that back and forth, right; we compared and contrasted the two, but all of us were not that bullish on AI. And in a year we've seen huge changes, to where now we're having a conversation about the potential value of AI relative to a forensic investigation of ransomware. I mean, what a change in a year. I think the next year is going to be hugely different.

Steven Maresca 22:58
Oh, yeah. We have no idea what’s coming.

Jason Pufahl 23:00
No idea. So we're prognosticating, right? Like Matt's hot take; you are the crystal ball in the center of the table, that's what it feels like. So this is a topic where I feel really good about saying: if anybody wants to talk about it, we don't have all the answers, but we're certainly interested, and we're certainly watching it, as I think a lot of people in this space are. So if there's a conversation to be had, this is probably a good one, and we're happy to have it. And if nothing else, we hope you found this podcast interesting, because there's a lot to uncover here. Thanks for listening.

23:35
We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn or on Twitter at Vancordsecurity. And remember, stay vigilant, stay resilient. This has been CyberSound.
