Vancord CyberSound Podcast

How will ChatGPT Affect Cybersecurity?

OpenAI’s ChatGPT is an emerging phenomenon in the cybersecurity space. However, as Artificial Intelligence technology advances, so does the potential for misuse. From generating phishing emails to disrupting the academic environment for youth, ChatGPT can have different impacts, and it is critical to practice appropriate usage.

Today, the team at CyberSound share their thoughts with listeners on the emergence of this AI software, as well as read a script generated by ChatGPT itself.

CyberSound episode 71

Episode Transcript

This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity, with your hosts, Jason Pufahl, Steven Maresca and Matt Fusaro.

Jason Pufahl 00:14
Welcome to CyberSound. I’m your host, Jason, joined today by Steve Maresca and Matt Fusaro, as always, hey guys.

Matt Fusaro 00:21
Hello.

Jason Pufahl 00:21
So, today we get to speak a little bit about the AI sensation that has swept, I guess swept everywhere, ChatGPT. I mean, honestly, it’s been out maybe a month at this point? And we spend a lot of time in our house talking about it, relative to sort of its implications in the academic space, as my kids explored the ethicality and maybe the morality of using something like this. For those people who don’t know what ChatGPT is, certainly go out and Google it now, because it’s really interesting, but it’s an AI,

Steven Maresca 01:07
It’s a chatbot.

Jason Pufahl 01:07
Yeah, I mean, I was gonna say text generator, but that’s so,

Matt Fusaro 01:10
That’s a natural language AI, right. So it deals in some type of language, primarily English right now. Actually, I’m not sure if it does other languages besides English. Linguistically, honestly, all the same rules typically apply as long as your data set’s there.

Jason Pufahl 01:25
That’s a good question, I actually haven’t tried saying, you know, give me something in Italian? I wonder.

Matt Fusaro 01:30
I’m sure it would do it.

Steven Maresca 01:30
It might.

Jason Pufahl 01:31
Yeah, I think it will. And you can generate almost anything on it, right? It’s as creative as the inputs that you give it. So in this case, we actually have written our podcast using ChatGPT. And we’re gonna read it for you; it probably will be shorter than our normal podcast. But just to give you a sense of what we fed it initially, it was really three different inputs. It does do the update of a previously generated text sample with new data. So we wrote, “Write a copy for a three person podcast episode that discusses the impact that ChatGPT potentially has in information security; discuss how it may be used to write phishing emails.” It generated something, we wanted a little bit more detail, so we said add information about impersonation. Again, it updated that, and we wanted a little bit more detail, so we said add content about dynamic conversation generation for text message phishing and scamming. So we have, I don’t know, what looks to be a three minute episode that it created. I didn’t tell it, do it in 5,000 words; I gave it no parameters at all. So I think the question is, who wants to be host one, host two, host three? Anybody?
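The iterative flow Jason describes, an initial prompt plus follow-up refinements that each build on the previous output, maps naturally onto a chat-style message history. A minimal sketch of that history-keeping, assuming a hypothetical `add_user_turn` helper and making no actual model call:

```python
# Sketch of the multi-turn refinement flow described above: each follow-up
# prompt is appended to a running conversation history, so a chat model
# always sees the earlier prompts (and, in a real session, its own replies).

def add_user_turn(history, prompt):
    """Append a user message to the conversation history and return it."""
    history.append({"role": "user", "content": prompt})
    return history

history = [{"role": "system", "content": "You are a helpful assistant."}]

# The initial prompt, then the two refinements used for this episode's script.
add_user_turn(history, "Write a copy for a three person podcast episode that "
                       "discusses the impact that ChatGPT potentially has in "
                       "information security; discuss how it may be used to "
                       "write phishing emails.")
add_user_turn(history, "Add information about impersonation.")
add_user_turn(history, "Add content about dynamic conversation generation "
                       "for text message phishing and scamming.")

# A real session would send `history` to a chat endpoint after each turn and
# append the assistant's reply back into it before the next refinement.
print(len(history))  # 1 system message + 3 user turns
```

The point of the sketch is just that the refinements are not independent prompts; each one implicitly edits the prior result because the whole history travels with the request.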

Matt Fusaro 02:55
I’ll be three.

Jason Pufahl 02:56
You want to go last.

Steven Maresca 02:58
I’ll go right in the middle.

Jason Pufahl 02:58
Alright, fair enough. So I will be host two, sorry, host one. You’re right in the middle. Host one. Hello, and welcome to today’s episode of CyberSound. In this episode, we’ll be discussing the impact that AI-powered language models like ChatGPT have on information security, particularly when it comes to writing phishing emails, impersonation attacks, and dynamic conversation generation for text message phishing and scamming.

Steven Maresca 03:25
That’s right. As AI technology advances, the potential for misuse increases. With language models like ChatGPT, it’s becoming easier for attackers to craft highly sophisticated and convincing phishing emails that can trick even the savviest of users. Not only that, but these AI models can also be used to impersonate a trusted source, such as a known business, organization, or individual, and generate dynamic conversations to lure individuals into text message scams.

Matt Fusaro 03:50
Absolutely. ChatGPT and similar models can be trained on vast amounts of text data, including examples of phishing emails, impersonation tactics, and text message scams. This means they can generate phishing emails and text messages that are indistinguishable from ones written by a human and even impersonate a trusted source with ease. The ability to generate dynamic conversations increases the chances of luring individuals into scams and stealing sensitive information.

Jason Pufahl 04:17
The consequences of this are alarming. With more convincing phishing emails and impersonation attacks in circulation and the ability to create dynamic conversations for the text message scams, it’s becoming increasingly difficult for individuals and organizations to protect themselves from these types of threats.

Steven Maresca 04:34
And that’s why it’s crucial for individuals and organizations to be aware of these types of threats and to take steps to protect themselves. This includes staying vigilant when opening emails and text messages, checking for suspicious links or attachments, verifying the authenticity of the sender before taking any action, and being cautious of emails or messages from sources claiming to be from a trusted entity. It’s also important to be cautious of unsolicited text messages, even if they appear to be from a trusted source, and never provide sensitive information in response to a text message or a call.

Matt Fusaro 05:05
There’s no doubt that AI-powered language models like ChatGPT have the potential to revolutionize many industries. But it’s important to consider the potential consequences and take steps to mitigate any risks. By staying informed and taking proactive measures, we can help protect ourselves and our information from these types of attacks.

Jason Pufahl 05:23
Thank you for joining us today on CyberSound. We hope you found this discussion informative and thought provoking. Until next time, stay safe and keep informed.

Steven Maresca 05:32
You didn’t read “outro music”.

Jason Pufahl 05:35
I didn’t. That’s true. There was intro music and outro music.

Steven Maresca 05:39
I just want to acknowledge that ChatGPT was thoughtful enough to include that. It’s regurgitated. It’s echoing back to you what you prompted, and that’s the trouble. I mean, to some degree, it’s only echoing what it’s read. So it’s synthesizing similar content, or the content that it infers is similar, and then filling in the components that you gave it. And you know, it does an okay job.

Jason Pufahl 05:44
Well, so that is the interesting part. I said, create a podcast, and it does know enough to do that. So you can tell it, create a podcast, create a resume, write a screenplay, you know, it will make a reasonable attempt at doing any of those things. But I hadn’t read this out loud before, and it was really interesting to hear it out loud, because it actually does feel, “generated” maybe isn’t a fair way to put it, but it doesn’t feel natural.

Matt Fusaro 06:32
Yeah, I mean, that’s why programmers especially need to be very careful using something like this. It introduces a lot of great things that you wouldn’t have normally gotten from just, you know, trusting a source on Stack Overflow, or something like that, or, you know, just Googling for it. But it does also introduce things that just aren’t correct.

Jason Pufahl 06:32
It’s okay. But arguably, this is, honestly, I think one of the worst examples that I’ve seen it create. So I’m so glad we got to read it on air. But I do feel like, you know, we threw a resume together for my 16 year old son, who doesn’t have tons of skills at 16. And it wrote a fine resume that he could then take and spend some time just updating and improving, right. And that, I think, is the value of it. If you take it purely from what it initially produces, you get something that sounds like what we just read. But if you take this and say, well, this is a reasonable first draft that maybe I can polish up and add to and experiment with, it’s a great way to do that.

Steven Maresca 07:28
Or it could have, you know, copyright and other things that are absolutely derivative of whatever it learned to produce that result. So there are hidden dangers that lurk, without any question.

Matt Fusaro 07:41
I found it interesting, Jason, you shared, I guess, some survey results. Was that correct?

Jason Pufahl 07:47
Yeah, EDUCAUSE, I actually got an EDUCAUSE survey, which clearly is focused on ChatGPT’s implications in an academic environment. I answered that this morning.

Matt Fusaro 07:58
So I think one of the things I found really interesting was the question that was in the survey: in general, what are your concerns related to generative AI? And one of the answers was that AI generated content becoming indistinguishable from humans was a concern. Isn’t that kind of the whole point? Isn’t that what everyone’s been driving towards?

Jason Pufahl 08:22
That’s the dream.

Matt Fusaro 08:23
Right. I mean, obviously, we’re concerned about that. And I think everyone’s starting to understand there’s some ethical reasons why it might be a problem in so many areas, but yeah, it’s just interesting that now that it’s getting closer, now we’re afraid.

Steven Maresca 08:39
But even as phrased, that’s not quite the root of it, right? What’s the threat of being indistinguishable? In my opinion, it’s the air of authenticity and authority associated with well-structured, well-formed statements, because most people who don’t know, you know, if they’re seeking information about a subject and they encounter language that is framed as authoritative, they take it as such. And that’s the risk, in my opinion, because it may not be legitimate in any way. It’s just constructed. That’s what I’m most worried about. And I think it’s about trust, which is how most people frame it as well.

Matt Fusaro 09:19
I find most people untrustworthy, for the same reasons.

Jason Pufahl 09:24
There is real concern though with people taking the information as if it is truth, and,

Steven Maresca 09:29
We have enough of a problem with that in other spheres.

Jason Pufahl 09:31
We do, and young people, this is really, I mean, I’m drawing from my own experience, right, this is really compelling for young people who say, hey, I can use this to answer questions in class or, you know, help generate an essay. But the reality is, it stopped getting data, I think, in 2021. So you’ve got a little gap; maybe for some of the historic stuff they might be working on, that doesn’t matter as much. But I’m also not seeing the follow-up review later to say, well, is what was written actually accurate? Right, there’s just blind trust.

Steven Maresca 10:01
Right. Critical thinking is required for use of any tool like this. I mean, it harkens back, you know, as of some time ago, right, to references that what you read on the internet isn’t necessarily factual. I don’t know if anyone recalls the Northwest Tree Octopus,

Jason Pufahl 10:16
It’s not?

Steven Maresca 10:17
But it’s stuff like that; it requires thought, especially if it’s being used actively in a professional context as a tool. Maybe it’s a good starting place, maybe it’s an atrocious starting place if it drew on material that is outdated and, since that time, refuted or wholly invalidated for some reason on a factual basis. You don’t know that if you’re consuming something from that dataset.

Jason Pufahl 10:45
There was another survey that we had looked at that asked, you know, in what areas are people most using ChatGPT? And I think the number one answer was marketing and advertising, which, honestly, I wasn’t surprised at all. I think there’s such a push in some of the content-based marketing that a lot of companies do to get content out. And it’s not easy for people to do that. And not everybody has a podcast that they can produce on a regular basis. Boy, how much easier can it get than just saying, you know, talk about X product in X industry, you know, in X amount of words, and get something you can edit. So you can certainly see that there. But then it also starts to water down the content that you see; I think there’s some risks to that.

Steven Maresca 11:33
You know, that response in terms of marketing and advertising is relative to ChatGPT. But other tools are actively in use for the same purpose of improving copy; Grammarly, as an example, has an advertisement about tone adjustments. It’s exactly the same type of technology, just applied a little differently. You know, some of that’s useful. I think a lot of the sentiment analysis and so forth that has historically gone into advertising is a good match for a tool like this, because it’s exactly what you want out of something of this nature.

Matt Fusaro 12:09
Yeah. Again, I think if you rely on it too heavily, you’re gonna fade into the background, right. If everyone else is using it, eventually you’re just like everybody else in your messaging and how it’s structured, because it’s not going to change too much.

Steven Maresca 12:22
And it won’t come up with new ideas, necessarily.

Matt Fusaro 12:25
Yeah, the creativity factor is not there yet.

Jason Pufahl 12:28
Yeah, but this is, I think, what has caused such a stir: just how much better this is than something that we saw two months ago. And so I think there’s this real sense of, it’s going to evolve really quickly, and is it going to get to a point where you can really modify something for tonality, for language adjustments, things like that.

Steven Maresca 12:50
I want to be careful about that, in terms of time. These tools have existed for a long while; the thing is that it’s now uniquely accessible through ChatGPT. That’s the threshold we’re at. It’s making these long-existing datasets and capabilities more consumable. And maybe that’s the transition needed to make them actually applicable for other industries. But,

Matt Fusaro 12:52
I’m interested to see what happens with, because Microsoft put billions I believe,

Steven Maresca 13:08
Right, tons of investment.

Matt Fusaro 13:09
Into actually making this a service on Azure, right. So I believe it’s either near the end of this month or next month it’ll be available as a service. And it was something cheap too, like 20-something bucks a month; I could be wrong on that.

Steven Maresca 13:33
Oh yeah, the next six months should be interesting in several different areas.

Jason Pufahl 13:37
So, you know, Matt, you made a comment around, isn’t this what we’ve all been asking for, essentially, as we talked about AI? I’m kind of curious, from the two of you, and I’ll give my opinion, do you consider this problematic? You know, there’s been a lot of noise about it. What are your opinions?

Matt Fusaro 13:57
I, sure, I think it’s problematic. But I mean, most technical advances always were, right; there was always a period of some fear. I mean, this is a really simple example, but I’m sure math classes got scared when calculators became really popular.

Jason Pufahl 14:12
Right, and that and that’s a common comparison.

Matt Fusaro 14:13
It’s not an earth shattering, excuse me, an earth shattering comparison. But, you know, we adapted to that; there’s going to be a more difficult adaptation here. I think there’s way too many things that you can apply something like ChatGPT to, but it’s just gonna be an evolutionary period.

Steven Maresca 14:30
I mean, I have a similar thought, and it is a general purpose tool; therefore, it’s ill suited to very specific problems. And that’s where machine learning and AI has historically thrived. And I’m talking like 30 years back, with Gulf War logistics, which was a stunning success of early techniques of this sort. Today, you know, some of the stuff that we’ve brought up, we’ve acknowledged machine learning and similar when talking about hype, and throwing water on that in the past, because it was hype. But when talking about something like this, general purpose tools have a place, because the earlier, far more finely honed models are very limited, right. And there’s a big gulf between the types of references we’re making, the types of things we’ve talked about in security products, for example, and a general purpose tool. I’m actually somewhat encouraged, rather than fearful. And I’m really interested in what will develop over the next couple of years as more and more engineers play with the technology. I think there needs to be ethics attached to a lot of it. But I think the possibilities are pretty substantial.

Matt Fusaro 15:49
Yeah, I think that’s what most people are worried about at this point, the ethics, right. That’s a hard problem to solve. I don’t have answers for that one.

Jason Pufahl 15:57
It is. And as a consumer, I think the technology is really compelling, and I can’t wait to see where it goes. But one of the concerns that I have is sort of understanding or recognizing the value of art as people move down this path. And you’re already seeing it in the movie industry, where it’s really difficult to tell if the actor you’re seeing is 95 years old or 45 years old, because of all the technologies we have to clean that up. So there’s a lack of authenticity already in certain mediums, and I’m concerned that it’s gonna be harder to trust. Oh, you know, this author wrote this book; well, how much of it did they really write, and how much of it’s really theirs? Or this artist crafted this piece of art; alright, well, how heavily modified was it by something that was AI driven? And maybe that’s just something that we’re gonna have to change our perception on. Right, you can’t associate art with an individual being so unique, perhaps, as we used to be able to.

Steven Maresca 17:01
Great cathedrals were built by craftsmen directed by an overseer of some kind, a master architect. They were very skilled craftsmen, and that architect could not have carved the stone. On some level, if an artist is simply coming up with a rule set and asking it to be executed by peers, or tools, I don’t think historically we’ve actually made a distinction that causes people to reject that art. And I don’t think we’ll see that today with anything of a common root in AI generated material. We’ve seen it in game development; we’ve seen it in historical art from the 20s. You know, the more structured art, a lot of it is very geometry oriented; you could argue they’re following the same sort of rule set that a computer happens to be executing when it’s generating art. I don’t see them as wildly deviant from each other.

Matt Fusaro 17:57
Right. By the same token, maybe movie credits shift towards the people making the graphics, versus the actor or actress, right?

Jason Pufahl 18:05
Yeah, there will be a lot of that, I have a feeling. So, you know, everybody loves the word disruptive; I don’t know if I want to call it a disruptive technology necessarily. But I think just the impact that it had in such a short time has been really interesting to watch. The sort of evolution from almost instantaneous outrage and fear on the academic side to, you know, there’s already some general acceptance of, well, the tool exists, so now we have to start thinking around, you know, what are expectations of students, and how do we read things more critically, and what do we trust? There’ll be a lot more discussions around that, that I’m really interested to see evolve over time. That’s probably enough ChatGPT. Hopefully everybody loved the ChatGPT generated podcast; you can let us know what you think of that. But of course, as always, if you do have any questions or comments about this podcast or AI in general, and Steve made the point that we’ve talked about it in the past, maybe not totally complimentary all the time, we’re happy to engage further on it. I think there’s a lot of room in this AI discussion. This just brings it to the masses more than I think people are used to. So as always, thanks for listening. Appreciate your time.

We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn or on Twitter at Vancordsecurity. And remember, stay vigilant, stay resilient. This has been CyberSound.
