Episode 141
Episode Transcript
Speaker 1 00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:11
Welcome to CyberSound. I’m your host, Jason Pufahl, joined today by Michael Grande and Steve Maresca. Today we’re going to briefly revisit the predictions we made for 2025, just to see how we did. I think, generally speaking, we got a lot of things right; maybe a couple weren’t as severe as we suggested. And we’re going to talk a little bit about what we envision as some of the bigger risks for 2026. I’ll start, because we do have a list. We talked about the shift toward more aggressive social engineering and phishing using tools that weren’t here a handful of years ago, AI-related tools, and we were certainly correct in predicting greater utilization of those tools and an easier way to dupe people. And we’ve already started talking about how we think misinformation and impersonation, generally speaking, are going to be bigger risks in 2026. And Steve, you used that word, supercharged.
Steven Maresca 01:32
Yeah, and we’re seeing it from customers right now who are actively experiencing real-world attacks in ways that you wouldn’t think are beneficial to a criminal from a reward perspective; they’re not necessarily making money.
Michael Grande 01:49
Is it part of the long game, though, right? Are they constructing narratives that give them the opportunity to establish false trust over time?
Steven Maresca 01:56
Almost certainly, yeah. I mean, there’s been a huge shift in phishing campaigns. Written emails, traditional phishing, that’s probably AI-agent driven in an automated way behind the scenes, where you’re not packing all the urgency and emotional appeal that’s normally expected into the initial email. Instead, it’s amortized out over time across ten different messages back and forth, and that lowers the suspicion level people have, because they’re having a conversation with someone who’s interacting in the way they expect. Then, when the other shoe drops and there’s a request, their skepticism isn’t at a level that causes them to pause. They just act.
Michael Grande 02:49
I think about a few of the conversations we’ve had over the past several years, maybe with some of the legislators at the state level, talking about regulatory efforts and trying to hold some of the providers accountable. I think the biggest surprise, from my perspective, since we talked about social media, is that with this constant onslaught of doctored videos and sound and events, I feel as though there’s absolutely no label or watermark applied to say, hey, this is AI-generated, or we flagged it for some reason. I’m sure it does exist to some degree. But personally speaking, some of the things that show up in a feed are puzzling, and it’s remarkable to see how it all slips through the cracks, at least on the media side. I’m concerned about that.
Steven Maresca 03:52
I have a theory that there’s regulatory capture involved here. I think there’s sufficient lobbying behind the scenes that any attempt to add teeth to regulations simply is not occurring.
Michael Grande 04:03
Yeah, I agree.
Steven Maresca 04:04
Other nations, Europe as an example, have well-structured fines and oversight bodies and reporting pathways and baseline expectations for behavior; if those are violated, there is a consequence. That doesn’t exist here, and any attempt to project transparency doesn’t really have a full-throated commitment from most of the corporate entities doing this, in my opinion. Look at how X has transformed from the Twitter days. All of the staff, the hundreds of people that were employed for content monitoring and curation and pulling down the offensive stuff, they’re gone, right? They don’t exist anymore. What was present has been hollowed out. We’ve gone backwards in many respects.
Jason Pufahl 05:05
And so while we’re calling this the 2026 prediction, it’s really more of the same. How many times have we discussed, from a security awareness training perspective, credential management and verification of identities, understanding who you’re speaking with, trusting and verifying? It’s all the same thing. It’s just more difficult now to do those things. Because, to your point, Steve, you can play the long game. You can build trust over time, because it’s not expensive to do anymore from a human capital perspective, right?
Steven Maresca 05:47
I mean, if we take a very narrow view and just think of the dollars: to have an agent service a phishing campaign, carrying on a conversation of typical email length with a targeted victim, we’re talking about cents per target from a token-consumption perspective. If you’re using public compute, it’s wildly cost-effective through that lens. So why wouldn’t an attacker do it? (A rough cost sketch follows this exchange.) But yeah, perhaps we should do quick hits on the things we predicted for 2025, assess whether we were accurate or not, and then shift to...
Michael Grande 06:31
How close to the mark we came.
Jason Pufahl 06:33
That’s fair. We barely started with the first one and it immediately segued into a prediction. Yeah.
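To make the cents-per-target estimate above concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption: the message count, tokens per message, and per-million-token price are placeholders, not quoted provider rates.

```python
# Back-of-the-envelope cost of an AI-driven phishing conversation.
# All three constants are assumptions for illustration, not measured
# figures; check current provider pricing for real numbers.

MESSAGES = 10                    # back-and-forth emails per target
TOKENS_PER_MESSAGE = 500         # assumed prompt + reply for an email-length turn
PRICE_PER_MILLION_TOKENS = 2.00  # assumed USD, blended input/output rate

total_tokens = MESSAGES * TOKENS_PER_MESSAGE
cost_usd = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# 5,000 tokens at $2 per million comes out to about a penny per target.
print(f"{total_tokens} tokens -> ${cost_usd:.4f} per target")
```

Under these assumptions a full ten-message conversation costs on the order of a cent, which is what makes the patient, multi-message approach economically rational for an attacker.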
Michael Grande 06:41
I was gonna say, I know one of the items we brought up, and it still feels constantly in our face today, is this geopolitical atmosphere, what’s happening out in the world: state actors, bad actors, nation-states being involved in different actions, wars still waging in certain areas. I think we thought instability would mean a lot more complex attacks, and that there would be more hybrid sabotage and disruptive campaigns. I don’t know that we saw it quite to the level we expected. Maybe that’s back to the long play, right? I don’t know that it totally came true.
Steven Maresca 07:33
I think it’s fair to say we were partially accurate there. I also think there are probably some disruptions and outages that occurred in the last year that we’ll find, a year from now, are explained as being exactly what we thought. But certainly it wasn’t as high-temperature as we thought it could be.
Jason Pufahl 07:57
And what we did see was somewhat, I’ll call it, mundane: some of these threat actors using global incidents as a hook for phishing, something tied to the news, which isn’t that exciting or really that unique. So less of the infrastructure outages, less of the disruption, and probably more of the traditional let’s-take-advantage-of-the-news-and-see-what-we-can-make-happen approach. Yeah, agreed. AI utilization by threat actors: certainly we predicted it, and we’ve already spent a bunch of time on it today. It was true in ’25, and it’s going to be more true in ’26. I don’t know that there’s much more to say about that one.
Steven Maresca 08:52
I do think, at least from a ’26 perspective, voice cloning. I mean, we’ve talked about it repeatedly now, but the models coming out right now are so hyper-efficient compared to what they used to be that I think we’ll see an explosion of it. It’s now relatively trivial to build an agent that sits on a phone call and engages in a realistic way. I mean, I could do it with six hours of effort. That’s how straightforward it is.
Michael Grande 09:27
So I don’t want to paint a picture or suggest something, but it’s also entered the ethos of what’s happening out in the world when we talk about voice cloning. I always go back to, when we were younger, I feel like it was a more common saying: free speech is one thing, but you can’t yell fire in a movie theater, right? It was a common phrase that everybody understood the context of. I get concerned about our public safety systems being overrun by false calls and reports. I think we’ve seen an increase in swatting, and we see a lot of different types of events happening under the cover of an agent or a voice clone of someone in a political or public position. That’s a concern, obviously from a schools perspective, but also out in the general public: public trust outside of our private homes, and the dangers created by those things.
Steven Maresca 10:32
Yeah, it’s relatively trivial to imagine that someone could be framed for something. A false report could be extraordinarily easy to manufacture. People could be slandered with relative ease. All sorts of allegations could be ginned up without much effort. There’s a lot of risk there, and it will be exceptionally difficult to refute in many cases, unfortunately. That said, I do think there are some possible benefits in the discussions I’m seeing in various corners: from a language-model perspective, or an image- and video-generation perspective, watermarking and fingerprinting are part of that conversation. But it seems like every other month companies get on board, and then they backtrack: nope, we’re not doing that. So I don’t know what that will turn into, but it may help with this particular problem of ascertaining whether there is legitimacy or not.
Jason Pufahl 11:44
So, yeah, on to the next two predictions we made last year, which were less AI-based and more traditional-security-based. One was that there is essentially security-controls fatigue among end users: so many controls get put in place to help protect people that they either start to ignore some of the alerts coming out of them, or attackers find ways to circumvent them. The other thing that jumped out was how MFA, which was considered newish ten years ago, is now completely mainstream and has gotten to a point where it almost requires too much validation, right? You’re constantly doing your second factor, constantly entering a number to validate that you are who you are based on your locale or whatever. There’s definitely a fatigue there. And I think people are now starting to pull back, or I don’t know if they’re pulling back. Steve, this is probably a good question for you, because you see it from so many clients: are people pulling back on it? Are people resisting some of these technologies? Are you seeing more bypass?
Steven Maresca 12:59
I think it’s fair to say that lots of orgs are at different places on that arc. Some have absolutely instituted number challenges or biometric platform authentication as an augmentation. They did so appropriately, in response to the threats they faced, and I think they had a period of relative tranquility because those controls were working. Then the cat caught the mouse, to some degree, and now they’re experiencing some really ugly bypass events where the number challenges are being proxied. It’s possible to circumvent all of these things in some capacity, through coercion, deception, technical achievements, misuse of the tools in question, you name it. The fatigue is less than you might think. There’s a relatively high degree of finesse with a lot of these platforms, regardless of which one we’re talking about: Okta, Azure MFA, Google two-step verification, it doesn’t matter. Generally speaking, people are used to it. The issue is actually situational: if people are prepped to anticipate an MFA prompt, then it’s a very, very low-friction thing to deceive them into giving up access. So it’s less about the technology failing, less about the fatigue, and more about the surrounding circumstances that make people willing to give up something they shouldn’t. Generally speaking, these tools are still effective, but another wave of evolution and refinement is necessary. We’re at that threshold where I think we’ll head more toward the biometrics and platform side to appropriately safeguard people. That’s where we are.
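As a sketch of the mechanic Steve is describing: number matching shows a short code on the sign-in screen, and the user must retype it in their authenticator app. The following is a minimal, hypothetical illustration of the general pattern in Python, not Okta’s, Microsoft’s, or Google’s actual implementation, and it also shows why proxying defeats it: whoever controls the sign-in page sees the code too.

```python
import secrets

def start_challenge() -> int:
    """Generate the short code displayed on the sign-in screen."""
    # Two-digit code, mirroring the common vendor pattern; real
    # products vary the code length and delivery details.
    return secrets.randbelow(90) + 10  # 10..99

def approve(displayed_code: int, entered_code: int) -> bool:
    """The authenticator approves only if the user retypes the code
    shown at sign-in, so a blind push-approve is no longer enough."""
    return displayed_code == entered_code

# Legitimate login: the user sees the code and retypes it.
code = start_challenge()
print(f"Sign-in screen shows: {code}")
print("Approved" if approve(code, code) else "Denied")

# The bypass Steve mentions: an adversary-in-the-middle proxies the
# real sign-in page, so the victim sees the genuine code on the fake
# page and dutifully enters it. The check still passes, which is why
# the control works only when the surrounding context isn't attacker-
# controlled.
```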
Michael Grande 15:00
So our last prediction for ’25 centered around the very exciting, very glamorous topic of third-party risk and vendor security assessments, which logically pulls in a lot of the things we’ve been talking about: as organizations expand, understanding and assessing your vendors’ and strategic partners’ posture, what they’re doing, and how you’re mitigating third-party risk, obviously in mid-market and SMB environments. I think we were pretty accurate with that one as well.
Steven Maresca 15:39
Yeah, I think it’s an expression of both tightening budgets and heightened risk that these things have become emphasized and more critical. There’s less tolerance for error across all industries, and holding others to account formally is an absolute expectation. It’s a business requirement for securing revenue. It’s not changing; it’s only getting more stringent. I don’t think that’s a net negative overall. It has encouraged and fostered real improvement in industries that have honestly been somewhat hesitant to invest. But I do think it’s had other effects: costs rising and being passed on in other ways that may be perceived as a negative. Bottom line, it’s a lot of effort, it’s getting to be more so, and it’s not going away.
Jason Pufahl 16:41
It’s just funny, Steve, you looked at me and I was wondering. Michael talks about the prediction, my response was “yep,” and then you waxed eloquent about all the reasons it was a good prediction. Pretty funny. You took something that I thought was actually pretty mundane and made it pretty interesting, which I appreciate. My first reaction to that one category was simply, yep.
Steven Maresca 17:15
Well, this is kind of my job. I loved it.
Jason Pufahl 17:20
I was just thinking, I don’t know how much there is to say, but then you found a whole bunch of good stuff to say about it.
Steven Maresca 17:25
Well, I’m glad my stream of consciousness makes sense.
Jason Pufahl 17:28
Yeah. Now repeat it. That’s what I want you to do.
Steven Maresca 17:31
Now, I want to cover something really specific. We’ve actually seen poisoning around commodity inference platforms like OpenAI’s ChatGPT to deliver malicious content to unsuspecting users. Back in November, December, there was a very interesting scenario: if someone searched Google for how to increase disk space or remove data in OS X, they could land on a transcript from ChatGPT that included directions that seemed plausible to a non-technical user, telling them to take a command and run it. If you did so, the result was something that was unambiguously, provably, verifiably a Trojan. It doesn’t look that way, but if you’re willing to copy and paste something into a terminal because you’re trying to fix a disk-space problem, and I guarantee millions of people will do that: it’s from ChatGPT, it’s not going to hurt me, it’s a computer, it’s not malicious, it’s not a hacker. But no, it was a Google ad, well placed against certain keywords, for unsuspecting, non-technical users to interact with, with instructions that seemed plausible and an outcome that was absolutely 100% malicious. So more of that is coming.
Jason Pufahl 19:01
And honestly, that’s one of the big risks of AI in general: asking it questions about a topic you have no real background in, and you just have to...
Michael Grande 19:11
Trust the results, right or wrong. Yeah.
Jason Pufahl 19:15
I mean, this is one of the top-of-mind things for me: the whole new ability to misinform or misrepresent. You’re just seeing it so much now.
Steven Maresca 19:24
Well, some of it is a utility, in the sense that it’s artistic license with respect to the image to be conveyed. That’s not one of those misuses of the same capability to misdirect, right?
Jason Pufahl 19:39
So I had a long discussion with Paul and a variety of other people. We were talking about people, and I posited that it’s a race to the bottom in terms of people’s behaviors. If there were some sort of catastrophic event, right, if power grids go down, if people feel like they’re in a position where they have to take care of themselves, we would have mass looting; we’d have many people doing the wrong thing. And he argued that people are generally good and we’d probably come together. I think there’s a lot of evidence to the contrary, and I just don’t believe that. He argued that, you know, 98% of people are good. Unfortunately, I think it only takes 2% of people not doing the right thing to produce a significantly negative outcome. And it’s at a point now where I don’t trust anything I see on social media, because there are so many videos now that aren’t necessarily intended to cause major harm, right? Like watching somebody’s generated avalanche that theoretically sweeps all the cars off the highway on I-70 in Denver. What’s the big deal? Except it’s so realistic, and sometimes so difficult to discern whether you’re looking at an actual newsworthy event or a fake. And we’re seeing more and more and more of that. So sure, it’s creatively using the tools, but it’s done to get more eyes on something sensationalized, to drive traffic to whatever site for whatever purpose. I feel like there’s no clearer example of people’s overall misuse of technology, of the race to the bottom in terms of doing the right thing, than social media. We treat each other poorly in comments, we talk to people in ways we never would in person, we generate fake news, we generate fake images. It’s...
Steven Maresca 21:54
Awful. I find it depressing. Before, there was a statement untethered from citation, and it was the burden of the audience to do the research to ascertain whether it was real or not. Today, the author of the assertion couples it with what appears to be evidence, and that in and of itself is the problem. I have an example that’s actually really tangible here: trains in the UK, back in early December, were canceled because of a generated photo of a bridge collapse. The authorities didn’t know one way or the other, so they acted accordingly, because something was spreading around that hypothetically showed something real, and they shut down a real line. That is a real-world impact from something fabricated. There are fundamentally more and more examples of that over time. It’s not just salacious clickbait generating engagement; it’s actually affecting the real world in an increasing way. I don’t know what else to say other than that it’s dangerous. Falsehood always flies faster than truth; there are a million variations of that phrase, and it remains true. And unfortunately, we’re in an era where people outsource their cognition to other tools, and if something seems reasonable, they believe it.
Jason Pufahl 23:25
For 2026, one of our predictions absolutely is widespread misinformation, whether it’s truly for mal-intent or not. The example you just used has a negative outcome; many of these things don’t, really. You know the trend, right? It’s so foolish, but there’s the trend where the mother animal runs up to a vehicle and drops her baby into the car because it’s being chased by a predator. There’s example after example of that trend. How it got popular, I have no idea, but the first time I saw it I thought, oh, that’s cute, that’s cool. And then you realize it’s all fake.
Steven Maresca 24:16
But I think there’s danger in asserting this is a future state. We’ve already passed that threshold. I like to cite the Overton window shifts in this particular regard. From a journalism perspective, if you had been asked in 2015 to respond to a frankly now normalized, regular, everyday piece of misinformation occurring in the news, you would be utterly floored and flabbergasted. There is material damage accumulating as a result of these things, even if they’re little things, tiny cuts. Society is being fractured into tribal units that don’t exist, not really; they’re manufactured. There are changes to the social contract in general, from government services to corporate commitments to hiring practices to firing practices, terminations that result from falsehood, a million different tiny examples. And I think we’re becoming desensitized to the reality that we’re already there. It just looks a little different now, supercharged with imagery and video.
Jason Pufahl 25:39
I think that’s the concern, and I think that’s the right word, 100%. We’ve been here for a while, and I’d say it used to be easier to understand, easier to see sensationalized news or things you just knew were somewhat ludicrous. Now it’s so pervasive, in video and sound, in a way it wasn’t before, that even people who probably weren’t big news consumers, or consumers of legitimate sources, are starting to get pulled into things in a way they weren’t before. Supercharged is the word; I think it’s the perfect way to describe where we are today. And that’s the reason I wanted to put it on the list: it’s going to impact the social engineering things that we see. People are creating avatars that respond in a way that’s much more natural now, that look potentially much more realistic and can easily trick somebody. And they’re building entire personas, right? They’re creating LinkedIn accounts for these people, with feeds that go back for a period of time. They can stand up a website quickly. So all the things you might have used before to validate real versus fake, it’s so much quicker and easier to fool people now. So I like the word supercharged, because I think it’s spot on. That’s the reason I put it on the list.
Steven Maresca 27:10
Bottom line, it’s orders of magnitude more effort to discern real from illegitimate today than it was. And you need to be a specialist to actually do it.
Jason Pufahl 27:22
And I find I’m embarrassed. I’ll use the avalanche example. Partly because my son went out to Colorado, I was looking at some things, and I saw this avalanche. And it took me three or four watches to see that there’s a giant boulder that theoretically swept through all the cars and then disappears a couple of frames later. I didn’t catch it. I’m embarrassed that I had to watch it four times, and I’m embarrassed that I contributed to the success of the algorithm for the person who posted it. The whole thing bothers me. And I’m not huge on social media, I don’t have a big presence, I don’t consume a lot, but I’m doing less and less, because I trust it less than I did before.
Steven Maresca 28:07
There are no easy solutions here. Critical thinking is a skill, and it erodes.
Jason Pufahl 28:14
This feels like about the midpoint of our 2026 predictions. So why don’t we stop here and pick this back up in a couple of weeks. In the interim, if people have any questions, we’re happy to answer them, and happy to incorporate them, perhaps, in a future episode. But in two weeks, look for part two, because we’ve got more predictions to come.
Speaker 1 28:33
We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn, and remember, stay vigilant, stay resilient. This has been CyberSound.