Step inside the chaos of a real ransomware attack. In this episode of CyberSound, Vancord’s experts break down the timeline of a ransomware incident—from initial compromise to full recovery. Discover how threat actors move, what response steps matter most, and what it really takes to manage the disruption. This isn’t theory—this is a behind-the-scenes look at what actually happens when a ransomware attack hits.
Episode 127
Ransomware Attack Timeline: A Walkthrough of the Disruption

Episode Transcript
Narrator 00:01
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:10
Welcome to CyberSound. I’m your host, Jason Pufahl, joined today by Steve Maresca.
Steve Maresca 00:15
Hey there.
Jason Pufahl 00:16
And we’re going to speak, I think, for a little while, on ransomware, ransomware incidents, incident response in general…
Steve Maresca 00:25
The landscape of security incidents. It’s an evergreen subject, and even folks who have experienced one recognize that every incident plays out differently for others.
Jason Pufahl 00:34
And I think that’s what we want to talk about. People who’ve experienced it understand how these events unfold and how long they can take. We have conversations pretty regularly where people assume that if there were a security incident, ransomware being a common one, though there are others, of course, it would be disruptive but really just a blip, and we know it’s sort of the contrary, right? So we really want to talk about what a threat actor looks like, because I think that’s important, and then we’re going to talk about a couple of specific security incidents that we’ve managed. Frankly, if you have access to the YouTube feed, there’ll be a couple of visuals we show throughout, which is unusual for us, but they’ll give you a much better picture of the timeline and duration of what an incident might look like. So, to frame what we’re talking about: what is a threat actor? What do people think it is, and what is it really?
Steve Maresca 01:29
So usually, when we’re giving presentations to a live audience, we have what I call a cheap trick slide. It’s the stereotype. I still love it, and I know it works because it plays to expectations: a stereotypical guy in a hoodie, you know, shrouded in darkness. And it’s a fiction, a Hollywood portrayal.
Jason Pufahl 01:49
NCIS.
Steve Maresca 01:50
Sure, you know, the guy surrounded by glowing monitors with the projection of the screen on his face, just to make it more ludicrous. Reality is far more mundane, and the immediate follow-on visual for that is a stock image of a large office with lots of people working in cubicles. It’s a cube farm, right? And the reason for that is that’s reality. Yeah.
Jason Pufahl 02:14
Organized crime.
Steve Maresca 02:15
Organized crime, but very much like a corporate environment. They have managers and middle managers, reporting expectations, quarterly KPIs, revenue targets. They have regular strategy meetings and stand-ups. They structure their day to achieve the best possible outcome for them in terms of illicit revenue. But nevertheless…
Jason Pufahl 02:38
It’s based on ROI, yeah, and I think that’s the important thing for people to take away. It’s a business, and every company is a potential target. It’s not just about, you know, can you get Department of Defense data or confidential data? It’s, can you cause enough problems that someone is willing to pay a ransom, or pay them in some other way.
Steve Maresca 02:42
It’s a business. Or in some cases, where none of that is involved, you’re a target because you’re a path to attacking another entity. So there are lots of very nuanced and complex elements to this, and it’s an ecosystem too. You know, the guys that do the breaking and entering sell access to the folks who extract the data and then interpret it, your traditional marketplace, absolutely. So it warrants emphasis that targets are researched, they are sold, they are shared, and they are basically part of a strategy to achieve greater impact on others.
Jason Pufahl 03:33
And they’re not necessarily these incredibly sophisticated attacks. A lot of times it’s pretty basic, either social engineering or taking advantage of the fact that somebody didn’t patch a vulnerability, basic things.
Steve Maresca 03:45
Their goal is to take the path of least resistance in all cases, because it maximizes their outcome. Absolutely. Here’s the thing, though: every incident shares some characteristics. They rhyme with each other, but I’ve managed many incidents, I’ve stopped counting how many, and none of them are exactly the same. I think it’s worthwhile, nevertheless, talking about commonalities, just for the purpose of, you know, communicating what folks should expect. And that’s, I think, the point we’re trying to make today.
Jason Pufahl 04:17
So we’ll show a couple of images here. We’ve got the pictures in front of us, so we’ll start with the one on the right. I’ll tackle it from a high level: this was a large-scale organization. The incident started, and for a fair amount of time, and we can talk about this specifically in a second, they didn’t have any visibility at all. From the day it effectively started to the day we say it concluded was, you know, 209 days. So really long, really disruptive for the whole org, and even more disruptive for a key set of people who had to take it to its conclusion, right?
Steve Maresca 04:59
And that’s the really important thing to talk about in this timeline in particular. It’s divided up into three categories. The first portion is zero visibility and initial reactions. The middle portion is actual incident response and recovery. And the long tail at the end, which we have to emphasize and we’ll get to in a minute, is reputational rebuilding and making sure they’ve done the right thing.
Jason Pufahl 05:27
And developing and re-establishing trust with partners, exactly, that was an issue. Some people probably don’t know what visibility means. So when you say a lack of visibility…
Steve Maresca 05:37
So every incident starts with, candidly, an action that’s not been prevented, not been detected, or that occurs in some way that is out of view. That moment is really only pieced together after the fact. You can get to clarity only if data was collected to give you that opportunity.
Jason Pufahl 05:56
And data like log data, application log data, identity…
Steve Maresca 06:00
Every incident is a little different, but visibility in this conversation really has to do with something that gives you a clue as to what malicious activity occurred first, what staging activity the threat actor performed, and then what change in strategy took place to actually cause greater impact. Visibility being absent in this first example of ours was really related to inadequate logging, a malfunctioning endpoint defense platform, and a variety of other characteristics tied to the number of networks they had. So things were kind of stacked against them, and they got late notice that they were compromised. And if notice is denied you, then you can only react more slowly. That’s basically the outcome here.
Jason Pufahl 06:48
And in this case, there was a fair amount of time, you know, dwell time, where the attacker was present and they didn’t even know about it, and then they spent some time trying to contain the attacker but failed a few times before they ultimately…
Steve Maresca 07:03
Yeah, and this is a very common problem. Organizations try to, you know, restore the system that was compromised, or wipe the device and give somebody something new. That’s often enough if you assume the malware is localized and didn’t go any further than that. Realistically, if you have any sort of hint that something malicious is happening, there are five other things in the background undetected. This example reinforces that point, because systems were restored before investigation actually took place, and then they were re-impacted; the threat actor was still present. Containment didn’t take place before recovery. It was a sequence of operations out of order.
Jason Pufahl 07:50
So, looking at the slide, we basically say, you know, days 1 through 84, in a lot of ways, are discovering it and a couple of failed attempts to contain it. Then we say IR starts at day 84…
Steve Maresca 08:02
Yes, formal incident response. You know, the triggering of outside assistance, the deployment of defensive tools, information-gathering tools, and so forth. That’s how we define it, anyway.
Jason Pufahl 08:12
And then we have roughly 40 days until we say things are generally contained, and most systems are probably restored at that point.
Steve Maresca 08:22
Right. In an incident of this variety, of this scale, it’s normal for a week or two to transpire between initial incident response and basic systems being back up and functioning. But, you know, the full duration is often a month, right?
Jason Pufahl 08:37
Long time.
Steve Maresca 08:37
Yeah, for the peripheral systems, for the lower priority systems. And that needs to be an expectation of every org. If you have a bunch of systems, you’re going to stand up the ones that are safe to do so, and the others you’ll get to when there’s an opportunity.
Jason Pufahl 08:52
So then, you know, we said this is a 209-day incident. We have days 121 to 209, which are much more about, I guess I’ll call it reputational repair, because it’s not really a technical exercise at that point, right?
Steve Maresca 09:08
And manual business processes that needed to take place until the last lingering systems were re-established. You know, it’s very common in the wake of an incident that you have to notify your business partners, and when that occurs, they are reluctant to re-engage with you. If you’re using APIs or transferring data, they’ll want some assurance that you’ve done the right things to avoid a threat to their operations. That was an exacerbating factor here. All of the infrastructure in this incident was safe and redeployed, but days 121 to 209 were solely about demonstrating the completeness of that activity to third parties.
Jason Pufahl 09:47
So one of the things that I always like to highlight is that the whole organization experienced the incident, maybe not quite from day one, but certainly shortly thereafter, for a period of a few months. The IT team, and maybe legal and a couple of other folks, have basically a 209-day experience. So it’s very different for somebody who needs to use the systems versus somebody who supports or deploys or deals with them. And in many ways, the organization feels, come day 121, that everything’s resolved and everything’s fine, and it’s a small set of people who have to continue working through this. So it’s really taxing for kind of a core set of people.
Steve Maresca 10:32
It is, and it disrupts a year. Yeah, I mean, the timeline is the better part of a year. It’s important to remember too that, while this incident didn’t involve it, many others require notification of affected parties if data theft occurred. You know, if you’re a publicly traded corporation, you’re filing 8-K reports because of a material impact, you’re notifying your shareholders and the chairs, so much other activity. So it’s important to say that post-incident, beyond the reputational piece, there’s so much more from an insurance and legal obligation perspective, and every organization needs to at least contemplate it.
Jason Pufahl 11:15
So that was a long one, you know, two-thirds-ish of a year. Let’s shift to one that, I’ll argue, is probably a little more typical from a time frame perspective, you know, two and a half to three months, roughly.
Steve Maresca 11:28
Yeah, with maybe two weeks in the middle of true outage and recovery pain. This second example is a very typical ransomware event, really, that started with defensive systems taking note of malicious activity. The visibility problems that existed in the prior incident we just talked about weren’t really present here. This is an organization that had properly functioning monitoring. Unfortunately, they had a security gap, and, you know, there was an initial access vector through the remote access systems where someone was just able to waltz in and access key systems. So there was a very short window from the attacker noting there was a way in, maybe a phishing email preceded it, we don’t know, the actual causal factor was unknown, but from initial entry to actual impact being visible…
Jason Pufahl 12:24
Very short.
Steve Maresca 12:25
Yeah, and the window there is usually a week, three days, something like that, for the waltz in the front door type events.
Jason Pufahl 12:31
And so I’d say this feels like it follows a little bit more of a pattern of detecting the attacker and then starting to do some of your containment work to kind of reduce the impact, taking systems offline. What did that look like?
Steve Maresca 12:46
Yeah, in this particular scenario, it was a really by-the-book type of event. You sever external network traffic, you shut down networks between facilities, you deny the attackers the ability to use identities by disabling those accounts, and things of that variety. Containment started immediately; really, the cleanup effort was mostly invested in restoring underlying systems. This is a very interesting event in that the virtualization infrastructure itself was encrypted, which makes sense. If you consolidate systems to benefit from space reclamation, power efficiencies, and not needing so much physical equipment, you are aggregating risk. Attackers know that, and if they can get to the VMs themselves, they don’t need to attack the servers that are running as instances. This was a scenario exactly like that, where an enormous amount of damage was done quickly by encrypting VMs. So not only was initial access fast, but the actual negative impacts were huge and immediate.
Jason Pufahl 14:00
And so what did recovery look like at that point?
Steve Maresca 14:05
Fortunately, this is sort of a success story for the utility of robust backup infrastructure. It involved buying time to keep systems stable, identifying them as safe, building new networks that didn’t have the attacker present within them, and restoring from backups. Backups were unaffected in this instance, and that’s why the timeline is compressed compared to the prior one.
Jason Pufahl 14:29
So actually, I was going to ask that, right? We had that two-thirds-of-a-year event, and then we have this three-month event. How much do you attribute that to the fact that there was early notification, early detection, around this one?
Steve Maresca 14:42
So this was shortened on both sides of the equation because of that. Visibility was effective at the beginning, so they got early notice. It was clearly very, very severe, but there was reasonably good data to help, good data, very high fidelity. And at the end, because of quality backups, it was more a question of just declaring infrastructure safe and then restoring something clean, a pretty easy recovery. Defenses were put into place along the way, certainly, just as in the prior incident, but at the end of the day, an easier cleanup.
Jason Pufahl 15:11
So, I mean, is it a fair takeaway to say that being prepared and doing some of the work up front, you know, getting those detective controls in place, detective and preventive, I suppose, getting your logging in place, and probably having a slightly better sense of how to handle an incident, did that contribute to this being shorter?
Steve Maresca 15:31
Yes, they drilled. They had done backup recovery tests. You know, the staff at this organization knew what dire straits they were in immediately because of their investment in time and personnel, and the result was better for them. Could they have avoided this? Maybe. You know, the encryption of the hypervisor infrastructure, that was because of a vulnerability they could have patched. So be it, that happens all the time, and the initial entry could have been better defended. Those are honestly scenarios any organization can be confronted with. So it’s just a matter of unfortunate opportunity.
Jason Pufahl 16:09
So we have a couple of visuals, and I do think it’s worth people taking a look at the YouTube channel if they want to see these. We’ve got a lot of experience with incident response of the general variety, right? We talked about two ransomware incidents here, but I think we often see people feeling like, well, it won’t be that big a deal. And, you know, if you’re adequately prepared, maybe, to your point, that second one wasn’t quite as bad as it could have been, but it’s still expensive, it’s still disruptive, even if you can recover. Frankly, a lot of these businesses have better things to do with their time than deal with an incident, right? But it is shorter if you spend some time up front preparing. It is shorter if you have clarity about what the event was, because you can make better decisions. And I think these two really squarely show that.
Steve Maresca 17:05
Yeah, I’d agree. And not every incident is a few weeks or a couple of months. Some are a week, and those are fantastic success stories. It’s just a matter of running the process, being reassured from a data perspective that you have everything in place and can recover appropriately, and the outcome is good, yeah?
Jason Pufahl 17:25
Well, yeah, we’ll wrap up. I think people have a good sense of what incident response looks like. But if you’re concerned about it and you want to talk through, you know, tabletops, incident response preparatory documents, and response plans, we’re happy to do that, of course. If you’re just interested in learning more about what incident response looks like and what kinds of things you need to think about, reach out. We’re happy to chat about it.
Steve Maresca 17:48
And the strategy has changed a little bit from a National Institute of Standards and Technology perspective, so maybe another session on that is appropriate.
Jason Pufahl 17:55
Yeah, actually, they updated even their response approach, absolutely right. So that’s a next step for the next one. As always, thanks for listening, and hopefully you got a couple of good takeaways from this. Thanks, Steve.
Narrator 18:08
We’d love to hear your feedback. Feel free to get in touch with Vancord on LinkedIn. And remember, stay vigilant, stay resilient. This has been CyberSound.