Join Jason Pufahl, Steve Maresca, and Matt Fusaro on this episode of CyberSound, as they dive into the dos and don’ts of the first 48 hours after a cyber attack.
The First 48 Hours: How to Know You’ve Been Attacked
[00:00:01.210] – Speaker 1
This is CyberSound; your simplified and fundamentals-focused source for all things cybersecurity with your hosts, Jason Pufahl and Stephen Maresca.
[00:00:11.930] – Jason Pufahl
Welcome to CyberSound. I’m your host, Jason Pufahl. Joining me today, as always, are Steve Maresca and Matt Fusaro.
[00:00:20.510] – Jason Pufahl
We’ll talk a little bit, I think, about incident response and maybe even incident detection today, really focusing our discussion on that first 48 hours: you’ve had an event, you come into work, you have the telltale signs of an incident. Probably a good spot to start right there. What are those telltale signs? What causes people to call us, typically? How do they know they’ve even been attacked?
[00:00:44.270] – Stephen Maresca
I’d say that tends to start in confusion. There are a lot of miscellaneous reports that don’t really seem connected to one another, but widespread across an organization.
[00:00:54.470] – Matt Fusaro
Usually a downtime report.
[00:00:56.310] – Stephen Maresca
Yeah, exactly. Downtime. Somebody can’t log in, an application is slow. What do we do now?
[00:01:02.460] – Jason Pufahl
So you don’t think more specifically… I feel like there’s a lot of, “Hey, I see a note on my screen that wasn’t there yesterday.” Like that really obvious ransomware thing.
[00:01:12.490] – Stephen Maresca
It might be ransomware, but we’re starting generally here. Perhaps it’s in the preamble. You don’t necessarily know that you’ve been hit unless there’s something blinking red and screaming at you. That’s one thing.
[00:01:24.840] – Matt Fusaro
I think we’re talking more along the lines of people that don’t have, or maybe don’t have access to, an alert somewhere. They didn’t get an email, nobody called them. They’re just walking into work and something’s wrong.
[00:01:36.500] – Jason Pufahl
Right. Joe IT Guy just got back from vacation.
[00:01:40.150] – Matt Fusaro
Well, actually, I think in a way, we’re happy if Joe IT Guy is there. It’s probably even some user that just showed up early for work and has been trying to figure out who to call and what to do.
[00:01:49.830] – Stephen Maresca
Honestly, that’s a really good example. Multiple incidents have started precisely there. Somebody shows up, an administrative assistant needs to print something. The print tray has 400 copies of something odd, in another language, in it. This is strange.
[00:02:05.210] – Matt Fusaro
Right. Many small businesses don’t have that IT team behind them, or the security expert behind them, that is already telling them something’s wrong. These are the indicators that they get, these non-specific things that they see when they’re trying to use their applications. Like Steve said, you log into something and it doesn’t work. You can’t get to the internet. A lot of times, it’s the small indicators that reach the actual user.
[00:02:30.290] – Jason Pufahl
It sounds like really what you’re saying is if you see something, say something, to some degree. It’s tongue-in-cheek here. But, especially smaller organizations, it is, really. I see something that doesn’t feel right or my environment isn’t behaving as it did when I left work yesterday or whatever.
[00:02:50.550] – Stephen Maresca
I think it’s fair to say that if it’s an attack that took place over the course of a month or a week, if you look back in retrospect, there are almost always little strange signs that make sense after the attack has become more obvious and that could have been triggers for investigation; like something in the printer tray, like a slow system that you can’t log into, like somebody who has a really good handle on their password getting locked out. It shouldn’t happen. They shrug it off, they think it’s a typo, they move on, but it’s part of the preamble.
[00:03:25.330] – Matt Fusaro
I guess there is something to that “if you see something, say something.” There have been so many times, especially back in the day when I worked at an MSP on the help desk, we would find out about problems that had been going on for a year, and someone just decided that today was the day they’d put that ticket in.
[00:03:43.800] – Jason Pufahl
It almost becomes normal. It acts up every day and you’re like, “Well, that’s just the way it is.” Your point, actually, Steve, was a good one. My comment jumped straight to, “Hey, you’ve got a ransomware note on your screen.” A lot of times, that is the culmination of potentially multiple weeks or longer of effort by the threat actor. A lot of these things are just the telltale signs. Potentially, if you report them early enough, and frankly, if you can get people’s attention to do something about them, you might be able to ward off a more sophisticated, bigger-impact type of attack.
[00:04:20.720] – Stephen Maresca
Right. I would say that if your initial indicator is a ransom note, you’ve already missed a bunch of other signs that something’s amiss. It’s unfortunate, but the truth is that attacks do, on average, persist and dwell for around 90 days before actually being visible. The average is coming down; it used to be 180, something north of 200. The truth is that it’s getting faster and faster. But it’s still certainly the case that most attacks are multiple days long. There’s an opportunity-
[00:04:56.890] – Matt Fusaro
It’s only getting faster for those organizations that can actually detect these things. That’s the key. I think those dwell times are still pretty valid for small businesses, especially because they don’t have a SIEM. They might have an IDS or a good firewall, but I’m sure it was installed by either an MSP that they dealt with once, or their uncle or something like that, who said, “That Palo Alto is a good thing to get,” or, “That Fortinet is a good thing to get. You should put that in there.” Nobody’s watching it.
[00:05:28.440] – Jason Pufahl
That’s the key. If you’re not looking at the logs, it might stop a few things, but you’ve probably stopped addressing policy updates and you’re not paying attention to the log events that come out of it.
[00:05:42.640] – Stephen Maresca
Let’s talk more generally. At the beginning stages of an attack, the threat actor has access to the environment. They’ve gained a credential through phishing. Maybe that’s the first warning sign, you know, that people have received suspicious emails. You put it off. It’s constant background noise. You dealt with it. But it’s an indicator, at least if it seemed written for your organization, that you might be targeted in a way that’s different from your average drive-by phish. Thereafter, the threat actor tries to gain access, and then what? There are some preamble steps that are relatively consistent.
[00:06:22.350] – Matt Fusaro
Sometimes, especially today, there are a lot more people using multi-factor. It’s being pushed by all the insurance companies now, so I’d expect a lot of people have it on. You may even get multi-factor requests that you didn’t initiate. That’s a good one. Usually the app has somewhere to report the suspicious login; if not, tell whoever is in charge of IT in the business that you’re in. Because that’s probably the next step: you’ll get authentication attempts that don’t originate from your users. It’s really tough if you don’t have tools already staring at that stuff.
[00:06:59.050] – Jason Pufahl
A lot of times we see that after the fact. But you’re right about the two-factor prompts, two-factor becoming much more common. Insurance providers are pushing it. It’s a standard practice now. If you do get unexpected requests for that second factor, they should be reported immediately.
[00:07:17.120] – Matt Fusaro
Right. A lot of businesses are using things like 365 now because it’s got email and all the Office applications that they need. Most small businesses have this in place, and if you have multi-factor on, you’ll get that notification when it comes through.
[00:07:30.450] – Matt Fusaro
Otherwise, you’re going to have to have your auditing logs turned on and sent somewhere. That’s more challenging for a small business with no IT department.
[00:07:39.830] – Stephen Maresca
But I would assert that it’s certainly true that many small businesses, even if they have Azure or Office 365 MFA or something to that effect, probably have some infrastructure on-prem. It’s less well defended; protocols don’t marry well with MFA, something to that effect. You’ve gotten a couple of those odd MFA alerts, dealt with them, shrugged them off, figured something else was going on. Now they’ve tried to come in through a less protected system, a less protected service. They’ve gained access. Their initial step thereafter is, “Hey, what stuff can I touch? What privileges do I have?” That makes some noise.
[00:08:29.450] – Matt Fusaro
Again, you’ll see failed logins and things like that with lateral movement, and those will be tough. Again, a lot of this is, if you don’t have tools, you’re not going to see a lot of it. The things you may see are, “Hey, my account is locked out.” Or, “I’m pretty sure I remember my password. This doesn’t seem to be working.” You’ll get those requests a lot. Those are typically indicators that someone has tried lateral movement. They tried too many times, and now the account doesn’t work.
[00:08:55.590] – Stephen Maresca
Now they’ve succeeded. They’ve logged into a terminal services system. They’re trying to access some local app there. I’m another guy in the office. I see somebody who’s no longer working here who I guess still has an active account logged into that system. That’s odd.
[00:09:13.480] – Matt Fusaro
You’ll see that, you’ll see programs dropped onto the desktop sometimes that you don’t remember ever interacting with, things moved around, other things installed on the system. A lot of times, too, if they’re trying exploits, maybe they’re an unskilled person trying to get into a system, you’ll get crashes. They’ll try some things that are known to be unstable. They’ll crash the system. A good one is a lot of the RDP attacks that can actually blue-screen a system. If you’re suddenly getting system instability, that’s another indication. Go look, there might be something going on there.
[00:09:50.960] – Stephen Maresca
Or another, an echo of the same thing: your AV quarantine log suddenly fills up on a particular set of systems. That’s probably somebody screwing around, trying to find something to exploit or move elsewhere.
[00:10:03.400] – Matt Fusaro
That’s a big one. There are so many small businesses where users will get notifications in AV, and they just say, “I’ll take care of it.” Or, “It says it was blocked. It’s fine. It must be taken care of.” Sure, it might have caught that one thing, but there could have been several other attempts. You should probably report it. Or if you’re part of the IT team, it’s time to start investigating. How did it come in? Was it just a drive-by on a website or something like that? Or is it something more serious?
[00:10:32.580] – Jason Pufahl
I think that’s a really good segue. A lot of this is focused on an end user who identifies or notices something and thinks, “This is atypical.” They’ve reported it to an IT person who potentially now is overwhelmed. Especially in a smaller business, it might be one or two people who are dealing with every user complaining about something maybe somewhat different. All the examples that we saw could present themselves in the early stages of a typical ransomware attack. It’s really important that the IT person take these seriously and potentially reach out early for assistance.
[00:11:10.880] – Stephen Maresca
Right. We’re at an inflection point here. We’ve shifted to IT. They’ve been notified three or four different ways. Maybe they do have quarantine from AV that’s starting to produce a trail. My initial suggestion is to really review what’s been caught. It tells you what has been attempted. For example, if I’m an IT guy and I see that my AV log has produced indicators that I have password-stealing malware, that’s a real substantial escalation in severity. Because it means that an attacker is capable of taking password credentials, maybe obtaining those of an administrator and moving on. That means you know that they have bigger targets in mind, that they might have ransomware as the next step. At the very least, they’re going to compromise more infrastructure.
[00:12:01.810] – Jason Pufahl
We’re trying to spend some time here on how you know if you’ve been attacked. I think we have identified a whole variety of IT system anomalies that, when combined or when looked at as a whole, give you a better sense that something significant has happened. I think that’s what we see so often: these might trickle in potentially over 48 hours. It’s easy to say, “Man, Steve called me. He had a password issue. I dealt with that.” Now you’re quiet because you feel satisfied that it’s working. Then Matt calls because he’s got something crazy with the printer, and they could feel like isolated events to some degree. To the IT person it might be like, “Today was a pain. I just had a bunch of folks call me with random things.” You do want to think, as an IT person, a little bit more globally when you start to see these, and just balance them against what your normal day looks like. If it’s an unusual day, really try to look at them as a whole and say, is there anything here, any underlying issue that I might be able to address?
[00:13:05.520] – Jason Pufahl
Or any commonality that I need to pay attention to?
[00:13:09.990] – Matt Fusaro
One thing we used to always say when we were consulting for small businesses is: write this stuff down when it happens. Timeframes are really important. That’s one… It’s always hard to come into an incident or help a company with something like this when they have no idea when it actually happened. They say, “Sometime last week, somebody complained about something, and I don’t know.” What do you do with that? Even if it’s just you, write it down in a notebook. We don’t really care what it is. If you have a ticketing system, great. If not, but you’ve got it in the notebook, great. We know at least when something happened. Don’t brush it off. I think a lot of users aren’t going to report these things. They feel like they’re doing something wrong.
[00:13:57.140] – Matt Fusaro
Maybe getting a culture together of, “It’s okay if you report an issue like this to me.”
[00:14:03.570] – Jason Pufahl
We talked about this being maybe the first 48 hours. I’d say a predecessor to all of this would be have a tabletop in your company that discusses what you do in an incident. We’ve seen them constructed a whole variety of ways. It’s IT people who are going through a simulated exercise to talk about how they might respond. But it’s valuable, especially in smaller companies, to simply say, “If you see anomalies, here’s who you call, here’s what you can expect from a response. Here’s how you deal with these.” They don’t have to be full day, really complicated events. Sit around your conference room table, talk for an hour about what you should do if you feel you see something that’s anomalous. Those are really valuable exercises that can make this whole process, the detection and ultimately the resolution of these, a lot quicker.
[00:14:53.430] – Stephen Maresca
Right. Especially if you have, as an outcome, the development of an incident response plan of some sort. We talked about a variety of indicators that are, in isolation, not a big deal. But if they’re clustered, to your point, they have a chronology to them that makes them atypical. That tells you, in a plan or in a tabletop exercise, that you need to pivot to other sources of data.
[00:15:15.730] – Jason Pufahl
Your next steps.
[00:15:16.610] – Stephen Maresca
Network logs. “Hey, there are anomalous transfers of data out of the network.” Things of that nature. They really help to frame the rest of the attack if one is ongoing.
[00:15:29.270] – Jason Pufahl
In that first 48, you honestly may want to reach out to your cyber liability insurance carrier. Especially for smaller businesses, they may very well provide some resources or at least some guidance on how to approach this. I think one of the things that we see so often is that when incidents begin, nobody’s really sure what the correct next steps are. In my opinion, people often tend to wait a little too long, hoping things might resolve themselves, or that maybe they can deal with them on a case-by-case basis. A lot of times we see them just grow in complexity and ultimately turn into that bigger event.
[00:16:07.460] – Matt Fusaro
Or it’s the knee-jerk reaction and everything gets shut off. It’s worth noting: try to avoid that. If you’re feeling very nervous about something that’s happened, unplugging network cables is probably okay. We like to keep machines on if you can.
[00:16:24.380] – Jason Pufahl
Don’t turn them off.
[00:16:25.710] – Matt Fusaro
So that when you do have someone come in, there’s something for us to look at.
[00:16:30.800] – Jason Pufahl
We need evidence in order to be helpful after the fact.
[00:16:37.130] – Jason Pufahl
It’s interesting. It’s so hard to say, “Here are the five things that will present themselves in every IT incident,” because that’s simply not the reality. I think we see a lot of ransomware. I think the items that you’ve highlighted around credential misuse certainly are among them. Obviously, that ransomware note is a key telltale sign, encrypted files, things like that. But there are other cyber attacks that happen. There are some that exist for the pure intention of stealing data; some of them are disruptive. I think fundamentally, it really is: if you see something that’s anomalous, you report it to the person you feel is most appropriate, and then the person it gets reported to should really treat it seriously and pay attention to it. Because it really is the world we live in now. We see companies that would probably argue they don’t have data that’s attractive to an attacker fall victim to attacks all the time, because there’s value in just obtaining the data and potentially leveraging it for extortion purposes or publishing it, any number of things we see. Any last thoughts around something to detect, or something that you feel we’ve missed?
[00:17:54.670] – Stephen Maresca
I’d say that reports may not make sense, they may be unrelated, they might be coincidental, but if you help your future self out by taking good notes, as Matt said earlier, you just help improve outcomes if there is something that does transpire. Give yourself a leg up in the future.
[00:18:14.370] – Matt Fusaro
It’s a simple step, even if you just ask anyone that has an issue, “I know we talked about it at lunch, but can you send me an email just so I have it? So if I have to go back, I can do something about this.”
[00:18:29.430] – Jason Pufahl
I think, on that, it’s a pretty straightforward topic in a lot of ways. I think we’re simply advocating: do a little preparation ahead of time so people understand what they should do if they identify something, and don’t ignore what we would say are telltale signs. Frankly, I think these are things that everybody can recognize, really. Then react to them with a sense of urgency, to some degree, if you’re on that IT practitioner side. As always, thanks for joining us today. We hope you got some value out of this and it gets you thinking a little bit about incident response. If you’d like to continue the conversation, feel free to reach out to us on LinkedIn @Vancord, and we’re happy to field the questions and answer things going forward. Thanks, guys, for joining.
[00:19:09.630] – Jason Pufahl
Sure. Take care.
[00:19:12.490] – Speaker 1
Stay vigilant, stay resilient. This has been CyberSound.