Episode 143
Episode Transcript
Speaker 1 00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:11
Welcome to CyberSound. I’m your host, Jason Pufahl, joined today by our lead of the penetration testing practice, Dylan Marquis. Nice to be here.
So we’ve talked about this before, but what I find is we’re doing a lot more penetration test work. I feel like there’s always questions about, you know, really what are penetration tests? Let’s talk about the colors that everybody always refers to in pen testing.
You know, kind of what makes us different. So I think today is maybe a little bit of a Vancord service description, as much as anything else. But if you could spend a minute on what you consider a pen test versus a vulnerability assessment or some of the other things that we do?
Dylan Marquis 00:57
Sure. So with a penetration test, I think the key differentiator from a vulnerability assessment is exploitation. In a vulnerability assessment, you’re going through, identifying where vulnerabilities potentially exist, looking at risk.
But with a penetration test, the big differentiator is actually exploiting those vulnerabilities. And what it kind of produces as a result is provable risk. So you know that, in fact, that risk does exist because it’s been proven by the test.
So it’s kind of theoretical risk versus something that’s actually provable there by the assessment.
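The distinction Dylan draws here, theoretical risk from identification versus provable risk from exploitation, can be sketched as a simple data flow. Everything below is hypothetical illustration, not Vancord tooling: the banner strings, the `VULN-001` identifier, and the exploit outcomes are made up.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    host: str
    vuln_id: str
    proven: bool = False  # a vulnerability assessment stops before this ever flips


def vulnerability_assessment(banners, signatures):
    """Match service banners against known-vulnerable versions: theoretical risk."""
    return [Finding(host, vuln_id)
            for host, banner in banners.items()
            for sig, vuln_id in signatures.items()
            if sig in banner]


def penetration_test(findings, exploit_outcomes):
    """Attempt exploitation of each finding; keep only what was actually proven."""
    for f in findings:
        f.proven = exploit_outcomes.get((f.host, f.vuln_id), False)
    return [f for f in findings if f.proven]


# Hypothetical scan data.
banners = {"10.0.0.5": "OpenSSH_7.2p2", "10.0.0.9": "nginx/1.25.3"}
signatures = {"OpenSSH_7.2": "VULN-001"}  # made-up vulnerability identifier

theoretical = vulnerability_assessment(banners, signatures)
proven = penetration_test(theoretical, {("10.0.0.5", "VULN-001"): True})
```

The scan produces one theoretical finding; only after exploitation succeeds does it become provable risk, which is the extra step a penetration test adds.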
Jason Pufahl 01:34
So if you’re doing a pen test, I know we get this question all the time. Can we do them during business hours? What’s the potential impact to infrastructure?
Dylan Marquis 01:45
That’s a big question. Yeah, we definitely get that a lot. So absolutely, they can be done during business hours.
We recommend it just so you have staff on hand in case anything does occur. But we’ve tuned a lot of our processes to ensure that there’s minimal operational impact. We really don’t want to disrupt anyone.
Ideally, you don’t even notice this at all. But yeah, really, we have very, very few issues with testing, and it’s very low impact.
Jason Pufahl 02:11
The other thing I think, especially now in the day of AI, the big question that we’re always getting is automated or maybe even AI-driven versus sort of that human approach, which I think we tend to lean towards. So maybe spend a minute on kind of what the makeup of the team is and what the differentiator might be there. But then also, we really do go in with the idea of it being manually constructed and sort of tailored to a specific customer.
Why do we do that versus AI?
Dylan Marquis 02:43
Sure. So we do use automation, but mainly to identify where risk could exist and point us in the right direction. We don’t lean on it, though.
We always have an engineer actually performing the exploitation. That helps with the point we just talked about in terms of operational impact, ensuring a human being is in the loop and always performing any exploitation. But it also matters from an awareness standpoint: understanding what a network looks like and what the organization is like informs how the test proceeds and ensures we’re really tailoring the test for the customer, not just running the same tactics, techniques, and procedures, or TTPs, which is what automated and AI-driven testing does. Those tools are informed on certain tactics and they run them, but in a way that proceeds the same in every single test. With manual testing, we’re able to really get into an organization and understand it at a fundamental level, or as much as we can in the time allotted. That helps us produce some really actionable results and drive value for the customer.
And that’s really what we want to do with all of our assessments.
Jason Pufahl 03:55
So I want to back up for one second. So you said TTP.
Tell everybody what a TTP is.
Dylan Marquis 03:59
Tactic, technique, and procedure. It’s essentially a procedure for performing a certain type of attack. It can be broader or narrower than that, but it’s essentially how we’re exploiting vulnerabilities.
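For listeners who want to dig further: TTPs are commonly cataloged in the MITRE ATT&CK framework, where each technique has a stable ID. A minimal sketch of tagging test activity with those IDs; the technique IDs and names below are real ATT&CK entries, but the engagement log itself is a made-up example.

```python
# A few real MITRE ATT&CK technique IDs and their names.
ATTACK_TECHNIQUES = {
    "T1566": "Phishing",
    "T1110": "Brute Force",
    "T1003": "OS Credential Dumping",
}


def label_activity(log):
    """Attach human-readable technique names to raw technique IDs from a test log."""
    return [(entry, ATTACK_TECHNIQUES.get(entry, "unknown")) for entry in log]


# Hypothetical engagement log: which techniques were exercised, in order.
engagement = ["T1566", "T1110", "T1003"]
labeled = label_activity(engagement)
```

Mapping activity to a shared catalog like this is what lets a report say precisely which TTPs were exercised rather than describing attacks loosely.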
Jason Pufahl 04:15
Okay.
So, how specific or how tailored a test do you define for a particular client? I mean, I’m assuming you’re using some of the same techniques right off the shelf, but then people typically come with specific requirements for these tests. So how do you have that conversation?
Dylan Marquis 04:35
So, I mean, we always tailor it as much as we can just by the way we proceed with the test. But I’d say how tailored a test is to the client, one is the desired outcomes from the client. So we always sit down beforehand.
We have a full kickoff meeting. We kind of talk about what a client’s looking for, also in pre-sales as well. But we want to make sure we’re meeting those goals, because if the outcomes are not helpful for the client, then it’s really not valuable.
And that’s what we’re looking to do, again: drive value with these assessments. The other thing is, it depends on the security maturity of the organization as well. An organization receiving their first pen test might get more off-the-shelf tactics, because those are where we’re succeeding the most. Whereas, as we dive in year after year, we get to know an organization, weed out the simpler tactics, and really dig in with some custom tactics.
Jason Pufahl 05:32
So that’s actually a good segue into people do these annually. And I know there’s clients that want to rotate test vendors for sure. But then we have a whole ton that come back.
And then you adjust that test for subsequent tests, because you assume, hopefully, that they made some of the improvements we recommended for the problems we found.
Dylan Marquis 05:55
Absolutely. A lot of clients come in and say that they like to rotate vendors. We like to take that as a little bit of a challenge and hope that they come back.
And often they do. I’ll say we’ve had a few clients change their stance on that. But I would say, you mentioned about our team makeup.
I mean, we’re always learning. We have team members from different backgrounds. So we do try to always progress our tactics year after year.
So that we’re not just kind of falling back on the same playbook over and over again. And we’re always kind of trying to learn, see what the industry does. But exactly like you said, as the organization progresses, we want to progress our tactics as well and make sure that we’re really tailoring what we’re doing to suit their needs.
Jason Pufahl 06:37
And you’ve stressed having your team get OSCP certified. Yes. And I want to bring that up for a minute because it’s pretty rigorous.
So maybe spend a little bit of time on it. What is the OSCP, what’s the timeframe to actually get it, and what are their qualifications coming out of it?
Dylan Marquis 06:53
Yeah. So it’s usually about a year. So with the course and the test, it usually takes about a year.
It can take longer. It’s very rigorous. It’s offered through a company called Offensive Security.
They make some of our tooling, including open-source tooling, and the coursework is one side of their business.
But it’s very rigorous and it does take, there’s quite a lot of training. And really the most rigorous part is the test. The testing is over a 48-hour period with 24 being kind of hands-on lab work.
Jason Pufahl 07:23
The test itself takes 48 hours.
Dylan Marquis 07:24
Yes.
Jason Pufahl 07:25
You have 48 hours to do it.
Dylan Marquis 07:26
Yep. 24 for the testing, and then you have to write a report.

Jason Pufahl
Can I put you on the spot and ask you how long it took you?

Dylan Marquis
It took me 48 hours. I think I might’ve stopped a few hours shy, but I took some breaks in there. I went till two or three in the morning on the testing day; I was going after some extra points. And let me tell you, I was a little scared through it. It is definitely a daunting test.
Jason Pufahl 07:53
Okay. So I joked at the beginning a little bit about the colors. So, and we’ll get to that.
If we can, let’s spend a minute on the types of tests that we typically perform and the most common things that we’re being asked for.
Dylan Marquis 08:07
Yeah. So by far the most common are the internal and external penetration tests; sometimes they’re called network tests. External is testing from the outside: seeing what an attacker can get from an unauthenticated position on the internet, whether they could actually pivot into your internal network and then take action from that standpoint.
Internal is assuming sort of a compromise occurred of some type. Usually we kind of take the standpoint. Now we work with the client on what their objectives are there.
But typically it’s an attacker that’s landed on a Linux-based device or a workstation and is looking into the network in an attempt to compromise Active Directory or whatever main directory system they have. Again, depending on goals.
Jason Pufahl 08:52
Does the client typically provide any insight into their environment, or do you go in blind? How do you decide that?
Dylan Marquis 09:03
Typically, we go in blind, with a scope. They’ll give us an outline of what targets are in scope and what would be out of bounds, just to make sure that things like fragile devices aren’t touched, or get only a light touch. But generally we’re given very little information. We generally don’t like to ask for credentials; we like to get those ourselves if possible. If we’re spinning our wheels and can’t progress, though, then we’ll ask for the next step, so that we’re giving as much value in an assessment as possible. So typically it’s no knowledge, but it really depends on the assessment.
Some clients like to give us architectural diagrams, networking diagrams, as much information as possible. It all depends on sort of the intent of what is important to an organization and what they’re trying to kind of uncover in terms of understanding the risk.
Jason Pufahl 09:51
So internal, external, application. Maybe specifically web app, or is it applications in general?
Dylan Marquis 09:59
We do web app. We can do things like mobile testing, but I’d say a vast majority of our application testing is web application testing.
Jason Pufahl 10:07
And what makes us unique for that, if anything?
Dylan Marquis 10:13
We have a lot of experience in it. All the exploits we use are tailored to the specific application. That’s one area where, sure, there’s AI and automation that can perform some enumeration and such, but when you’re really diving into a web application, it’s incredibly helpful to have a human engineer looking at how the application is intended to work and then attempting to subvert that intent. It can help uncover things. We specialize, not just on the application side but also on the network side, in identity-based attacks. Uncovering unintended authorization is definitely something that we do very well in web application testing.
Jason Pufahl 10:58
Okay.
How about wireless? We’re seeing a lot more of, say, the CMMC work that we do and that concern around wireless and the signal spreading. What kind of wireless testing do you do?
Dylan Marquis 11:11
So for wireless, our main objective is to test the security controls around a wireless network. You know, what type of encryption is used? Is there any way to circumvent that?
With things like WPA and WEP, there are certain attacks that attackers can use. So we essentially will carry out those attacks. We can attempt social engineering.
There are certain types of social engineering that are typical for wireless-based tactics. We go through all of those and see: are there any vulnerabilities or gaps in the security of the wireless network that can be exploited? Furthermore, once we’re on the network, we can see whether there’s anything we can use to bridge the gap, get out of the wireless segment, and move deeper into the network.
But that’s also, there’s some specialized tests that we can perform as well.
Jason Pufahl 12:00
Do you need to be on or near the site for that, or are you able to do those remotely?
Dylan Marquis 12:04
We do it remotely. Yeah, we send a device that has an antenna connected and then we perform it from there. Okay.
Jason Pufahl 12:11
And is that usually combined with the internal and external testing, or is it separate more often than not?
Dylan Marquis 12:17
It’s separate more often than not, but it absolutely can be combined. We also have things like wireless segmentation testing, where people are interested in a specific guest network: can that touch my main network? And if so, is there anything leverageable in that network, or how could you potentially break segmentation? That’s definitely something of interest that we get on internal and external assessments.
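At its simplest, the guest-to-main segmentation question Dylan mentions reduces to connectivity probing: from the guest segment, can you even complete a TCP handshake to anything internal? A minimal sketch, with the caveat that real segmentation testing covers far more than TCP connects, and the target list here is purely hypothetical.

```python
import socket


def can_reach(host, port, timeout=2.0):
    """Attempt a TCP connection; success means this segment can reach the target."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def segmentation_check(targets):
    """Run from the guest segment: any reachable (host, port) pair is a
    potential segmentation gap worth investigating further."""
    return [(host, port) for host, port in targets if can_reach(host, port)]
```

In practice you’d run something like this from a device on the guest network against a sample of internal addresses. Reachability alone doesn’t prove exploitability; that follow-up is where the manual testing picks up.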
Jason Pufahl 12:42
I mean, I would think it’s just a connection to your network or arguably probably one of the main pieces of your network.
Dylan Marquis 12:47
Yeah, absolutely. And it’s becoming more and more relevant. In the security world, there’s definitely some research, and some threat actors have exploited wireless networks for some pretty interesting things, jumping from organization to organization. So it’s pretty fascinating.
Jason Pufahl 13:03
All right. Then I think I do want to ask about the colors, right? So you got your red team and purple team.
Those are two that jump to mind. I think there’s others that maybe we don’t do as much of, but purple team comes up all the time. So if you could spend a minute on that.
Dylan Marquis 13:19
Purple team is quickly becoming one of our most popular offerings. So with that, it’s similar to a penetration test. In some regards, we still perform some penetration testing to make sure that orgs are meeting their compliance requirements.
But really, the biggest differentiator there is collaboration. We always try to collaborate as much as possible in all of our assessments, but in a purple team, it’s a core facet of the service. We’re working directly with the blue team, the defensive elements of the org, which is the reason for the color analogy.
So I mean, we’ve had desktop support administrators come in. We’ve had sysadmins. We’ve had IT security staff.
So really anyone that has a stake in defending the org, and that’s most of the IT team, is welcome to join. We show them some of our tactics and give them the lay of the land on offensive security tradecraft; everyone’s more or less interested in that, and we dive in as deep as the organization’s individuals are willing to go. But they also help us uncover the dusty corners of the org, what keeps them up at night, and we help them understand their risk in a very direct way.
So we can sit there and work through things. They can peel away layers of security. So that way we can really dive in.
On a penetration test, we don’t always understand what we’re necessarily looking at or the function of a component. They can provide that context. We can talk through it.
And then often that kind of brings up things that maybe they never thought of by themselves. And it helps us ask questions and drive the engagement forward. Additionally, there’s an entire detection engineering component to Purple Teams as well.
So that’s really, I’d say, the main goal of the assessment: we’re helping an org tune its EDR and its SIEM. That’s the piece I tend to think about more. It’s the most formalized piece and the biggest differentiator if you look at how purple teams are performed across the board.
We absolutely focus on that. We can go through a scenario and see where we’re detected along the way. We like to organically perform attacks, link them up as an attack path deeper into the org, and again see where those points of detection are. Additionally, we can do atomic tests, trying out different tactics to see exactly where each one is detected, and make recommendations for improvement. But I’d say the collaboration is really what everyone loves about it, and that’s why we’re seeing the big draw, more than just the pure detection engineering.
I think everyone, the detection engineering is added value, but the collaboration is really the heart of it.
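The detection-engineering loop Dylan describes, running tactics and checking which ones the EDR or SIEM actually flagged, boils down to a coverage calculation. A minimal sketch; the alert format below is invented for illustration, since every SIEM has its own schema.

```python
def detection_coverage(executed, alerts):
    """Compare technique IDs executed during a purple team exercise with the
    technique IDs the SIEM alerted on; report coverage and the gaps."""
    detected = {alert["technique"] for alert in alerts}
    missed = [t for t in executed if t not in detected]
    coverage = 1.0 if not executed else 1.0 - len(missed) / len(executed)
    return coverage, missed


# Hypothetical exercise: three techniques run, two alerted on.
executed = ["T1110", "T1003", "T1021"]
alerts = [{"technique": "T1110"}, {"technique": "T1003"}]
coverage, missed = detection_coverage(executed, alerts)
```

The `missed` list is what drives the recommendations for improvement: each entry is a tactic that ran without tripping a detection, so it becomes a candidate for a new rule or tuning change.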
Jason Pufahl 16:16
I think you’ve tried to interleave collaboration into all of your tests anyway. I mean, I know that if you find critical issues, call them serious, critical, whatever, you typically communicate them ahead of the report.
Dylan Marquis 16:31
Oh yeah, 100%. Anything that’s urgent, we’ll immediately communicate with, you know, with the point of contact and any necessary stakeholders. We can have meetings, kind of whatever’s required for that situation.
But yeah, and we also just kind of ad hoc like to communicate as much as possible. If there’s members of the org that are interested in what we’re doing, we’re always happy to kind of sit down and explain it.
Jason Pufahl 16:53
Yeah, I mean, it’s fun to have somebody who cares about what you’re doing. Absolutely. Yeah, it makes it better.
So reporting, yeah, you certainly, you’ll have your ongoing conversations if something comes up during a test, but then you’re producing a pretty comprehensive report. I think tailored to each client or the objective of the client. If you could spend a minute on that, I think that’d be great.
Dylan Marquis 17:13
Yeah. So with reporting, my main push is always that we’re not just producing tool output. It’s not something that comes out of what we’re doing and gets thrown haphazardly into a report. It’s all handwritten. We really try to make things as clear as possible: a security engineer describes in detail what the finding actually is, and our recommendations for remediation are tailored so they can be understood and acted on by the org. We have narratives in the executive summary so that the C-level or board can get a high-level picture. And if we do find a path deeper toward an objective, we have narratives that explain what that looks like and highlight how an attacker moves through a system. It’s not just a bunch of disjointed findings trying to paint a picture. We want to paint that picture accurately: how vulnerabilities and exploits are linked together to form a chain deeper into a network.
Jason Pufahl 18:22
So we work with a pretty wide range of clients, right? Some of them have dedicated IT teams. Some of them don’t have any.
Do you find that the recommendations you make are broadly understood? And how often are we involved in helping implement some of those changes?
Dylan Marquis 18:40
So I would say we absolutely try to tailor our delivery for the client. Like you said, we have all different types of clients with all different reasons for getting penetration tests and different levels of understanding about those outcomes. We have our infrastructure MSSP side of the house, and sometimes they do have to come in and help with the outcomes of an assessment. Often, though, the orgs we’re dealing with have dedicated teams, and then we can get really technical and work with them on practical ways of remediating and mitigating some of these findings.
Jason Pufahl 19:27
So maybe the intersection of pen testing and some of our other services, I’m interested in exploring that for a second. Steve, who is obviously a huge part of the podcast, oversees the VISO product. And I know a lot of the VISO clients actually contract with us to do pen testing. At first I found that interesting, because part of me thought they might want to look at a different vendor, since essentially we’re providing that strategic and, in some cases, some of the technical implementation work. Turning to us to do the pen testing, you can see some of the challenges there. And yet fully half, if not more, of our VISO clients use us for pen testing. So how do you manage some of that internal communication, and how do you ensure there’s no conflict of interest and that we’re actually providing that impartial perspective?
Dylan Marquis 20:22
Yeah, and that’s definitely something that comes up. Absolutely. We always want to provide objective tests. No matter what the situation is, we want to produce the most value for the customer, and it really doesn’t matter who the client is; we’re going to treat them the same regardless. I think we navigate any conflict of interest extremely well. Obviously, clients can still have concerns, and if a client specifically has concerns, we limit any information sharing with the VISO. We’re not asking the VISO for additional information about an org that we don’t need, or about something we’re going to test, so it doesn’t bias the test at all. But I’d say the relationship is extremely helpful after the test, in terms of handling outcomes over the long term. That’s where it really makes a lot of sense, because the offensive team is fully accessible through the VISO service. That allows clients to come back and ask follow-up questions. Sometimes outcomes are highly complicated in terms of remediation and potential downstream impacts, and we can speak to all of those, work with them, and provide assistance directly through VISO.
Jason Pufahl 21:52
So there is definitely value in that collaboration between VISO and pen testing. And certainly some clients want to see separation. You know, but it has been an interesting thing for me to see how many are comfortable sort of, you know, call it staying within the Vancord ecosystem to deliver some of these.
And I do think there’s value to them long-term for that. I guess at the, you know, at the tail end here, anything that we didn’t touch on that you feel like, you know, is really important to bring up relative to your service?
Dylan Marquis 22:23
I think we touched on a lot of the facets of it. I mean, I’d just say, you know, as I said, kind of over and over again during this, you know, value is what we’re trying to provide for the clients. So we’re always trying to evolve all of these services to provide as much value as possible for the client’s dollar.
We just love doing that. We want to make sure that what a client is receiving, the outcomes of these assessments, is actionable and actually helps them justify standing down equipment, decommissioning things, fixing certain items. That’s really what we’re there to achieve.
And so we always want to kind of provide, add in extra value. And we’re always looking for ways to help be as clear as possible in our delivery. So the client understands the outcomes.
We want to make sure that they’re getting, you know, the most that we can provide in terms of information and kind of clarity. So yeah, we’re always trying to kind of evolve that to help the client as much as possible.
Jason Pufahl 23:22
Well, Dylan, thanks for joining. Thank you. It is probably our fastest growing service.
You know, totally attributable to you and the way you’ve organized it. And certainly that we’re seeing, you know, compliance drive some of this stuff. Absolutely.
And I think our product is great. We’re going to continue to have the AI-versus-no-AI conversation, probably internally, right? And with clients.
Dylan Marquis 23:46
We’re always going to continue to look at what’s most helpful. But we always want that edge that gives us additional room for creative thought and the human touch, and we’ll use whatever tools we can to, again, get the best value for the client.
Jason Pufahl 24:06
Great. Well, if anybody has questions about pen testing, you know, of course, we’re always happy to talk more about it. Certainly we’re seeing growth in that space.
You know, candidly, part of the reason we’re doing this is to make sure people know what we do and how we differentiate. And frankly, what some of the types of tests are, because I don’t think everybody understands that. So we’ll answer anything.
Feel free to reach out and we’re happy to chat more. Thanks, everybody.
Dylan Marquis 24:27
Thanks.
Speaker 1 24:28
We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn, and remember, stay vigilant, stay resilient. This has been CyberSound.