Episode Transcript
[00:00:06] Tom: Welcome to the DevSecOps podcast, where we explore the past, present and future of computing in the modern workplace.
[00:00:12] Scotti: Your hosts are a trio of experts from Cordant, each representing different areas within IT. A bit like a nerdy A-Team.
[00:00:19] James: So join Tom, James and Scotti for a regular, mostly serious podcast providing you with pragmatic advice and insights into modernizing your IT environment.
Imagine a beautiful autumn morning. Crisp, clean air, clear blue sky. You just picked up a really good coffee on your way to the office. Your morning's kicked off with a team stand-up that was actually engaging and informative. You know it's going to be a great day. But then 10:43 rolls around, and your entire cloud footprint has just gone off the air. You have no visibility into your assets, your SIEM is inaccessible, your company website's offline, and your mobile app that thousands of customers depend on every day is refusing logins.
Have you just been targeted by a malicious nation state actor? Or did somebody simply trip over a power cable?
Hi, my name's James. With me as usual, I have my good friends Tom and Scott. Welcome to the fun and not so far fetched world of the modern threat landscape.
Now, that introductory anecdote may have seemed slightly amusing on the surface, but I've actually worked in an organization where somebody did accidentally unplug a server rack and caused all sorts of chaos.
And you have this moment where you're like, what just happened? And it's sort of terrifying, right? The reality is that cyber incidents can have genuinely catastrophic impacts to modern corporations. And we're seeing this not just locally, but globally as well. Scotty, I'm interested in your point of view. Help us out on this, right, Cyber. It's just so complex. There are so many moving parts.
What are the most significant cybersecurity threats organizations are facing at the moment? Where are they coming from?
[00:02:08] Scotti: Thanks, James. Good to talk to you again. Hi, Tom, how's it going?
[00:02:11] Tom: Very well, thanks. Guilty as charged. I have tripped over the power cord once or twice in my career, so.
[00:02:17] Scotti: It's funny, James, as you're reliving that moment, I'm also reliving a few moments... Tom, you might remember, there was a famous incident where someone went to a data center and just powered off all the racks as well. And then we wondered why things stopped working.
[00:02:30] Tom: Yeah, turns out in a virtualized environment, the labels that are stuck onto physical hosts don't necessarily correlate with the virtual machines running on those hosts.
[00:02:37] Scotti: Yeah, well, loose power cables are definitely one. People just going to data centers and switching things off without checking what's running on them is definitely another. Over my 15 years, I've seen a few notable incidents, some fairly recently, which I think we'll talk about today. I think more generally, when organizations are looking at what their risks are, the first thing is they don't actually know what the risks are. And that seems like a really obvious answer. But often if you go into an organization and you say, okay, tell me what you think your risks and your threats are, they can't tell you. I actually think a lot of organizations have failed to plan, or failed to even understand that they need to plan, for the eventuality that something like you just described, James, is actually going to happen. Beyond that, though, I would say that a lot of organizations just don't focus on foundations.
[00:03:26] James: Hey, Scott, with what you were touching on, I think there's a really interesting thing for us to explore there, which is the notion of, okay, if they haven't done that foundational work, how much of this is actually unique to their specific business and how much of this is just well-known good practice?
[00:03:41] Scotti: Yeah. As far as what is foundational, I think having a solid foundation is the starting point. It's the hallmark of any good security program. If you look at it, organizations nine times out of ten fall into the playing-catch-up-over-strategy camp, which means they haven't really thought about what they're actually trying to achieve from a cybersecurity program. All they're constantly doing is dealing with issues as they come up. To your point, James, it sounds like there was no DR or BCP plan in place. There was no automated failover. And availability is a key pillar in cybersecurity. But if I look more generally, beyond just having a solid foundation, at what I've been seeing over the last 12 to 18 months, probably the number one is actually targeting people. If we look at cyber over the last 20 or 30 years, we see a bit of a revolving door around what attackers will actually focus on. Initially, it was the network, because the network was largely unsecured. Things were just exposed to the Internet. As firewalls got better and things like NAT were actually introduced, it became harder to compromise things on the network. There were fewer services exposed to the Internet, whereas in the early days you'd plug a computer in, you'd get a public IP address, and everything was accessible on every port. So you'd see a move towards applications, so software. And then software got better. We had secure development standards, secure system design standards. So then people started targeting the users. I still see this as one of the areas that organizations are very weak in, specifically when we're looking at scams, business email scams, fake AI, deepfakes. Almost to the point where it's very, very hard for cyber teams to keep up, if we're talking about playing catch-up over strategy.
The key area that I see organizations really needing to get right from the get-go is how they actually secure, and let's be honest, a lot of organizations use Microsoft Azure, Azure AD, or what they now call Entra ID. One of the things I see time and time again is cyber teams being burdened with risky user sign-ins. And those typically come from MFA phishing kits. So those are phishing websites that look like Microsoft 365 login pages, but what they're actually doing is proxying user traffic to the origin. So it's sitting in front of a Microsoft Entra ID or an Office 365 login. The user is prompted for their MFA token, and that's proxied as well. Then you start seeing non-interactive sign-ins, and this is something where you can have like 30 or 40 of these a day if you're an average-size organization and you haven't adequately trained your workforce on how to spot them, how to report them, and also how to automate the remediation of those accounts. But more broadly, beyond users: supply chains, a key area of interest for attackers. Those are things that you rely on to deliver your services and build applications, so DNS, JavaScript libraries. One of the key supply chain attacks recently was Polyfill, a JavaScript library. The domain name had actually expired. It was then appropriated by an overseas organization that potentially had ties to nation-state threat actors, and then they started delivering malware. So essentially about a third of all the websites on the Internet were loading this malicious JavaScript. Once again, a supply chain attack that you don't really have visibility into. And to be fair, most people ignore JavaScript libraries in web applications as well.
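To put a shape on the automated remediation Scotti mentions, here's a minimal, purely illustrative Python sketch of triaging risky sign-ins. The record fields and the triage rule are invented for the example; real Entra ID risk events carry far more signal than this, and a production playbook would act through the vendor's APIs rather than on an in-memory list.

```python
from dataclasses import dataclass

# Hypothetical, simplified sign-in record for illustration only.
@dataclass
class SignIn:
    user: str
    interactive: bool
    mfa_satisfied: bool
    known_device: bool

def needs_remediation(event: SignIn) -> bool:
    """Flag non-interactive sign-ins from unknown devices: the trace an
    MFA-proxying phishing kit tends to leave is a stolen session token
    replayed without fresh interaction from the real user."""
    return (not event.interactive) and (not event.known_device)

def triage(events):
    """Return users whose accounts should be auto-remediated (e.g. revoke
    sessions, force a credential reset) rather than queued for a human."""
    return sorted({e.user for e in events if needs_remediation(e)})
```

The point is the shape of the pipeline: classify mechanically, auto-remediate the high-confidence pattern, and only queue the ambiguous remainder for humans.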
[00:07:16] James: Scott, I'd love to pick up on this notion of the people aspect that you touched on there. Right. Because I think this is extremely important. I've got a friend who does a lot of work in cyber for government, and he was recently telling me that from his point of view, identity and access management is absolutely the hardest problem to solve at large scale in cyber. And along with that, you mentioned the notion of educating the people as well. And it's a funny thing in this industry, right, because from the end user perspective, what we see a lot of organizations investing in is sometimes fairly corny phishing training. You know, it's like, I'm going to do yet another round of phishing training, should I click on the email attachment or not? Yet the reality is that when people do the wrong things, these breaches are becoming so expensive for organizations to deal with. The average cost of a breach now, or even data loss, is hitting the millions of dollars in terms of impact to organizations. And in the US specifically, where things are a bit more litigious, it can be double what we see at a sort of global average. I'm sort of interested to know, is this identity thing something that we need to be focusing on more? Is people education something we focus on more? I come from a sort of a cloud automation and engineering background where the number one mantra is get the people the hell out of the system, because the people are where all the problems come from. And we use a lot of code automation, we use a lot of machine-level accounts. But have we got the same challenges, perhaps, managing machine-level accounts and service principals as we do with humans?
[00:08:45] Scotti: That's a very good question. I'm going to answer the people element first and then we'll talk about the service principal aspect. I think from a people point of view, it really depends on the type of organization that you are. Some organizations, let's take a SaaS company as an example, are very different to a retail organization. If we look at the SaaS organization, they're going to have a workforce that is primarily made up of engineers, people that primarily work in IT. Let's compare that to a retail organization, where the majority of your workforce are potentially bricks-and-mortar staff. These are people that are working in stores, that are dealing with the public, that potentially come from a whole range of backgrounds. In fact, we work with a fair few retail organizations, and they primarily spend most of their time focusing security education and training on their IT workers, when in fact they should be focusing on their frontline retail staff. Once again, bringing it back to thinking about the type of organization you are and what your risks and your threats are: if you're a retail organization, chances are you haven't thought about training your staff on all of the risks. I'm aware of an example recently where people have been walking into retail stores pretending to be from American Express or from EFTPOS or from Ingenico, the people that actually make the EFTPOS terminals, wanting to get access to the terminal, claiming that they're a support employee from this third-party organization. These are all things that we wouldn't put on our bingo card, shall we say. But I guess this is one of these things. James, I'm keen to hear a little bit more: you talk about getting the people out of the system. How is it that you actually do that?
[00:10:24] James: Certainly from the sort of technologies that I've been working with, a lot of it comes down to this notion of automation. And we speak a lot about where the issues come from with insiders, and how much of this is actual malicious activity versus just accidental misconfiguration. A massive amount of it is simply accidental misconfiguration. And one of the things that we realized very early on in the cloud movement is that when you've got people doing things, particularly taking actions on production-level systems, let's say you're trying to do some kind of failure or outage fix and you're trying to rapidly reconfigure something. You say, oh, I know what that is, that's just the Java heap running out of memory. I'll resize the instance, and while I'm at it, I'll reconfigure the stack and give it a bit more memory. Are you actually introducing some kind of challenges? Have you tested that the Java stack can actually operate with that new level of memory? Are you potentially introducing avenues for buffer overflow? All these sorts of strange and crazy things arise. But if you look more broadly at the notion of just making sure that your systems are resilient to failure and can handle the fact that somebody might misconfigure something and it could fall over, I think there's a huge challenge around saying, well, that's just an education problem, you need to make sure your engineers actually understand what they're doing, versus accepting that humans are fallible. Right. Often when IT outages occur, it's the classic thing.
[00:11:47] Scotti: Right.
[00:11:48] James: It's 3:00 in the morning and you're on a business-critical system and you're trying to get it back online. And you know, maybe some people do their best work at 3:00 in the morning under extreme pressure. I'm sure those people are out there and exist, but the majority of us don't. And so one of the big reasons that we wanted to get the human factor out of things when it came to migrating systems to cloud was the notion that we can actually use code to do a lot of the things that humans were traditionally doing in terms of configuration. When something's in code, it's very simple for us to validate, test and actually go back and say, what was that value again? And if we version-controlled that code, we know what the value was yesterday versus the value today. So I think one of the big factors in focusing on the removal of humans is that the tooling has evolved in order to allow us to do that. And the more you view your infrastructure as, in the classic terminology, cattle as opposed to pets, the less we try to say, I'm really interested in this specific instance and I want to make sure I do everything I can to keep this instance alive. If we actually abstract things and we make it all about code and being able to redeploy systems, we don't worry so much about having to do fail-and-fix in real time at 3 o'clock in the morning. We simply say, that instance is showing some strange behavior, I'll just terminate it and spin up another one. And in fact we can even make it so the systems are self-healing, so that they do that themselves. They say, hey, something's going awry with this particular instance, why don't I knock it out and spin up a new one? And so I think this notion has been around for quite a while in cloud and in IT automation: as much as possible, we want to remove the human operators.
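The self-healing loop James describes, terminate the odd-behaving instance and replace it rather than nurse it, can be sketched in a few lines. This is a toy in-memory model with made-up names; a real reconciler would call a cloud provider's APIs and health checks instead of mutating a Python list.

```python
import itertools

_ids = itertools.count(1)  # illustrative instance ID generator

def new_instance():
    """Stand-in for launching a fresh instance from a known-good template."""
    return {"id": f"i-{next(_ids):04d}", "healthy": True}

def reconcile(fleet, desired_size):
    """Cattle, not pets: drop anything unhealthy, then top the fleet
    back up to the desired size with freshly provisioned instances."""
    survivors = [inst for inst in fleet if inst["healthy"]]
    while len(survivors) < desired_size:
        survivors.append(new_instance())
    return survivors
```

Run on a schedule (or triggered by health-check alarms), this is the whole design choice: recovery becomes a routine replace operation rather than a 3am diagnosis session.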
But I think it comes back more broadly then to this notion of what are the other aspects by which we protect organizations. And it's not just when we look at cloud, and it's not just through the lens of cyber. I think there's always been a fairly large focus and emphasis in the IT industry on how do we actually look after business-critical assets. And you know, Tom, you would have seen this a lot in your time as well, working across a lot of large organizations. One of the things most deeply understood, I think, in our industry has always been: what's availability versus disaster recovery? What's an RTO and an RPO? And who looks after business continuity planning anyway? I mean, are there sort of things that you like to probe in that area when you're talking to clients? What do you look for in a good business continuity plan, for example?
[00:14:13] Tom: Yeah, look, thanks, James. I just want to touch on something quickly there, just to reiterate your point around automation. Automation is not something that the cloud has brought about. We've had the need to take out that aspect of human error for a very long time. Just recently, working with a customer, we were rolling out a new antivirus solution there, and the number of machines that actually didn't have endpoint protection on them was surprising, because the installation of that endpoint protection was a manual process as part of the system onboarding. If that's something either automatically audited or automatically scripted as part of the deployment, then that takes that human error aspect out of it. I think we're also seeing more sophistication in the attacks, which actually results in more data being accessed in the event of a breach, but the fundamental reason access is gained to these systems is as old as IT itself: it's due to poor preparation and poor processes. Back to your question about BCP. Yeah, look, the first thing I like to suss out within organisations is whether they really do understand the concepts of disaster recovery and availability. They're actually two terms that are very easily confounded and confused and almost interchanged. And when you start talking about service level agreements and the like with third parties, it becomes even more confusing. So I think just that general awareness is the first key, making sure you're all singing from the same hymn book. And when you're talking about availability within, for instance, a data center or a cloud region, that is different to disaster recovery, where you might potentially lose an entire data center or an entire region from your cloud provider, and the processes that go in place when that happens.
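Tom's endpoint-protection gap is the classic case for an automated audit: diff the asset inventory against the machines the security console actually sees, so manual onboarding gaps surface on their own. A minimal sketch, with illustrative inputs:

```python
def missing_endpoint_protection(inventory, protected_hosts):
    """Return the inventoried hosts that have no endpoint-protection
    agent reporting in. Hostname comparison is case-insensitive, since
    inventory and console often disagree on casing."""
    protected = {h.lower() for h in protected_hosts}
    return sorted(h for h in inventory if h.lower() not in protected)
```

In practice `inventory` would come from an asset database or directory and `protected_hosts` from the AV console's API; scheduling this diff is what turns a one-off manual check into a control.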
Quite often HA, or high availability, is these days, when designed right, an automated process. Disaster recovery, although done right and to a particular level of sophistication it can be completely automated and, you know, seamless, will almost invariably have process aspects to it. So really, primarily, it's understanding: does the organization understand what DR is and how it differs from HA? Have we got a clear understanding of our non-functional requirements for our IT assets and, moving up the stack, our IT systems and how they present to our customers, for instance? And then working through to how do we actually validate and test that the processes and procedures that we have in place during a disaster, our business continuity procedures, actually work in the event that that disaster strikes. And again, that comes back to our point around the 3am what-do-we-do. Is it panic? Is it everyone running around like the world's on fire, Chicken Little, the sky is falling? Which, in this case, it may actually be falling. But the sky falling isn't a problem if you've got the right processes and plans in place.
[00:17:02] James: It's interesting, isn't it? It's one thing to know that you have an outage and that something's wrong. But one thing I typically see in organizations is this call on, you know, do we invoke a DR plan? Or, even more significantly, a business continuity plan. Right. Typically, this notion of: I've got customers I need to serve, but my IT systems are offline, how do I keep my business viable? They are invariably human calls at some level, right, around when we actually choose to trigger these things. And I think this comes somewhat back to the notion that we started with this morning, which is the question of: do I even actually understand what's going on when my system's out? How confident can I be that if I invoke a DR plan and I maybe spin up my instance in another data center, is that actually going to solve my problem? Or am I going to encounter the exact same behavioral challenge? And this is the really big one that I find fascinating and deeply, deeply challenging in a lot of organizations: being confident that we've got the right level of observability, both over our technical assets, our security posture, and also, I think, the fundamental business metrics. And we've touched on this notion before, right? How do we use IT as a way of measuring the correct things on behalf of a business? So, are we getting the right number of transactions per second through our platform? Can these things tell us something and inform us that something's right or wrong about the way our system is operating?
[00:18:30] Tom: It's interesting that you touch on how we test DR. Because something that I find consistent with a lot of organizations, when we say we've got to run a DR test, we've got to test what we've established as the DR plan, right, is that they're afraid to do that in the event that something goes wrong. That, to me, is a red flag. If you're worried that your DR plan isn't going to work, that means you actually need to focus on your DR plan rather than just wave it off and say, well, we'll just assume it works.
[00:18:56] Scotti: I would extend that and go almost as far as saying there's only a handful of places I've seen that have an effective DR plan that actually works. In many cases, I think the other part to that is not just testing it in the positive sense, but actually testing it in the negative sense. Any offensive security person will say you shouldn't just test for the positive case. From a development point of view, you would perform the positive test: does the application or the system do what it's designed to do? From an offensive security point of view, you would say, well, is it actually stopping me from doing something that I shouldn't do? In most cases, DR testing is very similar. They'll test, hey, does it cut over? But they don't perform any of the negative tests that go along with that. The other interesting part is that, as you said, Tom, if they don't want to test it, that's a problem. But they are willing to have people trying to work 24 hours a day to recover systems when they go down or they're under attack. And the problem with that is that we are human. And most organizations don't have adequate training for incident management or major incident management. If you've ever been in an organization where they're under active attack, the attackers are moving a lot faster than the responders can respond. And you have to remember that you've got about eight to 10, maybe 12 hours in a person, and then what happens? Who do you hand it over to? Are all your people in one place? Do you follow the sun? There are a whole bunch of things in here that organizations really need to be thinking about. And I know I'm focusing more on the cyber part of the attack here. And this is assuming that they actually know something's going on. Because in a lot of cases, if you are under attack and they're in your network, the chances are you're probably not going to know.
Don't quote me on this, but I think the last time I looked at the statistics, it's about three to six months that an attacker is inside your environment. And in a lot of cases they've actually been and gone. They've taken the data, they've left your network, because they don't need to be there anymore. But if you are under active attack, you really need to be thinking about not just do we have a plan and does it work, but how fast can we enact it?
[00:21:06] Tom: I know earlier in my career I was taken aback the first time we ran through BCP scenarios, but they were mainly focused on either physical security or natural disasters, which of course are relevant from a cybersecurity point of view. But are you now seeing, Scott and/or James, more of these sort of cyber attack type scenarios, where you've got rogue actors or foreign threats attacking and trying to exfiltrate data?
[00:21:32] Scotti: We don't call them BCP plan tests anymore. We call them war games. The name is everything. Everybody's excited about doing the war games. The issue I see with a lot of war game scenarios, or tabletop exercises, just a slightly less exciting name, is that they're largely academic, meaning a tabletop is everyone in a room talking about what they would do rather than actually testing it. I was recently involved in testing some particular scenarios, and throughout the tabletop exercise we were talking about how we would enable particular parts of the organization to still perform critical functions, that is, transactions. So how can we still continue to trade in certain scenarios? What we realized is, until we actually went and tested it in real life, the actual reality was very different to what we'd simulated or planned for. Some things worked, some things worked better than we expected, some things didn't work at all. And then we had to go back and actually work out why. So I think we're seeing more of it. But I still think organizations, to your point, are still focusing on what potentially could we do, rather than actually testing it and proving that it works in real life. And not just fire-drilling it once, because you might have certain key people away that understand how these processes work. Everybody needs to understand the plan. Everybody needs to know how to implement and initiate a disaster recovery activity, rather than just key senior people in your organization hoping that they can work it out on the fly.
[00:23:04] James: Scott, I'm really interested in this notion of simulation versus reality, because I think this ties back to the notion of observability and good security posture. We can have well-proven DR and business continuity plans. But one of the things that I think our audience would be really interested in is this notion of it taking three to six months for people to understand that they've had a breach. How do you even know when you're under attack? Unless it's something really obvious like a DDoS, where someone's trying to knock out stuff and you can see massive spikes in traffic, how do you know?
[00:23:39] Scotti: That's a good question. The reality is most organizations don't. Most of the time, attackers will be in your organization, whether that be through compromised Microsoft 365 accounts, privilege escalation, or accessing a VPN. They are now inside your network. Sometimes it's through software and web applications. Sometimes they don't actually need to get inside your network to get valuable data out. In a lot of cases, it's very cliche to say it, but a lot of the time it's when a customer complains that the data is being sold on the dark web, or they've made it into the newspaper. Ransomware is another really interesting one. It's a very noisy event. So someone's gone into your environment, they've stolen the data, they've exfiltrated it, and then they want to try and cover their tracks. The easiest way for an attacker to cover their tracks is to encrypt the data drives that contain the logs and the evidence to prove that they were in your environment in the first place. Beyond that, yeah, it is very difficult for organizations to really understand when they are under attack. DDoS has actually become less of a concern. It was one of those things that every organization was worried about, because it could be initiated remotely over the Internet and there were DDoS-for-hire services. The advent of CDNs and things like that has actually made those issues less of a concern. I think we talked about it in one of the other episodes: it's actually not DDoS that takes out applications. If we're talking about applications and network services, it's actually increased load, and a failure to plan and a failure to scale those services appropriately to meet demand.
[00:25:14] Tom: Are there tools out there that help with that kind of identification as well? Because obviously I work in the Microsoft space a fair bit, and Microsoft are very good at spruiking things like Defender for Identity, things that look at correlation of events and can track supposed rogue actors with the help of AI. Do these things help, or are they snake oil? Or is it another case of you have to be trained and know what you're looking at to really get the most out of them?
[00:25:42] Scotti: Microsoft has some really good tooling. They have some great products that can help with this. What I see, though, is: what do you do with it once you get an alert, once you get a notification? This is the operationalization of security, which most organizations kind of deal with ad hoc. For example, if an attacker is trying to gain access to one or more of your Microsoft accounts, they're just going to spray Office 365. There are APIs, there's legacy API connectivity. There's a lot of things that you need to get right to start with. But let's now say you've got 50 risky sign-ins. What if you've got 50,000 risky sign-in alerts? How are you actually going to manage that? That actually becomes the problem, not so much whether or not there are tools available. It's what do you do, as James said, around observability. What happens when you get all this telemetry and you have all these actions that you need to take? This is where automation is where people need to be going, to essentially remediate those issues. So there are SOAR, security orchestration, automation and response, tools that can help with that. The other part that I'm a big fan of is deception technologies. If you think about it, as a defender, first you need to know that you have problems. Then you need to find where all the problems exist. Then you need to plug all the holes. Then you need to know that you've plugged the holes correctly. And this becomes an endless whack-a-mole type loop. The alternative approach is to say, well, we know that someone's going to get in at some point. Chances are most organizations have had at least one security incident, whether that be a technical control that's not working or a human event, so someone's emailed someone something they shouldn't.
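The 50 versus 50,000 alerts problem is exactly what a SOAR playbook mechanizes: a fixed mapping from alert severity to remediation actions, so volume doesn't change the quality of the response. This is a deliberately simplified sketch; the severities and action names are invented for illustration, not any product's schema.

```python
# Illustrative playbook: severity -> ordered remediation actions.
PLAYBOOK = {
    "high":   ["revoke_sessions", "disable_account", "notify_soc"],
    "medium": ["require_mfa_reset", "notify_soc"],
    "low":    ["log_only"],
}

def respond(alerts):
    """Expand each alert into the (user, action) pairs to enqueue.
    Unknown severities fall back to log-only rather than being dropped,
    so nothing silently disappears from the queue."""
    queue = []
    for alert in alerts:
        for action in PLAYBOOK.get(alert["severity"], ["log_only"]):
            queue.append((alert["user"], action))
    return queue
```

The design point is that the decision logic is written once, reviewed once, and then applied identically to the 50,000th alert as to the first.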
Deception controls are really useful, and I'm a big fan of Thinkst Canaries. Thinkst, as an organization, makes deception technology. They work on the premise that instead of trying to identify and block threats, or identify malicious users or IPs or whatever it is, someone's going to get into your environment at some point. And so how do you detect them when they're in your environment? They make these great little devices. They can be hardware, they can be virtual devices. You can deploy them in the cloud, you can deploy them on prem, you can deploy them in your data center, and essentially they look like really juicy targets. They're almost so good that an attacker can't actually distinguish between, as an example, a legitimate domain controller and a Thinkst Canary domain controller. You can make it look exactly like all your other domain controllers. If someone hits it, it fires an alert. And the premise behind it is that only an attacker would find it. You don't advertise it to everyone in your organization, but if someone's in your environment, they're going to start scanning, they're going to start probing, they're going to find a file server, they're going to find a SQL server, perhaps, if you run those in your organization. Maybe it's an Oracle database. You can make these canaries look like anything you want, and as soon as someone hits it and starts traversing it, it looks and behaves exactly like what an attacker would expect. Meanwhile, your defenders, your incident responders, are now being alerted that someone's in your environment, potentially doing something that they shouldn't. And this is a very big paradigm shift that I don't see a lot of organizations adopting.
[00:28:53] James: It's an interesting evolution in terms of putting emphasis on these, I guess, more complex scenarios, right. And it sounds like incredibly clever technology with some very specific intent. If I look at this holistically, you sort of touched on the horror scenario, right, that somebody's grabbed your organizational data, probably your customer information, and they've popped it up on the dark web, and there's possibly a bit of ransomware associated with that. We've also got that element of human error as well, this notion that someone's accidentally emailed an Excel spreadsheet to the wrong recipient. Are tools evolving in such a way that they're effective across all these different types of scenarios? Can we protect ourselves against human error as effectively as we can a malicious nation-state actor? Say we just look at it through the lens of data loss prevention.
[00:29:47] Tom: Starter loss prevention back it would have been probably 12 years ago. It was, we found it effective more as an awareness tool than an actual prevention tool.
So when people did things that flagged, it worked almost like a policeman with a speed camera. When people tried to send things to multiple recipients external to the organisation, they'd receive the flag: hey, you realise you're sending this to multiple recipients, or these particular files have a particular classification. So people were aware that they were being watched and things were going on, and it was more of a thing to deter those that may have been a little cheeky when leaving the organisation, or just weren't aware that what they were doing was a potential risk. So I think it did have a particular effectiveness there. In terms of its effectiveness against those that actually knew what they were doing and knew how to bypass the DLP, it probably didn't protect against that sort of scenario at all.
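Tom's "speed camera" style of DLP boils down to a couple of simple rules that warn rather than block. A minimal sketch; the internal domain and the classification labels are assumptions for the example:

```python
INTERNAL_DOMAIN = "corp.example"           # assumed internal mail domain
RESTRICTED = {"confidential", "internal"}  # assumed classification labels


def dlp_warnings(recipients, attachment_labels):
    """Return awareness-style warnings for an outbound mail, rather than blocking it."""
    warnings = []
    external = [r for r in recipients
                if not r.lower().endswith("@" + INTERNAL_DOMAIN)]
    if len(external) > 1:
        warnings.append(
            f"You are sending this to {len(external)} external recipients.")
    flagged = RESTRICTED.intersection(l.lower() for l in attachment_labels)
    if flagged:
        warnings.append(
            f"Attachments carry restricted classifications: {sorted(flagged)}")
    return warnings
```

The point of the design, as Tom describes it, is the prompt itself: the user learns they are observed, which deters the careless and the mildly cheeky even though it does nothing against someone determined to bypass the agent.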
[00:30:47] Scotti: In a lot of cases they're just agents that run on endpoints, and if you have the requisite permissions, you can just disable it. I've seen that occur a lot, certainly with application whitelisting controls as well. They're there for a reason, but somebody wants to install something, it doesn't work, they just remove it. Things have moved on a little from DLP. We now have UEBA, User and Entity Behavior Analytics platforms, but once again it's a similar type of issue, where in many cases they're just agents that run on an endpoint, or they sit in certain parts of the network but not everywhere, so it's hard to get it right. I don't like emailing things. Tom, you remember a couple of days ago: how do we get this report to you? Oh, we'll just email it. Well, I personally don't like that. If we think about it, all we needed was a method to securely transfer the files between ourselves and a third party. It's quite interesting that organizations don't typically have these; they still rely on things like email. And I know we're specifically just focusing on the email part of DLP here; there are obviously a whole bunch of others. Think about all the productivity tools, all the file sharing services: you've got Google Drive, you've got OneDrive, you've got Dropbox, you've got all of the others. It's actually impossible to block all of them. So to your point, I'm actually a huge fan of encouraging people to do the right thing, but you also need to give them the tools to do the right thing, rather than just saying be careful when you email something as an attachment. And in many cases it's well known that filtering tools don't check encrypted files. So if you whack a password on a spreadsheet or on a zip, or in some cases if you just zip things up, say, 10 or more times, the gateway is just going to let it straight through, because it's not going to parse that.
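The bypass Scotti mentions works because many gateways silently pass what they cannot parse. A fail-closed alternative is to flag encrypted or deeply nested archives for review instead of waving them through. A minimal sketch using the standard `zipfile` module; the nesting limit is an assumed policy value:

```python
import io
import zipfile

MAX_NESTING = 3  # assumed policy: deeper nesting gets quarantined, not passed


def inspect_zip(data: bytes, depth: int = 1):
    """Return (encrypted, max_depth) for a zip payload."""
    encrypted = False
    max_depth = depth
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for info in zf.infolist():
            if info.flag_bits & 0x1:   # bit 0 set => member is encrypted
                encrypted = True
                continue               # can't recurse into what we can't read
            if info.filename.lower().endswith(".zip"):
                inner_enc, inner_depth = inspect_zip(zf.read(info), depth + 1)
                encrypted = encrypted or inner_enc
                max_depth = max(max_depth, inner_depth)
    return encrypted, max_depth


def should_quarantine(data: bytes) -> bool:
    """Fail closed: anything the gateway cannot fully parse is held for review."""
    encrypted, nesting = inspect_zip(data)
    return encrypted or nesting > MAX_NESTING
```

A real content filter would also cap total decompressed size to avoid zip-bomb amplification, but the principle is the one Scotti is pointing at: the safe default for unparseable content is to hold it, not forward it.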
So I think organizations really need to be thinking about their data more holistically: where it's being stored, how it's being used, and then also what controls they have around detecting potential malicious use, certainly by insider threats.
[00:32:51] Tom: You touched on something interesting there, Scott, regarding third parties and the proliferation of cloud-based services. Once upon a time, and I think back to that DLP installation, we also had agents that would run on the perimeter of the organization to pick up files leaving the organization. That was easy, because our Exchange servers were on premises, our SharePoint was on premises, so anything leaving the organization passed through a gateway where we could inspect that traffic and stop it. Now we've got our data spread across SaaS providers and third parties globally. How do we deal with that challenge of this global spread of data and things outside the boundaries of, potentially, our control?
[00:33:29] Scotti: These days a lot of organizations focus heavily on the compliance aspect. They say, hey, we're going to do a third party risk assessment. But I see it a little bit like HR and safety training. Perhaps this is an unpopular opinion, but we worked for an organization that was very litigious, and they absolutely loved training; you'd have to do training every year on a whole range of different topics. The thing that I took away is that I don't think they were trying to help me. I think they were just trying to ensure that they didn't become the subject of a court case or litigation themselves, by being able to say, well, we gave them the appropriate training. I see compliance as very akin to that type of approach. And it's not just third party risk, right? We're actually talking about fourth party, fifth party, all of the supply chain. Your supply chain isn't just your immediate supplier or third party SaaS provider or your cloud provider; it's all of the tech stack that they use as well, and all of the providers that they use, right down to whoever provides their physical hardware. It's a lot larger than one single organization. In fact, I'd say that almost all organizations don't have the resources to manage their entire supply chain. So it really becomes: at what point do you put in controls and checks to say this is where the demarcation is, this is where we will accept responsibility? And then how do we put those controls in place to make sure that anything they do doesn't negatively impact us, or at least, if it does, that we're able to detect it, to James's point around observability, and then how would we recover in the event that it does actually have a material impact on us?
[00:35:10] James: I think this notion of distributed data assets and how we best protect them is a challenge that exists internally within a lot of organizations, without even looking at the broader SaaS-type landscape as well.
We speak about this notion of what we can do in terms of staff education, how the users within an organization can actually contribute to better security posture, and how we can bring that governance lens. I think one of the things that would be really interesting to do is a bit of a shift in thinking. If you look at data strategy at an organizational level, one of the huge challenges is that it's just hard to get access to organizational data. So what tends to happen is people squirrel away aspects of that data somewhere they know they've got access to it, where they can use it, right? So all of a sudden I've got a hard drive full of Excel spreadsheets and data dumps and all this sort of stuff. I enrich my data, I keep my data up to date, I work with it, I know I can trust the decisions I make on it. But, you know, Tom, Scott, you guys can't access it, and you've got your own views on it, and all of a sudden you've got this inconsistency creeping in across the organization. And so we talk about this notion of stewardship around data, and I think there's a lot that could be done for organizations to get a better grasp on their data landscape and have custodians and stewards looking after that data, but more importantly, making it available to people in a way that is controlled, that is accessible, but is also audited, and then making sure that when that data is enriched, we know it's actually the source of truth and that it's usable for people. One of the biggest limitations I had trying to implement a decisioning engine in one organization was that if we ran the decision engine in two different parts of the organization, we got two different results, even though we were asking the same question. And this sort of stuff affects business investment decisions, it affects the allocation of resources across an enterprise. It's incredibly important.
But I think a consequence of having a better lens on our data landscape, and understanding exactly what we've got out there, is that we can facilitate this notion of data sharing so much more effectively and so much more securely. It comes up a lot, the notion of interagency data sharing, sharing data across different departments. And I wonder, is this something where maybe we need to focus a bit more on organizational awareness and the organizational operating model, as opposed to just another set of tools that we plug in and ask to take care of it? Or, the way the industry is going, I'm sure there'll be an AI application soon that will detect whether or not Tom's sending me the wrong thing in an email, so I don't have to worry about it as a human being anymore.
[00:37:54] Tom: Yeah, having worked at Oracle in the past, this concept of data democratisation is a big deal. Oracle acknowledges it, Microsoft acknowledges it, Tableau acknowledges it: there's a challenge with the presentation and curation of data, with effectively creating a governance layer between the raw data and what is presented to users. Even that concept of what data do I actually need is difficult to pin down, because you have, by and large, the business talking to technical people. In the middle there, outside of the technology challenge, you almost need a go-between, a translator, to take the data requirement from the business and convert it into the technical requirements to extract and present that data back to the business. So it's a whole world unto itself, and we see the big problem being that spread of uncurated, ungoverned data throughout the organisation, which invariably ends up in spreadsheets and CSV files and finds its way into people's Dropboxes and onto their desktops at home. So, yeah, I think that's one part of it. You also touched upon the sharing of data. Whilst not directly related, I think a key point there, when talking either to other departments within the organisation or to third parties, is to make sure that the data you are sharing and presenting to them is the minimum set of data required to fulfil that particular requirement. If you're sharing data with a third party from an API perspective, don't share more than you need to, and reduce that exposure risk. You may trust that third party, and they may have all the practices in place just like you do, but as we've discussed multiple times during today's conversation, people and poor process can strike anyone. We're not infallible; we're people and we make mistakes. The largest organisations in the world have data breaches. I was looking at the stats earlier today.
The Australian government statistics report that over a quarter of reportable breaches were due to human error, and I suspect in reality that figure is probably higher. There's also a fallibility in what people are willing to hand across in the event of a crisis as well. But yeah, I think the way we treat, curate and govern data is a very important awareness piece. There are tools to help us do that, but it's more than just tools; tools themselves don't solve the problem. It's an awareness and a communication channel that needs to open up between the business and IT on how to present data safely.
[00:40:31] Scotti: And marketing. I'm going to add that in there, because I can guarantee you marketing think it's a great idea to run a competition, request a whole bunch of data from their customers, and then store it in a third party system that IT and our data custodians don't know anything about. They're going to ask for your address, your phone number, your first and last name, your date of birth right down to the day. I think there needs to be an education for the general population as well. In many cases you look and you think, hey, I'd really like to enter this competition, but I personally avoid the ones that ask for huge amounts of information, because it's going to end up in a system that's unmanaged, that nobody knows about, and that potentially could have pretty significant consequences. And there have been examples of that. I know you talked about the Australian ones, but there have been others: we were talking about defence force personnel and the like whose details were stored in systems that nobody knew about and that have been breached. From an offensive point of view, one of my favorite things to see is when you're going in and actually testing an application. It's a general rule of thumb that a pen tester shouldn't test production, certainly not production web applications and APIs. But I can tell you there have been so many times where I've looked at the data in front of me and gone, hang on a minute, this looks like real data. And in fact it is, because they've simply copied production to non-production, because they don't have the policies or the processes or the tooling available to do it properly. But that data problem is largely solved.
Which means there are tools out there that can do data masking; they can copy production to non-production while keeping the data useful in those environments. That matters because those environments largely aren't as secure as production and don't have the same level of monitoring and observability around them, so the ability to use that data without exposing the real thing is great. One of the things that comes to mind as well is the Optus breach. It feels like a long time ago now, but in reality it wasn't. I remember one of the official statements was, well, the data goes over HTTPS and it's encrypted, and I'm thinking, hang on a minute, no, that's not quite the case. The transport mechanism is encrypted, but that doesn't actually solve the problem here; I think they just wanted to use the word encryption. So it's about actually understanding what type of encryption you're using. Homomorphic encryption is one example, but there are others, like actual field-level encryption, and there are database technologies out there that do this if you're in the cloud. Cloud HSMs are another great example of technology that's been introduced that organizations can use. If we take the Optus example, it simply means that if the attacker had retrieved the entire database, the majority of that data would be unreadable unless they also had access to the cryptographic key material or the HSM that encrypted it. So they might have retrieved, you know, 10 or 11 million records for Australians, but that data would largely be unusable. And I think this is where organizations really need to bring it back to that design piece and ask: what controls can we put in place early, or even retrofit, to make things harder or even impossible to retrieve during a data breach?
[00:43:40] James: Let me probe this for a second, Scott, because I think this is a really interesting one where the tooling has actually evolved quite a lot: this notion of test data management and copying production to non-production systems, whether for data exploration or for performance and volume testing. There can be many, many reasons why organizations do it. Traditionally, yes, it's been the avenue of a lot of breaches and a lot of challenges, but it's also been seen as something that can be quite expensive to do well. I was personally involved in putting one of the first material, customer-sensitive data workloads into public cloud, in a very large scale, APRA-sensitive way, and these questions were certainly explored extensively in that scenario: what other copies of this data exist? How are those copies being managed? Who has access to that data? What are we encrypting, and how is that stored? This notion that it's really expensive, it's complicated, therefore it's okay for me to cut a few corners: does that still carry weight these days? It's one thing to say homomorphic encryption, table-level and column-level encryption, tools, keys, HSMs, all this sort of stuff. Is this really accessible and achievable for most organizations in the modern era, or is it still something that can be deeply, deeply challenging to implement?
[00:45:04] Scotti: I think if you're in the cloud, it's a lot easier than doing it on-prem. For example, if you want to use an HSM on-prem, they're actually quite expensive, and then you also have the key management elements. You also need to think about geographic redundancy: if your data center goes up in flames, and that includes the one HSM you own, that's a big problem. It essentially means that even if you had backups of that data offsite, let's say on tapes that someone took home (I know that's a really old-school way of doing it these days, but as an example), you wouldn't be able to retrieve the data on them anyway. If you are in the cloud, though, I think it's a no-brainer. In the past, the options and the configuration of these elements just weren't there, and there was no easy way to do it securely. To your point, James, that's probably why it was a large project when you first did this. Nowadays I don't think it is; it becomes quite a straightforward endeavor. It's a bit harder for applications that are perhaps a bit more legacy, where moving to the cloud does require some rework. There are tools and technology to help. I know Oracle, being a data company, actually has a large number of these tools, and in many cases a lot of them are free. I'm not trying to plug Oracle here, but Oracle Data Safe is one of those products that will actually do data masking, and it's completely free for any database you're running in the cloud. If you do want to run it on-prem as well, there's actually a nominal charge. I'm sure the other cloud providers have similar products that do similar things as well.
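The masking Scotti describes, in the spirit of tools like Oracle Data Safe, often uses deterministic pseudonymization: the same input always maps to the same token, so joins across masked tables still work, but the original value is unrecoverable without the key. A minimal sketch of that one technique; the key would live in a KMS in practice, and the field names are assumptions:

```python
import hashlib
import hmac

MASKING_KEY = b"rotate-me-and-keep-out-of-source-control"  # assumed: held in a KMS


def mask(value: str) -> str:
    """Deterministically pseudonymize a field value with a keyed digest."""
    digest = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]
    return f"MASKED-{digest}"


def mask_row(row: dict, sensitive: set) -> dict:
    """Copy a row for non-production, masking only the sensitive columns."""
    return {k: mask(v) if k in sensitive else v for k, v in row.items()}
```

Because the tokens are stable, referential integrity survives the copy, which is exactly what makes the non-production data "useful to those environments" while keeping the real values out of the less-monitored systems.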
[00:46:24] Tom: I was actually going to say, we recently worked with a customer who had transactional data stored on premises. It just happened to be an Oracle database, and they moved to Oracle Cloud using the same licenses. They were able to introduce at-rest data encryption at the database level, which had been a paid option for their on-premises solution. So by moving to the cloud, with the default set of security features on offer there, they actually increased their security posture. I think there's a general perception now that the cloud offers that high level of security, and the vendors are responding to make sure those capabilities aren't seen as a paid option but are just part of the platform itself. So definitely something worth looking at. And the HSM example is another great one. We had a case where we were encrypting a particular column in the database that we knew would contain credit card data, which was great and fine and passed all the tests. But once we used a tool that analyzed the database as a whole, we found there were tens of millions of credit card numbers in fields where we didn't expect credit card numbers to be. People were putting credit card numbers in their name field, in their address field, et cetera, et cetera. So whilst we were dealing with the known knowns, it was the unknown unknowns, or I guess they were known unknowns inasmuch as they were other fields within the database, holding data that, if leaked, was potentially a notifiable breach due to the volume of credit card numbers and associated data in those fields. But thanks to the tool, we were able to identify that and remediate it.
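The whole-database scan Tom describes can be approximated surprisingly simply: regex for digit runs of plausible card length in any field, then validate candidates with the Luhn checksum to weed out phone numbers and IDs. A minimal sketch, not the actual tool they used; the 13-to-19-digit range is the usual PAN length assumption:

```python
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")


def luhn_ok(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum, check mod 10."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_pans(text: str):
    """Scan arbitrary field content, not just 'card' columns, for likely card numbers."""
    hits = []
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Run over every text column, this is how card numbers surface in name and address fields: the Luhn check keeps the false-positive rate low enough that tens of millions of hits, as in Tom's example, is a genuine finding rather than noise.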
[00:48:01] Scotti: And there are quite a few of those tools. Cloud, once again, makes it really straightforward. Wiz, for example, has data security posture management elements to it, and there are others as well; most cloud providers have something, like Amazon's Macie for searching through buckets. There is a lot of tooling now that actually does this. Interestingly, Tom, I remember that example quite well, and it wasn't just in the database. It was in application logs, and those application logs were then ingested into the SIEM. When we start thinking about how quickly that one bit of information permeates throughout the entire environment, outside the CDE into all sorts of other systems, in some cases systems that are immutable, it becomes really difficult, because we have to reset all of the systems that actually have that data stored in them. So it goes from being quite a manageable issue if you deal with it right at the beginning, which is what James has been talking about, to being uncontrollable and unmanageable very quickly, which just makes it really easy when you're an attacker. And let's be fair, how many times have I tested systems and found things that shouldn't be there? Passwords in web server access logs, as an example, because the server is recording the full payload of the POST request, which is a no-no, right? Those types of things do occur. So we really need to keep a good grasp on all that sensitive information and where it's being stored, and do it really early on. It's one of those elements of really good secure system design. We're talking about OWASP controls, we're talking about how we build software, how we actually store and use that data. If we think about it really early in the process, the outcome is that, by design, it's only in one place, it's encrypted, and we're not storing it in the logs.
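The passwords-in-access-logs problem Scotti raises is usually fixed by scrubbing sensitive keys before a payload ever reaches the logger, rather than trying to clean the SIEM afterwards. A minimal sketch; the denylist of key names is an assumption, and real middleware would hook this into the logging path of the web framework:

```python
import json

SENSITIVE_KEYS = {"password", "passwd", "token", "card_number", "cvv"}  # assumed denylist


def scrub(payload):
    """Recursively replace sensitive values so they never reach the log pipeline."""
    if isinstance(payload, dict):
        return {k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else scrub(v)
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [scrub(item) for item in payload]
    return payload


def log_line(method: str, path: str, body: dict) -> str:
    """Format a request log entry from the already-scrubbed body."""
    return f"{method} {path} body={json.dumps(scrub(body))}"
```

Doing this at the point of logging is the "by design, it's only in one place" outcome Scotti describes: the secret exists in the request handler and nowhere downstream, so there is nothing to purge from logs, the SIEM, or immutable stores later.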
We've got means to detect it in the places where we know it will be, and we've also got tools and capabilities to detect the places it may have ended up that we weren't expecting.
[00:49:57] Tom: I know this sort of considered approach to software development and infrastructure design is a key one, and I often see organizations using the move to cloud not as an enabler per se, but as the pivot point to revisit how they're doing things, whether it's time to do things right, and how they can leverage the cloud to do things better. James, I'd be really interested, because I know it's a topic close to your heart: what tools are out there, what blueprints can customers follow to do this right? It could be the SDLC, but also just the general design of the cloud to be secure inherently, by design.
[00:50:33] James: I think there's a number of approaches that organizations can take to this, and this is where concepts such as well-architected frameworks have evolved from the classic notion that a lot of the original cloud adoption was very much just lift and shift. You know, we want to exit a data center, or we've got a compelling commercial or date-driven reason to move a whole bunch of things to cloud. But organizations weren't necessarily diving deep enough into the opportunity that the migration to cloud represented, nor were they updating their architectural patterns. Scotty touched on things like OWASP, right? These are classic and fantastic tools and resources for getting at the fundamentals of how we can design and architect our systems really well. But again, it has to be an intentional exercise. I think there's been perhaps a little bit of a perception in industry that as long as we move things into a reasonably safe and secure landing zone in cloud, it should just be almost drag and drop: just chuck it out there and the tools will take care of it, a little bit like the AI notion that the AI will scan it and find the anomalies. Increasingly, what we're finding is that there are actually really solid frameworks that will help us think about each of the dimensions of a really good solution. This notion we touched on about availability of a solution: how do I weigh that up against wanting to keep my solution cost-effective and cheap? When and how do I weave security in? Is security about encrypting my data at rest and in transit, or is it about implementing static code analysis into my build pipeline and making sure that I'm not introducing vulnerabilities from the outset? So developing patterns and approaches for these things, I think, is super important. The tools are out there; the well-known practices are out there.
It's a matter of education, and also taking the time to invest in that. In many ways this comes back to the value proposition for why we move into cloud in the first place. We see so much emphasis on it's going to be cheaper, but cheaper in what sense? There's not enough focus on the fact that it's actually going to give us a better outcome and a better result; it's going to make our systems more secure. That has a real bottom-line dollar value to organizations. Any form of risk mitigation has a dollar value in real terms; just take that from a simple insurance-type perspective, right? These things genuinely do matter. So the notion of how I actually pull the tooling together in a way that enables me to achieve a better outcome has to be an intentional consideration for organizations. But it's definitely something that is infinitely more achievable now than it was, say, seven or eight years ago, when those tools and those frameworks and those processes weren't very well known. And it's fascinating, too. We touched on some of the challenges around achieving these great outcomes that we all want to see in enterprise, legacy systems being one, where we can't necessarily get inside and break apart the code of a legacy system. So there's this notion of how we actually improve things at the database layer or at the middleware layer. Is it simply that I now have a database driver that can handle encryption, so my app doesn't even need to know that the data is being encrypted in transit between the database and my application? It's really fascinating. We increasingly look and say, ah, the cloud's making it so much easier to address all these shortcomings in security, but we've got this massive proliferation of data, and we've got all these legacy systems that we're trying to take out to cloud as well.
And I think there's a point at which a lot of these things are going to come together, and we're going to need to think really holistically about how we can bring cloud capability to systems that traditionally we haven't. What I'm getting at here, and it's a little bit of a segue, but I also see it as a way of bringing together a lot of core themes, is the notion of running critical infrastructure and operational technology either in cloud or by leveraging cloud-based technologies, maybe at an operational and control plane level. In the past we would take something like the Security of Critical Infrastructure Act and we would say, oh yeah, that applies to large-scale utilities, you know, energy companies. The reality is that the definition of critical infrastructure is expanding rapidly. We now see it applied to financial services, to government services, to anything to do with health and safety. And I think we can't forever avoid this question by saying operational technology is going to be ring-fenced from what we're doing in cloud, so therefore I'm not going to worry about it. The next challenge for us as a technical community, in the face of things like the nation state actor threat, is to say: we've learned so much about protecting systems in cloud, we've learned so much about controls, they're so much more accessible and applicable now, we can handle data at massive scale, we can actually protect our systems well. Let's now start to apply that to what was traditionally a lot of legacy technology landscapes and see what we can do to improve those really critical systems.
[00:55:46] Tom: It's interesting, James. I actually worked with a customer recently who started adopting observability and monitoring tools outside their OT environment. They're leveraging some of the Microsoft Azure and M365 tools, the likes of Defender for Identity and Defender for Endpoint, to give them better insights into the security posture of their OT environments. Because, to Scott's point earlier, we're in a position where quite often rogue actors are inside environments for longer than we actually realise they're there. And traditionally there's been this assumption in OT that the borders are so secure, so protected, and things are so isolated that no one will ever get into those systems. But there is a way into those systems, and obviously the attackers are getting smarter by the day. So once someone has access to those systems, there's that absolute trust, the opposite of zero trust: an assumption that anyone who is in there should be in there. Now, with visibility of those environments, it actually raised and highlighted some misconfigurations, and we've got full tracking and observability of what people are doing in those environments as well. So we can also react and educate when people are doing the wrong things that potentially place critical SCADA and OT systems in a vulnerable position. That's an example of starting to leverage cloud-based tooling to get better visibility over what's going on in an OT environment. And on the other side of the scale, you've got things like power generation now coming from solar panels on the consumer side. Consumer-side power generation represents a really new frontier for utilities companies, as an example. They've always had absolute control of that data; now that data is actually moving out faster than they're evolving to the change in the industry.
So I think we need to be on the front foot, and we need to really think about embracing the cloud and what it can do for us as we start moving this once critical infrastructure data outside the borders. We have to go with the wind rather than fight against it if we're going to stay in front of the challenges that the next five to ten years hold.
[00:58:04] James: Yeah, it's an industry reality for sure. There was an article only a couple of weeks ago where a water treatment authority in Kansas in the US pivoted to manual controls due to a cyber incident. And I think it's just an acceptance of the reality that increasingly we are living in a connected world, and we can't expect these systems to stay segregated and separate in perpetuity. Scott, what are your thoughts on this from a cyber perspective? Do we have the right lens on these things? Are there ways that we can progressively phase into more critical infrastructure and actually apply what we've learned from existing systems in that domain?
[00:58:43] Scotti: It's interesting, you were talking about power generation, and I'm actually looking at going solar. I've started doing quite a lot of research into the technology and how it works, specifically the control elements. What we've actually done here, unwittingly perhaps to some level, is take the idea of power generation and implement it solely in IT.
[00:59:06] James: Right.
[00:59:06] Scotti: Your home solar system, and everyone's home solar system that's installed, is essentially an IT system. It's connected to the Internet. It's essentially a smart device that calls home, and you now have an app that controls essentially how it all works. You can control when and how much solar you put into a battery, and when you discharge your battery. Virtual power plants are another example, where you've essentially delegated authority to your energy provider to drain the battery at your house. So there are a lot of impacts here that I don't believe have been fully considered. What you are seeing, though, if we look at the federal government mandates and some of the working groups that have been spun up recently, is a focus on: if we had to, how could we keep power generation working in these home solar systems while actually severing communication with control infrastructure that's perhaps outside the jurisdiction of Australia? If you're in any other sovereign country, the same issue would occur. So this is something that I think we are starting to understand. I think the OT world has a lot to teach the IT world as far as security goes, and similarly the other way around; one doesn't preclude the other. And certainly, if we really want to embrace a world where we have good DR and good BCP, we have to accept that the cloud will, in a lot of cases, replace the traditional data center, and the footprint that exists in a traditional data center to control or gain access to these OT systems is essentially going to move to the cloud. So I think organizations should get ahead of the game and start thinking about it now, even though they may choose not to go there; that way, if and when we actually do need to move for whatever reason, it's not going to become an oh no, the sky is falling moment.
It's the same as everything else we've been talking about: thinking about encryption early on, thinking about the other security controls you can put in place. This is no different. It's about thinking about it now and putting the groundwork and foundations in place, so that when we do need to move to cloud and have cloud connected to OT, or when I buy a home solar system, I will actually have thought about how to secure it best for me and my environment.
[01:01:27] James: Tom, I found it really interesting that you cited an example of an organization you're working with that is now applying that cloud operational and observability lens, because the tooling is just so good, to what is traditionally their OT environment.
I guess I'm really interested to start to explore this notion of the modern threat landscape and its constant evolution. What are our collective thoughts on where all this is going over the next two to five years? Is the shape and form of the threats and their origin going to change? Are the mechanisms of attack and vulnerability going to change? Or are we really just going to see refinement and evolution of the existing landscape?
[01:02:07] Scotti: That is interesting. I guess I'll go first here. I've given quite a bit of thought to this over the last few months, interestingly, because everybody thought AI was going to have a massive impact on phishing campaigns and things like that. The reality is it hasn't been as impactful as people were expecting. Where I am seeing AI potentially having a greater impact, and there are some recent instances of this, is around deepfakes. Everybody would be familiar with, or perhaps they're not familiar with, the instance where a person joins a Zoom call, or maybe it was a Teams call, with five executives telling him to transfer money, and he transfers, I believe, more than a million dollars to a third party. It turns out it was a complete deepfake: all of the images he was viewing and all of the audio were entirely generated. There's a really good article that talks about the engineering behind that and the amount of compute power that isn't actually required. Everybody at this point believes it requires a huge number of GPUs, but I believe for about a $5,000 to $10,000 investment you can generate something like that. Maybe I'm completely deepfake generated, who knows? So where I'm going with this is that the deepfake side of AI is going to become stronger, not so much the large language model side; like I said, I haven't really seen that materialize. The other one I see, and I want to get your view on this as well, James, is cloud exploitation. A lot of people have moved to the cloud under the premise that they'll be more secure. A lot of that was traditional lift-and-shift IaaS to the cloud, which was great, because you could bring a lot of your traditional on-prem tooling with you.
So your endpoint detection and response tooling, your vulnerability management tooling, your SIEM tooling, all of that kind of stuff was already there, so you had a lot in place. But now everyone's starting to move to serverless. So what I predict is that there's going to be a lot more exploitation of serverless infrastructure. Is that something you see happening in the near future?
[01:04:15] James: It's a really interesting scenario, because traditionally what I would encourage organizations to do when they move to cloud is to move up the stack along with it. And Tom, this is something you touched on recently as well, in terms of deriving the most value from that move to cloud. One of the reasons we encourage organizations to move up the stack is that it has a significant and direct impact on this notion of the shared responsibility model: what is it that I actually have to look after? If I move up the stack and consume database as a service, as opposed to implementing a virtual machine and then installing database software on top of it, I'm no longer responsible for the patching and management of all of those things, with all the places where I could introduce a vulnerability. If I move up into a serverless construct, I no longer have to worry about so many aspects of that. I just focus on my code, and I deploy my code, maybe into an Azure Function or an AWS Lambda or something similar. So this notion that we will start to see exploits at that level is a very, very interesting one. What I think we will see is a shift in how those exploits are focused and targeted: less at the infrastructure layer, seeking vulnerabilities in the infrastructure, and more up in the application stack and the application layer. Can I somehow inject a vulnerability into a piece of code, the functional code that's actually doing something in my organization? Can I inject something that intercepts data being managed or utilized by a particular piece of application code? What we have seen in the security landscape is that there's no end to how creative the attack vectors can become.
We often say that the more we introduce some kind of dynamic change, or an approach to writing secure code or secure deployment, the more these exploits will become a thing of the past. But they don't; they tend to just keep evolving and becoming increasingly sophisticated. So I don't know that a push to serverless in itself will eliminate or introduce vulnerabilities, but what it will inevitably do is shift the focus of the most likely attack vectors we'll tend to see in the security landscape. My advice to most organizations would still be to move up that stack, to trust in the fact that you don't want to be running infrastructure if you don't have to, but to adopt any technology through the lens of best practice, and not just turn it on and assume it's going to be safe forever. It comes back to a point I think you've made repeatedly, Scott, which is this notion that it's a game of eternal vigilance. We can't just assume that what we implemented last year is going to be suitable next year, so we have to continue to invest.
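To make James's point about the application layer concrete, here is a minimal sketch of a serverless handler. The function name, event shape, and allow-list are illustrative assumptions, not any real service: the point is that even when the provider patches the runtime for you, validating what arrives in the event is still your side of the shared responsibility model.

```python
import json
import re

# Illustrative allow-list: only these report types may be requested.
ALLOWED_REPORTS = {"daily", "weekly", "monthly"}
# Reject identifiers that could smuggle path or query fragments.
SAFE_ID = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def handler(event, context=None):
    """Sketch of an AWS-Lambda-style entry point.

    The cloud provider patches the OS and runtime; validating the
    contents of `event` remains entirely the application's job.
    """
    report = event.get("report_type", "")
    customer_id = event.get("customer_id", "")

    if report not in ALLOWED_REPORTS:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown report type"})}
    if not SAFE_ID.match(customer_id):
        return {"statusCode": 400, "body": json.dumps({"error": "invalid customer id"})}

    # Real work would happen here; this sketch just echoes validated input.
    return {"statusCode": 200, "body": json.dumps({"report": report, "customer": customer_id})}
```

A payload like `{"customer_id": "../etc/passwd"}` is rejected before it reaches any business logic, which is exactly the application-layer vigilance that serverless does not take off your hands.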
[01:07:06] Tom: I'm going to take a different spin on things and look at the positive side of the next three to five years. I think as AI evolves, and we all touched on it at some point today, if we have, say, a security incident or something that compromises our operations at either a functional or non-functional level, in the cloud or on premises, having AI help our operational teams both curate and respond to those incidents is going to be a real positive for an industry that's struggling at the moment. We touched in one of our other shows on the pressures on IT: tightening the belt, doing more with less. I see AI being a big contributor to letting us do that, whether it's categorizing and automatically responding to certain events, or being able to use crowd data to identify false positives sooner, right through to analysis. James, you touched on the well-architected design of a cloud platform: using natural language to ask, how is this looking? Is this application resilient against failure? And then having AI provide not just a response but recommendations on how to do things better. That's the general use of technology to uplift capabilities and skills, rather than the challenge at the moment, which Scott alluded to: we implement a tool, we get a thousand alerts, and it possibly takes us backward, because it's just another tool we have to manage and another thousand alerts flooding us with information.
I'm looking forward to the positive side of things, and I'm already seeing it now. I've seen Copilot, for example, rolled out to Azure. Truth be told, and a bit of feedback to Microsoft, it's pretty useless at the moment, but I can see the possibility there, with that natural language interface for both support and advisory.
[01:08:59] Scotti: I've seen that as well. There are a lot of cyber organizations building tools that have adopted the large language model approach. Where I see it being really beneficial, and we've talked about this, is that it's hard to find good people and it's hard to retain people, so what you actually want is a mechanism to get people up to speed as quickly as possible. A large language model isn't going to be as good as a real person or a real analyst, but what it will do is give everyone a good, level starting playing field. So if you're new to an organization, and perhaps working with a particular SIEM product for the first time, you can say: hey, find me events similar to this one, or what does this log event mean? What is this column? I don't know if anybody's looked at security event logs lately, but they're not very user friendly, shall we say. In a lot of cases it's just tab-delimited or space-delimited with a whole bunch of fields; sometimes they're even hex encoded. It can take a long time to understand what a particular log field is and what its implications are, and then to evaluate that over a large data set. Certainly when you're dealing with millions or tens of millions of events per second or per minute, having something to do that initial summary, so you can focus on what's most important during an incident or an outage, is, like you say, Tom, super beneficial. The other part I see as quite useful: a machine is really good at doing things repeatedly without getting bored. If we look at the number of vulnerabilities and exploits being released daily, we're at more than 50 per day. That's more than 50 CVEs that you then need to understand: hey, do I have it? Is it vulnerable? Do I need to patch it? AI is really good at that. It can tell you: yes, you do.
And so there are CSPM products out there that do this: they will say, hey, these particular attributes of resources in your environment mean that you are vulnerable to this, whether that's through serverless or through a traditionally hosted IaaS application. Then on the flip side, we're seeing organizations start on automatic patching. For every mechanism there is to exploit something, and let's be really clear, reverse engineering is quite difficult, perhaps not as difficult for other people, but certainly for me exploit development is something I'm not that passionate about, machines are really good at it. If you give a machine some C code and say, write me an exploit for this, and it doesn't work the first time, you say: write another exploit, write another exploit. It only has to get it right once. And there has actually been some research out of the US which found that large language models trained on exploit development are very, very good at it. On the flip side, if it can be found by a machine, it can also be fixed or protected by a machine. So if we need to respond within, say, hours of a new vulnerability being released, this is where I can see AI, maybe not as a perfect solution, but certainly getting us a long way towards giving real people a chance at actually defending, evaluating and securing things properly in the correct timeframe.
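Scotti's point about unfriendly, tab-delimited and sometimes hex-encoded log fields can be illustrated with a small sketch. The field layout here is invented for illustration (real SIEM schemas vary); it shows the kind of mechanical translation, naming fields and turning opaque encodings into something readable, that an assistant can do for a new analyst.

```python
import binascii

# Hypothetical layout of one tab-delimited security event line:
# timestamp \t event_id \t source_ip \t hex-encoded detail
FIELDS = ["timestamp", "event_id", "source_ip", "detail_hex"]

def parse_event(line: str) -> dict:
    """Split one tab-delimited event and decode its hex-encoded detail field."""
    values = line.rstrip("\n").split("\t")
    event = dict(zip(FIELDS, values))
    try:
        # Turn the opaque hex payload into readable text.
        event["detail"] = binascii.unhexlify(event.get("detail_hex", "")).decode("utf-8", "replace")
    except (binascii.Error, ValueError):
        event["detail"] = "<undecodable>"
    return event

raw = "2024-05-01T10:43:00\t4625\t203.0.113.7\t" + "failed logon".encode().hex()
print(parse_event(raw)["detail"])  # -> failed logon
```

Doing this once by hand is easy; doing it consistently across millions of events per minute, and summarizing which ones matter, is where the machine earns its keep.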
[01:12:12] James: I think you guys are really onto something here in terms of where the industry's going. We hear talk about operational AI, and things being developed within the existing tool sets that will increasingly leverage AI. But if we break it down to what that really means, the promise of AI has always been the ability to do things at a scale that humans can't. We want to impersonate human behavior, but like you said, Scott, machines don't get tired; they don't fatigue. So you can be much more comprehensive in terms of the amount of things you're actually analyzing and the breaches you're looking for. But I love this notion of combining it with a sort of quasi-generative aspect: hey, these new vulnerabilities have come out, so generate me a set of test cases that I can use to validate my deployments, my infrastructure and my systems against them. And then, should I be exposed in a way that actually matters to my business, please also kick off an automated patch management cycle to protect against that vulnerability. I think automating this will increasingly become prevalent within the industry.
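The workflow James describes, matching new advisories against what you actually run and triggering remediation only where it matters, can be sketched as a simple version comparison. The advisory format and inventory below are made up for illustration; a real pipeline would consume a feed such as NVD and call an actual patch-management API rather than printing.

```python
# Hypothetical daily advisory feed: each entry names a package and the
# version in which the vulnerability was fixed.
advisories = [
    {"cve": "CVE-2024-0001", "package": "openssl", "fixed_in": (3, 0, 13)},
    {"cve": "CVE-2024-0002", "package": "nginx", "fixed_in": (1, 25, 4)},
]

# Hypothetical asset inventory: package -> installed version tuple.
inventory = {"openssl": (3, 0, 11), "nginx": (1, 25, 4)}

def exposed(advisories, inventory):
    """Return the CVEs whose fix version we have not yet reached."""
    hits = []
    for adv in advisories:
        installed = inventory.get(adv["package"])
        # Tuples compare element-wise, so (3, 0, 11) < (3, 0, 13).
        if installed is not None and installed < adv["fixed_in"]:
            hits.append(adv["cve"])
    return hits

# Only CVE-2024-0001 applies: openssl 3.0.11 predates the fix,
# while nginx is already at the fixed version.
for cve in exposed(advisories, inventory):
    print(f"{cve}: schedule automated patch cycle")
```

The filtering step is the point: of the 50-plus CVEs released daily, only the ones that intersect your inventory should ever trigger a patch cycle.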
So, Scott, Tom, we've covered a lot of ground today in terms of the threat landscape, what it looks like in cyber today, and how that's evolving. Scott, I'd love to wrap up with your thoughts on the activities you're seeing organizations undertake right now.
Which aspects of those remain valid? What would you like to see organizations placing an increasing emphasis on, and how might that evolve over time?
[01:13:48] Scotti: From my point of view, I would really like to see organizations take a step back. Myself included, I love shiny new technology, as I think everyone in IT does. We're always looking at marketing, always looking at what's out there: what else could we try? Could this help us? I'd really like to see organizations step back and look at their strategy, what they're actually trying to achieve, and what that means for their business. Sometimes the best solutions are the ones you'd least expect, and in many cases the introduction of more tooling isn't going to solve your existing problems. Hey, we've got this issue in our environment? Oh, we'll just buy another tool for that. We've got all these users, how do we secure their identities? Oh, we can buy a privileged account management tool for that. Most of the time you could actually do it cheaper, and perhaps better, with a slightly different approach, thinking about it from a foundational level rather than asking what tool you can go out and purchase and throw into your environment. That's another thing to manage, another thing to learn, something with its own security vulnerabilities, which actually increases your footprint. Over time, I'd like to see more convergence of technologies: instead of buying more tools from more vendors, look at where the overlap between these products lies, and choose strategically which vendors and partners to align with, so that you have fewer things to look at, which is obviously better overall for your team and your organization. What do you think, Tom?
[01:15:25] Tom: You touched on the three things that I wanted to highlight there. If I take things from an operational infrastructure perspective, it all dovetails into those points. James kicked off with it: I think we should look to automate as much as possible and take the human error element out of what we do. Don't jump into tools just for the sake of having more tools; know what you're going to do with them. We talked towards the end about AI helping us with some of those tools, but until that's a true reality, we have to be cognizant of the capabilities and capacities of our teams, and either be prepared to train up or size up accordingly, because it becomes paradoxically counterproductive to just add more security tools to an organization if you're not prepared to consume and respond to what they find. And finally, I think awareness is key. The common theme here is the human element in all of this.
Your teams are your greatest weakness as well as your greatest strength. The more aware they are of security and the evolving landscape themselves, the better. Don't keep them in the dark; involve them in your planning and strategies, and they become your greatest ally. And with that, I think we're probably a wrap. Thanks to all our listeners, and thanks to Scott and James. My name's Tom. Stay safe.
[01:16:53] Scotti: If you could use a little help or advice with modernizing your IT environment, visit Cordant Au to start a conversation with us.
-----------
This has been a KBI Media production.