DevSecOops

Episode 10 - OT Security & Infrastructure

July 23, 2025 | 01:08:04

Show Notes

Podcast Synopsis: Critical Infrastructure and Operational Technology Cybersecurity

This episode features Sam McKenzie and Karl Dawson, two seasoned professionals in cybersecurity and operational technology (OT), discussing the convergence of IT and OT in critical infrastructure, and the growing complexity facing asset operators.

Sam McKenzie, head of technology operations at the City of Stonnington, shares his early experiences growing up off-grid, which fostered a lifelong interest in protecting essential services. With a 25-year career across telecommunications, energy, and healthcare, Sam emphasises how heavily modern society relies on critical infrastructure and how vulnerable that dependence makes it. His perspective blends physical asset protection and cybersecurity, drawing parallels between safeguarding farm resources and national infrastructure.

Karl Dawson, a consultant at Cordant with a background in electronics and networking, outlines his journey from technician to cybersecurity professional. With experience across water utilities, energy, and government sectors, he has moved through helpdesk, network engineering, project management, and penetration testing roles, particularly against smart metering systems. Karl highlights the blurred boundary between IT and OT, noting that the distinction separating the two is often administrative rather than purely technical.

The discussion explores:

  • The definition of operational technology as an umbrella term covering industrial control systems (ICS), IoT, SCADA, and building management systems.

  • The contrast between IT and OT: IT prioritises confidentiality and data integrity, whereas OT focuses on availability, safety, and physical control.

  • The challenges introduced by the Security of Critical Infrastructure Act 2018 in Australia, which redefined the sectors deemed critical and added compliance complexity for operators.

Sam shares insights from his white paper on cyber-physical safety in Australia's critical infrastructure, based on interviews with over 50 industry leaders. He finds a persistent leadership gap in understanding and managing OT risks. This disconnect, he suggests, stems from legacy engineering assumptions being upended by the increasing interconnectivity of formerly isolated systems, often now exposed to insecure networks for operational efficiency.

Karl expands on this with practical considerations:

  • Many OT environments remain air-gapped, but increasing digital integration introduces vulnerabilities.

  • Legacy systems are often irreplaceable due to vendor constraints, budget limitations, and safety certifications, leaving infrastructure reliant on outdated software (e.g. Windows XP).

  • Contractual and operational boundaries often prevent upgrades or the addition of modern monitoring tools, risking security in the name of availability.

The conversation underscores a central tension: the imperative to modernise OT systems versus the practical and financial limitations that inhibit progress. It concludes with reflections on how leadership must evolve its view—shifting from purely technical risk management to safety-focused governance that recognises the physical consequences of cyber events.

This episode delivers a clear warning: many critical systems continue to operate on fragile, outdated infrastructure while the attack surface expands. The burden of modernisation falls not just on engineers but also on executives and regulators to align operational, financial, and safety objectives.


Episode Transcript

[00:00:07] Tom: Welcome to the DevSecOops podcast where we explore the past, present and future of computing in the modern workplace. [00:00:12] James: Your hosts are a trio of experts from Cordant, each representing different areas within IT. [00:00:21] Scotti: A bit like a nerdy A-Team. So join Tom, James and Scotti for a regular, mostly serious podcast providing you with pragmatic advice and insights into modernizing your IT environment. Welcome back to another episode of the DevSecOops podcast. My name's James and with me I have my good friend Tom Walker. How are you doing, Tom? [00:00:37] Tom: I am doing exceedingly well, thank you. James, how are you? [00:00:39] James: Yeah, really good, thanks. Now, Scotti's taking a break today, but as an added bonus we're joined by two very special guests, and the reason for this is we're going to tackle something that a number of our listeners have asked us to explore, and that's the world of critical infrastructure and operational technology. So if you hear us talking today about OT, that's what we're talking about, operational technology. And we're literally referring to things like the control room technology that runs critical systems for platforms such as energy, water and transportation. So our guests today are Sam McKenzie and Karl Dawson. Sam, I might start with you. I don't know how you get the opportunity to fit all this into a regular week, but you're a cybersecurity committee member for the Australian Computer Society, you're a management committee member for the Australian Control Rooms Network Association and also head of technology operations for the City of Stonnington. Tell us a bit about your background and how you got into the world of critical infrastructure. [00:01:29] Sam: Yeah, sure. Thanks for the opportunity, gents. My background really came from 25 years' experience working, you know, primarily in telecoms, energy and healthcare. I guess I was driven into those sectors. If I look back on perhaps my origin story, I grew up off grid on a farm. We didn't really have any grid-connected services, so our energy came from a windmill actually. And so when I was a kid, when I was about six and the lights wouldn't turn on, I'd sometimes go out to the old battery shed with my dad. He'd handle the jumper leads, which looked a bit like jump leads from cars, and he'd switch over from one side of the battery shed to the other side, and hopefully they were a bit more charged from the windmill. So, you know, moving on into my career, I've really felt that need to pursue protecting our critical infrastructure, because those are services that I didn't enjoy living without as a kid. And I think modern society is similar today. Although we've come to rely on it so much, it's always there. You talk about utility-grade equipment, utility-grade services, and you just expect a light switch to work when it's on. It is quite hard to work in modern society without it. In a similar vein, still back on the property, on the farm, I was protecting our localised assets. So our veggie patch from the rabbits, the fruit trees that we had from the birds and, well, yeah, my first pet was a chicken called Helen and, you know, we had to protect them from the foxes. There's some similarities there to cybersecurity in that you're protecting your assets, whatever's close to you, whether that's your identity, your personal identity details or the activities that you're doing in your day job, those are all particularly important.
So bringing those two stories together into critical infrastructure, cybersecurity is really a passion of mine and it's led me to where I am today. [00:03:09] James: It's an amazingly unique perspective, although I'm probably going to challenge my own thought around uniqueness because you've actually got quite a bit in common with our other guests today, Karl Dawson. Now, Karl's a colleague of ours at Cordant, but Karl does quite a bit of work around wildlife protection and things like that as well. In context though, Karl, you know, you describe yourself as a cyber security consultant, but I think that probably understates a lot of what you do. Tell us a bit about your background and how you got into operational technology as well. [00:03:36] Karl: My background originally started as an electronics technician. It was probably 25, 30 odd years ago. Moved up through the ranks. This was back when the Internet was only just starting. So yes, I'm that old. There was help desk roles, so initial, initial help desk type things, network support, moved up through networking. I was a CCIE for 10 years and so doing a lot of networking stuff and that's when security, and it wasn't called cyber back then, but security started and all of this was in either water utilities, energy, power and the like, power and gas and some government facilities. Became a project manager for a while because a little bit of a control freak. So I wanted to look at both the technical side and the project management side. So I got my pmp, ran a security operations capability across Australia for a while from the networking side just transitioned through to cyber. So a lot of networking ends up being very cyber related. So the networking side then transitioned also through to hacking. So looking into breaking into different things as having an electronics background then started to drag that through into a hacking perspective. So a lot of deep down tasks looking at smart metering and the like. So we spent probably about 10, 10 years doing smart meter type pen testing and the like. So that's my background heading up from electronics all the way through networking, project management, the cybers for probably about the past 20 years. [00:05:00] James: Yeah, it's fascinating and it's really quite broad as well when I listen to your background, how you've pieced that together. And you know Tom, it's something that you and I have spoken about a bit because we probably consider ourselves more traditional IT type guys who have gravitated a little bit more towards critical infrastructure. You've had some exposure, haven't you, in terms of the sort of work that you've been doing in fairly critical industries? Yeah, yeah. [00:05:23] Tom: Recently I've sort of been dabbling, if you like, in that probably the upper Purdue layers of the OT space that it OT bridge. I would call myself far from an expert. In fact, I probably still conflate operational technology and critical infrastructure from time to time and get myself tripped over in the conversations that we have. That was actually one of the questions I was going to have to both Sam and Karl is around for my sake and probably for a lot of our listeners sakes because I know when I was first introduced to ot I actually thought someone just made the mistake of misspelling or having some unique interpretation of it. But to see that this was a world unto itself and had to be treated very differently as well because of the sort of safety criticality of all the systems that were involved. 
I'd really like to get your take on what is critical infrastructure and where do the two overlap? Where do the two differ? Maybe start with Sam. [00:06:12] Sam: Yeah, sure, feel free to jump in and correct me, Karl, because it sounds like you've got a lot of experience in the area. So I guess I would start off by saying operational technology has become an umbrella term, probably even across things like industrial control systems, so ICS, IoT in some cases, and BMS, so building management systems. OT has become this sort of umbrella term that encapsulates a lot of that really, and it describes, you know, the technology that controls our physical environments. There's ground-up components of that, really, around actuators that control rail tracks, circuit breakers for electricity, gas pressure valves that control, obviously, gas pipelines, and things such as water pumps. And that's down at the sort of base engineering level. Those devices are controlled, sort of in a rolled-up perspective, from PLCs, so programmable logic controllers, that are all networked effectively back to a centralized system, or maybe a distributed control system, so a DCS perhaps in a plant, or a supervisory control and data acquisition system, a SCADA system, in a control room. That umbrella term of OT really describes that whole ecosystem of technology as discrete from information technology, where we use IT mostly around the data side of things, and controlling and managing financial payments and credit card data and health data and all those sorts of things. The OT is really around controlling the physical environment, the built environment that modern society relies on. I'd perhaps go on to say there's a lot more that can be said about those sorts of things and we'll probably touch on a bit more of that. I think critical infrastructure is probably the next part. And that's been defined in regards to the Security of Critical Infrastructure Act in Australia from 2018 by the government. It's expanded out to being 11 sectors now, covering things like communications, telecoms, defence, energy, water, transport, food and grocery, healthcare, those sorts of sectors. It describes the sectors rather than the technology. [00:08:08] James: I think this is a really fascinating perspective. And you know, Karl, I know this is something that you and I talk about quite a bit, this concept of OT versus IT. And I think what Sam started with there is, you know, the traditional control room and PLC type view of the world. But as SOCI's expanded to take on so many more industries, you know, I've certainly taken the view that those boundaries have blurred over time. Do you still see them as quite distinct considerations, Karl, or are they sort of merging a lot more in the world as you see it? [00:08:37] Karl: There's probably a fair bit of merging. It can be an administrative boundary as well. So a lot of OT infrastructure can still be IT, but it's been defined under a different administrative requirement from an organization. So if you've got an IT team and you've got an OT team, you could have file servers that are essentially IT systems, but they're managed from the OT side. And so they're therefore considered OT infrastructure. And then as Sam says, you get down to the more specific things such as supervisory control and data acquisition, down to PLCs and the like, which are actually then controlling physical assets, which is probably what a lot of people consider to be OT. But the boundary of OT is potentially quite broad.
If it's including all of your asset management systems, you could still have OT systems which are all of your security controls, but they're still considered OT because they're within that OT boundary. [00:09:34] Sam: And I think that's a great point, Karl. And I think one of the things that often sort of separates them is what's called the CIA triad, or the AIC triad. So confidentiality, integrity, availability, kind of the priority, in that order. It switches around somewhat in OT and it's said to be sort of AIC, so availability is the most important. But even above that would be safety. There's an additional element added there in the OT environment: the engineers are really trying to protect the workforce and the public from poor outcomes when controlling the environment. You end up with these multiple environments inside an organisation, the OT environment and the IT environment, often separate teams and often working towards different goals, which can create some challenges. [00:10:16] Tom: It's interesting you mentioned that, Sam, because that's the first thing that struck me as I was dealing with OT teams, and it was probably more me than them, that understanding that what they're doing does tie into life or death sort of scenarios. Whereas from an IT perspective quite often it's, yeah, there might be some reputational risk, there might be financial risk to the business, but very rarely does that delve into the realm of physical safety of passengers on a train or tram or consumers of electricity at home. And once I switched my mindset around to there's a reason for this care factor there, and if I take my usual IT cowboy ways and apply them in the OT realm, I can see why it's not very well received from time to time. So I've had to adapt to that. And it's interesting, you've published a white paper, I think it's on insights on cyber-physical safety in Australia's critical infrastructure. Yeah, I'd be curious what motivated you to undertake that research and then ultimately publish a paper on that. [00:11:21] Sam: Yeah, well, I think, I mean obviously sort of that origin story of mine, that passion. I've worked 25 years in the industry, primarily in critical infrastructure. I realised there was a perceived gap around how leadership manage these risks in critical infrastructure asset management. And I think I wanted to inquire into that and understand if it was just sort of me perceiving it or if there were others that sort of held that view as well. And I think, you know, so I went and interviewed over 50 leaders and practitioners in the area, risk managers in finance, some cyber law specialists, right through to industrial controls engineers and people throughout that lifecycle, and validated that there is this gap in regards to how leadership of critical infrastructure assets does manage the risks to the physical environment. I think there's a whole myriad of reasons why and perhaps we can get into some of those. We started to talk about there, the actuators, the Purdue model, which is the architecture reference model, that it is best practice to separate these environments. As Karl says, there's file servers, virtual IT operating in OT. We've got SCADA systems, DCS, PLCs, all of this equipment. And we've sort of walked into this situation where we've got layers upon layers of technology which is effectively running on undependable things. We're depending on undependable things, which is to say software has bugs and it fails at times. Bad actors can get into it.
And we're putting our most precious services on this block of Jenga effectively. So that was the reason for my research, and that sort of validated my thoughts in that there's much to do. [00:12:56] James: Sam, it's interesting, you know, you touched on this perspective in your paper about leadership, perhaps still perceiving this as a technical problem as opposed to that safety aspect that you spoke about. And yet even as you're describing it, you know, you're talking about those technical layers and those things that build up. Is it a bit of a challenge for leadership to actually sort of wrap their heads around this? You know, you would have thought perhaps in the era of SOCI that, you know, people would be more aware of what that overall risk framework looks like. What do you think is causing that manifestation or that disconnect that you found? [00:13:28] Sam: I think it's a whole complex set of things, and I'm sure Karl will have thoughts too. But I guess what the research showed was that, and you might have seen it yourself, Tom, you know, a lot of the engineers, the industrial controls engineers, have been doing things well for a long time that's worked. But since then, we've connected a lot of this equipment across insecure networks effectively. We've exposed these underlying networks that weren't meant to be connected to the Internet. We've exposed them to the Internet for all good reasons most of the time, to be able to manage our physical resources, our workforce resources and our data resources more effectively and to optimise the use of all of those resources. It builds in financial benefit. A lot of that's been baked into companies' bottom lines. And that's a great benefit. It's a great benefit to the shareholders, great benefit to the public, these companies making money optimising resource use. But it's led to a situation where this equipment is now connected to the Internet, which wasn't designed to be connected to the Internet 20 years ago when it was developed. That's another one of the challenges. The operational technology assets have a long lifespan. It's a combination of reasons that have gotten us into this situation today. [00:14:35] James: Yeah, it's an interesting one. I've certainly heard it raised in the building industry. You know, that was actually my original thing back in the day when I qualified as a mechanical engineer, you know, and the whole building management system stuff was really just kicking off back then. And yeah, you know, looking at it now, a lot of the technology is quite proprietary and legacy, and there is a little bit of a fear of connectedness for some of these things because it's a bit of an unknown. But Karl, you've been around this stuff for a long time as well. Has it sort of been consistent with your view? Are you finding that it varies a little bit, you know, case by case and industry by industry? [00:15:03] Karl: Look, not all of it is, and I'm not, I'm not disagreeing with what you're saying. A lot of it though, isn't connected to the Internet, at least not directly. And I guess typically the intention is to never really have OT infrastructure connected to the Internet. And certainly from a Purdue model perspective, you wouldn't. Not all OT environments are the same. And maybe that's something that a lot of people don't realize is that, I mean, as Tom said, some systems are safety critical, a lot of systems aren't.
A lot of them don't actually have, for example, a SIL rating, where you have something that is directly controlling something that could cause physical harm to a person. Certainly in things like transport and rail, you do. In other critical infrastructures, you won't. You may have entire environments that are just primarily, from a cyber perspective, just doing monitoring, whereas you'll have others that are actually doing control. So you could have ones where you've got physical infrastructure and physical controls, where if gas is going through and pressure gets too high, you have a physical valve that can turn itself off, rather than it being a cyber control that actually does that. So you don't have a one size fits all from an architecture perspective or the type of controls you have, or even the legislation or anything else associated with it, and certainly not the same risk associated with all of them. You can then have environments, and we've certainly come across these, where you will certainly have old and outdated infrastructure that's there because a particular vendor only supports a particular platform. It could be Windows XP, Windows 95, I mean, it could be ancient infrastructure, but from an availability perspective, their systems run on those platforms and they're probably only small organizations in the scheme of what a lot of other IT organizations are. So they don't have the funds to then go and upgrade it to a brand new thing and then do complete retesting and guarantee that somebody won't die under this new operating system, and a complete new testing regime of the operating system and the like. So you end up with infrastructure that is running on old things, and previously that was isolated from the rest of it, but now, now we don't have that. It doesn't mean that those systems aren't full of holes, they are, but from an availability perspective, they ran and they knew that they ran. And so you can end up with a lot of strange contractual scenarios where they're run by different organizations, and if you want to change it or upgrade it, they no longer support it. Which means the physical things that you're looking after, your processes, process controls, and trying to get stuff down pipelines or through power lines, they'll stop because of contractual reasons, not because of cyber or any technical reasons. So yeah, it can be a complex and a mixed bag, and that doesn't mean I think it shouldn't be upgraded. I'd love to see a lot of OT systems upgraded, but there are a lot of confounding reasons as to why it may not be. [00:17:47] Tom: It's interesting because for me, from an IT perspective, we sort of typically deal on three or five year life cycles of hardware and software and they're effectively considered not fit for purpose after that time. There's a part of me that understands exactly what you're saying, Karl. If availability is paramount and these things keep working, don't rock the boat. And it might be a small vendor that supports something that's a very niche product in controlling system XYZ. As someone from the general public, part of me can't help but be alarmed by that, thinking that we might have some of our critical systems running on software or hardware that's over 20 years old, irrespective of the fact that they might be continuing to work. So I guess, should we be alarmed? Are we in a good state? Am I worrying about nothing? Is the answer somewhere in between?
[00:18:35] Karl: I think the answer is somewhere in between. And I mean as far as whether public is aware of some of this, I guess some of the recent events that are occurring in the US at the moment, where they're talking about the Social Security database which is running on cobol, you could just go and say, well, update it. But it's not always that easy to update systems like that. And some of the major infrastructure or OT infrastructure vendors have a defined set, for example of EDR or virus protection type solutions that they've worked with and they've built their entire solutions around testing against that one particular product. And they're not really big in the scheme of IT organizations to then go and say, well okay, we're going to also now reassess and re baseline our product with all of these other either whitelisting or EDR or other type of solutions because they don't have the budget to do that. So they'll go, well, this is what works, this is what we've got and we have people available who know these solutions. That's what we're going with. Not saying it's necessarily a good thing. And again, I'd love to see a lot of OT environments updated, but there are those sort of pressures that OT vendors can be having. And even not just the vendors, but as critical infrastructure providers, they may have outsourced some of their solution to a particular either the vendor themselves to manage that infrastructure or to another solution provider. And they're contractually bound to run that solution as the client. You may be wanting to go, well, okay, I want to see more logging. I don't know that you've got some sketchy things in there and I want to be able to do some more logging and monitoring. Okay, you can, but you're voiding your warranty or your contract because you're not supposed to be playing with our stuff. You need to trust us that we're doing what said we'll do. So that can also cause some issues when the entire solution is potentially outsourced. [00:20:23] James: Sam, I imagine this challenge of desire to modernize versus the budget that's allocated must have come up as a theme during some of your research. What was the sort of perspective that you were finding as you spoke to people? [00:20:35] Sam: Yeah, I think, I mean Karl raises some great points. Some of the stuff that came up were examples of the MRI machines, you know, in Medical that are running, you know, they're million dollar machines, might be running like Karl says, on Windows 95, Windows NT, something like that. The vendor, the manufacturer of these devices that doesn't just, they don't upgrade it, it's running, so why there's not any need for them to upgrade it. If you upgraded it yourself as the sort of asset owner, then you quite likely fall out of warranty, out of support because it's not supported. So you're not going to put at risk a million dollar piece of Equipment just to reduce a small cyber risk over here. There's gotta be other ways to do that and usually that would be by network segmentation, protecting it behind some layers of defence in depth, restricting Mac addresses and things that can talk to different parts of the network, types of additional or different controls, rather than say, trying to patch or upgrade that particular terminal. In industrial control systems, if you've got energy equipment in plants ot manufacturing lines, you can't take down these lines. It's easy to cost the dollars per minute that those manufacturing lines are producing. 
So you know, the interviews were really saying that you just have to approach it from a different angle, acknowledge that that's the risk. How do we mitigate that through additional controls? And Shadow would be smart about that. So boil it right down to this piece of equipment's been in operation for a decade, we might get another decade out of it, but we've got to make sure that it's only connecting known trusted devices through trusted protocols and so on and so forth. So that was really sort of what came back in conversations that I was having. [00:22:09] James: I think that perspective of, you know, mitigating controls from security parlance is, you know, a bit of a classic treatment plan. But the threat landscape is changing fairly rapidly at the moment as well and it's a quite dynamic world. Do you think that perspective is keeping pace with the changing threat landscape that we have today? [00:22:26] Sam: Yeah, I think, I mean that's a real challenge. And some of the industrial control engineers I spoke to have been, you know, doing their job, doing well many years, decades in the industry perhaps haven't kept touch with what's happening in that changing threat landscape. So, and maybe just to touch on that a little bit, you know, some of it, as I'm sure you're all aware, you know, it's significantly changed. So the state sponsored actors are now so well funded and it's such a big piece on many nations budgets effectively and part of the geopolitical landscape that we're in, that they're so well resourced that they're able to maintain persistence to get access initially and then maintain persistence and hold that persistence sometimes for years at a time. So there's some examples last year with salt typhoon in the American telecoms networks. I think they're into 11 of those networks now and maintaining persistence, not executing, but being patient and waiting for when the right time on the geopolitical agenda suits them. And that's changed dramatically in the last 12 to 24 months. I would say partly how the three letter agencies the three letter Cybersecurity Defence agencies of different nations have been articulating that risk to the public. They've been much more forthright with that recently. [00:23:38] James: Yeah, it's a very interesting point, that one. I wanted to pick up on that because I recently saw Mike Burgess from ASIO coming out and just openly stating that foreign actors are actively targeting Australian critical infrastructure systems. And I think, you know, it picks up on that point that Tom was making earlier about, you know, do the public actually understand and do they have that awareness and, you know, joining the dots even further with some of the things that have been raised around, you know, do we actually even have leaders in these organizations taking the right steps? It's a really interesting one, you know, because you mentioned a couple of, you know, foreign agencies in the sense of, you know, from a defensive perspective as well. Right. And I'm just sort of wondering, you know, is Australia in a different position? How do we stack up, you know, on an international basis? [00:24:20] Sam: Yeah, I mean too, I think we've been quite behind to date. I feel like since COVID Australia sort of rushed online, it's been a decade in the uk, came back and I couldn't order anything online. 
When I got back here in 2016, then Covid happened and I can order everything online now, get my supermarket delivery just like I was getting, you know, 15 years ago in the UK, and all the other things that we've come to know and love from online shopping, that availability. But I think we sort of rushed into it, and now some of that cyber security is perhaps playing a bit of catch up, in that we've wanted to offer those services right away, put everything online. There's quite a lot to do. Just sort of getting back to some of the other threat landscape items, the landscape's also changed in regards to the services that are available to buy off the shelf. There's ransomware as a service, there's cyber crime as a service, there's a sort of a more umbrella service, there's initial access brokering, you know, so the barriers to entry for even the smaller gangs, single actors who are well funded. The rise of LLMs to be able to script those phishing emails with much better language, to be able to fake websites. I don't know if you've seen some of those AI tools to fake websites. You can create a whole fake website in about 30 seconds now. And if you hook that up with a well crafted, large language model phishing email with a reasonable fake link, then you can start getting people to click pretty easily. And so just the barriers to entry are a lot lower now. You buy all these services, automate some of it, it's just getting a lot easier from the perspective of gaining entry. [00:25:48] Tom: It's interesting because I was going to touch on AI and the impact of AI on critical infrastructure, both from a threat perspective as well as a benefit and utilization of the technology within OT as well. But I did have a question because we were talking about risk and, you know, on the IT side of Cordant, we've been doing a bit with Wiz of late and the whole schtick there is around the contextualization of risks and issues and, you know, tracing attack vectors and only highlighting the highest risk components. I know this is something that Karl has focused on a lot in the OT space. How, Karl, do you contextualise and demonstrate risk in OT, and how do you do that effectively? [00:26:27] Karl: I guess anyone who's spent any more than probably five minutes with me will have heard me end up banging on about attack tree analysis or something similar to contextualise risk. All of the standards that we have, even if it's 27001 or 62443 or NIST CSF, they all end up coming back to a risk-based analysis. To help do that you need some of the basics: knowing what you've got and why you need to protect it. So if you don't know what your assets are and you don't know where they are and who could be targeting them, you're always going to be behind the eight ball. So that's where things like attack tree analysis make you look at who or what you've got and then who could be targeting you. So the types of controls and the types of architecture that you would need to have in place to protect against script kiddies is going to be different to if you actually think, from a risk scenario perspective, you're going to be targeted by a nation state. So if you don't know what you've got and you don't know what your threat landscape is and what different threat actors are going to be targeting you, then you can't be putting in effective controls, or you could be putting in controls which aren't effective against the threat actors that are actually going to be targeting you.
So if you are just a small critical infrastructure provider, you may not need a full suite of controls similar to what you would need if you were a very large public profiled organization that is going to be targeted by either crime, gangs or nation State, but even when we're talking about nation state, for example, a lot of the attacks that have happened recently, and you were just mentioning things that we end up talking about ransomware, it's not overly sophisticated. It's not like it's a Stuxnet type attack where it's a really complex attack against a critical infrastructure service. It's ransomware. And I'm not diminishing ransomware. I mean, it's designed to impact on the availability of systems. And from an OT perspective, that's hitting you right in the heart because it's availability, but it's not overly complex. So a lot of your basic controls will still be able to detect, slow down and prevent against that. But again, you need to know what you've got and where it is and who could be targeting it to know as to whether ransomware or some other larger, more insidious type of threat is what you're up against. [00:28:49] James: And Karl, from your perspective, what's your take on the general maturity you're seeing in industry here in Australia? Do you think that we're progressing well, or do you see other parts of the world doing things a little bit differently or a little bit better that you'd like to see us pursuing as well? [00:29:03] Karl: I would like to see us have at least more consistencies. A lot of the different organizations that we're working across all have different problems in different areas. And I would love to see us raise the baseline of at least all organizations knowing what they've got. At least all organizations having effective log and monitoring and having all organizations having effective network segmentation or segregation can then start to move into identity and access management solutions as well. And remote access and as to how that's done, and remote access, you can, that can be considered both into the IT environment and into across into the OT environment. I mean, since most of the time ot, you try to avoid connecting it directly to the Internet, but it's going to be connected if it's coming in via the IT environment. But those fundamentals, if we can get all of those up to a decent level, and I think even Dragos as an organization who do a lot in the ICS environment, they using the SANS 5 critical controls, which is basically have a well defined risk plan, know what you've got, have a defensible architecture, you know, make sure you know what you're logging, et cetera, and risk assess accordingly. And I think that is happening more overseas than possibly what IT is. I think we probably are dragging your heels a little bit in Australia. And I would like to see those sorts of controls move forward. [00:30:24] James: It's interesting in a sense that what we're saying is leadership would look like getting the fundamentals right. But I guess for me, part of the reason I'm probably labouring the point a little bit because, Sam, I do recall reading in your white paper that this desire to have Australia as a leader in Cyber security by 2030 was raised by quite a few of the people you interviewed, wasn't it? [00:30:40] Sam: Yeah, I think so. So, and you're asking about sort of where we sit today, but I think the government's really pushing us along, along that path that you mentioned. 
I think a number of the respondents to the, you know, the interviews highlighted that fact. I think people are invigorated by the opportunity. There's also been all the subsequent things that are getting put in place, those building blocks, if you will, of the cybersecurity legislation coming out last year, the Cyber Security act, which is helping with how, perhaps more on the IoT side, original equipment manufacturers design and ensure the security of their devices. And I think the way the government has been positioning the ASD and the Australian Cyber Security Centre has really been pushing forward. We're coming from quite a long way back, probably in the region. I guess the research that I conducted would show that Singapore was leading in the region, so they've got their own challenges, but they've been pushing right ahead. They've got their strategy. I think we're sort of following, not following in their footsteps, but starting to push as well, which is, which is great to see. One of the big releases last year or early this year was the Operational Technology principles that the Australian government put out through the asd, the Signals Directorate that was in collaboration with a whole heap of the other security agencies internationally. And it really sort of highlights the six principles. I think it was so secure by design, network segmentation. I mean, it's sort of echoing some of the stuff Karl was saying about the SANS 5 critical controls, least privilege and needing to know your assets patching and vulnerability management, system hardening and what we've talked about there, risk based approach. I think probably the other thing I'd say is just in my discussions on the research and I kind of think of it in the nist, you know, cybersecurity framework structure of that sort of, you know, detect, identify, detect, protect, respond and recover, that sort of flow. We've spent the last 20, 30 years in cybersecurity in that really early on those stages and we've applied most of Our controls, tools, efforts, resources early on in that life cycle. And I guess it was raised in the research. I don't know how much I actually touched on it in the white paper, but that we haven't perhaps spent so much time on the response and recovery. And that's probably because over time we thought that we could defend and stop the attackers getting in. And I think we're coming to realise that that's not always going to be possible now and that we actually need to spend some more time in the response and recovery phases, that we do more of those tabletop exercises. I mean, one idea that came up during the research was, you know, we do fire drills, we involve the whole organization in fire drills. We do drills every now and again. We surprise our staff by having them walk out of the building, the people don their red hats. I became a fire warden recently in my office place. We donned the red hats and we did a fire drill. We went outside and the whole organisation, the whole building was involved. We don't do that for cyber, but maybe we should. Maybe there's an opportunity there that we should be testing for cyber. Because I know that some of the cyber incidents I've been involved in in the past, the first thing that happens is that someone goes on social media and says, oh, I can't get into my computer because we're having a cyber incident. Publish that on social media. 
That's a terrible state because then the media's on the back of the organisation and needing to defend against not only the attackers, but the onslaught of the media at the same time, because the information got out perhaps before you were ready. Maybe if we had cyber drills, whole of organization cyber drills, we could start to head off some of those unexpected outcomes. [00:34:03] James: Absolutely. Tom, this is a bit of an area of passion for you as well, isn't it? Because we've spoken quite a bit about things like business continuity planning and Dr. More from an IT lens. Do you think this is potentially an interesting avenue to explore where IT and OT can perhaps overlap and learn from one another and expand the dialogue? [00:34:20] Tom: I was going to say it's interesting because as you talk about the fundamentals and what needs to be done right to keep things secure, it's fundamentally no different to what we need to and should be doing in IT as well. I'm sure there's nuances to the application of all of that through our podcast series and of course, with our customers being strong advocates for running through scenarios and. Yeah, exactly to your point, Sam, having fired rules and being prepared for this sort of thing, because security is an all of business challenge. It's an all of business involvement. And I think the businesses that succeed in being prepared for the response to that will be the ones that see it as everyone's responsibility rather than just the responsibility of the security team or perhaps through extension, the infrastructure teams and application teams. Without having that view on ot, in talking to the OT teams that I have dealt with, they seem to have at least a perception that they have a good understanding of what needs to be done and how to respond to things. I don't know if that's based on old information and they're probably not ready to respond to modern threats, which is to your point, Sam. I guess certainly the challenge that comes in when you try to introduce new technologies to those environments for things as simple as visibility and you talk to visibility being key and paramount. It was almost panic stations around, hang on, this is something new and there's potentially stuff here that we didn't know about. To me that sort of represents a challenge to hang on, are you really ready for things if something as simple as finding out what you've got has alarmed you? I'm not sure to either of you, what's your view in terms of that true preparedness of OT teams to respond independent of any kind of scenarios being run? [00:35:59] Karl: I mean, look back in the day, it used to be when there was I guess, very few touch points from an IT or OT cyber perspective, as opposed to there just being physical controls for ot. It used to be that, okay, if the systems went down, you would literally have a checklist of people to go out and watch the things, watch the gauges, et cetera. And so you would just put person power in charge and that was sufficient. I think there possibly is still some organizations that think that is possibly still sufficient. And in some organizations maybe it is. I think possibly in most it's not. And I would certainly agree that there isn't enough, both tabletop and as much as possible real world scenario planning for ransomware. So even if it's just a straight out availability issue, we don't have to try to plan for something that's really, really esoteric. Just plan for straight out ransomware. Okay, all of your systems are down. 
What are you going to do? I mean, from a personal perspective, I went through the recent near bushfires in Melbourne, and you go from not having a plan and just thinking through, it's like, oh yeah, I'll be right, I'll do this, this and this. And then the fires get a bit closer and you realize that everything that you thought was going to be appropriate isn't going to work at all, and so you throw it out the window. And I think that can also happen with organizations, in that they kind of think, oh yeah, stuff goes down, we'll just do this. But if you're not testing it, if you don't try it out, there's a good chance it'll fail. So, yeah, I'm certainly a big advocate for organizations doing tabletops, but where you're actually calling in, you're using the help desk and you're letting them know that such and such has happened, how do you respond? And they should never be seen as, oh, hey, you failed this. It's always a learning experience because things go wrong all the time. How do you deal with that if suddenly all of your IT infrastructure ends up encrypted and then suddenly a lot of your OT infrastructure ends up encrypted? It may not even impact on the direct processes, but you may be wanting to turn things off, which is what's happened with some of the ransomware attacks in OT. It may not have directly attacked the OT infrastructure, but they're going to turn it off because they don't want to have that speculation of whether they can rely on these systems or not. So what are you going to do? How do you respond? If you're going to be out for a day or a week and you've got to rebuild all the systems, how do you respond? You need to have those plans. And I think that is certainly missing in a lot of organizations. [00:38:19] James: It's really interesting, Karl, to listen to you touch on this idea that it needs to be more of an organisation-wide thing, so involve things like the help desk, which makes me think really we need to come back to more formalised plans around risk management and risk structure. And it probably surprises me in some ways to hear that some of these things are perhaps still deficient in organisations, although perhaps it shouldn't. You know, I've been around for a while as well. Sam, was this idea of risk management, and maybe even tools or perspectives on risk management, something that came up in your research as well? [00:38:49] Sam: Yeah, absolutely. And there's a great quote from one of the guys I interviewed who talks about the clients that he goes out and sees, and some of those are chemical engineering organisations and industrial controls. And often he would see cyber risk just somewhere completely disconnected from the organisation's risk register. You'd think, similar to you, that we shouldn't be surprised, you know, that we perhaps shouldn't expect that stuff to be connected to the organisation risk, but it really is, it's business risk. The OT risk shouldn't be hidden or segregated from that business risk. We need to work out how to collaborate across the IT, OT and engineering teams and bring that into the holistic business risk register and give it the appropriate criticality ratings and priorities so that it gets the funding that it needs. Because I think it, you know, has been in the media a lot in the last couple of years. The data breaches that impact individuals in society make great front page news.
But what we're talking about is the safety and continuation of our essential services, and arguably that's at the base of Maslow's hierarchy of needs. We need that to get the funding in the OT space, to make sure that we have availability of those services. And I think that's a challenge, to do that translation and, well, not on behalf of them, but working collaboratively with them to translate that cyber risk onto the business operations, the core business of a critical infrastructure asset. [00:40:10] James: It's quite fascinating. When I listen to this and join the dots on it, it feels like we've got a lot of guides, a lot of standards, a lot of government policy and advice, but it's how that's translated into the real world that feels to me like it's inconsistent. And a lot of my technical background, but more so recent years, has been very much in cloud adoption and modernization of systems. And one of the things that became very evident in cloud, because people were getting it wrong, is that they needed a lot more prescriptive guidance. It wasn't necessarily that people were lazy or not trying to educate themselves, it's just that the field evolves so quickly, and sometimes when you can be shown strong examples of what good looks like, and we make it easier for people to actually implement things in a really solid and secure and consistent manner, it actually moves the whole industry forward. I'm curious, Karl, because, you know, you've done a little bit of this sort of stuff too. Do you think a bit of prescriptive guidance is actually useful, as opposed to just saying, refer to the Purdue framework, that's got all your answers? [00:41:08] Karl: Yes, but with an asterisk. Almost all of the standards and guidelines do still come back to "you need to assess the risk for your organization". So trying to put in a set of controls in a prescriptive guide that is applicable to everybody, you're going to completely over-engineer some and completely under-engineer a whole lot of others. So from an OT perspective, 62443 is good, but it's more from, maybe even some of the lower end, your PLCs and upwards and the like. You've got SP 800-82, which is good for ICS systems, but it's a guide that tells you this is an architecture and this is a solution to do. But it doesn't actually say, by the way, here are the things that you should do. You could then go for SP 800-53, which is like for the federal government in the US, which is an incredibly prescriptive set of controls. They're not necessarily applicable to everybody and you can spend all of your life trying to put in individual controls and not actually securing the environment. So there's a lot of different guides out there, and NIST CSF is again a fantastic guide, and doing that identify, protect, detect, respond, recover and covering off all of those areas is good. And that guide in particular is good because you will then drag in either 800-82 or another set of controls, or 62443, to get the individual controls that you need to fit into each of those five categories. So probably a long winded way of trying to say that it is still difficult, because the environments vary so much between organization and organization. Especially if you then, as we've talked about previously, throw in IoT as well, which we could go on a segue about, because that's then, for example, cloud connected, and it's cloud connected possibly into an OT environment, which is a bit scary on its own.
The environments can change quite significantly and it is difficult to come up with one standard that fits all of them. [00:42:59] James: And in amongst all those standards you mentioned, do any of them tell me specifically to not manage my risk in an Excel spreadsheet? [00:43:05] Karl: Some of, some of them provide you some nice guides in as to how you manage the risk. [00:43:09] Sam: But I mean look, if you're a. [00:43:10] Karl: Really small organization, managing it in an Excel spreadsheet is probably going to be okay if you have a small number of risks and if you're, you've got those well documented, if you're a large organization, you're trying to do it in risk in Excel spreadsheets and you've got 100 tabs and you know they're going for 65,000 lines on each tab, probably not the best tool for you. But again if jump back to attack tree analysis and trying to do an empirical understanding of which controls actually address your risk as opposed to let's just put in a whole lot of controls and yes, it's defense in depth and let's just hope that some of them catch. But if you've got a certain threat actor which controls are actually going to have your best bang for your buck to do that. And look at a lot of cases, network segmentation, it's old and it's not particularly sexy in any particular way, but does a really, really good job of slowing lateral movement if it's done correctly. As long as you're not just doing an any any rule and assuming we've got network segmentation, but it's any any, but it does help to slow lateral movement and then your detection systems can be able to kick in. So, yeah, for a lot of risks that works. But if you're a very large organization with nation states targeting you, you're going to need something better to manage your risks. [00:44:28] James: Hey, Tom, I'm sort of really keen to loop you back into the conversation at this point. I think there's a couple of directions I think we could potentially go here. The IOT connectedness. One is perhaps a bit of a counterpoint to the idea that we can air gap and segment networks, but the other is possibly, you know, Sam, I wouldn't mind exploring a little bit as well around the public private partnership stuff that you're doing and the ang. Well, do we need to get a little bit more investment from government into, you know, uplifting the landscape, or are we happy to leave that to the organisations that run the infrastructure? So a couple of potential directions I think we could go in. [00:45:01] Tom: I think they're two good ones to run through. I had one, but it's probably not as interesting as those. It was actually around organisational structure and yeah, we've got some organisations where IT and OT do operate almost entirely in silos and they only bubble up to the CEO at some point, whereas others collapse that in. And there's a technology leader that oversees both OT and it. It's funny because we've got organisations that we're working with at the moment who have the silos and they say we need to merge to get better union, better understanding and, you know, this cross pollination of IT and OT and then other organisations that we're working with saying we've had this cross pollination, it's not working. There hasn't been enough focus on our OT environment in exclusivity. We need to split those functions up and have them reporting directly to the CEO. So I don't know if there's anything there in terms of talking points other than. Or it could just Be a statement that it's interesting to observe that. 
[00:45:55] Karl: Personally, I find that always fascinating and I don't necessarily think it's one size fits all. I see a lot of organizations where they do have IT and OT and then they end up butting heads and don't touch our staff and you end up with maybe more love being given to IT and not enough being given to ot and there's animosity between them and then you can join them together and then IT all comes under effectively IT and OT is treated as an IT asset and then there'll be animosity and the like if that happens, and less security if it's not treated as an OT environment. But a lot of that can still come down to budget as well. [00:46:30] Tom: A structural, structural solution to something that's actually a cultural problem. [00:46:34] James: It is an interesting one around organizational boundaries though, because we see this in cyber security in general. [00:46:39] Sam: Right. [00:46:39] James: A lot of overlap with the cyber budget versus the IT budget and perhaps not enough clarity that the two should potentially be distinct things. I'm interested, you know, in terms of the funding avenue around OT and given the nature of critical infrastructure and the impact on citizens. You know, Sam, you do a lot around public private partnership and, you know, you fostered some amazing forums for conversation in this domain. Is there a bit of a sentiment in the industry that the government should be perhaps helping in terms of, you know, financial contribution towards maturing the OT environment? [00:47:12] Sam: That's a tricky one for the government to sort of swallow. I think the OT landscape, the critical infrastructure landscape that we rely on, is so vast now across different sectors, different organisations, different essential service outcomes. One of the key things that is really important that's come out recently is the mandatory reporting. And I think that's one of the key things that the government can do is require mandatory reporting of cyber incidents and help provide those lessons learned, you know, to other organizations so that we don't suffer from the same outcomes, the same attacks, and that we can learn from those experiences, learn how to shore up the networks. You know, back to what Karl was saying. And there's some really good opportunities there. I think there's some more work being done, you know, around ISACs, so the threat intelligence agencies. So there's a sovereign independent ISAC that's being funded in health with a government grant out of the CI for ISAC Australia. So it's a sovereign, not for profit agency that's not aligned to any particular industry, although they do have this funding for health. And it's just great work. What they're doing because you can, I guess you can sign up to this threat intelligence and you can get like real time threat intelligence across all of critical infrastructure. Because the threats, you know, some of them will be specific per industry, but there's still benefits to knowing what those threats are and what their, you know, their techniques, tactics and attack vectors are to the other industries. Because we're all running the same equipment effectively and a lot of it's similar responses that we need to take to protect against them, to defend. 
So I think there's always more we could do on public-private partnerships, and I'm glad to be fostering some of those conversations through the events I've been holding. We've also now got a Cyber Crisis Manager in Victoria, a recently appointed role, with Laura Adams in the Department of Government Services in that position. The opportunity we now have to take that guidance, and the way emergency management policy is being connected up to include cyber, is important work, and it's good to see the government taking those steps. [00:49:15] Karl: So do you think, Sam, that there's more that could be done from a regulatory perspective, for example expanding existing frameworks out to include some OT-related things? Do we need that to go further? Do we need something like an Australian version of the NIST SP 800 documentation, or do we stick with what we have, use VPDSS for example, and supplement it with additional guidance? [00:49:45] Sam: Yeah, that's a great question. I think we do need to do more. The challenge is that it's ratcheting up quite quickly, because we've been coming from so far behind. So it's a question of how much industry can take on, and how much customers can afford, when you're ratcheting up these cyber controls in a short space of time. It's been coming at a fair pace, and I think it needs to come faster, but we need to balance what industry can absorb and get in place while they're still dealing with the SOCI obligations that came in last year, the Cyber Security Act obligations that are coming, and likely a refresh to SOCI at some point out of the consultation that was held, and so forth. So there needs to be balance; I think the government is trying to find it, and industry is trying to respond. There's a lot there. Now, we were talking earlier about collaboration and how IT and OT environments and organisations are structured, sometimes reporting up through the one group and sometimes through separate leadership. In the research I heard a great story about the challenges of reporting up separately. The example was a control room where the cyber security folks had implemented an organisation-wide policy of screen timeouts and lockouts and, somehow, rolled it out without involving OT at the time. So they implemented screen lockouts after 10 minutes across both IT and OT. Now, the OT equipment is primarily in a control room behind locked doors and biometrics; visitors and guests have to be registered with an appointment beforehand and be accompanied. So having screen lockouts in a control room that manages critical infrastructure, be that an energy network or a transport network, can be fraught with challenges. The example shared with me was a night shift at 3am: the controller operating these workstations gets locked out of their computer just as they need to take quick action to restore services or redirect the energy or transport network, yet they're locked out.
They were flustered. They had to respond to this event, they were locked out, and they needed to call the help desk to get unlocked on their OT workstation. So you get into this situation where the controller is waiting on the phone, unable to respond to the event in the OT environment, which is potentially a safety risk and definitely a service risk to that asset. They're waiting at three in the morning for someone on the help desk to answer. Depending on the help desk structure, that could mean getting someone out of bed who's trying to wake up and boot their computer, or it could be an offshore call centre that can reset a password fairly quickly. The result was that they took the learnings from that event and reworked their security policy from the ground up, with the OT engineers and the controllers in the room to set the principles for version 2. They came to a much better outcome: behind those locked doors the timeout wouldn't be so strict, allowing the controllers to be away from their desks without being locked out of their devices, and to respond to events in a timely fashion without waiting on the phone to the help desk. [00:53:03] James: It's a really classic example, isn't it, of how communication and collaboration are just so key. And the good part of the story is obviously that they learned from it. But the burning question that comes to mind for me is: Tom, was that one of your policies? Is this something you implemented? [00:53:19] Tom: I was actually going to say that an almost exact scenario played out for me, and it just goes to highlight the ignorance in IT around not just OT itself, but even this concept of mitigating controls. You talked about some fairly sophisticated physical controls restricting access to those environments. We had scenarios where IT were looking at OT policies, group policy and the like, and saying: these guys like to talk about how secure they are, but they don't even have an automatic workstation lockout policy. Obviously those decisions are there for good reason. I think it's a very clear example of how, in the IT world, we can be quite blinkered and see things as best practice without weighing the impact. If a user can't log into a workstation in IT, they're disrupted for a couple of minutes. In the OT world that could actually be a life-and-death scenario. So that's where safety and security intersect, and it's an interesting intersection. [00:54:15] Karl: This is probably a bit of a pet project of mine, in that I've written quite a few security policies and come across policies and standards for different organisations. It could be controversial, but my view is that there should be one standardised set of policies and standards across both, with the policy calling out the different classes of systems and what the mitigating controls are. Because you could have a workstation in IT that has to have AD and MFA and a lot of other things, and sitting next to it is the exact same machine, but it's connected to an OT environment and it doesn't, and we say: well, it's a different policy, so it can just have an admin password and that's it.
Well, they're the same bits of equipment, so we could apply the same policies, at least from a documentation perspective, so the policy still states that we need all of these controls, except in this environment, because we have these other compensating measures in place, and here are the risk factors associated with that. Or it could be that the controls are still applicable but the device simply can't do them: it's Windows 95 and it doesn't have those capabilities. So we mark that as a risk and say it needs to be updated at some point, and we know it's a risk we're carrying for that piece of equipment. But the policy we apply is still the same. We're not saying we need a high level of security on this device because we're calling it IT, but not on that device because we're calling it OT; that's just an arbitrary label. The policy can be the same, it's the risk environment the device sits in that differs, and at least that way you're documenting what that environment is. This is part of what IEC 62443 gets into with its security levels: the security level target, achieved and capability. Is that device capable of meeting the target? If it's not, document it, because it's a risk, and you may want to update that piece of equipment at a later date so that it is capable. But your security policy would still be the same across the board. [00:56:13] Tom: I really like that approach, Karl, and I think it's key to having IT better understand not just IT, but the entire security landscape of the organisation. And where you do have IT and OT, it's imperative that those worlds understand each other. [00:56:28] Karl: It also means that, from a risk perspective, any exceptions you've carved out have to be continuously reviewed: is that risk profile still acceptable, do we need to replace that piece of equipment because we can't tolerate such a reduced capability, or has the environment it sits in changed? Maybe it's moved somewhere else and no longer has all of those physical controls around it, so the cyber side of it needs to be increased. As part of your risk management process it can't just be document it and forget about it; that risk process has to be a living thing. [00:57:07] Tom: Yeah, brilliant. [00:57:07] Sam: What we're both saying is to get away from exceptions being added on later without the OT and engineering teams having been involved, and to collaborate on that up front, so you work out what those exceptions are rather than walking into them headlong later down the track.
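As a rough illustration of Karl's idea above, one policy with documented, reviewed exceptions expressed against the IEC 62443 notion of security level target (SL-T), capability (SL-C) and achieved (SL-A), here is a minimal Python sketch of what a living exception register might look like. The asset names, levels and review window are made-up assumptions, not details from the episode.

```python
# A sketch of a living exception register: one policy target per zone, with any
# shortfall recorded as a risk and periodically re-reviewed (illustrative only).
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Asset:
    name: str
    zone: str
    sl_target: int    # SL-T: what the policy requires for this zone
    sl_capable: int   # SL-C: the best this device can actually do
    sl_achieved: int  # SL-A: what is configured and verified today
    last_review: date

REVIEW_INTERVAL = timedelta(days=180)  # assumed review cycle

def exceptions(assets, today):
    """Return the exceptions that must be carried as live risks, not forgotten."""
    findings = []
    for a in assets:
        if a.sl_capable < a.sl_target:
            findings.append(f"{a.name}: cannot meet SL-T {a.sl_target} "
                            f"(capability {a.sl_capable}); upgrade or replace candidate")
        elif a.sl_achieved < a.sl_target:
            findings.append(f"{a.name}: capable but only achieving SL {a.sl_achieved}; "
                            f"remediate configuration")
        if today - a.last_review > REVIEW_INTERVAL:
            findings.append(f"{a.name}: risk review overdue; environment may have changed")
    return findings

register = [
    Asset("engineering workstation", "control room", 3, 3, 2, date(2025, 3, 1)),
    Asset("legacy HMI (Windows 95)", "control room", 3, 1, 1, date(2024, 6, 1)),
]
for finding in exceptions(register, date(2025, 7, 23)):
    print(finding)
```

The point of the sketch is that the policy target stays the same for everything in the zone; what differs per device is the documented gap, and the overdue-review check is the "living thing" Karl describes.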
[00:57:24] James: So I'm interested in contextualising the things we've discussed: what do we perceive the landscape being like moving forward? I look at my personal involvement, particularly coming from cloud and AI and big data and analytics. I'm one of those annoying guys Karl alluded to who wants to plug all of my cool IT cloud tools into his precious OT data. I'm probably blurring boundaries, and hopefully being mindful and respectful and engaging the right people in the way I do that. But are there some things emerging in the OT world? Is there a wish list of things we'd like to be able to do, or changes we see happening in industry that we'd love to be able to respond to? Where do we see all of this going? Sam, were there things that came up in your research and in the conversations you've had with people? [00:58:10] Sam: I think organisations have baked in a lot of what I'd call resource optimisation. I touched on it before, but there's so much to be gained. Even over a decade ago, when I was working for British Gas, I went out in the vans with some of the guys, not to fix boilers myself but to observe the guys who were fixing them, and even back then, with the smart boilers, they had equipment that gave an indication of where the fault was, so they could take the right spare parts for that particular model. The advantage for the customer was that they were likely to get a fix on the first visit, and for the organisation a faster response time and fewer trips back and forth to fetch the right parts. We've since baked that intelligence, that data, into the bottom line, because now we're getting all of that information and our resource utilisation is much more efficient. So as we go forward, more IT, more IoT, the benefits are really there. The challenge comes when that information isn't available, yet it's been baked into the bottom line, and you start to have real trouble with workforce mobilisation: perhaps you've reduced the workforce or moved people on to other areas because the work was handled by automation, and you no longer have the people to service that area. I've got another example. One company I was with some time ago put in some RPA, robotic process automation, to do some clicks on the screen on some virtual workstations to process orders. It worked really well, it was fantastic, and they managed to move that team on to other tasks. But then the Patch Tuesday cycle came around, the Microsoft workstations updated in the cloud, in the virtual environment, the screen resolution changed, and the RPA robots were still clicking in the same spot, clicking on the wrong things, so orders weren't being processed properly. There was a major incident, and we had to pull that team back together out of the different parts of the business that had absorbed them months earlier and get those orders processed manually. So the resource optimisation we've achieved with automation is really beneficial; the challenge, now that we've come to rely on it, is how we make sure we can continue services should that automation stop.
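Sam's Patch Tuesday story is a nice illustration of how brittle coordinate-based screen automation can be. Below is a tiny Python sketch of the failure mode and one mitigation; the library choice (pyautogui), the button image and the coordinates are assumptions for illustration only, as the episode doesn't say which RPA tool was involved.

```python
# Sketch of fragile vs. slightly more resilient screen automation (illustrative).
import pyautogui

def submit_order_by_coordinates():
    # Fragile: the button position was recorded at one screen resolution.
    # When a patch changes the virtual desktop's resolution or scaling,
    # this keeps clicking the same pixel, which is now the wrong control.
    pyautogui.click(412, 655)

def submit_order_by_appearance():
    # More resilient: find the control by how it looks on this run, and stop
    # loudly instead of clicking blindly if it cannot be found. Depending on
    # the pyautogui version, a miss returns None or raises ImageNotFoundException.
    try:
        target = pyautogui.locateCenterOnScreen("submit_button.png")
    except Exception:
        target = None
    if target is None:
        raise RuntimeError("Submit button not found; halt and alert a human")
    pyautogui.click(target.x, target.y)
```

Even the second version is only a stopgap: where the underlying application exposes a proper API, driving that API directly takes the screen, and the patch-cycle surprises, out of the loop entirely.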
[01:00:25] James: Karl, you're a huge fan of generative AI, and I know you've been doing a lot of exploration in this space. What do you think? Is there a point coming where, through AI agents, we improve their ability to adapt, and ultimately we put them in charge of our IT systems because they're more reliable than the humans? [01:00:41] Karl: Yeah, okay, that's really putting myself out on the line. Ultimately, I would say yes, but I think that's still a long way down the track. To tie that answer back into IoT, though: a few years back, when IoT was on the leading edge and everyone wanted to implement IoT of some kind, a lot of people wanted to put it into SCADA or OT environments, because those environments were seen as too slow, all outdated, all this old equipment, and the thinking was that we could put in these awesome little IoT devices and they'd do the monitoring so much quicker. If they're self-contained and you're managing them yourself, that's fine. But then they had to connect to a cloud, because they're the Internet of Things. So now you had things connected to the internet, your OT environment connected to a cloud environment, and you had to monitor and manage that cloud solution, with all the overhead and all the risk of managing a cloud solution that then connects into your OT environment. That mostly went by the wayside. I wouldn't want to see AI do the same sort of thing. We probably can do a lot of things better with AI, more from the IT or data analytics perspective, but let's not introduce a backdoor by having AI systems in the OT environment that are connected back through to a cloud system. [01:02:03] Tom: Now, if you could wave the magic wand and have one thing come in to improve OT security and security culture, whatever it might be, what would that be? Interestingly for me, during this conversation that's probably changed from where I thought it would be at the start. In particular, I like the point Karl introduced around having a united view of security policies, with IT and OT involved collaboratively in developing them and the exceptions agreed up front, because the working together of IT and OT is paramount, both from an understanding and a security perspective. To our earlier point, we have so much OT now interfacing with IT that it is a joint responsibility. So maybe just going around the room, if I could ask the same question of you all, starting with Sam: if we could wave that magic wand, what would it look like for you? [01:02:52] Sam: Yeah, we've touched on so much, and there's so much that can be done, but the one thing I'd probably want ahead of the others is that collaboration: crossing the floor. Whether that's an IT person crossing the floor to understand, from the OT side, the challenges they're dealing with and how we can collaborate to help each other, because we're part of one organisation, or vice versa, OT crossing the floor the other way, or the cyber security folks working together with the engineering teams. Because we're going to have to defend, and to do it successfully we're going to need to do it together, to protect the organisation's most essential assets. [01:03:31] Karl: I would like to see, and I don't think this will happen in a short period of time, standardisation of infrastructure, to reduce risk by having fewer touch points and fewer things that can go wrong.
I don't necessarily think that's going to happen anytime soon, but being able to say, yes, we've got standardised interfaces and protocols, upgraded so that we can run encrypted versions of things where we need them and turn them off where we don't, and standard infrastructure, operating systems and so on, would reduce our footprint and reduce the potential targets. [01:04:08] Tom: And are you talking standardisation there between IT and OT, or industry-wide standardisation? [01:04:14] Karl: From an OT perspective: in the same way that you have commodity switches, routers and firewalls in IT, having a similar sort of scenario in an OT environment, for example for logging and EDR and the like. Whether we'll ever actually get to that point I don't know, because a lot of vendors would have to choose and collaborate on specific solutions. But the aim is to reduce the attack footprint by not having esoteric, bespoke things scattered everywhere that people forget about, libraries inside things that nobody remembers they're even using. You end up with a whole lot of software and hardware attack points, and I'd just like to reduce that footprint. But it's probably magic-wand dreaming. [01:05:03] James: Yeah, it's interesting. For me, what I'd love to see is the opportunity for the things we're doing in the IT world, and some of the modern technologies we're adopting, to have a similar impact in the OT world, and to find a way to do that safely. Obviously I've done some really interesting things around analytics and machine learning, and we do plug that back into the OT world for predictive maintenance, which has human safety implications and aspects to it as well. We've learned a lot about managing distributed systems, about automating recovery from failure, and about building systems that are resilient to single-node failure. And listening to what Karl described, I think of things like API gateways and API specs as a way of abstracting immediate access to certain interfaces; perhaps we could look at some interesting methods of opening up some of the OT stuff to different modes of management while still protecting the assets that sit behind it. There's amazing opportunity, and there's some really good tech that can come to bear, but process, procedure, policy and prioritisation of expenditure are super critical in this industry as well. And to round that idea out: Sam, you're bringing these communities together yourself, and I think you're doing a wonderful job; you're building a bit of a movement. For any of our listeners who want to get more involved in the sorts of things you're doing, what's the best way for them to do that? [01:06:28] Sam: Yeah, sure, and thanks for shining a light on that. I do feel that we're better together. People say cyber security is a team sport. I have been holding events, so the best way to contact me is by email.
That's cs, the numeral four, ci, at icloud.com. We're just standing a few things up, but we've been holding events with the Australian Computer Society, ISACA Melbourne, Engineers Australia and a few other groups to try and bring people together across disciplines and across sectors. Because when you have a risk manager in finance and a chemical engineer having a conversation, they spark ideas off each other and see things from a different perspective, and then, bam: hopefully better security, hopefully our essential services are protected just that much better because of those conversations. So yeah, we'd love to have people come to the events and get in contact, and I look forward to seeing people there. [01:07:18] James: Excellent. Thanks, Sam and Karl, it's been really interesting and insightful to have you both on the show today. To Tom's point, I've certainly learned a bit about the way IT and OT can work together moving forward, and beyond that, about the collaboration between technical and non-technical roles, whether that's risk management, finance or whatever it may be, that has to come together to move the industry forward. With that, it's been an absolute pleasure to have you both on board today. [01:07:47] Karl: Thank you. [01:07:48] Sam: Excellent. Thank you. [01:07:51] Scotti: If you could use a little help or advice with modernising your IT environment, visit Cordant AU to start a conversation with us. V/O: This has been a KBI Media production.
