Episode 8

May 30, 2025

00:42:42

Episode 8 - Bytesized: Kubernetes, AI, Oracle, and More

DevSecOops

Show Notes

In this byte-sized episode of DevSecOops, Tom and Scotti dive into what they've been working on recently in the Cordant office, unpacking the power and pitfalls of modern tech trends, from Kubernetes and GenAI to cloud resilience.

Kubernetes in Focus
Tom questions the complexity of Kubernetes, while Scotti defends its scalability and abstraction benefits. Drawing from both home labs and enterprise deployments, they highlight how managed services reduce friction, enabling cloud-agnostic architecture and better DevOps alignment.

OCI Incident & Lessons in Trust
Reflecting on a real-world project from his time at Oracle, Scotti describes auditing IAM permissions at scale using Kubernetes. The pair then dig into the cultural lessons from a major Oracle Cloud Infrastructure (OCI) incident, advocating for transparency over blame. Tom stresses that resilience comes from what we learn, not whom we blame.

AI: Game-Changer or Crutch?
AI adoption is accelerating, with tools like ChatGPT and Claude now embedded in workflows. Tom recounts a colleague building a mobile app with zero prior iOS experience, using AI tools alone. Scotti sees AI as a thought partner: great for learning, risky if misused.

⚠️ Ethics & Risk
AI’s potential is massive, but so are the dangers. Open-source LLMs trained on exploits pose real threats. As Scotti warns: “Like any security tool, it can be used for good or bad.”

Key Takeaway
Balance innovation with governance. Transparency, culture, and intent define how we build secure, resilient systems.


Episode Transcript

[00:00:00] Tom: Welcome to the DevSecOps podcast where we explore the past, present and future of computing in the modern workplace. [00:00:12] Scotti: Your hosts are a trio of experts from Cordant, each representing different areas within it. A bit like a nerdy A team. So join Tom, James and Scotty for a regular, mostly serious podcast providing you with pragmatic advice and insights into modernizing your IT environment. [00:00:30] Tom: Welcome back to the devsecoops podcast. Now, we're down again a host this week, but I'm joined by my regular co host Scott Fletcher. How you doing Scotty? [00:00:38] Scotti: Yeah, I'm good, thanks Tom. How are you? [00:00:40] Tom: Pretty good I must say. Look similar to last time we were down a host. We are going to mix things up a bit and this week run of DevSecoops Bite Size where we talk about some of the things we've been working on in the Cordant office recently. Today's topics are going to scan from development platforms in the form of Kubernetes through to a recent cloud platform breach. And for those listening from the far distant future, we're referring to a March 2025 cloud platform breach. And finally wrap things up with the rapidly evolving world of AI. Now, Scott, we'll start with Kubernetes. I heard you in the office the other day talking about kubectl and pods and etcd and to be quite frank, I was seriously concerned for your welfare. Now, for those unaware, these are terms used by the Kubernetes crowd and I wouldn't call myself a denier or a skeptic, but I get this sense that Kubernetes is the industry's current solution. Looking for a problem similar to cloud in the very early days. Can you walk us through your experience, what you learned and tell me, am I wrong? [00:01:41] Scotti: Are you wrong? I know you're a big fan of saying the tail wags the dog and in this case I am a convert and I'm not sure if I'm going to regret to say this later. If we look at where applications and application hosting and application deployment, application delivery has gone over the last 20 years, there was a day that we used to stand up a server, we used to configure it with an IP address with load 1 SSL cert on it and then we'd deploy our application and then we would have to manage that like a pet server. We would have to treat it like it was mission critical 24. 7 and we've slowly moved along. You've said, hey look, we now use cloud. And that's true. We've got virtualizations we did on prem Cloud essentially with VMware and hypervisors and now we've got the extension of that into the cloud world. Kubernetes IC is an extension beyond that. After spending quite a bit of time with Kubernetes, getting it running locally at home in my home lab environment and then deploying it into a cloud environment, I can now appreciate why developers have chosen it as one of the key components for application development and delivery in today's world. And there's a few reasons for that. There was a whole bunch of things. There was a really steep learning curve at the start, certainly coming from someone who would traditionally use like a Terraform and then write scripts to deploy things in the cloud or largely just use cloud native services like Lambda Functions as an example, to kind of then take that concept of containerized applications and then deploy it into Kubernetes. There are some really interesting things that fell out for me. 
Things like automatic load balancing and recovery and being able to scale things without having to go and reconfigure a whole bunch of different services. And to have that neatly packaged into one place I thought was awesome. [00:03:30] Tom: And I think it's probably my. My near sightedness in all of this because I looked at things in the early days around setting up Kubernetes platforms and the like and it all seems so complicated. You'd start layering things on. You need an orchestration layer and then you had this service mesh and then you had an observability and monitoring layer as well that had to get in and. And control these things and manage the scaling and manage the clusters. And before you, you had about eight things laid on top of each other to get this thing to run effectively. And if you were only doing that for one or two containers or pods or whatever, it seemed like a great deal of overkill. But I guess that's probably where in moving up the stack and the cloud providers now all have sort of managed Kubernetes platforms that you can adopt. It probably takes away a lot of that complexity away from the users and just gives you a platform that is effectively infinitely scalable and something that you don't have to become an expert in in and of itself. [00:04:24] Scotti: I'd say it goes a little bit beyond that as well. You have essentially a ubiquitous environment. So if you have an on prem Kubernetes environment, largely. There are some subtle nuances here, but largely, if you've got a Kubernetes environment that's running in Oracle, I know we're going to talk about Oracle later. It's at the top of my mind. Or it's running in GCP or it's running in Azure. Essentially you can take your deployment and you can move it across all of those cloud providers. And it's one of the things that, as I was going through this process, certainly building things that would scale and then essentially adding new nodes and new clusters to end joining clusters together. I was saying, well, actually this is something that makes a lot of sense because if I'm a developer and I put all of my workload in a single cloud provider, a cloud provider will say, oh, you know, we do multi cloud and all this kind of stuff. Oh, but use our cloud native services. So use our serverless functions, whether that be FN functions in OCI or whether that be lambda functions in aws. But as you and I know, we've had engagements with customers where they've wanted to move all of their serverless functions from one provider to another. And it's not just a simple lift and shift. Kubernetes provides that abstraction layer away from the cloud provider to allow developers to be able to do that. And I think that's a super powerful feature. I guess perhaps that's not their initial starting point when they go to adopt Kubernetes. But if you are running Kubernetes in a cloud environment, you can now have a platform and an application that runs well across all of your cloud providers, doesn't matter, and then you essentially have redundancy. So in the case where it was the Sydney Data center outage across Azure and OCI both were affected and a lot of customers were down, you would actually be able to now scale it out to GCP and others and actually make your application work without too much heavy lifting. [00:06:08] Tom: It's a contradiction in my approach to things because I've always been big on abstraction and having things portable between platforms. 
So wherever you run things, whether it's on PREM or any of the cloud providers being able to take that workload and with minimum rework have that functioning. Because that's effectively what you're saying Kubernetes process provides is that complete abstraction away from cloud, proprietary in this case and even on prem, proprietary in the form of VMware, because I know a lot of the solutions from traditional infrastructure people. Well, you can run VMware in the cloud. Now, VMware is a dirty word at the moment, but this is taking it up the layer and abstracting both away from the operating system as well as any of the Individual cloud providers. [00:06:49] Scotti: Let's put it another way, right? You've led infrastructure teams before. How many times have developers come to you and said, hey, I want infrastructure that does this. And then next week they come back and say, oh, actually that doesn't work. We want it like this. How hard was that for you as an infrastructure manager? [00:07:05] Tom: That was very easy for me to say no, but very hard to keep everyone happy. And I think that that's the reality of it. This comes back to the tail wagging the dog, yet you had to draw the line somewhere. So you're suggesting that that's no longer an issue with kubernetes? [00:07:19] Scotti: Yeah, I am. And you actually have the ability to separate who's responsible for what elements. When we moved to cloud, we adopted the shared responsibility model and we understand, depending on whether we do is paas, SaaS, that we were responsible for certain elements and not others. And depending on our specific situation, we might choose one over another. This applies that similar concept. Let's say you as the infrastructure manager or the infrastructure team, you are managing the cloud environment. You've provisioned kubernetes in there, you've put all other security teams, put the guardrails around kubernetes, you have your pipelines to build things inside that EKS environment. Assuming that you've done all of the other prerequisites first, the developers are now free to build their applications however they like, and then they can then deploy those into kubernetes. So you have control of everything up into the deployment, and the development teams are then responsible for the deployment and all of the services and the containers and the actual applications themselves that run inside that EKS environment. [00:08:17] Tom: I guess the last nagging thing I've got is around complexity still, because even. Even if it's a service that you can consume easily and that that management of that platform is done by someone else, it still strikes me this complexity as you're. You're scaling out potentially infinitely. When things do go wrong, how do you start identifying where everything has gone wrong? Or again, am I thinking more around this pet mentality where you don't actually worry about that? What you worry about is just saying, well, you poison that and start with something new. [00:08:49] Scotti: I think we, you and I probably alike in this fashion. We still think of applications being more monolithic than smaller, compact, discrete pieces of code in the idea that you're then taking this monolith which has some inputs, there's a whole bunch of internal gubbins of an application that's running and Then at the end you get the outcome or the data goes somewhere in the containerized world. You're saying, well, actually I'm no longer now responsible for all of these other components. 
I need to build scalable infrastructure and applications. So instead of building all of that into one big application, I'm actually separating it out. So assuming that you're using open standards and you're using common things, you've now got containers for databases. I'm not advocating that you run your database inside Kubernetes. You would probably, for the reasons we've discussed, give that to the cloud provider for backups and restore and high availability. But if we're talking about the actual application inside a Kubernetes cluster, say you need to scale it, right? I've got some code that accepts some input. I'm then scaling that out automatically. So I'm creating a queue. I might be using RabbitMQ or something like that. I'm going to tell Kubernetes to monitor the queue and as the queue grows, I'm going to now tell it to spin up additional worker pods. So without me having to deal with scaling and stuff like that, I've essentially said, hey, my initial code sends some things to a queue, Kubernetes takes care of the scaling and it all just works. Noting that I'm no longer having to be responsible for the actual build and setup of the RabbitMQ server. I'm just using an image that's been provided, that I've just included and configured. So you're moving away from having to be responsible for all of the elements and the outcomes of the application, and you're essentially moving it to more configuration. So if we apply those kinds of concepts to how we've adopted cloud services, I think there's great value here. As long as you do the configuration right, touch wood, that's going to essentially help a lot of teams focus on the things that really matter. The queue needs to work, but do we really need to labor over a server when all it's doing is providing a queue? [00:10:57] Tom: Yeah, and I think that's really the nicest way anyone's ever said to me, okay, Boomer, get with the times. [00:11:03] Scotti: Hey, we're almost the same age here, so don't call me a Boomer either. [00:11:06] Tom: At least you've moved with the times. Up until now, I've sort of been stuck in my old ways. So, Scott, you're actually working on a real-world use case for your Kubernetes deployment as well. Can you talk through that a little for us? [00:11:18] Scotti: Yeah. My first introduction to Kubernetes was actually when I was still working at Oracle. I worked on a project that required a large amount of scalability for a bespoke application that needed to run in a highly secure environment. And I know that's quite ironic based on what we're going to talk about next, but I'm a real practical person. So for me to learn something, I need a reason to learn it. There's no point just going and following an instructional video on how to set up Kubernetes, because so what? So one of the tools I built when I was at Oracle that I actually used, and we've used recently, is a tool that helps customers that use OCI to evaluate their identity and access permissions. It's actually a real struggle for anybody listening that hasn't gone and looked at how OCI permissions and policies work. I'll give you a real 30 second overview. For customers that aren't familiar with how Oracle does identity and access management, they have this concept of compartments. You have a root compartment, which is your tenancy, and then you have a nested hierarchy of compartments. And then you have users, groups and policies.
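To make the queue-driven scaling pattern Scotti describes a little more concrete, here is a minimal sketch of what one of those worker pods might run. It is illustrative only: it assumes RabbitMQ via the pika client, a hypothetical queue name ("user-queue") and a hypothetical process_item function, and in a real cluster the number of worker pods would be scaled on queue depth by something like KEDA or a Horizontal Pod Autoscaler rather than by this code.

    # Minimal worker-pod sketch: consume one message at a time from a RabbitMQ
    # queue. Queue name, host variable and process_item() are assumptions.
    import json
    import os

    import pika  # RabbitMQ client library


    def process_item(payload: dict) -> None:
        # Placeholder for the real per-message work a worker would do.
        print(f"processing {payload}")


    def main() -> None:
        host = os.environ.get("RABBITMQ_HOST", "rabbitmq")  # Kubernetes service name
        connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
        channel = connection.channel()
        channel.queue_declare(queue="user-queue", durable=True)
        channel.basic_qos(prefetch_count=1)  # one unacknowledged message per worker

        def on_message(ch, method, properties, body):
            process_item(json.loads(body))
            ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

        channel.basic_consume(queue="user-queue", on_message_callback=on_message)
        channel.start_consuming()  # blocks; Kubernetes scales the number of these pods


    if __name__ == "__main__":
        main()

Each pod is stateless, so adding or removing replicas as the queue grows or drains is purely a configuration concern, which is the point Scotti is making.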
And when you write a policy, you say a user or a group can do a thing in a particular compartment, and then you essentially provide the list of permissions that they can perform. The real crux though is that they're inherited. So if you create them at the root, it applies to all of the nested compartments below, but you can also add policies at any of the other compartments. So as you can see, it gets really complex really quickly. And as someone that had to administer a tenancy of about 200 users when I was at Oracle, it's not a simple task and people create policies all over the shop. A lot of the time they're just going to create them with allow any user to do anything in my compartment. And that 100% goes against the concept of least privilege. So at the time I thought, hey, there's got to be a better way. And Oracle to this day still doesn't have a tool that does this. So I built something that would evaluate all of the permissions based on the compartments and the users and the groups that those users belong to. The problem though is that it would take a long time to run. And with some of the things that we're going to discuss next, we've had a lot of customers come to us and say, hey, how do we do this? I could have done this the Boomer way and said, let's just scale it out, give it more CPU, give it more memory. And that might have worked, but probably not very well. The real problem is that you have to traverse the whole tree every time for every user. And there are probably ways I could optimize my code, but I opted to learn Kubernetes. So that's where the queue comes in. Every user gets added to a queue and then I scale out my worker nodes to essentially evaluate the permissions for every user, and then those results are stored in a persistent volume claim, which you're probably looking at me now going, what is that? Essentially it's storage that you can attach to the actual worker nodes. So the results are stored in a shared location, which could be something like Elastic File Service or an EBS block store. It could be local storage or an NFS share on your network. And that essentially means that we can have customers now with large environments, and every time they make a policy change, we can go in and re-evaluate what the effective permissions are and whether the changes that we've made are reflected in what we now see as the effective permissions for a particular user in a group. And so we're able to do this in a scalable way, but also turn it around quickly. As far as performance goes, it's about 50% more performant. So instead of it taking hours, we're now taking minutes to process the same amount of data. And all I've done is essentially split it out from being one big monolith that was originally in a Docker container to two Docker containers, where you have the application and then the actual worker components running in a scalable pod set. [00:14:59] Tom: Yeah, and I'm like you, I'm a seeing is believing sort of person. When you demonstrated it, I too became a believer and parked my preconceived notions around Kubernetes and microservices, because I can genuinely see a use case where it completely makes sense and, yeah, I just have to move with the times. It is unfortunate circumstances that led to us working a little more with Oracle Cloud than we had been up until now. I would have thought that was just because of Cordant's increasing profile and the continued proliferation of OCI adoption in Australia.
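The compartment and policy model Scotti just walked through can be sketched in a few lines. This is a simplified illustration of the inheritance rule only, not Oracle's actual evaluation engine or SDK, and the tenancy, group names and policy statements are made up:

    # Illustrative sketch of OCI-style policy inheritance (simplified; not the
    # real OCI SDK or evaluation engine). Compartments form a tree, policies
    # attach to compartments, and statements apply to all child compartments.
    from dataclasses import dataclass, field


    @dataclass
    class Compartment:
        name: str
        policies: list[str] = field(default_factory=list)      # statements attached here
        children: list["Compartment"] = field(default_factory=list)


    def effective_policies(node: Compartment, target: str,
                           inherited: list[str] | None = None) -> list[str] | None:
        """All statements that apply to `target`: its own plus everything
        inherited from ancestors on the path down from the root (tenancy)."""
        inherited = (inherited or []) + node.policies
        if node.name == target:
            return inherited
        for child in node.children:
            found = effective_policies(child, target, inherited)
            if found is not None:
                return found
        return None  # target is not in this subtree


    # Tiny hypothetical tenancy to show the idea.
    tenancy = Compartment("root", ["allow group Admins to manage all-resources in tenancy"], [
        Compartment("prod", ["allow group DevOps to use instances in compartment prod"], [
            Compartment("prod-db", ["allow group DBAs to manage databases in compartment prod-db"]),
        ]),
    ])

    print(effective_policies(tenancy, "prod-db"))
    # statements from root, prod and prod-db all apply to prod-db

Even in this toy version you can see why the evaluation fans out: every user in every group has to be checked against every statement on every path back to the root, which is the tree traversal Scotti ended up parallelising across worker pods.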
But unfortunately, for those unaware, or those who may not have all the details, there was a recent breach of one of Oracle's identity servers. Now, a rogue actor, rose87168, came out with a release of about 6 million customer records allegedly from Oracle Cloud. There was a CVE that was exploited, with a 9.8 severity score. I guess Larry's been wrapped up in Trumpism too much. He came out saying fake news, fake news. But it turns out after a little bit of time, look, we did some internal investigations ourselves, had a look at some of the names. They seemed to correlate at least fairly tightly with a range of known OCI customers. So I thought there was something to it. And it turns out a couple of days ago Oracle actually came out and started notifying customers, saying yes, there was a breach of the OCI Classic resources that they still have online. We've been contacted obviously by a number of customers saying, what do we do now? And I guess, can Oracle still be trusted, is the question we're getting asked. So yes, not the news a few weeks ago that we wanted to wake up to. [00:16:37] Scotti: This is an interesting one. You and I, I think, not long ago spoke on a podcast about data breaches, specifically around Optus, and what my general take on it was. As this is all going on with Oracle, we should also remember that the Australian superannuation funds are having their accounts compromised left, right and center because they haven't enabled MFA. Well, MFA has been around for a really long time, so my view is that you are going to have a breach at some point, sooner rather than later. I think the only thing that an organization can really control at that point is how you deal with it. We spoke a couple of weeks ago and I was banging on about how much I loved Oracle and about how much I loved OCI. I'm not as much of a fan of Microsoft. Like, I actually really like Google and I really like AWS as well. Oracle has dropped the ball as far as dealing with this incident. It's very hard though. We have to understand from their point of view, if they didn't have the information available, like what are they going to say? Hey, we had a breach, but you know, based on the fact that other people think we've had a breach? They actually have to go through due diligence and process to start with. And I know you're giving me the daggers like I'm giving them a way out here. But it is very true. Like a lot of organizations, you and I have worked for some where there have been issues or we suspected there were incidents, but you actually have to follow the process before you can release it. Do I think they took too long? Most definitely. I also don't think that just flat out denying it was a good idea from the start.
And I can say that because the second generation cloud that came out was very, very, very good. It was re-engineered from the ground up with a particular focus on security. But it goes to show that no matter how good the security is of your new stuff, if you leave your old stuff hanging around, there's a potential there for breach. And that's why we bang on so much about decommission, decommission, decommission. Don't leave old stuff hanging around. Patch your old stuff, make sure it's not out there and exposed. Reduce your risk and exposure. [00:19:10] Scotti: I think you have to take it from two angles, right? You have to assume that as a customer, right? And let's be very clear, there's nothing that customers could do to have prevented this. If you're an OCI customer, you were caught in the crossfire simply because you were an OCI customer. We were just talking about the shared responsibility model. The whole concept of moving to the cloud is that the cloud provider is going to do some things better than you can do yourself. Oracle really proved through this whole incident that they actually don't do the basics very well. They don't have a good grasp of what's still out there. They don't have a good grasp on patch management. And ideally, yes, it was Gen 1. The whole time I worked at Oracle, they never mentioned Gen 1. That was kind of like a really dirty word. We only talk about the Gen 2 cloud, we don't talk about the Gen 1 cloud. In fact, I'd never seen it until you showed it to me the other day. So from my point of view, you really have to look at it and go, Oracle, where are the bits that have been gaps? Because you can't just look at it from a defensive, or blue team, position. You can't just say, hey, we've got these policies and procedures. You actually have to look outside and say, well, what are my unknown unknowns? And that's the bit where, I've got to be honest, I can't imagine them going out there and saying, oh, we're not going to patch the thing. I imagine that it was a series of unfortunate events that led to the box being not decommissioned, not patched, exposed to the Internet with not a lot of monitoring on it. Because usually when you shell a box there's a lot of alarm bells that should go off in a cloud environment, certainly if you're the cloud service provider. But once again, we're just kind of speculating around how it actually ended up like that. The only people that actually know are Oracle, and I really think that they need to come out and actually explain that. Well, back to my point earlier: hey, you can't stop a breach from happening. It's going to happen at some point or another. I think the bits where you can really show your competency are how quickly you detect it, how quickly you respond to the incident, and how quickly you communicate how you dealt with those former two things. And if you look at some really good examples, there's been a journey for a lot of cloud and SaaS providers over the years. You've got Cloudflare that did a really terrible job in the early days and then they got really good at it. There's another really good example in one of the Nordic countries, a company called Norsk Hydro, where they had an issue. I think it was ransomware and it took their entire OT environment offline. Interestingly though, you would think on that day that their shares would have tanked, but actually the reverse happened because they handled it so well. In fact, they call it the Norsk Hydro effect.
And so what I'm looking at here is I think Oracle perhaps needs to get out of the boom of you and go, hey, maybe we need to do things a little bit differently. The market has shown ultimately that's what everyone cares about. If you're a public listed company, what is your share price? When this kind of stuff happens and it's shown if you handle it well, your stock price is going to go up. [00:22:06] Tom: So I think the fourth thing there, and you've highlighted that quite clearly, Scotty, is that what have we learned from it? Because as security professional yourself and working adjacent to security in the roles I've had, we've always sought to have not a punitive approach to security incidents, but a really collaborative what have we learned? How can we do things better approach to things and encourage people to come forward when they identify security issues as well, without that fear of retribution. And to your point, incidents are going to happen, breaches are going to happen. It's just how we respond to those and how we take our learnings and apply that in future. And I really hope Oracle has learned something through this and handles things better, both from a customer communication point of view as well as a general security hygiene point of view. Because to be honest, when customers have come in as a result of this and said, can we still trust Oracle, should we still be using OCI? I would say absolutely from a gen 2 OCI point of view, I think it's a really robust, well built cloud with great, great security operations there. No one's perfect and we make mistakes. Do I trust Oracle? I guess the question is, does anyone trust any large multinational corporation? [00:23:19] Scotti: There's two things that are going to happen. Someone else is going to have a breach and then perhaps we're just going to forget about this. I think the other part is that this is a really great opportunity for Oracle to, like you said, come out and say, hey, here's what we learned, here's what we're doing better. I certainly think they could do things like maybe scan their perimeter for vulnerable things that shouldn't be on the Internet. That's probably a really good start for them for something for them to say. And I think that is going to be where, like if they, if they look and they look and look introspectively and say, what could we do better? How have others that have been in the situation before us, how have they handled it and how are they going now? I think they really do need to change the mentality. But you have to remember this is still the organization that not so long ago like wrote it into their terms of service that you weren't allowed to reverse engineer their software because it was against some sort of agreement that someone signed. And I'm going, hang on a minute. You manage like if it's out in the wild, people are going to look at it. You can't just simply hide behind the legalities of oh well, you've contractually said you wouldn't. I just that doesn't align with my personal values as a security professional. So I'm really hoping that this is the inflection point for Oracle to turn around and say, you know what? Yep, we've really messed up here, but this is exactly what we've learned to your point and we're now going to go forth and this is the new way that we're going to do it. Because like you said, I actually really like how Oracle does it. I know I said the identity and access management was a nightmare to configure, if you get it right though. 
And we've seen some state government customers that have done this, and I was super impressed how well they went about doing it and how well they went about managing it. I think this is something that, if Oracle gets past this piece, they could certainly turn into a positive. [00:25:03] Tom: Yeah, look, in the immortal words of Slim Shady, I'm just playing, America, I still love you. [00:25:08] Scotti: Yeah. So speaking of doing better, there's a lot of us in our office that have been embracing this wonderful world of AI. And I use that term colloquially because there's a whole bunch of different services that make up AI. I know there are people that are going to listen to this that are going to cringe if I don't make that distinction. We have machine learning, we've got generative AI and all the other variants in between. Largely in our office we've been looking at the use of generative AI and certainly how it can help us in our day to day operations, both of the company and some of the work that we do. Because Tom, I'm really interested, like, is this just going to be the next blockchain and NFT craze? [00:25:45] Tom: Growing up in the 80s and 90s, I saw VR as the thing in the 90s and I said, this is going to be the next big thing. There were movies that came out about this amazing VR world and it burnt brightly for a couple of years and then fizzled away. And again, talking back to when I joined Oracle, it was all about the blockchain, how the blockchain is going to revolutionize data, and every company and every enterprise across the world is going to be using blockchain. And then a couple of years later, again, no one's really talking about corporate blockchain adoption. So I was a bit skeptical around AI because of course AI has been around for a very long time. Conceptually, we talk about the early days, again in the 80s, playing computer chess, and we had actually programmed neural networks to play against human players in chess, and it wasn't until fairly recently that the computers could beat a grand champion. But we've definitely seen an acceleration over the last couple of years, and I think it was really ChatGPT from a consumer perspective that really thrust AI into the general consciousness and spotlight. And yeah, true to form, I've sort of been resisting it for the last couple of years, but it's kind of been hard to avoid and we've been talking about it in the office. One of our guys, Carl, has entirely developed an application. He works with wildlife preservation and he's written an application called Koala Tracker, effectively from scratch, with zero knowledge around developing for iOS. Within a month he's got this really spiffy looking application up and running through the help of, I think, a combination of Claude and Grok. We've just had this sort of boom of a think tank in the office. How can we use AI to improve what we do, and what's the impact going to be for both us and our customers? So, yeah, ever the initial skeptic, I've started embracing it recently as well. Maybe, Scotty, we'll come back to my sort of dabbles in it later. But what have you been using AI for? [00:27:41] Scotti: I think I need to bring it back a little bit and say it was a real challenge for me to really want to dive into it. And it's not because I didn't think it was cool. I had flashbacks of ELIZA. Do you remember the early chatbot, which was one of the early tests of the Turing test? Like, I thought it was unreal.
I went, hey, this is really cool. I've been shown all sorts of things. It makes really good poems. It will talk to you. There's all sorts of very interesting use cases that I've seen from a consumer point of view, but from a business point of view, I really did struggle for two reasons. The first one was that it felt like, hey, I'm just outsourcing all of my work. I quickly realized, though, that there are ways that you can use it, because I was really concerned that I don't like just being told the answer. Coming from an offensive security background, there's a real desire for me to understand how something works to the nth degree. Like, I absolutely have to know it inside and out. And that's where my understanding comes from. A bit like playing with Kubernetes, just doing a lab environment isn't really that useful. It doesn't tell me how things work in the real world because you've just followed someone's very basic instructions. The same thing applies here with AI. So I've chosen a way to use it where I still exercise my brain, and instead of asking it to just give me something with a very vague prompt, I've actually said, hey, here's what I'm doing, here's my inputs. I know I'm probably giving the way that I think away to a large language model that's then going to be retrained. [00:29:05] Tom: But. [00:29:06] Scotti: And then I ask it questions like, well, hey, how could I improve these things? What are some of the things that I've been missing? If I'm a different persona, how would I now articulate this to them? And I think that's been a real shift in my mind. And for example, I've been writing another application recently and my IDE has just popped up and said, hey, do you want to have a trial of our AI tool? And it will now document your code and it will suggest fixes and it'll even pre-fill what it thinks your method needs to look like based on the other code that you've written. I've gone, oh, this is unreal. I think, a bit like Carl's app, which I wish he had called Koala-ty, it's a really great way forward for people to expand their skills. I think you're really going to have a divergence between people that just use it to avoid work, and then you're going to have people that essentially use it as another tool. No different than we would use a database to store data or an analytics workspace to perform analytics queries to gain insight. We're essentially just using a different tool. It just interacts with us in a way that feels more natural. So I'm a huge fan of it. I can definitely see, a bit like Carl, if I'm not sure about how to do something, I can provide it. A lot of the time it hallucinates; once again, the differentiation is between am I just expecting it to give me the answer, or am I using it to kind of get me to the next step without having to go and read an absolute bucketload of documentation that largely is incorrect or out of date.
If we take that a step further and start using that to help architect solutions. I found time and time again because I understand what's possible and what's not. It would give me flat out incorrect or outdated information and I'd have to correct it and actually got sick of the prompt saying, hey, hey. Actually, you're right, this is the correct answer. I'm like, why couldn't you give me that the first time around? I think similar to the way I think the first wave of successful people in IT were actually the ones that knew how to use Google very, very well and could separate the wheat from the chaff. Similarly, the next wave of successful people in technology will be those that can write really intelligent queries for AI and then know how to drill down into that and to your point, look at the data it's analyzed and actually make sure and validate the veracity of the information it's providing. So I see it as that natural evolution from Google because I've been resistant to its use. But I can see some great areas. And one of the things we set the team on recently was We've got our SharePoint repository that's full of our internal policies. It's obviously full of our internal statements of work and all sorts of things that we've prepared. From a pre sales perspective, being able to search that intelligently, providing the metadata for that is a very laborious manual task that's error prone. But to get AI to do that for us and analyze what we've got there and then provide intelligent answers well beyond what I thought we'd be able to to do. So as simple as, hey, can you tell me what the policy is on footwear at Cordon through to, hey, I'm trying to do a Citrix proposal. Who's the best person to speak to internally? That used to be a case of ask around and hopefully you get the right person that knows stuff. But because it's got all this information, it can say, well, Vaughan and Jerry have worked on Citrix proposals in the past and they're the people you might want to start talking to. So I reckon there's this great opportunity for acceleration and knowledge and things that in that case, just the general onboarding and general day to day tasks are faster. But I really see the risk there being the next challenge that we have is around governance, security and fair use of AI as well. [00:33:11] Scotti: Yeah, you'd be surprised when I asked it the question hey, what's the dress code? And it said no shorts and no ripped jeans of two things that I think I've worn every single day since I've been here. It's been nearly two years. So there was some very interesting eye opening parts. I would agree on the governance and security piece. I think a lot of people like the AI today there is a quite a high barrier to entry. A lot of people have this concern. It's no different. We've been talking about cloud this morning as well and the whole concept of well, is the cloud provider in a better position to offer me more like more security utility than I'm getting today or that my team can deliver for me? I think we're slowly starting to adopt that when it comes to AI as well. And one of the key ones for me and I saw a really interesting proposal, it was from a CRM company, I don't think I'll name them but how they were handling the use of generative AI. So a lot of lead generation, a lot of email outreach, but contextualized outreach. And it was like well the customer had said that was using this SaaS based solution said hey, we don't want to provide you the customer names. 
And they said, hey, that's fine, all we do is we'll just flag it. So you've got some on-prem data; before you send it to us, just mask it with these fields and do a find and replace. And I went, actually, that's really clever. So you can provide them the anonymized data. It removes the privacy concerns around, hey, you've got this information that could identify a particular user or a group of users or a particular demographic. Certainly when it's leaving your organization and it's going into a model which is trained by a third party, which is probably, if we think about the supply chain, maybe five providers deep, running in someone's data center somewhere else in the world, not even in the same country. Right? There are ways and means that you can govern that. In the case you were talking about before, we were using existing Microsoft technologies which we understand really well around governance. But largely, I think, the use of it is to expedite things. Like, I don't really want to go and read my employment contract, but if I can ask questions around it and it just tells me that I've not been following the policy for two years... [00:35:20] Tom: I think that's a great win. This time around, you ask the questions. Yeah, I think it's real and I think it's an irresistible force coming at us now. But I think we probably are going to need to dedicate an episode unto itself on this, because you touched on security and having tools that help security professionals and practitioners. Equally, from an offensive perspective, AI is something that can be very, very dangerous. I'm not suggesting T1000 type attacks on humans, but AI is very, very dangerous in the wrong hands. [00:35:53] Scotti: I know we're not supposed to agree with each other so much on this. There are some interesting points around large language models. Everybody sees the fun side of ChatGPT and Grok and Claude and all the others. There's DeepSeek. Interestingly, I'm going to say that some of the models are better at this than others. There is one model that I've been having very lengthy discussions with about exploit development and other things, and I was actually quite shocked, right? Like, after I built a rapport with it, saying, hey, I'm doing this great security exercise and here's some things that I'd like you to help me with, and this is for a demo, and that's for a demo, it's like, hey, yeah, here's how you can go and do all of these things. I said, great, so now I could take that and apply it in a malicious context. Obviously I didn't say that, because I wanted it to continue to help me. But along those lines, those are still the commercial off the shelf models. There are models out there, though, that have been specifically trained on things that your typical common models aren't going to help you with. Things like buffer overflow exploitation, or things like, hey, remote code execution or command injection, or how would I go about exploiting this piece of code? There are a whole range and raft of models that exist that have been trained that you can download and you can run yourself today. Like any security tool, it has duality to it. So you have the ability to use it for good or for bad. Like a lot of security tools or pen testing tools were designed by security professionals to help advance the profession and enable people to be able to detect these things before they get exploited. But they're also used by attackers for nefarious purposes.
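The mask-before-you-send approach Scotti describes is essentially a find-and-replace with a reversible mapping kept on-prem. A minimal sketch, assuming you already hold the list of customer names to protect; the names, placeholder format and helper functions here are hypothetical, and a real implementation would also cover emails, phone numbers and other identifiers:

    # Minimal sketch: mask known customer names before text leaves the
    # organisation, and unmask the response that comes back. Names and the
    # placeholder format are hypothetical.
    import re


    def mask(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
        mapping = {}
        for i, name in enumerate(names):
            token = f"[CUSTOMER_{i}]"
            mapping[token] = name
            text = re.sub(re.escape(name), token, text, flags=re.IGNORECASE)
        return text, mapping


    def unmask(text: str, mapping: dict[str, str]) -> str:
        for token, name in mapping.items():
            text = text.replace(token, name)
        return text


    masked, mapping = mask("Acme Pty Ltd raised a ticket about MFA for Acme staff.",
                           ["Acme Pty Ltd", "Acme"])
    print(masked)                    # placeholders are what the external model sees
    print(unmask(masked, mapping))   # original names restored locally

Only the placeholder version ever reaches the third-party model; the mapping that can re-identify a customer never leaves your environment.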
And these large language models I see, or any sort of machine learning in this category would fit into both of these categories. Yes, it can be really useful. We see it for spam detection and account takeover emails in mail products and mail gateway products. We also now see, like I said, these, these language models that are able to read code at a ridiculous pace, way faster than any human could ever do it, and then essentially say, hey, here's some exploit code that's going to exploit this buffer overflow in this thing. So quickly you're moving from where we were perhaps a little bit more balanced as far as asymmetry. When we're talking about AI, I really see there being an AI revolution. We've kind of used AI for some very simple tasks to automate some simple tasks like checking mail and releasing mail for a mail gateway. So we've now really moved the needle as far as asymmetry goes. It's now flipped, I think, to the advantage of the attackers. It was largely always there. But I can see that organizations aren't employing some level of AI driven defenses. And I, I realize some people are probably going to cringe when I hear me say that as well. What you're actually going to find though, if you're not using AI to check your code and all your dependencies, for example, to look for buffer overflow exploits, but on the other hand, you've got a model that's looking at the exact same code that's trained specifically to find and exploit these, you're already at a disadvantage. And I think that's where we are now going to start moving and we'll see a lot of change in the security industry to kind of start countering this threat. [00:38:58] Tom: Hey, I think you and I really changed people. Just going through the conversations today, there are things that both of us have said that I don't think Tom and Scott from six months ago would have, would have thought we'd ever be saying. So we evolve. [00:39:10] Scotti: Yeah, look, I never thought I'd say that. I have developed this love hate relationship with kubernetes. I was, I was squarely in the camp of cloud native or the old monolith way. But I'm a convert. [00:39:22] Tom: Brilliant. Before we wrap things up. Yeah. I touched on T1000 earlier and it's something just sprung to mind as I had this picture of the Terminator running through my head. What is the depiction of technology in movies now? I actually think that that's a great piece of science fiction. But what is your depiction of technology in movies that just makes you cringe? You know, the thing you look at as a security or it professional and just shake your head and hope no one's looking, thinking that that's the sort of stuff that you do. [00:39:49] Scotti: It's funny you mentioned T1000. I went to the Melbourne Motor show last weekend and they had Optimus there and I was so disappointed that it was only a model, a full life size model. But I was thinking, in my mind I'm going, yeah, this thing, it looks quite menacing and if it really wanted to have a go, it, I'm still going to buy one. Let's just flatly get that out on the table. As soon as it comes out, I am going to buy one of these things. Even if it does nothing. I just want it to be able to talk to me and walk around the house. But as far as things that I find really frustrating in movies, there's this one and it always has burned into my brain. And there's two scenes in Swordfish, which happens to be one of the best movies of all time, I think. But there's two scenes. 
There's the first one where he's given a time limit and he's basically told he's got to crack this crazy cipher, and he's done it in maybe, I don't know, 20 lines of code and does it in like 45 seconds. I'm thinking, yeah, it doesn't happen like that. And then the other part where he's creating the malware and it's on about nine different screens and then it all compiles down into this beautiful geometric cube, and I'm going, no, that's also not how hacking is. It's actually not that exciting. There's something about me that goes, when I see how quickly people do it and the visual representation, it just triggers me. What about you? [00:41:06] Tom: Yeah, mine's along similar lines. For me, it's the one where some junior developer that is a nobody is writing this code on the computer and all of a sudden the CEO of the company just comes and looks over their shoulder and starts running his fingers across the screen, saying, this code is exquisite. It's like, how the heck, from one screen of code, which is probably just a set of loops or something like that, can you tell that the code is exquisite? It's comical. And of course that person gets fast tracked all the way up to lead developer. Somehow. It's just another case of that's not the way it happens. It's movie magic. It's BS, to be quite frank. And I don't know, these young aspiring developers that just think, if I make my code exquisite enough, I might get that fast track to the top of the organization, they can be fast tracked. [00:41:50] Scotti: To the top of the company, at which point they're going to say, hey, we need to use Kubernetes and AI to build the thing. [00:41:57] Tom: Just to wrap up today's insights: Scotty says there's more to Kubernetes than meets my completely uneducated and biased Boomer eye. Oracle's decommissioning and customer relationship skills kind of suck, but their new cloud is still pretty good. And AI is not just another fad. The planets have aligned and those who learn to use it right are gonna flourish in the next stage of IT. Or at least until our robot overlords take over and ruin it all. Thanks again for tuning in to the DevSecOops podcast. We hope you've had as much fun as we've had this week. And as always, stay safe. [00:42:28] Scotti: If you could use a little help or advice with modernizing your IT environment, visit Cordant Au to start a conversation with us. This has been a KBI Media production.
