The Meridian Point Podcast
Guest: Tom Stiehm
Host: Kumar Dattatreyan
========================================
KUMAR: Hi everyone, Kumar Dattatreyan here with The Meridian Point, and I'm pleased to introduce Tom Stiehm, who is our guest today. He spent three decades building software and leading teams before becoming one of the country's foremost voices on DevSecOps, the practice of baking security directly into the software delivery pipeline. As the CTO of Coveros, he trained engineers and executives at major financial institutions and government agencies, and helped organizations move from releasing software every two years to releasing every sprint. He also co-authored research on using AI to automate behavior-driven development. Now in a new chapter of his career at Steampunk, he's watching the AI wave hit the software world, and he has some very specific opinions about what's about to go wrong. Please welcome Tom to the show. Thank you for joining us today, Tom. I appreciate you being here.
TOM: Thank you.
KUMAR: All right, so Tom, you and I both spent some time at Fannie Mae. You have a story about a hurricane in Houston that I think perfectly captures what real business agility looks like. Tell us what happened, because it's a story that most people in transformation work have never really seen come to life the way it did there.
TOM: Yeah, so Fannie Mae had been trying to switch to Agile for some time, and we were both there as Agile coaches. When I started working with Fannie Mae, if a project was releasing every two to three years, you were doing pretty well. One of the teams I worked with had embraced Agile to the point where they had started to release at the end of every sprint. Then they hit this challenge: a hurricane struck Houston, and federal legislation was passed saying that if you lived in Houston and were affected by the hurricane, you would get mortgage relief. Before Fannie Mae became agile, there would have been no time to do anything with the software. It would have been all back-office modifications, and there would have been a lot of mistakes and errors. It would have been hard on everyone. The team I worked with said, this is the perfect example to show what we can do. So they canceled their current sprint, went back to planning, and focused on what they would need to change to provide relief to these people through the software. They had about four months to do it. Because they had this business agility ingrained in them, they were actually able to get the changes done in three months and have them tested for another month before they had to go live. That allowed them to succeed. They had far fewer problems with back-end modifications, with people not getting covered, and with all the issues that would have come from manual workarounds, because they were able to quickly adjust what they were doing and deliver a solution that, until the day they started replanning, they had no idea they would need to build.
KUMAR: So what made that possible with that team? This was one team out of many at Fannie Mae, all in varying degrees of agility. What was the environment like that allowed them to work with that level of agility?
TOM: I would credit the Scrum Master they had at the time. She was very proactive and had done a lot of work with the team to get to the point where they could release on a regular basis. The team itself had also done a lot of good work. They had focused on building tests so they could refactor the code, change the code, and know if there were problems by running those tests. They had gotten into a cadence of releasing. They really embraced the idea of being agile and figuring out how to do it. They had been the most progressive team I worked with. About a year before this happened, they had gotten to the point where they were releasing every sprint. It was really the team and the people on it that were able to make this shift and build that business agility.
KUMAR: Yeah, but it also speaks to the leadership of that team, the people and the system that empowered them to be able to do this. Wouldn't you say?
TOM: Yes, definitely. They embraced the idea and they embraced working with the team to be more agile. As a coach there, I actually spent more time with the leadership, the manager and director, because the Scrum Master was really good and she would ask questions and we would talk things through, but she genuinely led the team. Helping the management make that mental transition to letting the team drive the work they were doing was what made the difference.
KUMAR: That's a great point. I don't know exactly when you were there. I was there for a few years on and off and they did progress quite a bit over that time. I could definitely see the changes from my first stint to my last. But it wasn't uniform. It was pockets of agility here and there. Towards the end of my last stint, most teams were doing something agile, but not every team. What would you say was a catalyst for that broader shift at Fannie Mae?
TOM: I think the teams that succeeded were the ones that had management really supporting the idea of change and embracing it. The first Agile project I ever worked on was actually a Fannie Mae project in 2002, when I was just a member of the team. We introduced Agile to them. There was one pioneering director who said, yeah, let's try it and see how it works. The rest of the organization was waterfall. We actually did a release with everything we had signed up to do in six months. Their response was, great, we'll put it on this UAT server and it's not going to get released for another six months because we have all these other things to do that fall within our waterfall process.
KUMAR: So it sat on the shelf for six months?
TOM: Yes, it did. And compare that with the last time I was there, where people were actually able to release every sprint. That was a huge change. But it happened because management embraced that change and wanted it to happen.
KUMAR: Yeah. Interesting. That's a good story of the arc of transformation that Fannie Mae went through. I'm not sure where they are today, but hopefully they're still agile, nimble, and responsive. I want to shift to another topic, really around application security. In our preparatory conversation, you called application security the poster child for third-class citizens in software development, behind even QA. What did you mean by that, and how does DevSecOps disrupt that dynamic?
TOM: Well, security in a traditional software development process, and even in a lot of the early Agile processes I was part of, was the thing you did in the last week or two before release, when there was really no time to fix anything. So you ended up negotiating which security concerns would go into production because you didn't have time to fix them. You just hoped to address the worst ones. The most security-aware organizations would halt a release briefly to address the most critical issues. The least security-aware would just release anyway. A bad quality bug could get attention, but bad security bugs couldn't get enough attention, because the philosophy of the organization was that the business features got the attention. Security was an after-the-fact concern. It wasn't baked into the culture of the organization.
KUMAR: Interesting. So how does DevSecOps change that? How does it solve that particular problem?
TOM: DevSecOps says that in order to ship quality, secure software, security has to be part of the process from the beginning, just like quality does. When you do test-driven development, you think about the tests you need to write before you write the code. You're building quality in rather than bolting it on at the end. DevSecOps says the same thing about security. We have to think about the security implications of what we're building, bake that in as we're building it, have checks on it throughout, so that at the end of the process we have something that is both functional and secure, rather than something that is functional and insecure that we then have to negotiate about before release.
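The test-first loop Tom describes can be sketched in a few lines of Python. This is a generic illustration of the red-green-refactor cycle; the eligibility rule, function names, and ZIP prefix are invented for the example, not drawn from any real Fannie Mae system:

```python
# Step 1 (red): write the test first. It pins down the behavior we want
# before any implementation exists. The rule below is purely illustrative.
def test_relief_eligibility():
    assert qualifies_for_relief(zip_code="77002", affected=True)
    assert not qualifies_for_relief(zip_code="77002", affected=False)
    assert not qualifies_for_relief(zip_code="10001", affected=True)

# Step 2 (green): write the simplest code that makes the test pass.
HOUSTON_ZIP_PREFIX = "770"  # hypothetical stand-in for the affected area

def qualifies_for_relief(zip_code: str, affected: bool) -> bool:
    return affected and zip_code.startswith(HOUSTON_ZIP_PREFIX)

# Step 3 (refactor): with the test green, the code can be reshaped safely;
# rerunning the test catches regressions, which is what let the Fannie Mae
# team change course mid-flight with confidence.
test_relief_eligibility()
```

The accumulated tests become the regression suite that makes later changes, like the hurricane-relief pivot, low-risk.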
KUMAR: I like that framing. Shifting gears a little bit, you've been in software for 30 years. You've seen a lot of things come and go. What's your take on AI? Because I know some people think it's going to replace everything, and other people think it's overblown. Where do you land?
TOM: I think both the AI boomers and the AI doomers are wrong. AI is a tool. It's a productivity tool. In the context of software development, we're mostly talking about generative AI and using it to help you plan, create code, create requirements, create tests. From that point of view, there are real benefits. But I think we're in a period of rapid experimentation where there's going to be a lot of failure. There are a lot of AI companies out there right now. There are something like fifteen to twenty different agentic AI IDE plugins. There aren't going to be fifteen to twenty winners in that market. If we're lucky, there will be three or four. So people are going to try different things, and some organizations will find real value, and others will not. I think we have at least five but probably more like ten years of experimenting and figuring it out ahead of us. Even if the models don't get any better than they are today, there's a lot of room for growth in just learning how to leverage them.
KUMAR: What's the danger for people who jump in without that foundation?
TOM: If you learn to code by using AI, you have no idea what to do when it fails. I liken using an AI coding assistant to working with a very enthusiastic, sometimes drunk intern. They'll do a lot of good things for you. But you have to verify everything they do. And sometimes the things they produce confidently are wrong in ways that are hard to catch if you don't know the fundamentals.
KUMAR: That's a great analogy. So you're saying the tool is useful, but you need to understand what's underneath it.
TOM: Right. And I think vibe coding is a good example of where this goes wrong. You can't vibe-code your way into a viable production application. You may be able to vibe-code a good prototype that sets a vision, but that isn't the thing you put into production and expect to work over a long period of time. I compare it to what happened with Visual Basic. Microsoft's original intent was that you would do your UI work and light lifting in Visual Basic, and if you had heavy lifting to do, you would do it in C++. But most people just did everything in Visual Basic and had no idea how to go a level deeper when something broke. Those codebases became a mess. VB got a bad reputation. Vibe coding is headed the same direction. Spec-driven development is likely closer to what we'll end up using for production software, where there's a structure underneath the AI assistance rather than just prompting and hoping.
KUMAR: You've mentioned the airline industry's approach to autopilot failure, where they put pilots through simulator scenarios for situations most pilots will never face in a 40-year career. Do you think we need something similar for software teams and AI failure modes?
TOM: Yes, I do think we'll get to that point, because AI assistants are going to get better. The way I learned how to program, and developed a sense for what's right and wrong in code, was by writing a lot of code, making a lot of mistakes, and learning from most of those mistakes. For people to develop that same judgment, so they can evaluate what the AI is generating, they need some way of gaining that experience. One approach is structured exercises where developers see problems and fix them, even if those are things they might not encounter every day because AI is handling a lot of the routine work. We have to provide a way for them to experience the mistakes and learn from them in a safe environment before those mistakes show up in production.
KUMAR: That makes sense. You made an observation in our prep call that I think is critical and that most people in the AI space are missing. You said the way Agile adoptions failed is a near-perfect preview of how AI adoptions are going to fail. Do you still hold that view? Walk us through that parallel.
TOM: Yes. In my experience with IT failure, it's often the same pattern. You want the benefit of a change, but doing that change properly seems like a lot of work. So you think, can't we just do some of it and still get the benefit? If you look at AI, there are real benefits to be had. But if you give everyone a license for a code assistant, don't train them how to use it, don't ground them in the practices that will make it work, then you're going to have a lot of people spending time in trial and error rather than focused learning. They're going to develop patterns that don't work, and then they're going to develop workarounds for the problems that come from not having the right foundation. You end up with a mess, and then people say AI doesn't work. The same thing happened with Agile. Organizations bolted Scrum ceremonies on top of what they were already doing. They didn't change the culture or the incentives. They added standups and sprints but kept all the old meetings and all the old command-and-control structures. And then they said Agile doesn't work.
KUMAR: So what's the right approach? Would you advocate for smaller experiments, more iterative adoption of AI?
TOM: Yes. Smaller experiments, active training, and helping people understand how these tools are actually valuable. What mistakes have other people already made that they can learn from? When I say active training, I mean either going to a training class or using something like the dojo model, where you go in and practice specific skills under the guidance of someone who has already done it. You need a safe place to practice, make mistakes, and learn from them before you apply it in the real world.
KUMAR: You co-authored a paper on using large language models to automate behavior-driven development. For listeners who aren't in software, what is BDD, and why is automating it with AI actually a useful application?
TOM: Behavior-driven development is the idea that we want to understand how users would actually use a system, what paths they take through it. It's part of a test-driven philosophy where you write the tests before you implement the software. That helps you know when you're done writing the software, because you're done when the tests pass. It also creates a regression suite that you constantly run to verify that functionality that used to work still works as you make changes. One of the values of AI here is that creating BDD starts with conversations between business stakeholders, testers, and developers. Using AI to facilitate and accelerate those conversations, and then to help generate the test scripts, is a genuinely structured use of AI. It's not just prompting and hoping. It's using AI as an accelerator within a process that has real discipline underneath it.
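To make the BDD idea concrete for non-software listeners: stakeholders describe behavior in plain-language Given/When/Then scenarios, and the team automates them as executable checks. The sketch below is a minimal, framework-free illustration; real teams would bind Gherkin text to step functions with a tool such as behave or pytest-bdd, and the scenario and step names here are invented for the example:

```python
# A scenario as business stakeholders might write it (Gherkin syntax),
# kept as a string purely for illustration:
SCENARIO = """
Scenario: Hurricane-affected borrower requests relief
  Given a borrower in a federally declared disaster area
  When the borrower applies for mortgage relief
  Then the application is approved
"""

# Hypothetical step implementations, one per Gherkin line:
def given_affected_borrower():
    return {"in_disaster_area": True, "status": None}

def when_applies_for_relief(borrower):
    borrower["status"] = "approved" if borrower["in_disaster_area"] else "denied"
    return borrower

def then_application_is_approved(borrower):
    assert borrower["status"] == "approved"

# Running the steps in order checks the behavior end to end; in a BDD
# framework this execution is driven automatically from the scenario text.
then_application_is_approved(when_applies_for_relief(given_affected_borrower()))
```

The conversation produces the scenario text; generating and maintaining the step code underneath it is where the paper's AI-assisted approach fits in.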
KUMAR: It seems like a more responsible use case than pure vibe coding.
TOM: I think putting structure into how you use AI, just like putting structure into how you develop software, is what gets you real value out of it. If you want to do a quick prototype to show what something might look like, vibe coding has its place. But for production software, you need structure in how you're using AI to create code.
KUMAR: So Tom, what's next for you? You mentioned you've joined Steampunk.
TOM: Yes. I've spent the last several years mostly in the commercial world figuring out how to build software better. I've joined Steampunk, and the mission there is to bring all that learning to help build better software for the government. That's what I'm spending my time on now, applying everything I've learned to how we can get better software built in the public sector.
KUMAR: That's great. Is there anything I haven't asked you that you'd like to share before we get to the lightning round?
TOM: No, I think it's been a great conversation.
KUMAR: All right, let's do the lightning round. You've called stopping retrospectives a DevSecOps anti-pattern. What did you mean by that?
TOM: A lot of organizations stop retrospectives when they get stuck. Retrospectives can become complaint sessions where nothing gets resolved, and the next step after realizing that is to just stop having them. But that means the team is no longer inspecting and adapting, no longer running experiments to get better. That's what I mean by anti-pattern. You've removed the mechanism for continuous improvement at exactly the moment you need it most.
KUMAR: Makes perfect sense. If a team came to you tomorrow and said they want to start their DevSecOps journey today but can only do one thing, what would that one thing be?
TOM: I'd have to ask them some questions first. What problems do they have that they want to address? If the problem is a development problem, that's a different answer than if it's a security problem, which is a different answer than if it's an operations problem. It starts with understanding the specific problem you're trying to solve, and then doing the one thing that addresses that problem. Hopefully they come back with more questions after that.
KUMAR: Well, thank you so much, Tom. I really appreciate your insights. I feel like I learned a lot, and I'm sure our listeners did as well.
TOM: Thank you. I enjoyed it. It was an excellent conversation.
KUMAR: Thanks everyone for watching. Bye-bye.
========================================
END OF TRANSCRIPT