Data to AI: Navigating Disruption with Scott Ambler | The Meridian Point Podcast
Kumar: Hey everyone, Kumar Dattatreyan here with the Meridian Point, and I am pleased to welcome to the show today Scott Ambler, a pioneering voice in agile methodologies and data practices who has helped shape organizations' approaches to software development over the past three decades. As co-creator of Disciplined Agile and author of over thirty books, Scott brings unique insights into how technology and methodologies evolve. He's currently pursuing a master's in AI at Leeds University and is focused on helping organizations navigate the intersection of data quality, artificial intelligence, and organizational transformation. So without further ado, please welcome Scott to the stage. Thank you so much for being here today, Scott.
Scott: Thanks for having me, Kumar.
Kumar: You've witnessed and influenced several major shifts in software development practices over your career. What key lessons have you learned about how organizations successfully adapt to technological disruption?
Scott: There are several observations. First, it always takes longer than you think. Everybody spins a story about how it's going to be really fast and really easy, and it's never actually the case. Another critical point is that the old stuff rarely goes away. You can spin whatever story you want about technology X taking over everything, but technologies A through W are still in place and will be for a long time unless you're purposely retiring them.
We really are in these technically dense, complex environments, and we will be until we decide to deal with it. And dealing with it means cleaning up our old messes. This is one of the challenges in the data space - organizations have significant technical debt, which refers to lower quality code, architecture, data, networks, and so on.
We never clean up the mess. There's always funding to do something new, to bang out some new thing quickly, and we talk about going back and cleaning up "one day" - but one day never arrives. We're swimming in this technical debt sewage, and it's a problem. We really need to start actively fixing and addressing our technology. Some organizations do, and some developers have the maturity to clean up as they go, but some messes are pretty huge.
Kumar: Are there examples of companies that do this well? What practices do they employ that keep them on the cutting edge of technology without being weighed down by all the technical debt?
Scott: The organizations that do it well, and there aren't many, build addressing technical debt into their culture. They allow development teams to fix things and clean up messes when they discover them. Clean up as you go tends to be a common strategy, but also realizing sometimes you need to explicitly fund a project to do a major cleanup or retirement of an existing system.
The organizations that understand the implications of technical debt, who are measuring it and understand what those measures tell them, tend to do better. They also realize that some common project management practices can be problematic. The "on time, on budget" nonsense is a killer from a quality point of view. How do development teams react to being forced to meet fictional schedules and budgets? They cut quality corners, which adds to technical debt, which then slows them down. It's a problem that builds on itself.
Unless project managers push back against business pressure to get things out quickly, the organization is never going to escape this cycle. The organizations that really understand technical debt address that and realize that some "best practices" in other parts of the organization are actually relatively bad practices when viewed from a systemic perspective.
Kumar: That brings me to Disciplined Agile. The word "disciplined" in front of "agile" seems to connote an awareness of quality practices. How did that come about, and how important is quality to whatever methodology you're using, whether Waterfall or Agile?
Scott: Data is critical across the board in organizations. The data people will tell you that data is the lifeblood of your organization, and arguably it is. But my observation has always been, if you only have the lifeblood, that's referred to as a crime scene. You need to look at the bigger picture, which is what we did in DA.
We didn't just look at the agile purist approach. We looked at the overall bigger picture, which is where the discipline comes in - realizing you need to do some upfront planning, actively improve as you go, and actively choose the right techniques and ways of working for your context, which might not always be agile. Sometimes it's lean, sometimes it's more traditional.
DA was really a hybrid approach, and being a hybrid, we took data considerations into account. The rules are different for data - common DevOps techniques that developers have been using for almost two decades now work differently in the data world. Continuous integration is different for databases because they have persistent data that you can't break. Automated testing of databases is different because of persistent data. Refactoring a database is different from refactoring code.
You have this common constraint of persistent data whenever you're doing anything in the data space, and it complicates classic Agile and DevOps techniques. There are ways around it - you've got to be more sophisticated and understand the implications, but it's all doable and highly desirable.
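The persistent-data constraint Scott describes is why database refactorings are usually staged rather than done in one shot. As a hedged sketch (not from the conversation, using a hypothetical `customer` table), an "expand/contract" column rename in SQLite might look like this: add the new column alongside the old one and backfill it, so existing data survives and old and new code can coexist during the transition.

```python
import sqlite3

# A database refactoring must preserve existing data, so a column
# rename is staged ("expand/contract") instead of a destructive rename.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT)")
conn.execute("INSERT INTO customer (fname) VALUES ('Ada')")

# Expand: add the new column and backfill it from the old one,
# so readers of either column keep working during the transition.
conn.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")
conn.execute("UPDATE customer SET first_name = fname")

# Contract happens later, once all code reads first_name:
# stop writing fname, then drop it in a follow-up migration.
row = conn.execute("SELECT first_name FROM customer").fetchone()
print(row[0])  # Ada
```

The point of the two-phase shape is exactly the constraint Scott names: unlike code, you can't just delete and regenerate the data, so every structural change has to carry the existing data forward.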
The challenge is that the agile community, which focuses on software development, doesn't understand the data stuff very well. The data community doesn't really understand development either. There's never been much overlap between these communities, and they were both content with that. The agilists had naive strategies for dealing with data, while data folks ignored agile because they believed it didn't apply to them. Both groups were wrong.
My work has been around bringing an agile approach to data and bringing more quality to the table. The traditional data people talk about quality, but when you look at the actual results, they're often poor. Most organizations have significant technical data debt that needs cleaning up, but the traditional data folks don't have a coherent strategy for addressing it.
Kumar: You mentioned agile people thinking naively and data people staying in their silos. Where have you seen effective communication between these groups?
Scott: I'm seeing it more in organizations that are looking at AI seriously - beyond just piloting and playing with LLMs. I'm seeing it on Data Vault 2 teams, which are data warehousing/business intelligence teams working in complex environments at scale with big data and lots of change. That forces them to be agile.
Data Vault 2 adopted Disciplined Agile Delivery as its methodology because of the hardcore reality that data teams need to react to marketplace changes and stakeholder needs. An agile, lean approach is necessary - you can't spend weeks or months on data modeling while requirements change. Traditional predictive approaches fail miserably.
Reality is forcing data professionals into more agile, streamlined ways of working, which is why we're seeing DataOps becoming more popular.
Kumar: So AI is forcing the issue because it requires good data understanding, and as organizations iterate through different AI applications, it's bringing data people and agile people together?
Scott: Yes, plus the earlier trend of organizations wanting data-driven decision making has also forced the issue. Stakeholder needs change constantly, and the old approach of "we'll do it next project, next year" for a request made this morning just isn't acceptable anymore.
Between supporting real data-driven decision making and AI, the data community is being forced to address quality issues. Your decisions can only be as good as the quality of data you're basing them on, and the same applies to AI. Recent surveys on AI adoption show that data quality is consistently the number one or two challenge organizations face.
Kumar: You've written about how AI will impact management roles. How do you see leadership practices evolving in an AI-augmented workplace?
Scott: AI is affecting all white-collar work, including management. We're seeing a lot of the more onerous or repetitive work being automated away. People are using simple LLMs like ChatGPT to augment their work - writing reports, brainstorming - saving hours. Then there's all the Copilot functionality for management tools and Microsoft Office.
AI is whittling away at some of the more onerous aspects of what we do, and the baseline of what everyone can produce is rising. It's a very interesting environment, especially with point-specific applications.
This connects back to data quality. Organizations are slowly adopting AI solutions to support their business processes. It's slower because of data quality issues, but more organizations are moving in that direction, building AIs that augment portions of business processes or create new ones. For the rest of this decade, we'll see very interesting developments, but it comes back to whether your organization has the skills and the data. Without those two, you'll struggle, but these are surmountable problems.
Kumar: You mentioned Copilot and how non-managers are using these tools. What's your impression of tools that help people code faster or pair program with an AI?
Scott: When I was working on my master's, we had strict rules about using AI. Within those limits, I built some good habits, like using LLMs or Copilot to suggest small pieces of code. I might write something and ask for a more Pythonic version, which would save me twenty minutes and make my code more efficient.
What I and many people do is use AI tools for small pieces of code or text. They're very good at that, but you still need to know what you're doing. I would write code, get it working, then use AI to clean it up. I could also ask it to write code for me, but that's more questionable because it's not mine and I can't trust it as much.
The point is that I know what I'm doing - I could write that code, but I'm using the tool to augment my work. The problem I'm seeing is people using these tools to write massive amounts of code without understanding how they're architected.
LLMs are prediction engines, predicting the next token or piece of text. The larger the output, the greater the chance that the later-generated parts are wrong. When people who aren't skilled at coding use these tools to generate a lot of code, you get quality problems. "A fool with a tool is still a fool."
There's no magic - you still need to know what you're doing to effectively leverage these tools. We're still in the early days of the hype. Studies show code being generated isn't as good as human-generated code, but that's partly because people are generating too much at once rather than using snippets, which generally produces solid results.
Kumar: That makes sense - like any tool, you need to know how to use it properly. Do you see a day when AI could take over completely, where you just have a conversation and it codes what you want?
Scott: I doubt that highly, unless you're asking for absolutely trivial things. I've seen demos that look sophisticated to non-coders, but are actually simple tasks like generating a data entry screen. If we ever reach a point where programmers are truly replaced by AI, society will be so transformed that it won't matter - everyone else's jobs will have been eliminated much earlier than programmers'.
Kumar: You talk a lot about context being important. The Disciplined Agile approach is about doing what makes sense rather than being dogmatic. How important is understanding context when helping organizations navigate disruptions?
Scott: It's absolutely critical. Every person, team, and organization is unique and constantly changing. One size does not fit all, regardless of what framework vendors try to sell you. A team of five works differently than a team of fifty. Co-located teams work differently than distributed ones. Teams in regulatory environments work differently than those that aren't regulated.
Even teams in identical situations will work differently because they consist of different people with different backgrounds, priorities, and preferences. Trying to force-fit a process is a fool's game. You need to understand the context and teach teams to identify appropriate ways of working for their situation that achieve their goals. As situations change, including learning, changing requirements, technologies, and priorities, they need to evolve their approach.
The challenge for organizations is how senior leaders govern and guide these teams. You need to be flexible and qualified to do your job. The harsh conversation I have with governance people is that we're not going to inflict the same way of working on everybody to make it easy for them. We're going to help teams find the best approach for their situation, then find a way to govern that's still compliant with regulations - typically a risk-based approach rather than documentation or process-driven.
In the '90s and early 2000s, many organizations bought into the CMM/CMMI vision of "one process to rule them all," and we're still getting that out of our system. There's still a belief that all teams need to work the same way, which is why agile frameworks are popular - they sold management a fantasy that you could force people to work in a specific way and easily govern them. That never played out; it caused confusion and strife, and the numbers never worked out for most organizations.
Kumar: It really boils down to "do your job." Just because you're a senior executive doesn't mean you're qualified.
Scott: It's interesting that we've forgotten about the Peter Principle - we talked about it a lot in the '80s and '90s, then it went away. Maybe we need to bring it back. I don't want to criticize executives - they're smart people doing the best they can, but the situation they face is complex and constantly adapting. It's tough, but that's what they signed up for.
Kumar: Let's end with some fun questions. Are you a coffee or tea person?
Scott: Both, but more coffee - sixty-forty.
Kumar: I understand you collect and restore 8-bit Atari computers. What's the most challenging one you've tackled?
Scott: Obtaining an Atari 1200XL was hard - they didn't make many and most broke. They were junky. The common challenge with 35- to 40-year-old equipment is that just turning it on can burn it out. You have to be careful with dust, grease, and residue from smokers' homes.
Kumar: What do you do - clean them out first?
Scott: I open them up first thing to check and can often smell if they've been in a smoker's home. I take them apart, then turn them on and hope for the best. Sometimes they burn out anyway.
Kumar: What are you reading right now?
Scott: I'm working through Stephen King's book on writing - the first Stephen King book I've read. I'm also reading Robert Asprin's "Thieves' World" from about forty years ago. It's a collection of short stories set in a shared fantasy world, with some notable authors contributing over the years.
Kumar: What have I not asked that you'd like to share?
Scott: What am I up to now? I'm writing a new book tentatively titled "Achieving Continual Data," about providing the data executives need for data-driven decision-making and AI development in enterprise organizations. I'm writing it as creative nonfiction.
Kumar: Are you using AI to help you write it?
Scott: No AI at all, other than grammar and spell checkers. I'm concerned about data pollution with all the mediocre AI-generated content that's out there now. It's polluting the infosphere, and I don't want to contribute to that. Plus, much of AI-generated content repurposes existing material, and I don't want potential IP issues.
Kumar: When do you expect this to come out?
Scott: Probably in the fall.
Kumar: Well, Scott, it's been a pleasure having you on. I hope you enjoyed the conversation as much as I did.
Scott: Thanks for having me. Always happy to have a great conversation.
Kumar: Thanks for watching, everyone, and we'll catch you next week. Take care.