Hey, everyone. Kumar Dattatreyan here with Agile Meridian, with another unscheduled episode of the Meridian Point. In this episode, we're going to do something a little different. We're going to try to answer this question: can AI transform software development?
The promise seems almost too good to be true: faster, cheaper, higher-quality code with less human effort. Yet we know AI has pitfalls. So in this episode, Glenn Marshall and I will be interviewing Peter Merel, the founder of XSCALE. He recently wrote an article on AI and agile alignment, so we're going to explore an approach called behavior-driven development that combines human collaboration with AI to pragmatically realize many of AI's benefits. So if leveraging AI without losing control sounds useful, stick around.
Let me invite them to the stage. Here we go: Glenn and Pete. Thank you for joining; I appreciate you being here. In the interest of keeping this video as short as possible, I'm going to fire away with the first question if you're ready, Pete. Alright. I'm going to read it because, I mean, I spent time writing it, but I don't have it memorized.
How confident are you that AI-generated code will avoid problems like bias and security vulnerabilities without extensive human review? What safeguards need to be in place?
I'm not confident at all that it will avoid those problems. Not without heavily constraining what it can do. If we think about what happened at the dawn of Agile, there was a shift to the idea that we want to do the simplest thing that can make the tests pass. Well, we need testable, executable acceptance criteria to constrain what the AIs can do. And that's really the point of the article.
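To make that concrete: the kind of testable, executable acceptance criterion Pete is describing is usually written in Gherkin. Here is a minimal, invented example; the feature, email, and wording are hypothetical illustrations, not taken from the episode or from Pete's article:

```gherkin
# Hypothetical acceptance criterion, purely illustrative.
Feature: Password reset
  Scenario: Reset link expires after one hour
    Given a registered user with email "sam@example.com"
    And the user requested a password reset 61 minutes ago
    When the user opens the reset link
    Then the reset is rejected with message "Link expired"
    And no password change is recorded
```

An AI asked to implement password reset can then be held to exactly these behaviors: the simplest thing that makes the scenarios pass, and nothing the scenarios don't license.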
Glenn, do you have a question? Does the Three Amigos plus one workflow allow enough human collaboration to keep AI aligned with business goals? How do we avoid it becoming a positive feedback loop?
If the humans themselves are not aligned with business goals, then we can be pretty certain the AI isn't going to wind up aligned with business goals either. The Three Amigos idea was to have an ongoing, continuous alignment process between the business, or the business analysts, and the testers and developers. The developers don't go away just because we have AI; we'll be driving the AI, but we need people who are critically informed about what it's doing technically involved in the conversation.
So those Three Amigos are still there. But if we're going to meet business goals, they have to be motivated not merely to meet KPIs or OKRs; they have to actually be motivated to improve business throughput. And we have to start thinking about throughput accounting and open-book management to make certain they're aligned to global business goals rather than some local minimum that winds up leaving our AI, or the systems the AI produces, misaligned.
That is a really good answer. And for those of you wondering what the Three Amigos are, look it up, or join the AI and Agile Alignment group on LinkedIn, and we'll definitely provide more information there.
So the next question is: what are some examples where AI analysis of systems has generated useful Gherkin test suites? And how accurate have these been?
It's fair to say that we are just at the beginning; we're wet behind the ears when it comes to working with the kinds of AI tools that have come up over the last year. If we were in Game of Thrones, the dragon eggs have just hatched. We don't have dragons soaring around destroying the countryside, but we do have dragons. And when it comes to the kind of approach we're talking about here, this is an approach to solving a problem. We have to be able to use AI, otherwise we can't compete in the market, and we need agile teams to be able to orient it to our actual business needs, the same way we were doing with developers. We don't have AI that can soar around the countryside destroying castles; we have to be able to work with it, and we have to be the mother of dragons. God, I've gone too far down that rabbit hole!
Anyway, the point is we have some people in our community who've already been doing this, and John Ferguson Smart is probably the leader at the moment. John's running workshops on how to use the BDD tools we've been using to drive agile development teams with AI, both in terms of the AI doing the work of the development teams and in terms of generating the acceptance criteria. What John's found so far is that on the latter count we still have a ways to go. And when it comes to using BDD to drive development agents, AI agents, there are some tools there, Spectre Test, and John's been working with a tool of his own, but this is all new ground. I've been playing with this approach using a Visual Studio plugin called GPT Pilot, and I like what I'm getting out of it. But has this been done in a mature project at corporate scale? I think it's fair to say not yet, but that's what everybody I know is trying to make happen, and that includes some conversations with Big Four consultants who are trying to do this.
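For readers wondering what makes such criteria "executable": under a BDD runner like behave (one of several options; this is a sketch of the general mechanism, not the toolchain John or Pete uses), each Gherkin step is bound to a step definition, so whatever an AI generates either passes or fails. A minimal sketch against the hypothetical scenario above, with the system under test stubbed out:

```python
# steps/password_reset_steps.py -- hypothetical behave step definitions
# binding the Gherkin scenario above. The system under test is stubbed;
# a real suite would drive the actual application here.
from datetime import datetime, timedelta

from behave import given, when, then


@given('a registered user with email "{email}"')
def step_registered_user(context, email):
    context.user = {"email": email}


@given("the user requested a password reset {minutes:d} minutes ago")
def step_reset_requested(context, minutes):
    context.requested_at = datetime.utcnow() - timedelta(minutes=minutes)


@when("the user opens the reset link")
def step_open_link(context):
    # Stub standing in for the application: links older than
    # one hour must be rejected.
    age = datetime.utcnow() - context.requested_at
    context.result = "Link expired" if age > timedelta(hours=1) else "OK"
    context.password_changed = context.result == "OK"


@then('the reset is rejected with message "{message}"')
def step_rejected(context, message):
    assert context.result == message


@then("no password change is recorded")
def step_no_change(context):
    assert not context.password_changed
```

Run with `behave` against the feature file. The point is that the acceptance criteria, not the AI's own judgment, decide whether the work is done.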
Glenn, I think you're up with the next question. Okay. We've talked a lot about leadership as a service at scale. How can that help us with the continuous alignment of teams working with AI?
Leadership as a service is a way to get fully autonomous teams to make their own decisions, and to weigh the needs of people who are directly responsible for particular kinds of outcomes against the constraints on the rest of the team they're working with. We can apply exactly that protocol to working with suitably constrained and well-trained AI agents.
Whether we're ready to do that yet will depend on team dynamics and the responsibilities of those agents. I guess it comes down to a bit of a HAL 9000 problem: are we willing to put mission-critical responsibilities onto our AI? I don't know a lot of humans who'd be willing to put themselves in the little cryo-pods and have the AI take care of all their bodily functions at this point, but there are plenty of things where we're willing to say the AI is competent to do this, and we do it. We just want to make certain it's adequately constrained upfront.
And that's where this AIDD approach comes in. There is one other aspect to your question. The article I wrote is about AI-driven development, but the broader topic of aligning AI and agile has a lot of aspects concerning the structure and workflow of larger organizations that the article doesn't even attempt to cover, and which hinge on this idea of leadership as a service and the protocol for doing it. I feel like we'll be hearing more about that in the near future.
And you started to segue into my last question, which is about alignment in AI and agile projects. What challenges have you seen arise? Again, this is all so new; you mentioned the dragon eggs have just hatched. If you've seen any challenges, how were they addressed?
It's weird that one individual human would ask another individual human this and expect they can predict what's going to happen this year, much less further into the future. But I have had some experience over the last 20 years of seeing what differently structured organizations have done with agile teams that brought capabilities that were disruptive to those organizations. So I can answer on that basis.
I'd talk about four different styles of organization, the first being the traditional top-down, command-and-control, scientific-management Taylorist organization: a tree structure, long distances between doers and deciders, slow lateral movement of learning across the organization, the kind of thing we'd think of as bureaucracy. I think bolting AI-driven development onto those organizations is like gluing chimps to a tree. It's not going to make the tree an awful lot smarter, and it'll annoy the chimps.
Then you've got your more teal, Beta Codex, sociocratic, self-structuring, humanist approach to this stuff; I hate to use the word holacracy, but that kind of thing as well. I think the trouble there is that there's not enough control in those organizations to make really good use of AIDD. It's a bit like strapping outboard motors to the tentacles of a squid, if you don't mind the mental picture.
So, yeah, the issue there is you don't have enough immediate horizontal alignment infrastructure to really make this stuff work. Then there's the third style of organization, which very clearly is making excellent use of AI-driven development, whether it's BDD-oriented or not: organizations that are mission-command oriented, that typically align around an extremely energetic leader and work as an extension of that leader's consciousness. They bring learning to that leader, there are many different levels of this, and they tend to be fractal in structure as well. I'm thinking of things like Apple under Jobs, or, obviously, the Musk companies are the modern experience of this. If you want more on that, go and watch some of Joe Justice's keynotes about what he's seen working inside the Musk companies. That's not current; he hasn't done stuff with them for a couple of years.
So this is very much a moving target. But in terms of making use of this stuff, these companies are extremely successful, and they're far ahead of everybody else in the world at these scales. But they also have the same foibles as the leader of the organization, and I don't need to talk about those foibles too much. I think this is like, you know, the manga with the giant robot suit with a little guy inside.
They call them mechs: Gundam, that sort of thing. So, yeah, very powerful, very dangerous, but not necessarily very good for producing harmonious results. And I think that brings us to the stuff we're usually focused on, where we have organizations that are more about ecosystems and networks of mutual benefit.
We talk a lot about the Haudenosaunee, but also, in the commercial world, SRC and the open-book management stuff, Maekawa, Mondragon: the idea that you've got an ecosystem where people are trying to provide benefit to each other and work together to achieve real commercial outcomes. That's where the stuff around the Camelot model, leadership as a service, and all the stuff XSCALE is good for comes in. XSCALE is really a toolkit for agile alignment.
So bolting AI-driven development onto that, I think we have the ability to outperform the first two models. With the third model, I think it's more a matter of integrating those capabilities with what we're doing and extending them. Because, ultimately, we're not working towards a world where Elon Musk in the giant robot suit is the model. We're working towards a world where we want self-aware AI corporations to be able to work for the betterment of all mankind. So we need to actually get this idea of mutual benefit baked into the way these organizations work; otherwise, we make a rod for our own backs.
That's a really long answer, Pete, and I'm sure the audience will probably have more questions from this video than answers. If you do have more questions, as I mentioned before, join the community, and you can ask them there.
I do want to say one last thing about where to find all this. Right now there's a LinkedIn group, AI and Agile Alignment, and that's where most of the people we're talking about are getting plugged in. I should've used the word permaculture as well, since I haven't said it. Another keyword there.
Yeah. Awesome. You know, one of the things you wrote in response to the questions was this: without AI, agile software delivery can't compete in the modern world, and without agile, AI can't be trusted to deliver systems that meet changing real-world constraints. That struck a chord with me: going forward, AI is going to be intertwined in everything we do, whether that's developing software or whatever it is, and agile methods allow us to produce the best possible outcomes for what we're working towards. So that resonates with me.
Again, we wanna keep these as short as possible. We're going to end the recording here.
Let's leave it here, but I do want to add one last thing: think of AI as Amplified Intelligence rather than Artificial. This is about amplifying our intelligence. And, yeah, maybe one day we become simulations within the AI; I don't care. Right now, I care about business. So this idea of taking the intelligence of organizations and amplifying it, that's really where this stuff all has to play.
Yeah, I love it. Alright. Thanks. Thank you, Glenn. Thank you, Pete. And we'll see you next week with the next article. Yep. You bet. Alright.