Is Your Business Ready to Support AI?
Instead of treating pilots as isolated experiments, treat them as intentional stress tests of your system’s ability to scale.
Video Transcript
Philippe Bonneton
So for me, an AI pilot is as much about testing whether AI is going to do the right thing for me as it is about testing whether all of these layers of my organization, of my system, are ready and able to handle what comes with AI: the insight, the action, the acceleration, the automation, whatever it is. And that’s where you start to run into a number of things; it’s kind of like a ripple effect. You run into a number of things, and then you get into your compliance or your privacy questions. And so, I think the pilot is as much about that as it is about the tool or the insight that you get from AI.
Mike Cottmeyer
So, then it’s like you ask yourself the question, you say, well then what do I need to do to scale it? Does it mean that you go and systematically start to remove those impediments?
Philippe Bonneton
So, I think what it means is that when you start designing your pilot, when you frame your pilot, you frame it as step one of five. To start with, it’s: what am I designing in my pilot that will be repeatable or scalable or applicable to other business units?
So, you define, I think it’s really at the onset of the pilot definition, you need to be very clear about what dependencies you’re going to be stress testing by running the pilot. So, essentially, you are starting your pilot with the scaling idea in mind already, and you now need to figure out the attributes of your pilot and the conditions under which you’re going to be running it, with maybe some pre-identified dependencies that exist, some pre-identified legacy constraints.
So, I think where we’re going is that defining this pilot for scale starts right away, at the very beginning, at the onset, and it’s about testing the tool, testing the insight, but also taking into account all of the things that are going to have to be right so it scales repeatably.
Again, portability. I like this idea that you’re not just testing what’s possible, you’re testing what’s portable. And so that comes with a clear set of requirements for your pilot that go beyond the AI as a technology experiment.
Mike Cottmeyer
Yeah, I just go back to: you’ve got to get the technology right. You’ve got to get the data; you’ve got to get the organization, its business capabilities, its domain design, its product extraction, its modernization… so we can exploit AI use cases.
Philippe Bonneton
I’m trying to think about a more concrete example here. So, let’s say I run a team, we’re designing an AI pilot for some sort of new agent for customer success or something like that. I have a hunch as I develop my pilot and as I work in this organization that data might be scattered. I might have data that’s locked in spreadsheets. I might have data in Salesforce. And when we talk to our clients, we often find that there is data in 10, 20 different systems.
So, when I build my pilot, I’m going to naturally be inclined to simplify this as much as possible. You know what? Let’s make it work in one environment. Let’s not constrain ourselves with this, because we want to test the AI of it. I would say let’s go a different way. How can we take into account this reality that we’re going to have data that’s stuck in spreadsheets, that we have data scattered across 10 or 20 different systems, that we have an aging ERP? How can we simulate this reality when we run our pilot so that when we start to get good results, productive results with the pilot, the AI is actually doing the thing we want the AI to do, and we don’t have this huge chasm to cross from the simple, easy, streamlined conditions we built the pilot in?
The reality is that half of the data is in spreadsheets, and if the person who owns that spreadsheet quits tomorrow, the data is almost lost. So, we need to take that into account as we build the pilot.
Mike Cottmeyer
So, make that real for me. Are you suggesting that you’d run the pilot with a spreadsheet as the data input?
Philippe Bonneton
You don’t run the pilot by isolating just one system or isolating one specific business unit or just working with the tech people for example. But what you do is you figure out the use case you’re testing with your pilot, and you encapsulate the patchwork of systems and things that are going to play into your pilot.
You focus on the use case and then you say, okay, we have some data that’s going to be in spreadsheets or some data that’s going to be in Salesforce or some data that’s going to be coming from customers. How can we encapsulate that, create an environment that’s representative of the reality of this use case in production and run the pilot around that?
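To make that concrete, here is a minimal sketch, purely illustrative and not from the conversation, of what encapsulating that patchwork could look like for a customer-success pilot. The file names, fields, and merge rule are assumptions; the point is that the pilot dataset deliberately preserves the messiness (duplicates, gaps, conflicting records) the agent would face in production.

```python
# Illustrative sketch: stitching together the "patchwork" a pilot would face
# in production, rather than testing against one clean system.
# File names, field names, and the merge rule are assumptions for illustration.
import csv
import json
from pathlib import Path


def load_spreadsheet_accounts(path: Path) -> dict[str, dict]:
    """Accounts maintained by hand in a spreadsheet export (CSV)."""
    accounts = {}
    with path.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            accounts[row["account_id"]] = {
                "name": row.get("name", "").strip(),
                "renewal_date": row.get("renewal_date"),  # often missing or stale
                "source": "spreadsheet",
            }
    return accounts


def load_crm_accounts(path: Path) -> dict[str, dict]:
    """Accounts exported from a CRM dump (assumed to be a JSON list of records)."""
    records = json.loads(path.read_text(encoding="utf-8"))
    return {
        r["Id"]: {
            "name": r.get("Name", "").strip(),
            "renewal_date": r.get("RenewalDate__c"),
            "source": "crm",
        }
        for r in records
    }


def build_pilot_dataset(spreadsheet: Path, crm_dump: Path) -> list[dict]:
    """Merge both sources, keeping conflicts visible so the pilot is run against
    the messy reality (duplicates, gaps, disagreeing fields) it would face at scale."""
    merged: dict[str, dict] = {}
    for source in (load_spreadsheet_accounts(spreadsheet), load_crm_accounts(crm_dump)):
        for account_id, record in source.items():
            if account_id in merged and merged[account_id] != record:
                record = {**record, "conflict": True}  # flag it, don't silently resolve it
            merged[account_id] = record
    return list(merged.values())


if __name__ == "__main__":
    dataset = build_pilot_dataset(Path("accounts.csv"), Path("salesforce_dump.json"))
    conflicts = sum(1 for a in dataset if a.get("conflict"))
    print(f"{len(dataset)} accounts in the pilot dataset, {conflicts} with conflicting records")
```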
Mike Cottmeyer
It’s almost like we’re saying, okay, we’re going to use some AI tools, we’re going to run some pilots and we’re going to see the limitations and then that’s going to inform how we improve the system. And I think that we understand to some degree how the system needs to be improved and what’s going to be required to do it.
We know this pilot’s not going to be valid if you just throw tools at it.
It’s a little bit like in the agile transformation world where we could say, okay, we can come in and teach you Scrum and we’re going to learn all your impediments and we’re going to start fixing the impediments as we go.
But what we know is that organizations are inherently not constructed well to do Scrum. So do we do the AI pilot the way we’d do a Scrum pilot, wait for it to fail, and let it show us what the problems are? Or do we walk in knowing exactly what the problems are going to be, and our pilot is differentiated because we’re going to start with a real business problem, a real organization, a real set of domains and bounded contexts, a real set of data? We’re going to do the work to get that slice of the organization AI-ready. And it’s almost like there’s a really interesting parallel.
It’s like most of the people that ever called us weren’t like, hey, we’re Waterfall and we’ve never thought of agile and we’d like you to come and help us. Most of the people had been dorking around with agile for a long time and realized that they couldn’t press the easy button to do it. And after they had already tried to press the easy button and wasted hundreds of thousands, if not millions, of dollars and years doing it, they said, there has to be a better way.
We’re going to see a bunch of failed AI pilots, and people are going to come to us and say, this didn’t work. We didn’t get the value out of it. How do we do this? And then we tell a structure, governance, metrics story, in effect, that says: you’ve got to pull this apart, you’ve got to do this, you’ve got to align your data, you’ve got to get your stuff in order. And we have the tools and techniques to be able to do that.
Philippe Bonneton
I think the three steps of awareness, or of reckoning, around AI, if you follow that path you just described, are: number one, I build a pilot because I’m going to test whether AI does what I want AI to do and whether it does it right. And as I do that, I don’t actually test whether my organization and my system are ready to handle what comes out of AI. So, the pilot fails because it worked in those ideal conditions I created, and then when I put it against my operating reality, it broke. So that’s step one.
And step two from that is, it’s almost like your AI pilot, I use the term, shines a light on the dark corners of your operation. I think it’s almost like an x-ray, and having done my share of x-rays recently, it’s kind of like scanning through your organization and exposing all the stuff that’s going on. And it’s easy, when you run a pilot and you’ve been misguided on what you’re actually evaluating, to look at the result and say, my AI pilot failed, versus, my organization, my operational readiness, failed at enabling AI at scale. And so then you shoot the messenger: you know what, we tried AI, we tried those pilots, it failed. Look at AI as an x-ray or as a mirror. It shows you the stuff that’s misaligned.
It’s like, until now, you could fake digital transformation: you could lift and shift to the cloud and say, now my apps are in the cloud. When you start to do AI and you try to scale it, it’s going to tell you that maybe you didn’t fully do your homework on digital transformation, or that you had to find some patches and workarounds because that’s what the market demanded at the time, and you didn’t necessarily clean up all the debt you built. You then get faced with this picture, this x-ray of your system, and you don’t like what you see, and it’s not AI itself that failed. It’s the matching of the use case you were trying to enable with AI against your operating reality, again.
And so, I think the reckoning then is you realize that AI is not just the pilot you’re on or the scale you want to achieve. It’s not just a workflow enhancement or a business process automation. There’s a real challenge in operational design, maybe that’s the term I would use here, a real challenge behind it in how you get your organization in the right place. It’s almost like an organization that’s well designed to start with, to align business outcomes with investments in technology and organization and so on, will most likely do well scaling AI. An organization that’s working as a collection of scaffolding and patchworked business systems, with unclear accountability and unclear ownership and data locked in spreadsheets, like we said, yeah, most likely its AI pilots won’t scale, but there are a lot of other things that are not going to work either. AI is just the revealer. It’s not what failed.