This past week I spent some time thinking about AI from the perspective of an Enterprise Architect. Is the Enterprise Architect also the Agentic AI Architect (or just AI Architect)? Perhaps in some organizations, but I would assert that AI Architect is more analogous to an Application Architect. The focus is on the design and implementation of the AI system: its technologies, frameworks, infrastructure, development, monitoring, enhancement, and maintenance.

The Enterprise Architect considers AI from the broader corporate perspective, and how it fits into the enterprise ecosystem. At that level, AI is simply another component. A nice shiny new tool in your EA toolbox. You don’t necessarily have to know all the details of everything that’s going on under the covers of all of the systems that you oversee. In fact, I have observed that it is often easier when you don’t. I make that point in my book 6 Secrets for Delivering Impossible Projects. Much of what Enterprise Architects already know and already practice is still applicable in an AI-enabled enterprise. The principles are the same.

Nevertheless, it is clear, even in these early stages of corporate AI adoption, that there are certain challenges an Enterprise Architect may face when integrating AI into an enterprise application ecosystem. The purpose of this series of articles is to highlight some of these pitfalls and recommend better pathways forward.

Let’s start at the beginning. Oftentimes our first mistake is the most damaging.

Pitfall: Start with the sole objective of delivering AI.

Yes, companies want to take advantage of the new technology. Everybody else is. FOMO is a reasonable, valid, and rampant concern, but you have to ask yourself why you want to use the new technology. What do you want to accomplish? If your primary objective is simply to implement AI Agents or an Agentic AI something-or-other so that you can say that you have delivered an AI something-or-other, you have more likely than not already failed. Welcome to the 95%. It doesn’t matter if the solution is appropriate for the problem. It doesn’t matter if the problem is too big or too small. Too risky or barely impactful. As long as we can check the AI box for the shareholders, we’re good. We’re successful because we say we’re successful. Chicken lunches for everybody!!

Pathway: Maintain a clear line of sight from the AI deliverable to a business objective and its tangible business value.

Let’s take as a given that you want, need, or have been mandated to deliver an AI something-or-other. OK. Then move quickly (as in instantaneously) to this follow-up. You need to be able to articulate what you plan to improve for your customers, clients, employees, and/or partners. Management might care to some extent about the shiny new technology, but what they really want are more customers, or more revenue, or fewer, better, or more efficient employees. If you can’t explain how your project will accomplish one of those objectives, they’re not going to care and they will eventually abandon your project.

In addition, and perhaps more importantly, by establishing the connection between your project and business outcomes at the beginning, you will be able to articulate the benefit of your project, and of AI, to the company. IT has a long and embarrassing history of over-promising and under-delivering. No fluffy or imaginary or maybe-you-can-see-them-if-you-squint benefits.

Pitfall: Over-focus on business value.

Wait a minute! I thought you just said that we need to be focused on business value. We need to cut headcount and increase revenue. Now! Yes, you do need to be focused on business value. Yes, I said that you need to maintain line of sight from your AI-focused project to business outcomes. 

You just don’t have to recoup your past and future investments all at once, right at the beginning.

As I write this, a baseball game is on the TV across the room. What just happened seems eerily on point. Ninth inning. The team at bat is losing. Two outs. Two strikes. The pitch. The batter swings with all his might. And misses. Game over. The slow-motion replay shows a close-up of the batter swinging with all his might. A big grimace on his face. He had hit a dozen home runs this season. Tried for another. He had four times as many strikeouts as home runs. Tally one more in the strikeout column.

Too often we get fixated on hitting the home run. We then act surprised when we strike out. Remember that for every overly-hyped customer-facing or revenue-generating application there are nineteen more that walked dejectedly out of the batter’s box. 

Pathway: Be realistic in your expectations and especially with what you sell to management.

You and your company are probably still learning. Still experimenting. Start small. Start internal.

One tip that I’ve seen a few times is to ask around and find out how your employees are using AI themselves. For a long time companies were reluctant to dip their toes into AI, and so many employees have been playing with it on their own time. Leverage their creativity and their experience. Consider whether the problems that they’re solving could be rolled out to a broader audience.

Start with a proof of concept. That’s fine. The business benefit is the experience that you get through this exercise. You might even come out with something useful at the end of it. But remember that the gap between prototype and production is often unexpectedly wide. Something that works in a lab might not work so well in the wild. (That’s why I prefer proof of concept to prototype or pilot.)

Be clear-eyed about the costs and the benefits, and especially the risks. In a recent article I gave some examples of problems that occurred in AI systems. I wonder how thoroughly the risks were considered.

Pitfall: Ask AI to generate too much of the application all at once.

I demonstrated how ChatGPT could be used to write a computer program that solves a certain constraint satisfaction problem. I then iteratively made changes to the program to add features. This is sometimes referred to as Vibe Coding, and it is currently a very hot topic. I’m not going to comment one way or another right now about its usefulness or effectiveness, especially in a corporate setting. 

Each time I asked it to do something different, the entire program was re-generated. Since previous versions remained within its context, it wasn’t starting totally from scratch, but there were some changes from version to version. (See the end of this article for the details.)

Several researchers have experimented with generating entire systems using AI, including testing, bug fixes, and CI/CD pipelines. One significant result is that sometimes fixing a bug in one place and regenerating the code introduced bugs in other places.

There seems to be an evolving fantasy where we can just say, “build me a customer management system” or whatever, and the AI will just make it happen. Vendor hype is cranked to eleven. This should raise the hackles of any Enterprise Architect. That’s not how it works.

Pathway: Enterprise Architecture 101 – functional decomposition and isolation.

Have we forgotten the basics of system design? Modularity and loose coupling are standard Architecture principles. Functional decomposition is a standard Enterprise Architecture practice. We don’t write entire systems all at once. 

AI is no different. 

Break the solution down into its constituent components. Focus on each individual component. One service, one task. One AI Agent, one task. A change in one component shouldn’t impact any other. 

But how do we determine what kind of components to use? We’ll get into that when the conversation continues next time.

Differences Between Adjacency Constraint Satisfaction Application Iterations
(if you’re interested)

Prompt #1: write a c program that takes as input 1) the number of consecutively numbered squares, 2) pairs of squares that are adjacent to each other, and 3) letters assigned to certain squares. The program then finds a configuration of letters assigned to all of the squares such that no two adjacent squares have consecutive letters.

The application assumes an arbitrary maximum of 50 squares, even though a larger number could be entered, resulting in a memory boundary error. In subsequent versions the maximum is 26 (one for each unique letter), but the number of squares entered is never checked.
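To give a feel for the shape of such a program, here is a minimal backtracking sketch written from the prompt’s description — not the generated code itself; all function and variable names are mine — with the bounds check that the generated versions never made. (Preset letter assignments from the original prompt are omitted for brevity.)

```c
#include <stdlib.h>
#include <string.h>

#define MAX_SQUARES 26              /* one distinct letter per square, A..Z */

static int n;                       /* number of squares */
static int adj[MAX_SQUARES][MAX_SQUARES];
static char letter[MAX_SQUARES];    /* 0 = unassigned */
static int used[26];                /* distinct-letter bookkeeping */

/* Adjacent squares may not hold letters within 1 of each other. */
static int fits(int sq, char c) {
    for (int i = 0; i < n; i++)
        if (adj[sq][i] && letter[i] && abs(letter[i] - c) <= 1)
            return 0;
    return 1;
}

/* Classic backtracking: try each unused letter in the next square. */
static int solve(int sq) {
    if (sq == n) return 1;
    for (char c = 'A'; c < 'A' + n; c++) {
        if (used[c - 'A'] || !fits(sq, c)) continue;
        letter[sq] = c; used[c - 'A'] = 1;
        if (solve(sq + 1)) return 1;
        letter[sq] = 0; used[c - 'A'] = 0;
    }
    return 0;
}

/* Solve for n_in squares and npairs adjacency pairs; writes the assignment
   into out (NUL-terminated). Returns 1 if solved, 0 if unsolvable, and -1
   on out-of-range input -- the check the generated versions lacked. */
int solve_squares(int n_in, const int (*pairs)[2], int npairs, char *out) {
    if (n_in < 1 || n_in > MAX_SQUARES) return -1;
    n = n_in;
    memset(adj, 0, sizeof adj);
    memset(letter, 0, sizeof letter);
    memset(used, 0, sizeof used);
    for (int i = 0; i < npairs; i++) {
        int a = pairs[i][0], b = pairs[i][1];
        if (a < 0 || a >= n || b < 0 || b >= n) return -1;
        adj[a][b] = adj[b][a] = 1;
    }
    if (!solve(0)) return 0;
    memcpy(out, letter, n);
    out[n] = '\0';
    return 1;
}
```

For example, four squares in a row (adjacencies 0–1, 1–2, 2–3) have a solution such as BDAC, while three squares in a row do not: the middle square would need a letter at least two apart from both neighbors, which is impossible with only A, B, and C.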

Prompt #2: modify it so that the number of distinct letters used equals the number of squares

Except for the parts of the backtracking function directly related to the number of distinct letters, the only changes were some comment rewording and unnecessary braces added around a one-line if statement.

Prompt #3: change it so that you can read the configuration from a file and also that you can specify the number of letters apart adjacent squares are prohibited from being

Again, the character-assignment and backtracking functions are the same except for the number of characters away adjacent characters can be, plus some small comment changes. The main difference is that the program parameters are read from a file instead of being prompted for from the user. Just like in the initial iteration, there are no checks to make sure that the file can be read successfully or that its contents are as expected.
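Those missing checks are not hard to add. A hypothetical sketch of reading the square count defensively — the function name and error messages are mine, not the generated program’s:

```c
#include <stdio.h>

/* Read the square count from a configuration file, failing loudly
   instead of trusting the file. Returns the count, or -1 on error. */
static int read_square_count(const char *path, int max_squares) {
    FILE *fp = fopen(path, "r");
    if (!fp) {
        perror(path);               /* file missing or unreadable */
        return -1;
    }
    int n;
    if (fscanf(fp, "%d", &n) != 1) {
        fprintf(stderr, "%s: expected a square count\n", path);
        fclose(fp);
        return -1;
    }
    fclose(fp);
    if (n < 1 || n > max_squares) { /* the bounds check from earlier */
        fprintf(stderr, "square count %d out of range 1..%d\n",
                n, max_squares);
        return -1;
    }
    return n;
}
```

A few lines of checking like this is exactly the kind of detail that distinguishes a lab demo from something you could deploy.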

Prompt #4: change the program so that you can put comments in the input file after a pound sign

The fscanf statements in the previous iteration were replaced by two helper functions: one that reads the next integer, and one that reads either two numbers (an adjacency) or a number and a letter (a preset assignment). In both cases, blank lines and comments are skipped, even though I never asked for blank lines to be skipped. Interestingly, the error checking in the adjacency/preset-assignment helper was good; in the next-integer helper, it was absent.
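A next-integer helper along those lines, with the missing error handling restored, might look like the following — again a sketch in my own naming, not the generated code:

```c
#include <stdio.h>
#include <ctype.h>

/* Read the next integer from fp, skipping whitespace (including blank
   lines) and '#' comments that run to end of line.
   Returns 1 on success, 0 on EOF or malformed input. */
static int next_int(FILE *fp, int *out) {
    int c;
    for (;;) {
        c = fgetc(fp);
        if (c == EOF)
            return 0;
        if (c == '#') {             /* comment: discard the rest of the line */
            while ((c = fgetc(fp)) != EOF && c != '\n')
                ;
            if (c == EOF)
                return 0;
        } else if (!isspace(c)) {
            ungetc(c, fp);          /* push back the first digit (or sign) */
            return fscanf(fp, "%d", out) == 1;
        }
    }
}
```

Checking that return value at every call site is what turns “the file happened to be well-formed” into an actual contract between the program and its input.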