Last week I started looking at Artificial Intelligence from the perspective of the Enterprise Architect. Best practices covered included:

  • maintaining a clear line of sight from the AI deliverable to a business objective and its tangible business value;
  • being realistic in your expectations, especially with what you sell to management; and,
  • breaking the solution down into its constituent components through functional decomposition and isolation.

Continuing the conversation this week, I’ll start with what may be the most important consideration of all for an Enterprise Architect working in an organization that is just starting its AI journey.

Pitfall: Believing you have to implement everything (or at least everything new) with AI.

It’s easy to become enamored with this new technology, especially when it appears to be so easy to implement new applications. Just say what you want and the AI spits it out. 

But as easy as it appears on the surface, a whole ecosystem exists around Agentic AI, AI Agents, and Generative AI just to support the technology, separate from what you actually want to accomplish with it. You need the Large Language Model, of course. You need prompt engineering, interfaces, collaboration models, feedback mechanisms, and more. You need processes to make sure that the AI does what it’s supposed to do, doesn’t hallucinate, handles problems when (not if) they occur, and makes necessary adjustments.

It’s a lot. It can get expensive. And it’s often unnecessary.

Pathway: Keep it simple.

Decomposing a planned system into constituent components is a typical Enterprise Architect activity. One of the key decisions that an Enterprise Architect working in the AI space will then make is the selection of the most appropriate implementation approach for each component. Each must be critically evaluated. I’ve talked about this before. More often than not, this will be the simplest approach. You would think that this would be axiomatic, but when we’re all losing our minds over AI, that sometimes goes out the window.

You wouldn’t want to load millions of customer transactions into ChatGPT and then ask it to produce invoices. Databases are best when data completeness and precision are required. Is the workflow consistent or the decision process rigid? Standard applications, services, orchestration, and tools will work just fine for that. Is the workflow fluid, or are interpretation and judgment required? AI may be appropriate.
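The routing logic above can be sketched as a toy decision function. This is a minimal illustration, not a real evaluation framework; the flag names and return strings are my own shorthand for the criteria in the text.

```python
def choose_approach(needs_data_precision: bool,
                    workflow_rigid: bool,
                    needs_judgment: bool) -> str:
    """Route a component to the simplest suitable implementation.
    The three flags mirror the criteria above; names are illustrative."""
    if needs_data_precision:
        return "database"                                    # completeness and precision
    if workflow_rigid:
        return "standard application / service / orchestration"  # consistent workflow
    if needs_judgment:
        return "AI"                                          # fluid, interpretive work
    return "simplest available tool"

print(choose_approach(True, False, False))   # invoicing-style component -> database
print(choose_approach(False, False, True))   # interpretive component -> AI
```

The point of the sketch is the ordering: AI is the last branch, reached only after the simpler options have been ruled out.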

In describing the components of an Agentic AI architecture, IBM describes the situations where the use of that technology is most effective: “Agentic AI architecture should be composed of components that address the core factors of an agency: Intentionality (planning), forethought, self-reactiveness, and self-reflectiveness. These factors provide autonomy to AI agents so that they can set goals, plan, monitor their performance and reflect to reach their specific goal.”

Ask first: “Do I really need to use AI, or will something simpler suffice?”

Pitfall: Conflating accomplishing the task with interacting with the system.

I like natural-language interfaces. For almost as long as I’ve been involved in data and analytics, my vision has been to interact with analytical systems through a Google-like interface. The capabilities of today’s LLMs far exceed that vision. That said, interface selection doesn’t need to dictate the entire implementation.

Pathway: Separate accomplishing the task from its interface.

Put simply: just because you want to be able to ask natural language questions doesn’t mean that you have to load all of your company’s data into the large language model. Or use AI Agents for all of the components of a travel management system. It is possible to interact through an LLM that generates database queries. You want to know your top ten most profitable customers? Use the LLM to generate a query that interrogates the database and returns those top ten most profitable customers. You could even use the LLM to present the results in natural-language format. 
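The top-ten-customers example can be sketched in a few lines. This is a minimal illustration of the pattern, with the LLM call stubbed out as a placeholder function (`llm_generate_sql` and the sample schema are my own inventions); in a real system that stub would send the question plus the schema to your model and get SQL back.

```python
import sqlite3

SCHEMA = "CREATE TABLE transactions (customer TEXT, profit REAL)"

def llm_generate_sql(question: str, schema: str) -> str:
    """Placeholder for a real LLM call. Here it returns a canned query
    so the sketch is runnable without a model."""
    return ("SELECT customer, SUM(profit) AS total_profit "
            "FROM transactions GROUP BY customer "
            "ORDER BY total_profit DESC LIMIT 10")

def answer(question: str, conn: sqlite3.Connection) -> list:
    sql = llm_generate_sql(question, SCHEMA)   # the LLM writes the query...
    return conn.execute(sql).fetchall()        # ...the database answers it

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("Acme", 120.0), ("Globex", 75.0), ("Acme", 30.0)])
rows = answer("Who are my top ten most profitable customers?", conn)
print(rows)  # Acme leads with 150.0 total profit
```

The data never leaves the database; the model only ever sees the question and the schema. That separation is the whole point of the pathway.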

But use the database for what a database is best used for. Use applications and services for what they are best used for. Use tools for what those are best used for. And use AI for what it’s best used for. It’s not hard, but it does require evaluation, consideration, and judgment.

This approach has tremendous potential for democratizing analytics, combining the interpretive power of LLMs with the data storage and retrieval precision of databases and other repositories.

Pitfall: Believing that the AI will do what you want it to do.

I envision AI Agents (and Generative AI and Agentic AI) like the mischievous genie of legend. You make what seems to be a simple wish, but it gets interpreted in the most unexpected and undesirable way. “I wish I could understand what my dog is saying.” All right. POOF!! You’re a dog.

Most of our interactions with LLMs are relatively low-stakes. Summarizing Zoom calls and emails. Creating presentation pictures and TikTok Stormtrooper videos. Small-scale corporate experiments and proofs of concept. Stuff like that. But increasingly, companies are looking to deploy externally-facing applications that interact not just with employees, but with customers, suppliers, partners, regulators, and the public generally. A few weeks ago, I shared some examples of what happens when AI behaves unexpectedly and undesirably.

Pathway: Spend as much time thinking about what could go wrong as you do on what you want to go right.

Get creative. Two of my favorite maxims apply here, one old and one new. First, you can’t ever make anything foolproof, because fools are so ingenious. Second, you can’t build AI guard rails high enough.

Vendors are increasingly releasing guard rail frameworks that prevent certain LLM missteps. Obvious ones involve personally identifiable information: if you ask ChatGPT to share what it knows about a private individual, a home address, or anything like that, it will respectfully decline. Guard rails can also filter content based on context. Want to know about the different types of bombs dropped during World War II? OK. Want to know how to build a bomb? Not OK.

You will also have restrictions based on your own company’s policies and business needs. It’s an extension of the data protection standards and requirements already incorporated into your applications and databases. Now, not only do you have to protect the data, but you also have to consider the inputs, the outputs, and the content generated. 
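At its simplest, that means a check on the input before it reaches the model and a filter on the output before it leaves your system. The sketch below shows the shape of the idea with two toy rules; the patterns and blocked topics are illustrative placeholders, not a substitute for a vendor guard-rail framework or a vetted PII classifier.

```python
import re

# Illustrative patterns only; real deployments need far more robust detection.
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-like string
                re.compile(r"\b\d{16}\b")]              # card-number-like string
BLOCKED_TOPICS = ["build a bomb"]

def check_input(prompt: str) -> bool:
    """Reject prompts matching a blocked topic before they reach the LLM."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def filter_output(text: str) -> str:
    """Redact PII-looking strings from model output before it leaves the system."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(check_input("What bombs were dropped during World War II?"))  # True: allowed
print(check_input("How do I build a bomb?"))                        # False: blocked
print(filter_output("Customer SSN is 123-45-6789."))
```

Note that the two checks sit at different points in the pipeline: the input check protects the model from the user, and the output filter protects the user (and your company) from the model.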

As an Enterprise Architect, you will (or should) have the final word on what guard rails will be implemented and where, and you would be well served to understand the risks that must be considered: bias, toxic or destructive output, sensitive information, hallucinations, and prompt injection to name a few.

This conversation about Enterprise Architecture and AI Architecture will continue (that sounded kind of like the end of the closing credits of a James Bond movie).