Emergent Software

Before You Build: A Practical Guide to Approaching AI the Right Way

by Aaron Varga

In This Blog

Everyone wants to add AI. Fewer people want to ask whether they should.

That’s not cynicism — it’s pattern recognition. Over the past few years, the conversations I have with clients have started to sound familiar: a competitor shipped an AI feature, the board is asking questions, and now there’s pressure to do something with AI before the end of the quarter. The problem is that “we should add AI” is not a strategy. It’s a starting point.

The teams that get lasting value from AI treat it like any other engineering discipline: define the problem, measure a baseline, build incrementally, and validate as you go. This guide walks through how we think about that at Emergent — from the first conversation to production.

TL;DR

  • “We should add AI” is not a strategy — start by identifying the real problem
  • Three prerequisites before any AI work: a measurable success definition, accessible data, and clear governance
  • Off-the-shelf APIs handle most use cases; custom work is usually about orchestration, not models
  • AI is the wrong tool when errors are catastrophic, data is too sensitive, or a simpler solution already exists
  • Start with a one-week internal prototype, not a production build
  • AI features require ongoing monitoring — design for production, not the demo

Slow Down Before You Speed Up

The first thing I do when a client brings up AI is slow the conversation down. Not to kill momentum, but to ask the right question: what problem are you actually trying to solve?

Sometimes the answer is a better search index. Sometimes it’s a workflow redesign. Sometimes it’s fixing a data pipeline that’s been broken for two years. AI is a powerful tool, but it’s not always the right one — and reaching for it too quickly can paper over problems that would be cheaper and faster to fix another way.

The signal I look for: are humans doing pattern recognition at scale, or constantly synthesizing messy, unstructured information under time pressure? Code review, document analysis, ticket triage — these are strong candidates. If the task requires judgment and you can tolerate occasional misses, AI is usually worth exploring.

Volume matters too. If someone does a task ten times a day, a 70% accurate suggestion may not be worth the trust overhead. If they do it ten thousand times a day, even modest gains add up fast.

Three Prerequisites That Have to Come First

Before writing a single line of AI code, three things need to be in place.

Define what “good” looks like. If success isn’t measurable, you can’t tell whether AI is helping. I’ve seen teams integrate a model and spend months guessing whether it moved the needle. Set a measurable goal before you build anything.
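As a concrete illustration, a baseline can be as simple as scoring the current human process against a small hand-labeled sample. The task, labels, and data below are hypothetical; the point is that the number exists before any model does:

```python
# Hypothetical baseline: score today's (pre-AI) ticket triage against
# a small labeled sample, so any later AI result has a number to beat.

def accuracy(predicted: list[str], expected: list[str]) -> float:
    """Fraction of items where the prediction matches the label."""
    assert len(predicted) == len(expected)
    matches = sum(p == e for p, e in zip(predicted, expected))
    return matches / len(expected)

# Categories a human assigned today vs. the ground truth known later.
human_triage = ["billing", "bug", "billing", "feature", "bug"]
ground_truth = ["billing", "bug", "outage", "feature", "bug"]

baseline = accuracy(human_triage, ground_truth)
print(f"Current-process baseline: {baseline:.0%}")  # 80%
```

If an AI-assisted flow can't beat that 80% (or beat it on speed at equal accuracy), you know within a week rather than within a quarter.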

Get your data accessible and reasonably clean. Not perfect — perfect data only exists in demos. But if your data is spread across 17 systems with inconsistent IDs, or half of it is stale, you’ll spend most of your budget on plumbing. Sort that first.

Handle permissioned access and governance upfront. This one surprises people. Your AI system will read sensitive data and sometimes produce wrong output. You need clear rules for who can see what, what the model can access, and what happens when it gets something wrong. Security and governance are prerequisites, not cleanup work.
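One way to make "what the model can access" concrete is an allow-list that filters every retrieval request before it reaches the model. The roles and source names here are hypothetical, a minimal sketch of the idea rather than a real access-control system:

```python
# Hypothetical allow-list: which document collections each role may
# expose to the model. This is decided by governance before any AI
# code runs, and enforced on every retrieval.
ROLE_ACCESS = {
    "support_agent": {"product_docs", "public_policies"},
    "finance_analyst": {"product_docs", "billing_records"},
}

def retrievable_sources(role: str, requested: set[str]) -> set[str]:
    """Filter a retrieval request down to what the role may see.
    Unknown roles get nothing (deny by default)."""
    return requested & ROLE_ACCESS.get(role, set())

print(retrievable_sources("support_agent", {"billing_records", "product_docs"}))
# {'product_docs'}
```

In production this check belongs in the retrieval layer itself, not in the prompt, so a clever query can never talk the model into data it shouldn't see.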

If those three aren’t in place, pause on AI and fix them. You’ll get better results from every other technology investment too.

Build vs. Buy: A Simple Framework

The build-vs-buy decision comes up in almost every engagement, and the answer usually isn’t complicated once you ask the right questions.

Off-the-shelf APIs — Azure OpenAI, managed RAG services, tools like Copilot Studio — are good enough for a lot of use cases. Need a docs chatbot? You probably don’t need fine-tuning. Need summarization, sentiment analysis, or extraction? Start with what’s available and see how far it gets you.

Custom work starts to make sense when your domain is specialized and generic models struggle, or when you need very tight behavior control. In a support platform, for example, a generic “answer this ticket” flow might get you 60% of the way. The rest comes from custom orchestration — pulling the right account history, product docs, and policy context so responses are accurate for your specific business.

Integration depth is the other deciding factor. If AI needs to plug into your auth, data layer, and CI/CD as part of an existing product, that’s usually custom work. Not custom models necessarily, but custom orchestration around managed models.

The framework: how specialized is the domain, how deep is the integration, how much control do you need? If the answers are “not much,” “surface-level,” and “some” — buy. Otherwise, build the orchestration layer and use managed models underneath.
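The three questions above can even be sketched as a toy decision helper. The categories and the rule are illustrative, a rough rubric rather than a scoring model:

```python
def build_or_buy(domain_specialization: str, integration_depth: str,
                 control_needed: str) -> str:
    """Toy build-vs-buy heuristic. Each argument is 'low', 'medium',
    or 'high'. Mirrors the rule of thumb in the text: buy only when
    the domain is generic, integration is surface-level, and you
    need at most some behavior control."""
    if (domain_specialization == "low"
            and integration_depth == "low"
            and control_needed in ("low", "medium")):
        return "buy: start with a managed API"
    return "build: custom orchestration over managed models"

print(build_or_buy("low", "low", "medium"))
# buy: start with a managed API
print(build_or_buy("high", "high", "high"))
# build: custom orchestration over managed models
```

Note that even the "build" branch still assumes managed models underneath; the custom work is the orchestration layer, not the model.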

When AI Is Not the Right Answer

This is the part of the conversation no one wants to have, but it’s often the most valuable.

When the cost of being wrong is catastrophic and there’s no human in the loop. AI models are probabilistic — they will be wrong sometimes. If “sometimes wrong” means an incorrect medical dosage or an irreversible financial decision, you need a human checkpoint or a deterministic system. AI can assist, but it shouldn’t be the sole decision-maker.

When you don’t have a real problem to solve. Board pressure, FOMO, competitors shipping AI features — I get it. But if you’re adding AI to check a box, you’ll spend real money building something that confuses users and creates maintenance burden with no measurable return.

When the data is too sensitive or regulation is too restrictive. Some industries can use AI, but compliance overhead can consume the entire budget. Be honest about that up front rather than discovering it in month four.

When a simpler solution already works. This is the boring answer, and it’s usually the right one. If a rules engine, strong search, or a better form solves the problem, do that. AI adds complexity, and complexity costs money.

How We Help Clients Avoid Common Pitfalls

At Emergent, we start every engagement with a reality-check phase. Before anyone writes code, we ask: what’s the business outcome, what does success look like in numbers, what data do you actually have, and how wrong can the system be before it becomes unacceptable?

That conversation kills a lot of bad ideas early. Killing a bad idea in week one is a lot cheaper than killing it in month six.

From there, we follow a few core principles:

  • Human review checkpoints wherever the stakes are high
  • Instrument everything — if the AI is producing outputs, measure quality and catch drift early
  • Design for graceful degradation — if the AI component fails or goes noisy, the product falls back to a sane path instead of breaking
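The degradation principle can be sketched in a few lines. The `ai_summarize` callable here is a hypothetical stand-in for whatever model endpoint you use; the shape of the wrapper is the point:

```python
import logging

logger = logging.getLogger("ai_feature")

def summarize_with_fallback(text: str, ai_summarize) -> str:
    """Try the AI path; on any failure, log it (so error rates stay
    measurable) and fall back to a deterministic summary so the
    product keeps working instead of breaking."""
    try:
        summary = ai_summarize(text)
        if not summary or not summary.strip():
            raise ValueError("empty model output")
        return summary
    except Exception as exc:
        logger.warning("AI summarization failed, using fallback: %s", exc)
        # Sane non-AI fallback: first sentence, truncated.
        return text.split(". ")[0][:200]

# Simulate a flaky model endpoint:
def broken_model(_text):
    raise TimeoutError("model endpoint timed out")

print(summarize_with_fallback("The deploy failed. Rollback succeeded.", broken_model))
# The deploy failed
```

The same wrapper is also a natural place to hang the instrumentation from the previous bullet: every fallback is a logged event you can count, alert on, and trend over time.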

We also take responsible AI seriously: bias testing, content filtering, and access controls so the system doesn’t expose data it shouldn’t. Not as a checkbox exercise, but because once users lose trust in an AI feature, it’s very hard to win back.

A Smart First Step for Teams Just Getting Started

Pick one internal process that’s painful, repetitive, and low-stakes. Something that involves manual reading and synthesis — summarizing meeting notes, categorizing incoming requests, drafting first-pass responses to common questions.

Build a prototype in a week. Not production, just a prototype. Use a managed API, keep the scope tight, put it in front of five users, and watch what happens. You’ll learn more in that week than in three months of strategy slides.

The goal isn’t shipping. The goal is building team intuition about what AI is actually good at and where it falls apart. Once you have that intuition, every decision that follows gets better.

Final Thoughts: It’s Software, Not Magic

AI features are not “set it and forget it.” Models change, data drifts, and user expectations move. You need a plan for ongoing evaluation and maintenance. If your team can’t monitor and iterate after launch, quality will drift until your customers notice.

Be skeptical of demos, too. Every demo looks great: clean data, ideal prompts, perfect outputs. Production is where edge cases, messy inputs, unpredictable users, and slow APIs show up. Design for production, not the demo.

The teams that get real value from AI treat it like any other engineering discipline. It’s not magic (yet!). It’s software — and good software still takes craft and experience.

If you’re trying to figure out where AI fits in your product or organization, we’d love to talk. Get in touch with Emergent Software and let’s work through it together.

Frequently Asked Questions

How do I know if my organization is ready for AI?
Start with the three prerequisites: a measurable success definition, accessible and reasonably clean data, and clear governance around data access. If all three are in place, you’re in a good position to start a scoped pilot. If they’re not, fix those gaps first — you’ll get better results from AI and everything else.

Should we use ChatGPT, Azure OpenAI, or build our own model?
For most business use cases, you don’t need a custom model. Managed APIs from providers like Azure OpenAI are mature and capable. The real decision is usually about orchestration — how you connect the model to your data, your workflows, and your existing systems. That layer is almost always custom work, even when the model itself is off-the-shelf.

What’s the most common mistake teams make when adding AI?
Skipping the measurement baseline. If you don’t know what “good” looks like before you start, you have no way of knowing whether your AI investment is working. Define success in concrete numbers — accuracy, time saved, error rate — before you write any code.

How long does it take to see real value from an AI integration?
A well-scoped pilot can show meaningful results in four to six weeks. The teams that struggle are usually the ones that scope too broadly, skip the data-readiness work, or don’t instrument their outputs. Start small, measure early, and expand from there.

About Emergent Software

Emergent Software offers a full set of software services, from custom software development to ongoing system maintenance and support, serving clients across all industries in the Twin Cities metro, greater Minnesota, and throughout the country.
