In This Blog
- Tool Selection: Use AI Thoughtfully, Not Blindly
- Coding Standards: Context Is Everything
- AI for Code Generation: A Productivity Partner, Not a Replacement
- Code Review Discipline: Rigor Is Non-Negotiable
- When to Write Code Instead of Generate
- Using AI to Understand Code
- Ethical and Secure Use: Protecting Clients, IP, and Ourselves
- Final Thoughts: Use AI Intentionally, Not Automatically
- Frequently Asked Questions (FAQ)
Here’s a look at how we use AI tools: what we do, what we avoid, and why it matters.
Tool Selection: Use AI Thoughtfully, Not Blindly

We favor smaller, local models when possible. They are faster, use less energy, and often offer sufficient functionality without introducing additional security risks.
We regularly re-evaluate our tools as new models and providers emerge. The capabilities, safety features, and data protections offered by AI tools are improving rapidly, so our standards will adapt in step with the pace of innovation.
| ✅ Do | ❌ Don't |
| --- | --- |
| Ensure the tool does not ingest our code or information for its own training | Don't leak proprietary information from Emergent Software or our clients |
| Ensure all data is encrypted in transit | |
| Use tools within the constraints of their licensing | |
| Carefully choose the appropriate model for the task | |
| Prefer local and small models where possible | |
Coding Standards: Context Is Everything

AI tools are only as good as the context they're given. That's why we make sure to guide them carefully. When using GitHub Copilot, for example, developers open the right files in their IDE to give it the full picture. Instruction files add even more clarity.
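As a rough illustration, a repository-level instruction file (for example, GitHub Copilot's `.github/copilot-instructions.md`) can spell out the conventions we expect suggestions to follow. The stack, paths, and rules below are invented for the sake of the sketch; each project defines its own.

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch, not a real project's file) -->
- This repository is a .NET Web API with a React front end; follow the existing folder structure.
- Match the project's established patterns (repository classes for data access, existing DTOs) rather than introducing new ones.
- Style components with Tailwind utility classes; avoid inline styles.
- New code must include unit tests that follow the naming conventions already used in the tests/ folder.
```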
AI-generated code should always reflect the standards and patterns of the project it supports. If it doesn’t match, don’t use it.
| ✅ Do | ❌ Don't |
| --- | --- |
| Ensure AI tools follow the coding standards and patterns in your project | Don't accept AI-generated code that doesn't match the standards or patterns of the project |
| Make sure AI tools have the proper context | |
AI for Code Generation: A Productivity Partner, Not a Replacement

When used wisely, AI can be a great sidekick for developers. We use it for structured, pattern-based tasks like CRUD operations, SQL generation, and Tailwind CSS scaffolding. But it's never the final word: developers review and own every line of code. A small illustration of the kind of task we mean follows the table below.
| ✅ Do | ❌ Don't |
| --- | --- |
| Use AI tools to accelerate repetitive or boilerplate tasks (e.g., HTML scaffolding, regex, unit test templates) | Don't let AI dictate the architecture of your application |
| Leverage AI to comment code to ensure readability for future developers | Don't rely on AI for business-critical logic without thorough validation |
| Leverage AI tools for productivity in areas like Tailwind CSS, SQL generation, and test scaffolding | Don't rely on AI to make sweeping changes across multiple files with a single prompt |
| Use AI tools for pattern-based tasks, like form validation, CRUD operations, or test generation | Don't allow AI to lead you into patterns you don't understand |
| Treat AI tools as a pair programmer: use its suggestions as a starting point, not a final product | |
| Generate code in manageable chunks; keep changes small and with clear goals | |
| When AI tools lead you into a pattern or direction you don't understand, spend the time to understand or ask a teammate for help | |
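To make "pattern-based" concrete, here is the sort of small, easily verified boilerplate we are comfortable letting an assistant draft. The form shape and validation rules are hypothetical; the point is that the change is small enough for a developer to read, test, and own completely.

```typescript
// Illustrative only: pattern-based boilerplate an assistant might draft,
// which the developer then reviews line by line. Field names are hypothetical.
interface ContactForm {
  email: string;
  message: string;
}

// Simple field validation: a repetitive, well-understood pattern that is easy to verify.
export function validateContactForm(form: ContactForm): string[] {
  const errors: string[] = [];

  // Basic email sanity check; stricter rules would live with the project's standards.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(form.email)) {
    errors.push("Email address is not valid.");
  }
  if (form.message.trim().length === 0) {
    errors.push("Message is required.");
  }
  return errors;
}
```

A change of this size fits in one atomic commit, is trivial to cover with a unit test, and leaves no ambiguity about what the code does, which is exactly why it is a good candidate for generation.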
Code Review Discipline: Rigor Is Non-Negotiable

AI can produce code that looks great on the surface but breaks under pressure. We hold AI-generated code to an even higher review standard. Developers are expected to understand, test, and validate everything, no exceptions.
| ✅ Do | ❌ Don't |
| --- | --- |
| Always review AI-generated code with extra rigor | Never commit code you don't understand |
| Review your own AI-generated code before creating a PR | Don't skip code reviews just because AI wrote it |
| Be aware that AI may hallucinate, misinterpret context, or suggest insecure patterns | Don't assume AI understands your business logic or architectural constraints |
| Be aware that logic and math are weak points of AI code generation | Don't assume that a good explanation means the code is correct |
| Use peer reviews and tools to validate correctness, security, and maintainability | |
| Actively combat the temptation to skip reviews due to time constraints or mental exhaustion | |
| Keep PRs atomic; avoid the temptation to submit large PRs because AI generated a lot of code in a short amount of time | |
| Ask for help when reviewing code outside your comfort zone | |
| Take full ownership of committed code, regardless of how it was written | |
When to Write Code Instead of Generate

There's immense value in writing code manually. It helps developers maintain their skills, explore creative solutions, and stay fully engaged. Not every task should be outsourced to AI, even if it could be.
| ✅ Do | ❌ Don't |
| --- | --- |
| Create foundational architecture and patterns that guide AI suggestions | Don't default to Copilot for every task |
| Write code manually to learn new technologies, techniques, or paradigms | Don't become just an AI code reviewer; this leads to boredom and weaker review skills |
| Use manual coding to explore experimental ideas or strengthen understanding | Don't use AI as a crutch for tasks you should know or want to learn |
| Balance speed with quality | |
Using AI to Understand Code

AI can be a great learning partner when you're trying to understand unfamiliar code or system behavior. But its suggestions need to be double-checked. The goal isn't just to get an answer; it's to build deeper understanding.
| ✅ Do | ❌ Don't |
| --- | --- |
| Use AI tools to explain code blocks and patterns you don't understand | Don't accept AI code review suggestions without understanding why |
| Use AI tools to find where a change needs to be made | Don't accept AI results at face value, as it can hallucinate; remain skeptical and verify the accuracy |
| Ask AI tools to review your code for ways to improve | |
| Use AI tools to reframe a problem into different formats (e.g., sequence diagrams, a language you're more familiar with, etc.) | |
| Recognize that AI tools are just one tool in your toolbox; leverage your team, your own brain, and web search to help understand | |
Ethical and Secure Use: Protecting Clients, IP, and Ourselves

Responsible AI use also means guarding against ethical risks and intellectual property entanglements. We set clear boundaries between personal and professional usage, and we favor simpler, non-AI tools when they save energy, reduce complexity, or just get the job done faster.
Ethical and security standards for AI will also evolve alongside regulatory changes and emerging risks. We remain committed to revisiting our policies and practices as the external landscape shifts, ensuring we stay ahead of both technology and compliance expectations.
| ✅ Do | ❌ Don't |
| --- | --- |
| Use separate accounts for personal and work-related AI usage | Don't use company-licensed AI tools for personal or side projects |
| Follow GitHub's responsible use guidelines for Copilot features | Don't assume AI-generated code is free from licensing or attribution concerns |
| Watch for bias in AI suggestions; training data can pass on real-world bias | Don't use generative AI where a static tool (like a linter) would be more appropriate |
| Favor local models and lightweight tools for tasks that don't need the cloud | |
| Consider disabling AI auto-suggestions and instead trigger them intentionally, e.g., with a keystroke or comment (see the settings sketch after this table) | |
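One way to make suggestions intentional rather than automatic is to turn off ghost-text completions and invoke them only on demand. This is a minimal sketch assuming VS Code; other editors and Copilot versions expose equivalent options under different names.

```jsonc
// settings.json: disable automatic inline suggestions so completions
// appear only when you explicitly ask for them.
{
  "editor.inlineSuggest.enabled": false
}
```

With auto-suggest off, you can still request a completion on demand through the editor's trigger command (in VS Code, `editor.action.inlineSuggest.trigger`), bound to whatever keystroke you prefer.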
Final Thoughts: Use AI Intentionally, Not Automatically

AI is a tool. Not a developer, not a strategist, and not a substitute for judgment. At Emergent Software, we embrace the productivity benefits of AI while staying vigilant about its risks. By setting clear standards, we empower our teams to work faster and smarter, without compromising on quality, security, or learning.
But these standards are not static. As AI progresses, our expectations, safeguards, and best practices will grow with it. We’re prepared to adapt our approach, expanding AI’s role where appropriate while maintaining a strong human-centered foundation in everything we build.
Frequently Asked Questions (FAQ)

Can AI tools design your whole application architecture?
Not reliably, and definitely not safely. While AI can generate architecture diagrams or boilerplate examples, designing application architecture is about understanding tradeoffs: scalability, performance, security, maintainability, and business context. Those are deeply human decisions that involve discussion, consensus, and experience. AI can offer starting points or visual representations of existing systems, but if you let it dictate architecture, you risk building something fragile, unscalable, or insecure. At Emergent, we rely on experienced engineers to drive architecture decisions, with AI acting as a support tool, not a lead designer.
Isn’t using AI a shortcut?
In some ways, yes, but that’s not a bad thing. The key is using AI intentionally, not passively. AI shines when it takes on tasks that are tedious, repetitive, or prone to human error, like generating test scaffolding, formatting code, or producing variations of a component style. That frees up our developers to focus on high-value work like feature design, performance tuning, and system integration. When used well, AI helps us move faster without compromising quality. But we never use it to cut corners on logic, architecture, or business rules. The goal isn’t to do less thinking, it’s to think better and faster.
What happens when AI gets it wrong?
It happens more often than people realize, and that’s why human oversight is so important. One of the biggest risks with AI is that it can produce code that looks right, compiles without errors, and even passes tests, but still contains flaws, logic gaps, or security vulnerabilities. That’s why we treat AI-generated output as a first draft, never a final product. Our developers are trained to critically evaluate every suggestion. We also use peer code reviews and automated validation tools to catch issues that AI might miss. Mistakes are caught before they make it to production, because the final call always lies with a human.
Can AI replace developers?
AI reduces the amount of time it takes to build an application. We believe this will lead to more applications being built, not fewer developers being employed. As the time to develop custom applications goes down, the speed and value go up. This will lead to a boom in custom applications replacing off-the-shelf products. More importantly, great developers do more than write code; they solve problems, align technology with business needs, and mentor teams. At Emergent, we believe AI augments human skill; it doesn't replace it. And the best developers are the ones who know when to use AI, and when to rely on their own expertise.