
How We Choose the Right AI Stack for Enterprise Clients

A behind-the-scenes look at our decision-making framework when clients approach us for AI solutions—from discovery to deployment.

Kirk Bilias · March 2025 · 12 min read

When a client approaches us about implementing AI, the conversation rarely starts with technology. It starts with understanding their business problem—because the best AI stack is the one that solves the actual problem, not the one that looks impressive on a slide deck.

The Discovery Trap

I've seen too many AI projects fail because teams jumped straight to technology selection. "We need GPT-4" or "We should use Azure OpenAI" becomes the starting point, when it should be the conclusion of a structured discovery process.

Here's what we've learned: the technology decision is often the easiest part. The hard work is understanding what you're actually trying to achieve.

Our Discovery Framework

Phase 1: Problem Definition (Week 1)

We start every engagement with intensive stakeholder interviews. Not just IT—we talk to the people who will actually use the system.

Key questions we ask:

  • What specific business outcome are you trying to achieve?
  • What does success look like in 6 months? 12 months?
  • What's the cost of the current process (in time, money, errors)?
  • Who will use this system daily?
  • What happens when the AI gets it wrong?

That last question is crucial. A customer service chatbot that occasionally gives wrong answers has a very different risk profile than an AI system reviewing legal contracts.

Phase 2: Data Assessment (Week 2)

AI is only as good as the data feeding it. We conduct a thorough assessment of:

Data availability - Do you have the training data needed? Is it accessible?

Data quality - How clean is it? What are the gaps?

Data governance - Can this data be used for AI training? Are there privacy concerns?

Data freshness - How often does it update? Is real-time processing needed?

A client recently wanted a predictive maintenance system. After our data assessment, we discovered their sensor data had significant gaps—sometimes days of missing readings. We spent the first phase of the project fixing data collection before touching any ML models.
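
To make this concrete, here is a minimal sketch of the kind of gap check we run during a data assessment. It assumes readings live in a pandas DataFrame with a timestamp column; the column name and the one-hour threshold are illustrative, not fixed rules.

```python
import pandas as pd

def find_gaps(df: pd.DataFrame, timestamp_col: str = "timestamp",
              max_gap: str = "1h") -> pd.DataFrame:
    """Flag stretches where sensor readings are missing for longer than max_gap."""
    ts = pd.to_datetime(df[timestamp_col]).sort_values().reset_index(drop=True)
    deltas = ts.diff()  # time between consecutive readings
    gaps = deltas[deltas > pd.Timedelta(max_gap)]
    return pd.DataFrame({
        "gap_start": ts[gaps.index - 1].values,  # last reading before the gap
        "gap_end": ts[gaps.index].values,        # first reading after the gap
        "duration": gaps.values,
    })

# Hypothetical usage: sensor_df holds raw readings expected every few minutes.
# gaps = find_gaps(sensor_df, max_gap="1h")
# print(f"{len(gaps)} gaps found, longest: {gaps['duration'].max()}")
```

A report like this is often the first deliverable of the engagement: it turns "our data might have holes" into a concrete list of when and for how long.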

Phase 3: Build vs. Buy Analysis

This is where we get practical. For every AI use case, there are typically three paths:

Option 1: Off-the-shelf SaaS - Products like Intercom's AI, Zendesk Answer Bot, or industry-specific solutions.

Pros: Fast deployment, proven reliability, predictable costs.

Cons: Limited customization, vendor lock-in, data leaves your environment.

Option 2: Platform + Customization - Azure OpenAI, AWS Bedrock, or Google Vertex AI with custom prompts and integrations.

Pros: More control, enterprise security, reasonable time-to-value.

Cons: Requires technical expertise, ongoing prompt engineering, usage-based costs.

Option 3: Custom Build - Fine-tuned models, custom training, bespoke architecture.

Pros: Maximum differentiation, full control, potential competitive advantage.

Cons: Longest time-to-value, highest cost, requires ML expertise.

The Decision Matrix

We evaluate each option across five dimensions, scoring each from 1 to 5 (a minimal scoring sketch follows the list):

Time to value - How quickly can this be deployed?

Total cost of ownership - Including hidden costs like maintenance and retraining.

Differentiation potential - Does this create competitive advantage?

Risk profile - What happens when it fails?

Team capability - Can your team maintain this long-term?
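
Here is a minimal scoring sketch of the matrix in Python. The weights and the example scores for the three options are illustrative only; every engagement weights the dimensions differently.

```python
# Weighted decision matrix: score each option 1-5 per dimension,
# weight the dimensions, and rank by total. All numbers are illustrative.
WEIGHTS = {
    "time_to_value": 0.25,
    "total_cost_of_ownership": 0.25,
    "differentiation": 0.15,
    "risk_profile": 0.20,
    "team_capability": 0.15,
}

def score(option_scores: dict[str, int]) -> float:
    """Weighted total for one option; expects a 1-5 score per dimension."""
    return sum(WEIGHTS[dim] * s for dim, s in option_scores.items())

options = {
    "off_the_shelf":   {"time_to_value": 5, "total_cost_of_ownership": 4,
                        "differentiation": 1, "risk_profile": 4, "team_capability": 5},
    "platform_custom": {"time_to_value": 4, "total_cost_of_ownership": 3,
                        "differentiation": 3, "risk_profile": 4, "team_capability": 3},
    "custom_build":    {"time_to_value": 1, "total_cost_of_ownership": 2,
                        "differentiation": 5, "risk_profile": 2, "team_capability": 2},
}

for name, total in sorted(((n, score(s)) for n, s in options.items()),
                          key=lambda x: -x[1]):
    print(f"{name}: {total:.2f}")
```

The point of the exercise isn't the arithmetic; it's forcing stakeholders to agree on the weights before anyone argues about the options.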

┌─────────────────────────────────────────────────────────────────────────┐
│                    AI STACK DECISION FRAMEWORK                           │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│   WEEK 1: DISCOVERY          WEEK 2: DATA           WEEK 3: DECISION   │
│   ─────────────────          ──────────────         ─────────────────   │
│                                                                          │
│   ┌─────────────┐           ┌─────────────┐         ┌───────────────┐   │
│   │  Business   │           │   Data      │         │   Build vs    │   │
│   │  Problem    │──────────▶│   Quality   │────────▶│   Buy         │   │
│   │  Definition │           │   Audit     │         │   Analysis    │   │
│   └─────────────┘           └─────────────┘         └───────┬───────┘   │
│         │                         │                         │           │
│         ▼                         ▼                         ▼           │
│   ┌─────────────┐           ┌─────────────┐         ┌───────────────┐   │
│   │ Stakeholder │           │  Privacy &  │         │    Model      │   │
│   │ Interviews  │           │  Governance │         │   Selection   │   │
│   └─────────────┘           └─────────────┘         └───────┬───────┘   │
│                                                              │           │
│                                                              ▼           │
│                                                     ┌───────────────┐   │
│                                                     │  RECOMMEND-   │   │
│                                                     │  ATION        │   │
│                                                     └───────────────┘   │
└─────────────────────────────────────────────────────────────────────────┘

Real Example: Customer Service Transformation

A mid-sized financial services client came to us wanting "an AI chatbot." After discovery, we learned:

  • They handled 50,000 support inquiries monthly
  • 60% were repetitive questions about account status, rates, and procedures
  • Their existing knowledge base was comprehensive but poorly organized
  • Compliance requirements meant strict accuracy standards
  • They had a small IT team with limited AI experience

Our recommendation: Azure OpenAI with RAG (Retrieval-Augmented Generation), grounding responses in their existing documentation, with human escalation for anything outside defined patterns.

We specifically avoided fine-tuning because their compliance team needed to audit exactly where answers came from. RAG gave them that transparency.
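
For readers who want to see the shape of that recommendation, below is a heavily simplified RAG sketch using the openai Python SDK against Azure OpenAI. The endpoint, API version, deployment names, and in-memory document list are all placeholders; a production system would use a proper vector store plus the compliance logging described above.

```python
import os
import numpy as np
from openai import AzureOpenAI

# Endpoint, API version, and deployment names below are placeholders.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

# Pre-embedded knowledge-base snippets (built offline in a real system).
docs = ["Account balances update nightly at 2 a.m. ET.",
        "GIC rates are posted every Monday morning."]
doc_vecs = [embed(d) for d in docs]

def answer(question: str, k: int = 2) -> tuple[str, list[str]]:
    q = embed(question)
    # Cosine similarity against every snippet; a vector DB replaces this at scale.
    sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vecs]
    top = [docs[i] for i in np.argsort(sims)[-k:]]
    resp = client.chat.completions.create(
        model="gpt-35-turbo",  # an Azure deployment name, not the OpenAI model ID
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the context below. If the answer is not "
                        "in the context, say so and escalate to a human agent.\n\n"
                        + "\n".join(top)},
            {"role": "user", "content": question},
        ],
    )
    # Returning the retrieved snippets alongside the answer is what gives
    # the compliance team an audit trail for every response.
    return resp.choices[0].message.content, top
```
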

Model Selection: The Practical Considerations

Once we've decided on an approach, model selection comes down to practical tradeoffs:

GPT-4o / GPT-4 Turbo - Best general reasoning, highest cost, great for complex tasks.

GPT-3.5 Turbo - Roughly 10x cheaper than GPT-4, often sufficient for straightforward tasks.

Claude 3.5 Sonnet - Excellent for analysis, strong on nuance, good context windows.

Specialized models - For embeddings (e.g., text-embedding-ada-002), code generation, or domain-specific tasks.

We often recommend starting with GPT-3.5 Turbo and upgrading only where complexity demands it. Many clients are surprised by how capable the smaller models are for routine tasks.
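
One way to operationalize that advice is a simple router that sends requests to the cheaper model by default and escalates only flagged cases. The keyword heuristic and model names below are illustrative; real routers typically use a classifier or confidence signals instead.

```python
# Illustrative tiered routing: cheap model by default, strong model only
# when a task looks complex. The heuristic is deliberately crude.
CHEAP_MODEL = "gpt-3.5-turbo"
STRONG_MODEL = "gpt-4o"

COMPLEX_HINTS = ("analyze", "compare", "multi-step", "legal", "contract")

def pick_model(task: str) -> str:
    text = task.lower()
    if any(hint in text for hint in COMPLEX_HINTS) or len(task) > 2000:
        return STRONG_MODEL
    return CHEAP_MODEL

assert pick_model("What are your business hours?") == CHEAP_MODEL
assert pick_model("Compare these two contract clauses for me.") == STRONG_MODEL
```
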

The Canadian Context

For our Ottawa and Montreal clients, data residency is often a key factor. Azure's Canada Central region gives us GPT-4 and GPT-3.5 with data staying in Canada—critical for government and regulated industries.
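
In practice, keeping workloads in Canada is mostly a matter of which region the resource is provisioned in; a minimal client configuration might look like the sketch below, where the resource name is hypothetical.

```python
import os
from openai import AzureOpenAI

# Hypothetical resource provisioned in Azure's Canada Central region;
# requests to this endpoint stay within that region's data boundary.
client = AzureOpenAI(
    azure_endpoint="https://my-canada-central-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
```
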

Key Takeaways

1. Start with the problem, not the technology. The best AI projects begin with deep understanding of the business need.

2. Assess your data before selecting tools. Poor data kills AI projects faster than poor technology choices.

3. Consider total cost of ownership. The cheapest per-token model isn't always the cheapest solution.

4. Match complexity to capability. Your team needs to maintain this after we leave.

5. Plan for failure modes. Every AI system will make mistakes—design for graceful handling.

The right AI stack isn't about using the most advanced technology. It's about using the right technology for your specific situation, team, and goals.

Need help choosing your AI stack?

Let's discuss how these principles apply to your specific situation.

Get in touch