Most support teams do not have an AI problem. They have a context problem.

By the time an issue reaches a human, the relevant information is already spread across a CRM, internal platform tools, help center content, shipment data, and engineering tickets. The work is less about answering one question and more about stitching together the right operational picture fast enough to act on it.

That is where AI becomes useful. Not as a general chatbot, but as a system that can pull the right context together inside the workflow the team already uses.

Start Simple

The first workable version did not require a large platform. It just required structured access.

I started with a lightweight pattern: connect an LLM client to the core systems through MCP so the model could read CRM data, check platform information, and surface Jira context inside a single conversation.

   ┌─────────────┐      ┌──────────────┐      ┌─────────────┐
   │   CRM API   │      │ Platform API │      │    Jira     │
   └──────┬──────┘      └──────┬───────┘      └──────┬──────┘
          │                    │                     │
          │ MCP                │ MCP                 │ MCP
          └────────────┬───────┴────────────┬────────┘
                       ▼                    ▼
                 ┌──────────────────────────────┐
                 │          LLM client          │
                 │ investigation in one thread  │
                 └──────────────────────────────┘
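
The connection layer above can be sketched as a small tool registry. This is a minimal illustration only: the tool names (`crm_lookup`, `platform_lookup`, `jira_search`) and their return shapes are hypothetical stand-ins, and a real setup would register each one with an MCP server in front of the actual backend API.

```python
# Minimal sketch of the MCP-style connection layer. Tool names and payloads
# are hypothetical; in practice each handler would call its system's real API
# through an MCP server.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[str], dict]

# Stand-ins for the real backends.
def crm_lookup(customer_id: str) -> dict:
    return {"customer": customer_id, "tier": "pro", "open_tickets": 2}

def platform_lookup(account_id: str) -> dict:
    return {"account": account_id, "status": "active"}

def jira_search(query: str) -> dict:
    return {"query": query, "issues": ["SUP-1042"]}

# One registry, one conversation: the model picks a tool, we dispatch it.
TOOLS = {
    "crm_lookup": Tool("crm_lookup", "Read CRM data for a customer", crm_lookup),
    "platform_lookup": Tool("platform_lookup", "Check platform account state", platform_lookup),
    "jira_search": Tool("jira_search", "Surface related Jira issues", jira_search),
}

def dispatch(tool_name: str, arg: str) -> dict:
    """Route a model-issued tool call to the right backend."""
    return TOOLS[tool_name].handler(arg)
```

The point of the sketch is the shape, not the code: three systems, one dispatch surface, so the whole investigation can happen in a single thread.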

That simple connection layer was enough to change the workflow. Instead of jumping between systems and rewriting the same issue three different ways, the conversation itself became the investigation workspace.

That was the first important lesson: AI does not need to replace the support stack to improve it. It needs to connect to it cleanly.

What Better Looks Like

Once the lightweight version proved itself, its limitations became obvious too. Good support work is not only about looking things up. It starts with events, depends on case enrichment, relies on retrieval quality, and needs guardrails around what the model is allowed to do.

That naturally led to a more complete design.

┌──────────────────────────────────────────────────────────────┐
│ Events: CRM webhook | Fulfillment | SLA checks               │
│                -> CRM enrichment -> CaseContext              │
└───────────────────────────┬──────────────────────────────────┘
                            ▼
              ┌──────────────────────────────┐
              │ Triage + RAG retrieval       │
              │ intent | routing | knowledge │
              └──────────────┬───────────────┘
                             ▼
              ┌──────────────────────────────┐
              │ LLM response + action        │
              │ draft | next step | citation │
              └──────────────┬───────────────┘
                             ▼
              ┌──────────────────────────────┐
              │ Policy check + execution     │
              │ auto-resolve or agent review │
              └──────────────┬───────────────┘
                             ▼
                        Feedback loop
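
The enrichment step at the top of that pipeline can be sketched as folding an inbound event into a single `CaseContext` object that the later stages consume. The field names here are illustrative assumptions, not a real schema:

```python
# Sketch of event -> CRM enrichment -> CaseContext. Field names and the
# event layout are illustrative; a real pipeline would define its own schema.
from dataclasses import dataclass, field

@dataclass
class CaseContext:
    case_id: str
    customer_tier: str = "unknown"
    shipment_status: str = "unknown"
    sla_breached: bool = False
    related_issues: list = field(default_factory=list)

def enrich(event: dict) -> CaseContext:
    """Merge CRM, fulfillment, and SLA signals into one context object."""
    ctx = CaseContext(case_id=event["case_id"])
    ctx.customer_tier = event.get("crm", {}).get("tier", "unknown")
    ctx.shipment_status = event.get("fulfillment", {}).get("status", "unknown")
    ctx.sla_breached = event.get("sla", {}).get("breached", False)
    ctx.related_issues = event.get("jira", {}).get("issues", [])
    return ctx
```

Everything downstream — triage, retrieval, drafting, policy — reads from that one object instead of re-querying five systems.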

Why This Matters

The real bottleneck in support is fragmented context. One issue can span ticket history, platform records, shipment events, knowledge base content, and engineering follow-up. A model that only answers text prompts will always stop short of being operationally useful.

A model that can assemble context, retrieve the right knowledge, draft a response, suggest the next step, and stay inside policy boundaries is different. That starts to behave less like a chatbot and more like infrastructure for the support team.
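
The "stay inside policy boundaries" part is worth making concrete. A minimal sketch of the policy check from the pipeline above — the rules and thresholds are invented for illustration, not a real policy engine:

```python
# Sketch of the policy check + execution stage: a drafted action only
# auto-resolves when it clears every rule; otherwise it goes to agent review.
# The intent whitelist and confidence threshold are illustrative assumptions.
AUTO_RESOLVE_INTENTS = {"order_status", "password_reset"}

def decide(intent: str, confidence: float, sla_breached: bool) -> str:
    """Return 'auto_resolve' or 'agent_review' for a drafted response."""
    if sla_breached:
        return "agent_review"   # SLA breaches always go to a human
    if intent not in AUTO_RESOLVE_INTENTS:
        return "agent_review"   # only whitelisted intents may auto-resolve
    if confidence < 0.9:
        return "agent_review"   # low-confidence drafts need review
    return "auto_resolve"
```

The design choice that matters is the default: anything the rules do not explicitly clear falls through to a human, which is what keeps the system trustworthy enough to act at all.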

The Main Lesson

I still think the right way to start was the lightweight one. Connect the model to real systems, prove the workflow, and learn where the friction actually is.

But the end state should be more ambitious. The best support AI systems are not built around a prompt alone. They are built around context, retrieval, policy, and action.

The first step was making AI usable. The more interesting step is making it operational.

Robert Klouda