Inside the AI Engine
How Conduit agents think

The Response flow pipeline is the reasoning core behind every AI-generated message in Conduit. It takes an incoming conversation, understands what’s being asked, gathers the right information, uses the right tools, and either produces a safe, useful reply or routes the message for human follow-up when needed.
At a high level, the process moves through five stages:
Query Analysis – understanding the request and deciding what actions are needed.
Knowledge Search and Tool Use – gathering relevant facts and calling external functions.
Agent Injection – running specialised micro-agents for field extraction or follow-up questions.
Reasoning Engine – combining all gathered context, rules, and guidance into a coherent plan.
Reflection and Response Generation – self-checking and producing the final reply.
Each stage leaves a clear trace of what happened, so you can inspect reasoning and improve behaviour over time.
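As a mental model, the whole flow can be pictured as a chain of stage functions that each read and extend a shared scratchpad, leaving a trace entry behind. The sketch below is illustrative only: the Scratchpad fields and stage names are our inventions for this page, not Conduit identifiers.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Scratchpad:
    """Shared working context that every stage reads and extends."""
    messages: list[str] = field(default_factory=list)   # trimmed history
    facts: list[str] = field(default_factory=list)      # knowledge + tool results
    trace: list[tuple[str, Any]] = field(default_factory=list)  # per-stage record

def run_pipeline(pad: Scratchpad,
                 stages: list[Callable[[Scratchpad], Any]]) -> Scratchpad:
    # Each stage mutates the scratchpad and returns a summary; recording the
    # summary per stage is what makes the reasoning inspectable afterwards.
    for stage in stages:
        summary = stage(pad)
        pad.trace.append((stage.__name__, summary))
    return pad

# Hypothetical stage stubs, in pipeline order:
def analyze_query(pad):        return "request classified and routed"
def search_and_run_tools(pad): return "facts and tool results gathered"
def inject_agents(pad):        return "micro-agent outputs merged"
def reason(pad):               return "response plan drafted"
def reflect_and_respond(pad):  return "reply checked and generated"

pad = run_pipeline(Scratchpad(messages=["What time is check-in?"]),
                   [analyze_query, search_and_run_tools,
                    inject_agents, reason, reflect_and_respond])
print([name for name, _ in pad.trace])
```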
1. Query Analysis
The first step is to make sure the AI understands the request in context.
Message preparation – The system trims conversation history to keep the most relevant and recent exchanges within a safe size limit.
Context building – It builds an “analysis scratchpad” that includes the latest message, recent history, and relevant workspace settings.
Classification – The message is tagged by topic for analytics and routing.
Action routing – A routing model decides whether tools should be run, whether any agents should be injected, and whether the question can be answered immediately or needs more processing.
This step ensures that all later reasoning is anchored to a clear, bounded interpretation of the user’s request.
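To make the trimming and routing steps concrete, here is a minimal sketch. The token budget, the four-characters-per-token estimate, and the RouteDecision fields are all assumptions for illustration, not the engine's actual limits.

```python
from dataclasses import dataclass

def trim_history(messages: list[str], budget_tokens: int = 2000) -> list[str]:
    """Keep the most recent messages that fit a rough token budget.
    (Assumes ~4 characters per token; the real limit and estimator may differ.)"""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = max(1, len(msg) // 4)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

@dataclass
class RouteDecision:
    """What the routing model decides (illustrative fields)."""
    topic: str                  # classification tag for analytics and routing
    run_tools: bool             # does answering require external actions?
    inject_agents: list[str]    # micro-agents to run, if any
    answer_directly: bool       # can we reply without further processing?

# Example: a simple FAQ-style question needs no tools or agents.
decision = RouteDecision(topic="check-in", run_tools=False,
                         inject_agents=[], answer_directly=True)
```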
2. Knowledge Search and Tool Execution
Once the query is understood, the engine pulls in the information it needs to answer.
Hybrid search – Searches run across both unstructured sources (knowledge articles, past conversations, uploaded files) and structured fields (property data, custom attributes).
Relevance filtering – Only the most relevant results are kept, removing duplicates and unrelated matches.
Tool calls – If the query requires an action (e.g. fetching booking data, updating a system), the relevant tools are called directly. Tool results are fed back into the scratchpad as additional context.
By the end of this stage, the engine has a working set of factual material and any real-time data it needs to answer accurately.
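The merge-and-filter behaviour can be sketched as follows: results from unstructured and structured searches are pooled, deduplicated, and cut off at a relevance threshold. The scoring scale, threshold, and result cap are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hit:
    source: str    # e.g. "article", "past_conversation", "property_field"
    text: str
    score: float   # relevance, assumed normalised to 0..1

def hybrid_search(unstructured: list[Hit], structured: list[Hit],
                  min_score: float = 0.5, top_k: int = 8) -> list[Hit]:
    """Pool both result sets, drop duplicates and weak matches, keep the best."""
    pooled, seen = [], set()
    for hit in sorted(unstructured + structured,
                      key=lambda h: h.score, reverse=True):
        if hit.text in seen or hit.score < min_score:
            continue                      # duplicate or unrelated match
        seen.add(hit.text)
        pooled.append(hit)
    return pooled[:top_k]

hits = hybrid_search(
    unstructured=[Hit("article", "Check-in opens at 15:00.", 0.91),
                  Hit("past_conversation", "Check-in opens at 15:00.", 0.80),
                  Hit("article", "Pool hours are seasonal.", 0.31)],
    structured=[Hit("property_field", "checkin_time=15:00", 0.88)])
print([h.text for h in hits])   # two distinct, relevant facts survive
```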
3. Agent Injection
Some workflows benefit from targeted, specialised reasoning inside the main pipeline. Agent injection is how we run these specialised “micro-agents” alongside the core flow.
An injected agent can:
Extract specific fields from the conversation (e.g. arrival time, guest count) without guessing.
Draft follow-up questions to move the conversation forward if key information is missing.
Multiple injected agents can run in parallel, and their outputs are merged into the scratchpad so the final reasoning step naturally incorporates both the extracted facts and any follow-up prompts.
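Here is a minimal sketch of that fan-out/merge pattern, assuming the micro-agents are async callables that each return a partial result; every name below is hypothetical.

```python
import asyncio

async def extract_fields(conversation: str) -> dict:
    """Hypothetical micro-agent: pull structured fields without guessing.
    A field that cannot be grounded in the conversation comes back as None."""
    return {"arrival_time": "15:00" if "3pm" in conversation else None,
            "guest_count": None}

async def draft_follow_up(conversation: str) -> dict:
    """Hypothetical micro-agent: propose a question for missing information."""
    return {"follow_up": "How many guests will be staying?"}

async def inject_agents(conversation: str) -> dict:
    # Run all injected agents concurrently, then merge their outputs into
    # one dict that gets folded into the scratchpad for the reasoning stage.
    results = await asyncio.gather(extract_fields(conversation),
                                   draft_follow_up(conversation))
    merged: dict = {}
    for partial in results:
        merged.update(partial)
    return merged

print(asyncio.run(inject_agents("We land around 3pm.")))
# {'arrival_time': '15:00', 'guest_count': None,
#  'follow_up': 'How many guests will be staying?'}
```

Because extraction never guesses, an ungrounded field stays None, which is precisely what makes the drafted follow-up question useful downstream.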
4. Reasoning Engine
With the query analysed, facts retrieved, and tools and agents run, the reasoning engine brings everything together.
Rules and guardrails – Relevant rules are matched and analysed, including workflow triggers and escalation policies.
Style guide – Workspace-level tone and formatting guidance is added so the AI’s response matches your communication style.
Planning – The engine determines the safest and most effective way to respond given the context, retrieved knowledge, and rules in effect.
This stage is where procedural knowledge (“if X, then do Y”) is applied, and where the AI plans not just what to say but how to say it.
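As a sketch of how "if X, then do Y" knowledge might be represented, rules can be modelled as condition/action pairs matched against the scratchpad context. The rule contents below are invented examples, not Conduit's built-in policies.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]   # "if X..."
    action: str                       # "...then do Y" (guidance for the planner)

RULES = [
    Rule("escalate_refunds",
         applies=lambda ctx: "refund" in ctx.get("topic", ""),
         action="escalate to a human agent"),
    Rule("late_checkin_workflow",
         applies=lambda ctx: ctx.get("arrival_time", "") >= "22:00",
         action="trigger the late check-in workflow"),
]

def match_rules(ctx: dict) -> list[str]:
    """Collect the action of every rule whose condition holds in context."""
    return [r.action for r in RULES if r.applies(ctx)]

print(match_rules({"topic": "refund request", "arrival_time": "23:15"}))
# ['escalate to a human agent', 'trigger the late check-in workflow']
```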
5. Reflection and Response Generation
Before sending, the AI evaluates its own plan:
Reflection – The system checks whether it has enough information to answer confidently. If not, it may generate a clarifying question or flag the case for escalation.
Response generation – Using the combined scratchpad (facts, tool results, agent output, rules, style guide), the AI writes the reply.
Post-processing – Any placeholders or internal markers are removed, and a signature is added if required.
The final output is then returned along with structured metadata: relevant documents, tools used, reflection trace, tags describing the run, and any escalation flags.
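Putting the last stage together, one plausible shape for the reflection gate and post-processing is sketched below; the confidence threshold, the {{...}} marker syntax, and the Outcome fields are assumptions made for this example.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Outcome:
    reply: str
    escalate: bool = False
    metadata: dict = field(default_factory=dict)

def reflect_and_respond(facts: list[str], confidence: float,
                        draft: str, signature: str = "") -> Outcome:
    # Reflection: if grounding is weak, ask or escalate rather than guess.
    if not facts or confidence < 0.6:            # threshold is illustrative
        return Outcome(reply="Could you share a bit more detail?",
                       escalate=True,
                       metadata={"reflection": "insufficient context"})
    # Post-processing: strip internal markers and append a signature if set.
    clean = re.sub(r"\{\{.*?\}\}", "", draft).strip()
    if signature:
        clean = f"{clean}\n\n{signature}"
    return Outcome(reply=clean,
                   metadata={"documents": facts, "reflection": "confident"})

out = reflect_and_respond(facts=["Check-in opens at 15:00."], confidence=0.9,
                          draft="Check-in starts at 3pm. {{internal_note}}",
                          signature="Best, the Conduit team")
print(out.reply)
```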
The Outcomes of This Design
Every stage in the Response flow pipeline exists for a reason. Together, they ensure the engine:
Understands the question before answering.
Pulls in the right facts and data at the right time.
Uses targeted agents for precise information gathering.
Applies rules and style guidance consistently.
Checks its own work before sending.
This flow makes the system transparent in its reasoning and dependable in its output, so it can be trusted to operate across your most critical communication channels.