interlocute.ai beta

Deep Reasoning

Give your node time to think. Deep Reasoning executes a multi-step pipeline — plan, draft, critique, revise, finalize — as a single governed job. Every intermediate step is inspectable, budget-capped, and resumable.

Plan → Draft → Critique → Revise → Finalize

Deep Reasoning breaks complex tasks into a structured pipeline. The model first plans its approach, produces a thorough draft, then critiques its own work before revising and polishing. Each step runs as a separate model call with its own prompt, so reasoning is explicit — not hidden in a single completion.
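The step sequence above can be sketched as a simple loop. This is an illustrative sketch only, not the interlocute.ai API: `call_model` is a placeholder for a real per-step model call with its own prompt.

```python
# Minimal sketch of the five-phase pipeline; step names come from the page,
# but call_model is a stand-in, not the real platform API.
STEPS = ["plan", "draft", "critique", "revise", "finalize"]

def call_model(step: str, artifacts: dict) -> str:
    # A real implementation would issue one model call with a
    # step-specific prompt built from the prior steps' outputs.
    return f"{step} output (saw {len(artifacts)} prior artifacts)"

def run_pipeline(task: str) -> dict:
    artifacts = {}
    for step in STEPS:
        # Each phase is a separate, focused call, so reasoning is explicit.
        artifacts[step] = call_model(step, artifacts)
    return artifacts

result = run_pipeline("summarize the design doc")
```

Because every phase is its own call, each intermediate output exists as a distinct value rather than being buried inside one long completion.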

Inspectable trace artifacts

Every intermediate step produces a trace artifact — the plan document, the critique, the revision delta, and the final output. These are full, persisted documents you can inspect, audit, and cite. No opaque chain-of-thought logs; real structured evidence of how the answer was reached.
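One way to picture "full, persisted documents": each step's output written as its own structured record. The file layout and field names below are hypothetical, chosen only to illustrate the idea.

```python
import json
import tempfile
import time
from pathlib import Path

def persist_artifact(job_dir: str, step: str, content: str) -> Path:
    # Each step's output becomes a standalone JSON document that can be
    # inspected, audited, and cited after the job ends.
    record = {"step": step, "content": content, "created_at": time.time()}
    path = Path(job_dir) / f"{step}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

job_dir = tempfile.mkdtemp()
for step, content in [("plan", "1. outline the answer"), ("critique", "draft misses edge cases")]:
    persist_artifact(job_dir, step, content)

saved = json.loads((Path(job_dir) / "critique.json").read_text())
```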

Budget-capped execution

Each reasoning job runs within a budget envelope: maximum iterations, total tokens, cost ceiling, and wall-clock time limit. If the budget is exceeded, the job stops cleanly with a clear error. You control how much thinking your node is allowed to do per request.
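A budget envelope with those four limits might look like the following sketch. The field names and the `BudgetExceeded` error are assumptions for illustration; the platform's actual parameter names may differ.

```python
from dataclasses import dataclass

class BudgetExceeded(Exception):
    """Raised when a job would cross its budget envelope."""

@dataclass
class Budget:
    max_iterations: int
    max_tokens: int
    cost_ceiling_usd: float
    max_wall_clock_s: float

def check_budget(b: Budget, iterations: int, tokens: int, cost: float, elapsed: float) -> None:
    # Checked at each step boundary so the job stops cleanly with a clear error.
    if iterations > b.max_iterations:
        raise BudgetExceeded("iteration limit reached")
    if tokens > b.max_tokens:
        raise BudgetExceeded("token limit reached")
    if cost > b.cost_ceiling_usd:
        raise BudgetExceeded("cost ceiling reached")
    if elapsed > b.max_wall_clock_s:
        raise BudgetExceeded("wall-clock limit reached")

budget = Budget(max_iterations=5, max_tokens=50_000, cost_ceiling_usd=0.50, max_wall_clock_s=120.0)
check_budget(budget, iterations=3, tokens=20_000, cost=0.12, elapsed=40.0)  # within budget
```

Checking at step boundaries rather than mid-call is what allows a clean stop with all completed work intact.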

Resume-safe step tracking

If a reasoning job is interrupted — by a timeout, a crash, or a cancellation — it resumes from the last completed step. Step-level progress is tracked via the invocation receipt, so no work is repeated and every partial result is preserved.
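The resume logic reduces to a simple idea: consult the receipt, skip what finished. A minimal sketch, assuming the receipt exposes a set of completed step names:

```python
def remaining_steps(all_steps: list[str], receipt_completed: set[str]) -> list[str]:
    # The invocation receipt records which steps finished; on resume,
    # the executor skips them so no work is repeated.
    return [s for s in all_steps if s not in receipt_completed]

steps = ["plan", "draft", "critique", "revise", "finalize"]
# Suppose the job crashed after the draft step was recorded:
todo = remaining_steps(steps, {"plan", "draft"})
```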

Cooperative cancellation

Stop a running reasoning job at any time via the API or the dashboard. The executor checks for cancellation between steps and stops cleanly, preserving all completed work. No orphaned model calls, no wasted tokens.
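"Checks for cancellation between steps" is the key design choice, and it can be sketched with a standard-library `threading.Event` standing in for the platform's cancellation signal:

```python
import threading

def run_steps(steps, run_step, cancel: threading.Event):
    # Cancellation is checked between steps, never mid-call, so every
    # completed step's output is preserved and no call is orphaned.
    completed = []
    for step in steps:
        if cancel.is_set():
            return completed, "cancelled"
        completed.append(run_step(step))
    return completed, "finished"

cancel = threading.Event()

def run_step(step):
    if step == "draft":      # simulate a dashboard cancel arriving mid-job
        cancel.set()
    return step

done, status = run_steps(["plan", "draft", "critique"], run_step, cancel)
```

The in-flight step ("draft" here) runs to completion; only the steps after the cancellation signal are skipped.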

Registry-backed definitions

Reasoning pipelines are defined as versioned, validated entries in a recipe registry — not ad-hoc prompt chains. Each definition declares its steps, parameters, budget defaults, governance rules, and output contract. The platform validates the definition before execution.
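A registry entry that "declares its steps, parameters, budget defaults, governance rules, and output contract" suggests validation along these lines. The key names below are illustrative guesses, not the platform's actual schema:

```python
REQUIRED_KEYS = {"name", "version", "steps", "budget_defaults", "output_contract"}

def validate_definition(definition: dict) -> None:
    # Validation happens before execution; a malformed recipe never runs.
    missing = REQUIRED_KEYS - definition.keys()
    if missing:
        raise ValueError(f"definition missing keys: {sorted(missing)}")
    if not definition["steps"]:
        raise ValueError("definition must declare at least one step")

recipe = {
    "name": "deep-reasoning",
    "version": "1.2.0",
    "steps": ["plan", "draft", "critique", "revise", "finalize"],
    "budget_defaults": {"max_tokens": 50_000, "cost_ceiling_usd": 0.50},
    "output_contract": {"format": "markdown"},
}
validate_definition(recipe)  # passes silently for a well-formed recipe
```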

Frequently Asked Questions

Deep Reasoning

What is Deep Reasoning?
Deep Reasoning is a multi-step model-calling pattern that breaks complex tasks into explicit phases: planning, drafting, critiquing, revising, and finalizing. Instead of relying on a single model completion to produce a good answer, the platform orchestrates multiple focused calls and lets the model review and improve its own work.
How is this different from a long system prompt?
A long system prompt asks the model to plan, write, and review all in one completion. Deep Reasoning runs each phase as a separate model call with dedicated instructions. This produces better results because each call has a focused objective, and intermediate outputs are inspectable and auditable.
Can I see what the model was thinking at each step?
Yes. Every step produces a trace artifact — a persisted, inspectable document. You can view the plan, read the critique, compare the draft to the revision, and audit the entire reasoning chain. These are real documents, not ephemeral logs.
What happens if a reasoning job times out?
The job stops at the current step boundary and records a budget-exceeded error. All completed steps and their trace artifacts are preserved. You can inspect the partial results and optionally re-submit with a larger budget.
Can I cancel a reasoning job mid-execution?
Yes. Request cancellation via the API or dashboard. The executor checks for cancellation between steps and stops cooperatively. All completed work is preserved and the invocation receipt shows which steps finished.
How is Deep Reasoning billed?
Each model call within the reasoning pipeline is metered individually — LLM tokens plus the standard platform premium. The total cost of a reasoning job is the sum of its step costs. Budget controls ensure you never exceed your configured ceiling.
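The billing arithmetic can be made concrete. Token counts, the per-token rate, and the premium rate below are entirely hypothetical, chosen only to show that the job total is the sum of its step costs:

```python
def job_cost_usd(step_tokens: list[int], usd_per_1k_tokens: float, premium_rate: float) -> float:
    # Total = sum of per-step LLM token costs, with the platform premium
    # applied on top. All rates here are made up for illustration.
    llm_cost = sum(t / 1000 * usd_per_1k_tokens for t in step_tokens)
    return round(llm_cost * (1 + premium_rate), 6)

# Five steps of one reasoning job, hypothetical token counts:
total = job_cost_usd([800, 2500, 1200, 1800, 700], usd_per_1k_tokens=0.01, premium_rate=0.10)
```

With 7,000 total tokens at $0.01 per 1K and a 10% premium, the job would cost $0.077; the budget ceiling would have stopped it earlier if that exceeded the configured limit.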

Ready to build with Deep Reasoning?

Deploy your node in seconds and start using Deep Reasoning today.