interlocute.ai beta
v1 beta

Addressable AI

Deploy nodes instantly

On-demand cognitive intelligence with any LLM.
Robust, configurable, and standards-based AI.

Streaming Responses · Async & Scheduling · Priced by Usage
bash — chat with your node
curl -X POST https://my-node.interlocute.ai/chat \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Hello! How can you help me today?",
    "threadId": "thread-uuid"
  }'
# Streaming response with SSE...

What is an Interlocute node?

Interlocute nodes are persistent, addressable units of intelligence. They handle inputs, manage memory, and execute logic instantly or asynchronously.

For Power Users

Same Models. Better UX.

Subscription chat services call the same LLM APIs you could call yourself, but flat-rate pricing means they throttle you whenever usage climbs. Interlocute is for those who want the full power of the models without worrying about infrastructure: the same models you know, with bigger context, full observability, and a rich workspace you'd otherwise need a dozen libraries to build.

Full Context Windows

No silent trimming. You choose how much context your node receives.

Observable Processing

Token counts, latency, capability traces — see everything.

Configurable Everything

Model, persona, constitution, capabilities — all live, no redeploy.

Real Cost Accounting

Per-request metering. Per-thread cost attribution. No opaque tiers.

Rich Workspace

Threads, tagging, prompt saving, documents, artifacts — all built in.

No Assembly Required

One app replaces vector DBs, prompt tools, and billing libs.

How it works

Deploy your intelligence in three simple steps. No infrastructure management required.

01

Create

Initialize your node environment instantly using our CLI or web dashboard.

02

Configure

Choose a model, pick features, and configure settings.

03

Call

Chat through the built-in surfaces, call the API, or set up recurring tasks.

Key Capabilities

Engineered for Production

Runtime API

Low-latency, addressable node interactions with stateful memory.

Streaming

Native SSE support for real-time token delivery to front-end clients.
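As a minimal sketch of what consuming that stream can look like: the SSE framing ("data: ..." lines with a terminating sentinel) is standard, but the exact payload shape below is an assumption, not confirmed by this page.

```shell
# Keep only "data: " lines, stop at the [DONE] sentinel, drop the sentinel.
parse_sse() {
  sed -n 's/^data: //p' | sed '/^\[DONE\]$/q' | grep -v '^\[DONE\]$'
}

# Simulated stream (in practice this would be piped from `curl -N .../chat`):
printf 'data: {"token":"Hel"}\ndata: {"token":"lo"}\ndata: [DONE]\n' | parse_sse
# → prints {"token":"Hel"} then {"token":"lo"}
```

With a real node you would pass `-N` to curl to disable buffering so tokens render as they arrive.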

Scheduling

Built-in cron engine for proactive intelligence tasks.
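A schedule definition might look like the sketch below. Every field name here is an illustrative assumption — this page only states that nodes have a built-in cron engine, not the API shape:

```json
{
  "schedule": "0 7 * * 1-5",
  "task": "Summarise overnight activity and post a digest",
  "timezone": "UTC"
}
```

The cron expression above fires at 07:00 on weekdays.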

Controls

Granular IAM policies to restrict node access by token or origin.

Out-of-the-box agents

Preconfigured AI agents you can deploy in minutes. Each agent is a proven bundle of capabilities — governed, metered, and production-ready.

See all agents

Interactive Assistant

Conversational AI

A conversational AI assistant you can deploy in minutes. Threaded conversations, persistent memory, and real-time streaming — ready to chat.

  • Set up in minutes — no infrastructure to provision
  • Persistent conversation threads with long-term memory
  • Real-time streaming responses (SSE)
Learn more

Automation Worker

Scheduled automation

An AI agent that runs on schedule — daily digests, data processing, and monitoring without human supervision.

  • Built-in cron-style scheduling — no external scheduler needed
  • Trigger-driven execution from webhooks and events
  • Background summarisation and data processing
Learn more

API / Contract Node

Structured API endpoint

A strict AI endpoint for service-to-service calls. JSON-in, JSON-out — deterministic, governed, and production-ready.

  • Strict JSON-in / JSON-out contract enforcement
  • Minimal conversational overhead — no greetings or commentary
  • Designed for service-to-service and webhook integrations
Learn more
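A contract interaction could look like the request/response pair below. The field names are hypothetical; only the strict JSON-in, JSON-out principle comes from this page:

```json
{
  "request":  { "text": "Refund request for order #8841",
                "labels": ["billing", "shipping", "other"] },
  "response": { "label": "billing", "confidence": 0.97 }
}
```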

Media Processor

Media processing

An AI agent built for media — transcribe audio, analyse images and video, parse documents. Upload, process, and get structured results.

  • Audio transcription (speech-to-text) out of the box
  • Image upload with OCR and vision analysis
  • Video ingestion and frame extraction
Learn more

Configure via Conversation

Stop fighting with complex JSON schemas. Evolve your node's behavior by simply describing what you need.

01 Instant configuration
02 Default-safe tools
03 Conversational setup
Prompt Editor
USER
"Make this node act like a research assistant that specializes in technical whitepapers."
INTERLOCUTE_AI
Updating system prompt...
Applying 'technical_writer' persona...
Configuring RAG tools for PDF parsing...
node_status: ready v1.2.4

Usage-based pricing. Start small.

Pay only for the tokens your nodes consume. No monthly platform fees.

5% premium on underlying LLM costs
$3 / 1M computation tokens
Unlimited Nodes
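As a worked example using the rates above — the usage figures are made up for illustration:

```shell
# Assumed monthly usage — illustrative numbers only.
llm_base_cost=2.00         # what the underlying LLM provider would charge, USD
computation_tokens=500000  # computation tokens consumed

# 5% premium on the LLM cost, plus $3 per 1M computation tokens.
awk -v llm="$llm_base_cost" -v ct="$computation_tokens" \
  'BEGIN { printf "%.2f\n", llm * 1.05 + (ct / 1000000) * 3 }'
# → 3.60
```

That is $2.10 of passed-through LLM cost plus $1.50 of computation, with no platform fee on top.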
The Vision

"By enabling Intelligence to be Addressable, we believe a Commons will emerge. Like the World Wide Web before it, standards allow people and agents alike to flourish."

Built for builders

Our documentation is written by developers, for developers. Get up and running in less time than it takes to brew a coffee.

Authentication

Include your API Key in the Authorization header:

Authorization: Bearer YOUR_API_KEY