Frequently asked questions
Comparisons with the rest of the agent ecosystem, the standards we follow, how self-hosting works, and what it costs.
If your question is not here, ask the in-app "Talk to Platos" agent; it reads this same source at runtime via /api/faq.
Comparisons (10)
How is Platos different from LangChain?
LangChain is a model-abstraction library; Platos is an agent runtime. They solve different problems and can coexist. LangChain helps you wire a single model call with retrievers and parsers; Platos owns the agent loop, memory, tool gateway, approvals, evals, and observability as one self-hosted service. You can wrap a LangChain LLM call in a Platos tool. But LangChain does not have a scope-aware MCP gateway, per-agent budget caps, or a federated approval workflow. Platos does.
How is Platos different from LangGraph?
LangGraph is a graph executor for LangChain, useful for branching, cyclic, or human-in-the-loop workflows. It still leaves you to glue together memory, observability (LangSmith, paid), evals, and multi-tenancy yourself. Platos ships those wired together with the loop. If you have a stable LangGraph you like, you can keep running it and call Platos tools from inside it.
How is Platos different from Anthropic Managed Agents?
Platos is the open-source replacement. Anthropic's managed product is hosted-only, single-provider, single-region, and you cannot self-host or own the data. Platos is Apache 2.0, BYOK across providers, self-hostable in five minutes, and your conversations + memory stay on your infra. Where the managed product locks you in, Platos lets you out.
How is Platos different from Vercel AI SDK?
Vercel AI SDK is a client-side streaming and tool-calling library, great for putting a chat UI in front of an LLM. Platos is the server-side agent runtime: it owns the loop, persists turns, federates tools across entities, and runs durable background ops. Many teams use both: AI SDK in the React client, Platos as the runtime behind it.
How is Platos different from LiteLLM?
LiteLLM is a model-routing proxy. Platos includes routing (multi-key per provider, per-agent and per-entity routing) but adds the agent loop, memory, gateway, and observability on top. If you only need provider abstraction, LiteLLM is lighter. If you are building agents, Platos is the bigger piece.
How is Platos different from Portkey?
Portkey is an LLM gateway focused on routing, caching, and observability across providers. Platos overlaps on observability and routing but is primarily an agent runtime. The loop, tool federation, memory, and approvals are first-class. Portkey can sit in front of Platos as a provider proxy if you want its caching and circuit breakers.
How is Platos different from OpenAI Agents SDK?
OpenAI's Agents SDK is single-provider and built around their hosted tools. Platos is provider-agnostic, self-hostable, and has its own MCP gateway plus approval workflow. If your stack is OpenAI-only and you do not need self-hosting, the OpenAI SDK is simpler. If you need provider choice, BYOK, or to keep data on your infra, choose Platos.
How is Platos different from Mastra?
Mastra is a TypeScript agent framework with a similar shape: workflows, memory, tools. Platos differs in two ways. First, the deployment model: Platos is one docker compose file with the runtime, dashboard, and observability included. Second, the MCP gateway: Platos federates tools across multiple entities through a single OAuth-scoped surface. Mastra is closer to a framework you embed; Platos is closer to a service you run.
How is Platos different from CrewAI?
CrewAI is multi-agent orchestration: roles, tasks, sequential or hierarchical execution. Platos has agent clusters that share user memory and thread history, but the primary unit is the agent and its loop. CrewAI is opinionated about role-play orchestration; Platos is opinionated about runtime, gateway, and observability. Different sweet spots.
How is Platos different from AutoGen?
AutoGen is Microsoft's research-leaning multi-agent framework. It is excellent for experimentation but light on production concerns: deployment, observability, multi-tenancy, billing-grade cost tracking. Platos is built for production from the start. The trade-off is less flexibility on agent topology, more guarantees on what happens when something breaks at 3am.
Standards & integrations (4)
Which standards does Platos follow?
Model Context Protocol (MCP) for the federated tool gateway, OpenAI tool-calling shape for native tools, Anthropic's tool-use shape on the Anthropic adapter, OAuth 2.1 with Dynamic Client Registration for the MCP server, OpenTelemetry for traces, and JSON Schema for tool argument validation. We track upstream changes and stay current.
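As a rough illustration of the JSON Schema argument validation mentioned above, the sketch below checks a tool call's arguments against a schema. It implements only a tiny subset of JSON Schema (the required and type keywords), and the search_tickets schema is hypothetical, not a Platos schema:

```python
# Minimal sketch of JSON Schema-style validation of tool arguments.
# Supports only 'required' and per-property 'type' -- a tiny subset
# of JSON Schema, enough to show the idea. Schema is hypothetical.
TYPE_MAP = {"string": str, "integer": int, "number": (int, float), "boolean": bool}

def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the args pass."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in args and not isinstance(args[key], TYPE_MAP[spec["type"]]):
            errors.append(f"{key}: expected {spec['type']}")
    return errors

# Hypothetical schema for a 'search_tickets' tool.
schema = {
    "type": "object",
    "required": ["query"],
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer"},
    },
}

print(validate_args(schema, {"query": "refund", "limit": 5}))  # []
print(validate_args(schema, {"limit": "five"}))
```

A real runtime would use a full JSON Schema validator; the point here is only the shape of the check: schema in, argument dict in, errors out.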
Does Platos support MCP?
Yes. Both as a server (other MCP clients can connect to your Platos via /mcp with OAuth 2.1 scoping) and as a client (Platos federates entity backends as MCP tool sources). The gateway is universal: entity tools, trigger meta-tools, Platos skills, and the control plane all flow through one /mcp endpoint with per-tool ACL.
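MCP speaks JSON-RPC 2.0, so a client that has completed the OAuth handshake would enumerate the federated tools with a request shaped roughly like this. The host is a placeholder, the bearer token stands in for whatever the OAuth 2.1 flow issues, and tools/list is the standard MCP method name, not anything Platos-specific:

```python
import json
import urllib.request

# JSON-RPC 2.0 request an MCP client would POST to a Platos /mcp
# endpoint to list federated tools. Host and token are placeholders;
# the OAuth 2.1 handshake that produces the token is omitted.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
req = urllib.request.Request(
    "https://platos.example.com/mcp",       # placeholder host
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # from the OAuth 2.1 flow
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; not executed here.
```

The response lists only the tools the token's scopes permit, which is what the per-tool ACL described above means in practice.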
Which model providers are supported?
Out of the box: Anthropic (Claude), OpenAI (GPT family), Google Vertex AI (Gemini), and Voyage AI (embeddings). The provider abstraction is a 'skill pattern applied to LLMs': manifest plus required env vars. Adding a provider is a small PR.
Which databases does Platos use?
Postgres for relational data (agents, conversations, memory, scopes), ClickHouse for traces and high-volume telemetry, Redis for queues and ephemeral state, MinIO for artifacts and attachments, and trigger.dev as the durable task engine. All five run from one docker compose file.
Hosting & deployment (6)
Can I self-host Platos?
Yes. That is the recommended path for any real workload. Clone the repo, copy .env.example, and run docker compose -f docker-compose.platos.yml up -d. You get the full runtime, dashboard, and observability on your own hardware. The five-minute path is documented in the quickstart.
What is the hosted demo at play.platos.dev?
A trial environment for evaluating Platos without setting up infra. It is not a production tier. Conversations are retained for 30 days and then purged; the environment may be reset; there is no SLA. Use it to kick the tires, then self-host for real workloads. See /privacy and /terms.
Where does my data live when I self-host?
Entirely on your infra. Conversations, memory, traces, attachments, and secrets all stay in your Postgres, ClickHouse, Redis, and MinIO. Platos itself never phones home. Telemetry to us is opt-in via PLATOS_TELEMETRY_DISABLED=false (default is disabled for self-hosters).
Does Platos support BYOK?
Yes. Provider keys (Anthropic, OpenAI, Vertex, Voyage, anything else) live in the trigger.dev environment-variables table, the single secret store. The model picker is scope-filtered: only providers whose required env vars are linked in the current environment show up to your agents.
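The scope-filtering described above amounts to a predicate over provider manifests: a provider is shown only when every env var it requires is linked. The sketch below illustrates that behaviour; the manifest shape and the VERTEX_PROJECT_ID variable name are assumptions for illustration, not the actual Platos data model:

```python
import os

# Hypothetical provider manifests: each lists the env vars it needs.
# Shape and the VERTEX_PROJECT_ID name are illustrative assumptions.
PROVIDERS = [
    {"name": "anthropic", "required_env": ["ANTHROPIC_API_KEY"]},
    {"name": "openai", "required_env": ["OPENAI_API_KEY"]},
    {"name": "vertex", "required_env": ["GOOGLE_APPLICATION_CREDENTIALS", "VERTEX_PROJECT_ID"]},
]

def available_providers(env=os.environ) -> list[str]:
    """Providers whose required env vars are all set -- the only ones
    a scope-filtered model picker would surface to an agent."""
    return [
        p["name"]
        for p in PROVIDERS
        if all(env.get(var) for var in p["required_env"])
    ]

# With only an Anthropic key linked, only Anthropic shows up.
print(available_providers({"ANTHROPIC_API_KEY": "sk-..."}))  # ['anthropic']
```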
Does Platos run in Kubernetes?
Yes. A Helm chart is in flight (Theme K.11/K.12 on the roadmap). Today, docker compose is the smoothest path. Multi-replica agent runtimes and Postgres read replicas are coming as part of the K wave.
Can Platos run self-hosted models?
Not directly. Platos is provider-agnostic but assumes inference happens behind an HTTP API. If you run vLLM, Ollama, or LMDeploy with an OpenAI-compatible endpoint, you can wire it as a provider. Running the model process inside the Platos runtime is explicitly out of scope.
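Wiring a self-hosted model as a provider comes down to pointing an OpenAI-style chat-completions request at the local server. A minimal sketch, assuming a default Ollama install (port 11434, a llama3 model loaded); nothing here is Platos-specific, and vLLM serves the same request shape on its own port:

```python
import json
import urllib.request

# OpenAI-compatible chat completion request aimed at a local server.
# Port 11434 and the model name assume a default Ollama install.
payload = {
    "model": "llama3",  # whatever model the local server has loaded
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; a provider adapter would
# point its base URL at the same endpoint instead of a cloud API.
```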
Pricing & licensing (4)
What does Platos cost?
The software is free under Apache 2.0. You pay for the infra you run it on (your VPS or cluster) and for the LLM providers you use (BYOK). There is no Platos seat license, no managed billing, no telemetry-based pricing.
Is there a paid tier?
No paid tier of the OSS product. Platos is fully featured under Apache 2.0. Every capability you see in the docs is in the open-source codebase. Winsen Labs (the maintainers) offers consulting for teams that want help designing or operating internal agents. See https://winsenlabs.com/consulting.
What is the Platos license?
Apache License 2.0. You can use, modify, fork, and ship Platos in commercial products without royalty. The license file is at /license. The Apache 2.0 protections apply: patent grant, trademark carve-out, and contribution clause.
What is the production hosting path?
There are two. (1) Self-host on your VPS or Kubernetes cluster. Most teams pick this. (2) Have us run it for you under a separate engagement. Request a conversation at https://winsenlabs.com/consulting. There is no off-the-shelf hosted Platos product today; play.platos.dev is a trial, not a tier.
Still stuck? Use the contact form or open a thread on Discord.
