
Frequently asked questions

Everything you need to know about Sprintra. Can't find what you're looking for? Get in touch.

Sprintra is an AI-native project management tool purpose-built for the age of vibe coding. It gives your AI coding assistant (Claude Code, Cursor, Windsurf) persistent memory, architectural discipline, and full SDLC traceability through the Model Context Protocol (MCP).
Traditional project management tools weren't built for AI-assisted development. Sprintra is MCP-native — your AI agent directly reads and writes project data. No copy-pasting between tools. It also offers session replay, decision traceability, and AI-specific features like agent trust levels and completeness scoring.
Vibe coding is the practice of building software primarily through AI assistance — describing what you want in natural language and letting AI generate the code. The term was coined by Andrej Karpathy. Sprintra solves the "vibe hangover" — the context loss that happens when AI forgets everything between sessions.
Yes! The Solo Pilot plan is free forever for individual developers. It includes self-hosted deployment, up to 3 team members, 2 projects, 200 stories, and core MCP tools. The Team plan at $5/seat/month adds unlimited projects, advanced integrations, and priority support.
MCP (Model Context Protocol) is an open standard that lets AI assistants connect to external tools. Sprintra runs as an MCP server — you add it to your AI tool's configuration, and it gets 17 consolidated tools for managing projects, features, sprints, decisions, and more. Setup takes about 60 seconds.
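Connecting an MCP server usually means adding an entry to your AI client's JSON configuration. A hypothetical entry for Sprintra might look like the following (the exact file location and key names vary by client, and the "serve" subcommand shown here is an assumption for illustration, not a documented Sprintra CLI command):

```json
{
  "mcpServers": {
    "sprintra": {
      "command": "npx",
      "args": ["@sprintra/cli", "serve"]
    }
  }
}
```

In practice, running the connect command from the getting started guide writes this entry for you.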
Sprintra works with any MCP-compatible AI tool: Claude Code, Cursor, Windsurf, Claude Desktop, ChatGPT Pro (via remote MCP), and VS Code. GitHub Copilot support is coming soon. See our integrations page for details.
Both options! In self-hosted mode, all data stays in a local SQLite database on your machine. In SaaS mode (Sprintra Cloud), data is stored in PostgreSQL on Supabase with full encryption at rest and in transit. You can migrate between modes anytime.
Yes. The Team plan supports up to 25 members with role-based access control. Enterprise customers get unlimited members, SSO/SAML, custom roles, and org-level team management. All plans include real-time collaboration via the dashboard.
The core Sprintra engine (MCP server, CLI, database) is source-available. The Sprintra Cloud platform adds managed hosting, authentication, and team features. We believe in transparency — you can always inspect what the AI is doing with your data.
Three paths: (1) Claude Code plugin (recommended) — inside Claude Code run /plugin marketplace add Sprintra-io/sprintra-mcp then /plugin install sprintra@sprintra. Auto-injects project briefing on every session and registers all 17 MCP tools. (2) Sprintra Cloud + other AI tool — sign up at app.sprintra.io, then run npx @sprintra/cli connect to wire MCP into Cursor, Claude Desktop, or Windsurf. (3) Self-hosted — npx create-sprintra installs everything locally. See our getting started guide.
We take security seriously. Self-hosted mode keeps everything local. Cloud mode uses Supabase PostgreSQL with encryption, Better Auth for authentication, and fine-grained personal access tokens. Enterprise plan adds SSO/SAML, 2FA enforcement, and SOC 2 audit trails. See our privacy policy.
Import adapters for Linear, Jira, and GitHub Issues are on our roadmap. Currently, you can use the REST API or MCP tools to bulk-create entities. The CLI supports JSON export/import for backup and migration.
Sprintra provides 17 consolidated tools using a method dispatch pattern (inspired by GitHub's MCP implementation). Each tool handles multiple methods — for example, manage_stories supports list, create, update, and batch_update. This gives you 60+ individual operations through just 17 tool registrations.
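The method dispatch pattern can be sketched as follows. The manage_stories name and its method set come from the answer above; the handler shape and in-memory store are simplified assumptions, not Sprintra's actual implementation:

```typescript
// One registered MCP tool, many operations selected by a "method" argument.
type StoryMethod = "list" | "create" | "update" | "batch_update";

interface ManageStoriesArgs {
  method: StoryMethod;
  payload?: Record<string, unknown>;
}

// Hypothetical in-memory store standing in for the real backend.
const stories: Record<string, unknown>[] = [];

function manageStories(args: ManageStoriesArgs): unknown {
  switch (args.method) {
    case "list":
      return stories;
    case "create": {
      const story = { id: stories.length + 1, ...args.payload };
      stories.push(story);
      return story;
    }
    case "update":
      // A real implementation would locate and patch the target story.
      return { ok: true };
    case "batch_update":
      return { ok: true, count: 0 };
  }
}

manageStories({ method: "create", payload: { title: "Add login page" } });
console.log(manageStories({ method: "list" }));
```

The design trade-off: clients see far fewer tool registrations (which keeps the tool list small for the model), while the method field fans out to the full operation set.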
Yes, that's a key feature! Session replay captures every MCP action as a replayable trace connected to project entities. You can see exactly what your AI did, when, and why. This is invaluable for debugging, auditing, and understanding AI-assisted development patterns.
Three layers, zero LLM cost: (1) every user message is captured to your private timeline; (2) when a session ends, the agent writes its own structured digest (key decisions, open questions, pending asks) — no background compression run; (3) raw transcripts stay on your machine and are keyword-searchable across every session you've ever had on this device. Cloud storage is roughly 6 MB/year per user (digests only; raw stays local). In multi-user projects, memory is scoped per user by default. See /docs/memory-layer.
Three tiers: (1) Cloud — user prompts and agent-written session digests, scoped per-user, 30-day rolling rotation by default (pinned digests preserved indefinitely). (2) Local — raw conversation transcripts stay on your machine; we never upload them. The sprintra transcript CLI gives you full-text search over them. (3) Cross-device — a tiny digest syncs through the cloud so your "what was I doing yesterday" works from any laptop; raw stays put.
By default, unpinned cloud entries (user prompts, session digests) are pruned after 30 days. The structured PM artifacts (stories, decisions, features, notes) stay forever — they're the canonical record. To preserve a strategic discussion past the 30-day window, pin its digest from the dashboard. Local raw transcripts auto-prune after 90 days (configurable).
No. Per-user privacy is the default. Each user retrieves only their own prompts and session digests. Cross-user reads require an explicit admin/owner role (audit use case). Shared artifacts — features, decisions, notes, comments, work sessions — remain visible at the project level just like today. Three devs in the same repo each get their own private reasoning trail; the collaborative outputs stay shared.
Three opt-out flags in your Sprintra config: capture_user_prompts: false, capture_session_digests: false, capture_transcript_index: false. To delete past data, purge cloud entries via the dashboard or run sprintra transcript prune --before DATE for local.
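As a sketch, the three flags above might sit in a config file like this (the flag names come from the answer; the YAML layout and the section name are assumptions, so check the Sprintra docs for the actual file format and location):

```yaml
# Hypothetical Sprintra config fragment for opting out of memory capture.
memory:
  capture_user_prompts: false      # stop sending user prompts to the cloud timeline
  capture_session_digests: false   # stop agent-written end-of-session digests
  capture_transcript_index: false  # stop local full-text indexing of raw transcripts
```

Remember that these flags only stop future capture; existing data still needs the dashboard purge or the local prune command described above.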
We're working toward three major releases: v1.0 MVP (May 2026) with webhooks and auto release notes, v1.5 Intelligence (Jul 2026) with predictive planning and DORA metrics, and v2.0 Platform (Oct 2026) with marketplace and custom workflows. Check our changelog for the latest updates.