Krawl
Deep research API — recursive tree search, multi-model intelligence, and crypto-native tools
Krawl is an autonomous deep research API that performs recursive breadth×depth tree search, synthesizes findings into a cited report with source verification, and streams progress via SSE events. It includes specialized crypto research tools, cross-session memory, document uploads, and scheduled research lookouts.
Built with FastAPI and litellm for multi-provider LLM routing. No agent framework — raw tool-calling with a 4-tier model strategy across AWS Bedrock and Anthropic.
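A minimal sketch of how the tier-to-model mapping might look with litellm follows. The tier names and model strings are illustrative placeholders, not Krawl's actual configuration; the Models page documents the real strategy.

```python
# Sketch: tier-based routing via litellm. Tier names and model strings
# are placeholders, not Krawl's actual config (see the Models page).
import litellm

TIERS = {
    "volume":    "anthropic/claude-haiku-4-5",   # query generation, perspectives
    "research":  "anthropic/claude-sonnet-4-6",  # learning extraction, gap detection
    "planning":  "anthropic/claude-sonnet-4-6",  # structured research plans
    "synthesis": "anthropic/claude-opus-4-7",    # final report, one call per session
}

def complete(tier: str, messages: list[dict], **kwargs):
    """Route a completion to the model assigned to a tier.

    litellm model prefixes ("anthropic/", "bedrock/") select the provider,
    which is how one call site can span Anthropic and AWS Bedrock.
    """
    return litellm.completion(model=TIERS[tier], messages=messages, **kwargs)

resp = complete("volume", [{"role": "user", "content": "Give 4 search queries about AI agents."}])
print(resp.choices[0].message.content)
```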
Key Capabilities
- Recursive research tree — parallel queries at each depth level, learning extraction, gap detection, automatic recursion
- STORM perspective discovery — multi-perspective research inspired by Stanford's STORM methodology
- Multi-pass synthesis — gap check → revise cycles with quality scoring (Opus 4.7 for final synthesis)
- 4-tier model strategy — Haiku for high-volume tasks, Sonnet for research/planning, Opus for synthesis
- Cross-session memory — knowledge base that persists learnings across research sessions
- 9 research modes — deep, quick, crypto, token-analysis, protocol-research, whale-tracking, narrative, risk-assessment, yield-strategy
- Document upload — PDF, TXT, MD, CSV, JSON — up to 5 files, 10MB each
- 8 research templates — pre-built query patterns for common research tasks
- Research steering — inject instructions mid-stream to adjust research direction
- Session continuity — follow-up research that builds on prior context
- Source policy — domain allow/deny lists and freshness filters
- Structured output — JSON Schema-driven extraction alongside the markdown report (see the request sketch after this list)
- Export — PDF, HTML, Markdown download
- Lookouts — scheduled recurring research with change detection and webhooks
- Audit trail — full log of every search, extraction, and synthesis step
- 25+ SSE event types — real-time streaming of every research phase
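Several of these options combine in a single request. Below is a hedged sketch of such a payload; the source_policy and output_schema field names are assumptions here, and the API Reference has the authoritative request shapes.

```python
# Sketch: one request combining mode, source policy, and structured output.
# "source_policy" and "output_schema" are assumed field names; consult the
# API Reference for the authoritative request schema.
import requests

payload = {
    "query": "How do restaking protocols manage slashing risk?",
    "mode": "protocol-research",
    "breadth": 4,
    "depth_levels": 2,
    "source_policy": {                    # assumed field name
        "allow_domains": ["ethereum.org", "github.com"],
        "deny_domains": [],
        "max_age_days": 180,              # freshness filter (assumed)
    },
    "output_schema": {                    # assumed field name: a JSON Schema
        "type": "object",
        "properties": {
            "protocols": {"type": "array", "items": {"type": "string"}},
            "key_risks": {"type": "array", "items": {"type": "string"}},
        },
    },
}

resp = requests.post(
    "https://api.krawl.sh/research",
    headers={"X-API-Key": "your-key"},
    json=payload,
    stream=True,  # the response is an SSE stream
)
```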
Architecture at a Glance
Client POST /research { query, mode, breadth, depth_levels }
│
├─ Phase 1: Planning (Sonnet 4.6)
│ └─ Structured research plan with topics and tasks
│
├─ Phase 1.5: STORM Perspective Discovery (Haiku 4.5)
│ └─ 3-5 research perspectives with key questions
│
├─ Phase 2: Recursive Tree Search
│ ├─ Generate N queries (Haiku 4.5) × depth levels
│ ├─ Execute searches in parallel (Exa, X, GitHub, crypto, etc.)
│ ├─ Extract learnings (Sonnet 4.6)
│ ├─ Detect knowledge gaps (Sonnet 4.6)
│ └─ Recurse with depth-1, breadth/2 if gaps remain
│
├─ Phase 3: Verified Synthesis (Opus 4.7, 1 call)
│ ├─ Gap check → optional re-synthesis
│ └─ Multi-pass review → revise loop (up to 2 passes)
│
└─ SSE stream: plan → perspectives → queries → sources → learnings → report_chunk → result
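The Phase 2 recursion condenses to a short control-flow sketch. The helpers below are illustrative stubs rather than Krawl's internals; only the parallel fan-out and the depth-1, breadth/2 recursion mirror the diagram above.

```python
# Sketch: the shape of the recursive tree search. Helper bodies are stubs;
# in Krawl these steps are LLM and search-tool calls, as annotated.
import asyncio

async def generate_queries(topic: str, n: int) -> list[str]:
    return [f"{topic} (angle {i})" for i in range(n)]      # Haiku tier

async def run_search(query: str) -> list[str]:
    return [f"source for {query}"]                         # Exa, X, GitHub, crypto, ...

async def extract_learnings(sources: list[str]) -> list[str]:
    return [f"learning from {s}" for s in sources]         # Sonnet tier

async def detect_gaps(topic: str, learnings: list[str]) -> list[str]:
    return []                                              # Sonnet tier; empty list stops recursion

async def research_tree(topic: str, breadth: int, depth: int, learnings: list[str]) -> list[str]:
    if depth == 0 or breadth == 0:
        return learnings
    queries = await generate_queries(topic, breadth)
    results = await asyncio.gather(*(run_search(q) for q in queries))  # parallel searches
    for sources in results:
        learnings += await extract_learnings(sources)
    for gap in await detect_gaps(topic, learnings):
        # Each unresolved gap spawns a narrower, shallower subtree.
        await research_tree(gap, breadth // 2, depth - 1, learnings)
    return learnings

print(asyncio.run(research_tree("AI agents", breadth=4, depth=2, learnings=[])))
```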
Quick Start

```bash
curl -X POST https://api.krawl.sh/research \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-key" \
  -d '{"query": "What is the current state of AI agents?", "mode": "deep"}'
```

The response is an SSE stream. See SSE Events for all event types.
Pages
- Getting Started — Setup, env vars, first request
- Architecture — Deep dive into the research pipeline
- Models — 4-tier model strategy, provider routing, circuit breaker
- Research Modes — All 9 modes explained
- API Reference — All 22 endpoints
- SSE Events — All event types with schemas
- Crypto Tools — Crypto-specific tools and modes
- Templates — 8 pre-built research templates
- Memory — Cross-session knowledge base
- Export — PDF, HTML, Markdown export
- Lookouts — Scheduled recurring research
- Deployment — Fly.io production deployment
- Testing — Test suite guide