SSE Events
All SSE event types streamed during research — schemas and examples
The /research and /research/upload endpoints return Server-Sent Events (SSE). Each event has a type and a data field.
Connection
curl -N -X POST https://api.krawl.sh/research \
-H "Content-Type: application/json" \
-H "X-API-Key: your-key" \
-d '{"query": "AI agents", "mode": "deep"}'
Events arrive as:
event: session
data: {"session_id": "uuid"}
event: plan
data: {"status": {...}, "plan": [...]}
event: source
data: {"url": "...", "title": "...", "tool": "webSearch"}
event: report_chunk
data: {"text": "chunk of markdown..."}
event: result
data: {"markdown": "full report...", "sources": [...]}
event: done
data: {}
Keepalive pings are sent every 10 seconds.
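The raw stream above can be parsed with a small helper. A minimal sketch, assuming standard SSE framing: frames are separated by blank lines, and keepalive pings arrive as comment lines starting with ":" (the function name is illustrative; the payloads are the ones documented below):

```javascript
// Parse a raw SSE text buffer into { event, data } pairs.
// Comment lines (keepalives) are skipped; data is JSON-decoded.
function parseSseFrames(raw) {
  const frames = [];
  for (const block of raw.split('\n\n')) {
    let event = 'message';
    let data = '';
    for (const line of block.split('\n')) {
      if (line.startsWith(':')) continue;          // keepalive/comment line
      if (line.startsWith('event: ')) event = line.slice(7);
      if (line.startsWith('data: ')) data += line.slice(6);
    }
    if (data) frames.push({ event, data: JSON.parse(data) });
  }
  return frames;
}

const raw = 'event: session\ndata: {"session_id": "uuid"}\n\n: ping\n\nevent: done\ndata: {}';
const frames = parseSseFrames(raw);
// frames[0] → { event: 'session', data: { session_id: 'uuid' } }
// frames[1] → { event: 'done', data: {} }
```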
Event Types
session
Emitted first. Contains the session ID for steering and follow-up research.
{"session_id": "550e8400-e29b-41d4-a716-446655440000"}
plan
The research plan generated by the planning phase.
{
"status": {"title": "Research plan ready, starting recursive deep research"},
"plan": [
{
"title": "AI Agent Architectures",
"todos": [
"Research current AI agent frameworks",
"Find recent papers on agent architectures"
]
}
]
}
perspectives
STORM perspective discovery results. 3–5 perspectives for multi-angle research.
{
"perspectives": [
{
"name": "AI Researcher",
"description": "Focuses on technical architecture and capability benchmarks",
"key_questions": ["What architectures dominate?", "How do agents handle tool use?"]
}
]
}
thinking
The agent's internal reasoning between tool calls (flat/quick mode).
{
"thought": "The search results cover frameworks but not benchmarks. I should search for agent evaluation methods.",
"nextStep": "Searching for AI agent benchmarks"
}
query
A search query being executed.
{"query": "AI agent frameworks 2025", "tool": "webSearch"}
source
A new source discovered during search.
{
"url": "https://arxiv.org/abs/2401.00001",
"title": "A Survey of AI Agents",
"tool": "webSearch"
}
content
Content extracted from a source (flat mode).
{"url": "https://...", "content": "Extracted text...", "chars": 3000}
x_search
X/Twitter search results.
{
"query": "AI agents",
"text": "Summary of X discourse...",
"tweets_count": 12,
"citations": ["https://x.com/..."]
}
browse_page
Page content fetched via browsePage tool.
{"url": "https://...", "title": "...", "content_length": 5000}
report_chunk
Streaming markdown chunk during synthesis. Arrives many times.
{"text": "## Introduction\n\nAI agents have evolved..."}
result
The final research result. Contains the complete report and all sources.
{
"markdown": "# Full Research Report\n\n...",
"sources": [
{"url": "...", "title": "...", "source_type": "exa", "content": "..."}
],
"total_steps": 24,
"total_learnings": 18,
"model": "bedrock/us.anthropic.claude-opus-4-7",
"session_id": "...",
"learnings": [...],
"structured": null
}
error
An error occurred during research.
{"message": "Research failed unexpectedly"}
done
Research stream is complete. No more events will be sent.
{}
node_started
A new node in the recursive research tree has started.
{
"node_id": "...",
"query": "AI agent frameworks",
"depth": 3,
"breadth": 4,
"parent_id": null
}
node_completed
A research tree node has completed.
{
"node_id": "...",
"learnings_count": 5,
"sources_found": 8,
"gaps_found": 2
}
learning
A new learning extracted from search results.
{
"content": "GPT-4 based agents can solve 78% of SWE-bench tasks",
"source_url": "https://...",
"confidence": 0.95
}
gap_detected
A knowledge gap identified during research.
{
"question": "How do agents handle long-horizon planning?",
"priority": 0.9,
"suggested_tools": ["webSearch", "academicSearch"]
}
outline_updated
The dynamic report outline has been updated with new evidence.
{
"sections": [
{"title": "Agent Architectures", "evidence_count": 5},
{"title": "Tool Use Patterns", "evidence_count": 3}
]
}
citation_verified
A citation has been verified against the fetched source content.
{
"claim": "GPT-4 achieves 78% on SWE-bench",
"source_url": "https://...",
"verified": true,
"source_excerpt": "Our evaluation shows GPT-4..."
}
audit
The full audit trail for the research session.
{
"entries": [
{
"timestamp": "2025-01-15T10:30:00Z",
"action": "search",
"tool": "webSearch",
"sources_found": 8
}
]
}
steer
A steering instruction was received and applied.
{"instruction": "Focus more on security aspects"}
chart
Data visualization extracted from research findings.
{
"chart_type": "bar",
"title": "Market Cap Comparison",
"data": {...}
}
structured_output
Structured data extracted per the provided output_schema.
{
"fields": {
"market_cap": {
"value": "$400B",
"confidence": 0.95,
"citation": "https://...",
"reasoning": "Multiple sources confirm..."
}
}
}
review_pass
Multi-pass synthesis review results.
{
"pass_number": 1,
"score": 0.72,
"issues_count": 3
}
memory_recall
Prior knowledge recalled from the cross-session knowledge base.
{
"memories_count": 5,
"memories": [
{"content": "Ethereum merged to PoS in Sep 2022", "tags": ["ethereum"]}
]
}
document
Document upload processing status.
{
"filename": "report.pdf",
"chars_extracted": 15000,
"status": "processed"
}
changes_since_last
Auto-comparison with the previous result in the same session (emitted for follow-up research).
{
"previous_result_id": "...",
"has_changes": true,
"new_findings": ["New finding 1"],
"removed_claims": [],
"changes": [{"metric": "price", "old_value": "$50k", "new_value": "$55k"}],
"summary": "Since your last research: price increased..."
}
Event Order (Deep Mode)
Typical event sequence for a deep research session:
1. session — session ID
2. memory_recall — prior knowledge (if any)
3. plan — research plan
4. perspectives — STORM perspectives (if query is complex enough)
5. node_started → query → source (repeated) → learning → node_completed (per tree node)
6. gap_detected → recurse (depth-1)
7. outline_updated
8. report_chunk (many) — streaming synthesis
9. review_pass — quality review
10. chart — data visualizations (if applicable)
11. structured_output — structured data (if output_schema provided)
12. citation_verified (multiple)
13. audit — full audit trail
14. result — final result
15. changes_since_last — comparison (if follow-up session)
16. done — stream complete
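In client code, this sequence is usually consumed by switching on the event type. A minimal sketch; the state shape and handler name are illustrative, while the event names and payload fields match the schemas above:

```javascript
// Route a parsed SSE event to the right piece of client state.
function handleEvent(eventType, data, state) {
  switch (eventType) {
    case 'session':
      state.sessionId = data.session_id;   // keep for steering and follow-ups
      break;
    case 'report_chunk':
      state.report += data.text;           // render the report as it streams
      break;
    case 'result':
      state.final = data;                  // full report, sources, learnings
      break;
    case 'error':
      throw new Error(data.message);
    case 'done':
      state.done = true;
      break;
    default:
      break;                               // ignore other progress events here
  }
  return state;
}

const state = { sessionId: null, report: '', final: null, done: false };
handleEvent('session', { session_id: 'abc' }, state);
handleEvent('report_chunk', { text: '## Intro' }, state);
handleEvent('done', {}, state);
```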
JavaScript Client Example
// EventSource only supports GET, so a POST endpoint must be read
// with fetch and a ReadableStream:
const response = await fetch('https://api.krawl.sh/research', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'your-key',
  },
  body: JSON.stringify({ query: 'AI agents', mode: 'deep' }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let eventType = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() || ''; // keep any partial line for the next chunk
  for (const line of lines) {
    if (line.startsWith('event: ')) {
      eventType = line.slice(7).trim();
    } else if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      console.log(eventType, data);
    }
  }
}
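Building on the read loop above, streamed chunks can be accumulated and reconciled with the final report. A sketch; the class name is illustrative, and the field names (text, markdown) come from the report_chunk and result payloads:

```javascript
// Collect streamed report_chunk text and prefer the authoritative
// markdown from the final result event when it arrives.
class ReportCollector {
  constructor() {
    this.chunks = [];
    this.finalMarkdown = null;
  }
  onEvent(eventType, data) {
    if (eventType === 'report_chunk') this.chunks.push(data.text);
    if (eventType === 'result') this.finalMarkdown = data.markdown;
  }
  report() {
    // Fall back to joined chunks if the stream ended before result.
    return this.finalMarkdown ?? this.chunks.join('');
  }
}
```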