Generate an LLM-synthesized context document from the app's synced data.

Request:

```shell
curl --request POST \
  --url https://api.hyperspell.com/context-documents/generate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "sources": ["<string>"],
    "user_id": "<string>",
    "prompt": "<string>",
    "model": "claude-opus-4-6"
  }'
```

Response:

```json
{
  "document_id": "<string>",
  "status": "<string>",
  "created_at": "2023-11-07T05:31:56Z"
}
```
This endpoint is async — it creates a ContextDocument record with status PROCESSING, emits an Inngest event, and returns immediately. The actual synthesis (which can take 1-15 minutes depending on data volume) happens in the background via the generate_context_document Inngest function.
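Because the call returns before synthesis finishes, a client only needs to send the POST and record the returned `document_id`. A minimal Python sketch using only the fields documented on this page (the token and body values are placeholders):

```python
import json
import urllib.request

API_URL = "https://api.hyperspell.com/context-documents/generate"

def build_generate_request(token: str, **body) -> urllib.request.Request:
    """Build the POST; the server responds immediately, before synthesis finishes."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# The immediate response only confirms the ContextDocument record was created;
# synthesis continues in the background for 1-15 minutes.
sample_response = {"document_id": "doc_123", "status": "PROCESSING",
                   "created_at": "2023-11-07T05:31:56Z"}
assert sample_response["status"] == "PROCESSING"
```

Sending the request (e.g. via `urllib.request.urlopen`) is left out so the sketch stays credential-free.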
The pipeline automatically chooses single-pass or multi-pass mode based on the total number of resources. See tasks/context_documents.py for the full pipeline architecture.
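The mode selection can be pictured as a simple threshold check. This is a sketch only: `RESOURCE_THRESHOLD` is a made-up name and value, and the real cutoff lives in tasks/context_documents.py:

```python
RESOURCE_THRESHOLD = 200  # hypothetical value; the actual cutoff is in tasks/context_documents.py

def choose_pipeline_mode(total_resources: int) -> str:
    """Single-pass for small data volumes; multi-pass (Stage 1 extraction
    per source + Stage 2 final synthesis) once the resource count is large."""
    return "single_pass" if total_resources <= RESOURCE_THRESHOLD else "multi_pass"
```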
API Key or JWT User Token. If using an API Key, set the X-As-User header to act as a specific user. A JWT User Token is always scoped to a specific user.
Request body for POST /context-documents/generate.
All fields are optional. With no fields set, the system will include all connected integrations, cover all users in the app (Tier 3), and use the default structured summary prompt.
The model field controls the Stage 2 (final synthesis) model.
Stage 1 (per-source extraction) always uses Sonnet for cost efficiency.
sources: Integration sources to include (e.g., ['gmail', 'slack']). Defaults to all connected integrations.

user_id: Scope to a specific user's data (Tier 1). Defaults to all users in the app (Tier 3).

prompt: Custom prompt template. Replaces the default Tier 3 structured summary prompt. The formatted resource data is passed as the user message regardless of the prompt.
Maximum length: 10000

model: LLM model for final synthesis. This controls the Stage 2 model in multi-pass mode, or the single model in single-pass mode. Stage 1 extraction always uses Sonnet.
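A fully specified request body, using the example values from the field descriptions above (the user id and prompt text are placeholders):

```python
import json

payload = {
    "sources": ["gmail", "slack"],  # subset of connected integrations
    "user_id": "user_123",          # Tier 1: scope to one user's data
    "prompt": "Summarize this user's recent activity.",  # replaces the default Tier 3 prompt
    "model": "claude-opus-4-6",     # Stage 2 / single-pass synthesis model
}
body = json.dumps(payload)
```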