StackLens

Python SDK Reference

Installation

pip install stacklens

Requires Python 3.9+. The only dependency is httpx.

Configuration

import stacklens
 
stacklens.configure(api_key="sl-xxxx")
 
# For self-hosted deployments:
stacklens.configure(api_key="sl-xxxx", endpoint="https://api.your-domain.com")

Get your API key from the dashboard under Settings → API Keys.


stacklens.configure(api_key, endpoint?)

Initialises the SDK. Call once at application startup.

| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | required | Your StackLens API key (sl-...) |
| endpoint | str | https://api.getstacklens.ai | API base URL (override for self-hosted) |

stacklens.trace(name, *, model, provider, input_tokens, output_tokens, ...)

Record a single LLM span. Returns the trace ID string.

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Operation name shown in the dashboard |
| model | str | required | Model identifier (e.g. "gpt-4o") |
| provider | str | required | Provider name (e.g. "openai") |
| input_tokens | int | required | Number of input/prompt tokens |
| output_tokens | int | required | Number of output/completion tokens |
| total_tokens | int | computed | Defaults to input_tokens + output_tokens |
| cost_usd | float | 0.0 | Estimated cost in USD |
| attributes | dict[str, str] | {} | Key-value metadata attached to the span |
| tags | list[str] | [] | String tags for dashboard filtering |
| status | str | "ok" | "ok" or "error" |

trace_id = stacklens.trace(
    "document-summary",
    model="gpt-4o",
    provider="openai",
    input_tokens=850,
    output_tokens=120,
    cost_usd=0.0012,
    attributes={"document_id": "doc_abc123"},
    tags=["summarisation", "production"],
)
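The total_tokens fallback described above can be sketched as a tiny helper (a hypothetical function for illustration, not part of the SDK):

```python
from typing import Optional

def resolve_total_tokens(input_tokens: int, output_tokens: int,
                         total_tokens: Optional[int] = None) -> int:
    # Mirrors the documented default: when total_tokens is omitted,
    # it is computed as input_tokens + output_tokens.
    if total_tokens is not None:
        return total_tokens
    return input_tokens + output_tokens

print(resolve_total_tokens(850, 120))        # 970, matching the example above
print(resolve_total_tokens(850, 120, 1000))  # an explicit value is kept as-is
```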

stacklens.start_trace(name) — context manager

Trace a multi-step operation. Yields a Span object. Flushes to StackLens on exit.

with stacklens.start_trace("agent-run") as span:
    # ... do your work ...
    span.record_llm(model="gpt-4o", provider="openai",
                    input_tokens=200, output_tokens=150)

If an exception is raised inside the block, the span status is set to "error" automatically.
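The automatic error handling can be illustrated with a minimal stand-in context manager (a sketch only; DemoSpan and demo_start_trace are illustrative names, not the SDK's classes):

```python
class DemoSpan:
    """Minimal stand-in for a StackLens span (illustration only)."""
    def __init__(self, name: str):
        self.name = name
        self.status = "ok"

class demo_start_trace:
    """Mimics the documented behaviour: status becomes "error" on exception."""
    def __init__(self, name: str):
        self.span = DemoSpan(name)

    def __enter__(self):
        return self.span

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.span.status = "error"
        return False  # the exception still propagates to the caller

try:
    with demo_start_trace("agent-run") as span:
        raise RuntimeError("model call failed")
except RuntimeError:
    pass

print(span.status)  # "error"
```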


Span — methods

span.record_llm(...)

Attach LLM metadata to this span.

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str | required | Model identifier |
| provider | str | required | Provider name |
| input_tokens | int | required | Input token count |
| output_tokens | int | required | Output token count |
| total_tokens | int | computed | Defaults to input + output |
| cost_usd | float | 0.0 | Estimated cost in USD |
| temperature | float | None | Sampling temperature |
| max_tokens | int | None | Max tokens setting |
| prompt | str | None | Prompt content (stored encrypted at rest) |
| completion | str | None | Completion content |
| is_streaming | bool | False | Whether the response was streamed |
| finish_reason | str | None | Model's finish reason |

span.set_attribute(key, value)

Attach a string key-value pair to the span.

span.set_attribute("user_id", "u_123")
span.set_attribute("session_id", "sess_abc")

span.add_tag(*tags)

Add one or more tags.

span.add_tag("production", "rag-pipeline")

span.set_status(status)

Set span status to "ok" or "error". (Automatically set to "error" on exception.)


stacklens.prompts.get(name, *, env?)

Fetch the active prompt for a given name and environment.

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | required | Prompt name as configured in the StackLens dashboard |
| env | str | "production" | "dev", "staging", or "production" |

Returns the prompt content as a str.

system_prompt = stacklens.prompts.get("support-system-prompt")
dev_prompt = stacklens.prompts.get("onboarding-email", env="dev")

Exceptions

| Exception | When raised |
|---|---|
| stacklens.ConfigurationError | configure() was not called before tracing |
| stacklens.AuthError | API key is invalid or missing the required scope |
| stacklens.ApiError | The StackLens API returned an error (has .status_code) |
| stacklens.StackLensError | Base class for all SDK exceptions |

try:
    stacklens.trace("my-call", model="gpt-4o", provider="openai",
                    input_tokens=100, output_tokens=50)
except stacklens.AuthError:
    print("Check your API key")
except stacklens.ApiError as e:
    print(f"API error {e.status_code}")
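Because StackLensError is the base class, a single handler can catch any SDK error. A stand-in hierarchy illustrates the pattern (class names mirror the table above; the bodies and flaky_call are illustrative, not the SDK's implementation):

```python
class StackLensError(Exception):
    """Base class for all SDK exceptions (stand-in)."""

class ConfigurationError(StackLensError):
    pass

class AuthError(StackLensError):
    pass

class ApiError(StackLensError):
    def __init__(self, status_code: int):
        super().__init__(f"API error {status_code}")
        self.status_code = status_code

def flaky_call():
    # Hypothetical call that fails with an API-level error.
    raise ApiError(503)

try:
    flaky_call()
except StackLensError as e:  # catches ApiError, AuthError, ConfigurationError
    caught = e

print(type(caught).__name__, caught.status_code)  # ApiError 503
```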