Send events, get emailed when something weird happens. No dashboards to stare at. No thresholds to configure. Just statistics and email.

Event spike detection
Your signup event usually gets ~50/hour. Suddenly it's 200. Or 3. You get an email.
Per-user anomalies
One user generating 100x more events than usual. Could be a bot, abuse, or a bug. You'll know.
Zero configuration
It learns what's normal from your data using Welford's online algorithm. Stays quiet until the math says something is genuinely off.
Open source
Run it yourself, fork it, rip it apart. Or just use the hosted version and move on with your life.

Three lines to get started

// deno add jsr:@uri/anomalisa
import { sendEvent } from "@uri/anomalisa";
await sendEvent({ token: "your-token", userId: "user-123", eventName: "purchase" });

How does this actually work?

Most anomaly detection tools want you to set thresholds. "Alert me if signups drop below 40 per hour." That means you need to already know what normal looks like, which defeats the purpose.

Anomalisa uses Welford's online algorithm to maintain a running mean and variance from your data. Three numbers in memory: count, mean, and sum of squared deviations. Each hour, the event count gets fed into the model. If the new count is more than 2 standard deviations from the running mean, you get an email. That's it.

No batch jobs, no time-series database. The model updates incrementally with constant memory and stays numerically stable even over millions of updates.
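The whole update-and-check loop fits in a few lines. Here's a sketch of the idea in TypeScript (illustrative, not the actual Anomalisa source; the names and the 2-sigma threshold follow the description above):

```typescript
// The three numbers kept per event name.
interface WelfordState {
  count: number; // how many hourly counts we've seen
  mean: number;  // running mean
  m2: number;    // running sum of squared deviations from the mean
}

// Welford's online update: fold one new hourly count into the state.
function update(s: WelfordState, x: number): WelfordState {
  const count = s.count + 1;
  const delta = x - s.mean;
  const mean = s.mean + delta / count;
  const delta2 = x - mean;
  return { count, mean, m2: s.m2 + delta * delta2 };
}

// Is this hour's count more than `z` standard deviations from the mean?
function isAnomalous(s: WelfordState, x: number, z = 2): boolean {
  if (s.count < 2) return false; // not enough history to judge
  const stdDev = Math.sqrt(s.m2 / (s.count - 1)); // sample std deviation
  if (stdDev === 0) return false; // perfectly flat history, nothing to compare
  return Math.abs(x - s.mean) > z * stdDev;
}
```

Constant memory per event name, one arithmetic update per hour, and the `m2` formulation avoids the catastrophic cancellation that the naive "sum of squares minus square of sums" variance suffers from.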

Three detection modes from one event stream

Total count
Your signup event usually gets ~50/hour; suddenly it's 200, or 3. Works in both directions: it catches drops as well as spikes.
Percentage spike
Errors go from 2% to 30% of your traffic while total volume stays flat. Absolute counts look fine, but the ratio is off.
Per-user anomaly
One user generating 100x their normal volume. Could be a bot, abuse, or a bug in their integration.
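All three signals can be derived from the same raw stream. A sketch of how one hour of events might be rolled up into the three numbers the detection modes watch (the shape of the event and the aggregation names are assumptions for illustration):

```typescript
interface RawEvent {
  userId: string;
  eventName: string;
}

// Roll one hour of raw events into the three per-mode signals.
function hourlySignals(events: RawEvent[]) {
  const totals = new Map<string, number>();  // total count per event name
  const perUser = new Map<string, number>(); // count per (user, event) pair
  for (const e of events) {
    totals.set(e.eventName, (totals.get(e.eventName) ?? 0) + 1);
    const userKey = `${e.userId}/${e.eventName}`;
    perUser.set(userKey, (perUser.get(userKey) ?? 0) + 1);
  }
  // Ratio of each event name to all traffic, for the percentage-spike mode.
  const ratios = new Map<string, number>();
  for (const [name, n] of totals) ratios.set(name, n / events.length);
  return { totals, ratios, perUser };
}
```

Each of the three maps then feeds its own Welford state, so one stream of events yields three independent anomaly checks.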

The entire storage layer is Deno KV. Event counts in hourly buckets with a 7-day TTL, three Welford states per event name, detected anomalies with a 30-day TTL. No relational queries, no migrations. TTLs handle cleanup. The detection engine is one file you can read in five minutes.
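A sketch of what that key layout could look like (the key names are illustrative assumptions, not the actual schema; Deno KV keys are arrays, and `expireIn` on `kv.set` takes a TTL in milliseconds):

```typescript
const HOUR = 60 * 60 * 1000;
const WEEK_TTL = 7 * 24 * HOUR;   // hourly count buckets
const MONTH_TTL = 30 * 24 * HOUR; // detected anomalies

// Bucket a timestamp to the start of its hour, for use in the key.
function hourBucket(ts: number): number {
  return Math.floor(ts / HOUR) * HOUR;
}

// Inside a Deno runtime the writes would look something like:
//
//   const kv = await Deno.openKv();
//   await kv.set(["counts", eventName, hourBucket(Date.now())], n,
//                { expireIn: WEEK_TTL });
//   await kv.set(["welford", eventName, "total"], welfordState);
//   await kv.set(["anomalies", anomalyId], record,
//                { expireIn: MONTH_TTL });
```

With TTLs on every write, "cleanup" is just Deno KV expiring keys on its own; there's no cron job to forget about.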

It won't catch everything. If your system fails in a way that doesn't affect event counts, you're on your own. But most real failures do show up as something spiking or dropping, and the simplicity means there's almost nothing to debug.

Deeper technical writeup: anomaly detection with nothing but math and a key-value store