Send events, get emailed when something weird happens. No dashboards to stare at. No thresholds to configure. Just statistics and email.
```typescript
// npx jsr add @uri/anomalisa
import { sendEvent } from "@uri/anomalisa";

await sendEvent({
  token: "your-token",
  userId: "user-123",
  eventName: "purchase",
});
```
Most anomaly detection tools want you to set thresholds. "Alert me if signups drop below 40 per hour." That means you need to already know what normal looks like, which defeats the purpose.
Anomalisa uses Welford's online algorithm to maintain a running mean and variance from your data. Three numbers in memory: count, mean, and sum of squared deviations. Each hour, the event count gets fed into the model. If the new count is more than 2 standard deviations from the running mean, you get an email. That's it.
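The update-and-check step above can be sketched in a few lines. This is an illustrative implementation of Welford's algorithm and the 2-sigma check, not the library's internal code; the names are made up.

```typescript
// A minimal sketch of the detection loop described above.
// State shape and function names are assumptions, not the library's API.
interface WelfordState {
  count: number; // samples seen so far
  mean: number;  // running mean
  m2: number;    // running sum of squared deviations from the mean
}

// Fold one hourly event count into the running statistics (Welford's update).
function update(state: WelfordState, x: number): void {
  state.count += 1;
  const delta = x - state.mean;
  state.mean += delta / state.count;
  state.m2 += delta * (x - state.mean); // note: uses the *updated* mean
}

// Flag a count more than `zThreshold` standard deviations from the mean.
function isAnomalous(state: WelfordState, x: number, zThreshold = 2): boolean {
  if (state.count < 2) return false; // not enough data for a variance yet
  const variance = state.m2 / (state.count - 1); // sample variance
  const stddev = Math.sqrt(variance);
  if (stddev === 0) return x !== state.mean;
  return Math.abs(x - state.mean) / stddev > zThreshold;
}
```

Each hour you call `update` with the bucket's count, and `isAnomalous` with the next one. The `m2` accumulator is what keeps the variance numerically stable; naively summing `x*x` loses precision once the counts get large.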
No batch jobs, no time-series database. The model updates incrementally with constant memory and stays numerically stable even over millions of updates.
The entire storage layer is a key-value store. Event counts in hourly buckets with a 7-day TTL, a three-number Welford state per event name, detected anomalies with a 30-day TTL. No relational queries, no migrations. TTLs handle cleanup. The detection engine is one file you can read in five minutes.
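A key layout like the one described might look something like this. The prefixes and key shapes here are illustrative guesses, not the actual schema:

```typescript
// Hypothetical KV key layout matching the description above.
// Prefixes, separators, and function names are assumptions.
const HOUR_MS = 60 * 60 * 1000;

// Hourly event-count bucket (stored with a 7-day TTL).
function countKey(eventName: string, ts: number): string {
  const hour = new Date(Math.floor(ts / HOUR_MS) * HOUR_MS)
    .toISOString()
    .slice(0, 13); // e.g. "2024-05-01T13"
  return `count:${eventName}:${hour}`;
}

// Welford state per event name (kept indefinitely).
function statsKey(eventName: string): string {
  return `stats:${eventName}`;
}

// Detected anomaly (stored with a 30-day TTL).
function anomalyKey(eventName: string, ts: number): string {
  return `anomaly:${eventName}:${ts}`;
}
```

With keys like these, incrementing a bucket is a single read-modify-write, and expiry is the store's problem, not yours.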
It won't catch everything. If your system fails in a way that doesn't affect event counts, you're on your own. But most real failures do show up as something spiking or dropping, and the simplicity means there's almost nothing to debug.
Deeper technical writeup: anomaly detection with nothing but math and a key-value store