Or import directly from JSR:
Records an event. The server builds a statistical model from your events and emails you when something looks anomalous.
| Field | Type | Description |
|---|---|---|
| `token` | string | Your project token from the dashboard |
| `userId` | string | Identifies the user performing the action |
| `eventName` | string | Name of the event (e.g. `"signup"`, `"purchase"`) |
Returns an empty object. Anomaly detection and alerting happen server-side.
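As a sketch, a raw call with these fields might look like the following. The endpoint URL is a placeholder and the `track`/`buildTrackPayload` names are illustrative, not part of the documented API:

```typescript
// Hedged sketch: the URL below is a placeholder, and the raw HTTP shape
// is inferred from the field table above, not a documented wire format.
const TRACK_URL = "https://example.com/track"; // placeholder, not the real endpoint

interface TrackPayload {
  token: string;     // project token from the dashboard
  userId: string;    // the user performing the action
  eventName: string; // e.g. "signup", "purchase"
}

function buildTrackPayload(token: string, userId: string, eventName: string): TrackPayload {
  return { token, userId, eventName };
}

// Send one event; per the docs above, the server responds with an empty object.
async function track(payload: TrackPayload): Promise<Record<string, never>> {
  const res = await fetch(TRACK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```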
Fetches all detected anomalies for a project. Anomalies are kept for 30 days.
Each anomaly has:
| Field | Type | Description |
|---|---|---|
| `eventName` | string | The event that spiked |
| `metric` | `"totalCount" \| "userSpike" \| "percentageSpike"` | Type of anomaly detected |
| `userId` | string? | Present when `metric` is `"userSpike"` |
| `bucket` | string | The hourly bucket (ISO timestamp) |
| `expected` | number | Running mean for this event |
| `actual` | number | Observed count in this bucket |
| `zScore` | number | Standard deviations from the mean (`totalCount`/`userSpike`) or percentage change (`percentageSpike`) |
| `detectedAt` | string | ISO timestamp of detection |
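In TypeScript, the anomaly records above can be modeled roughly like this; the type names are illustrative, and the SDK's own exported types (if any) may differ:

```typescript
// Illustrative shape for the anomaly records described above.
// Assumption: the real SDK may export differently named types.
type AnomalyMetric = "totalCount" | "userSpike" | "percentageSpike";

interface Anomaly {
  eventName: string;   // the event that spiked
  metric: AnomalyMetric;
  userId?: string;     // present when metric is "userSpike"
  bucket: string;      // hourly bucket (ISO timestamp)
  expected: number;    // running mean for this event
  actual: number;      // observed count in this bucket
  zScore: number;      // std devs from mean, or percentage change for percentageSpike
  detectedAt: string;  // ISO timestamp of detection
}
```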
Events are counted in hourly buckets. The server maintains a running mean and variance per event using Welford's online algorithm. When a bucket's count deviates by more than 2 standard deviations, it's flagged as an anomaly and you get an email.
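The running mean/variance logic can be sketched as follows. This is an illustration of Welford's algorithm and the 2-standard-deviation rule described above, not the server's actual code:

```typescript
// Sketch of the server-side statistics described above (assumption:
// the real implementation may differ). Welford's online algorithm
// maintains a running mean and variance without storing past buckets.
class EventStats {
  private n = 0;
  private mean = 0;
  private m2 = 0; // running sum of squared deviations from the mean

  // Incorporate one hourly bucket's count.
  update(count: number): void {
    this.n += 1;
    const delta = count - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (count - this.mean);
  }

  // Sample variance of the buckets seen so far.
  get variance(): number {
    return this.n > 1 ? this.m2 / (this.n - 1) : 0;
  }

  // z-score of a new bucket against the history so far.
  zScore(count: number): number {
    const sd = Math.sqrt(this.variance);
    return sd === 0 ? 0 : (count - this.mean) / sd;
  }

  // Flag when the bucket deviates by more than 2 standard deviations.
  isAnomalous(count: number): boolean {
    return Math.abs(this.zScore(count)) > 2;
  }
}
```

Welford's formulation is numerically stable and needs only three numbers per event, which is why it suits a server that never wants to re-read historical buckets.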
Three things are tracked independently:

- **Total count (z-score)** -- the total number of times an event fires per hour across all users. Fires when the count is more than 2 standard deviations from the mean.
- **Percentage spike** -- fires when the hourly count more than doubles compared to the rolling mean (and the absolute difference is at least 3). Catches gradual increases that the z-score misses in noisy data.
- **Per-user spikes** -- each user's hourly event count is compared against that user's own historical pattern.
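The percentage-spike rule in the list above reduces to a two-part check. A minimal sketch, assuming the thresholds stated above (more than double, absolute jump of at least 3):

```typescript
// Sketch of the percentage-spike rule described above; the function
// name is illustrative, not part of the SDK.
function isPercentageSpike(count: number, rollingMean: number): boolean {
  // Fires only when the count more than doubles the rolling mean
  // AND the absolute difference is at least 3.
  return count > 2 * rollingMean && count - rollingMean >= 3;
}
```

The absolute-difference guard keeps low-volume events from alerting constantly: going from a mean of 1 to a count of 3 triples the rate but is plausibly noise.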