Investigate API Performance Bottlenecks

Zero pulls TP95 data from Axiom, surfaces the slowest endpoints, identifies patterns, and creates a GitHub issue with the findings - all from a single Slack message.

Zero connects: Slack, Axiom, and GitHub

What the problem is

TP95 spikes don't always page you. They sit in your metrics dashboards, quietly degrading user experience while your team is focused on shipping. Zero makes it easy to check any endpoint's performance on demand - pull the last 7 days of TP95 data, see which requests are slow, and get a GitHub issue filed automatically so nothing gets dropped.

How Zero fixes it

Step 1: Connect your tools

Axiom (Required)
Zero queries your Axiom dataset for request events filtered by endpoint path and time window.

GitHub (Optional)
Zero creates issues from findings when asked, with the metric table embedded in the body.

Step 2: Ask Zero

@Zero check Axiom for the POST /api/zero/runs endpoint - show me the TP95 for the last 7 days and flag any events over 5s.
Zero queries Axiom for the endpoint
Zero pulls all request events for the specified endpoint over the time window, computing TP50, TP95, and TP99 per day.
Zero surfaces slow events
Zero filters for events above the threshold (default: 5 seconds), groups them by time of day and error pattern, and identifies the likely root cause.
GitHub issue created with findings
Zero files a structured GitHub issue with the metric table, slow event list, and root cause analysis - ready for the engineering team to act on.
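The steps above boil down to a per-day percentile rollup plus a threshold filter. Here is a minimal sketch of that computation in Python; the event shape (`time`, `duration_ms`) and the nearest-rank percentile method are assumptions for illustration, not Zero's actual internals.

```python
from datetime import datetime, timezone

def percentile(values, p):
    """Nearest-rank percentile: the value at rank ceil(p% of n) in sorted order."""
    if not values:
        return None
    ordered = sorted(values)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

def daily_latency_report(events, threshold_ms=5000):
    """Group request events by day, compute TP50/TP95/TP99 per day,
    and collect the events slower than threshold_ms."""
    by_day = {}
    for e in events:
        day = e["time"].date().isoformat()
        by_day.setdefault(day, []).append(e["duration_ms"])
    rows = {
        day: {p: percentile(durations, p) for p in (50, 95, 99)}
        for day, durations in sorted(by_day.items())
    }
    slow = [e for e in events if e["duration_ms"] > threshold_ms]
    return rows, slow

events = [
    {"time": datetime(2025, 4, 12, tzinfo=timezone.utc), "duration_ms": d}
    for d in (100, 200, 300, 6857)
]
rows, slow = daily_latency_report(events)
# rows["2025-04-12"] -> {50: 200, 95: 6857, 99: 6857}; slow holds the 6857ms event
```

With a single 6,857ms outlier in the sample, TP95 lands on the outlier while TP50 stays low - exactly the kind of gap that never pages you but shows up in a percentile table.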

Step 3: Take it further

Assign and prioritize
Route the issue to the right engineer
@Zero assign the Axiom issue to Ethan and add the labels performance and priority-high.
Schedule regular monitoring
Set up a weekly TP95 check for key endpoints
@Zero every Monday at 9am, pull the last 7 days of TP95 for /api/zero/runs and post to #dev. File a GitHub issue only if TP95 exceeds 3s.
Investigate a specific slow event
Dig into a single outlier event
@Zero pull the full trace for the 6,857ms event on Apr 12 from Axiom and tell me where the time is spent.
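The scheduled check above is post-always, file-only-on-breach. A minimal sketch of that conditional, with a hypothetical `file_issue` callback standing in for GitHub issue creation (the names and signature are illustrative assumptions):

```python
def weekly_check(tp95_ms, slo_ms=3000, file_issue=None):
    """Build the weekly summary either way; open an issue only
    when TP95 breaches the SLO threshold."""
    summary = f"TP95 over the last 7 days: {tp95_ms}ms (SLO: {slo_ms}ms)"
    breached = tp95_ms > slo_ms
    if breached and file_issue is not None:
        file_issue(
            title=f"TP95 regression: {tp95_ms}ms exceeds {slo_ms}ms SLO",
            body=summary,
        )
    return summary, breached

issues = []
_, breached = weekly_check(3500, file_issue=lambda **kw: issues.append(kw))
# breached is True and one issue is filed; weekly_check(2000, ...) files nothing
```

Keeping the issue-filing threshold (3s here) separate from the flagging threshold in Step 2 (5s) lets the weekly report stay quiet while still tracking your SLO.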

Tips for better results

Specify a threshold in your prompt - 'flag events over 3 seconds' - so Zero's findings match your SLO, not a generic cutoff.
Ask Zero to check the endpoint right after a deploy to catch regressions before they affect users at scale.
Combine with Daily Error Triage for a complete morning health check: errors from Sentry plus performance from Axiom in one brief.