Walk into the page with Datadog already pulled
Got paged at 2am for an API latency spike? Email Carly and she'll pull the Datadog dashboard, check recent deploys, start a war-room thread in #incidents, and post the likely cause — so you land with context.
What Carly does
1. Acknowledge the PagerDuty alert
2. Pull the relevant Datadog dashboard for the alert window
3. List recent deploys in the affected service
4. Correlate the spike with deploys, traffic, or third-party outages
5. Start a war-room thread in #incidents with the hypothesis
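If you're curious what those five steps look like under the hood, here's a minimal sketch of the same triage loop as raw API calls against the public PagerDuty, Datadog, GitHub, and Slack APIs. Everything specific (tokens, incident ID, repo, metric query, channel) is a placeholder, and this is not Carly's actual implementation; it just makes the workflow concrete.

```python
# Hypothetical triage loop: ack the page, pull metrics, list deploys,
# form a hypothesis, post to Slack. All identifiers are placeholders.
import os
import time

import requests

PD_TOKEN = os.environ["PAGERDUTY_TOKEN"]   # placeholder env vars
DD_API_KEY = os.environ["DD_API_KEY"]
DD_APP_KEY = os.environ["DD_APP_KEY"]
GH_TOKEN = os.environ["GITHUB_TOKEN"]
SLACK_TOKEN = os.environ["SLACK_BOT_TOKEN"]

INCIDENT_ID = "PXXXXXX"        # placeholder PagerDuty incident
REPO = "acme/api-service"      # placeholder GitHub repo

now = int(time.time())
window_start = now - 3600      # the hour around the page

# 1. Acknowledge the PagerDuty alert (REST API v2; needs a From header).
requests.put(
    f"https://api.pagerduty.com/incidents/{INCIDENT_ID}",
    headers={
        "Authorization": f"Token token={PD_TOKEN}",
        "From": "oncall@example.com",
        "Content-Type": "application/json",
    },
    json={"incident": {"type": "incident_reference", "status": "acknowledged"}},
).raise_for_status()

# 2. Pull p95 latency for the alert window from Datadog (placeholder query).
latency = requests.get(
    "https://api.datadoghq.com/api/v1/query",
    headers={"DD-API-KEY": DD_API_KEY, "DD-APPLICATION-KEY": DD_APP_KEY},
    params={
        "from": window_start,
        "to": now,
        "query": "p95:trace.http.request.duration{service:api}",
    },
).json()

# 3. List recent deploys in the affected service (here: GitHub deployments).
deploys = requests.get(
    f"https://api.github.com/repos/{REPO}/deployments",
    headers={"Authorization": f"Bearer {GH_TOKEN}"},
    params={"per_page": 5},
).json()

# 4. Correlate: did a deploy land just before the spike? (Naive heuristic;
#    real triage would also check traffic and third-party status pages.)
recent = [d["sha"][:7] for d in deploys]
hypothesis = (
    f"API p95 latency spiked in the last hour; deploys in window: {recent}. "
    "Most likely cause: the latest deploy (correlation, not yet confirmed)."
)

# 5. Start the war-room thread in #incidents with the hypothesis.
# (Slack signals errors in the JSON body, so real code should also
# check resp.json()["ok"], not just the HTTP status.)
requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
    json={"channel": "#incidents", "text": hypothesis},
).raise_for_status()
```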
Land with context, act fast
You're in the fix 30 seconds after the page — not 10 minutes. The team sees the hypothesis in Slack before anyone asks "what's going on?"
Email this to Carly to kick it off.
Hey Carly, I just got paged for an API latency spike. Can you ack the PagerDuty alert and pull the relevant Datadog dashboard for the alert window so I land with context? While you're at it, list recent deploys in the affected service and correlate the spike with deploys, traffic, or any third-party outages. Then draft a war-room thread for #incidents with your best hypothesis so the team's already up to speed when they jump in; check with me before you post so I can adjust the hypothesis if needed. Thanks!
More recipes for engineering teams
- Cluster Sentry issues by root cause, match against Linear, and file the rest with stack + occurrence counts. Read →
- First-pass review with correctness flags, missing-test calls, and a summary verdict — dropped straight into GitHub. Read →
- Linear + GitHub stitched into a Notion closeout — what shipped, what carried, wins worth calling out. Read →

Ready to automate your busywork?
Carly schedules, researches, and briefs you — so you can focus on what matters.
Get Carly Today →
Or try our Free Group Scheduling Tool or Free Booking Page