
Coreshift

How Coreshift reduced incident response time by 70% with automated engineering workflows

When something breaks at 2am, every minute counts. Coreshift used Violet to make sure the right people are notified and the right steps are triggered before anyone even opens their laptop.

70% faster response time
5 hrs saved per incident
4 workflows built in the first week

The cost of slow incident response

For a developer tools company, reliability isn't a feature — it's the product. When something breaks in Coreshift's infrastructure, their customers feel it immediately. The problem wasn't that they lacked a response process. It was that the process was entirely manual. Someone had to notice the issue, post in Slack, page the right engineer, create a ticket, update the status page, and notify affected customers. In sequence. By hand. Often in the middle of the night.

"Our on-call rotation was burning people out. Not because of the incidents themselves, but because of all the manual coordination that came with each one." — Ethan Brooks, Engineering Lead, Coreshift

The anatomy of a manual incident

| Step | Owner | Time |
| --- | --- | --- |
| Detect and confirm issue | On-call engineer | 10–20 min |
| Post in #incidents Slack channel | On-call engineer | 5 min |
| Page relevant team members | On-call engineer | 10 min |
| Create incident ticket in Linear | On-call engineer | 8 min |
| Update public status page | On-call engineer | 10 min |
| Notify affected customers | Support lead | 20–30 min |

Every incident cost at least an hour of coordination before any actual fixing began. For a team of 19, that overhead was unsustainable.

What they built with Violet

Coreshift built an incident response workflow that triggers the moment their monitoring system detects an anomaly. Everything that used to happen manually now runs in under 60 seconds.

trigger:
  type: monitoring.alert
  source: "datadog"
  severity: ["high", "critical"]
actions:
  - type: slack.send_message
    channel: "#incidents"
    message: "🔴 Incident detected: {{alert.title}} — Severity: {{alert.severity}}"
  - type: pagerduty.page
    policy: "on-call-rotation"
    message: "{{alert.title}} — {{alert.description}}"
  - type: linear.create_issue
    title: "Incident: {{alert.title}}"
    priority: urgent
    team: "engineering"
  - type: statuspage.update
    status: "investigating"
    message: "We are aware of an issue affecting {{alert.affected_service}} and are investigating."
  - type: email.send_bulk
    recipients: "{{affected_customers.list}}"
    template: "incident-notification"

The on-call engineer wakes up to a Slack message, a Linear ticket already created, the status page already updated, and customers already notified. All they have to do is fix the problem.

The results after 60 days

| Metric | Before Violet | After Violet |
| --- | --- | --- |
| Time to first customer notification | 45 min avg | <2 min |
| Manual coordination per incident | ~60 min | ~5 min |
| Status page update lag | 30 min avg | Instant |
| On-call engineer satisfaction score | 5.1/10 | 8.7/10 |

"The on-call satisfaction score going up was the thing I was most proud of. We didn't reduce incidents — we just made them significantly less painful to deal with." — Ethan Brooks, Engineering Lead, Coreshift

What they automated next

After incident response, Coreshift built a deployment notification workflow — automatically posting release notes to Slack, updating their changelog, and notifying beta users whenever a new version ships.
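A workflow like that could be expressed in the same trigger/actions format as the incident config above. This is a sketch under assumptions: the `deploy.completed` trigger and the `changelog.append` action type are illustrative names, not confirmed parts of Violet's API.

```yaml
trigger:
  type: deploy.completed          # assumed trigger name — fires when a release ships
  source: "ci"
actions:
  - type: slack.send_message
    channel: "#releases"
    message: "🚀 {{release.version}} is live — {{release.title}}"
  - type: changelog.append        # assumed action type for updating the changelog
    entry: "{{release.notes}}"
  - type: email.send_bulk
    recipients: "{{beta_users.list}}"
    template: "release-notes"
```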

What's next for Coreshift

Coreshift is building a post-incident review workflow that automatically compiles a timeline of events, assigns a retrospective owner, and schedules a review meeting — all triggered the moment an incident is marked as resolved.
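In the same spirit, the post-incident workflow might look something like the sketch below. The `linear.issue_updated` trigger filter and the `calendar.create_event` action are assumptions made to illustrate the shape of the config, not documented Violet features.

```yaml
trigger:
  type: linear.issue_updated      # assumed trigger — fires when the incident ticket changes
  filter:
    label: "incident"
    status: "resolved"
actions:
  - type: slack.send_message
    channel: "#incidents"
    message: "✅ {{issue.title}} marked resolved — starting post-incident review"
  - type: linear.create_issue     # retrospective task with an assigned owner
    title: "Retro: {{issue.title}}"
    team: "engineering"
  - type: calendar.create_event   # assumed action type for scheduling the review
    title: "Incident review: {{issue.title}}"
    attendees: "{{incident.responders}}"
```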

"We went from dreading incidents to almost not noticing them operationally. Violet handles the noise so we can focus on the fix." — Ethan Brooks, Engineering Lead, Coreshift

Coreshift

A developer tools company building infrastructure products for engineering teams.

Details

Industry: Developer Tools
Company size: 19 employees
Founded: 2022
Region: North America
Use case: Incident response
