Connect New Relic and PagerDuty to Automate Incident Response at Scale

Turn real-time observability data into actionable alerts, without manual intervention.

New Relic + PagerDuty integration

New Relic and PagerDuty are two of the most relied-on tools in a modern engineering team's stack. New Relic continuously monitors application performance, infrastructure health, and error rates, while PagerDuty gets the right people engaged the moment something goes wrong. Connecting them through tray.ai closes the gap between detecting an issue and resolving it — no manual handoffs, no dropped alerts.

Engineering and DevOps teams live and die by their mean time to detect (MTTD) and mean time to resolve (MTTR). When New Relic spots an anomaly — a spike in error rates, a degraded Apdex score, a saturated resource — every second spent manually triaging and routing that alert is a second your users feel it. Connect New Relic and PagerDuty through tray.ai and you can automatically escalate incidents to the right on-call engineer, enrich PagerDuty incidents with full New Relic diagnostic context, and auto-resolve incidents when New Relic confirms conditions are back to normal. The result is a faster, less manual incident lifecycle that keeps your SLAs intact and your engineers focused on fixing things, not coordinating.

Automate & integrate New Relic + PagerDuty

Tray.ai makes it easy to automate business processes and integrate data between New Relic and PagerDuty.

Use case

Automatic Incident Creation from New Relic Alerts

When New Relic fires a critical alert policy violation — a breached Apdex threshold, an error rate past its limit — tray.ai automatically opens a PagerDuty incident and assigns it to the correct service and escalation policy. No one has to manually translate a monitoring alert into an incident. On-call engineers get immediate, context-rich notifications without a human relay in between.

  • Cut out the manual monitoring and handoffs that cause alert fatigue
  • Every critical New Relic violation becomes a tracked PagerDuty incident instantly
  • Reduce MTTD by triggering notifications the moment a threshold is breached
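Under the hood, this kind of workflow is a translation from a New Relic alert webhook payload into a PagerDuty Events API v2 trigger event. A minimal sketch outside of tray.ai — the field names read from the webhook payload depend on how your New Relic payload template is configured, and the routing key is a placeholder:

```python
import json
import urllib.request

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

# Map New Relic alert priorities to PagerDuty Events API severities.
SEVERITY_MAP = {"CRITICAL": "critical", "HIGH": "error", "MEDIUM": "warning", "LOW": "info"}

def build_trigger_event(alert: dict, routing_key: str) -> dict:
    """Translate a New Relic alert webhook payload into a PagerDuty
    Events API v2 'trigger' event. The keys read from `alert` are
    assumptions about the configured webhook payload template."""
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        # Reusing the New Relic issue id as the dedup_key lets a later
        # "resolve" event close the same PagerDuty incident.
        "dedup_key": alert["issueId"],
        "payload": {
            "summary": alert["title"],
            "source": alert.get("entityName", "new-relic"),
            "severity": SEVERITY_MAP.get(alert.get("priority", "HIGH"), "error"),
            "custom_details": {"condition": alert.get("conditionName")},
        },
    }

def send_event(event: dict) -> None:
    """POST the event to PagerDuty (not invoked in this sketch)."""
    req = urllib.request.Request(
        PAGERDUTY_EVENTS_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Using the New Relic issue id as the `dedup_key` is the design choice that makes the rest of the lifecycle automatable: PagerDuty deduplicates repeat triggers and matches later resolve events on that key.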

Use case

Enriching PagerDuty Incidents with New Relic Diagnostic Data

When a PagerDuty incident is created, tray.ai automatically queries New Relic for relevant metrics, error traces, deployment markers, and host health data, then attaches all of it directly to the incident as notes or custom details. Responders arrive fully briefed, rather than spending the first several minutes hunting for diagnostic information in a separate tool.

  • On-call engineers get full observability context before they even open New Relic
  • Less time context-switching between monitoring and incident management platforms
  • Faster root cause analysis with pre-populated trace and metric data
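The enrichment step amounts to formatting New Relic data and POSTing it to PagerDuty's REST API notes endpoint. A sketch under some assumptions — the shape of the `metrics` dict is illustrative, and the API token and requester email are placeholders (PagerDuty's notes endpoint requires a `From` header with a valid user email):

```python
import json
import urllib.request

def format_enrichment_note(metrics: dict) -> str:
    """Render New Relic diagnostic data as a readable incident note.
    The keys on `metrics` are illustrative, not a fixed schema."""
    lines = ["New Relic diagnostics at time of incident:"]
    for name, value in metrics.items():
        lines.append(f"- {name}: {value}")
    return "\n".join(lines)

def attach_note(incident_id: str, content: str, api_token: str, from_email: str) -> None:
    """POST a note to a PagerDuty incident via the REST API (not
    invoked in this sketch)."""
    req = urllib.request.Request(
        f"https://api.pagerduty.com/incidents/{incident_id}/notes",
        data=json.dumps({"note": {"content": content}}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token token={api_token}",
            "From": from_email,  # must be a valid PagerDuty user email
        },
    )
    urllib.request.urlopen(req)
```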

Use case

Auto-Resolve PagerDuty Incidents When New Relic Conditions Normalize

Tray.ai monitors New Relic for alert condition closures and automatically resolves or acknowledges the matching PagerDuty incident when the underlying issue clears. This prevents stale incidents from piling up in PagerDuty and stops on-call engineers from getting paged repeatedly for conditions that have already self-healed.

  • No more noise from alerts that have already resolved on their own
  • PagerDuty dashboards stay clean and accurate
  • Incident duration data reflects reality, which matters for SLA reporting
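The mechanism that makes this matching reliable is PagerDuty's `dedup_key`. A sketch, assuming the workflow that originally opened the incident keyed its trigger event on the New Relic issue id:

```python
def build_resolve_event(issue_id: str, routing_key: str) -> dict:
    """Build a PagerDuty Events API v2 'resolve' event. If the trigger
    event that opened the incident used the New Relic issue id as its
    dedup_key, sending the same key with event_action 'resolve' closes
    the matching incident — no incident-id lookup required."""
    return {
        "routing_key": routing_key,
        "event_action": "resolve",
        "dedup_key": issue_id,
    }
```

Posted to `https://events.pagerduty.com/v2/enqueue`, this closes the incident without any search step, which is why consistent dedup keys at trigger time matter.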

Use case

Correlate New Relic Deployments with PagerDuty Incident Spikes

By connecting New Relic deployment markers with PagerDuty incident activity, tray.ai can flag when a new deployment coincides with a surge in incidents — creating annotations or triggering a dedicated change-related incident. Engineering teams get an immediate signal when a release may be causing production issues, along with an audit trail linking code changes to operational impact.

  • Surface deployment-related regressions as structured PagerDuty incidents right away
  • Clear causal link between code changes and production degradation
  • Faster identification of the root cause after a problematic deploy
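The correlation logic itself can be a simple time-window heuristic. A sketch — the 30-minute window and the 3-incident threshold are illustrative defaults, not values from the source:

```python
from datetime import datetime, timedelta

def incidents_near_deploy(deploy_time: datetime, incident_times: list,
                          window_minutes: int = 30) -> list:
    """Return incident start times that fall within `window_minutes`
    after a deployment marker — a simple heuristic for spotting
    change-related incident spikes."""
    window = timedelta(minutes=window_minutes)
    return [t for t in incident_times if deploy_time <= t <= deploy_time + window]

def looks_change_related(deploy_time: datetime, incident_times: list,
                         threshold: int = 3) -> bool:
    """Flag the deploy when the incident count inside the window meets
    a configurable threshold."""
    return len(incidents_near_deploy(deploy_time, incident_times)) >= threshold
```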

Use case

Intelligent On-Call Routing Based on New Relic Service Ownership

Tray.ai uses New Relic entity and service metadata to determine which team owns a degraded service and automatically routes the PagerDuty incident to the correct escalation policy. Instead of a single catch-all alert channel, incidents go directly to the team that actually owns the affected system.

  • Incidents route to the right team automatically based on service ownership data
  • No manual triage or reassignment overhead
  • Faster acknowledgment because the most relevant on-call engineer gets paged first
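At its core, ownership-based routing is a lookup from a New Relic entity tag to a per-team PagerDuty routing key. A sketch — the `team` tag name, the routing-key values, and the catch-all fallback are all assumptions:

```python
# Illustrative mapping from a New Relic entity tag value (e.g. the
# "team" tag) to the PagerDuty Events API routing key for that team's
# service. Every key here is a placeholder.
TEAM_ROUTING_KEYS = {
    "payments": "RKEY_PAYMENTS",
    "platform": "RKEY_PLATFORM",
}
DEFAULT_ROUTING_KEY = "RKEY_CATCHALL"  # fallback escalation path

def routing_key_for_entity(entity_tags: dict) -> str:
    """Pick the routing key from the owning team's tag, falling back to
    a catch-all service when ownership metadata is missing."""
    team = entity_tags.get("team")
    return TEAM_ROUTING_KEYS.get(team, DEFAULT_ROUTING_KEY)
```

The explicit fallback is worth keeping: an untagged service should still page someone, not silently drop.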

Use case

Post-Incident Reporting and SLA Compliance Tracking

After a PagerDuty incident is resolved, tray.ai automatically retrieves related New Relic performance data — anomaly duration, peak error rates, affected services — and compiles it into a structured post-incident report. That report can go out via email, Slack, or straight into a data warehouse for compliance tracking. No one has to write it by hand.

  • Post-incident reports generated automatically from real observability data
  • An auditable record of incidents, durations, and SLA impact without manual effort
  • Engineering managers aren't stuck assembling performance data after the fact
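The report-assembly step is mostly templating over the two data sources. A sketch — every field name below is illustrative, standing in for whatever the workflow pulls from PagerDuty and New Relic:

```python
def build_postincident_report(incident: dict, nr_data: dict) -> str:
    """Assemble a plain-text post-incident report from PagerDuty
    incident fields and New Relic metrics for the incident window.
    All dict keys are illustrative placeholders."""
    return "\n".join([
        f"Post-incident report: {incident['title']}",
        f"Duration: {incident['duration_minutes']} min",
        f"Affected services: {', '.join(nr_data['affected_services'])}",
        f"Peak error rate: {nr_data['peak_error_rate']}",
        f"Anomaly window: {nr_data['anomaly_start']} to {nr_data['anomaly_end']}",
    ])
```

The same string could be posted to Slack, emailed, or written to a warehouse row — the formatting step is the same regardless of the destination.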

Challenges Tray.ai solves

Common obstacles when integrating New Relic and PagerDuty — and how Tray.ai handles them.

Challenge

Alert-to-Incident Latency Causing Delayed Response

In many organizations, there's a frustrating delay between when New Relic detects an issue and when an engineer actually gets paged in PagerDuty. That gap usually exists because alert routing involves manual steps, email forwarding, or poorly configured integrations that drop events when incident volume spikes.

How Tray.ai helps

Tray.ai uses real-time webhook-based triggers that process New Relic alert violations the instant they fire, creating PagerDuty incidents in seconds with no polling delays or manual handoffs. Conditional logic in tray.ai means only actionable alerts — filtered by severity, service, or environment — trigger incidents, so you cut the noise without slowing down the response.
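That conditional gate can be sketched as a simple predicate. The priority ranking and the field names read from the alert payload are assumptions about how the webhook payload is shaped:

```python
def should_page(alert: dict,
                min_priority: str = "HIGH",
                environments: tuple = ("production",)) -> bool:
    """Gate for alert-to-incident creation: only alerts at or above a
    priority threshold, in an environment worth paging for, become
    PagerDuty incidents. Field names are illustrative."""
    rank = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}
    return (rank.get(alert.get("priority"), 0) >= rank[min_priority]
            and alert.get("environment") in environments)
```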

Challenge

Loss of Observability Context During Incident Response

On-call engineers routinely waste critical minutes navigating from a PagerDuty notification to New Relic just to gather the diagnostic context they need. Without automated enrichment, responders show up to an investigation blind — no error traces, no metric trends, no deployment history.

How Tray.ai helps

Tray.ai automatically queries New Relic's REST API and NRQL engine the moment a PagerDuty incident is created, pulling relevant metrics and appending them directly to the incident as structured notes. Responders have full observability context in PagerDuty without switching tools during the first, most critical minutes of response.
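The NRQL side of that enrichment is just query construction over the incident window. A sketch — `TransactionError` is a standard New Relic event type, but the function name, parameters, and app name are illustrative:

```python
def nrql_for_incident(app_name: str, minutes_ago: int = 30) -> str:
    """Build an NRQL query for the top error messages in the window
    leading up to an incident. The lookback default is illustrative."""
    return (
        "SELECT count(*) FROM TransactionError "
        f"WHERE appName = '{app_name}' "
        f"SINCE {minutes_ago} minutes ago "
        "FACET error.message LIMIT 10"
    )
```

The resulting string would be run through New Relic's NerdGraph API, and the faceted results formatted into the incident note.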

Challenge

Stale Incidents Cluttering PagerDuty After Auto-Recovery

Many New Relic alerts self-heal — network blips, transient load spikes, and temporary resource exhaustion often resolve on their own. Without automated resolution workflows, those incidents stay open in PagerDuty, distorting SLA metrics, generating redundant pages, and adding cognitive overhead for anyone reviewing the incident queue.

How Tray.ai helps

Tray.ai listens for New Relic alert closure events and automatically resolves the matching PagerDuty incident with a timestamped resolution note. This two-way sync keeps incident state accurate across both platforms and ensures SLA duration calculations reflect true incident windows, not the time someone remembered to close a ticket.

Templates

Pre-built workflows for New Relic and PagerDuty you can deploy in minutes.

New Relic Alert Violation → PagerDuty Incident

This template listens for New Relic alert policy violations via webhook and automatically creates a PagerDuty incident with severity mapping, service routing, and a summary of the triggering condition. No one has to manually translate a monitoring alert into an incident, and no critical violation slips through untracked.

PagerDuty Incident → Enrich with New Relic Metrics

Triggered when a new PagerDuty incident opens, this template queries New Relic for real-time metrics, recent error traces, and infrastructure health for the affected service, then appends that data as a note to the incident. Responders arrive with the observability context they need to start investigating immediately.

New Relic Alert Closed → Auto-Resolve PagerDuty Incident

When a New Relic alert condition closes because the monitored metric has returned to normal, this template automatically finds and resolves the matching open PagerDuty incident. Stale incidents stop accumulating, and SLA calculations reflect accurate resolution data.

New Relic Deployment Marker → PagerDuty Change Event

This template captures New Relic deployment markers and creates a PagerDuty change event, giving engineering teams full visibility into when code was deployed relative to any incidents that follow. The correlation between releases and production issues becomes immediate rather than something you have to dig for.

PagerDuty Incident Resolved → New Relic Post-Incident Report

After a PagerDuty incident is marked resolved, this template automatically pulls historical New Relic data for the incident window — peak error rates, affected entities, anomaly duration — and compiles a structured post-incident report delivered to a designated Slack channel or email recipient.

Scheduled New Relic Health Check → PagerDuty Proactive Alert

On a configurable schedule, this template queries New Relic for services approaching threshold limits — rising error budgets, elevated p99 latency — and creates low-urgency PagerDuty incidents or sends informational alerts to on-call teams before conditions turn critical.
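The "approaching threshold" check at the heart of this template is a small piece of logic. A sketch — the 80% warning fraction is an illustrative default, not a value from the source:

```python
def approaching_threshold(current: float, limit: float,
                          warn_fraction: float = 0.8) -> bool:
    """True when a metric has consumed at least `warn_fraction` of its
    limit but has not yet breached it — e.g. p99 latency sitting at
    80% of its SLO. Breached metrics are excluded, since those are the
    critical-alert path, not the proactive one."""
    return warn_fraction * limit <= current < limit
```

When this returns true, the workflow would send a low-urgency PagerDuty event instead of a critical page, surfacing the trend before it becomes an incident.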

Ship your New Relic + PagerDuty integration.

We'll walk through the exact integration you're imagining in a tailored demo.