Overview
Email alerts transform inbound emails into real-time notifications in other channels. Common use cases:
- Alert your on-call engineer in Slack when a server sends a critical error email
- Page your team in PagerDuty when a monitoring system sends an alert email
- Post a Slack message when a high-value lead's email arrives
- Send a mobile push notification when an important supplier emails
- Trigger a Zapier/Make workflow when specific keywords appear in an inbound email
The pattern is always the same: email arrives at a JsonHook address → webhook delivers JSON to your handler → handler checks conditions → handler fires the alert if conditions match. The alert can go to any channel that accepts an HTTP call: Slack, PagerDuty, Twilio, Discord, Teams, or any custom API.
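The handler in the code example below reads only a few fields from the delivered JSON. A sketch of that payload shape as a TypeScript type, inferred from the fields the example uses rather than from an authoritative schema:

```typescript
// Assumed shape of the webhook payload, inferred from the fields the example
// handler reads (deliveryId, from, subject, textBody). Real JsonHook payloads
// may include additional fields; treat this as illustrative, not a schema.
interface InboundEmail {
  from: string;
  subject: string;
  textBody?: string;
}

interface AlertWebhookPayload {
  deliveryId: string;
  email: InboundEmail;
}

const sample: AlertWebhookPayload = {
  deliveryId: "d_abc123",
  email: {
    from: "monitoring@example.com",
    subject: "CRITICAL: api-server-3 is down",
    textBody: "Health check failed at 02:14 UTC...",
  },
};
```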
Prerequisites
Requirements for email alerting:
- A JsonHook inbound address configured to receive the relevant emails
- Webhook URLs or API credentials for your alert destinations (Slack incoming webhook, PagerDuty events API key, etc.)
- Clear alert conditions: which emails should trigger an alert and what information should the alert contain
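Because a missing credential means alerts silently never fire, it helps to validate the destination configuration at startup. A minimal sketch, assuming the environment variable names below (adjust them to whatever your deployment actually defines):

```typescript
// Return the list of required alert-destination variables that are not set.
// The variable names here are illustrative assumptions, not fixed by JsonHook.
function checkAlertConfig(env: Record<string, string | undefined>): string[] {
  const required = ["SLACK_WEBHOOK_URL", "PAGERDUTY_ROUTING_KEY"];
  return required.filter((name) => !env[name]);
}

// At startup: fail loudly rather than discovering the gap during an incident.
const missing = checkAlertConfig(process.env);
if (missing.length > 0) {
  console.error(`Missing alert config: ${missing.join(", ")}`);
}
```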
Step-by-Step Instructions
Build an email-to-alert pipeline:
- Create a JsonHook address pointed at your alert handler webhook.
- Define your alert conditions. For example: from a specific sender, subject contains "CRITICAL" or "ERROR", or any email from a monitoring service.
- Implement the alert handler — check conditions, format the alert message, POST to the alert channel.
- Test with real emails that should and should not trigger the alert to verify your conditions work correctly.
- Add alert throttling to avoid notification storms — if 100 monitoring alerts arrive in one minute, you probably only want one Slack message.
- Monitor the alert pipeline itself — use the JsonHook delivery log to ensure alert emails are being delivered to your handler.
Code Example
Alert handler that posts to Slack for critical monitoring emails:
import express from "express";
import crypto from "crypto";
import fetch from "node-fetch";
const app = express();
app.use(express.raw({ type: "application/json" }));
// Simple in-memory throttle: allow max 1 alert per 5 minutes per sender
const alertThrottle = new Map<string, number>();
const ALERT_CONDITIONS = [
{ test: (e: any) => /critical|error|down|failed/i.test(e.subject), severity: "high" },
{ test: (e: any) => (e.from ?? "").includes("monitoring@"), severity: "high" },
{ test: (e: any) => /warning|degraded/i.test(e.subject), severity: "medium" },
];
async function sendSlackAlert(email: any, severity: string) {
const icon = severity === "high" ? ":red_circle:" : ":warning:";
await fetch(process.env.SLACK_WEBHOOK_URL!, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `${icon} *Email Alert* [${severity.toUpperCase()}]`,
attachments: [{
color: severity === "high" ? "danger" : "warning",
fields: [
{ title: "From", value: email.from, short: true },
{ title: "Subject", value: email.subject, short: true },
{ title: "Preview", value: (email.textBody ?? "").slice(0, 200) },
],
}],
}),
});
}
app.post("/webhooks/alerts", async (req, res) => {
// ... verify HMAC ...
const { email, deliveryId } = JSON.parse(req.body.toString());
const throttleKey = email.from;
const lastAlert = alertThrottle.get(throttleKey) ?? 0;
const now = Date.now();
if (now - lastAlert < 5 * 60_000) {
console.log(`Throttled alert from ${email.from}`);
return res.sendStatus(200);
}
for (const condition of ALERT_CONDITIONS) {
if (condition.test(email)) {
alertThrottle.set(throttleKey, now);
await sendSlackAlert(email, condition.severity);
console.log(`Alert sent for ${deliveryId}: ${email.subject}`);
break;
}
}
res.sendStatus(200);
});
app.listen(3000);
Common Pitfalls
Email alert pitfalls:
- No alert throttling. If a failing system sends 100 error emails per minute, your team receives 100 Slack messages. Implement throttling (one alert per N minutes per sender/condition) to prevent notification storms.
- Alert handler itself going down. If your alert handler is unavailable, critical alerts never fire. Deploy the handler with higher availability than your regular services, or run it as a managed function (e.g. AWS Lambda) so it is not taken offline by routine deployments.
- Too broad or too narrow conditions. Overly broad conditions flood channels with noise; overly narrow conditions miss legitimate alerts. Start broad, monitor, and tighten conditions based on observed false positives and false negatives.
- Not deduplicating across retries. If your Slack post fails and the webhook is retried, you may send duplicate alerts. Add idempotency using the deliveryId before firing any external alert.
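A minimal in-memory sketch of that deliveryId-based idempotency check; in production you would back this with Redis or a database so it survives restarts:

```typescript
// Remember recently handled deliveryIds so a webhook retry (e.g. after a slow
// Slack post) does not fire a second alert for the same email.
const seenDeliveries = new Map<string, number>();
const DEDUP_WINDOW_MS = 60 * 60_000; // keep entries for 1 hour

function isDuplicate(deliveryId: string, now = Date.now()): boolean {
  // Evict expired entries so the map does not grow without bound.
  for (const [id, ts] of seenDeliveries) {
    if (now - ts > DEDUP_WINDOW_MS) seenDeliveries.delete(id);
  }
  if (seenDeliveries.has(deliveryId)) return true;
  seenDeliveries.set(deliveryId, now);
  return false;
}
```

Call `isDuplicate(deliveryId)` before firing any external alert, and acknowledge the webhook with 200 either way.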
- Alerts only in one channel. For critical alerts, send to multiple channels (Slack + PagerDuty) to ensure at least one reaches the on-call engineer. Single-channel alerting has a single point of failure.
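A sketch of that multi-channel fan-out: send the same alert to every channel in parallel and treat the alert as delivered if at least one succeeds. The channel functions here are placeholders for real Slack/PagerDuty API calls:

```typescript
// A channel is any async function that posts an alert message somewhere.
type Channel = (message: string) => Promise<void>;

// Fan out to all channels in parallel; one slow or failing channel must not
// prevent the others from receiving the alert.
async function fanOutAlert(message: string, channels: Channel[]): Promise<boolean> {
  const results = await Promise.allSettled(channels.map((send) => send(message)));
  const delivered = results.filter((r) => r.status === "fulfilled").length;
  if (delivered === 0) {
    console.error("All alert channels failed"); // last-resort signal for monitoring
  }
  return delivered > 0;
}
```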