Setup Alert Manager on Kubernetes – Beginners Guide


AlertManager is an open-source alerting system that works with the Prometheus Monitoring system. This blog is part of the Prometheus Kubernetes tutorial series.

In our previous posts, we have looked at the following.

  1. Setup Prometheus on Kubernetes
  2. Setup Kube State Metrics

Note: In this guide, all the Alert Manager Kubernetes objects will be created inside a namespace called monitoring. If you use a different namespace, you can replace it in the YAML files.

Alertmanager on Kubernetes

Alert Manager setup has the following key configurations.

  1. A config map for AlertManager configuration
  2. A config Map for AlertManager alert templates
  3. Alert Manager Kubernetes Deployment
  4. Alert Manager service to access the web UI.

Important Setup Notes

You should have a working Prometheus setup up and running.

Prometheus should have the correct Alert Manager service endpoint in its config.yaml, as shown below, so it can send alerts to Alert Manager.

If you have followed the previous articles, this entry is already present in the Prometheus config map.

    alerting:
      alertmanagers:
        - scheme: http
          static_configs:
            - targets:
                - "alertmanager.monitoring.svc:9093"

All the alerting rules you need have to be present in the Prometheus config. They should be created as part of the Prometheus config map, in a file named prometheus.rules, and referenced in config.yaml as follows.

    rule_files:
      - /etc/prometheus/prometheus.rules

Alert Manager alerts can be written based on the metrics available in Prometheus.
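For illustration, here is a minimal sketch of an entry in prometheus.rules (the alert name, expression, and annotations are placeholders to adapt to your metrics); note the team label, which the routing configuration further down uses to pick a receiver:

    groups:
      - name: alertmanager-demo-rules
        rules:
          - alert: InstanceDown
            # Fires when a scrape target has been unreachable for 5 minutes
            expr: up == 0
            for: 5m
            labels:
              team: devops
            annotations:
              summary: "Instance {{ $labels.instance }} is down"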

To receive email alerts, you need a valid SMTP host in the Alert Manager config.yml (the smarthost parameter). You can customize the email template as per your needs in the alert template config map; a generic template is used in this guide.

Let’s get started with the setup.

Alertmanager Kubernetes Manifests

Step 1: Clone the GitHub repo
    git clone git@github.com:magarGanga/kubernetes-monitor.git
Step 2: Config Map for Alert Manager Configuration

Edit config-alert.yaml as per your needs. Here, the configuration is for email alerts only.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: alertmanager-config
      namespace: monitoring
    data:
      config.yml: |-
        global:
          resolve_timeout: 5m
          smtp_smarthost: 'smtp-mail.outlook.com:587'
          smtp_from: 'XXX@outlook.com'
          smtp_auth_username: 'XXX@outlook.com'
          smtp_auth_password: 'XXXX'

        templates:
          - '/etc/alertmanager-templates/*.tmpl'

        route:
          group_by: ['alertname', 'cluster', 'service']
          group_wait: 30s
          group_interval: 5m
          repeat_interval: 1m
          receiver: default
          routes:
            - match:
                team: devops
              receiver: email_alert_to_devops
            - match:
                team: developer
              receiver: email_alert_to_developer

        receivers:
          - name: 'default'
            email_configs:
              - to: 'XXX@outlook.com'
          - name: 'email_alert_to_devops'
            email_configs:
              - to: 'XXX@outlook.com, XXX@outlook.com'
          - name: 'email_alert_to_developer'
            email_configs:
              - to: 'XXX@outlook.com, XXX@outlook.com, XXX@outlook.com, XXX@outlook.com'

Let’s create the config map using kubectl.

    kubectl create -f config-alert.yaml

Config Map for Alert Template

We need an alert template for every receiver we use (email, Slack, etc.). Alert Manager will dynamically substitute the values and deliver alerts to the receivers based on the template. You can customize these templates based on your needs.
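As a sketch (the actual template.yaml in the repo may differ), a templates config map could look like the following. The config map name alertmanager-templates matches the one mounted by the deployment later in this guide; the template body itself is a minimal illustrative example that overrides the default email subject:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: alertmanager-templates
      namespace: monitoring
    data:
      email.tmpl: |-
        {{ define "email.default.subject" }}[{{ .Status | toUpper }}] {{ .CommonLabels.alertname }}{{ end }}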

You can use template.yaml from the GitHub repo. Create the config map using kubectl.

    kubectl apply -f template.yaml -n monitoring

Create Deployment

Create a file called deployment.yaml with the following contents.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: alertmanager
      namespace: monitoring
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: alertmanager
      template:
        metadata:
          name: alertmanager
          labels:
            app: alertmanager
        spec:
          securityContext:
            fsGroup: 472
            supplementalGroups:
              - 0
          containers:
            - name: alertmanager
              image: prom/alertmanager:latest
              args:
                - "--config.file=/etc/alertmanager/config.yml"
                - "--storage.path=/alertmanager"
              ports:
                - name: alertmanager
                  containerPort: 9093
              resources:
                requests:
                  cpu: 100m
                  memory: 250M
                limits:
                  cpu: 200m
                  memory: 400M
              volumeMounts:
                - name: config-volume
                  mountPath: /etc/alertmanager
                - name: templates-volume
                  mountPath: /etc/alertmanager-templates
                - name: alertmanager
                  mountPath: /alertmanager
          volumes:
            - name: config-volume
              configMap:
                name: alertmanager-config
            - name: templates-volume
              configMap:
                name: alertmanager-templates
            - name: alertmanager
              persistentVolumeClaim:
                claimName: alertmanager-pvc

Here, a persistentVolumeClaim is also used so that Alert Manager data survives pod restarts. You can check how the PVC is created in the GitHub repo.
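For reference, a minimal PVC that satisfies the claimName above could look like the following; the storage size, access mode, and default storage class are assumptions here, so check the repo for the actual manifest:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: alertmanager-pvc
      namespace: monitoring
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi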

Create the alert manager deployment using kubectl.

    kubectl create -f deployment.yaml

Create the Alert Manager Service Endpoint

We need to expose Alert Manager using a NodePort or LoadBalancer service only to access the web UI. Prometheus talks to Alert Manager over the internal service endpoint.

Create a Service.yaml file with the following contents.

    apiVersion: v1
    kind: Service
    metadata:
      name: alertmanager
      namespace: monitoring
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9093'
    spec:
      selector:
        app: alertmanager
      type: NodePort
      ports:
        - port: 9093
          targetPort: 9093
          nodePort: 31000

Create the service using kubectl.

    kubectl create -f Service.yaml

Now, you will be able to access the Alert Manager web UI on any node IP at port 31000.