I've been running a file and media server at home for many years to share stuff with family and friends. As the media collection has grown and the configuration has gotten more complex, the thought of having to rebuild from a catastrophic failure is daunting.

Like any sensible person, I planned for some form of redundancy. After looking at ZFS, btrfs and mdadm, I opted for mdadm. Not for any particular reason, other than having the ability to add disks and grow the array. ZFS raidz + zpool seemed like too much work and I'm lazy. I'd rather have a single layer of storage array that I can add disks to without having to pool devices.

There's obviously an off-site backup (yay CrashPlan), but that's there for peace of mind for when shit really hits the fan. Day to day, I would like to avoid dealing with recovering terabytes from a cloud service. It's much quicker to rebuild a RAID array than to wait for everything to download from CrashPlan.

Initially, I simply set up smartmontools to email me daily with the health of my disks. This didn't prove very effective, as the emails got lost amongst countless ads for enlarging one's genitals. At one stage my RAID5 array was in limp mode for a week, until I noticed that my pretty Grafana graph of HDD temperatures was missing a drive. Looking further back, I noticed that one of the drive temperatures had spiked suddenly before the drive disappeared from the graph. A quick search for the email did indeed turn up a drive failure notice.

I wanted to narrow down which drive had failed and see what the integrity of the array was like. Luckily, mdadm comes with a nice report: simply running mdadm --detail /dev/md0 showed the failed drive and other details of the array's health.

What I really wanted was a way of knowing the health of a given array at a glance, rather than an email report of frequently less-than-critical issues that would often get buried amongst less important stuff in my inbox.


As I'm already using Pushover to send notifications from many of the applications I use for media management (it's natively supported by Sonarr, Radarr, SABnzbd and many more), I figured I'd stick with it, as it has proven very reliable at delivering messages to my phone and other destinations. Best part, it allows you to set a priority, so I don't get interrupted when there's nothing that warrants immediate attention.

As a starting point, I'm only alerting myself on the number of failed drives. This can easily be obtained by running mdadm --detail /dev/md0 | awk '/Failed Devices : / {print $4;}'. The priority is set based on the returned value: anything >0 is undesirable and gets bumped to "High" priority, which tells the Pushover client app to ignore your phone's silent / do-not-disturb setting and send an audible notification.
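If you'd rather not shell out to awk, the same field can be pulled out in Python. A minimal sketch; the sample --detail output below is abridged and illustrative, not real array state:

```python
import re

# Abridged, illustrative sample of `mdadm --detail /dev/md0` output
SAMPLE = """\
/dev/md0:
     Raid Level : raid5
  Total Devices : 4
 Failed Devices : 1
  Spare Devices : 0
"""

def count_failed(detail):
    """Pull the 'Failed Devices' count out of mdadm --detail output."""
    match = re.search(r'Failed Devices\s*:\s*(\d+)', detail)
    return int(match.group(1)) if match else 0

print(count_failed(SAMPLE))
```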

The health check code itself is simple, as can be seen below. Feel free to use / modify / contribute. It can also be found on GitHub, as I'm planning on adding features. mdadm --detail returns many more useful stats, so there's room to grow, but it does the trick for now.

#!/usr/bin/env python
import argparse
import logging
import subprocess
import http.client
import urllib.parse

LOGGER = logging.getLogger('logger')

PO_MSG_ENDPOINT = "/1/messages.json"

def mdadm_check(args):
    for array in args.arrays:
        LOGGER.info('Checking array %s', array)
        cmd = "/sbin/mdadm --detail " + array + " | awk '/Failed Devices : / {print $4;}'"
        check = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True,
                                 universal_newlines=True)
        out, _ = check.communicate()
        failed_drives = int(out)
        LOGGER.info('Found %s failed drives, sending Pushover msg', failed_drives)

        if failed_drives != 0:
            priority = 1
            message = "CRITICAL: There are {} failed drives in {}".format(failed_drives, array)
        else:
            priority = -1
            message = "INFO: There are {} failed drives in {}".format(failed_drives, array)
        post_to_pushover(args.token, args.key, str(priority), message)

def post_to_pushover(token, key, priority, msg):
    try:
        LOGGER.info('Opening HTTPS connection to api.pushover.net...')
        po_api = http.client.HTTPSConnection("api.pushover.net:443")
        po_api.request("POST", PO_MSG_ENDPOINT,
                       urllib.parse.urlencode({
                           "token": token,
                           "user": key,
                           "priority": priority,
                           "message": msg,
                       }), {"Content-type": "application/x-www-form-urlencoded"})
        response = po_api.getresponse()
        LOGGER.info("%s: %s", response.status, response.reason)
    except Exception as ex:
        LOGGER.error('Could not connect to Pushover: %s', ex)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    PARSER = argparse.ArgumentParser(description='Simple software RAID health check tool using mdadm and Pushover.')
    PARSER.add_argument('-a', '--array', dest='arrays', action='append', help='RAID array i.e /dev/md0', required=True)
    PARSER.add_argument('-t', '--token', dest='token', help='Pushover App Token', required=True)
    PARSER.add_argument('-k', '--key', dest='key', help='Pushover User Key', required=True)
    ARGS = PARSER.parse_args()
    mdadm_check(ARGS)

A while after joining the Logentries team and becoming familiar with the LEQL query language, I figured that recording the state changes of HomeAssistant entities (switches, sensors, etc.) to my Logentries account (a free option is available) would be a handy way of analysing the data, by running calculations against the data points - who doesn't like pretty graphs!

The plugin itself is a very simple Python script, forwarding any state changes to a target log in my account:

import json
import logging
import requests

import voluptuous as vol

import homeassistant.helpers.config_validation as cv
from homeassistant.const import (CONF_TOKEN, EVENT_STATE_CHANGED)
from homeassistant.helpers import state as state_helper

_LOGGER = logging.getLogger(__name__)

DOMAIN = 'logentries'

DEFAULT_HOST = 'https://webhook.logentries.com/noformat/logs/'

CONFIG_SCHEMA = vol.Schema({
    DOMAIN: vol.Schema({
        vol.Required(CONF_TOKEN): cv.string,
    }),
}, extra=vol.ALLOW_EXTRA)

def setup(hass, config):
    """Set up the Logentries component."""
    conf = config[DOMAIN]
    token = conf.get(CONF_TOKEN)
    le_wh = '{}{}'.format(DEFAULT_HOST, token)

    def logentries_event_listener(event):
        """Listen for new messages on the bus and send them to Logentries."""
        state = event.data.get('new_state')
        if state is None:
            return
        try:
            _state = state_helper.state_as_number(state)
        except ValueError:
            _state = state.state
        json_body = [
            {
                'domain': state.domain,
                'entity_id': state.object_id,
                'attributes': dict(state.attributes),
                'time': str(event.time_fired),
                'value': _state,
            }
        ]
        try:
            payload = {
                "host": le_wh,
                "event": json_body
            }
            requests.post(le_wh, data=json.dumps(payload), timeout=10)
        except requests.exceptions.RequestException as error:
            _LOGGER.exception("Error sending to Logentries: %s", error)

    hass.bus.listen(EVENT_STATE_CHANGED, logentries_event_listener)

    return True

Best part is that if you'd like to use it, it's already part of HomeAssistant. All you have to do is enable it in your HomeAssistant configuration:

# Example configuration.yaml entry
logentries:
  token: YOUR_LOGENTRIES_TOKEN
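For reference, each state change ends up as a small JSON document posted to the webhook. A sketch of the payload shape the listener above builds; the entity and values here are made up for illustration:

```python
import json

# Illustrative payload shape built by the Logentries listener (values made up)
json_body = [{
    'domain': 'switch',
    'entity_id': 'relay_2_1',
    'attributes': {'friendly_name': 'Kennel Heating'},
    'time': '2018-11-02 08:00:00+00:00',
    'value': 1,
}]
payload = {
    'host': 'https://webhook.logentries.com/noformat/logs/YOUR_TOKEN',
    'event': json_body,
}
print(json.dumps(payload, indent=2))
```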

The idea

The build itself took place when the weather started getting colder last year. I had been using Home Assistant for a while to bridge the gap between incompatible domestic IoT devices, and figured this would be a great use case. We live in Ireland, so the winters are relatively mild, but I still wanted to make sure our four-legged family member was nice and comfy while myself and the missus were in work all day. Using Home Assistant gave me an easy way to implement automation based on the darksky.net "real feel" attribute and provide a nice UI for manual override remotely.

The heat pad

I picked this heated outdoor pet heat pad over the more traditional space heater types, as direct contact is more efficient, and at only 30W power draw it could easily be left on all day without costing a fortune or setting the kennel on fire (nobody wants that). The doggie has been happy so far :)
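To put that 30W claim in perspective, the running cost is easy to sanity-check; a quick back-of-the-envelope calculation, where the electricity unit price is an illustrative assumption rather than my actual tariff:

```python
# Rough running-cost estimate for the 30 W heat pad left on all day.
# The unit price below is an assumption for illustration, not my real tariff.
power_w = 30
hours_per_day = 24
price_per_kwh = 0.20  # EUR per kWh, illustrative

kwh_per_day = power_w * hours_per_day / 1000  # 0.72 kWh
cost_per_day = kwh_per_day * price_per_kwh

print(kwh_per_day, round(cost_per_day, 3))
```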

The build

The first step was removing the plug as the on/off function will be handled by an EM relay and Arduino:

[Photos: heat pad, drilling]

Next, I had to wire up the relay, microcontroller and power. The easiest approach was getting an IP66 rated junction box with enough space to house all the components.

To power the Arduino Pro Mini, I salvaged an old Blackberry charger and soldered it straight to the Arduino power header:


Once the Arduino had a power source, the relay, voltage step-down (to bring the 5V output required by the relay down to an acceptable 3.3V for the radio) and radio could be wired in and connected to the leftover weather-shielded and chew-proof(!!!) cable chopped off at the start. The radio module I picked was the NRF24L01+PA+LNA with an external antenna, to facilitate communication with the gateway module sitting behind a concrete wall (more on the wiring, gateway and software side below).

[Photos: wiring, junction box spaghetti]

Once the Arduino is flashed with simple relay code (made possible by the MySensors library) and everything is tested, the kennel can be put back together so the doggie won't know any different:

[Photos: insulation, completed, plugged in]

The wiring

The project uses the MySensors library at its core, and their wiring diagrams are quite easy to follow:

The kennel module code

The following can be flashed to the Arduino in order to make the relay controllable by MySensors gateway:

// Enable debug prints to serial monitor
#define MY_DEBUG
#define MY_RADIO_NRF24
#include <MySensors.h>
#define RELAY_1  3  // Arduino Digital I/O pin number for first relay (second on pin+1 etc)
#define NUMBER_OF_RELAYS 1 // Total number of attached relays
// You may need to flip the below values, depending on hi/lo config and wiring
#define RELAY_ON 1  // GPIO value to write to turn on attached relay
#define RELAY_OFF 0 // GPIO value to write to turn off attached relay

void before()
{
    for (int sensor=1, pin=RELAY_1; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
        // Set relay pins to output mode
        pinMode(pin, OUTPUT);
        // Restore the last known relay state from EEPROM
        digitalWrite(pin, loadState(sensor)?RELAY_ON:RELAY_OFF);
    }
}

void setup()
{
}

void presentation()
{
    // Send the sketch info to the gateway and controller
    sendSketchInfo("Relay", "1.0");
    // Register all relays as binary switches (child devices)
    for (int sensor=1, pin=RELAY_1; sensor<=NUMBER_OF_RELAYS; sensor++, pin++) {
        present(sensor, S_BINARY);
    }
}

void loop()
{
}

void receive(const MyMessage &message)
{
    if (message.type==V_STATUS) {
        // Switch the relay and persist the new state to EEPROM
        digitalWrite(message.sensor-1+RELAY_1, message.getBool()?RELAY_ON:RELAY_OFF);
        saveState(message.sensor, message.getBool());
        Serial.print("Incoming change for sensor:");
        Serial.print(message.sensor);
        Serial.print(", New status: ");
        Serial.println(message.getBool());
    }
}

The gateway

At the time, I chose to use the ESP8266 development board as the gateway. The MySensors nodes operate using their own network protocol on the 2.4GHz band, and in order to make them talk to HomeAssistant easily, I wanted a WiFi gateway exposing the sensors over TCP/IP. There are other options, including a serial gateway, but these did not suit my needs at the time. The Raspberry Pi gateway is another appealing option, as that's what I have HomeAssistant running on anyway for the time being, as is MQTT, which essentially sits on top of the TCP/IP version.

Building the gateway is not a terribly complex task: it involves connecting a radio module (linked in the wiring section above) and flashing the ESP8266 with the sample gateway code found on the MySensors website. The only modifications needed are your SSID and password.

Once the gateway starts up and you power-cycle the relay module, a simple telnet gatewayip gatewayport command should show you the communication between the relay and the ESP8266. You should observe rows of semicolon-separated values, standing for node-id ; child-sensor-id ; command ; ack ; type ; payload \n
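Those rows are easy to pick apart programmatically too. A quick Python sketch, using the field names from the protocol description above (the sample row is illustrative):

```python
# Split a MySensors serial-protocol row into its named fields:
# node-id ; child-sensor-id ; command ; ack ; type ; payload
FIELDS = ('node_id', 'child_sensor_id', 'command', 'ack', 'type', 'payload')

def parse_row(row):
    """Map one semicolon-separated protocol row to a dict of named fields."""
    return dict(zip(FIELDS, row.strip().split(';')))

print(parse_row('2;1;1;0;2;1'))
```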

The controller

The fun part is Home Assistant. If you haven't heard of it, go check it out. It's much like openHAB, only written in Python, and arguably more powerful. Once everything is configured, we have a remotely controlled heated kennel:

Pretty much everything in Home Assistant is configured by editing $HASSDIR/configuration.yaml. What I was looking to achieve was a way of automating the heating pad so that our dog was always nice and cosy while we were in work, rain or shine. To achieve that, the weather forecast component using the DarkSky API can be used:

sensor:
  - platform: darksky
    api_key: YOUR_API_KEY
    monitored_conditions:
      - summary
      - icon
      - temperature
      - apparent_temperature

In the above sensor entity, we're specifically interested in the sensor.dark_sky_apparent_temperature value. This is what many weather forecast services refer to as "real feel", i.e. the combination of temperature, cloud cover and wind chill. Once we've configured the component to our liking, the kennel heating can be added to configuration.yaml. As we used the MySensors library for both the gateway and the kennel module, adding it is simple:

mysensors:
  gateways:
    - device: 'gateway.ip.address'
      persistence_file: 'path/to/mysensors3.json'
      tcp_port: 5003
  optimistic: false
  persistence: true
  retain: true
  version: 2.0

There's really nothing more to adding the heating pad relay to HomeAssistant, as it's automatically discovered on power-cycle (aka presentation). Next, we can grab the appropriate entity IDs from the Home Assistant web UI, and proceed to edit configuration.yaml to get a basic automation rule going and an item to show in the frontend:

automation:
  - alias: "turn kennel on"
    trigger:
      platform: numeric_state
      entity_id: sensor.dark_sky_apparent_temperature
      below: 12
    action:
      service: switch.turn_on
      entity_id: switch.relay_2_1

  - alias: "turn kennel off"
    trigger:
      platform: numeric_state
      entity_id: sensor.dark_sky_apparent_temperature
      above: 12
    action:
      service: switch.turn_off
      entity_id: switch.relay_2_1

homeassistant:
  customize:
    switch.relay_2_1:
      icon: mdi:radiator
      friendly_name: Kennel Heating
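One thing to keep in mind with a single 12-degree threshold is flapping when the apparent temperature hovers right around it. numeric_state triggers only fire when the value crosses the boundary, which helps, but if you wanted explicit hysteresis the decision logic would look something like this (a sketch; the 11/13 band is illustrative, not what my config uses):

```python
def kennel_heating(apparent_temp, currently_on, on_below=11.0, off_above=13.0):
    """Decide relay state with a hysteresis band to avoid rapid toggling."""
    if apparent_temp < on_below:
        return True
    if apparent_temp > off_above:
        return False
    return currently_on  # inside the band: keep the current state

print(kennel_heating(10.0, currently_on=False))
```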

I would highly advise reading the Home Assistant component documentation and giving it a try. It's fairly easy to get up and running, and the project is backed by an amazing community. If you have any questions or comments, feel free to comment below :)