We’re in the process of building a feature for Rancher that makes use of the Docker event stream. The stream is a useful feature of the Docker API that allows us to augment and enhance the Docker experience without wrapping or obfuscating Docker itself. Michael Crosby (@crosbymichael) gives a good overview of the Docker Events API here. If you’re looking for an introduction to Docker events, I recommend starting there.

The code I’m working on in Rancher lives here: https://github.com/rancherio/host-api/tree/master/events. It consists of a framework for receiving and routing events, and handlers that do the interesting work based on specific events. Right now, we listen for start events and inject an IP into the container using nsenter. This is part of our multi-host networking solution, which allows users to have VPC-style networking across hosts and even across cloud providers.

The event listening and routing framework is simple and straightforward, thanks to the power of golang channels and goroutines. To start, we have a single goroutine continuously listen for events:

func (e *EventRouter) routeEvents() {
    for {
        event := <-e.listener
        timer := time.NewTimer(e.workerTimeout)
        gotWorker := false
        for !gotWorker {
            select {
            case w := <-e.workers:
                go w.doWork(event, e)
                gotWorker = true
            case <-timer.C:
                log.Infof("Timed out waiting for worker. Re-initializing wait.")
                // Re-arm the timer so we keep logging while we wait.
                timer.Reset(e.workerTimeout)
            }
        }
        timer.Stop()
    }
}

When an event is received, the router pulls a worker off of a worker channel and calls its doWork method in a separate goroutine. Both the event and worker channels block. The blocking worker channel acts as a pool, so we don’t spin up an unbounded number of worker goroutines at the same time. We also added a timeout on how long we’ll wait for a worker. This is solely to let us log that we’ve been waiting a long time. When the timeout is reached, we log that we timed out and go right back to waiting.

When the worker’s doWork method is called, the worker looks up the appropriate handler for the event and calls it. Since this is being done in a separate goroutine, any I/O the handler performs will be handled efficiently without blocking other workers:

func (w *worker) doWork(event *docker.APIEvents, e *EventRouter) {
    defer func() { e.workers <- w }()
    if handler, ok := e.handlers[event.Status]; ok {
        log.Infof("Processing event: %#v", event)
        if err := handler.Handle(event); err != nil {
            log.Errorf("Error processing event %#v: %v", event, err)
        }
    }
}
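To see the whole pool pattern in motion, here is a self-contained sketch that mirrors routeEvents and doWork. A stand-in Event type replaces *docker.APIEvents, the handler dispatch is reduced to a channel send, and the field names on EventRouter are assumptions for the sketch, not the actual Rancher struct:

```go
package main

import (
	"fmt"
	"time"
)

// Event stands in for *docker.APIEvents in this sketch.
type Event struct{ Status string }

type worker struct{ id int }

// EventRouter mirrors only the fields the snippets above rely on.
type EventRouter struct {
	listener      chan Event
	workers       chan *worker
	workerTimeout time.Duration
	handled       chan Event // stands in for real handler work
}

func (e *EventRouter) routeEvents() {
	for {
		event := <-e.listener
		timer := time.NewTimer(e.workerTimeout)
		gotWorker := false
		for !gotWorker {
			select {
			case w := <-e.workers:
				go w.doWork(event, e)
				gotWorker = true
			case <-timer.C:
				fmt.Println("timed out waiting for worker; waiting again")
				timer.Reset(e.workerTimeout)
			}
		}
		timer.Stop()
	}
}

func (w *worker) doWork(event Event, e *EventRouter) {
	// Return ourselves to the pool when done, as in the real worker.
	defer func() { e.workers <- w }()
	e.handled <- event // a real worker would dispatch to a handler here
}

// run pushes n events through a pool of poolSize workers and returns
// how many were handled.
func run(n, poolSize int) int {
	e := &EventRouter{
		listener:      make(chan Event),
		workers:       make(chan *worker, poolSize),
		workerTimeout: time.Second,
		handled:       make(chan Event, n),
	}
	// Pre-fill the pool: the buffered channel caps concurrency.
	for i := 0; i < poolSize; i++ {
		e.workers <- &worker{id: i}
	}
	go e.routeEvents()
	for i := 0; i < n; i++ {
		e.listener <- Event{Status: "start"}
	}
	count := 0
	for i := 0; i < n; i++ {
		<-e.handled
		count++
	}
	return count
}

func main() {
	fmt.Println("processed", run(5, 3), "events")
}
```

Pre-filling a buffered channel with workers is what makes the pool self-limiting: a send on e.workers can never block for long, and a receive blocks exactly when all workers are busy.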

When a worker is done, it sends itself back to the worker channel, thus putting itself back into the worker pool. All of our communication with the Docker API is going through fsouza’s go-dockerclient. It’s an excellent client that has met all our needs thus far. We are even using it for listening to the event stream. It would have been easy enough to write our own mechanism for listening to events, but I was quite happy with this one after reviewing its source code. It’s efficient, clean, and robust. Setup is easy too. You just have to pass a channel into the client’s AddEventListener method:

func (e *EventRouter) Start() error {
    log.Info("Starting event router.")
    go e.routeEvents()
    if err := e.dockerClient.AddEventListener(e.listener); err != nil {
        return err
    }
    return nil
}
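To make the handler side concrete, here is a self-contained sketch of the dispatch performed in doWork. The Event type again stands in for *docker.APIEvents, and the Handler interface and startHandler type are hypothetical, inferred from the handler.Handle(event) call above rather than taken from the Rancher source:

```go
package main

import (
	"errors"
	"fmt"
)

// Event stands in for *docker.APIEvents; only Status matters here.
type Event struct{ Status string }

// Handler is the shape implied by handler.Handle(event) in doWork.
type Handler interface {
	Handle(event *Event) error
}

// startHandler is a hypothetical handler for container start events.
type startHandler struct{}

func (h *startHandler) Handle(event *Event) error {
	if event.Status != "start" {
		return errors.New("startHandler received a non-start event")
	}
	fmt.Printf("handling start event: %#v\n", event)
	return nil
}

func main() {
	// Handlers are looked up by event status, as in doWork.
	handlers := map[string]Handler{"start": &startHandler{}}
	event := &Event{Status: "start"}
	if h, ok := handlers[event.Status]; ok {
		if err := h.Handle(event); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```

Keying the handler map by event status keeps the router generic: adding support for a new event type is just another entry in the map, with no changes to the routing or worker code.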

This concludes the overview of Rancher’s framework for responding to Docker events. If you’re interested in the actual IP injection handler, you can check out the source here: https://github.com/rancherio/host-api/blob/master/events/start_handler.go. More handlers are in the works and will be hitting master in the near future. Ultimately, we want to use this framework to allow Rancher to manage containers that have been started outside of Rancher, perhaps by other frameworks or applications like Kubernetes or Docker Compose. If you’d like more information on getting started with Rancher or RancherOS, please feel free to request a demo, and we’ll answer all of your questions.