Recently, I moved to New York City. As a new resident, I decided to take part in the NYC DeveloperWeek hackathon, where our team won the NetApp challenge. In this post, I’ll walk through the product we put together, and share how we built a CI/CD pipeline for quick, iterative product development under tight constraints.
The Problem: Have you ever lived or worked in a building where it’s a pain to configure the buzzer to forward to multiple roommates or coworkers? Imagine that a friend arrives and buzzes your number, which is set to forward to your roommate who is visiting South Africa and has no cell service. If you’re running late, your friend is just stuck outside.
The Product: We built a PBX-style application that integrates with Zang, forwards the buzzer to multiple numbers, and even lets your friend enter a PIN on their phone to gain entry.
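To make the call flow concrete, here is a minimal sketch of the forwarding and PIN logic. The XML it emits mirrors the TwiML-style markup that CPaaS platforms like Zang use, but the element names, the file name, and the webhook contract here are illustrative assumptions, not Zang’s documented API.

```python
# Hypothetical sketch of the buzzer-forwarding logic.
# ROOMMATE_NUMBERS, DOOR_PIN, and the XML verbs are assumptions for
# illustration; a real integration would follow Zang's InboundXML docs.

ROOMMATE_NUMBERS = ["+12125551234", "+12125555678"]  # example numbers
DOOR_PIN = "4321"  # example entry code

def buzzer_response(numbers):
    """Build an XML reply that rings every roommate at once."""
    dials = "".join(f"<Number>{n}</Number>" for n in numbers)
    return f"<Response><Dial>{dials}</Dial></Response>"

def pin_response(digits):
    """If the visitor keys in the right PIN, play the door-open tone."""
    if digits == DOOR_PIN:
        return "<Response><Play>door-open-tone.mp3</Play></Response>"
    return "<Response><Say>Sorry, wrong code.</Say></Response>"
```

When the buzzer rings, the platform hits a webhook that returns `buzzer_response(...)`, so everyone’s phone rings at once; if nobody picks up, a second prompt gathers digits and `pin_response(...)` decides whether to open the door.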
The constraint: For the hackathon, we had to use hardware that had already been set up and allocated.
Building our Hackathon CI/CD Pipeline
For the competition, we knew from the start that we wanted our builds to be scalable, and that each deployment would snapshot our entire data environment pre-deployment using NetApp ONTAP (a sponsor of the hackathon, and a really nice group of folks); if any deployment had an issue, we could quickly and simply roll back.
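The snapshot-then-deploy flow can be sketched as a small wrapper. This is a minimal sketch of the control flow only: the `snapshot`, `deploy`, and `restore` callables are stand-ins for whatever your pipeline actually runs (in our case, ONTAP snapshot commands and our deploy scripts), not a real ONTAP client.

```python
# Minimal sketch of snapshot-before-deploy with automatic rollback.
# snapshot/deploy/restore are injected callables (assumptions), so the
# flow is testable without any real storage or deploy tooling.

def deploy_with_snapshot(snapshot, deploy, restore):
    """Snapshot the data environment, deploy, and roll back on failure."""
    snap_id = snapshot()          # capture state before touching anything
    try:
        deploy()
        return True               # deployment succeeded; keep new state
    except Exception:
        restore(snap_id)          # any failure: roll back to the snapshot
        return False
```

Because the snapshot is taken before every deploy, a failed build never leaves the data environment in a half-migrated state; rollback is just a restore to the pre-deploy snapshot.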
Fortunately, I am a firm believer in building stateless, Docker-container-ready, rebuildable architecture; I can’t go back to the world that existed before: no real CI/CD, lots of SSH-ing into other systems and SCP-ing data from one place to another. Thankfully, today we have tools and strategies that can help us.
For anyone working in IT, the excitement around containers has been hard to miss. According to RightScale, enterprise deployments of Docker more than doubled in 2016, with 29% of organizations using the software versus just 14% in 2015. Even more impressive, fully 67% of organizations surveyed are either using Docker or plan to adopt it. While many of these efforts are early stage, separate research shows that over two-thirds of organizations who try Docker report that it meets or exceeds expectations, and the average Docker deployment quintuples in size in just nine months. Clearly, Docker is here to stay.
The cloud vs. on-premises debate is an old one. It goes back to the days when the cloud was new and people were trying to decide whether to keep workloads in on-premises datacenters or migrate to cloud hosts.
But the Docker revolution has introduced a new dimension to the debate. As more and more organizations adopt containers, they are now asking themselves whether the best place to host containers is on-premises or in the cloud.
As you might imagine, there’s no single answer that fits everyone. In this post, we’ll weigh the pros and cons of both cloud and on-premises container deployment and consider which factors can make one option or the other the right choice for your organization.
Docker containers make app development easier. But deploying them in production can be hard.
Software developers are typically focused on a single application, application stack, or workload that they need to run on a specific infrastructure. In production, however, a diverse set of applications runs on a variety of technologies (e.g., Java, LAMP, etc.), which need to be deployed on heterogeneous infrastructure running on-premises, in the cloud, or both. This gives rise to several challenges.
Docker has been a source of excitement and experimentation among developers since March 2013, when it was released into the world as an open source project. As the platform has become more stable and achieved increased acceptance from development teams, a conversation about when and how to move from experimentation to the introduction of containers into a continuous integration environment is inevitable.
What form that conversation takes will depend on the players involved and the risk to the organization. What follows are five important considerations which should be included in that discussion.
Define the Container Support Infrastructure
When you only have a developer or two experimenting with containers, the creation and storage of Docker images on local development workstations is to be expected, and the stakes aren’t high. When the decision is made to use containers in a production environment, however, important decisions need to be made surrounding the creation and storage of Docker images.
Before embarking on any kind of production deployment journey, ask and answer the following questions:
What do Docker containers have to do with Infrastructure as Code (IaC)?
In a word, everything.
Let me explain. When you compare monolithic applications to microservices, there are a number of trade-offs. On the one hand, moving from a monolithic model to a microservices model allows the processing to be separated into distinct units of work. This lets developers focus on a single function at a time, and facilitates testing and scalability. On the other hand, by dividing everything out into separate services, you have to manage the infrastructure for each service instead of just managing the infrastructure around a single deployable unit. Infrastructure as Code was born as a solution to this challenge.
Container technology has been around for some time, and it has been implemented in various forms and with varying degrees of success, starting with chroot in the early 1980s and taking the form of products such as Virtuozzo and Sysjail since then. It wasn’t until Docker burst onto the scene in 2013 that all the pieces came together for a revolution affecting how applications can be developed, tested, and deployed in a containerized model. Together with the practice of Infrastructure as Code, Docker containers represent one of the most profoundly disruptive and innovative changes to the process of how we develop and release software today.