AWS does Container Deployment Automation

I talked about some adventures in the land of Docker a few weeks ago, in Docker is an Immature Technology on AWS ECS, but it looks like that’s all wrong now with the announcement of AWS Fargate, which promises an end to managing the infrastructure underlying your container deployments.

More to come on this, but it feels like this will push containers even further into the mainstream, making full-stack deployment similar to serverless deployment, with the underlying hardware abstracted away.

As application developers we’re concerned with delivering applications. We’re really not too fussed about being experts in hardware, networking, scaling out, and the myriad other details that will hopefully be simplified by Fargate.

Early days, but I’m excited about this further step towards commoditising our infrastructure.

The state of front end development

Have you read Every JavaScript framework tutorial written more than 5 minutes ago? You really should if you have anything to do with today’s JavaScript development. We’ve come some way from the ‘blink’ and ‘marquee’ HTML tags. And DHTML! Remember that? HTML that’s dynamic. Whatever next.

So the world has landed on React, VueJS and Angular 2 for front end development, along with an ever-changing bestiary of tools that pretty much require a Docker image to run inside. Welcome to the world’s biggest build tool: an entire operating system stack dedicated to dependency management, transpiling, munging, concatenating and otherwise mutating your source code into… well, into more source code actually.

The best way to keep up with what’s happening in this world is to build stuff. So I’ve started to throw together a tool to help browse the AWS product and price catalogues. (It’s that time of year to make reservations for AWS kit!) I proudly present the first pre-release version of Cloud Price Explorer.

It’s incomplete now but I need it to work, so it’ll get better.

And apart from it being useful (well, I mean it will be useful one day), I built it to learn a React stack. (Vue is nicer but there are fewer shiny third-party things to use in its ecosystem. Angular is, well, too enterprisey for this project!)

So this is React Boilerplate, which gives you a production-ready setup that runs happily in a simple Node Docker container, and includes all the cool stuff (redux, reselect, redux-saga, etc.) along with example implementations to steal from. Ahem. To learn from. It also has a complete test setup using Jest and Enzyme, and a webpack config that just works out of the box for development, test and production builds. Very nice and very simple to get working.

The only speedbump so far was the need to manually run npm install -g lodash cross-env in my Docker container for setup to work. See the GitHub issue for more discussion.

But it works on my machine!

Don’t let “it works on my machine” be a thing!

It’s convenient to just git pull a project, install some dependencies and get on with development. But even with runtime version managers like Ruby’s RVM and Python’s virtualenv, that isn’t reliable enough. You may be able to control the runtime environment, but what about all the binary dependencies?

And so technologies like Vagrant with VirtualBox, and now Docker, have transformed one-off snowflakey development environments into virtualised sandboxes that can be brought up and down as we hop from project to project, without dependency versioning tripping anyone up.

But then you have to consider the differences between development and production environments.

Docker allows container images to be (almost) identical between development and production, which is the ideal. But a Vagrant+VirtualBox workflow, which we use at my workplace due to limitations with Amazon’s ECS, is dangerous without a means of keeping development in sync with production. The solution is to base Vagrant and EC2 images on the same version of Linux, and then to provision both types of environment using the same definition, for example using Chef cookbooks.
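As a minimal sketch of that idea (the cookbook and recipe names are hypothetical, not from our actual setup), a Vagrantfile can run the same Chef cookbooks that the production EC2 images are provisioned from:

```ruby
# Vagrantfile (sketch) - provision the dev VM from the same cookbooks
# that production instances are built with. Names are illustrative.
Vagrant.configure("2") do |config|
  # Match the Linux version used for the production AMIs
  config.vm.box = "ubuntu/xenial64"

  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"     # the same cookbooks production uses
    chef.add_recipe "myapp::webserver"    # hypothetical application recipe
    chef.add_recipe "myapp::devtools"     # dev-only extras kept in their own recipe
  end
end
```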

Of course a development environment will probably contain ‘extra’ stuff, for example database servers that are externalised on the production platform (e.g. by using RDS), in which case keeping the versions consistent becomes an easily solved issue within the cookbooks. Again, Docker provides a more elegant solution with its support (especially via docker-compose) for multi-container deployments.

How can you share your Chef configuration between Vagrant and AWS EC2? It used to be difficult (or a hack), but the vagrant-aws plugin makes life easier. It’s not perfect, since the EC2 instances created by one developer end up being private to them and can’t be managed by other developers, but bringing up new instances instead of updating existing ones is a best practice anyway, so that’s not a real blocker.
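A rough sketch of what that looks like, assuming the vagrant-aws plugin is installed and treating the AMI, region and key names below as placeholders:

```ruby
# Vagrantfile (sketch) - the same machine definition can target EC2 via the
# vagrant-aws provider. AMI, region and key names are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"   # vagrant-aws uses a placeholder box

  config.vm.provider :aws do |aws, override|
    aws.region        = "eu-west-1"
    aws.ami           = "ami-xxxxxxxx"       # placeholder AMI id
    aws.instance_type = "t2.micro"
    aws.keypair_name  = "my-keypair"         # placeholder key pair
    # Credentials are best supplied via the environment, not hard-coded here

    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
  end

  # Provision with the same Chef cookbooks as the local VirtualBox VM
  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "myapp::webserver"       # hypothetical recipe
  end
end
```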

But the HashiCorp folks, who own Vagrant, are rolling out a really interesting and rather complete suite of tools that appear to wrap up all these concerns and much more. The part of the jigsaw that solves the problem of sharing development and production platform builds is the concern of their Packer product. The end-to-end solution also includes Terraform to build AWS infrastructures, as well as more tools to deal with secrets management, service orchestration and scheduling that I’m very interested to look at in the future.

Finally, to reiterate… don’t let ‘required’ tools or dependencies creep onto developers’ laptops outside of your virtualised and controlled development environments, because then you’re back to “it works on my machine”. If there are tools that are shared between projects, or that are difficult to install inside project environments, then make a sandbox (container or VM) specifically for the tool.

Don’t be satisfied with “OK let’s make sure everyone does brew install foo” because you’ll end up in a world of “it doesn’t work anywhere except my machine”.

The AWS “new features” email continues to grow longer and longer with the passing weeks! Here are some recent announcements that caught my eye.

Chime

Amazon noticed the success of Slack and the interest in Skype: online collaboration is the next battleground for integration technologies as these products are pressed into service as a human-oriented “Enterprise Service Bus”. Not to mention that the clunky old-school VoIP solutions are hopelessly outclassed by these ad-hoc tools that expose APIs for app developers to hook into.

Chime is clearly receiving some attention within AWS as it matures nicely; see Amazon Chime Now Allows You to Log Console Actions Using AWS CloudTrail and Amazon Chime Now Supports Quick Actions on iPhone, iPad, and Apple Watch.

CloudFront TLS Policies

Recent security breaches and the continuing exploitation of SSL weaknesses put the spotlight on the aging protocols that terminate our “secure” web infrastructures. AWS does a great job of keeping up to date with the latest and greatest SSL/TLS protocol suites at the Load Balancer layer. With the announcement that Amazon CloudFront now lets you select a security policy with minimum TLS v1.1, v1.2, and security ciphers for viewer connections, we have some welcome parity at the CloudFront layer.

Network Load Balancer on Elastic Beanstalk

And then there were three varieties of Elastic Load Balancer on the AWS platform! The original (and now pretty much deprecated) Classic Load Balancer has been joined by the Layer 7-focussed Application Load Balancer for flexible routing, and the Layer 4-focussed Network Load Balancer for high-performance network balancing.

Now all three can be deployed using Elastic Beanstalk as described in the recent announcement.

Care Needed When Navigating the Chef Ecosystem

Chef is a configuration management tool that allows you to put the build of a server under version control, as a text (Ruby) description of the steps needed to bring it to the correct state. It is one of several tools that allow you to automate server builds, which is essential for reliable deployment of solutions. Similar tools include Puppet and Ansible, but my most recent experience is specific to Chef on macOS and Linux, which is the basis for this discussion.

As is the norm these days, a Chef stack is built on a towering pile of open source dependencies that decay and expire with the changing seasons, leaving behind a trail of failed builds and much swearing.

Chef itself, with its dependency manager Berkshelf, is installed as a bundled ‘Chef DK’ that even includes the underlying Ruby runtime (such is the fragility of these tech stacks). So installing the tool itself is now fairly foolproof. But there’s trouble brewing if you plan to use the tool to actually do some work!

You drive Chef with ‘cookbooks’ that contain ‘recipes’ holding the instructions to deploy the components of a platform. Cookbooks are written in Ruby and can have dependencies between them, which you resolve using the Berkshelf tool, similar to Ruby’s native ‘bundler’ or Node’s ‘npm’ (that’ll be ‘yarn’ these days).
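For anyone who hasn’t met one, a recipe is just Ruby code describing the desired state of the machine. A rough illustration (the cookbook, recipe and template names are invented):

```ruby
# cookbooks/myapp/recipes/webserver.rb (sketch - names are illustrative)
# Install nginx, keep it running, and drop in a site configuration.
package 'nginx'

service 'nginx' do
  action [:enable, :start]
end

template '/etc/nginx/sites-available/myapp' do
  source 'myapp.conf.erb'   # template shipped inside the cookbook
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'
end
```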

This tends to be one source of pain: a cookbook depends on a third-party cookbook… which gets updated in such a way as to become incompatible. I’m not sure why this is a major problem, given that semantic versioning should be standard, but it is. Versioning is just a hard problem.
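One mitigation, sketched below with invented cookbook names and versions, is to pin third-party cookbooks with explicit version constraints in the Berksfile rather than floating on whatever is latest:

```ruby
# Berksfile (sketch - cookbook names and versions are illustrative)
source 'https://supermarket.chef.io'

# Pin third-party cookbooks so an upstream release can't silently break
# the next berks install / deployment.
cookbook 'nginx',      '~> 9.0'    # accept only 9.x updates
cookbook 'postgresql', '= 7.1.3'   # lock exactly if a cookbook is known to be fragile

# Internal cookbooks come from version control, pinned to a tag
cookbook 'myapp', git: 'https://example.com/cookbooks/myapp.git', tag: 'v1.4.0'
```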

The other important source of incompatibility is between cookbooks and Chef itself. Chef has been growing and changing for years now, and it seems quite common to come across cookbooks that only work with v11 of Chef and not v12, or vice versa.
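Cookbook authors can at least declare that expectation up front: recent Chef clients honour a chef_version constraint in the cookbook metadata (again, a sketch with invented values):

```ruby
# cookbooks/myapp/metadata.rb (sketch - values are illustrative)
name    'myapp'
version '1.4.0'

# Declare which Chef client versions this cookbook is known to work with,
# so an incompatible client fails fast instead of half-converging.
chef_version '>= 12.5', '< 13'

depends 'nginx', '~> 9.0'
```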

I suspect part of the problem is that deployments tend to fail only when you actively update cookbooks, which feels less pressing than keeping libraries updated for security reasons; cookbooks themselves rarely need updating on security grounds. And so a deployment that worked last year and hasn’t changed since then can suddenly break when a cookbook needs to be updated for some reason (perhaps to install a newer version of some package when the version is hard-coded into a recipe… more common than you might hope).
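The hard-coded-version trap looks something like this (the package and version strings are invented for illustration):

```ruby
# cookbooks/myapp/recipes/database.rb (sketch - package and version invented)
# The exact package version is baked into the recipe. The day this version
# disappears from the distro's repositories, the converge fails and someone
# gets to spend an afternoon finding out why.
package 'postgresql-9.6' do
  version '9.6.2-1.pgdg16.04+1'
  action :install
end

# Pulling the pin out into a node attribute at least keeps it in one
# visible, overridable place:
#   package 'postgresql-9.6' do
#     version node['myapp']['postgres_version']
#   end
```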

The take-away message is that a Chef deployment can easily decay unnoticed over time, so taking regular time out for maintenance is essential to avoid panic when an emergency deployment of a fix turns into a traumatic chef-cookbook-dependency-tracking-down-why-doesn’t-it-work-athon.

Not that that ever happens in real life.