
Rather than using Puppet etc.? Basically you've returned to before configuration management and that is somehow good?

How do you change network configuration, load balancers, or other external configuration in lockstep with your application? Scripting by hand?

And deployments are just a small part of configuration management. How do you find out which of your applications use libz 1.2.3 (to measure CVE applicability, for example)? By looking in each and every one of them?
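Without configuration management that question means auditing every host; with the kind of per-host fact cache that Puppet or Ansible maintain, it's a one-liner. A minimal sketch, where the `facts/` directory and its contents are invented stand-ins for what a CM tool would collect:

```shell
# Hypothetical fact cache: one package manifest per host, as a CM tool
# would gather it (paths, hostnames, and format are all assumptions).
mkdir -p facts
printf 'libz 1.2.3\nnginx 1.4.6\n'    > facts/web1
printf 'libz 1.2.8\nredis 2.8.4\n'    > facts/web2
printf 'libz 1.2.3\npostgres 9.3\n'   > facts/db1

# Which hosts carry the vulnerable libz? (prints facts/db1 and facts/web1)
grep -l '^libz 1.2.3$' facts/*
```

The point isn't the grep; it's that the data already exists in one place instead of being scattered across N machines.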

How do you find out which applications run a cron job? And which applications connect to backend x without going through the load balancer?

Which applications have a client certificate about to expire? How do you guarantee two applications have the same shared secret, and change it in lockstep?
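The per-certificate check itself is easy; the value of CM is having every cert path in one inventory to run it against. For a single file (the cert here is a throwaway generated on the spot, purely for illustration):

```shell
# Generate a throwaway client cert valid for 365 days (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-client' \
        -keyout client.key -out client.pem -days 365 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now;
# here: will it survive the next 30 days (2592000 s)?
if openssl x509 -checkend 2592000 -noout -in client.pem >/dev/null; then
    echo "client.pem: ok"
else
    echo "client.pem: expires within 30 days"
fi
```

Loop that over the cert paths your CM database knows about and the question answers itself.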

With configuration management, all of this is just a lookup away. And best of all, most of this information is authoritative.

When I come somewhere new, the use of these tools instead of a directory of miscellaneous scripts is like night and day. It literally turns a two-day job into a ten-minute one when the environments are complex.

Docker is great for a lot of things. It enables you to do daring things to your environment, even if it's so complex you don't fully understand it all, because you can just shuffle containers around. But I would never use it instead of configuration management.



> Basically you've returned to before configuration management and that is somehow good?

I would argue I accomplish most configuration management via the Dockerfile. And as of Docker 1.7, I can even use a different filesystem like ZFS if I want to.
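For a single service the Dockerfile really does carry a lot of what CM used to: the packages, the files, the process to run. A minimal sketch of that idea (the base image, package, and paths are made up, and the build line is commented out since it needs a Docker daemon):

```shell
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
# Declare the packages the service needs, instead of converging them later
RUN apt-get update && apt-get install -y nginx
# Ship the application and its config as part of the image itself
COPY app/ /srv/app/
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
EOF

# docker build -t myapp .   # then run under whichever storage driver you like
```

What this doesn't give you is the cross-host inventory the parent comment is describing; it's config management for one artifact, not for the fleet.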

I agree with you that I've probably missed many important features in my architecture. However, it's simple and works for what I've got. It sounds like in the future I'll need a combination of Puppet/Chef and Docker rather than relying purely on ECS. For now though, it's quick and easy. Thanks for the thoughts; they all make me think.


Re: '...Rather than do all this work with Puppet, Chef, or even Ansible, I can just declare what the systems ought to be and cluster them within code branches...'

It sounds like the poster is doing something similar to a 'gold system image', though with container technologies.

Configuration Management is great, but I'm of the view that you should start with a combination of gold images/system builds and then layer Cfg Mgmt on top of that.

The theory is that you'll have a known base and identical builds; otherwise you get subtle drift in your configurations, especially for things that are not explicitly under the control of Config Mgmt (shared library versions, packages).
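That kind of drift is easy to surface if you snapshot each host's package manifest: hosts built from the same gold image should hash identically. A sketch against a hypothetical `manifests/` directory (hostnames and package lists are invented):

```shell
# One package manifest per host; identical builds produce identical files
mkdir -p manifests
printf 'nginx 1.4.6\nlibz 1.2.8\n' > manifests/web1
printf 'nginx 1.4.6\nlibz 1.2.8\n' > manifests/web2
printf 'nginx 1.4.6\nlibz 1.2.3\n' > manifests/web3   # this one has drifted

# One distinct hash per distinct build; more than one means drift (prints 2)
md5sum manifests/* | awk '{print $1}' | sort -u | wc -l
```

With a gold-image baseline you expect that count to be exactly 1, which makes drift a monitorable number rather than a surprise.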

Of course, this is not always feasible but if I were to start from scratch, I would probably try to do it this way.


It should be a combination, really.

You want to have your "Gold system image" available, and it is most certainly what you should be deploying from; however, your configuration management - whether it's Chef, Puppet, whatever - should be able to take a base operating system and get it set up completely from scratch.

This solves the problem of ensuring that you can completely reproduce your application from scratch, but also removes the possibly horrendously slow scale-up time, since the "Gold image" has already done that work.

My current process is: Jenkins runs puppet/chef, verifies that the application is healthy after the run and everything is happy, calls AWS and images the machine, then iterates over all the instances in my load balancer 33% at a time and replaces them with the new image, resulting in zero downtime. Of course, another solution is to pull those instances out, apply the update, and then put them back in.
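The 33%-at-a-time replacement can be sketched as a simple batching loop. The instance IDs here are invented, and the actual AWS calls are left as comments since they depend on your ELB names and AMI:

```shell
# Hypothetical instance list behind the load balancer
instances=(i-0aaa i-0bbb i-0ccc i-0ddd i-0eee i-0fff)
batch=$(( (${#instances[@]} + 2) / 3 ))   # ceil(n/3), i.e. ~33% per round

for (( i = 0; i < ${#instances[@]}; i += batch )); do
    group=("${instances[@]:i:batch}")
    echo "replacing batch: ${group[*]}"
    # aws elb deregister-instances-from-load-balancer ...   (pull batch out)
    # ...launch replacements from the new AMI, wait for health checks...
    # aws elb register-instances-with-load-balancer ...     (put batch back)
done
```

Keeping two thirds of the fleet registered at all times is what makes the replacement zero-downtime.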

And I'm sure someone else will have their $0.02 on their own process, which actually I'd love to hear :-)


Here's mine, with the preface that we're still iterating our deployment procedure as things are quite early.

I have an Ansible repo with an init task, which configures new boxen (makes sure the proper files, dependencies, etc. exist). Then to deploy, I have another task that ships a Dockerfile to the target boxen group (dev, staging, or prod) and has them build the new image, then restart. This happens more or less in lockstep across the whole group, and scaling up is relatively easy: just provision more boxen from AWS and add the IPs to the Ansible inventory file. Config is loaded from a secrets server; each deploy uses a unique lease token that's good for five minutes and exactly one use.
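The dev/staging/prod split described above maps directly onto a plain INI-style Ansible inventory; the IPs and playbook name here are invented:

```shell
cat > inventory <<'EOF'
[dev]
10.0.1.10

[staging]
10.0.2.10
10.0.2.11

[prod]
10.0.3.10
10.0.3.11
10.0.3.12
EOF

# Deploy to one group at a time; scaling up = appending IPs to a group
# ansible-playbook -i inventory deploy.yml --limit staging
```

Targeting with `--limit` is what keeps a deploy scoped to one boxen group while the inventory stays a single file.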

I'd love to hear how to improve this process, since I'm dev before ops. My next TODO is to move Docker image building locally and deploy the resulting tarball instead (though that complicates the interaction with the secrets server).
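Building locally and shipping the tarball could look like the dry-run plan below; `run` just echoes each step (swap its body for `"$@"` to actually execute), and the image tag and host are made up:

```shell
run() { echo "+ $*"; }   # dry-run wrapper; change body to "$@" to execute

run docker build -t myapp:release .
run docker save myapp:release -o myapp-release.tar
run scp myapp-release.tar deploy@web1:/tmp/
run ssh deploy@web1 'docker load -i /tmp/myapp-release.tar && docker restart myapp'
```

The upside is that target boxen no longer need build context or secrets at build time; the open question from the comment above - how the loaded image then gets its runtime secrets lease - still has to be solved on the run side.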



