The problem I see with using a mostly-baked AMI like this is what happens when your git repo is down while you're trying to bring up a new instance. Bringing up a new server with stale baked-in code is just asking for all sorts of subtle problems to spring up. Given your environment, that kind of partial failure seems like a much worse scenario than an instance not coming up at all.
We've experimented with a similar configuration at GoGuardian. We settled on:
- Gzipping the subset of code that needs to be deployed
- Uploading that tarball to S3, plus some infrastructure so that new instances pull the current version from S3 on boot
- Creating a fab script that SSHes into our instances and re-runs the on-boot script, pointing them at a new deploy version
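A minimal sketch of the deploy side of those steps, assuming hypothetical directory and bucket names (`./app`, `s3://example-deploys`) rather than the actual setup:

```shell
#!/bin/sh
# Sketch of the tar-to-S3 deploy steps above. The directory, bucket,
# and file names are illustrative assumptions, not the real config.
set -eu

APP_DIR="${APP_DIR:-./app}"                 # subset of code to deploy
BUCKET="${BUCKET:-s3://example-deploys}"    # hypothetical bucket
VERSION="${VERSION:-$(date +%Y%m%d%H%M%S)}"
ARCHIVE="deploy-${VERSION}.tar.gz"

mkdir -p "$APP_DIR"   # make sure this sketch has something to package

# 1. Gzip the subset of code that needs to be deployed.
tar -czf "$ARCHIVE" -C "$APP_DIR" .

# 2. Upload the tarball, plus a small pointer file naming the current
#    version; new instances read the pointer on boot and fetch that
#    tarball. Guarded behind DRY_RUN because the bucket is made up.
if [ "${DRY_RUN:-1}" != "1" ]; then
    aws s3 cp "$ARCHIVE" "$BUCKET/$ARCHIVE"
    printf '%s\n' "$ARCHIVE" > current.txt
    aws s3 cp current.txt "$BUCKET/current.txt"
fi

echo "built $ARCHIVE"
```

The on-boot script is then just the mirror image: read `current.txt` from the bucket, download the named tarball, and unpack it; the fab step re-runs that same script over SSH to roll instances forward.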
That sounds like a pretty reasonable deploy process to me: it's quick, and a much more straightforward way of getting instances going. I think their concerns about load spikes would be better handled by keeping an instance or two already spun up, so that extra load and/or crashed servers can be absorbed without relying on a process that has more failure conditions.