I am bothered by some of the language in this post:
- we were aware of the potential security threat behind post-deploy hooks and were about to disable them [...] but...
- we were days away from replacing this server
- They were a short-term stopgap measure we had been planning to replace
To me, it sounds like the real problem could have been stated as "We were lax on security," but almost worse than that is the lack of accountability I sense from the company. Yeah, maybe it won't happen again, but it's hard to have confidence buying into a service like that.
They seemed to be blaming it on "bad timing" as if these things were ever excusable. These are also things that you either do or don't do. Your systems are either secure or they're not. "They were going to be secure tomorrow" does no one any good. It doesn't look like any of the parties involved learned much of anything from this episode.
[citation needed]. Security is never binary. No matter what security measures you take, there are always zero-day exploits, social engineering, physical access, heavily-researched-and-highly-targeted attack vectors, etc.
Security is the opposite of convenience and accessibility. The right thing to do is to analyze what you are trying to secure and ensure an appropriate level of security proportional to the sensitivity and business impact of the potentially-exposed system.
There's no such thing as "secure." It's a continuum and it's always a tradeoff. Would you spend $5000 to protect something that's worth $50? It sounds like this site was in beta mode, and they made an understandable decision to focus on building the product and growing a customer base in lieu of ensuring top-notch security. In retrospect it was the wrong decision, but you don't hear about the companies who follow this approach and don't get publicly hacked. If they spent all their time on security from the outset, they wouldn't have anything to protect.
I read him being very apologetic for their security shortcomings in all of the appropriate places, and only blaming delayed fixes on timing issues. He was very contrite and forthcoming about their security issues. Accountability was all over the article.
I disagree. Lucas names these children and attempts to personify them so blame can be shifted to the "bad guys" rather than his company.
Lucas's post does not say "We screwed up." He says "We got screwed by Elliot."
I'm saddened most because Lucas is not embarrassed to point out he was outwitted by children.
When I foul up at my job I don't send an email detailing how some nasty client did something. I summarize what went wrong, how it should have been prevented and what steps I will be taking to prevent it in the future.
I would never write an email:
James Smith, a really evil customer (who happened to be working while there was thunder and lightning like Dr Frankenstein!), decided to try system("rm -fr /"). I knew it was possible, but I didn't feel like fixing it. Also I didn't feel like securing any of our other systems, which explains those tweets, blog posts, DNS changes, and email compromises. I was lazy, but it's not my fault.
gg,
parfe
P.S. Credit cards probably didn't get compromised. Tim the intern was the one who implemented the payment system and he had his own passwords set.
(Note: I moved this comment, as I replied to CGamesPlay by mistake.)
Lucas's post says: "This was really naive and irresponsible of me." That doesn't sound like he's shifting blame to me.
You say: "I summarize what went wrong, how it should have been prevented and what steps I will be taking to prevent it in the future."
The article is essentially just that, with one exception; they didn't list steps they "will be taking" to prevent it, they listed steps they have already taken in the last 3 days.
As for credit cards:
"Credit cards – We have never stored credit cards on any PHP Fog server. There was never any possibility that credit cards could have been compromised by this attack."
I didn't even make it through the headline before being concerned... particularly the part saying "Why it Will Never Happen Again".
I mean, yes, by all means implement measures to avoid this sort of thing from happening in the future but "It Will Never Happen Again" is a very, very bold statement on security. The kind I associate with people who still don't really "get it".
If I was a customer of theirs, I wouldn't have really been (too) bothered about the initial intrusion. However, hearing them say "Why it will never happen again" would make me switch providers. In my mind being willing to say "it will never happen again" implies a basic misunderstanding of the security environment and is tantamount to a guarantee that it will, in fact, happen again - perhaps even regularly.
They sound incredibly lax on security, and the "we were days away from fixing it" could be complete bull. To Lucas, it probably sounds better to say they were close to fixing it than to admit they were unaware of these exploits.
I find the disclosure in the blog post great, but the conditions they had leading up to the hack very disappointing.
If they were aware of the exploits, they should have taken quicker action. They'll probably be focusing on security big time now... they have no other choice.
I felt they apologized rather well. It's difficult to apologize and explain what happened at the same time without sounding like you're making excuses or trying to skirt responsibility.