
I understand a certain level of displeasure at their lack of specificity while they mitigated the issue. But... in this case, the time to remediate doesn't really change your response to the threat. No matter what, you need to change all your keys, generate new private keys, etc.
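
Concretely, for anyone who had to re-key after Heartbleed, here's a minimal sketch of generating a fresh private key and CSR in Python with the third-party "cryptography" package. The hostname and filenames are placeholders, not anything AWS-specific:

    # Sketch: re-key after Heartbleed using the third-party "cryptography"
    # package (pip install cryptography). The hostname and filenames are
    # placeholders -- substitute your own before generating a real CSR.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a brand-new RSA key; the old one must be assumed leaked.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Write the new key out in PEM form (unencrypted here for brevity).
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.NoEncryption(),
        ))

    # Build a CSR for the new key to send to the CA for a replacement cert.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, "example.com"),
        ]))
        .sign(key, hashes.SHA256())
    )
    with open("server.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))

And don't forget to revoke the old certificate once the new one is issued; revocation is the only defense against a key that may already have leaked.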

They got it fixed within 48 hours, globally, which, if you ask me, is incredible at their scale.

I would hardly describe anything AWS does as amateur. But maybe that's just me.



Calling the team at AWS "amateurs" is a great way to discredit everything the author wrote. AWS is a gigantic infrastructure and they got everything fixed IN LESS THAN 48 HOURS. That is not an amateur response time.

Sorry their updates weren't to your liking but they were responding and posting bulletins the whole time and again: they solved the issue very quickly given the number of clients they support.


The way they communicated it was very, very amateurish. The author needed to know when they could re-key their certs, and it seems it was impossible to tell. That's something that needed to be done ASAP. Without knowing precisely when the environment that affected them was updated, this customer couldn't get that done as quickly as they might have.


It's not amateurish, it was just poor communication in a situation that they've (fortunately) not had to deal with before. Which happens from time to time; and good companies recognize their failures and fix them. One thing I know about Amazon from friends who work there is that they don't tolerate failure. They have a culture of owning your mistakes and fixing them; anyone who doesn't buy into that attitude will get fired pretty quickly (and Amazon fires a lot of people).

It's pretty fucking professional to update the infrastructure that runs half the internet in under 48 hours with no issues. But again, communication can be a problem when you have as many customers as they do.

OP raised some legitimate concerns, but his credibility was undercut by attacking Amazon and calling them names. Ironically, his post was a much more amateur move as his concerns would likely be taken more seriously if he had stuck to the issues and not resorted to name-calling. The essence of professionalism is sticking to the issues at hand and not being sidetracked by extraneous factors.


The nail in the coffin was some comment at the end about moving off of AWS to something else over blog posts lacking timestamps.


> sorry their updates weren't to your liking but...

Folks, this is an account created 19 hours ago making half an apology and then rationalizing not listening to customers because they solved a "big" problem quickly. That's an ad hominum argument and as a result, a big tell on intent.

From my perspective, pushing out a new SSL build to a bunch of load balancers in a highly automated network like AWS is probably, by this point, a trivial task. Actually listening to the customer and responding decently is MUCH harder. Clearly it could be done better, which is the point of the post.

Rise above getting offended/scared about being called "amateurs" and start talking more about what goes on in that creepy black box that is AWS. You owe the world that much, at the very least.


> From my perspective, pushing out a new SSL build to a bunch of load balancers in a highly automated network like AWS is probably, by this point, a trivial task.

The responses I see on HN whenever there is an AWS issue always show me how disconnected many of the commenters are from reality, or from ever being involved in a huge infrastructure.

Sure, the AWS status page isn't a hip, Web 2.0, AJAX-backed, d3.js-powered dashboard. Yes, they don't update it every 3 minutes with new info, but many (most) of the problems that one-off customers see never affect enough customers to reach the threshold for a dashboard post. I do think they need to speed up their status updates, but these posts need to get OK'd by a decent number of people before they get thrown up.

There are usually multiple ELB instances living on every rack of every datacenter in every AZ in every region of AWS. Relaunching / patching hundreds of thousands of instances in 48 hours with minimal disruption to customers is a lot harder than you think.
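
To make that concrete, here's a deliberately simplified sketch of the per-host dance a rolling patch involves. Every helper function here is a hypothetical stub standing in for real orchestration (LB deregistration, connection draining, health checks); the point is how many steps sit between "we have a fix" and "it's everywhere", multiplied by hundreds of thousands of hosts:

    # Sketch: rolling patch across a fleet, batch by batch, so serving
    # capacity never drops far. All helpers are hypothetical stubs.
    import time

    def deregister_from_load_balancer(inst): print("deregister", inst)  # stub
    def drain_connections(inst): print("drain", inst)                   # stub
    def apply_openssl_patch(inst): print("patch", inst)                 # stub
    def health_check(inst): return True                                 # stub: always healthy
    def register_with_load_balancer(inst): print("register", inst)      # stub

    def rolling_patch(instances, batch_size=2):
        for i in range(0, len(instances), batch_size):
            batch = instances[i:i + batch_size]
            for inst in batch:
                deregister_from_load_balancer(inst)  # stop routing new traffic here
                drain_connections(inst)              # let in-flight requests finish
                apply_openssl_patch(inst)            # install the fixed OpenSSL build
            for inst in batch:
                while not health_check(inst):        # never re-register a broken host
                    time.sleep(5)
                register_with_load_balancer(inst)    # resume traffic

    rolling_patch(["elb-%d" % n for n in range(6)])

And that's the easy version: it ignores capacity planning, cross-AZ coordination, and hosts that fail mid-patch.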


I'm not disconnected from reality. I actually understand the problem at hand and understand it is a lot of work for some engineers. However, it's still likely old-hat technique by now, hence the 'trivial' remark. Also, the context of the comment I was responding to was equating 'fixed in 48 hours' with not being marketing amateurs.

My primary point agrees with your second paragraph, which is that they could do better on the status updates. Unfortunately this has been going on for YEARS at AWS, so it's worth ratcheting up the tone when talking about it. It's important, and they need to fix it.


Hey there, super sleuth... not sure what you're trying to imply, but I'm not affiliated with Amazon, nor do I use their AWS service. I do not speak for them. I am not making an apology on their behalf. I suggest you stop levying false accusations and avoid using words that you simply don't understand, i.e. "ad hominum" (not only did you spell it wrong, but you've misapplied it).

To clarify: I'm a long time HN reader that finally got around to making an account (and certainly not for the express purpose of defending Amazon). However, I did want to call the author out on writing a terribly unfair, knee-jerk, heat-of-the-moment indictment of AWS. This type of thing is unfortunately all too common in the tech community: actual amateurs writing as if they were a central authority on subjects they have something approaching zero understanding of, for example the multitude of complex engineering and PR challenges a service provider like AWS faces during something like the Great OpenSSL Exploit of 2014. What I'm trying to say is: cut them some slack. Their response seemed perfectly reasonable to me.

Hope this helps.


I spell things wrong all the time and sometimes hominem doesn't get caught by the spell checker. So fucking what? It's ironic that you call this out because, well, it's an ad hominem argument in and of itself. You are attacking one thing (or supporting one thing) to prove another point. Wikipedia says it best: a "claim or argument is rejected on the basis of some irrelevant fact". Claiming AWS isn't practicing amateur hour based on the fact that they rolled out fixes in 48 hours is making an ad hominem argument. Amazon's marketing department is distinctly different from their engineering department. It's irrelevant that they are technically competent enough to patch this when you consider that the marketing/communications department could give two shits about how they tell anyone what has been fixed.

In retrospect, what I should have done is call out the blaming statements you made in your first post. That's what brought me to action and caused me to write my response the way I did. I should know better than to try to reason with someone who is in dissonance. BTW, narrowrail called you out below for that blaming statement. Pay attention - people are giving you feedback. Take it or leave it.

Vote down all my comments if that makes you feel better. Karma is meant to burn. It's also a tell that this story dropped off the main page and I'm still getting downvotes on my comment. AWS koolaid much?

Oh, and FWIW, I am a super sleuth. A super sleuth of human behavior and emotional response. I also watch what I say about others, trying not to blame and indicate opinion where needed. That's why I said your behaviors were a 'tell on intent'. I have no idea who you are or why you created an account just to comment on this story, but I guarantee there is more to it than what meets the eye.


As a long time HN reader, you should know that the condescending "super sleuth" mention was unnecessary. It should also be apparent that a highly defensive comment from a very new account would raise eyebrows. The "hope this helps" ending also comes off as passive-aggressive. We can do better.


The condescension and passive-aggression were fully intentional and really the only way to respond to such an asinine comment. Hope THIS helps; I could do better. Next time perhaps active aggression will be called for?


Please don't make posts like this to HN.

The way to respond to an asinine comment, if you must respond, is to politely refute it.


The one thing AWS did that can be described as amateur was the communication. AWS has always been really quiet when there's an issue with their services.

It's a nightmare trying to tell whether your problem is due to your own infrastructure or there's a bigger-scale issue at AWS, because they never talk about it...


They're not amateur. We pay for their premium support and their communication is professional, polite, and usually totally useless.


Their communication is not impressive. I work with AWS daily and I've been aware of issues LONG before they publicly announce anything on their status pages.


Typically AWS is great at communicating with post-mortems, for example: http://aws.amazon.com/message/2329B7/

In this case, they probably didn't want to be too explicit about the details of patching tens of thousands of machines while the remediation was still ongoing.

I do agree that it's unexpectedly hard to find a link to the "security notice" page anywhere.


That outage notice you've linked includes a promise to improve communication.

Also, how'd you find that link? If you happened to just have it lying around, that's fine, but it would be better if they had these things linked somewhere customers can find them when new ones are posted (like a page covering service post-mortems). The timeline is also missing little details like the year the outage happened, and there's no point of contact if you have questions (it's signed by "the aws team").


> it would be better if they had these things linked somewhere customers can find them when new ones are posted

That's a great point. I found it by googling "AWS post mortem", but I only knew it existed because I had been linked to this page before.


Amazon has a status page for all AWS services and provides RSS feeds for each: http://status.aws.amazon.com/
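
If you want to consume those programmatically, here's a quick sketch of polling one per-service feed with only the Python standard library. The feed URL below is an example placeholder; pick the actual per-service feed you care about from status.aws.amazon.com:

    # Sketch: poll an AWS status RSS feed and print recent items.
    # Standard library only; the feed URL is an example placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://status.aws.amazon.com/rss/ec2-us-east-1.rss"

    with urllib.request.urlopen(FEED_URL) as resp:
        tree = ET.parse(resp)

    # RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
    for item in tree.getroot().findall("./channel/item"):
        title = item.findtext("title", default="(no title)")
        date = item.findtext("pubDate", default="(no date)")
        print(date, "-", title)

Drop that into cron or your monitoring system and you at least get the official signal as soon as it exists, for whatever that's worth given the thresholds discussed above.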


Trust me, this doesn't cover all their issues. Don't get me wrong, we love our AWS stack, but we've had all sorts of weird stuff happen over the years, from zombie ELBs hitting the wrong hosts to invisible SGs... Sometimes we see from Twitter that others are seeing the same things, but AWS don't ever confirm or deny anything; they just stall, then tell you when it's fixed. I guess it's support in the sense of something to give to your PHB, but when something goes wrong within AWS, it's a very black box - which isn't surprising.


I know.

Sadly this page is not updated often enough. I mean, when there's a known issue with one of the AWS services, this page displays a little "i" icon, which is barely visible.

And when you encounter some problem with your AWS stuff that clearly comes from their side, if the problem isn't widespread, they just say nothing. At that point you can search on your own for hours to be sure it's not your responsibility, and afterwards, you just wait, blind.


You're absolutely right. The author of the post is just whining for the sake of whining. For every post hating on Amazon there's one hating on Heroku <http://www.holovaty.com/writing/aws-notes>. Best course of action is to just deal with it.



