It's been about 10 hours now that our core business operations have been down, because AWS's security system decided our account showed unauthorized service usage and placed a restriction on some of the services we can access.
One of them, obviously, is AWS Lambda. Two weeks ago we started processing a large volume of transactions (we provide wallet and digital asset infrastructure for businesses) and needed to scale up our services, from request timeouts to memory, and even our RDS instance, to accommodate the kind of requests we are currently processing.
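For context, the kind of scaling changes described above can be sketched with the AWS CLI. This is a minimal, hypothetical sketch: the function name, instance identifier, and the specific sizes are placeholders, not our actual configuration.

```shell
# Raise a Lambda function's memory and timeout.
# Function name and values are illustrative placeholders.
aws lambda update-function-configuration \
  --function-name process-transactions \
  --memory-size 2048 \
  --timeout 120

# Move the RDS instance to a larger instance class.
# Without --apply-immediately, this waits for the next maintenance window.
aws rds modify-db-instance \
  --db-instance-identifier wallet-db \
  --db-instance-class db.r6g.xlarge \
  --apply-immediately
```

One caveat worth knowing: changes like these are exactly what can shift a bill sharply enough to look anomalous to automated billing checks.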
However, yesterday (29th of July) at around 5:04 PM, we got an email from AWS about unauthorized service charges due to some unauthorized activity, which could indicate a potential account hack or compromise.
My heart sank. I had just woken up from a nap after reviewing the new Lambda functions the team worked on, and I was planning to promote them to the staging and production servers.
The second concern is that this is a very high-risk security issue and could be a real compromise. We primarily use MPC-based vaults to securely store and process transactions. Regardless, this is a very big deal, and we had to respond by first changing the root password, deactivating all API keys, disabling console login access for all other users, and removing any old keys.
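The key-deactivation and console-lockout steps above can be scripted with the AWS CLI. A hedged sketch, assuming a single IAM user; the user name is a placeholder, and in practice you would loop this over every user in the account:

```shell
# List every access key ID for the user, then mark each one Inactive.
# "some-user" is a placeholder, not a real identity.
aws iam list-access-keys --user-name some-user \
  --query 'AccessKeyMetadata[].AccessKeyId' --output text |
tr '\t' '\n' |
while read -r key_id; do
  aws iam update-access-key --user-name some-user \
    --access-key-id "$key_id" --status Inactive
done

# Remove the user's console password (this call fails harmlessly
# if the user never had a login profile).
aws iam delete-login-profile --user-name some-user
```

Setting keys to Inactive rather than deleting them keeps them recoverable once the incident is ruled a false alarm.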
Doing this will at least help secure our account for the time being. But I was also curious: since the suspected activity was an unauthorized service charge, I decided to check all services in each region and ended up going through our AWS bill carefully.
In the end, it's a bill the team expected following our recent changes, from increasing our Lambda functions' memory and timeouts to upgrading our RDS instance; the bill basically makes sense. AWS must have mistakenly flagged it as an unauthorized charge.
The issue is that it's now been 10 hours. We lost access to our AWS Lambda functions the moment they notified us, and one of our core business solutions is down because AWS shut it down.
The customer support experience from the AWS support team is the worst I have seen in my entire life. This is outrageous, and I would never wish it on anyone. For over 10 hours, our customers (businesses such as payment gateways and exchanges) haven't been able to run some of their operations because the Lambda service (which can't easily be migrated to an alternative) is down.
Their team keeps saying they are waiting on a response from their internal team, hour after hour, with nothing useful yet. They haven't provided a valid reason, nor explained why this is taking so long to fix. It's as if they lack empathy and are just robots behind keyboards.
The AWS support team has been heartless, with no sense of urgency, and this is not the first time such an encounter has happened.
This has taught us a lesson: don't build a product so tightly locked to one cloud provider that migrating would mean starting from scratch and be even more complicated.
If you work at AWS, on the technical support or customer support team, and you would like to help us get past this, please reach out here: hi[[at]]powr[[dot]]finance.
Thanks.
Personally, my last AWS Support ticket was pertaining to Lambdas and I got a very good answer. I was impressed.
It's important, I think, to appreciate that working in support is difficult work: every single day brings a customer with their own urgent problem. When urgency is the norm, nothing is urgent. And heart? It can be soul-sucking work.
In my observation, support takes the brunt of the rest of the org's shortcomings: bad releases, deprecated features, and the like drive customers toward you in unfortunate circumstances. Sometimes there's a whole waterfall of shit raining down on you, and it ain't your fault, and there's nothing you can do or could have done.
And to add insult to injury, you're normally at the bottom of the org pecking order.
As I say, difficult work. I salute all those who do it!