Code review is a policy. If there's automated enforcement, it's through software written by someone, configured by someone, on a server that someone has root access to, in a room that someone has physical access to. If you have code signing, someone set up and patches the code signing server. Someone configures the code signing enforcement plane on the target devices. Someone gets to provision new accounts and enroll new keys.
Code reviews are a thing, but physical/mathematical assurance that zero people in your organization can bypass them is not. However amazing your tower of automation and policy may be, there's at least one sysadmin underneath it all, and at least one guy with keys to the server cabinet underneath him. That only starts to change when you're running on FIPS 140-2 Level 3+ HSMs and you need to assemble a quorum of operators to do anything. And even those are quite easy to sabotage: just trip the tamper-protection mechanisms.
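The operator-quorum idea above can be sketched in a few lines. This is purely illustrative (the operator names and threshold are made up): real HSMs enforce m-of-n with smartcards and key shares, not a set comparison, but the access rule is the same shape.

```python
# Minimal m-of-n quorum check, conceptually like HSM operator-card quorums:
# a sensitive operation proceeds only when at least M distinct authorized
# operators are present. (Illustrative only; names/threshold are invented.)
AUTHORIZED = {"op1", "op2", "op3", "op4", "op5"}
M = 3  # quorum threshold

def quorum_met(present):
    # Deduplicate first: the same operator showing up twice counts once.
    return len(set(present) & AUTHORIZED) >= M

print(quorum_met(["op1", "op2"]))          # False: only two operators
print(quorum_met(["op1", "op2", "op4"]))   # True: quorum reached
print(quorum_met(["op1", "op1", "op2"]))   # False: duplicates don't count
```

The point of the thread stands even here: whoever administers the list of authorized operators, or the box evaluating this check, sits underneath the quorum.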
This++. Even at large, serious organizations with certifications and important government contracts, there are inevitably dozens of people who are/were involved at various levels of the security infrastructure and who happen to know of some aspect of one of the "base turtles" that is secretly a shit show amounting to "this set of people is special." We used to like to play "where's the bullshit" in security design review, because you know there's always something in there with a big fat TBD at some level, and the folks who know what they're doing will readily own up to it and have a future plan for mitigation (often something which will always be a "future" plan). In my experience, the best designs are the ones that don't try to be 100% impossible to subvert, but at least can be audited. Meaning you may be able to come up with a way to push your code into the production line, but the stakes are high because you're probably gonna get caught after the fact.
There may have been. The article speaks of the employee using "false usernames". Code review may stop some foolish person pushing broken code, but it's not going to prevent a determined saboteur who's masquerading as other authorised users.
I'm baffled at your lack of imagination, but: keyloggers, unlocked terminals, API token sniffing from cookies, reactivated old accounts, changing the reviewing account id in the database, …
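To make the last item on that list concrete: review enforcement is usually just application logic over a database row, so anyone with write access to the database bypasses the application entirely. A minimal sketch, with an invented schema (`reviews` table, `can_merge` check) standing in for whatever a real review tool stores:

```python
import sqlite3

# Hypothetical review-tracking schema: the tool only merges a change
# when its row shows an approval from someone other than the author.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reviews (change_id TEXT, author TEXT, approved_by TEXT)")
db.execute("INSERT INTO reviews VALUES ('D1234', 'mallory', NULL)")

def can_merge(change_id):
    author, approver = db.execute(
        "SELECT author, approved_by FROM reviews WHERE change_id = ?",
        (change_id,)).fetchone()
    return approver is not None and approver != author

print(can_merge("D1234"))  # False: no approval yet

# Anyone with write access to the database skips the app entirely:
db.execute("UPDATE reviews SET approved_by = 'alice' WHERE change_id = 'D1234'")
print(can_merge("D1234"))  # True: the check passes, but no review happened
```

The application-level policy is intact the whole time; it's the layer underneath it that was never bound to anything.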
At Facebook, you could alter code even after somebody had given the OK in code review. I know some people deliberately kept small approved diffs open, just so they could quickly land changes without needing a fresh approval if they ever needed to.
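The gap being described is that approval attaches to a diff's identity rather than its content. A toy model of that (class names and fields are invented, not any real tool's API), plus the obvious fix of binding approval to a content hash:

```python
import hashlib

# Hypothetical model of a review tool that marks a *diff ID* approved,
# while the diff's content can still be edited afterwards.
class Diff:
    def __init__(self, diff_id, content):
        self.diff_id = diff_id
        self.content = content
        self.approved = False

class ReviewTool:
    def approve(self, diff):
        diff.approved = True          # a flag on the ID, not on the content

    def can_land(self, diff):
        return diff.approved

d = Diff("D42", "tiny harmless tweak")
tool = ReviewTool()
tool.approve(d)
d.content = "something the reviewer never saw"   # post-approval edit
print(tool.can_land(d))  # True: still lands

# Binding approval to a hash of the reviewed content closes the gap:
class HashBoundReviewTool(ReviewTool):
    def approve(self, diff):
        diff.approved_hash = hashlib.sha256(diff.content.encode()).hexdigest()

    def can_land(self, diff):
        h = hashlib.sha256(diff.content.encode()).hexdigest()
        return getattr(diff, "approved_hash", None) == h
```

With the hash-bound variant, any post-approval edit invalidates the approval and forces a fresh review.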
I guess my thinking is: if the disgruntled employee was upset enough over not getting a promotion to cause this much damage, they must already have been in a position senior enough to expect one, and "high enough" might mean high enough to have push access themselves.
Edit: see closeparens' comment above. Complicated systems can always be subverted when trust is broken.