This is probably true, given his history with Y Combinator, his ties to Thiel and Microsoft, his leadership of OpenAI, and his investments in Reddit, Airbnb, Stripe, and others.
Power doesn't always look like what you think it does.
That's probably why they are two of the most powerful men alive today.
When there is something genuinely acknowledged as being valuable - and a $900B company certainly qualifies - people are going to fight over it. Only natural, because in most cases the way to get power is to fight for things that will make you powerful. Just look at the history of Facebook or Twitter or Google Chauffeur/Waymo or Cisco or the U.S. presidency.
When you get wealth and power without fighting, it's usually because you managed to identify something that would eventually make you powerful without anyone else realizing it's important, until you become too big to overthrow. This is the story of the Google founders or eBay or GitHub or... I can't really think of others; it's a pretty rare path to success. Either that, or seem non-threatening and mild-mannered enough that nobody attacks you, and then be the last one standing after all the combative types have destroyed each other, like how Sundar got to be CEO of Alphabet or Bran Stark won the Seven Kingdoms.
Google's executive leadership team in the late 2000s to early 2010s famously consisted of a number of very strong and ambitious personalities. Amit Singhal ran Search Eng; Marissa Mayer ran Search PM/UX; the two of them notoriously did not get along, to the point where they would actively undermine each other's decisions. Alan Eustace was over them and often responsible for mediating, but he was close to retirement (which he later did, going on to set the world record for highest skydive [1]). Sundar started by running the Google Toolbar efforts and then Chrome. Andy Rubin was in charge of Android. Urs seemed content to run hardware and datacenter ops; he's one of the non-combative ones, and is still there now though he's semi-retired. Likewise with Susan Wojcicki and Ads. Milo Medin had been brought in from Excite@Home to lead Google Fiber. Sebastian Thrun and Anthony Levandowski had been brought in from acquisitions to form Google Chauffeur (now Waymo).
Over the five years from 2010 to 2015, the search PM/TLM org chain basically revolted and said "We can't get anything done when Marissa tells us to do it one way and Amit says no, we have to do it another way". Marissa was moved to run Geo and eventually left to become CEO of Yahoo. Amit himself got canned over a few sexual harassment complaints. Alan Eustace retired, as everyone expected. Andy Rubin made some ill-fated acquisitions of Motorola and Boston Dynamics that lost him a lot of cred, and then the nail in his coffin was when the media discovered his sex dungeon [2][3]. GFiber failed to gain significant traction, so Milo wasn't really a viable successor candidate (arguably he was never in the running, and was there just to run Google's ISP ambitions). Anthony Levandowski stole Google's trade secrets on self-driving cars and sold them to Uber; he was criminally convicted for this but was pardoned by President Trump and is now founding a religion [4]. Sebastian Thrun left to found Udacity.
Sundar, meanwhile, steadily delivered on Toolbar, and then Chrome, and then took over Android when Andy Rubin was ousted, steadily improving Android as well. He was unremarkable on the product front, but had a reputation as a peacemaker among Google's more volatile execs, as well as a very good translator of Larry's brilliant ideas into terms that mere mortals could understand. When he became CEO, it wasn't because he was remarkable, but because he was unremarkable enough to manage a number of remarkable personalities with egos to match.
You don't really need it anymore - CitC has let you do views (mapping just part of the monorepo into your filesystem via FUSE) since about 2013, and then that functionality got built into Piper. When I returned in 2020 you'd have a file at the top of your source tree that included all the relevant file mappings as well as any Blaze flags needed to build the project, and you could just point your IDE at that and it'd map in just what you needed.
The history of Google's relationship to version control is even more interesting than editors - it went from CVS in 1998 to Perforce (P4) in 2000, then gcheckout and g4 in ~2006, then OverlayFS was invented in 2008, git5 came out in 2009, CitC obsoleted OverlayFS in ~2012, Piper built this all into the VCS in ~2013-2014, while I was gone from 2014-2020 we apparently got hg and Jujutsu frontends, and then when I got back in 2020 you'd just check out a .blazeproject from your IDE and everything would magically work. Many of these started as 20% projects (I used to have lunch with the guy who invented OverlayFS; interesting character and one of the best programmers I knew) and then got folded into the "official" way of doing things once grassroots adoption showed the execs that this was how people really wanted to work.
> Muneeb Akhter asked Sohaib Akhter for the plaintext password of an individual who submitted a complaint to the Equal Employment Opportunity Commission’s Public Portal, which was maintained by the Akhters’ employer. Sohaib Akhter conducted a database query on the EEOC database and then provided the password to Muneeb Akhter.
Was there 2009-2014 and then again 2020-2026. I think there are a lot of aspects of IDE use and culture at Google that this post omits.
My recollection from 2009-2011 is that emacs and vim were the dominant editors (just as the TV show Silicon Valley depicted), with a decent-sized minority using Eclipse and IntelliJ, both of which had official support for Google tooling. The command line still largely ruled, though, even though the official Google developer workstation OS was Goobuntu, Google-flavored Ubuntu. This reflected the overall developer population of the time.
I think Cider actually was invented a little earlier than the article describes. I have vague memories of some engineers experimenting with web-based IDEs that would integrate directly with Critique (the code-review software) as early as 2013-2014. Its use was not widespread when I left in 2014; there was still the impression that it wasn't powerful enough for daily driving.
When I came back in 2020, emacs/vim use was much lower, again probably reflecting differences in the general population of developers. Many more of the developers had been trained in the post-2010 developer ecosystem of VSCode, IntelliJ, etc, and this was reflected in tool usage at Google too. I'd say IntelliJ was the dominant IDE, with Cider a close second and Cider-V just starting to take market share. You still had to pry emacs and vim from a grizzled old veteran's hands.
By 2022 I'd transferred to an Android team, and Android Studio with Blaze was the dominant IDE there, even as general IntelliJ usage in the company was falling. Cider just didn't have the same Android-specific support. Company-wide, Cider-V was growing the fastest, taking market share from both IntelliJ and Cider.
By 2024 Cider-V was dominant and there started to be a concerted push to standardize on it, particularly since new AI agent tools were coming out and they couldn't be supported on all editors that Googlers wanted to use.
As of my departure in 2026, the company-wide push was to standardize on Antigravity [1], which, as I understand it, won a turf war within the developer tools org and got blessed as the "official" Google AI coding agent. This also has the effect of concentrating developer time on dogfooding Google's external AI coding offering, which hopefully should improve its quality. There's still significant Cider-V usage, but it's dropping, and execs are pushing Antigravity hard.
How many new Googlers do you think use vim or emacs? I can imagine at least a small number of new vim users, since vim will always be popular, but I would love to know if more than a handful of new Googlers a year pick up emacs.
Famously, Jeff Dean uses emacs. Emacs integration with internal systems (source code, code search, LSP, build, etc.) was super solid when I was there ~2020.
There is an internal website that tracks statistics of tool use, where “tool” is defined liberally and includes emacs. It would be tracked if you just (require 'google) somewhere in your initialization code.
I joined GDM recently, and previously used (neo)vim exclusively. Begrudgingly, I admit Cider-V is very, very good. It might be possible to get by without it, but the system is so locked down you’re going to make a lot of sacrifices (very few authorised extensions, codebase is so large it’s going to break whatever tools you’re used to using anyway, no git).
I’m now thinking I may as well trade my brick of an M5 Pro for a 13” Chromebook; it’s a strange time.
When I was there all the cool people used mercurial. Git5 was creaky and didn’t work well but hg worked brilliantly. The cool people used hg to do stacked CLs so they were productive even when blocked by code review.
Fun fact: This particular version of hg with its extensions actually originated from Meta.
> codebase is so large it’s going to break whatever tools you’re used to using anyway, no git
There is Jujutsu (with a Piper backend) officially supported, and that is better than git. But of course, you will not be grepping the source code; there is Code Search for that.
I've switched to emacs and I no longer use IDEs. This is because I do all my edits, as a personal policy, via LLM. I mostly use emacs for magit (I work on a git-on-borg repo).
I don't think so; I think they forked VS Code directly, or possibly forked Windsurf, which forked VS Code. Hence the turf war and internal controversy: a lot of the effort on Cider-V got dropped on the floor, right at the height of Cider-V's popularity, when it was gaining a lot of features.
Duckie does still exist, and is probably one of the most used (and useful) AI tools at Google. Yes, it's just a Gemini wrapper with access to all the internal documentation. I wasn't doing daily development when I left so I don't know if it ever got into Cider-V.
A bit off-topic, but seeing your karma, I remembered that some time ago I made a website that finds how many words a person has written on Hacker News, with leaderboard stats around it.
I got curious, so I checked your account: you are globally #23 by words written, with 1,822,427 words, which is like 6+ copies of A Game of Thrones (if one is around 300k words).
I also just saw that you have been on Hacker News since Feb 2007.
Your Hacker News account is older than me, as I was born in 2008 ;)
I'm curious: how did you find Hacker News, and what made you stick with the platform for so long? Are you perhaps some low user number, say user #230 of Hacker News itself? I would be curious to find this data if you happen to know.
I also wonder if you have any general life tips for a person like me; I would love to hear your answers!
Locally readable is what I want for LLM-generated code, though. If I need to change the whole architecture, I re-prompt the LLM and have it rewrite the code for me. The changes that I'd need the code to be human-readable for are quick fixes where the LLM got something simple wrong and it'd take longer to explain to the LLM where it went off-track than to just fix it myself.
Usually you'll iterate several times on #1, which is where LLMs are really helpful. They let you get working code from stage #1 quite quickly, so you can check the output and behavior, and then oftentimes you'll find that you framed the problem incorrectly in the first place. Then you can fix your problem definition, have the LLM rewrite the code, try it again, and so on, until you get the results you want.
#1 -> #2 is a gap, but it also helps if you ask the LLM to explain its thinking and generate a human-readable design-doc of the approach it took and code organization it used. Then you read the design doc to gain the context, and pick up with #2.
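Sketched as a loop (all the names here are hypothetical stand-ins, not any particular API), the key move is that when the output looks wrong, you edit the problem statement and regenerate rather than patching the code:

```python
def iterate_on_framing(problem, llm, looks_right, reframe):
    # Stage #1 loop: generate code, inspect its output/behavior, and fix
    # the *problem definition* (not the code) when the framing was wrong.
    code = llm(problem)
    while not looks_right(code):
        problem = reframe(problem, code)
        code = llm(problem)
    return code, problem

# Toy stand-ins so the sketch runs; a real `llm` would call a model API,
# and `looks_right` would be you inspecting the results by hand.
def llm(problem):
    return f"solution for: {problem}"

def looks_right(code):
    return "preserving input order" in code  # pretend we spotted a framing gap

def reframe(problem, code):
    return problem + ", preserving input order"

code, problem = iterate_on_framing("dedupe a list", llm, looks_right, reframe)
```

The loop terminates with both working code and a refined problem statement, which is exactly the artifact you need before moving on to stage #2.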
This seems closely related to the problem of model collapse [1][2][3], where LLMs lose the tails of the distribution, and so when you recursively train on the output of an LLM, or otherwise feed the output back into the input in subsequent stages, you lose the precision and diversity that human authors bring to the work. Eventually everything regresses to the mean and anything that would've made the content unique, useful, and differentiated gets lost.
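A toy simulation (my own sketch, not from the cited papers) makes the tail-loss mechanism concrete: if each "generation" is trained by resampling the previous generation's output, any value in the tails that misses a draw is gone forever, so diversity only ratchets down.

```python
import random
import statistics

random.seed(0)

# Generation 0: a diverse "human-authored" corpus.
data = [random.gauss(0, 1) for _ in range(50)]
initial_distinct = len(set(data))
initial_stdev = statistics.stdev(data)

# Each generation trains on the previous generation's output by
# sampling from it with replacement. The support can only shrink.
for _ in range(500):
    data = random.choices(data, k=len(data))

final_distinct = len(set(data))
final_stdev = statistics.stdev(data)

print(f"distinct values: {initial_distinct} -> {final_distinct}")
print(f"stdev: {initial_stdev:.2f} -> {final_stdev:.2f}")
```

Run it with different seeds: the distinct-value count collapses toward a handful of survivors and the spread (the "tails") shrinks with it, which is the regression to the mean described above.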
My takeaway from this is that AI is a temporary phenomenon, the end stage of the Internet age. It's going to destroy the Internet as we know it, along with much of the technological knowledge of the developed world, and then we're going to have to start fresh and rebuild everything we know. So I'm trying to use AI to identify and download the remaining sources of facts on the Internet - the human-authored stuff that isn't generated for engagement, but comes from the era when people were just putting useful stuff online to share information.
Yep, humans and civilization are subject to the same model-collapse phenomenon as they interact more with LLMs, but engineering knowledge has always been held by a small minority with certain personality characteristics. Maybe the minority will get smaller, but I'm not sure it will completely disappear. There are always people like yourself building archives.
There are plenty of AIs that are immune to this because they're trained on something that won't be flooded with slop - e.g. robotics and self-driving cars (both trained on real camera/sensor inputs), or programming/proof-assistant models (trained on things that are verifiable).