You're not the first person to posit that adding a new VM "almost guarantees breakage". So what? Keep the old VM, transition to the new one by specifying that's what you're targeting, then deprecate the old VM and eventually lose it altogether.
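A minimal sketch of what that opt-in could look like, reusing the script element's type attribute (application/dart is the MIME type Dart's own Dartium build used; scripts with an unknown type are simply ignored by browsers that don't support it):

```html
<!-- legacy VM: scripts with no type, or text/javascript, run as they do today -->
<script>
  console.log("handled by the JS VM");
</script>

<!-- new VM: authors opt in explicitly via the type attribute;
     older browsers ignore script elements with types they don't recognize -->
<script type="application/dart">
  main() => print("handled by the new VM");
</script>
```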
You make it sound like the computing world has never introduced a new version of a language that has implications for the older ones and managed to overcome it. Even the web did this (remember putting your scripts in comment tags?).
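For anyone who doesn't remember, the trick was to wrap the script body in an HTML comment, so browsers that predated `<script>` wouldn't render the code as page text:

```html
<script language="JavaScript">
<!-- hide the code from pre-script browsers, which would otherwise print it
  document.write("hello");
// the JS line comment keeps the closing delimiter from being executed -->
</script>
```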
> You're not the first person to posit that adding a new VM "almost guarantees breakage". So what? Keep the old VM, transition to the new one by specifying that's what you're targeting, then deprecate the old VM and eventually lose it altogether.
That means both VMs will be in the browser together at some point. That is a hard problem, not least because of cross-VM garbage collection: objects in one VM can hold references to objects in the other, so neither collector can safely reclaim cycles on its own (the WebKit thread where Apple refuses to accept Dart into WebKit references some papers on that).
This would cause an immediate slowdown on existing code and a very large maintenance burden. For example, you typically need to optimize fast paths in the DOM for your VM; with two VMs that is double the work. And if the VMs are allowed to communicate (and they can, if they can both access the DOM), then you have three paths to optimize, not one.
It's possible to migrate from one version of a language to another - say Python 2.x to 3.x. But they are not both running in the same process and communicating directly with a third shared environment like we would have on the web.
I hate to be negative about this, because as engineers we all love technical challenges. But the fact is, it's very hard to do this stuff well. (And it's even harder to standardize it.) We need to be realistic about this: Compiling into JavaScript is the only way we will see other languages on the web for the foreseeable future.
I'm not aware of anyone disrupting critical code the way you're suggesting. It would be akin to rewriting malloc. I'm not discounting a slow transition akin to what you suggest, but you're talking about a long time frame of supporting 2 VMs side by side. Probably 10+ years.
And even then, you still have to support JavaScript being written directly in HTML <script> tags, something no other language could enjoy. So every browser must include a JavaScript to (new VM) bytecode compiler. JavaScript will always be in a privileged position.
I do not necessarily think the web needs generic VMs to move forward, or that JS needs replacing (I happen to enjoy JS with all its quirks quite a lot).
I do, however, think that such a transition is possible if the powers that be (Google, Mozilla et al.) could agree on a fairly consistent strategy.
Yes, two VMs side by side would probably be needed for some time, but 10+ years? Surely pure JS VMs could be moved to plugin status and phased out faster than that, especially since someone would just write a compiler from JS to this bytecode format. No, it wouldn't be as fast as a VM highly optimized for specific JS quirks, but it could probably suffice.
> It would be akin to rewriting malloc.
Custom mallocs have been written for many different purposes, demanding game engines for example. While certainly a complicated task, it has been done numerous times.
> And even then, you still have to support JavaScript being written directly in HTML <script> tags, something no other language could enjoy. So every browser must include a JavaScript to (new VM) bytecode compiler. JavaScript will always be in a privileged position.
Why would <script> tags HAVE to contain JS? If we're changing to a generic bytecode web, I see no reason why JS in <script> tags couldn't be phased out as well.
However, as I said, I actually like JavaScript, and I think modernization experiments like Dart are approaching the "problem" in entirely the wrong way. I only think that if we "had" to change, it could be done with little or no degradation to the user experience. Some old sites would break, but really, who cares? If you could compile your JavaScript codebase to bytecode and have it still work, everyone would do that without much trouble.
Aside from the fact that memory allocators are written, rewritten, and tweaked all the time, your own organization's actions defang this argument: Firefox 3 included a switch to jemalloc.
A version of jemalloc specifically written for Mozilla's codebase.
If it were a 100% compatible drop-in replacement for malloc, why hasn't every other project switched to jemalloc yet?
(Searching for "switching to jemalloc", it seems like every group that tried it found that their codebase or use case exposed new problems in jemalloc. That these could be fixed doesn't negate the main point: you couldn't just swap malloc out of existing codebases and expect everything to work right.)
I don't think I've ever seen a more dramatic real-world example of unwittingly strengthening the point you're arguing against. jemalloc didn't work quite as they wanted, so they customized it, and you really think that's a counterpoint to an argument for trying new and different things?
Ah, you're right, it's so simple! If changing the JS engine breaks a web site, the site's authors will either customize the JS engine, or choose not to switch after all!