There are a bunch of things: Python allows metaprogramming in a way that JS doesn't, which means you end up needing more guards (or conflating more guards); the Python ecosystem fairly heavily relies on CPython extension modules, and if you wish to remain compatible with them you're constrained in some ways, especially if you care about the performance of calling into/out of them.
And of course money, lots of it. The amount of money invested in optimizing v8 is staggering -- Google brought Lars Bak out of retirement[1] to start v8, and that guy is no joke.
> Python allows metaprogramming in a way that JS doesn't, which means you end up needing more guards (or conflating more guards)
JS allows you to dynamically modify some of the scopes that names refer to, as well as changing the actual prototype chain itself. I'm not sure you can do such crazy things with Python classes/metaclasses.
Of course, for v8 in particular, doing any of this crazy manipulation tends to set off alarm klaxons that kick your code off every optimization path, but the language still permits it.
> the Python ecosystem fairly heavily relies on CPython extension modules, and if you wish to remain compatible with them you're constrained in some ways, especially if you care about performance of calling into/from them
And for JS, very low overhead of calling into the DOM APIs (written in C++) is a necessary feature for having competitive performance. Arguably more so than in Python, since the overhead of the FFI trampoline itself here is considered a bottleneck.
> dynamically modify some of the scopes that names refer to
You can do some fairly disgusting things to name resolution in class bodies, but names within functions are resolved statically nowadays.
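For a taste of what's possible in class bodies (a sketch; all the names here are made up): the mapping returned by a metaclass's `__prepare__` is what name lookups in the class body consult first, so a dict subclass with `__missing__` can invent bindings on the fly:

```python
# Hypothetical demo: __prepare__ returns a dict subclass, so names that
# are defined nowhere resolve to a default value inside the class body
# instead of falling through to globals/builtins and raising NameError.
class DefaultingNamespace(dict):
    def __missing__(self, key):
        if key.startswith('__'):
            raise KeyError(key)  # leave dunders to the normal lookup path
        return f"<default:{key}>"

class Meta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        return DefaultingNamespace()

class Weird(metaclass=Meta):
    greeting = hello  # 'hello' is defined nowhere, yet this binds

print(Weird.greeting)  # <default:hello>
```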
> as well as changing the actual prototype chain itself
You can change a class's MRO, if that's the closest analogue.
class Foo:
    x = 'foo'

class Bar:
    x = 'bar'

class Baz(Foo):
    pass

print(Baz.x)  # foo
Baz.__bases__ = (Bar,)  # rewire the inheritance at runtime
print(Baz.x)  # bar
In Python you can also hook your own entire custom import system into `importlib`, or just arbitrarily change the meaning of the `import` statement by replacing `builtins.__import__`:
You can also make your own class that inherits from `types.ModuleType`, replace an existing module's class with it, and add interesting new behaviors to the module object:
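Concretely (a sketch; `VerboseModule` is a made-up name), you can assign to a live, already-imported module's `__class__`:

```python
import math
import types

class VerboseModule(types.ModuleType):
    # Report every public attribute lookup on the module.
    def __getattribute__(self, name):
        if not name.startswith('_'):
            print(f"accessing math.{name}")
        return super().__getattribute__(name)

math.__class__ = VerboseModule  # retype the already-imported module
radius = math.pi  # prints "accessing math.pi" before yielding the value
```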
> JS allows you to dynamically modify some of the scopes that names refer to, as well as changing the actual prototype chain itself. I'm not sure you can do such crazy things with Python classes/metaclasses.
> Of course, for v8 in particular, doing any of this crazy manipulation tends to set off alarm klaxons that kick your code off every optimization path, but the language still permits it.
Most of the real badness in JS (direct eval and the `with` statement stand out above everything else here) can be statically detected; the fact that in Python you can fundamentally change the operation of things already on the call stack by prodding at the `sys` module makes this an order of magnitude worse (and yes, guards and OSR can in principle be used here, but it's very easy to end up with a _lot_ of guards).
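For a taste of what "prodding at things already on the call stack" means (a sketch; the function names are made up): `sys._getframe` hands any function its caller's live frame, and hooks like `sys.settrace` can go further and intervene in running code:

```python
import sys

def meddle():
    # Grab the caller's live frame and read its local variables.
    caller = sys._getframe(1)
    return caller.f_code.co_name, dict(caller.f_locals)

def victim():
    secret = 42
    return meddle()

print(victim())  # ('victim', {'secret': 42})
```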
> And for JS, very low overhead of calling into the DOM APIs (written in C++) is a necessary feature for having competitive performance. Arguably more so than in Python, since the overhead of the FFI trampoline itself here is considered a bottleneck.
Oh yes, it's absolutely essential, but the definition is on a very different level: we might have an interface defined in WebIDL that must be exposed to JS in a certain way, but how that's implemented is an implementation detail (and there's nothing in the public API stopping a browser from changing how their JS VM represents strings, for example; the JS VMs themselves don't really have totally stable APIs). Whereas in Python, the C API is public and includes implementation details like refcounting, string representation, etc.
As far as I know you can't change a class's inheritance after creation without some hacks, but you can change the class that an instance refers to, which can kinda sorta achieve the same thing. You also can't necessarily add properties to a base class and have them all be reflected immediately unless you use some hackery with class properties.
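The instance-level trick mentioned here is `__class__` assignment (a minimal sketch with made-up classes):

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

pet = Dog()
print(pet.speak())  # woof
pet.__class__ = Cat  # re-point the existing instance at a different class
print(pet.speak())  # meow
```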
Would there be a performance penalty (I’m guessing in cache coherency) in having an interpreter that’s really two interpreters in the same process, where modules that use the “strict subset” of the language (the part that doesn’t require the more advanced object-model, or any FFI preemption safety) run their code through a more minimal interpreter, and then whenever your code jumps into a module that requires those things, the interpreter itself jumps into a more-complete “fallback” interpreter? Sort of doing what profile-guided JIT optimization does, but without the need for JITing (and before JITing would even kick in), just instead using a little bit of static analysis during the interpreter’s source-parsing step.
I ask because I know that this is something hardware “interpreters” (CISC CPU microcode decoders) do, by detecting whether the stream of CISC opcodes in the decode pipeline consists entirely of some particular subset, and then shunting decode to an optimized decode circuit that doesn't need to consider cases outside that subset. But, of course, unlike hardware, software interpreters have to try to fit in a CPU's cache lines and stay branch-predicted, so there might not be a similar win.
(Tangent: I once considered writing a compiler that takes Ruby code, rewrites the modules using only a “strict subset” of it to another language, and then either has that language’s runtime host a Ruby interpreter for the fallback, or has the Ruby runtime call the optimized modules through its FFI. I never got far enough into this to determine the performance implications; the plan was actually to enable better concurrency by transpiling Rails web-apps into Phoenix ones, switching out the stack entirely at the framework level and keeping only the “app” code, so single-request performance wasn’t actually the top-level goal.)
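The two-tier idea above could be sketched roughly like this (purely illustrative: the "strict subset" check and both tiers are stand-ins, not how any real runtime does this; a real split would dispatch to two separate interpreter loops rather than sharing `exec`):

```python
import ast

# Stand-in "strict subset" test: reject source that touches names or
# attributes associated with the dynamic machinery discussed above.
FORBIDDEN_NAMES = {"eval", "exec", "sys", "__import__"}
FORBIDDEN_ATTRS = {"__class__", "__bases__", "__dict__"}

def is_strict_subset(source: str) -> bool:
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and node.id in FORBIDDEN_NAMES:
            return False
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Attribute) and node.attr in FORBIDDEN_ATTRS:
            return False
    return True

def run(source: str) -> str:
    # Pick a tier once, at parse time, via the cheap static scan above.
    tier = "fast" if is_strict_subset(source) else "full"
    exec(source, {})  # both tiers share exec() as a stand-in here
    return tier

print(run("x = 1 + 1"))           # fast
print(run("import sys\ny = 1"))   # full
```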