Nothing new. Whenever a new layer of abstraction is added, people say it's worse and will never be as good as the old way. It's a biased reaction, of course: as human beings, we have trouble giving up the things we like.
99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.
In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.
And because of that, we check in the generated code, not the high-level abstraction. So to understand the program, you have to read the output, not the input.
Totally possible, and we can already do it! Simply set the temperature to 0 and reuse the same seed. But it's just not what people really want, and providers are reluctant because it costs up to 5x more to generate.
It's also not 100% deterministic in practice, because cloud providers don't run your requests on the same hardware, under the same conditions required to reproduce the output exactly. So in practice it's not great, but in theory, if you need it and can afford it, you can have it.
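As a minimal sketch of what "pin the temperature and seed" looks like in practice, here is an example using the OpenAI Python client (the model name and seed value are arbitrary choices for illustration; the `seed` parameter is documented as best-effort, and the response's `system_fingerprint` is how you detect whether the backend configuration changed between calls):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate(prompt: str) -> tuple[str, str]:
    """Request a (best-effort) reproducible completion."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary model choice for the example
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # greedy decoding: removes sampling randomness
        seed=42,        # best-effort reproducibility across identical requests
    )
    # If system_fingerprint differs between two calls, the provider changed
    # hardware or backend configuration, and outputs may diverge despite
    # the fixed temperature and seed.
    return response.choices[0].message.content, response.system_fingerprint


first_text, first_fp = generate("Write a haiku about determinism.")
second_text, second_fp = generate("Write a haiku about determinism.")
print(first_text == second_text, first_fp == second_fp)
```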