epistasis's comments | Hacker News

The Arab Spring is a bad example if you're trying to say that the US is installing governments... South America's history provides far better examples.

That said, the US doesn't need to be perfect to still be an example of providing freedom for its own citizens.


There are a lot of examples, yes, in South America too, but the US helped replace or tried to help replace some governments during the Arab Spring. Libya is the biggest example, where the US and its allies imposed a no-fly zone to help topple a dictator it didn’t like [0]. It could have done that in other places, but you didn’t hear a peep from the US when those protests were crushed by their governments during the Arab Spring.

0. https://en.wikipedia.org/wiki/2011_military_intervention_in_...


Libya is a super super bad example if you're looking for bad US behavior. This is literally the very first sentence of your own source:

> On 19 March 2011, a NATO-led coalition began a military intervention into the ongoing Libyan Civil War to implement United Nations Security Council Resolution 1973 (UNSCR 1973).

Compared to the South America stuff, this is saintly and angelic behavior helping out the world in every way. It's not the US alone, it's a coalition that extends beyond NATO, there's a UN resolution...

In fact bringing this up as a "bad behavior" example proves just how much of a shining city on a hill the US has been around the world. It's been bad, but it's also done lots of good stuff.


I don't think you're understanding what OP actually said. They didn't cite the Libya example as an example of bad behaviour; there wasn't any value statement on it at all. They were saying the fact that they intervened in Libya but not elsewhere was an example of the US intervening when it suits them.

I'm not an expert in US foreign policy so I'll refrain from entering the debate itself, I just think you're not arguing against what the OP is actually saying.


> Libya is a super super bad example if you're looking for bad US behavior. This is literally the very first sentence of your own source:

> > On 19 March 2011, a NATO-led coalition

Contradicting yourself?


In what way does cutting off the sentence create a contradiction? You'll need to at least point out some words that are a contradiction, or address some of the words in my comment.

for its own citizens who were fortunate enough to be born in the right place at the right time. How should the rest of the world feel about the US if its citizens get all the freedoms, comforts and opportunities and the rest of the world doesn’t?

Is that a country to be admired by all others, or resented?


States don't have friends, only interests (in transforming humans into bomb targets and genocide victims).

> States don't have friends, only interests

Quite the opposite. States don't actually have interests - interest groups do - and the groups that are friends with the state get to install their interests as the state's.


> states don't have interests - interest groups do

People have interests. To promote those interests, they organise. Sometimes as interest groups within states. Sometimes as business corporations. Sometimes as states.


Not having to burn gas is cheaper than burning gas. There will be a decade or two of transition, with rarely used gas turbines earning their yearly pay packet in a short amount of time. Eventually other tech will take over, or the gas infrastructure will pare down and be cost optimized for its new role of rare usage.

Europe, and Germany and the UK in particular, are really poorly suited to take advantage of this new cheap technology. If these countries don't figure out alternatives, the countries with better and cheaper energy resources will take over energy intensive industries.

This is not a problem for solar and storage to solve, it's a problem that countries with poor resources need to solve if they want to compete in global industry.


This is exactly right, IMHO. We were on a course to counter China's momentum, we had handled COVID so much better, our industry had a huuuuuuge investment in it and was poised to take off.

And then it was all killed. And we are killing off our other competitive edges over China, the way we attract all the world's best science and tech talent to build here in the US rather than in their own countries. We have set back scientific research 2-5 years by drastically cutting grants in nonsensical ways and decimating a class of grad students.

We were the most admired country in the world, and in a short amount of time we have destroyed decades of hard work building a good reputation.

We won't get that back in a year or two, it's going to be decades of work.


>our industry had a huuuuuuge investment in it

Which industry? How 'huuuuuge' was the investment?

>We were the most admired country in the world

According to who?


This was reported all over, but certain circles considered it politically incorrect to acknowledge that anything good happened in the years 2020-2024, so perhaps you can be excused for missing it. Some random web hits. Check out the graphs therein showing the massive investment in factories:

https://www.atlanticcouncil.org/blogs/econographics/the-ira-...

https://www.manufacturingdive.com/news/inflation-reduction-a...

Back then when I would inform the politically cloistered about this massive boom in factory construction and the hope for US manufacturing in strategically important energy tech, the most pointed critique was "yeah there's lots of spending but that doesn't mean that the factories are going to make anything." Turns out the skeptics were right. It was a huge mistake that all this stuff went into areas where it is politically incorrect to acknowledge that clean energy is changing the world. Management was not able to trumpet the new investment and the workers don't want to acknowledge what's driving the new higher wages.

As for the US being the most admired country, I work in science and a bit in entrepreneurship. The US was so far and away the leader in these that there's no comparison at all to any other country. Any visitor is completely blown away when they see what's going on, even when they heard ahead of time how much better science and startups are in the US. It's a bit shocking that you think the US was not one of the most admired countries out there, unless you're posting from China or Russia.


Concentrating generation made sense when transmission was cheap in comparison.

But one effect of ever cheaper solar is that transmission costs start to dominate generation costs, because transmission is not getting cheaper.

Cheap solar and storage requires rethinking every aspect and all conventional wisdom about the grid. Storage in particular is a massive game changer on a scale that few in the industry understand.


You are certainly not alone in your beliefs, but it always amazes me which technologies get the benefit of the doubt and which are severely penalized by unfounded doubt. Solar and especially batteries are completely penalized and doubted in a way that defies any honest assessment of reality. The EIA and IEA forecasts are as terrible as they are because they reflect this unrealistic doubt (random blog spam link, but this observation is so old that it's hard to find the higher quality initial graphs):

https://optimisticstorm.com/iea-forecasts-wrong-again/

Similarly, nuclear power gets way too much benefit of the doubt, which should simply vanish after a small amount of due diligence on construction costs over its history. It's very complex, expensive, high labor, and has none of the traits that let it get cheaper as it scales.


https://www.reuters.com/sustainability/boards-policy-regulat...

10 new plants at USD 2.7 billion each. They take six years to build. USD 2/watt. They have standardised designs, and have invested in growing their manpower and know-how.
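Back of the envelope, those two figures are consistent with large single-reactor units (a minimal sketch of the arithmetic; the implied unit size is my inference, not something stated in the article):

    # What capacity does USD 2.7B per plant at USD 2/W imply?
    cost_per_plant_usd = 2.7e9
    cost_per_watt_usd = 2.0
    implied_capacity_gw = cost_per_plant_usd / cost_per_watt_usd / 1e9
    print(f"{implied_capacity_gw:.2f} GW per plant")  # ~1.35 GW, roughly one large reactor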


If you believe China's internal pricing numbers, sure....

But their actual investments in billions of dollars and in GW show that nuclear is not competing with solar, and is sticking around for hedging bets. They are deploying far far far more solar and storage than nuclear. And if those nuclear costs were accurate, then nuclear would be far preferable. $2/W is incredible, as in perhaps not credible, but it would also be far cheaper than solar.

And even if China figured out how to build that cheaply, it doesn't mean that highly developed countries will be able to replicate it. Nuclear requires a huge amount of high-skill, specialized labor, and doing that cheaply is only possible at certain levels of economic development. As economies develop to ever higher productivity, the cost of labor goes up, and it's likely that nuclear only ever makes sense in a very narrow band of economic development.


Original source of that observation was Auke Hoekstra nearly a decade ago:

https://x.com/AukeHoekstra/status/1730992987021226002


The only thing worse than launching the JVM from the command line, with its looooooooooooong and inexplicable load time, was hitting a web page and having it lock the browser for that amount of load time.

I remember a few decades ago somebody saying the JVM was incredible technology, and as a user and programmer I still have zero clue what the hell they could have been thinking was good about the JVM.

I hear that now, decades into Java, they have figured out how to launch a program without slowing a computer down for 10+ seconds, but I'll be damned if I find out. There are still so many rough edges that they never even bothered to try to fix about launching a .jar with classpath dependencies. What a mess!


I understand the sarcasm but this take is devoid of fact. Modern Java loads fast, and Java 21 has pretty good functional programming features. The ecosystem churns out language-level features at a pace and a budget that would put most large funded startups to shame.

Java is also the workhorse of the big data ecosystem and moves more money, either as product revenue or as transactions, than most nations' GDP. They didn't figure out startup times for 10+ years because they were busy dealing with Oracle and its messy management. I think it will simply continue to get better given that Java has endured through so many language fads. It has a ways to go but it will end up like SQL - here before we were alive and will be here when most of us are dead.


Mostly agreed that Java, warts and all, has gotten better, and will stick around. It's the new COBOL, for better or worse. (I still wouldn't want to use it voluntarily, but if someone pays me enough money, sure.)

However:

> Java is also the workhorse of the big data ecosystem and moves more money, either as product revenue or as transactions, than most nations' GDP.

The global financial system moves so much money around that comparisons to GDP are a bit silly. Financial transactions dwarf GDP by so much that even a bit player of a technology will facilitate more transactions than global GDP.

(And that's fine. Many of these transactions are offsetting, and it's a sign of an efficient market that the mispricings are so small that participants need giant gross flows to profit from them.

Somewhat related: a single high capacity fire hose (at about 75kg of water per second) moves about the same number of electrons as you'd need to power the total US electricity consumption at 120V. Obviously, your fire hose also sprays plenty of pesky protons which completely offset the electrical current from the electrons.)
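A rough sanity check of that fire hose comparison, as a Python sketch (the ~4,000 TWh/year US consumption figure and the 75 kg/s flow are my assumed inputs, so treat it as an estimate):

    AVOGADRO = 6.022e23          # molecules per mole
    E_CHARGE = 1.602e-19         # coulombs per electron
    WATER_MOLAR_MASS = 18.0      # grams per mole
    ELECTRONS_PER_H2O = 10       # 8 from oxygen + 2 from hydrogen

    # Electrons passing by per second in the hose stream
    hose_kg_per_s = 75
    hose_electrons = hose_kg_per_s * 1000 / WATER_MOLAR_MASS * AVOGADRO * ELECTRONS_PER_H2O

    # Electrons per second needed to carry average US electric power at 120 V
    us_avg_power_w = 4000e12 / 8760                   # ~4,000 TWh/year as average watts
    grid_electrons = us_avg_power_w / 120 / E_CHARGE  # I = P / V, then charge per electron

    print(f"hose: {hose_electrons:.2e} e-/s   grid at 120 V: {grid_electrons:.2e} e-/s")
    # Both land around 2.4-2.5e28 electrons per second, within a few percent of each other.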


> The global financial system moves so much money around that comparisons to GDP are a bit silly.

Agreed. I guess it's comparing production capacity to distribution capacity. Distribution capacity will equal n_tx * tx_amt. Having said that, another metric to look at is how much of software infrastructure is built on Java. Simply adding AWS to this equation proves the value added by Java backed systems. Hard to say that about any other language. Also we can look at versatility: Java is used to write very large data processing systems, CDN networks, API servers and even widely used consumer apps (IntelliJ products). It's very hard to find any other language that has had an outsized impact across domains. Of course the counter being Linux, written in C, powers all of the internet. True, but C doesn't have the cross domain impact that Java has had.

So I disagree with the assessment that Java is a terrible language performance- or productivity-wise, or it wouldn't have had this impact.


You could easily say the same about C/C++, as the operating systems and most databases are written in the language(s).

There's zero sarcasm in my comment.

The JVM is quite different from Java language features or Scala language features. I've written entire programs in JVM bytecode, without a compiler, and I see very little of value in it. A stack based machine? Why? Not a huge blocker, it's weird, but usable. The poor engineering around the JVM for many use cases? That's a blocker for me, and where are the alternatives in implementation that don't have the atrocious launch performance and interface for specifying class path and jars?

Java may be used a lot, but so is Windows. It's an accident of history, of early adoption and network effects, rather than being inherently good technology. Java, the language, made a very wide and broad swath of programmers productive, just as Windows lets a very wide and broad set of IT people run IT systems, without having to learn as much or know as much as they would need to with, say, Linux. But Java's low-barrier-to-entry is quite distinct from the weaknesses of the JVM...


> A stack based machine? Why?

The JVM being a stack-machine is probably the least controversial thing about it. Wasm, CPython and Emacs all also have a stack-based bytecode language. The value, of course, comes from having a generic machine that you can then compile down into whatever machine code you want. Having a register machine doesn't seem very useful, as it's completely unnecessary for the front-end compiler to minimize register usage (the backend compiler will do that for you).

Specifying classpath isn't fun, I agree with that. Launch performance isn't good, and is generally a consequence of its high degree of dynamicism and JIT compiler, though of course there are ways around that (Leyden).

> I've written entire programs in JVM bytecode, without a compiler, and I see very little of value in it

I agree, I also see very little value in manually writing JVM bytecode programs. However, compiling into the JVM classfile format? Pretty darn useful.


> Having a register machine doesn't seem very useful...

Requires fewer instructions, so potentially faster evaluation, which is good for short-lived programs that end before the JIT kicks in.

Stack machines require less space per instruction, however, which reduces the size of the program (faster to load).
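To make the stack-machine shape concrete, here's a minimal sketch using CPython's dis module (CPython was mentioned above as another stack-based bytecode VM; the JVM analogue via javap -c for a similar two-int add method is roughly iload_1 / iload_2 / iadd / ireturn):

    import dis

    def add(a, b):
        return a + b

    # The front end just pushes operands and applies operators; register
    # allocation is left entirely to the backend/JIT. Exact opcode names
    # vary by Python version (BINARY_ADD before 3.11, BINARY_OP 0 (+) after).
    dis.dis(add)
    #   LOAD_FAST    a        push a
    #   LOAD_FAST    b        push b
    #   BINARY_OP    0 (+)    pop both, push a + b
    #   RETURN_VALUE          pop and return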


> Java may be used a lot, but so is Windows. It's an accident of history, of early adoption and network effects, rather than being inherently good technology.

Going on a tangent: Windows is an interesting example to bring up, because the Windows versions everyone uses today have about as much to do with the DOS-based 'accident of history / early adoption' versions as Wine on Linux does.

It would perhaps be like today's JVM being register based, when the first versions were stack based.

I don't actually know how much the JVM has changed over time.


> I remember a few decades ago somebody saying the JVM was incredible technology, and as a user and programmer I still have zero clue what the hell they could have been thinking was good about the JVM.

I see what you mean. In that case we can add Scala backed systems as well to the JVM balance sheet. If we simply look at the JVM and the systems it backs, there's very little evidence that it isn't a marvel of technology. It powers more impactful systems than almost any other technology.


I guess in the era of SSDs (vs. spinning disks) and multi-GHz cores, the startup really isn't a big issue anymore?

I wonder how long Teams or Slack would take to launch when it's on a 5400rpm disk on a 2000 era computer...


I remember my 2000 computer could play an mp3. But that's it. The system was 100% utilized. No way it could even think about a modern gas guzzling app.

So your lack of technical knowledge or curiosity means Java wasn't incredible? That's certainly... a take. I'm almost curious: why did you end up holding strong beliefs like these, instead of actually investigating? As a curious person, when I hear something I don't know I like to learn - not just dismiss it. FYI, your .jar complaint is almost a decade out of date.

The JVM proved to the mainstream that a virtual machine could be as fast as (sometimes even faster than) a compiled binary. Because of that it took a lot of the market share of C/C++ in the 90s.

You got a buffer overflow safe language without compromising on speed. After it has been loaded, of course. But that's why Java had such a tremendous effect in Web services, where the load times are negligible compared to the run time.


Of course, eliminating buffer overflows is orthogonal to using a virtual machine.

You also got a language easier to use and learn than C/C++.

With universities almost immediately jumping to Java as an introductory language you got way more potential employees.


No, it's not? Using a VM is one way of preventing buffer overflows, it's not orthogonal.

You can prevent buffer overflows even when you don't use a VM. Eg it's perfectly legal for your C compiler to insert checks. But there are also languages like Rust or Haskell that demand an absence of buffer overflows.

You can design a VM that still allows for buffer overflows. Eg you can compile C via the low-level-virtual-machine, and still get buffer overflows.

Any combination of VM (Yes/No) and buffer-overflows (Yes/No) is possible.

I agree that using a VM is one possible way to prevent buffer overflows.


>I still have zero clue what the hell they could have been thinking was good about the JVM.

Running one packaged program across every platform. "Write once, run anywhere" was Sun's slogan for Java. (Though oftentimes it ended up being "debug everywhere.") As for the slow start part, programs tend to be either often-launched and short-running or seldom-launched and forever-running. I assume that because enterprise software falls into the latter category (and runtime performance > startup time + memory use), the focus was there.


> There are still so many rough edges that they never even bothered to try to fix about launching a .jar with classpath dependencies.

Feels like you are still living in the year 2010?


Java wasn't that bad for crappy 2D adventure games, but for the rest it was atrocious. Even TCL/Tk looked faster with AMSN than trying to use Java based software which was like trying to run Gnome 4 under 1GB of RAM.

What the heck are you writing about? You clearly have no clue about the last 2+ decades of Java or the topic in general, but felt the urgent need to emotionally vent because...?

We were there. It still was atrociously slow compared to most TCL/Tk stuff I've used. TCL and Tk improved a little on speed and it looks almost native in tons of software; meanwhile with Java, if you have to run some biggie software on legacy machines, you are doomed to watching the widgets redraw themselves in some cases.

And, on its Android cousin... pick any S60 based Symbian phone (or anything else)... and try telling us the same. The lag, the latency, the bullshit of Java we are suffering because, you know, for phone developers the switch from J2ME to another Java stack was pretty much an easy task, but it was hell for the user. Even Inferno would have been better if it were free and it had a mobile ecosystem developed for it.


Yeah, I'm going to re-learn Java just to spite the guy.

And it hogs all your RAM or runs out of heap space, and online help says to pass more -Xmxwhatever flags that flip it between those two.

Lithium is not scarce, and not a limiting factor for scaling up batteries.

There's more than enough lithium out there, with more discovered every month, and the perception that we are limited by lithium is mostly out there because certain media sources are trying to help out their fossil fuel friends by delaying the energy transition by a few years.

Whether battery ocean shipping containers make technical sense is a different question, but I wouldn't worry about lithium use!


All resources are "scarce" at very low price points, below which most nations are unable or unwilling to extract them.

Lithium, rare earth metals, and a bunch of others are only "scarce" because right now China is the only country willing to put up with the pollution levels that the cheap, dirty version of their extraction produces.

Everything can be produced cleanly, safely, etc... but that comes at a price.

It's like when employers complain that "nobody wants to work". That needs to be translated to "nobody wants to work for the low wages I'm willing to pay".


By the time we get around to building these it would likely be sodium ion anyway

Maybe not scarce in an absolute sense but what about whether there is a spare million tons lying around to make ship batteries?

What's more scarce is the factory capacity to build the batteries, and the scale of their supply chains. But even that is expanding by 10x every five years. We are currently building more than a TWh per year of batteries.

If there is demand for batteries in ships, it is going to be far smaller than for cars, which is currently 80% of battery demand (the rest is mostly grid storage). So ship batteries will at most slow the fall of battery pricing by a small amount.


This is indeed true, but doesn't fiber have a far longer lifetime than GPU heavy data centers? The major cost center is the hardware, which has a fairly short shelf life.

Well you still get the establishment of 1) large industrial buildings 2) water/electricity distribution 3) trained employees who know how to manage a data center

Even if all of the GPUs inside burn out and you want to put something else entirely inside of the building, that's all still ready to go.

Although there is the possibility they all become dilapidated buildings, like abandoned factories


The building and electrical infrastructure are far cheaper than the hardware. So much so that the electricity is a small cost of the data center build out, but a major cost for the grid.

If the most valuable part is quickly depreciating and goes unused within the first few years, it won't have a chance for long term value like fiber. If data centers become, I don't know, battery grid storage, it will be very very expensive grid storage.

Which is to say that while the early overbuilding of fiber was eventually useful, overallocation of capital to GPUs goes to pure waste.


I'm sure there are other "emerging" markets that could make use of the GPUs, I heard game streaming is relatively popular so you can play PC games on your phone for example. I'd guess things similar to that would benefit from a ton of spare GPUs and become significantly more viable.

>The building and electrical infrastructure are far cheaper than the hardware.

Maybe it's cheaper if we measure by dollars or something, but at the same time we lack the political will to actually do it without something like AI on the horizon.

For example, many data center operators are pushing for nuclear power: https://www.ehn.org/why-microsoft-s-move-to-reopen-three-mil...

That's one example among many.

So I'm hesitant to believe that "electricity is a small cost" of the whole thing, when they are pushing for something as controversial as nuclear.

Also the 2 are not mutually exclusive. Chip fabs are energy intensive. https://www.tomshardware.com/tech-industry/semiconductors/ts...


Nuclear is not very controversial, there are tons of places that would be very happy to have additional reactors, namely those with successful reactors right now. It's just super expensive to build and usually a financial boondoggle.

AI companies are saying they are trying to build nuclear because it makes them sound serious. But they are not going to build nuclear; solar and storage are cheaper, more flexible, and faster to build. The only real nuclear commitment is Microsoft reopening an old nuclear reactor that had become uneconomic to operate. Building anything new would be a five+ year endeavor, and that's if we were in a place with high construction productivity like China. In the US, new nuclear is 10 years away.

But as soon as Microsoft restarted an old reactor, all their competitors felt like they had to sound as serious, so they did showy things that won't result in solving their immediate needs. Everybody's renewable commitments dwarf their nuclear commitments.

AI companies can flaunt expensive electricity at high cost for high investor impact precisely because electricity is a small cost component of their inputs. It's a hugely necessary input, and the limiting factor for most of their plans, but the dollar amount for the electricity is small. The current valuations of AI assume that a kWh put towards AI will generate far far more value than the average kWh on the grid.


> Think more of “a pharma lab wants to explore all possible interactions for a particular drug”

Pharma does not trust OpenAI with their data, and they don't work on tokens for any of the protein or chemical modeling.

There will undoubtedly be tons of deep nets used by pharma, with many $1-10k buys replacing more expensive physical assays, but it won't be through OpenAI, and it won't be as big as a consumer business.

Of course there may be other new markets opened up but current pharma is not big enough to move the needle in a major way for a company with an OpenAI valuation.


My claim is that there will exist some company which pharma is willing to trust for AI research…they presumably trust Microsoft with their email today.

But my bigger claim is that ~half the Fortune 500 will be able to profitably deploy AI with spends in the tens or hundreds of millions per year quite soon. Not that pharma itself is a major contributor to that effect.


But for 'AI' to be a winner-take-all market, it seems that the winner would have to be using customer data to improve the 'AI'. Not only do you have to believe that one of these (relatively) under-capitalized upstarts can corral the money, but also that they can convince (enterprise) customers to 'fork over' their proprietary data to only one provider, and also that the provider can then charge a monopoly rent.

Those all seem possible, but I wouldn't assign greater than a 50% probability to any of them, and the valuations seem to imply near-certainty.


Siri was also completely miscommunicated from the beginning. I could never get Siri to do what I wanted, because I didn't realize that it had a very strict and narrow menu, but it never communicated what that menu was, and had no way of saying "here are the 5 things you can tell me about." And then there were the network communication issues where you don't know why you're not getting a response, or if Siri is going to work at all.

Every few years I would try to use it for a few days, then quit in frustration at how useless it was. Accidentally activating Siri is a major frustration point of using Apple products for me.


In game design we used to call this opacity “hunt the verb” in text adventures.

All chat bots suffer this flaw.

GUIs solve it.

CLIs could be said to have it, but there is no invitation to guess, and no one pretends you don’t need the manual.


For CLIs - most reasonable commands either have a `-h`, `--help`, `-help`, `/?`, or what have you. And manpages exist. Hunt the verb isn't really a problem for CLIs.

And furthermore - aren't there shells that will give you the --help if you try to tab-complete certain commands? Obviously there's the issue of a lack of standardization for how command-line switches work, but broadly speaking it's not difficult to have a list of common (or even uncommon) commands and how their args work.

(spends a few minutes researching...)

This project evidently exists, and I think it's even fairly well supported in e.g. Debian-based systems: https://github.com/scop/bash-completion.
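To illustrate how cheaply a CLI gets that kind of discoverability, here's a minimal Python sketch using argparse (the tool name and flags are made up for illustration): you get -h/--help for free, and invalid or missing parameters print the usage text instead of leaving the user to hunt for it.

    import argparse

    # Hypothetical tool; argparse auto-generates -h/--help from these definitions.
    parser = argparse.ArgumentParser(
        prog="frobnicate",
        description="Example CLI whose options are self-documenting.",
    )
    parser.add_argument("input", help="file to process")
    parser.add_argument("--level", type=int, default=1, help="how hard to frobnicate")

    # On missing or invalid arguments, argparse prints usage plus an error and exits (status 2).
    args = parser.parse_args()
    print(f"frobnicating {args.input} at level {args.level}")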


> For CLIs - most reasonable commands either have a `-h`, `--help`, `-help`, `/?`, or what have you. And manpages exist. Hunt the verb isn't really a problem for CLIs.

"Hunt the verb" means that the user doesn't know which commands (verbs) exist. Which a neophyte at a blank console will not. This absolutely is a problem with CLIs.


Discoverability is quite literally the textbook problem with CLIs, in that many textbooks on UI & human factors research over the last 50 years discuss the problem.


"Hunt the verb" can be alleviated to some degree for programs that require parameters by just showing the manpage when invalid or missing parameters are specified. It's highly frustrating when programs require you to go through every possible help parameter until you get lucky.


Per the thread OP, nobody pretends that CLIs do not need a manual.

Many users like myself enjoy a good manual and will lean into a CLI at every opportunity. This is absolutely counter to the value proposition of a natural language assistant.


I think this is a naming problem. CLI is usually the name for the interface to an application; a shell is the interface to the OS. Nonetheless, I agree with your post, but this might be part of the difficulty in the discussion.


To be super pedantic, wouldn’t the interface to a shell itself be a Command Line Interface? ;)


That’s the ambiguity that I think is tripping the discussion up a little. Also, the idea of a CLI/Shell/Terminal is quite coupled to a system, rather than services. Hence the whole ‘web service’ hope to normalise remote APIs that, if you squint hard enough, become ‘curl’ on the command line.

But the point is none of that is intrinsic or interesting to the underlying idea, it’s just of annoying practical relevance to interfacing with APIs today


Wow, I now feel old.


Yes. But I think the point is a good one. With CLI there is a recognition that there must be a method of learning what the verbs are. And there are many traditions which give us expectations and defaults. That doesn’t exist in the chat format.

Every time I try to interact with one of these llm gatekeepers I just say what I want and hope it figures out to send me to a person. The rest of the time I’m trying to convince the Taco Bell to record a customer complaint about how its existence itself is dystopian.


> And manpages exist.

For older tools, sure. Newer tools eschew man pages and just offer some help flag, even though there are excellent libraries that generate manpages like https://crates.io/crates/clap_mangen or https://crates.io/crates/mandown (for Rust, but I am sure most languages have one) without requiring you to learn troff.


Newer in maturity though, I'd say - not that only vintage tools have them and 'modern' tools don't. It's just not something people tend to consider early on, I think, but popular stuff gets there. (For one thing, at some point someone points it out in the issue tracker!)


All of this is true. “Invitation to guess” is the key phrase in my comment. CLIs present as cryptic, which is a UX _advantage_ over chat wrappers because the implied call to action is “go do some reading before you touch this”.

An AI wrapper typically has few actual capabilities, concealed behind a skeuomorphic “fake person” UX. It may have a private list of capabilities but it otherwise doesn’t know if it knows something or not and will just say stuff.

It really needs to be 100% before it’s useful and not just frustrating.


the comment you're replying to said:

> but there is no invitation to guess, and no one pretends you don’t need the manual

which is basically what you're saying too? the problem with voice UIs and some LLM tools is that it's unclear which options and tools exist and there's no documentation of it.


Siri does have documentation: https://support.apple.com/en-ca/guide/iphone/ipha48873ed6/io.... This list (recursively) contains more things than probably 95% of users ever do with Siri. The problem really boils down to the fact that a CLI is imposing enough that someone will need a manual (or a teacher), whereas a natural language interface looks like it should support "basically any query" but in practice does not (and cannot) due to fundamental limitations. Those limitations are not obvious, especially to lay users, making it impossible in practice to know what can and cannot be done.


Well, that's largely theoretical, and Siri needs far more input than is worth the trouble. It lacks context, and because of Apple's focus on privacy/security it is largely unable to learn who you are and do stuff based on what it knows about you.

If you ask Siri to play some music, it will go the dumb route of finding the track that seems to be the closest linguistic match to what you said (if it correctly understood you in the first place), when in fact you may have meant another track of the same name. Which means you always need to overspecify with lots of details (like the artist and album), and that defeats the purpose of having an "assistant".

Another example would be asking it to call your father, which it will fail to do unless you have correctly filled the contact card with a relation field linked to you. So you need to fill in all the details about everyone (and remember what name/details you used), otherwise you are stuck relying on rigid naming like a phone book. Moderately useful, and since it requires upfront work the payoff potential isn't very good. If Siri were able to figure out who's who just from the communications happening on your device, it could be better, but Apple has dug itself into a hole with their privacy marketing.

The whole point of a (human) assistant is that it knows you, your behaviors, how you think, what you like. So he/she can help you with less effort on your part, because you don't have to overspecify every detail that would be obvious to you and anyone who knows you well enough. Siri is hopeless because it doesn't really know you; it only uses some very simple heuristics to try to be useful. One example is how it always offers to give me the route home when I turn on the car, even when I'm only running errands and the next stop is just another shop. It is not only unhelpful but annoying, because giving me the route home when I'm only a few kilometers away is not particularly useful in the first place.


CLI + small LLM (I am aware of the oxymoron) trained on docs could be fun


If you like deleting all your files, sure. LLMs, especially small ones, have far too high a propensity for consequential mistakes to risk them on something like that.


I was thinking more in the context of interactive help that will just find and display relevant manual info (to get around the problem of "it remembered wrong"), rather than the vibe coder favourite of "just run what you hallucinated immediately".


The lack of an advertised set of capabilities is intentional so that data can be gathered on what users want the system to do (even if it can't). Unfortunately, this is a terrible experience for the user as they are frustrated over and over again.


Given that they made no apparent use of such information in practice, the unfortunate thing is that they had the idea to begin with.


This is a problem all over our industry:

- almost every search field (when an end user modifies the search for the second time instead of clicking one of the results, that should be a clear signal that something is off.)

- almost every chat bot (Don't even get me started about the Fisher Price toy level of interactions provided by most of them. And worse: I know they can be great, one company I interact with now has a great one and a previous company I worked for had another great one. It just seems people throw chatbots at the page like it is a checkbox that needs to be checked.)

- almost all server logs (what pages are people linking to that now return 404?)

- referer headers (your product is being discussed in an open forum and no one cares to even read it?)

We collect so much data and then we don't use it for anything that could actually delight our users. Either it is thrown away or, worse, it is fed back into "targeted" advertising that, besides being an ugly idea, also seems to be a stupid idea in many cases: years go by between each time I see a "targeted" ad that actually makes me want to buy something, much less actually buy something.


That explains why there is a limited set of recommended verbs in PowerShell.


Instead, you get to hunt the nouns.



Very well written. I'm wondering when the current "cli haxxxor assistant" fad will fade away and focus will move to proper, well-thought-out IDEs adjusted to the changed paradigm, instead of wasting resources. Well, maybe not completely wasting, as this is probably still part of the discovery process.


A lot of AI models also suffer this flaw.


>GUIs solve it.

Very charitable, but rarely true.


I get this pain with Apple in a bunch of different areas. The things they do well, they do better than anyone, but part of the design language is to never admit defeat, so very few of the interfaces will ever show you an error message of any kind. The silent failure modes everywhere get really frustrating.

I’m looking at you, Photos sync.

EDIT: just noticed this exact problem is on the front page in its own right (https://eclecticlight.co/2025/11/30/last-week-on-my-mac-losi...)


> The silent failure modes everywhere get really frustrating.

I literally just experienced this with RCS failing to activate. No failure message; dug into logs, says userinteractionrequired. Front page of HN, nobody knows; Apple corp response: 'that's interesting, no you can't talk to engineering'.

I read the RCS spec definition document to fall asleep to, after the board swap and the call saying they won't work on it since the issue was resolved. It answers exactly what that meant: Apple never implemented handling for it. My followup post: https://wt.gd/working-rcs-messaging


Bingo. My wife’s phone failed to back up to iCloud. To be fair, there’s an error message. However, the list of what takes up space does not show what’s actually taking up space, such as videos texted to or from you (which can easily be multiple gigs as they add up over a year or two).

The list didn’t show the god damn GoPro app, which was taking up 20GB of space from downloaded videos. I guessed it was the problem because it showed up in the device storage list, but it was literally not reported when you look at the list of data to back up.

iMessage is another great example of a failure. I changed my iMessage email and didn’t receive messages in a family group chat until I noticed — I had to text the chat before stuff started coming through. Previously sent messages were never delivered. And they all have my phone number, which has been my primary iMessage for LITERALLY over a decade. iMessage’s identity system is seriously messed up on a fundamental level. (I’ve had numerous other issues with it, but I’ll digress.)


It’s messed up, but it can be fixed by turning off iMessage and MMS in settings.app and then turning it back on. It’s an old bug. Since it hasn’t been fixed, I’m guessing the solution introduces more problems than it solves for whatever reason.


I don't even use Photos, except in extreme situations. It was such a major UX downgrade from iPhoto that I could never get it to work without lots of mystery meat guessing, and every interaction with it was so unpleasant because of that.

Knowing that a company had competent product designers that made a good product, but then shitcanned the working product for a bunch of amateur output from people that don't understand the very basics of UI, from the one company that made good UI its primary feature for decades... well it just felt like full on betrayal. The same thing happened with the absolutely shitty Apple Music, which I never, ever use, because it's so painful to remember what could have been with iTunes...


Just think that they marketed Photos as a worthwhile replacement for Aperture as well.

I remember advising many photographer friends on using Aperture for photo library management. Now I feel so bad for ever recommending that. I mean, Lightroom now has a stupid subscription, but using Apple software was kind of the point: avoiding the risk of software becoming too expensive or bad, because the hardware premium funds the development of good software.

Now you get to pay more for the hardware but you have to deal with shitty or expensive software as well. Makes no sense.


My biggest pet peeve with macOS Music is that you can't go back a track in the "infinite play" mode. Not only can you not go back to the previous track, but you can't even go back to the beginning of the song - the button is just greyed out. It's a purely arbitrary limitation because the same functionality works fine in iOS.

I don't know why it bugs me so much, but I'm at the point of moving my library into a self-hosted Navidrome instance so I can stop using Music.


Photos is horrific for this. No progress, no indicators. And what little status you get has no connection to reality.

Will it sync? When? Who knows? You’re on WiFi with a full battery and charging? So? Might be a minute, might be an hour. Oh, you restarted Photos? Who cares? Not Photos.


Agree that new Photos is abysmal compared to what it was before. And that's before the macOS only features that you don't know are macOS only features (like naming photos! Seriously!)


There's a lot of arcane lore about how to get it to sync. Closing all applications, restarting, then starting photos, then hiding the photos main window, then waiting, was how I got it to work last time. It worked twice, YMMV. If there's a cli alternative, please tell me.


You're not saying it, but ugh, yeah: anything along those lines of magic incantations is the very antithesis of what Apple claims to embody.

Ironically this manages to break all four of Apple's famous UI principles from Bruce Tognazzini: discoverability, transparency, feedback and recovery


Yeah, it's a classic CLI v GUI blunder. If you don't know exactly what the commands are, the interface is not going to be particularly usable.

I've found I appreciate having Siri for a few things, but it's not good enough to make it something I reach for frequently. Once burned, twice shy.


Siri does have an "Ask me some things I can do" feature, but the problem is it's way too big; most of the things are things I don't care about ("Ask me for a joke", "Ask me who won the Superbowl last night"); and a lot of times, even for things it can do, its comprehension is just poor. "Hey Siri, play me Eine kleine Nachtmusik by Mozart". "OK, here's Gonna Pump Big by Diddy Q[1]". "Hey Siri, tell Paul I'm on my way". "OK, calling Mike Smith".

[1] Made up title


This is just the conversational interface issue. You need the system to be able to do most of the things you would expect a human to be able to do (e.g. if you're talking to your phone, you'd expect it to be able to do most phone things). If the conversational system can only do a small subset of those, then it just becomes a game of "discover the magical incantation that will be in the set of possibilities", and becomes an exercise in frustration.

This is why LLMs are the first conversational interface to actually have a chance of working, once you give them enough tools.


> once you give them enough tools

Are there solutions to the error rates when picking from dozens or even hundreds of tools that I'm not aware of?


Yes, there are a few. Anthropic released one just last week.


Any hints on how it works? Or is it unscrutinizable secret sauce?


I didn’t know for years that you can ask it to do things remotely over SSH.


Can you explain?


You can run shortcuts using Siri. You can create a shortcut with an action that executes via SSH: https://matsbauer.medium.com/how-to-run-ssh-terminal-command....

