Kuinox's comments

Never managed to make it work in the background.

The French TGV managed to reach 574 km/h, so 300 km/h is not a hard limit. https://www.youtube.com/watch?v=EOdATLzRGHc


It may seem like a terrible idea, but I think it's good for running quick scripts. It means you can delegate the uninteresting parts the AI is likely to succeed at.

For example, connecting to endpoints, etc.; then the logic of your script can run.


> The American and French revolutions originated in the middle classes.

I don't know about the American Revolution, but that's wrong for the French Revolution. I'll link to French Wikipedia pages since they are far better on the subject. https://fr.wikipedia.org/wiki/%C3%89tats_g%C3%A9n%C3%A9raux_... Here we can see the first National Assembly was half nobility and clergy; the third estate was the other half.

https://fr.wikipedia.org/wiki/Tiers_%C3%A9tat > Moreover, the deputies of the third estate at the Estates General essentially represented the bourgeoisie[2].

Which indicates that the majority of the third estate representatives were bourgeois.


> Which indicates that the majority of the third estate representatives were bourgeois.

The bourgeois are the middle class.


They were the middle class, but what people think of as the middle class today doesn't apply to what it was back then.

> The bourgeoisie are a class of business owners, merchants and wealthy people, in general, which emerged in the Late Middle Ages, originally as a "middle class" between the peasantry and aristocracy. They are traditionally contrasted with the proletariat by their wealth, political power, and education, as well as their access to and control of cultural, social, and financial capital.

https://en.wikipedia.org/wiki/Bourgeoisie

Today the meaning of bourgeoisie still applies to "business owners, merchants and wealthy people", but they are now seen as upper class.


Yes, the proletariat has been brainwashed and convinced that they're the middle class, while the middle classes have become the new aristocracy. The disappearance of hereditary nobility and rise of liberalism (which brings along separation of church and state, which removes the power of the clergy) made the old distinctions less useful, so we have the modern lower (proletariat), middle (skilled workers), and upper (bourgeois) classes.


Are you using some weird extension? Never had this on my Pixel 6.


The only extension is uBlock Origin, and most of the settings are default.


Yet translation was the main application for applied language machine learning.


Who would've thought that to solve natural language translation, one would first need to solve... natural language.

All those arguments about agents and hallucinations kind of distracted people from noticing we've accidentally built a universal translator.


Google Translate has had hallucinations for at least 10 years. Some translations simply change depending on a punctuation mark. But people only complain now that they've heard about AI.

Of course it's not perfect, but I agree that we didn't have machine translation this good before.


As someone both exposed to this new wave of LLM style translation in various media, and someone who has background in translation, no we didn't.


Could you please explain briefly then why my statement is wrong? What are the fundamental challenges not addressed by LLMs today? Do you think the whole approach has insurmountable roadblocks ahead, or is it more of a matter of refinement?


Context-dependent phrases, from simple pronouns to whole domain-specific terms, are still randomly wrong, sometimes appallingly so. Hallucinations still happen. The auto-AI translation YouTube uses is, bluntly, horrid. Any jokes, even obvious ones, are still fumbled frequently.

LLM-based translation looks more convincing but requires the same level of scrutiny that previous tools did. From a workflow POV they only added higher compute costs for very questionable gains.


> The auto-AI translation YouTube uses is, bluntly, horrid. Any jokes, even obvious ones, are still fumbled frequently.

YouTube auto-translations are horrible indeed, and I say that as someone who has to live with the fact that YouTube decides to badly translate titles from a language I understand into Spanish, because bilingual people don't exist, I suppose. But that is because they use some dumb cheap model to make the translations; probably not even a Gemini-based model.


> Hallucinations still happen. The auto-AI translation YouTube uses is, bluntly, horrid. Any jokes, even obvious ones, are still fumbled frequently.

I've seen that too, but these were all dedicated translation tools and auto-translate functionality.

My benchmark is against SOTA LLMs used directly. I.e. I copy the text (or media) in question, paste directly to ChatGPT or Gemini (using the best model on basic paid tier), and ask for translation. Not always perfect, but nearly so - and they naturally ingest additional context if available - such as the surrounding text, or title/ID/URI of the document/website you're looking at, or additional explanations in the prompt - and make very good use of it. This has always been missing in dedicated tools, historically built around the mistaken assumption that translation is merely a function of input text and pair of language designators (from, to). The shorter the input, the more apparent it becomes how much context matters.
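The difference from that (text, from, to) interface can be sketched as prompt construction. This is a hypothetical illustration of the idea, not any particular tool's API; the helper name and wording are my own:

```python
# Classic translation APIs take only (text, source_lang, target_lang).
# The LLM approach folds in whatever disambiguating context is at hand:
# surrounding text, the page title, the URL, domain hints, etc.

def build_translation_prompt(text, target_lang, context=None):
    """Build a chat-style prompt that gives the model extra context.

    `context` is optional free-form hints that a classic translation
    API has no slot for.
    """
    parts = [f"Translate the following text to {target_lang}."]
    if context:
        parts.append(f"Context to disambiguate meaning: {context}")
    parts.append(f"Text: {text}")
    return "\n".join(parts)

# Short inputs are where context matters most: "Run" alone is
# ambiguous (verb? noun? UI label?), but a hint pins down the reading.
prompt = build_translation_prompt(
    "Run",
    "French",
    context="Button label in a CI dashboard that starts a build",
)
print(prompt)
```

The point is only that the prompt is where the extra context lives; the dedicated tools had nowhere to put it.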

RE YouTube and such - or any auto-transcription in video calls I've seen - I can't explain that by anything other than service providers cheaping out on this.

> From a workflow POV they only added higher compute costs for very questionable gains.

Regarding the costs - I imagine they may be an issue at scale, but for regular use (on-demand translation of individual passages, documents, recordings), it feels like it shouldn't be that noticeable anymore. You don't need to run GPT-5 for everything, some models you can run client-side already seem decent enough, and they keep improving.

> LLM based translation looks more convincing but requires the same level of scrutiny that previous tools did.

That's fair. Ultimately, if you don't know both languages, you can only trust the translation as much as you trust the translator (human or otherwise). We'll have to get a feel for this as much as we did with Google Translate, et al. In my experience, whenever I can verify them, results from LLMs are already vastly superior to prior art.

--

Tangent, and why I started considering LLMs as solving universal translation in the first place: 6 months ago, when I needed to talk with someone with whom I had zero language overlap, I tried several well-known translation apps (notably Google and Samsung), and none could manage - but then, on a whim, I just asked ChatGPT (in "advanced voice" mode) to "play a game" where it listens in and repeats whatever was just said in language A, but translated to language B, and vice versa -- and it worked flawlessly on first try.


Permaculture is the art of picking words that sound logical and smart, doing studies with n=1 to determine what is better, erecting rules to follow based on that, and the communities that form around that. It's the same thing for computers.


No, it's not. It's Permanent Agriculture. Designing and growing sustainable food producing ecosystems.


Permaculture is a contraction of permanent culture.


No, it's not. It's Permanent Agriculture. It's a farming methodology.


https://en.wiktionary.org/wiki/permaculture

> Blend of permanent +‎ agriculture, coined by Bill Mollison and David Holmgren in 1978.


An MQTT broker just means server; that's MQTT terminology.


Dark humor is like food.

Not everybody gets it.


Here it's more Poe's law.


Maybe?

Poe's law is always described as being about parodies of extreme viewpoints, not about perceived misunderstandings of what is being commented on.

I don't think that the viewpoint that these people will sell your data in a heartbeat is extreme.


I'm interested to see if they will try to tackle the segregation of human- vs AI-written code. The downside of agents is that they make too many changes to review; I prefer being able to track which changes I wrote or validated versus the code the AI wrote.



You click on laptop and somehow that's a gotcha; I click on single thread and the M5 is at the very top. What is that?


What you are seeing is basically the lithography node used to make the CPU. Since Apple books more capacity than anyone else, they get their chips 5-6 months ahead of the market; after that, you'll see chips with similar per-core performance.


And what about the M3 Ultra, that sits at number 3 and came out ten months ago? Why was it not beaten five months ago? Might I add that the M3 Ultra is on an older node than the M5. And what about the A19 Pro, which is better at single core than every desktop chip in the world, and happens to be inside a phone!

Apple has the best silicon team in the world. They choose perf per watt over pure perf, which means they don't win on multi-core, but they're simply the best in the world in the most complicated, difficult, and impossible metric to game: single core perf.


Its single-thread bench score is 0.6% better than the Intel Core Ultra 9 285K's, which has a lower TDP and was released 6 months before. Both use the same lithography node. If you look at chips by their lithography node, Apple silicon is the same as the others...


Apple's M-series chips are fantastic, but I do agree with you that it's mostly a combination of newer process and lots of cache.

Even when they were new, they competed with AMD's high end desktop chips. Many years later, they're still excellent in the laptop power range - but not in the desktop power range, where chips with a lot of cache match it in single core performance and obliterate it in multicore.

https://www.cpu-monkey.com/en/compare_cpu-apple_m4-vs-amd_ry...


Alternatively, in the same socket and without the 3D stacked cache: https://www.cpu-monkey.com/en/compare_cpu-apple_m4-vs-amd_ry... with double the cores.

And in laptop form, compared with an M4 Max: https://www.cpu-monkey.com/en/compare_cpu-apple_m4_max_14_cp...


> Apple's M-series chips are fantastic, but I do agree with you that it's mostly a combination of newer process and lots of cache.

Why does it matter how they achieved their thunderous performance? Why must it be diminished to just a boatload of cache? Does it matter from which implementation detail you got the best single-core performance in the world? If it's just way more cache, why isn't Intel just cranking up the cache?


Intel IS cranking up the cache. Unfortunately, Intel chose to allocate significant resources to improving their fabs instead of immediately going to TSMC and pumping out a competitive chip, and in the years where they were misspending their resources, their competitors were gobbling up market share. Their new stuff that's competitive with Apple is all built by TSMC.

It's worth noting that Intel is not a stranger to building CPUs with lots of cache - they just segmented it into their server chips and not their consumer ones.

It matters because it is useful to understand why a given chip is faster or slower than its competitors. Apple didn't achieve this with their architecture/ISA or with some snazzy new hardware (with some notable exceptions like their x86 memory emulator), they did it by noticing how important cache was becoming to consumer workloads.


Apple was not tasked with producing the very best supercomputer with the ARM architecture.

That was Fujitsu. They each have their own specialties.

https://en.wikipedia.org/wiki/Fugaku_(supercomputer)


But at the end of the day, the fact is the best gear is made by Apple.


Maybe. But then you have to use macOS, which is by far not the best OS.


macOS is the worst OS, except when compared to all of the other ones.


I'm sure there are some OSes worse than macOS, but that's not a high bar to clear.


Windows and Linux come to mind. I'm sure there are probably others.


It depends at which point in time and what you consider is the best gear.

