
Not only has OpenAI's market share gone down significantly in the last six months, but Nvidia has also been using its newfound liquid funds to train its own family of models[1]. An alliance with OpenAI just makes less sense today than it did six months ago.

[1] https://blogs.nvidia.com/blog/open-models-data-tools-acceler...



> Nvidia has been using its newfound liquid funds to train its own family of models

Nvidia has always had its own family of models; it's nothing new and not something you should read too much into, IMHO. They use those as templates that other people can leverage, and they are of course optimized for Nvidia hardware.

Nvidia has been training models in the Megatron family, as well as many others, since at least 2019, and Megatron was used as a blueprint by many players. [1]

[1] https://arxiv.org/abs/1909.08053


Nemotron-3-Nano-30B-A3B[0][1] is a very impressive local model. It is good with tool calling and works great with llama.cpp/Visual Studio Code/Roo Code for local development.

It doesn't get a ton of attention on /r/LocalLLaMA but it is worth trying out, even if you have a relatively modest machine.

[0] https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B...

[1] https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF
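
For anyone who wants to poke at it outside of an editor integration, here is a minimal sketch using llama-cpp-python against one of the GGUF quants linked above. The filename and settings are illustrative, not the exact ones from the repo:

    # Minimal sketch: chat with a local Nemotron GGUF via llama-cpp-python.
    # Assumes `pip install llama-cpp-python`; the filename below is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Nemotron-3-Nano-30B-A3B-Q4_K_M.gguf",  # path to your downloaded quant
        n_ctx=8192,        # context window; raise it if you have the memory
        n_gpu_layers=-1,   # offload as many layers as possible to the GPU
    )

    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a Python function that reverses a linked list."},
        ],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])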


Some of NVIDIA's models also tend to have interesting architectures. For example, they use the Mamba architecture instead of relying purely on transformers: https://developer.nvidia.com/blog/inside-nvidia-nemotron-3-t...


Deep SSMs, including the entire S4 to Mamba saga, are a very interesting alternative to transformers. In some of my genomics use cases, Mamba has been easier to train and scale over large context windows, compared to transformers.
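
To make the contrast concrete, here is a toy sketch of a plain (non-selective) state-space scan; real Mamba adds input-dependent parameters and a hardware-aware parallel scan, but the key point is the O(L) recurrence over the sequence versus attention's O(L^2):

    # Toy linear state-space model (SSM) scan, for illustration only.
    # Real Mamba makes A, B, C input-dependent ("selective") and uses a
    # fused scan kernel; this just shows the O(L) recurrence over tokens.
    import numpy as np

    def ssm_scan(x, A, B, C):
        """x: (L, d_in), A: (d_state, d_state), B: (d_state, d_in), C: (d_out, d_state)."""
        h = np.zeros(A.shape[0])
        ys = []
        for t in range(x.shape[0]):     # one constant-cost update per token
            h = A @ h + B @ x[t]        # hidden state carries long-range context
            ys.append(C @ h)
        return np.stack(ys)

    L_seq, d_in, d_state, d_out = 1024, 8, 16, 8
    x = np.random.randn(L_seq, d_in)
    A = 0.9 * np.eye(d_state)                        # stable toy dynamics
    B = 0.1 * np.random.randn(d_state, d_in)
    C = 0.1 * np.random.randn(d_out, d_state)
    print(ssm_scan(x, A, B, C).shape)                # (1024, 8)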


It was good for like, one month. Qwen3 30b dominated for half a year before that, and GLM-4.7 Flash 30b took over the crown soon after Nemotron 3 Nano came out. There was basically no time period for it to shine.


It is still good, even if not the new hotness. But I understand your point.

It isn't as though GLM-4.7 Flash is significantly better, and honestly, I have had poor experiences with it (and yes, always the latest llama.cpp and the updated GGUFs).


Genuinely exciting to be around for this. Reminds me of the time when computers were said to be obsolete by the time you drove them home.


I recently tried GLM-4.7 Flash 30b and didn’t have a good experience with it at all.


It feels like GLM has either a bit of a fan club or maybe some paid supporters...


I find the Q8 runs a bit more than twice as fast as gpt-120b, since I don't have to offload as many MoE layers, but it is just about as capable if not better.


Oh those ghastly model names. https://www.smbc-comics.com/comic/version


Do they have a good multilingual embedding model? Ideally, with a decent context size like 16/32K. I think Qwen has one at 32K. Even the Gemma contexts are pretty small (8K).


Nemo is different to Megatron.

Megatron was a research project.

Nvidia has professional services selling companies on using Nemo for user-facing applications.


it's a finetune..


And the whole AI craze is becoming nothing but a commodity business where all kinds of models are popping in and out, one better this update, another better the next, etc. In short, they're basically indistinguishable to the average layman.

Commodity businesses are price chasers. That's the only thing to compete on when product offerings are similar enough. AI valuations are not set up for this. AI valuations assume 'winner takes all' outcomes, and those assumptions are clearly now falling apart.


It doesn't really feel like AI for coding is commoditized atm.

As problematic as SWE-Bench is as a benchmark, the top commercial models are far better than anything else and it seems tough to see this as anything but a 3 horse race atm.


When you have more users you get more data to improve your models. The bet is that one company will be able to lock into this and stay at the top constantly.

I'm not saying this is what will happen, but people obviously bet a lot of money on that.


Problem is you can easily train one model on the other. And at the end of the day everyone has access to enough data in one way or another.


Yeah. Even if OpenAI models were the best, I still wouldn't use them, given how despicable the Sam Altman persona is (constantly hyping, lying, asking for no regulations, then asking for regulations, leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims...). I know other companies are not better, but at least they have a business model and something to lose.


> leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims

Point me to these? Would like to have a look.


Sorry, not leaked emails; it's Greg Brockman's diary and leaked texts.

I didn't find the original lawsuit documents, but there's a screenshot in this video: https://youtu.be/csybdOY_CQM?si=otx3yn4N26iZoN7L&t=182 (timestamp is 3:02 if you don't see it)

There are more details about the behind-the-scenes story and Greg Brockman's diary leaks in this article: https://www.techbuzz.ai/articles/open-ai-lawsuit-exposed-the... Some documents were made public thanks to the Musk-OpenAI trial.

I'll let you read a few articles about this lawsuit, but basically they said to Musk (and frankly, to everyone else) that they were committed to the non-profit model, while behind the scenes thinking about "making the billion" and turning for-profit.


Hate that bringing fraud to justice means paying out to the wealthiest person on the planet....


Justice should be blind


Much appreciated!

Edit: Ah, so the fake investment announcements started from the very beginning. Incredible.


Literally everyone raising money is just searching for the magic combo of stuff to make it happen. Nobody enjoys raising money. Wouldn’t read that much into this.


I agree. Especially the whole Jony Ive and Altman hype video in that coffee shop, which was absolutely disgusting. Oh, how far their egos have been inflated, which leads to very bad decision making. Not to be trusted.


Oh, could I get a link to that one?




Just fantastic. Hadn't seen it. Thanks for sharing that!


I think there are two things that happened

1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.

2. Sam Altman is profoundly unlikable.


> Consumers have mostly rejected AI.

People like to complain about things, but consumers are heavily using AI.

ChatGPT.com is now up to the 4th most visited website in the world: https://explodingtopics.com/blog/chatgpt-users


We’ve seen many times that platforms can be popular and widely disliked at the same time. Facebook is a clear example.

The difference there is it became hated after it was established and financially successful. If you need to turn free visitors into paying customers, that general mood of “AI is bad and going to make me lose my job/fuck up society” is yet another hurdle OpenAI will have to overcome.


Yeah, every single big website is totally free. People have complex emotions toward Facebook, Instagram and TikTok, but they don't have to pull out their wallet. That's a bridge too far for many people.


Are they paying though? Reddit was also popular for a long time and didn't make much money.

My point was more that it seems this wave of AI is more profitable if you're in B2B vs. B2C.


It's incorrect to say that consumers have rejected AI.

The strategy here is more valid, in my opinion. The value in AI is much more legible when the consumer uses it directly from their chat UI than whatever enterprises can come up with.

I can suggest many ways that consumers can use it directly from a chat window. The value from enterprise use is actually not that clear. I can see coding, but that's about it. Can you tell me ways in which enterprises can use AI that are not just providing their employees with ChatGPT access?


When I hear about people using ChatGPT, they are usually just using it as a search engine that delivers summarized results. The average person wouldn't use email if they had to pay for it; good luck making money off all of those visitors without just becoming another ad tech company competing with the other ad tech companies.


#2 cannot be understated


Was the golden boy for a while? What shifted? I don't even remember what he did "first" to get the status. Is it maybe just a case of familiarity breeding contempt?


It is starting to become clear to more and more people that Sam is a dyed in the wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.


Advertising Generated Income?



Damm this is smart. I like it


Someone else said it first here


it's even worse than that and i hope people recognize that it's not that he's a True Believer (though the TBs are often hilarious)

it's that he has no ethics to speak of at all. it's not that he's out of touch, it's that he simply does not care.


Why would him believing in AGI make people dislike him?

He is clearly disliked by a lot of tech community, I don't see his AGI belief as a big part of that.


Well, in the world where AGI is created and it goes suboptimally, everybody gets turned into computronium and goes extinct, which is a prospect some are miffed about. And, in the world where it goes well, no decision of any consequence is made by a human being ever again, since the computer has planned every significant life event since before their birth. Free will in a very literal sense will have been erased. Sam being a true believer means he is not going to stop working until one of these worlds comes true. People who understand the stakes are understandably irked by him.


Well, he made the mistake many billionaires do: he opened his mouth with his own thoughts instead of just reading what the PR department told him to read.


All the manipulation and lying that got him fired.


He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.

And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.

Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.


He was once a big pin in Y Combinator (I think kind of ran it?)... Paul Graham thought he was great for YC.

Interesting that he's got as far as he has with this issue. I don't think you can run a company effectively if you don't deal in truth.

Some of his videos have seemed quite bizarre as well, quite sarcastic about concerns people have about AI in general.


> He was once a big pin in Y Combinator (I think kind of ran it?)... Paul Graham thought he was great for YC.

And today it seems everyone at YC hates him but pretends not to.


Saw Empire of AI in a bookshop recently but held off buying, as I wasn't sure if it was going to be surface level. You'd recommend?


Understandable worry, but it's not surface-level at all. Karen Hao is a great journalist. Highly recommend.


It's sort of two books combined into one: The first one is the story of OpenAI from the beginning, with all the drama explained with quotes from inside sources. This part was informative and interesting. It includes some details about Elon being convinced that Demis Hassabis is going to create an evil super-intelligence that will destroy humanity, because he once worked on a video game with an evil supervillain. I guess his brain was cooked much earlier than we thought.

The second one is a bunch of SJW hand-wringing about things that are only tangentially related, like indigenous Bolivians being oppressed by Spanish Conquistadors centuries ago. That part I don't care for as much.


Not a case; society calls them sociopaths. Which includes power struggles, manipulation, and psychological abuse of the people around them.

For example, Sam Altman and OpenAI hoarding 40% of the RAM supply as unprocessed wafers stored in warehouses, bought with magical bubble-investor money, for GPUs that don't exist yet and that they will not be able to install (because there isn't enough electricity to feed such botched tech) in data centers that are still to be built, with the intention of squeezing the competition's supply, and everyone else on the planet in the process, for at least two years.


> Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.

For a brief moment I thought you were talking about Elon there


He is a sociopath. It's ok to say it.


Yep, the various -path labels get overused, but in this case he's the real deal; something is really, really off about him.

You can see it when he talks, he's clearly trying (very unconvincingly) to emulate normal human emotions like concern and empathy. He doesn't feel them.

People like that are capable of great evil and there's a part of our lizard brains that can sense it


Sounds like when people are politicking he just takes a “whatever” approach haha. That seems reasonable.


No, that's not what he's doing.


Cringey to watch their interviews.


*Overstated


oops, yup.


Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.


Scariest part is it probably won't, and he'll be back in five years with something else.


Do you see Sam Bankman-Fried getting reinstated?

I don't and I see Sam Altman as a greater fraud than that (loathsome) individual. And I don't think Sam gets through the coming bubble pop without being widely exposed (and likely prosecuted) as a fraudster.


People lying to everyone lie to themselves the most


Instead of anecdotes about “what you saw on TikTok and Reddit”, it's really not that hard to look up how many paid users ChatGPT has.

Besides, OpenAI was never going to recoup the billions of dollars through advertising or $20/month subscriptions.


Is CEO likeability a reliable predictor?


I think it depends on how visible the CEO is to (potential) customers. In this case, very visible: he is in the media all the time.


They pay to be in the media


good point.

I don't think it is at all

The CEO just has to have followership: the people who work there have to think that this is a good person to follow. They don't even have to "like" him.


Ask Tesla about the impact of their CEO's likeability on their sales.


> OpenAI bet largely on consumer

Source on that?

Lots of organizations offer ChatGPT subscriptions, and Microsoft pushes Copilot, which uses GPT models, as hard as it can.


Those who are publicly hating on LLMs still use them though, even for the stuff they claim to hate, like writing fanfic.


HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.


> and about to be an order of magnitude more so, once they add ads.

How do you figure?


You have to give credit to Sam, he’s charismatic enough to the right people to climb man made corporate structures. He was also smart enough to be at the right place at the right time to enrich himself (Silicon Valley). He seems to be pretty good at cutting deals. Unfortunately all of the above seems to be at odds with having any sort of moral core.


Ermmm what?

He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he has caused OAI's demise.

Perhaps he's ok with that as long as OAI goes down with him. Would expect nothing less from him.


All this drama is mostly irrelevant outside a very narrow and very online community.

The demise of OpenAI is rooted in the bad product market fit, since many people like using ChatGPT for free, but fewer are ready to pay for it. And that’s pretty much all there is to it. OpenAI bet on consumers, made a slopstagram that unsurprisingly didn’t revolutionise content, and doesn’t sell as many licenses as they would like.


Imo they'll soon make a lot of money with advertising. Whenever ChatGPT brings you to some website to buy a product, they will get some share.


good luck with that when there's Gemini, which does it far better


Ilya took a swing at the king and missed. It would have been awkward to hang around after that debacle.


Naive to call Sam Altman unlikeable.


I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).

He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).

I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.

I hope OpenAI continues to dominate even if the margins of winning tighten.


Elon is one of the most unlikable people on the planet, so I wouldn't consider him much of a bar.


It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.

Now I have him muted on X.


Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them deciding that his own "radically awesome" personality doesn't need any filtering.

Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.


Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.

There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"


The watershed moment for me was when he pretended to be a top-tier gamer in Path of Exile. Anyone in the know saw right through it, and honestly it makes me wonder whether we only spotted this behavior because it's "our turf", while he and people like him just operate this way in absolutely everything they do.


Yeah, Putin is probably the worst billionaire. Elon might be a close second though, or maybe it's a US politician if they actually are a billionaire.


Peter Thiel, who thinks the Pope or Greta Thunberg might be the antichrist, and that freedom is incompatible with democracy.

https://www.nationalmemo.com/peter-thiel-antichrist


I think you did not understand his argument. He said it is a great danger that people might unite behind an antichrist-like figure.


Exactly, other billionaires having calmer personality types does not make them less nuts.


> Now I have him muted on X.

Props to him for letting people mute him on his own platform. The issue with Sam and OpenAI is that their bias on any controversial topic can't be switched off.


But you're still on Twitter and calling it X...


So? I bet you think you're clever. You're using platforms daily that are run by insane people. Don't forget that the internet itself was a military invention.


Hah, you beat me to it, serves me right for writing longer comments. Have an upvote ;)


Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.


That Dyson sphere interview should've been a wake up call for the OpenAI faithful.


I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.


Exactly. Thanks for getting it, it is refreshing to encounter people who get it. Good luck with everything!


You too!


He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has turned full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.


Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.


Dario unsettles me the most; he kinda reminds me of SBF. I wouldn't be surprised if... well, they're all bad, it's hard to stack rank them.


I don't think he's good, but afaik he isn't trying to make everyone psychologically dependent on Claude and releasing sex bots.


He and SBF are both big into effective altruism, and SBF gave Anthropic their seed funding, so yeah, that checks out.


There's nothing wrong with effective altruism (making money to give it away); the problem is SBF.


Of course there is. The whole thing is a cult, designed to pull in suckers.


Your argument is guilt by association, and association with something that isn't morally wrong at that; it's just a way to try to spend money on charity effectively. You can take a lot of ideas too far and end up with a bad result, of course.


There are four, though; where does Demis fit in the stack rank?


TBH, I hadn't heard of him until now. Looks like he's had a crazy legit professional career. I'd put him at the top for his work at Bullfrog alone.


Demis is the reason Google is afloat with a good shot at winning the whole race. The issue currently is that he isn't willing to become the Alphabet CEO. IMHO he'll need to for the final legs.


I’d hate the job too. It would be interesting to see how Google might evolve with him at the helm, for sure.


Pfft. Dario has been pushing nonsense fear mongering that never comes true.


> I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.

If you nail the bar to the floor, then sure, you can pass over it.

> He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.

I don't know what your definition of extreme is, but by mine he's pretty extreme.

> I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.

All of them suffer from thinking their money makes them somehow better.

> I hope OpenAI continues to dominate even if the margins of winning tighten.

I couldn't care less. I'm on the whole impressed with AI, less than happy about all of the slop and the societal problems it brings, and I wish this had been brought into a more robust world, because I'm not convinced the current one needed another issue of that magnitude to deal with.


> All of them suffer from thinking their money makes them somehow better.

Let's assume they think they're better than others.

What makes you think that they think it's because of their money, as opposed to, say, because of their success at growing their products and businesses to the top of their field?


Even if it's success rather than money, you still have survivorship bias to contend with, so it's not really much of a helpful distinction.


Because they wouldn't talk about money as much or try to convert a non-profit into a for profit company.


Do they talk about money that much? 99.99% of the people I see talking about money (especially other people's money and what they should be doing with it) are non-billionaires.


That’s ok, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra high reasoning model. The AI slop and dumb shit on IG/YT is like the LCD of humans though. They’ve always been there and always will be there to be annoying af. Before AI slop we had brain rot made by humans.

I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.

I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.

Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.


some people are so determined to be positive about AI that at some point it just comes across like they’re getting paid to be


There are quite a lot of posts like that. Just a bit too eager. Proselytising as if AI is a religion.


Mods tolerate it for some reason I suppose.


I don't think I did that at all, and I call out that sort of bullshit all the time and get downvoted lol (idgaf :P)


Maybe some/many even are? For "AI" companies it's not really a big expense in comparison and they depend hugely on keeping the hype going.


It's very hard to see downsides to something like GUIs, scripting languages, or opinionated frameworks, compared to a broad, easily weaponized tool like generative AI.


Nvidia isn’t competing with OpenAI for frontier models.


Au contraire, they’re selling the shovels.


Bored of hearing this


[flagged]


ChatGPT has nowhere the lead it used to have. Gemini is excellent and Google and Anthropic are very serious competitors. And open weight models are slowly catching up.


ChatGPT is a goner. OpenAI will probably rule the scam creation, porn bot, and social media slop markets.

Gemini will own everything normie and professional services, and Anthropic will own engineering (at least software)

Honestly as of the last few months anyone still hyping ChatGPT is outing themselves.


[flagged]


"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."

https://news.ycombinator.com/newsguidelines.html

https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


Sorry dang you are right and I was wrong to say that. Mea culpa.


Nobody. Did you talk to all the models? Can you actually have a non-coder, human conversation?


You mean the DOW right?


I'm afraid I don't know what that is.

I meant thinking patterns that go beyond our understanding. High functioning autism that is beyond jealousy/envy, and beyond the need to hold or be on a leash and beyond the enigma of emotions that come with the influence to dump and pump stock market prices of precious, precious metals.

Or, in other terms, the kind of intelligence that is built for abstract, distant, symbiotic humanity. From the POV of Earth as a system, we're quite the dumb nuisance. "Just get it, man". :D



