Hacker News | new | past | comments | ask | show | jobs | submit | josh-sematic's comments

Yes, it’s LTS, but the point is that the LTS system has overlapping support windows, so you can stay on an older LTS for a while before upgrading to a newer one. And it’s somewhat prudent to do so if you value stability highly, because a few new issues are often discovered and patched after a new LTS goes live.

I’m sure you can get fairly close at design time but then need to tune it (I think “regulate” is the term of art) to get it just right before sending the watch off.

This is yet one more indication to me that the winds have shifted with regard to the utility of the “agent” paradigm of coding with an LLM. With all the talk around Opus 4.5 I decided to finally make the jump myself and haven’t yet been disappointed (though admittedly I’m starting it on some pretty straightforward stuff).


> My estimate is that within 2 to 3 years, the lowest cost way to generate AI compute will be in space.

I have never been so tempted to join Kalshi


Strange to me that they don’t mention HuggingFace, which I think of as a pretty leading player in open{source|weight|data} AI.


This is the main factor that makes me leery of going the full self-host route.


> long enough to become lost and/or irrelevant

For the vast majority of things yes. But thankfully the cream of the crop that does stand the test of time eventually makes it to the public.


Can you look at any arbitrary program and tell whether it halts without running it indefinitely? If so, you should explain how and collect your Nobel. Telling everybody whether the Collatz conjecture is correct is a good warm-up. If not, you can’t solve the halting problem either. What does that have to do with consciousness, though?
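The classic diagonalization argument behind this can be sketched as code. This is an illustrative sketch, not anything from the thread: `halts` is the hypothetical decider whose existence the proof refutes.

```python
def halts(program, arg) -> bool:
    """Hypothetical decider: returns True iff program(arg) halts.
    The argument below shows no such total function can exist."""
    raise NotImplementedError


def diagonal(program):
    # Do the opposite of whatever `halts` predicts for `program`
    # run on itself.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return               # predicted to loop -> halt immediately


# diagonal(diagonal) halts if and only if halts(diagonal, diagonal)
# says it doesn't -- a contradiction, so no general `halts` exists.
```

Running `diagonal(diagonal)` with any concrete implementation of `halts` exposes the contradiction; with the stub above it simply raises.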

Having read “I Am a Strange Loop,” I do not believe Hofstadter claims that Gödel’s theorem precludes consciousness being realizable on a Turing machine. Rather, if I recall correctly, he presents that as a possible argument and then attempts to refute it.

On the other hand, Penrose is a prominent believer that humans’ ability to understand Gödel’s theorem indicates consciousness can’t be realized on a Turing machine, but there’s far from universal agreement on that point.


Per Gödel’s incompleteness theorems: any formal system capable of self-reference has true statements it cannot prove; the system cannot be both complete and consistent. Consciousness falls under this umbrella.

I'll try to ask the original question more clearly: why would the brain, or consciousness, be formalizable at all?

I think there's a yearning to view nature as adhering to an underlying formal model, and a contrary view that consciousness is transcendental; I lean towards the latter.


I hadn’t heard of that until today. Wild: some people report genuinely feeling deeply in love with the personas they’ve crafted for their chatbots. It seems like an incredibly precarious position, having a deep relationship where you must perpetually pay a third-party company to keep it going, while the company may destroy your “partner” or change their personality on a whim. Very “Black Mirror.”


There were a lot of people of that type who were upset when ChatGPT was changed to be less personable and sycophantic. Like, openly grieving upset.


This was actually a plot point in Blade Runner 2049.


You are implying here that the financial connection/dependence is the problem. How is this any different than (hetero) men who lose their jobs (or suffer significant financial losses) while in a long term relationship? Their chances of divorce / break-up skyrocket in these cases. To be clear, I'm not here to make women look bad. The inverse/reverse is women getting a long-term illness that requires significant care. The man is many times more likely to leave the relationship due to a sharp fall in (emotional and physical) intimacy.

Final hot take: The AI boyfriend is a trillion dollar product waiting to happen. Many women can be happy without physical intimacy, only getting emotional intimacy from a chatbot.


Funny. Artificial Boyfriends were a software problem, while Artificial Girlfriends are more of a hardware issue.


In a truly depressing thread, this made me laugh.

And think.

Thank you


A slight non sequitur, but I always hate when people talk about the increase in a "chance." It's nearly useless without context. A "4x more likely" statement can mean a change from a 1/1000 chance to 4/1000, or it can mean something is now a certainty if the base rate was 1/4. Absolute measures need to be included if you're going to use relative measures.
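The point is easy to demonstrate with a few lines of arithmetic. This is a sketch with made-up base rates; `describe` is just an illustrative helper, not from any real study.

```python
def describe(base_rate: float, relative_risk: float) -> str:
    """Show the absolute rates hiding behind a relative-risk multiplier."""
    new_rate = base_rate * relative_risk
    return (f"{relative_risk:g}x increase: "
            f"{base_rate:.1%} -> {new_rate:.1%} "
            f"(absolute change {new_rate - base_rate:+.1%})")

# The same "4x" multiplier, wildly different real-world meaning:
print(describe(0.001, 4))  # rare event: 0.1% -> 0.4%, barely moves
print(describe(0.25, 4))   # common event: 25% -> 100%, now a certainty
```

Both lines report a "4x increase," which is exactly why the relative figure alone tells you almost nothing.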

Sorry for not answering the question; I find it hard because there are so many differences it's hard to choose where to start and how to put them into words. To begin with, one is the actions of someone in the relationship; the other is the actions of a corporation that owns one half of the relationship. There are differing expectations of behavior, power, and so on.


I think some of the point is what you can’t do with it rather than what you can. It’s an intentionally very restrictive protocol.

