Hacker News | ysleepy's comments

I agree that the Rust community frowns a little too much on the use of Arc/Cloning/Box. In Swift, everything is ref-counted; why subject yourself to so much pain for marginal gain?

Tutorials and books should be more open about that: instead of only pushing complex lifetime hacks, they should also show the safe and easy ways.
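As an illustration of the "easy way" (a minimal sketch with made-up names, not from any particular tutorial): wrap the shared state in an Arc and clone the handle, instead of threading lifetime parameters through every struct.

```rust
use std::sync::Arc;
use std::thread;

// Shared, immutable config: no lifetime parameters anywhere.
struct Config {
    name: String,
}

// Each worker gets its own Arc handle; Arc::clone is a pointer copy
// plus an atomic refcount bump, not a deep copy of Config.
fn spawn_workers(cfg: Arc<Config>) -> usize {
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cfg = Arc::clone(&cfg);
            thread::spawn(move || {
                println!("worker {i} sees config '{}'", cfg.name);
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).count()
}

fn main() {
    spawn_workers(Arc::new(Config { name: "demo".into() }));
}
```

For most application code the refcount bump is noise; the payoff is that `Config` and every function touching it stay lifetime-free.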

The article gives Java a worse devx rank than Go and I can't agree. Java is at least on par with Go in every aspect of devx, and better in IDE support, expressiveness, and dependency management.


> I agree that the rust community frowns a little too much on the use of Arc/Cloning/Box

As usual, this depends heavily on what you do. I had written a program where Arc reference counting was 25% of the runtime, all from a single instance as well. I refactored to use borrows and annotated the relevant structs with lifetimes. This also enabled further optimisation where I could avoid some copies, and in total I saved around 36% of the total runtime compared to before.

The reason Arc was so expensive here is likely that the reference count was contended: the cache line was bouncing back and forth between the cores running the threads in the threadpool working on the task.
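A minimal sketch of the two shapes being compared (illustrative names, not the actual program): every Arc clone and drop is an atomic read-modify-write on the shared refcount, so with many threads those operations fight over one cache line, while scoped threads can borrow the data directly and touch no counter at all.

```rust
use std::sync::Arc;
use std::thread;

// Each spawned thread takes its own Arc clone; with many short tasks
// in a threadpool, all those clone/drop pairs hit the same refcount
// cache line, which then ping-pongs between cores.
fn sum_with_arc(data: Arc<Vec<u64>>, threads: usize) -> u64 {
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.iter().sum::<u64>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

// Scoped threads (stable since Rust 1.63) borrow `data` directly:
// zero refcount traffic, at the cost of a lifetime tying the threads
// to the caller's stack frame.
fn sum_with_borrows(data: &[u64], threads: usize) -> u64 {
    thread::scope(|s| {
        let handles: Vec<_> = (0..threads)
            .map(|_| s.spawn(|| data.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}
```

Both functions compute the same result; only a profiler will tell you whether the Arc version's refcount traffic actually matters for your workload.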

In conclusion, neither extreme is correct. Often Arc is fine, sometimes it really isn't. And the only way to know is to profile. Always profile, humans are terrible at predicting performance.

(And to quite a few people, coming up with ways to avoid Arc/clone/Box etc. can be fun, so there is that too. If you don't enjoy that, then don't participate in that hobby.)


For the use cases outlined in the OP, a 36% performance gain from an optimization that complex would be considered a waste of time. OP was explicitly not talking about code that cares about the performance of its hot path that much. Most applications spend 90% of their runtime waiting for IO anyway, so optimizations of this scale don't do anything.

> Most applications spend 90% of their runtime waiting for IO anyway, so optimizations of this scale don't do anything.

Again, depends on what you are doing. If you are doing web servers, electron apps or microcontrollers, sure. If you are doing batch computation, games, simulation, anything number crunchy, etc: no. As soon as you are CPU or memory bandwidth bound, optimisation does matter. And if you care about battery usage you also want to go to sleep as soon as possible (so any phone apps for example).


Pretty sure by devx they mean something like syntax ergonomics, because otherwise Rust's devx is first class (cargo, clippy, crates.io), so it's kind of a nonstandard definition.

I think it's fair to say Java's "syntax ergonomics" are a little below the rest: somewhat manual by default, like Rust or C++.


Yeah, but worse than Go?

Go's type system is significantly more limited in what it can do for you, as opposed to Rust's or TypeScript's. Limited syntax seems to be one of the overarching design decisions for that language, making it more like C with better concurrency primitives. It can feel a bit limiting at times, but at least they have generics now, and you can almost do things like union types by constructing interfaces that mimic them, though it isn't exactly ergonomic.

There is a significant chunk of the rust community that encourages the use of Arc, clone and Box outside of the hot path. Perhaps you're just hooked up with the wrong part of the community?

You're likely to get more pushback when creating public crates: you don't know if it's going to be someone else's hot path.

But the internal code for pretty much any major rust shop contains a lot more Arc, Box and clone than external code.


I am not sure the community frowns on these. In fact I use them in almost every Rust program I write. The point is that I can grep for them, because Rust does not do it in a hidden way.

I remember many of these events as I was running FreeBSD a lot and subscribed to the mailing lists.

Why on earth would you give this monstrosity of a company so much free labour?

I get that volunteering is fun, but donating your time and competence to a hyper capitalist company is short sighted. I hope there was appropriate compensation, and I'm not including "early access".


Loved the details about how memory access actually maps addresses to channels, ranks, banks and whatever; this is rarely discussed.

Not sure how this works for larger data structures, but my first thought was that this should be implemented as some microcode or instruction.

Most computation is not that jitter-sensitive, and perception is not really in the nano- to microsecond scale, but maybe it's a cool gadget for things like dtrace or interrupt handlers.


The S25 (edge) runs this very well. 29 tok/s for E2B.

On my M4 Pro, MLX gets almost 2x the tok/s.

I'm actually not sure that's true. Setting aside people buying the device with or without the neural accelerator, the perf/watt could be on par with or better than the big iron. The efficiency sweet spot is usually below the peak-performance point; see big.LITTLE architectures etc.

(GlassFish is a Java application container; it provides DB access, an HTTP server, etc. for apps using the standardized interfaces, and is now more in the MicroProfile corner, away from the Java EE tar pit of the old days.)

I use Jersey + GlassFish to build very small MicroProfile applications. It's stable, small and works.

Not a fan of the HK2 dependency injector though. Maybe that's my general dislike of how convoluted the spec and implementation (of EE DI) are.

I hate how sprawling the (other) implementations are; no, it is not OK to pull in 90 MB of dependencies to support things I don't need. These app servers tend to grow into huge, uncontrollable messes. Nobody uses standalone containers anymore, and forcing people to pull in all or nothing for the embedded version is just asinine engineering.


I enjoyed reading these perspectives; they are reasoned and insightful.

I'm undecided about my stance on gen AI in code. We can't just look at the first-order and immediate effects; we also have to consider the social, architectural, power and responsibility aspects.

For other areas (prose, literature, emails) I am firm in my rejection of gen AI. I read to connect with other humans; the price of admission is spending the time.

For code, I am not as certain. Nowadays I don't regularly see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.

Will gen AI be the equivalent of a compiler and in 20 years everyone depends on their proprietary compiler/IDE company?

Can it even advance beyond patterns/approaches that we have built until then?

I have many more questions and few answers and both embracing and rejecting feels foolish.


I'm worried about a few big companies owning the means of production for software and tightening the screws.


Given how fast the open-source models have been able to catch up with their closed-source counterparts, I think at least on the model/software side this will be a non-issue. The hardware situation is a bit grimmer, especially with the recent RAM prices. Time will tell: if in 2–3 years we can get to a situation where a 512 GB–1 TB VRAM / unified-memory + good fp8 rig is a few thousand dollars and not tens of thousands, we'll probably be good.


A few thousand dollars, plus the energy to run the system, is unaffordable to most of the world's developers. Not that it would be the first way in which the Global South is kept from closing the gap.


You don't need exclusive, 24-hour access, so people can pool, share or rent the hardware. Solar energy is also now cheap enough that power likely won't really be a problem.


This has already happened, or is happening quite fast, with the cloud, where setting up your own data center, or even a few servers, is treated as a crime against humanity if it doesn't use the whole Kubernetes/DevOps/observability stack.


This is my immediate concern as well. Sam said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out.


The problem is the cat is already out of the bag on the technology. Anyone can go over to Huggingface, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that from taking place or other organizations releasing full open weight/open training data models as well, on permissive licenses, which give individuals access to be able to modify those models as they see fit. Sam wishes he had control over that but he doesn't nor will he ever.

[0] https://huggingface.co/docs/transformers/index


I'm thinking mainly of whether they manage to get some kind of regulations that make open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do with them what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close, I don't know for sure. And how long will Chinese companies keep giving out their open-source models? Lots of unknowns.


I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that outperforms existing, VC-backed (including by Sam himself) frontier-model wrappers, runs on sub-$500 consumer hardware, and produces 100% accurate work product, which those frontier-model wrappers cannot do. That a two-man team in a backwater flyover town can do this speaks to how badly out of the bag the tech is. Where the money is going to be isn't in building the biggest models possible with all of the data; it's going to be in building models which specifically solve problems and can run affordably within enterprise environments, built to proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] https://www.gartner.com/en/articles/domain-specific-language...


Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it, since the end product ("intelligence") can be swapped out with little concern over who is providing it.


> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

I believe this is the natural end state for LLM-based AI. But the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring (and throwing lobbying money around) about AI-related intellectual-property concerns they've never shown toward anyone else while building their models, and I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.

All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.

https://www.latimes.com/business/story/2026-02-13/openai-acc...

https://cloud.google.com/blog/topics/threat-intelligence/dis...

https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...


Which is a wildly hypocritical tack for them to take considering how all their models were created, but I certainly wouldn’t be surprised if they did.


In other words, it is an existential question for them. And given that some of the people running these companies have no moral convictions, expect a complete shitshow: regulation, national-security classifications, endless lawfare, outright bribery. Anything and everything to retain their valuations.


> For code, I am not as certain, nowadays I don't regularly see it as an artwork or human expression, it is a technical artifact where craftsmanship can be visible.

Humans are vital for non-craftsmanship reasons. Human curiosity and the ability to grok the big picture were vital in detecting the XZ backdoor attempt. If there is a wholesale AI takeover, I don't think such attacks would be detected five years from now.

AI will make future attacks much easier for several reasons: changes can come ostensibly from multiple personas while actually being controlled by the same entity, and maintainers who are open to AI-assisted contributions will accept drive-by contributions, will likely have less time to review each contribution in depth, and will have a narrower context than the attacker on each PR.

AI-generated code fucks with trust and reputation: I trust the code I generate [1] with or without AI, but I trust AI-generated code from others far less than their manually written code. I'm not sure what the repercussions are yet.

1. I am biased and likely over-optimistic about the security and number of bugs.


> I'm undecided about my stance for gen AI in code.

Just make sure that you isolate whatever is generated, so that if you ever decide that copyright means something to you after all, you don't end up with a worthless codebase.


Oh no, what will Roko's Basilisk think about that!


The Samsung one is usually easier. On non-Apple devices the volume is very limited, especially in the EU version.

