
Better than NAND performance, if only ever so slightly, for a first-generation memory product, but I still feel like it would not be able to scale to DRAM speeds. The search for a universal memory goes on...


They're not advertising it as universal memory or a DRAM replacement though, so I'm not sure why you would make that comparison. They're advertising it as sitting between RAM and an SSD on the storage hierarchy, and it seems to mostly deliver on that promise.


One of the proposed uses for 3D XPoint has been replacing system memory, owing to its extremely low latency (which it has already delivered), high throughput (though it's early on that front), and supposedly almost unimaginable reliability/rewrite endurance. In DIMM form it wouldn't have the overhead of a storage controller, could hypothetically be very parallel, etc.

It's extremely early in the technology, and I imagine we will get there. The first SSDs were terrible compared to SSDs now, and 3D XPoint as a technology is extremely scalable and refinable.


> extremely low latency (which it has already delivered)

Are we looking at the same numbers? "probably under 10 microseconds" is pretty terrible compared to DRAM.


3D XPoint has extremely low latency (e.g. 7 usec), and has already been demonstrated as such. Putting it through a PCI slot and a storage controller is not the same. The discussion is about running it as DIMM memory through a normal memory controller.


So you remove the storage controller and PCIe bus and you go from 10us to 7us. 7us is still a hundred times slower than DRAM, is it not?


You're ignoring a lot of the context established by the parent comment. While the latencies may never directly compete with DRAM, planned densities are already more than favourable. Having a 2TB chunk of persistent memory mapped into your address space, with single-thread performance exceeding 100,000 random reads/sec at extremely consistent latency, is a game-changer in, for example, database applications.
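To make concrete what "mapped into your address space" buys you on the software side, here is a minimal sketch in C. The pmem mount point, file name, and size are made up for illustration, and a real deployment would also need to care about flushing/persistence semantics:

    /* Minimal sketch: map a file on a pmem/DAX-backed filesystem straight into
       the process address space and read it with ordinary loads.
       Path and mapping size are hypothetical. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/mnt/pmem/table.dat", O_RDWR);   /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }

        size_t len = (size_t)1 << 30;                   /* 1 GiB window */
        void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* No block-I/O path, no buffer-cache copy: a random read is just a load. */
        uint64_t value = ((uint64_t *)base)[12345];
        printf("%llu\n", (unsigned long long)value);

        munmap(base, len);
        close(fd);
        return 0;
    }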


What context am I missing? While Analemma_ was talking about using it to augment DRAM, like you are, endorphone was very specifically talking about replacing DRAM in that comment. And that comment claims that it already has good enough latency for the job.


It's turtles all the way down, and I think this is needlessly splitting hairs.

L1->L2->L3->(L4->)DRAM->Storage

3D XPoint is being considered as the DIMM-module, byte-addressed "memory" before storage. Saying that it "augments" DRAM is almost meaningless because we already have multiple levels of DRAM.


If I am not mistaken, we have only one level of DRAM. Caches are SRAM. (Also, DRAM on DIMM is not byte-addressable)


The PS2, Xbox 360, Wii, and some Intel chips with Iris graphics provide on-die DRAM caches.


It's about 30 times slower than the real-world latency of RAM, which is obviously a problem, but it's one that caching might resolve (e.g. an L4 cache of 32 GB of GDDR5).


They had a demo at CeBIT showing that it outperforms RAM access "to the wrong CPU's RAM" via QPI on a dual-socket board. For real-world use cases I'd say that's not too bad.


Not sure this looks like "ever so slightly". They're measuring idle random I/O latencies of less than 10us! Not DRAM, no. But still close to an order of magnitude better than what you can get from flash.


Good thing computers don't run on feelings.


I just think that, over the years, C has had more bashing than pragmatic advocacy for where it excels, and as a result there is a very knee-jerk reaction to the language itself.

On the other hand, even where the language does get promoted, I haven't seen anyone promote C in a way that sounds modern in any sense: people with years of C experience tend to stick with C89, or use C99 in a C++-compatible fashion (i.e. without any C99 syntax that C++ has not officially adopted, like the restrict keyword or designated initializers). While that is fine as a personal preference, I think it does a disservice to people who have to learn and use the language for justified reasons of their own, because there isn't a unified answer on how to really teach C to them.
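For a concrete picture of the C99 syntax being avoided there, a small made-up example (valid C99; restrict in particular is not a C++ keyword):

    #include <stddef.h>

    struct point { double x, y, z; };

    /* C99 designated initializers name the members being set. */
    static const struct point origin = { .x = 0.0, .y = 0.0, .z = 0.0 };

    /* restrict: the caller promises dst and src don't alias, which lets the
       compiler vectorize more aggressively. */
    void scale(size_t n, double *restrict dst, const double *restrict src, double k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = k * src[i];
    }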

I think the author really hits the nail on the head with this part of the introduction.

"In contrast to the ubiquitous presence of C programs and systems, good knowledge of and about C is much more scarce. Even experienced C programmers often appear to be stuck in some degree of self-inflicted ignorance about the modern evolution of the C language. A likely reason for this is that C is seen as an "easy to learn" language, allowing a programmer with little experience to quickly write or copy snippets of code that at least appear to do what it’s supposed to. In a way, C fails to motivate its users to climb to higher levels of knowledge."

As for the book itself: from what I have read of past revisions, it's very well written, and I have even learned things from it that I didn't know existed in C, like using the keyword static in array indices in parameter declarations. While it is not a perfect resource, and I don't think anyone new to programming could read it without guidance, it does some things extraordinarily well that I haven't really seen other C books touch on. Its treatment of undefined behavior is top-notch, and the way it explains the C memory model is pretty good as well.
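For anyone who hasn't run into that feature, a tiny illustration (the function is made up):

    /* 'static 4' in a parameter's array declarator means the caller must pass a
       non-null pointer to at least 4 valid elements; compilers can warn on
       violations and optimize with that guarantee. */
    double dot4(const double a[static 4], const double b[static 4]) {
        double s = 0.0;
        for (int i = 0; i < 4; i++)
            s += a[i] * b[i];
        return s;
    }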

But had this book been written for another "antique" language, like Modern Fortran or Modern COBOL (both of which had their most recent ISO standards in 2008 and 2014 respectively, mind you), I doubt there would be this much polarization in the comments section.

