
"They consistently get outmaneuvered in GPUs, SSDs, low-power SoCs ..."

Aren't Intel SSDs considered the benchmark for all datacenter/server work?

I know we make a point to source Intel SSDs and I don't remember any horror stories like there were with other vendors' SSD parts ...



Intel's problem with SSDs is that their cycles are really long. They come up with a great drive, then let it languish for 2+ years. This is fine in the datacenter market, but it means they're always getting leapfrogged in the faster-paced consumer market. They've tried to compensate by using third-party controllers and even third-party NAND to fill in the gaps, but those efforts have had mixed success.


While I don't know much about data centre SSDs, I follow the consumer hardware market closely, albeit not as a professional (I build gaming PCs for friends, as a hobby). In the consumer space, Intel's NVMe SSDs have recently been outclassed by Samsung's 950/960 series; at least here in Australia you get slightly better bang for your buck by going with Samsung. You are correct that Intel's cycles are really long. When their first NVMe drives came out, they were the best in terms of performance per dollar, but then Intel stagnated and Samsung caught up in the meantime.


Intel's consumer 750 series NVMe SSD was only on top for a short period of time, and only because it was literally the only consumer retail NVMe product when it launched. It was released around April 2015 and was outclassed for real-world performance by the Samsung 950 Pro in October/November 2015. The Samsung 960 series that launched last fall only increased the lead, and Intel has not yet announced a consumer product based on their second-generation NVMe controller that can actually fit on an M.2 card.


Impossibly broad question, but have we not reached the point of... not needing too much more performance out of our SSDs for most workloads?

I say that as somebody who jumped on the consumer SSD train early (ten years ago, I guess) and never looked back, because even with those terrible first-gen controllers (JMicron, Indilinx Barefoot) the advantages were so incredible.

For a while now, though, things have seemed good enough. For my workloads (software dev, gaming) there seems to be no real-world noticeable difference between the Samsung 830 (or 840?) in my 2011 MacBook Pro and whatever new-ish PCIe drive is in my 2015 MBP.

Obviously there will always be outliers that need the extra speed and reduced latency.

And maybe if there was another quantum leap in drive performance, I'd come up with new workflows. I wouldn't say "no" to more perf, obviously.


We've reached the point where the biggest performance bottleneck for SSDs on client/consumer workloads is the read latency of flash memory, which can only be substantially improved by changing to a fundamentally different memory technology (e.g. Intel/Micron 3D XPoint). There's still peripheral performance optimization happening to ensure drives can deliver the best burst performance possible given the underlying media, and so that they can sustain that burst performance long enough for any common workload. There's also a lot of room for improvement on power management, especially when it comes to the latency of coming out of deep power saving states.
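To put that latency bottleneck in rough numbers, here's a back-of-envelope sketch. The figures below are commonly cited ballpark values, not measurements of any specific part, and the 3D XPoint number is the rough target Intel/Micron talked about, not a shipped spec:

```python
# Back-of-envelope media latency comparison.
# All figures are rough ballpark assumptions, not specs of a real product.
DRAM_NS = 100            # DRAM access: on the order of 100 ns
NAND_READ_NS = 80_000    # NAND flash page read: roughly 50-100 us
XPOINT_READ_NS = 10_000  # 3D XPoint read: targeted around 10 us

nand_vs_dram = NAND_READ_NS / DRAM_NS            # how much slower flash is than DRAM
xpoint_vs_nand = NAND_READ_NS / XPOINT_READ_NS   # potential media-level gain

print(f"NAND read is ~{nand_vs_dram:.0f}x slower than a DRAM access")
print(f"3D XPoint would cut media read latency ~{xpoint_vs_nand:.0f}x vs NAND")
```

The point of the arithmetic: controller and interface tweaks shave microseconds, but the media itself dominates, which is why a new memory technology is the only big lever left.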


> read latency of flash memory, which can only be substantially improved by changing to a fundamentally different memory technology (e.g. Intel/Micron 3D XPoint)

I wonder if this will ever replace flash, or if it will end up being used as a supplement to it?

> There's also a lot of room for improvement on power management, especially when it comes to the latency of coming out of deep power saving states.

Interesting! I'd never thought about that. It would be awesome if drives could just seamlessly wake up and start delivering data with no real penalty. Anywhere I can learn more about this or the burst optimization? Is that something anybody in the press is measuring and benchmarking today?


> Interesting! I'd never thought about that. It would be awesome if drives could just seamlessly wake up and start delivering data with no real penalty. Anywhere I can learn more about this or the burst optimization? Is that something anybody in the press is measuring and benchmarking today?

The main burst optimization is SLC write caching, which is universal on client/consumer drives that use TLC NAND flash (three bits per cell), and common on more recent drives that use MLC NAND flash (two bits per cell). M.2 PCIe SSDs also suffer from the thermal constraints of their small form factor and they will throttle under sustained benchmarking, but almost all of them can stay below their thermal limits when subjected to real-world workloads.
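The SLC caching behavior can be sketched as a toy model (cache size and speeds below are illustrative assumptions, not numbers for any real drive): writes land in a fast SLC-mode cache until it fills, after which throughput drops to the slower native TLC rate.

```python
# Toy model of SLC write caching on a TLC drive.
# All numbers are illustrative assumptions, not specs of a real product.
SLC_CACHE_GB = 20       # hypothetical SLC cache size
SLC_SPEED_GBPS = 2.0    # hypothetical burst write speed into the cache
TLC_SPEED_GBPS = 0.5    # hypothetical sustained speed writing native TLC

def write_time_seconds(total_gb: float) -> float:
    """Worst-case time to write total_gb: the cache absorbs the first
    SLC_CACHE_GB at burst speed, the rest goes at the native TLC rate."""
    cached = min(total_gb, SLC_CACHE_GB)
    overflow = total_gb - cached
    return cached / SLC_SPEED_GBPS + overflow / TLC_SPEED_GBPS

print(write_time_seconds(10))   # fits in the cache: 5.0 s at burst speed
print(write_time_seconds(100))  # 20 GB fast, then 80 GB at TLC speed: 170.0 s
```

This is why drives look great in short benchmarks but can fall off a cliff in sustained transfers: most real-world client writes fit inside the cache.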

As for power management wake-up latency, I'm about halfway through testing my collection, and it'll be a part of my SSD reviews going forward. It's not an issue for desktops because they seldom make use of drive and link power management, but laptops face some serious tradeoffs. I'll make a full article of it over the next few weeks, but I have to finish a few other reviews first. Keep an eye on AnandTech.com next month.


>Impossibly broad question, but have we not reached the point of... not needing too much more performance out of our SSDs for most workloads?

This is really not possible to answer in such vague terms ("too much more", "most workloads"). It depends on what your workload is and how disk-dependent it is. Storage is still orders of magnitude slower than DRAM in latency, so depending on what you're doing, improving disk performance can still significantly improve overall performance.
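Amdahl's law gives a quick way to reason about how workload-dependent this is: if a fraction p of a workload's runtime is spent waiting on storage, making storage s times faster can speed up the whole workload by at most 1 / ((1 - p) + p/s). A minimal sketch:

```python
def overall_speedup(storage_fraction: float, storage_speedup: float) -> float:
    """Amdahl's law: whole-workload speedup when only the storage-bound
    fraction of runtime gets faster."""
    return 1.0 / ((1.0 - storage_fraction)
                  + storage_fraction / storage_speedup)

# A workload that waits on disk 50% of the time, given a 10x faster drive:
print(round(overall_speedup(0.5, 10), 2))   # ~1.82x overall

# A workload that waits on disk only 5% of the time barely notices:
print(round(overall_speedup(0.05, 10), 2))  # ~1.05x overall
```

Which is exactly why "good enough" is true for some workloads and badly wrong for others.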



