
How I learned to stop worrying and love the miniaturization:

https://www.youtube.com/watch?v=bKGhvKyjgLY

Seriously, watch it. Memristor tech relies on this kind of miniaturization and can provide a speed boost in several areas of current architectures.

Secondly, having worked in semi: there's a lot of conservative force holding back development. We could have had current tech with far fewer worries than we have now if the industry hadn't responded so allergically to everything that looks a little exotic in the CMOS process, like high-k dielectrics.


HP and the Memristor make for a good PR story. Unfortunately, there isn't much beef behind it.

The Memristor is actually the same as an RRAM element (Resistive Random Access Memory). Companies other than HP started working on it long before and are significantly ahead; Micron, for example, recently presented a multi-gigabit prototype chip. But there is still a lot to be done. HP lacks the funds, manpower and manufacturing muscle to really get anywhere in this area.


> HP lacks the funds, manpower and manufacturing muscle to really get anywhere in this area.

They might lack the political will, but HP has ~$15B cash on its balance sheet, 350k employees, and tens of billions in fixed assets.


They have money, but not the semiconductor expertise, assets, or experience of an Intel. See this explanation for why that matters: https://news.ycombinator.com/item?id=7922277


> Secondly, having worked in semi: there's a lot of conservative force holding back development. We could have had current tech with far fewer worries than we have now if the industry hadn't responded so allergically to everything that looks a little exotic in the CMOS process, like high-k dielectrics.

Such as? Getting new materials into manufacturing is only the last step. Before that, there has to be a significant benefit to doing it. And yes, that is usually benchmarked against risks and capex.


They respond allergically to everything that looks a little exotic because the costs of failure are colossal, and I would bet money that conservatism has allowed the industry to dodge more than a few bullets you haven't heard about. High-k just happens to be one of the things that worked out in the end.


First: I jumped into the comments before reading the article.

I have quite a bit of experience with flash in the form of eMMC and SD/CF. SSDs aren't that much different from those at the low level.

The controller that comes with the flash storage contains a core that manages the bad blocks, comparable to bad-sector management on HDDs. The software these controllers run contains a lot of rules of thumb for managing bad blocks, which is where these full failures come from, IMO.

Each controller has access to a pool of reserve blocks that are used when bad blocks are detected. Once those run out, the embedded software starts behaving erratically, and shortly after there's a complete failure.

I think the pool of reserve blocks is "Used_Rsvd_Blk_Cnt_Tot" in your list. Apparently there are 100, of which you consumed 0. There's a threshold at 10, so I assume that's where the diagnostics software will warn you.
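
Roughly, the bookkeeping works like this (a toy sketch in JS; the names and numbers are invented, and real firmware is far more involved):

  // Toy model of bad-block remapping; purely illustrative.
  const RESERVE_POOL_SIZE = 100;

  class FlashController {
    constructor() {
      // Spare physical blocks set aside at the factory.
      this.reserve = Array.from({length: RESERVE_POOL_SIZE}, (_, i) => i);
      this.remap = new Map(); // logical block -> reserve block
    }
    markBad(logicalBlock) {
      if (this.reserve.length === 0) {
        // The point where devices start failing outright.
        throw new Error("reserve pool exhausted");
      }
      this.remap.set(logicalBlock, this.reserve.pop());
    }
    usedReservedBlkCnt() {
      return RESERVE_POOL_SIZE - this.reserve.length;
    }
  }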


> Apparently there are 100, of which you consumed 0.

The 100 is a normalized number, not the actual number of blocks. (A percentage, basically, so 100% are still left.)

If the drive had used any reserve blocks at all, I'd worry about it; I'd consider it not a wear indicator but a failure indicator.


> The 100 is a normalized number, not the actual number of blocks.

I'm not too sure about that. The only reference I can give is that they use the suffix "Cnt_Tot", which means "total count". When it's a percentage, they denote it as such, as in "Perc_Rated_Life_Used" and "Workld_Host_Reads_Perc". Don't be surprised by the low count (100).


That's how SMART attributes work: 100 means AOK and 0 means failed. The normalized number is reported by the drive, calculated by a manufacturer-determined formula from the raw values and MTBF data.
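
For example, a hypothetical formula (the real ones are vendor-specific and usually not public):

  // Invented normalization of a reserve-block attribute.
  function normalizedValue(rawUsed, poolSize) {
    return Math.round(100 * (poolSize - rawUsed) / poolSize);
  }
  normalizedValue(0, 100);  // 100: all reserve blocks still available
  normalizedValue(90, 100); // 10: down at the warning threshold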


OK, I dug around in the code.

In Smartmontools I found the code for this variable:

http://smartmontools.sourceforge.net/doxygen/atacmds_8cpp_so...

It's code 179 (0xB3).

From Samsung's website:

http://www.samsung.com/global/business/semiconductor/minisit...

  ID # 179 Used Reserved Block Count (total)

  This attribute represents the number of reserved blocks that have been used as a result of a read, program or erase failure. This value is related to attribute 5 (Reallocated Sector Count) and will vary based on SSD density.
.. so at least Samsung uses exact numbers.

From Intel's website:

http://download.intel.com/newsroom/kits/ssd/pdfs/intel_ssd_5...

(Ctrl-F for "Available Reserved Space")

.. they use a normalized value (100).

So it can be either a percentage or an absolute value, depending on the manufacturer.


The PSoC 4 & 5LP are 32-bit ARM; the 1 & 3 are 8-bit.


Ayup. Sorry. And the PSoC1 barely deserves the name '8-bit'; it's a bastardized version of the 8051 that Cypress calls the M8C. It's nasty.


You wouldn't happen to know how close these ARM cores are to the LPC1xxx or STM32? Would it be worthwhile to port ChibiOS to them?


A TI-73 or TI-83 is $20 or less on eBay.


HP's claim isn't ridiculous if you consider that they're not developing memristor tech to replace flash storage. They want to unify mass storage with DRAM. You do need an OS that can be partially rewritten, because the system can remain in the state it was in when you switched it off. Richard Stanley Williams is the name to Google if you want insight into their roadmap.


Anyone experimented with this yet? I'd like to know how this is resolved when the architecture doesn't support Intel's SIMD approach; they map the objects pretty closely to the instructions (SIMD.float32x4.sub and the like).

I'm trying to figure out what happens when you port this to ARM NEON, and how you catch it on architectures that don't support NEON (it's often missing in Marvell and Allwinner chips).
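
My naive guess is that you'd feature-detect and branch, something like this (assuming the API hangs off a global SIMD object, which may be wrong):

  // Guessed detection pattern; the real API shape may differ.
  const hasSIMD = typeof SIMD !== "undefined" && !!SIMD.float32x4;

  function subVec4(a, b) { // a, b: plain 4-element arrays
    if (hasSIMD) {
      // Fast path returns a SIMD value; callers would need to
      // agree on a single representation in real code.
      return SIMD.float32x4.sub(
        SIMD.float32x4(a[0], a[1], a[2], a[3]),
        SIMD.float32x4(b[0], b[1], b[2], b[3]));
    }
    // Scalar fallback for chips without NEON/SSE.
    return [a[0] - b[0], a[1] - b[1], a[2] - b[2], a[3] - b[3]];
  }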


I'm a Mozilla engineer involved in this. NEON support is very important and we're designing the spec to support it well.

CPUs that lack SIMD units can support the functionality (though not the performance of course), and there's even a polyfill library that can lower this API into scalar operations for SIMD-less browsers too.
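
For one operation, a scalar fallback might look roughly like this (a sketch, not the actual polyfill library):

  // Minimal scalar stand-in for SIMD.float32x4 and its sub operation.
  function float32x4(x, y, z, w) {
    return {x: Math.fround(x), y: Math.fround(y),
            z: Math.fround(z), w: Math.fround(w)};
  }
  float32x4.sub = (a, b) =>
    float32x4(a.x - b.x, a.y - b.y, a.z - b.z, a.w - b.w);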


It would be great if you could detect SIMD-able operations in classic JS (e.g., in loops) and use SIMD to execute them. I think adding low-level features to a high-level language is not good practice.


We will probably do that too at some point, but it won't replace explicit SIMD, just as widely available auto-vectorization support in C++ hasn't eliminated the need for explicit SIMD extensions there either.

One thing to keep in mind is that most programmers probably won't want to use this feature directly; it'll be used in libraries that expose higher-level APIs. It's still true that every feature we add increases overall clutter, but SIMD seems sufficiently useful and sufficiently self-contained that it's worth the tradeoff.


It's based on SIMD support in Dart, which has been available for a while now and does support NEON.

https://www.dartlang.org/articles/simd/

The primitives are pretty generic: just a few new vector types based on typed arrays. Operations on those types are supported on CPUs without a SIMD unit; they're just slower, but no slower than coding with non-SIMD operations.


What about 8- and 16-bit ints? How about signed vs. unsigned? Or pixel-like data that clamps instead of overflowing? What about 64-bit IEEE? What if the SIMD unit is 64 bits wide? Or 256? It just doesn't seem future-proof or implementation-proof.


> architectures that don't support NEON (it's often missing in Marvell and Allwinner chips).

I'm probably nitpicking here, but:

* All Allwinner SoCs have NEON[0]

* Most current ARMv7 processors have NEON. Of the current ARM cores, only Cortex-A5 and Cortex-A9 don't have mandatory NEON support (it's optional). Cortex-A5 is intended for embedded applications. Of the existing Cortex-A9 processors, AFAIK the only somewhat popular one without NEON support is NVIDIA Tegra 2, which is retired. Of the third-party cores, all Qualcomm and Apple ones have NEON support.

[0] http://linux-sunxi.org/Allwinner_SoC_Family


Marvell has a license to design their own cores, don't they?


Yes, and as jmpe said they have quite a few SoCs without NEON (I think basically everything apart from ARMADA 1500 plus). They seem to be targeting devices like smart TVs and STBs nowadays, so I guess it's not a big deal.


Correct; it was wrong of me to point to "architectures" that lack NEON, and you're right in your reply. I should have mentioned specific implementations. My experience with Allwinners without NEON indeed comes from smart TVs and STBs. You know your stuff ;)


It happens in the browser, right? I think ultimately there just needs to be a unified API, or maybe more domain-specific APIs (like BLAS), that map to NEON or SSE instructions as appropriate and do everything the slow way if they aren't available.


The SIMD.float32x4 and SIMD.int32x4 classes are available in Firefox Nightly, but without Float32x4Array and Int32x4Array, loads and stores are horribly slow: about 100x slower than normal JavaScript in my tests.
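
The slow path I mean is the lane-by-lane shuttling between typed arrays and SIMD values; my test loops looked roughly like this:

  const data = new Float32Array(1024);
  for (let i = 0; i < data.length; i += 4) {
    // "Load" four scalars one at a time; this round trip is what kills it.
    const v = SIMD.float32x4(data[i], data[i + 1], data[i + 2], data[i + 3]);
    // ... operate on v, then "store" it back lane by lane the same way.
  }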


"Our lives are spent trying to pixellate a fractal planet."

~A. King in Society


That's beautiful and so spot on.

I have found that one key to happiness is to drop judgement and just observe the nature of the world. Appreciate it for what it is, not what you can do to it.


You can bend/compress wood with steam. But look at the width of that bottleneck.

The others I looked at (playing cards, Rubik's cube, ...) can be disassembled into smaller pieces that fit through. That's his technique. So how do you disassemble a plank? I think it's a hollow piece of thin plywood (one piece, no seams) that was soaked, folded up, and then filled with something (resin).


Anyone know what material the mirrors are made from? I'm surprised that the configuration is a horizontal device with the mirrors vertical; I'd assume a vertical device would be easier.

Edit: hmm, on second thought, a vertical device would have the electrode vertical as well and it's probably metal. Maybe a polysilicon electrode?


Same as regular chips: in a grid on a wafer, then cut and pick-and-placed.

Don't forget: yield goes up when the die shrinks, because defects are typically small spots. The smaller the die, the better you pixelate the defects.

The wafers are only 2 inches in diameter to avoid yield loss due to edge effects: at the edge of the wafer you get the lowest-quality components (optical ring effects).
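
The die-shrink effect is easy to see with the simplest defect model, Poisson yield (illustrative numbers only):

  // yield = exp(-defect density * die area)
  const dieYield = (defectsPerCm2, dieAreaCm2) =>
    Math.exp(-defectsPerCm2 * dieAreaCm2);
  dieYield(0.5, 1.0);  // ~0.61 for a 1 cm^2 die
  dieYield(0.5, 0.25); // ~0.88 for a quarter-size die on the same line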


I doubt that they use 2-inch wafers to avoid yield loss from edge effects. The ratio of area to perimeter rises as you increase the wafer size. Perhaps they use 2-inch wafers for better uniformity, better flexibility, or lower capital investment.


Probably uniformity & capital. The profit margins on LEDs aren't what they used to be.


Your reasoning for the 2" wafer surprises me, because most semiconductor work scales well, and buying and processing larger wafers is only slightly more expensive while the surface area is much larger. A 6" wafer is 9 times as big as a 2" one.


A 6" wafer will also have lots of dies on the edge that don't pass automatic testing. Don't forget that most devices are located near the edge.

Secondly, upgrading your fab to a larger wafer size has a 10-figure price tag. It's not trivial for digital, let alone optoelectronics. LEDs are special in that they deviate severely from the standard MOS process. When Monsanto first produced them, they almost dumped the idea because of all the issues dealing with the exotic materials.


> A 6" wafer will also have lots of dies on the edge that don't pass automatic testing. Don't forget that most devices are located near the edge.

I don't understand this. Given a fixed die size, the number of dies near the edge goes up linearly as the wafer size goes up. The number of interior dies goes up quadratically.
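
Back-of-envelope, ignoring packing details (die side and wafer diameters are hypothetical):

  // Edge dies scale with circumference; total dies scale with area.
  function dieCounts(waferDiamMm, dieSideMm) {
    const total = Math.PI * (waferDiamMm / 2) ** 2 / (dieSideMm * dieSideMm);
    const edge = Math.PI * waferDiamMm / dieSideMm;
    return {edge: Math.round(edge), interior: Math.round(total - edge)};
  }
  dieCounts(50, 5);  // ~2" wafer: ~31 edge vs ~47 interior (40% on the edge)
  dieCounts(150, 5); // ~6" wafer: ~94 edge vs ~613 interior (13% on the edge)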


Yeah, what he/she is saying doesn't really make sense. Perhaps he/she means that there are problems with uniformity and the dies on the edge suffer most, not because they are next to an edge, but just because they are further from the center. In that case 2-inch wafers may make more sense than 6-inch wafers.


