
3D reconstruction of old spaces which no longer exist seems like a clear use case to me. There's loads of old videos of driving down a street in the 80s, or neighborhoods in cities which got replaced.

I can imagine future iterations of this which bring together other stills of the same space at that time to augment the dataset. Then perhaps another pass to fill in gaps with likely missing content based on probability or data from say the same street 10 years later.

It won't be 100% real, but I think it'd be very cool to be able to have a Google Street View-style experience of areas before Google Street View existed.


> it'd be very cool to be able to have a Google Street View-style experience of areas before Google Street View existed.

Now do Kowloon Walled City.


There's a now-famous Harvard lecture video on YouTube of Zuckerberg from the early Facebook days, where he walks through the issues they hit early on.

https://www.youtube.com/watch?v=xFFs9UgOAlE

I watched it ages ago, but one thing I remember liking was that each time they changed the architecture, it was to solve a problem they had, or were beginning to have. They seemed to stay away from premature optimization and instead took the approach of tackling problems as they appeared, rather than imagining problems long before (or whether) they occurred.

It's a bit like the "perfect is the enemy of done" concept - you could spend 2-3x the time making it much more scalable, but that might have an opportunity cost which weakens you somewhere else or makes it harder/more expensive to maintain and support.

Take it with a pinch of salt, but I thought it seemed like quite a good level-headed approach to choosing how to spend time/money early on, when there's a lot of financial/time constraints.


It's also worth noting that the electrical performance of your cheaper card's chip may not be as good as the higher grade chip, so it may overheat more or fail earlier in its life.

On chip lines like these, there's usually a process called "speed binning": you test the performance of each chip and put the higher-performing ones in a different "bin" than the slightly less efficient ones. You then sell the super high performance ones for a higher price, or put them in the more expensive product lines, as they will be less likely to fail.

e.g. of all the chips that pass their tests and aren't rejected, 85% are C-grade performance, 12% are B grade, and 3% are A grade. Intel does this to get the "Extreme Edition" chips, and I'm assuming Nvidia does this to select the chips for their higher grade product lines.
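
To make the binning idea concrete, here's a minimal sketch in Python. The clock thresholds and SKU names are invented for illustration; real binning also weighs things like leakage current, voltage headroom, and which functional units passed, not just a single measured frequency.

    # Hypothetical speed-binning pass: sort each die into a price tier based
    # on the maximum stable clock reported by the tester. All numbers are
    # made up for illustration.
    def bin_chip(max_stable_mhz: float) -> str:
        if max_stable_mhz >= 3600:   # assumed "A grade" cutoff
            return "A grade: flagship SKU"
        if max_stable_mhz >= 3300:   # assumed "B grade" cutoff
            return "B grade: mainstream SKU"
        if max_stable_mhz >= 3000:   # assumed "C grade" cutoff
            return "C grade: budget SKU"
        return "reject"

    for mhz in (3720, 3410, 3050, 2880, 3650):
        print(mhz, "->", bin_chip(mhz))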


I had no idea that the manufacturing variation among chips is large enough to create performance tiers. Pass/fail I get, but this sounds more extreme than that. Like, the difference between C and fail may be such that some non-negligible percentage of chips that test into the C bin should really be in the fail bin.


There are many, many different things that can make the quality vary so much: from the purity and quality of the raw silicon itself, to the design of the chip, where some critical part of the architecture is incredibly hard to fabricate to the highest standard.

Remember that the widths of the oxide tracks within the silicon average around 40nm these days (that's only ~400 atoms across!) or even smaller, with hundreds of process steps. One big molecule from some tiny error in the production process, landing on the wrong part of the chip, may not cripple it but may impede performance. It's just probability at the end of the day.
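
To put a rough number on that "just probability" point, the classic first-order yield model treats killer defects as a Poisson process: the chance a die escapes with zero defects falls off exponentially with die area times defect density. A quick sketch, with invented numbers:

    import math

    # First-order Poisson yield model: fraction of dies with no killer defect.
    # Die area and defect density below are invented for illustration.
    def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
        return math.exp(-die_area_cm2 * defects_per_cm2)

    print(poisson_yield(1.00, 0.5))  # large die: ~0.61 (61% yield)
    print(poisson_yield(0.25, 0.5))  # small die: ~0.88 (88% yield)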

With regards to potential fails going into production, it does happen. There are several test phases during production to catch as many as you can, but at the end of the day you won't get them all.

Semiconductor fabrication is fantastically expensive; new fab plants cost several billion dollars to build, so if you want to guarantee quality you have to pay for it.


It does make sense in that they are always pushing the edge as hard as they can, and it sounds like this process allows them to do just that. They get to sell the 'lucky' results for a premium, their 'average' for their bread and butter, and even the sub-par chips provide income.

It seems like a really good idea. It also seems like it must be pretty hard to do in a way that gives you a reliable (say) 10% emerging as 'lucky.'


When the PS3 first came out, they were having really bad yield issues. The design of the chip is one slowish "normal" processor, plus 6 "synergistic processing elements" which were really fast little vector processors. Well, not exactly 6... If you look at the chip http://www.trustedreviews.com/Sony-PlayStation-3_Games_revie... you'll see 8! OK so one was reserved by the OS for a hypervisor that would run in the background all the time and was not available for mere mortals to access. But that still only explains 7. Turns out they just disabled one of the SPEs. They tested each of them and if one happened to be broken, they would pick that one to be disabled. This N+1 redundancy improved effective yields even though a lot of the chips were still broken!
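
A quick back-of-the-envelope calculation shows why that one spare SPE matters so much. If each of the 8 SPEs independently has some chance p of being defective (the 20% below is purely an invented figure), compare needing all 8 to work against needing any 7 of 8:

    # Yield with and without the spare SPE, assuming independent defects.
    def yield_all_8(p: float) -> float:
        return (1 - p) ** 8

    def yield_any_7_of_8(p: float) -> float:
        # all 8 good, or exactly one of the 8 is bad (and gets disabled)
        return (1 - p) ** 8 + 8 * p * (1 - p) ** 7

    p = 0.20  # invented per-SPE defect probability
    print(f"all 8 SPEs required: {yield_all_8(p):.1%}")       # ~16.8%
    print(f"any 7 of 8 suffice:  {yield_any_7_of_8(p):.1%}")   # ~50.3%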


To be fair, I made up the percentages I gave for that example, though I'd imagine the real ones are something close to that. It depends on the chip, the market, and the company selling them.

You generally design those percentage bands around the size of the market you're aiming each chip tier at. The silicon will be as good as they can get it; nobody wants to push bad products out. Recalls and returns probably cost more in the long term than failing more chips and suffering a worse yield.

However if only 0.5% of your customer base is interested in paying more money for a faster chip, you only cream off the top 0.5% of chips.


Keep in mind that chips can be tested at different frequencies with different amounts of input and expected output. So that C bin could have a perfect test score under certain reasonable conditions.

Clever little hardware hacks give you oodles of geek street cred, but they may mean putting up with occasional bizarre behavior, so that cred is well-earned. When silicon fails, the apparent effects can defy all logic. We can be talking about a logical AND that does something else entirely <0.001% of the time.


They do the same thing with regular electrical components. Resistors, for example, have tolerance bands: they make a ton of them, and if one fits in the 1% range it gets marked with the 1% color band and sold for more than the 5% tolerance resistors. It's easier to change the marketing strategy than the manufacturing process.
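
Same sorting idea, sketched for resistors: measure each part, compute its deviation from nominal, and mark it with the tightest tolerance band it fits. The measurements and bands below are made up for illustration.

    def tolerance_band(nominal_ohms: float, measured_ohms: float) -> str:
        deviation = abs(measured_ohms - nominal_ohms) / nominal_ohms
        if deviation <= 0.01:
            return "1% band"
        if deviation <= 0.05:
            return "5% band"
        if deviation <= 0.10:
            return "10% band"
        return "out of spec"

    for r in (997.0, 1032.0, 1088.0, 1150.0):
        print(r, "->", tolerance_band(1000.0, r))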


Assuming that this is the Stott who wrote the message, and looking at the detailed nature of the mission to bomb the Tirpitz, could this information be used to programmatically reverse engineer the key in a sensible amount of time? I.e. assuming certain keywords may be in the message, like "Tirpitz" or "down" etc...
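
One classical way to use suspected keywords is "crib dragging": slide the crib along the ciphertext and, at each position, derive the key fragment that would have produced it, then look for fragments that repeat or look like plausible key material. The sketch below assumes a simple Vigenère-style additive cipher over A-Z purely for illustration; if the message was enciphered with a one-time pad or a codebook (as many of these were), cribs alone won't recover the key. The ciphertext string is a placeholder, not the real message.

    # Crib dragging against an assumed additive (Vigenere-style) cipher.
    def key_fragment(ciphertext: str, crib: str, offset: int) -> str:
        segment = ciphertext[offset:offset + len(crib)]
        return "".join(chr((ord(c) - ord(p)) % 26 + ord("A"))
                       for c, p in zip(segment, crib))

    def drag_crib(ciphertext: str, crib: str):
        for offset in range(len(ciphertext) - len(crib) + 1):
            yield offset, key_fragment(ciphertext, crib, offset)

    # Placeholder ciphertext; inspect the fragments by hand for plausible keys.
    for off, frag in drag_crib("HVPKDJNFMNWSOAQX", "TIRPITZ"):
        print(off, frag)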


Looks more like a 2-wheeled unicycle than a "simplified" bicycle.

"In geometry, there's nothing as strong as a triangle. Diamond-frame bikes consist basically of two triangles. The elegance and simplicity of this design is very hard to improve upon. Billions of diamond-frame bikes have been made from tubing for over a century, and during that time, hundreds of thousands of very smart people have spent billions of hours riding along and thinking about ways to fine-tune the performance of their bikes. The tubular diamond frame has been fine-tuned by an evolutionary process to the point where it is very close to perfection, given the basic design and materials. " - Sheldon Brown

