Google and LG create VR AMOLED 120 Hz at 5500 x 3000 (blurbusters.com)
247 points by methyl on March 15, 2018 | 142 comments


I'm looking forward to the next generation of VR headsets immensely. A lot of people have been quick to jump on the "VR is already dead" train, but having picked one up this past Christmas, it's obvious how much potential is there.

There are a few things that need to be accomplished before widespread adoption:

- Removal of wires. They restrict movement too much and break immersion. The new HTC headset is a step toward this.

- Higher resolution screens. VR AMOLEDs like this are a step in the right direction.

- Prices for GPUs need to go down, and/or a few more years are needed for average computers to be able to render high frame-rates without breaking the bank.

- Headsets need to be lighter and smaller.

- Removal of sensors placed around the room. This will be harder to do, but cameras/sensors built into the headsets themselves could potentially accomplish this.

The way I see it, we're in the iPhone 1 stage of VR right now. Imagine the iPhone X version: lighter, smaller, higher resolution, more colors, higher frame-rate, less hassle. These are all inevitabilities, and at that point it will become much easier to adopt the technology. We're also missing a true "killer app" that will get people to purchase a headset JUST for that. I think it will take some sort of truly massive MMO the likes of WoW to accomplish that.

The future is definitely exciting in this field. I hope hardware vendors don't give up and can see the light at the end of the tunnel.


Holy server hug, batman! (Chief Blur Buster here, I noticed the traffic spike).

BTW, the GPU is a problem, but we're expecting Frame Rate Amplification Technologies to solve it. Basically, improved versions of Oculus Spacewarp that can do large framerate multiplication factors with zero parallax artifacts (unlike today).

I covered this topic near the bottom of a different article about the journey to 1000 Hz displays at https://www.blurbusters.com/1000hz-journey

The gist is that within five to ten years, we'll have many tricks to increase framerates with the same number of transistors, without needing to reduce detail levels or make textures/edges blurry, without input lag, and without interpolation artifacts.


I'm intrigued by how lag-less frame interpolation would work - the algorithm can't look into the future, or can it?


Current headsets, at least the Rift, already do "look into the future" to lower the motion-to-photon latency (the amount of time between you moving your head and the screen updating based on that).

When you're dealing with a head moving, and very brief slices of time, inertia plays a large role and allows for fairly accurate prediction. After rendering the frame they check head position again, update their prediction for head position at time of display, and move/warp the frame slightly to match. This does require rendering a slightly larger view.
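
A rough sketch of that prediction step (illustrative Python, not Oculus's actual pipeline): extrapolate the current orientation forward by the expected motion-to-photon latency, assuming angular velocity stays constant over that brief slice of time:

    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of two quaternions stored as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def predict_orientation(q, omega, dt):
        # Constant-angular-velocity model: rotate q by the rotation the
        # gyro says will accumulate over the next dt seconds (body frame).
        speed = np.linalg.norm(omega)
        if speed < 1e-9:
            return q
        half = speed * dt / 2
        dq = np.concatenate(([np.cos(half)], np.sin(half) * omega / speed))
        return quat_mul(q, dq)

    # Head at identity pose, yawing at ~115 deg/s; predict 15 ms ahead
    # and render from the predicted pose instead of the current one.
    q_now = np.array([1.0, 0.0, 0.0, 0.0])
    omega = np.array([0.0, 2.0, 0.0])   # rad/s
    q_render = predict_orientation(q_now, omega, dt=0.015)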

I remember when Oculus cracked the 20 ms mark and got down into imperceptible lag, it was very exciting. They bragged at the time that their predictive models would let them get down to 0 ms eventually, but I'm not sure if they've hit that yet.


You can make educated guesses about the future, which is how Oculus's Asynchronous Spacewarp works. Rendering a whole frame is slow, but warping a pre-rendered frame is fast. If the next frame is taking too long to render, you can warp the last frame to roughly match the perspective that corresponds with the current head tracking data. You get some artefacts, but they're not as noticeable as the judder caused by a missed frame. Prediction can also be used to estimate the head-tracking data at the time the frame is drawn to the display, rather than at the time the frame starts to render.
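
A toy version of the warp step itself, for the rotation-only case (no positional parallax, no lens distortion; the matrices and numbers are purely illustrative):

    import numpy as np

    def reprojection_homography(K, R_old, R_new):
        # A pixel p_old rendered under rotation R_old looks along the world
        # ray R_old^T K^-1 p_old; re-project that ray into the new view:
        #   p_new ~ K R_new R_old^T K^-1 p_old
        return K @ R_new @ R_old.T @ np.linalg.inv(K)

    def yaw(theta):   # rotation about the vertical axis
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    # Illustrative intrinsics: 1440x1600 eye buffer, ~90 degree horizontal FOV
    w, h = 1440, 1600
    f = w / 2
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1.0]])

    # Head yawed 1 degree since the frame was rendered: warp instead of
    # re-rendering, e.g. via a textured full-screen quad on the GPU.
    H = reprojection_homography(K, yaw(0.0), yaw(np.radians(1.0)))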

Similar techniques are used in video compression - encoding the exact value of every pixel is expensive, but you can trade bandwidth for processing by encoding transformations of a previous frame. A modern compressed video consists mainly of these interpolated frames, with only a minority of frames containing a full image. This interpolation can use data from both past and future frames (B frames) but can also use just the data in previous frames (P frames). This works extremely well most of the time, but there are some edge cases:

https://www.youtube.com/watch?v=r6Rp-uo6HmI

https://en.wikipedia.org/wiki/Motion_compensation
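
The core idea of motion compensation, reduced to a toy exhaustive block search (real encoders use sub-pixel precision and far smarter search patterns):

    import numpy as np

    def best_motion_vector(prev, cur, by, bx, bs=16, search=8):
        # Find the offset (dy, dx) into the previous frame whose bs x bs
        # block best predicts the current block at (by, bx); the encoder
        # then stores just this vector plus a small residual.
        block = cur[by:by+bs, bx:bx+bs].astype(np.int32)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                    continue
                cand = prev[y:y+bs, x:x+bs].astype(np.int32)
                sad = np.abs(block - cand).sum()   # sum of absolute differences
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

    # Two frames where everything shifted right by 3 pixels:
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 255, (64, 64), dtype=np.uint8)
    cur = np.roll(prev, 3, axis=1)
    print(best_motion_vector(prev, cur, 24, 24))   # -> (0, -3)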


Not all things that look like "interpolators" need traditional lookforward lag.

Mice and head trackers can already run at 1000 Hz. It's the GPU that cannot keep up.

Instead of black-box interpolators (e.g. Sony MotionFlow), a smart interpolator can be made to know the high-frequency controller inputs in realtime, and doesn't even need to use guesswork-based interpolation for everything.

Just shift everything around based on the high-refresh 1000Hz controller input. (In other words, "reprojection").

Also, knowing more about the source (e.g. a near-zero-lag controller input stream) eliminates lots of interpolation guesswork. It's much like how H.264 video compression relies heavily on interpolation-style mathematics inside the codec, but has full awareness of the source video material, which lets it compress virtually artifact-free.

So basically, you are simply giving a smart interpolator full awareness of things like geometry & input at a higher rate than the GPU renders, to avoid guesswork on those kinds of items.

Things like multilayer Z-buffers can help solve a lot of the parallax-reveal problems of creating intermediate frames, and there are tweaks in the works to eliminate reprojection artifacts, like artifacts or distortions around the edges of objects in front of other objects. So adding intermediate frames with full parallax effects can eventually become artifact-free, because the GPU knows in advance what is behind what. Basically, more advanced reprojection algorithms that can create near-flawless intermediate GPU frames (without lookforward) without a full polygonal rerender.

Prediction helps (as it does for Oculus), but remember, we have controllers that already go at ultra high frequencies, and it is expected headtrackers will eventually become ultra high frequency too -- and that extra data can reduce the need to do lookforward prediction.

It's all very complex, with many researchers working on multiple solutions, but it can reduce the average processing-power-required per extra frame, and it can theoretically allow high reprojection ratios without lookforward lag (e.g. theoretical future 10:1, such as multiplying 100fps to 1000fps, at least with 1000Hz input devices like 1000Hz gaming mice, and 1000Hz head trackers).
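
Schematically, with stand-in functions and the hypothetical 10:1 case from above:

    import time

    DISPLAY_HZ, RENDER_HZ = 1000, 100
    AMP = DISPLAY_HZ // RENDER_HZ       # 10 presented frames per full render

    def latest_pose():
        return time.monotonic()          # stand-in for a 1000 Hz tracker sample

    def render(pose):
        return ("full render", pose)     # stand-in for a ~10 ms GPU frame

    def reproject(frame, pose):
        return ("warped", frame, pose)   # stand-in for a ~0.1 ms warp pass

    def one_second_of_frames():
        for _ in range(RENDER_HZ):
            frame = render(latest_pose())
            for _ in range(AMP):
                # Every presented frame gets a *fresh* tracker sample, so the
                # warp needs no lookforward prediction for head rotation.
                presented = reproject(frame, latest_pose())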

Several VR scientists have indeed advocated the eventual need for 1000Hz, as there are confirmed tangible immersion benefits to getting that high and beyond.

That's why I wrote that article full of motion demos explaining the visual science of why 1000Hz displays are needed. It will be useful for passing a theoretical future Holodeck Turing Test (not being able to tell a VR headset from transparent ski goggles in a reality-versus-VR blind test), in terms of Moriarty-style or Matrix-style "it's real" VR.

Many tricks layered upon each other, to achieve what's being achieved today, and this creativity will only continue. Lagless lookbehind-only interpolation (utilizing ultra-high-Hz controller input to reproject new 3D positions). Foveated rendering too, yes. Realtime beamtracing with realtime denoising (NVIDIA scientist paper), perhaps. Maybe even all piled on top of each other simultaneously.


I think lag is defined as the time between when you make a movement and when the movement is displayed, right?

So if that delay is 200ms, we know it makes people sick, for example. A delay of 0ms would be "zero lag" IMO.


I agree that VR is here to stay, and most of the things people complain about will be solved within a couple of years. I get the impression that most people here are chronically underwhelmed by things. My mind is still blown by the PSVR, the least advanced of the big 3. But I'm older, and I've been waiting for this since 1990. To me the killer app is VR. Myself and everyone I know (including non-techies) got VR to experience VR. I realize a true killer app would have greater reach. I don't think those are knowable except in hindsight. I thought Google Earth was killer, but I guess not everyone agrees. I would love to see game companies invest in decent ports of existing AAA games, including WoW. Again I don't think it needs any big whiz-bang extra, besides being in VR.


Just making a branded VR game (i.e. Zelda, Super Mario, et al.) would move VR hardware like hotcakes.


It's chicken and egg though. No company will make VR games unless the market is large enough. It's up to the little guys, who can take risks, to prove the market before the big guys come in and take over.


And long term, this technology will definitely shake up who the little guys and big guys are, as big companies fail to adapt and small companies make hits.


I don't think VR is at the iPhone 1 stage. It's more like the first non-smart phones, let's say the Nokia 6110.

It's quite possible that, with progress in reading brain waves, the true VR equivalent of the iPhone 1 will be a head cap you put on that overwrites your sight/smell/touch perceptions through its sensors. That would truly be regular VR/phone versus true VR/smartphone/iPhone 1.


> It's quite possible that, with progress in reading brain waves, the true VR equivalent of the iPhone 1 will be a head cap you put on that overwrites your sight/smell/touch perceptions through its sensors.

That sort of headset would be such a massive breakthrough in neuroscience that the VR aspect of it would be tiny in comparison.


- Removal of sensors placed around the room. This will be harder to do, but cameras/sensors built into the headsets themselves could potentially accomplish this.

Inside-out tracking is a reality in consumer devices now. All Windows MR devices that shipped late last year have 6DOF inside-out tracking via cameras on the front of the headset, with no external sensors. Moving forward there will be more devices from other vendors that use inside-out tracking. Qualcomm has shown prototypes, Google + HTC were working on a Tango device that got cancelled, HTC is working on an inside-out standalone for the Chinese market, Oculus has shown standalone inside-out tracked prototypes, etc.


How reliable are they? The Vive's tracking is pretty rock solid. I'd be very disappointed with anything less (e.g., even 97% solid is not good enough) given that any glitches are REALLY jarring and nauseating in VR.


>How reliable are they?

They are very, very good. I've owned every major HMD since the DK2 came out in 2014, and I would say the Samsung Odyssey is the best one to date. The inside-out tracking is fantastic and just as good as Lighthouse (in practical usage, not theoretically). When you consider that there is no setup involved, it's a no-brainer that this is the way forward.


The problem I find with inside-out tracking is the range of motion. One of the most powerful concepts in VR is being able to do things with your hands when you are not looking at them.

Maybe they could do inside-out tracking on the controllers?


>The problem I find with inside-out tracking is the range of motion. One of the most powerful concepts in VR is being able to do things with your hands when you are not looking at them.

Agreed, there is a bit of an occlusion issue when your arms are behind/above the HMD. I feel like they could probably solve this though, with magnetic tracking like Sixense's [0] integrated into the controllers and used in conjunction with IMU/camera data.

[0] https://www.sixense.com/platform/hardware/


I don't think we're even in the iPhone 1 stage - more like the Motorola Razr (the old one, back in 2004/05). The real cool stuff is barely even being thought of now, let alone built and sold.

Also, in a perfect world this will all just be a transition step before we get full on holodecks.


The other thing that needs to happen is that the skill, time and money required to create quality VR & AR content needs to be significantly reduced. This will come with time too.

We're doing our own small part to try and make that happen with our automated 3D scanning platform (http://realityzero.one)


From my experience, I think there are several completely different axes of quality.

If it’s a cartoonish game, the imaging doesn’t need to be photo-realistic. If it’s ‘toon style or lit and textured by photography of real environments, the polygon count doesn’t need to be high.

What it does need, beyond the hardware, is immersive physics, and a gameplay-justified reason for why you can’t run through the furniture/wall/cable that you can no longer see. The game “I Expect You to Die” does that perfectly because you’re sitting down the whole time.


I don't think the skill, time, and money required for VR is the main problem. It's not all that much harder to make a VR title vs. a non-VR title; the cost is roughly the same. The problem is that the VR market is not large enough to justify normal AAA-size budgets.


That's a very good point. No content = no point.


I have a similar opinion, and I don't think it is even at the iPhone 1 stage yet. It is more like an early smartphone. The iPhone managed to become a very decent, widely used smartphone within five years. I don't see this happening yet with VR, as it is extremely limited by hardware and software.

We are finally realizing how far from real-time our OSes, software, and hardware are. After all these years of abstraction and slight delays added everywhere in the stack, we finally have a motive to unwind and improve them.

Maybe some day we could have a Sword Art Online-style game to move VR forward. Link Start.


When I think about VR, I think first about a lot of other stuff before I think of gaming.

If only for remodeling my flat (kitchen, bathroom) or building a house.

I would also like to train on a virtual lathe before using a real one.

I might buy the new HTC Vive, and I will see it as an early-adopter beta hype thing because there is still work to do, but in general it already feels really good.

That Valve Portal demo, wow, that frightened me a little bit :)


Apart from the issues you list, the biggest problem for me is game quality / support. The VR-native titles, aside from a few exceptions, feel and play like tablet games; the non-native titles are usually riddled with rendering artifacts, terrible camera work, or really unstable framerates (a showstopper for me in VR).


> we're in the iPhone 1 stage of VR

You could immediately do a number of useful (read:productivity enhancing) things with the iPhone 1 - play your music, make calls, browse the web, take pictures, send emails. It deprecated a lot of what needed to be done on traditional phones, desktops, and laptop computers. It would last a day without charging, you wouldn't have to hook it up to a GPU or strap it to your body to use it, and it wouldn't give you motion sickness. Cellular technologies developed quickly to support the bandwidth needed for even better user experiences. Shipments jumped from 1M in 2007 to 20M in 2009.

There are very few polished games or apps available for VR 2 years after the "new" generation of VR headsets was released in 2016 by Oculus and HTC, and total headset unit shipments for the entire market (excluding phone-mounting headsets such as Gear) are probably in the low single digit millions for 2017. It hasn't yet deprecated any traditional dedicated communications technology or functions.

I wish I could get excited about the future of this field, but I really just don't see what the killer app will be for VR. Facebook thinks it will be virtual meetings for the enterprise, and hanging out virtually with friends/family for the consumer market... I am very skeptical but want to be proven wrong, as VR is one of the last platforms pushing hardware and software innovation forward at the moment.


> There are very few polished games or apps available for VR

Not true. Examples: Brass Tactics, Robo Recall, In Death, Lone Echo. There are already more quality games in the Oculus Store and Steam than most people will have the time to play.


I'd say VR is still in the Windows Mobile and Palm Pilot days.


Or in the MS-DOS 2.5D days of gaming (Think DOOM). Despite the technical limitations, it is this awesome feeling of experiencing something revolutionary and new.

Previous VR endeavours, like those of the '90s and early 2000s, would then be the Pac-Man and Donkey Kong of this analogy :-)


> Or in the MS-DOS 2.5D days of gaming (Think DOOM).

We're most definitely there. Here's the Original DOOM, modded for VR:

http://rotatingpenguin.com/gz3doom/


I'm not convinced that removing sensor placement or wires is a huge problem. It's not any more involved or awkward than a big home theater setup is.

As someone who has had an oculus since the consumer version was released, my main problems are:

1) Eye strain. Even though you have a 3d effect you're still looking at something a few inches from your eye and that disconnect causes eye pain and headaches after playing for more than an hour or so.

2) Locomotion. I've yet to find any way of moving around in VR space that doesn't either make you nauseous or pull you right out of the realism of the experience.


Eye strain will be mitigated with eye tracking and varifocal displays (https://www.roadtovr.com/oculus-research-demonstrate-groundb...), as well as higher resolution. It might still be a problem because you're still staring at a screen on your face.

Locomotion is less of a problem than people think it is, and I think that stems from a lot of people in VR being hardcore gamers, with exploration of large spaces being a core mechanic and selling point for 3D video games for the last 25 years. I don't think omni-directional treadmills or vestibular stimulation or anything inconvenient like that will catch on for locomotion; I think we'll use various forms of teleport or sliding (traditional 3D) locomotion for the foreseeable future, and people will mostly be OK with it. It's possible that with wireless and/or standalone headsets, redirected walking will be a popular option. You could imagine a headset where Chaperone/Guardian builds on the SLAM used for inside-out tracking and can give apps information about the layout of your house, allowing for large procedurally created virtual spaces. This still doesn't solve the problem for people who don't have medium-sized private spaces to play in, and it also makes it harder to do multiplayer games where players are playing in very different spaces.


One problem rarely mentioned regarding VR is input. Jim Sterling mentioned this in a recent Jimquisition[0] video, and I think it's a valid criticism - the complexity required of modern games can't often be replicated with VR as well as it can with a controller and/or keyboard, certainly not improved upon. Motion controls are often not precise enough - players can do more with buttons sitting down than they can waving their limbs around.

[0] https://youtu.be/7_h6GYI8ddA?t=6m56s


A similar thing was said about keyboard and mouse compared to dual analog.

And those things were true.

Games and people adapted.

Personally, I feel something worn that can pick up nerve impulses, as well as deliver modest feedback (tactile and/or electrical), will close much of this gap.

New input paradigms will advance too, just as they currently are for touch.

Touch today is getting good. The finger in the way problem is being chipped away.


Even if touch and motion controls improve, what evidence is there that they present a better paradigm for interaction than a keyboard or controller?


None. The question really is when does input get transparent enough to not be a bother. Whether it beats other forms and technologies is a different question. One that doesn't necessarily require an answer.


Unfortunately there seems to be no end in sight to this cryptocurrency nonsense, and it has made GPUs double in price.

VR was already too expensive. Now it’s completely out of the question for most people. Adding an absurdly high res and high refresh HMD to the mix right now doesn’t seem like a good idea.


>Imagine the iPhone X version

There's going to be a black bar across the top middle of my field of vision?


Actually there might be. Just like the iPhone, we need somewhere to put the face and eye trackers.


VR is not dead because the resolution isn't high enough. It's dead because it's been around for 30 years and no one has found a useful application for it.


Are you a troll or just brain-dead?


Google plans to stream VR content, including games, so the GPU cost is not such a hurdle for the average consumer.


It's physically impossible to stream VR games from a remote server without unacceptable lag.


I think it depends on the definition of "stream". You could maybe trade pre-rendering for file size; say, prerender a million versions of a 360-degree image along with some kind of 3D shadow map - something that takes a lot of the work off the unit.

We already have 3D video - I'd be surprised if the concept couldn't be expanded, leaving the unit to merge streams of background environment and 3D animated sprites.


> average computers to be able to render high frame-rates without breaking the bank

High-res stereo at 120Hz is never going to have the same graphics as the latest high-budget big game release. Current GPUs are already very powerful, but if people expect to get the same graphics when they use VR, they are going to be very frustrated.


I'm all for it if it means that AAA games will finally step off the photorealism treadmill and admit that pushing polygon counts is not a substitute for art direction. I'll take a Fortnite look over a PUBG look any day.


You can say this, but if you look at the most popular mods for Skyrim, they tend to make the game look more photorealistic, not less. The general trend for video game consumers is towards realism.


Never is a long time. Screens can only get so good before they're pointlessly higher resolution, and at that point GPUs will keep increasing in performance.

At some point your GPU runs out of things to do.


Sounds suspiciously like not ever needing more than 64KB of RAM.

The reality is that greater horsepower allows for greater abstraction, and easier to program APIs. Increasing developer productivity 2x reduces performance 10-100x, or something like that. So there’s never “enough” performance for the same reason there’s never “enough” powerful/usable APIs.


A current GTX 1080ti is overpowered for a 1080p display, it's too much GPU for too few pixels. If you're driving a 4K display or a head-mounted display with higher refresh rates it will break a sweat, but not on today's games with today's workloads.

Audio used to be really difficult to process in real-time but now it's trivial. There's only so much audio processing you can do before it's ridiculous and pointless.

The same goes for video. Once you have, say, a 40K display for each eye at 244Hz there's no point in going for more pixels or faster refresh rates. If a GPU can handle that, easily, then that GPU will probably be best put to use doing other things in addition to rendering graphics.

Memory is not tied to your senses, we can always find uses for more. Audio and video are, and at some point it's as good as real.


There are rumors about foveated rendering being demoed around the same time.


The specs sound impressive and would truly make VR much more compelling (I have an Oculus Rift and the low resolution is noticeable even in low-spec games). On the other hand, I would wait until actual units are out there, but it doesn't seem that outlandish that Google and LG would be able to pull this off.

I assume this is targeted at the gaming market, because at those specs, unless you sell it at a loss, the MSRP is going to be at least $500. I wonder if this signals Google entering the VR arena as a publisher to compete with Oculus. I know that VR is also used for advertisements, and they see engagement of 5 minutes or more for properly set up VR ads, so maybe they will be pushing on that front. From what I have seen from Google, their VR has mostly been focused on mobile, so them focusing on the high-end market is really interesting.


> their VR has mostly been focused on mobile so them focusing on the high end market is really interesting

The one consistent thing about Google's VR/AR strategy so far is that it's extremely inconsistent. Different groups within Google are taking a lot of shots at different technologies and form factors with different partners. Some examples include Google Glass (built by Google X, now revived as Glass for Enterprise), which was supposed to be picked up by Tony Fadell of Nest, who then ended up leaving Google altogether; Google Cardboard (a side project by 2 smart Google France engineers for a conference); Google Daydream w/ Daydream View; Google Tango (built by the ATAP group and shut down after the announcement of ARCore); and now ARCore w/ Asus and LG.

There is no discernible Google VR/AR strategy other than - let's see what bubbles up from different dev groups and, when needed, react to market forces (ARCore was a direct reaction to Apple's ARKit). IMO until Google really "focuses" on VR/AR with a dedicated group and strategy this new tech will remain a sideshow without much traction.


>There is no discernible Google VR/AR strategy other than - let's see what bubbles up from different dev groups and, when needed, react to market forces.

Clay Bavor's presentation at Google I/O[1] and SID Display Week[2] seems to outline their strategy concerning AR/VR pretty well.

>Some examples include Google Glass

Released in 2013 and pivoted to enterprise use. Now under the control of the Google hardware division.

>Tony Fadell of Nest who then ended up leaving Google altogether

Yes, he did leave Google, but only after being assigned to the roof.

>Google Cardboard (a side project by 2 smart Google France engineers for a conference)

A low-cost VR solution developed in 2014 by two Googlers during their 20% innovation time. I believe it's also the most widely used VR solution in the world.

[1]https://www.youtube.com/watch?v=tto90e-DfeM

[2]https://www.youtube.com/watch?v=IlADpD1fvuA


Their inconsistency may be a result of them releasing products not to pursue an actual consumer goal, but just to collect massive amounts of data, killing them when they're done.


Of course that’s a problem with all of Google, not just VR. The last major top-down strategic direction they had was social, and that didn’t go very well.


I would also add that the price of the panel is one thing, the price of the systems needed to drive those resolutions is another. I wonder how they'll pull off appropriate frame rates at that resolution per panel.


All you need is a hardware upscaler in the display cable to take a lower-resolution signal up to the native resolution of this display.

Clarity isn't as important as getting rid of the screen door effect.


This can't be overstated. You can achieve really convincing presence with nothing but flat shaded primitives, but the effect is greatly diminished if it appears there's something external in your field of view.

The other bonus here is that with the huge resolution bump, other use cases become available. Reading text in VR is tricky with the shipping headsets right now, but with this, a virtual desktop could be realistic.


This seems like a dumb question, but why is it that SDE (screen door effect) is dependent upon resolution? It seems to me like extremely high resolutions would force SDE to be solved but that SDE could also be solved by making "fatter" pixels at the same PPI. The cheapest way to do that would presumably be some slightly diffusive layer on top of the screen that diffuses by about 1/2 the distance between pixels. This seems like such a simple solution that it would have been tried already though, so why is the focus on resolution?


The diffuser will blur the image, the opposite of what you want.

A higher-resolution screen with upscaled input does not sacrifice clarity for blur to hide the SDE. Upscaled input won't have as much detail as native higher-resolution input, but the performance gains of upscaling far outweigh the negatives.


It seems like there should be a middle ground where you blur enough to remove SDE but not so much that pixels overlap. The whole problem with SDE is that there is a perceptible gap between pixels - blur will spread the pixel over the gap, but the gap is what ensures that the pixels do not blur on top of each other. I assume this doesn't work in practice (since I don't think anyone is doing it), so I'd be interested in an argument for why it doesn't.

Conceivably, a higher-resolution display could have worse SDE than a lower one. If the pixels became smaller, then the gaps between pixels could get worse even if there are more pixels per inch. For example, [1] shows a game of Tetris played on a building, and the problem with it is not that the resolution is low, but that the pixels have too-large gaps between them. Doubling the resolution would only help a little, but making each pixel a big square instead of a dot would help immensely. A diffusive screen over that building could potentially help quite a lot. This is obviously an absurdly exaggerated scenario.

[1] https://mashable.com/2014/04/06/building-tetris-philadelphia...


Instead of blur, microlens arrays can be used to increase the pixel size to cover the gaps.

The microlenses can also be designed to blur the RGB together to provide a true white pixel from the approximately three coloured subpixels.


Absolutely something I was thinking, but I'm not sure what the right terminology is! "Microlens" sounds fancier than needed; I'm really happy to just diffuse, but with a limited blur so that pixels don't bleed.


Only if the diffuser is in the optical chain and not the surface target at the focal plane of a projection.

Whenever we crumple our eyes up to squint, we're creating a diffraction grating that works a treat for the myopic amnesiac I am when without my spectacles.

The ground glass of the focusing screen in my Nikon is diffuse. This particular diffusion assists with gauging the lens focus, but only for apertures smaller than f/2.0, so those expensive 85mm f/1.4 prime lenses need autofocus. Dunno how I managed for two decades professionally before the digital switch rendered great optics practically redundant.*

So what's preventing the projection of the high-resolution display information onto a kind of focusing screen, like the ones the old-fashioned single-lens reflex cameras won the market with in the fifties and sixties?

Is it the physical constraints of imposing the display into the field of vision for infinity focus?

If my amateur interest long ago had carried my thinking further I might know; sadly I have no applicable reading on the subject, so forgive me if this is a beginner's supposition in error.

I certainly see the opportunity to directly encourage the hardware development, even if the equipment will be unwieldy to use (even more so than today), for specialist fields such as fabric design and magazine printing. I have been involved in the latter for the entirety of my career, two decades of that running the company I hoped would open up enormous markets; that field even defenestrated Google in 2004, leading to the most epic narrative and pivot story if I manage to get things kicking again.

Surely there is a very good overlap between the current generation of customers for whom the budget simply isn't the obstacle, and the customers who will allocate whatever you want to charge in the future, if you can deliver real advantages? I'm optimistic about VR, but not on any kind of short-term horizon. I founded my own business on a thirty-year plan that could still be met, having made it through decades one and two, kinda just the latter. Maybe this is what we all need: real long-term benefit analysis and commitment. This isn't the short-termism often ascribed to the youthfulness of the Valley (flighty CEOs discounted); I was very young, and junior by a generation and some to my co-founder, when I realized I had found my calling. Surely the VR sector has plenty of similar thinkers young enough to put such time to their dreams?

* I kinda continue a bit in my profile, where I may be bolder soon. I welcome any feedback or inquiry; I may be less than the epitome of clarity when I come close to professional engagement with ideas I can't believe... the industry hasn't destroyed my hopes, because of politics not technology, not yet anyhow.


Everyone says that a powerful GPU is needed to drive this thing, but I don't think that's a must. Careful art direction based on current hardware limitations can produce nice-looking virtual worlds — not photorealistic, but still immersive and fun. Nintendo pulls this off consistently.


I absolutely agree! I do see the appeal of photorealism, but lots of excellent 3D experiences are far from photorealistic. Sensible art direction can go very far, and I'd even argue that a less-than-ideal graphical experience can be forgiven if the content is meaningful and engaging enough. I think we all have examples of this in mind.


I would love to walk around in a VR world that looked like the old vector displays. Slightly glowing wireframes, but with occlusion. This technically doesn't even need polygon rendering, though doing it with mostly black textures except at the edges might be cool. At today's resolutions I think small cylinders along the edges of flat black surfaces might look better. And that's just one stylized world that doesn't require a mammoth GPU to handle.


You also don't need to render at full resolution.

A significant benefit of higher res VR displays is not being able to physically see the pixels / screen door effect of the screen. You still get this benefit by rendering at a lower resolution and upscaling to the display resolution. Certainly you will be able to see 'render pixels', but that's a much less significant problem and just reduces the realism of the scene.
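
Back-of-envelope, assuming the panel from the article and an illustrative 60% render scale:

    panel_px = 5500 * 3000                 # 16.5 Mpx across the panel
    render_scale = 0.6                     # per-axis render resolution factor
    rendered_px = panel_px * render_scale**2
    print(rendered_px / panel_px)          # 0.36: ~1/3 of the shading cost,
                                           # while the display itself stays SDE-free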


If you add eye/focus tracking you can selectively render the part in focus in a higher resolution and use lower resolution for peripheral vision. The end result is that you are rendering at exactly the bandwidth that the human eye can handle, which is optimal and probably not that high. Going forwards, this is IMHO where the real optimization gains are.
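
Rough pixel-budget numbers (all assumptions: a split panel, a ~20-degree full-resolution inset within a ~110-degree field of view, quarter-resolution periphery):

    eye_w, eye_h = 5500 // 2, 3000              # one eye, if the panel is split
    inset_w = int(eye_w * 20 / 110)             # full-res gaze inset: ~500 px wide
    inset_px = inset_w * inset_w
    periphery_px = (eye_w // 4) * (eye_h // 4)  # quarter-res everywhere else
    shaded = inset_px + periphery_px
    print(shaded / (eye_w * eye_h))             # ~0.09: shade <10% of the pixels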


That's a really interesting idea, but can you imagine how incredibly annoying it would be if there were any level of latency? It seems like even a few tens of milliseconds of latency would make it rather horrible. Especially considering how fast your eyes can dart around a screen, could we even do that today?


I dunno. There are a lot of flaws in your visual perception as it is (blind spot, nose hiding, differences in light/dark/color sensitivity based on location in FOV, etc).

The brain corrects for things like that automatically. Might work just as well here.


> Everyone says that a powerful GPU is needed to drive this thing, but I don't think that's a must.

It is not a must, but it is a mostly. Most people want something approaching realism by default. Stylism is okay but it isn't the primary use case.

The video game sales show this. Top-selling games are mostly ultra-realistic: The Witcher, Call of Duty: Modern Warfare, Battlefront, etc. Yes, there is Nintendo's low-end strategy, but most are ultra-realistic.


This is so true. Case in point: Brass Tactics uses cartoonish "Game of Thrones intro" graphics that are quite optimized. A current graphics card can even render the game world from a different overhead view on the monitor, with capacity to spare.


I do not agree. I’m getting pretty tired of the shiny cartoony untextured polygon VR look. I’m ready for more R in my VR.


Personally, I'm ready for more _fun_. I don't care how realistic it is. The new experience alone is captivating. We already have "realistic" in the games industry, and it's produced the worst games of all time.


Atari sold millions of 2600s that had 160x192 resolution because joysticks are easier to use than hand waving.


For the people worrying about GPU usage, it also says "Foveated driving logic for VR and AR applications was implemented".


I had to Google "foveated driving logic": it means tracking the user's eye movements and rendering only the area most central to the eye in high definition.

Source: https://www.roadtovr.com/google-shares-new-research-foveated...


Real source: https://research.googleblog.com/2017/12/introducing-new-fove...

(the one you linked came up first for me too when searching around, took a little more digging to find the original article).


Huh, I've been wondering for like a decade if we'd start doing this, but the eye-tracking latency must have to be insane.


Is this foveated as in Field-Of-View-ated?



thanks!


In layman's terms, sure! It's a useful enough way to think about it.

Technically, though, the "fov" refers to the fovea, not the field of view.


That sounds really nice.


Ya, this is effectively a non-issue. https://www.roadtovr.com/nvidia-perceptually-based-foveated-... The primary issue in this space is latency, not raw power.


> Ya this is effectively a non issue.

Yeah. If the new panel is presented as 3x or 4x 1080p (two foveal insets and one or two backgrounds), and tolerates 30 fps, I could run it off the old integrated graphics I'm typing at now.

> The primary issues in this space is latency

For immersive VR, yes. But as one moves away from gaming, and immersive, and VR, design constraints relax a lot.

There's so much ambient confusion about the shape of the broader design space, and misattribution of constraints.

I usually use my WMR (and Vive before that) on an old laptop with Intel integrated graphics. A duct-taped-on 30 fps camera-passthrough AR serves for balance, permitting <30 fps rendering for a software development 3D "desktop". Latency, and its variance, is just not a challenge here.

I can even do subpixel rendering on the WMR. Lens focus is so bad, subpixels are only worth bothering with for like (0.5 kP)^2/eye (of the ~1 kP square/eye usable area, of the 1.4 kP square/eye panel). There's no point in even rendering full native resolution over much of the visual field. And not caring about immersion, there's no need for barrel and chromatic correction, nor for blending away seams between resolutions.

Such a vast amount of effort is spent on dealing with "horrible immersion-breaking visual artifacts", which for some parts of design space are like "meh, sooo don't care". Remember the skeuomorphic user interface fad of a decade ago? A calendar app should look like a leather-bound calendar book? And... remember all the claims that calendar apps wouldn't be viable for years, because of the GPU demands of faithfully rendering light reflection off the leather? Yeah, me neither. That would have been silly. Just like expecting a 3D/HMD professional "desktop" environment to prioritize immersion.


Here's an optical illusion to show how small your fovea's FoV is: https://www.shadertoy.com/view/4dsXzM


Does anyone have the GPUs to drive this tech though? I've been thinking lately that the main reason why the 2015-2016 hype about VR completely flamed out is that, on top of the ~$500 headset, you needed a new ~$1500 gaming PC to do anything with it. As a result, not enough consumers got on board to kickstart the virtuous cycle of more consumers -> more spending dollars -> more content developers -> more attractive content -> more consumers. The kind of GPU you'll need to drive a 5500x3000 display at 120 Hz certainly isn't going to help with that.


Not just GPU but display bandwidth too, 5500x3000x120Hz requires a monstrous 59.40 Gbps according to this calculator: https://k.kramerav.com/support/bwcalculator.asp

For that you'd need multiple displayport cables.
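
The raw arithmetic behind that figure (the gap between 47.5 and 59.4 is blanking overhead):

    w, h, hz = 5500, 3000, 120
    pixels_per_s = w * h * hz               # 1.98e9 pixels/s

    rgb24 = pixels_per_s * 24 / 1e9         # 47.5 Gbps raw, ~59.4 Gbps with blanking
    yuv422_8bit = pixels_per_s * 16 / 1e9   # 31.7 Gbps (the ~32 Gb/s figure below)
    yuv422_10bit = pixels_per_s * 20 / 1e9  # 39.6 Gbps (the ~40 Gb/s figure below)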


Assuming YUV 4:2:2 at 8 bit per component you end up with about 32Gb/s, 40Gb/s for 10bits.

By the time GPUs able to render at this resolution and framerate become mainstream I'm sure we'll have the cables to connect them. The big problem with VR is that you want very long cables to be able to move around unless you manage to stuff everything into a backpack. Very high bandwidth and long cables don't always play very nice together. Worst case scenario there's always optical fiber...


The assumption is that the system would use foveated rendering to keep bandwidth demands down.


Thunderbolt 3 runs at 40Gbps, so it wouldn't require very much foveated rendering to get the bandwidth to that spec.


I believe DisplayPort 1.4 with DSC would already support this, though I think it's pretty close to the maximum.


Given the crypto-mining price inflation of the kind of GPUs needed to power something like this, it's doubly dead.


Quad 1080 Ti cards... Oh wait, even if you were able to afford them, they don't support that anymore, do they? It's simply not possible to drive this with current tech, I guess?


Nvidia doesn't support it, but they don't have to.

Vulkan allows game developers to spread the load across all GPUs, if they wish to. You can do multi-GPU with 8 1080 Tis, an Intel iGPU, and an RX Vega all together without issues.


Without issues is probably stretching the truth a bit.


Well, it’s entirely up to the developer – you won’t have any issues with the driver, or of similar nature.

In contrast to SLI, where the driver frequently was the cause of issues.


What's the situation in practice? I mean, are there any games which run perfectly on a random GPU array?


AMD has usually used Ashes of the Singularity as a demo of how well a game can run with DX12 or Vulkan multi-GPU.

But many modern engines have started splitting their render tasks so that some can be put onto additional GPUs (or physics tasks, or animation tasks, ...).


Whatever the price, cryptocurrency cultists will buy them all.


The people that buy GPUs for mining don't really have to believe that cryptocurrencies are the future; they just have to run the numbers and see that mining is profitable with specific hardware. It's not belief but economics.


You have to believe that it'll stay profitable for long enough to repay the investment, which is dubious at present. It definitely is a belief.


Participating in cults is profitable for a subset of cultists.


Providing services to cultists certainly does not make you a cultist :)

What motivates people to write these silly comments attacking cryptocurrency investors? Jealousy?


Desire to easily buy GPUs for gaming at a reasonable price!


Gaming today is big. So big I don't think anyone foresaw it coming. And VR is going to push this to another level.

I would have loved for Apple to capitalize on this, as they stand to be the company that integrates software and hardware well. The problem is Apple has never given a damn about gaming. They say they do, but they don't. It isn't their priority; it isn't in their DNA. (I bet none of the VPs are gamers of any sort.)

Nintendo could be another fit. But they are always lacking in hardware, and VR is very much hardware-limited.


Everyone is talking about gaming but these are the kinds of resolutions you need for monitor replacement and more business related tasks.


Fantastic. I've heard that to truly remove the awkwardness of VR, refresh rates need to go as high as possible. 240Hz would be better, and I've heard it argued that 1024Hz is optimal.

Also, with high enough resolutions, you don't need anti-aliasing. This is pretty close to 8K, which is probably around the time that anti-aliasing stops mattering.


I've never found the framerate of my Rift to be a problem; however, I'd gladly trade it for something with better resolution and improved contrast and brightness.

That being said, my high-end gaming PC (using a GTX 1080) struggles to maintain native framerate in demanding games without reprojection on the Rift. I can't imagine what kind of futuristic computer you'll need to drive 5500x3000@120Hz in VR. Let's hope the cryptocurrency mining gold rush will have subsided by then.


Foveated rendering should help a lot, but you'll still definitely need a powerful one.


Per-pixel operations in games have become extremely expensive in the quest for ever more impressive rendering. That gives lots of room to scale back, and still provide a very immersive experience, just with more modest rendering.


The specs are nice but I wonder if it can work well on games with current high end computers. If they really release it I will most likely buy it though.

I think that this kind of headset is meaningful for both VR power users and developers but Google and Facebook should focus on non gaming content to attract a wider audience.

Facebook is in the odd position where they could make a WebVR social killer app overnight, but they won't, because they really badly want to become the Apple of VR with their Oculus Store. I think they will regret this strategy in the future. That, or they will have to shell out billions once more to acquire a company that figured out what to do with this tech.


Now we just need a decade or two until consumer computing power can actually power it.


Even with a lower rendered resolution, this would kill any speck of screen door effect left on the current generation of VR headsets (talking about the Samsung Odyssey and the Vive Pro).


Or foveated rendering. That'll probably be commonplace in far less than a decade.


Isn't it a very scalable problem? Just put more processors in the GPU.


Right but "consumer" implies a price point, which is not very scalable.


The impact on your wallet scales too.


At this point, I'd be more concerned with improving the optics: lenses that don't smudge and don't have the Fresnel lens flaring, or perhaps even with a wider field of view.


Improving resolution allows for reading text smaller than a billboard. FOV and lens flare are bad, but text is a higher priority IMO.


This has an impact on lenses as well, because with high enough resolution you could emulate lenses on the screen.


You can't "emulate lenses on a screen" unless it's a lightfield display (which it's not).

With higher resolution you could correct for more lens distortion, but you still need lenses, and you won't be able to fix lens flare on fresnel lenses.


You'll have to give me more details about that, because that seems physically impossible to me (at least with the common definition of "screen").


Interesting, I wonder whether dynamically adjustable lenses can be used to treat vision problems.


I'm quite sure that's physically impossible with an OLED display.


From a GPU perspective, this is going to require some powerful hardware to drive. If you look at the 3D game benchmarks for the GeForce 1080 vs. the 1080 Ti (and slightly overclocked, more expensive versions of the 1080 Ti), the only video card that will actually deliver close to a consistent 60 Hz at 3840x2160 on a SINGLE DISPLAY is a 1080 Ti. Now multiply that by two screens, and a resolution per screen that is considerably higher than standard 4K.
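
In raw pixel throughput, roughly:

    uhd_60 = 3840 * 2160 * 60      # ~0.50e9 px/s, already 1080 Ti territory
    vr_120 = 5500 * 3000 * 120     # ~1.98e9 px/s if 5500x3000 is the whole panel
    print(vr_120 / uhd_60)         # ~4x; double it again if that's per eye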


As a home user with a passing interest in VR, I would throw down a bunch of cash to get something like a StarVR [0] setup for watching TV/movies.

Something that gives immediate immersion when put on, scaled down, lightweight/comfortable, portable, and able to last around 5 hours would be a great first step.

[0]: https://www.starvr.com


We've been working with Google to create content that can harness these new specs:

https://www.youtube.com/playlist?list=PLL-lmlkrmJalNqp7Q_dLA...


That's pretty impressive. The current generation of VR is too pixelated; hopefully this will fix that. Now only the GPU needs to keep up. Note the 60 GHz wireless link on the Vive Pro.


I'm afraid we'd need the performance jump that came with the latest generation of Nvidia cards to repeat many times over before we can get there. Even a single 2160p 144Hz monitor is hard to satisfy unless we are talking about an SLI configuration, which is extremely costly given the high demand caused by people after easy money.


I am wondering why all VR devices are designed like glasses. Wouldn't it be more comfortable for the user if we designed them like helmets?


No, helmets are heavy and get hot and sweaty very fast.


I think Google is going to need to develop some custom silicon to drive this level of resolution, and that's not going to be easy.


It would seem they've already developed it.

> A custom high bandwidth driver IC was fabricated. Foveated driving logic for VR and AR applications was implemented.


This was the first thought I had as well, but from an optimistic point of view:

If you are interested in modelling the behaviour of a modern fabric loom (which can be a four-storey-high proposition with thousands of spindles feeding air-guided bobbins), the weight of the threads in your weave design affects the entire process: the way your thread hangs requires a different tension, and how it stretches affects the feed and the speed with which the bobbin can fly through the warp (the lengthwise threads into which the pattern is woven), as does the elasticity and friction tension of the weft, sometimes intentionally... I am willing to bet it would suit me if the visual rendering were put on ASICs or FPGAs, while the physical modelling of cloth behaviour might be a more general-purpose solution.


To be clear, is this per-eye? It's got to be, right?

A single 4.3-inch panel isn't big enough for VR, right?


The size of the screen isn't very relevant, because for proper use there need to be lenses between the screen and your eyes, and those lenses can magnify the screen to whatever field of view is wanted. Mostly, a smaller screen just makes the headset smaller.


I want this so bad



