I've seen this question a few times, so I'll try to address it here.
Yes, you can take a bi-axial raster-scanning MEMS mirror (similar to what Microvision uses in their Pico projector, or those supplied by Mirrorcle), shine it into your pupil at the appropriate distance, and BAM! You have a point-source VRD. However, heed the warning on Microvision's Pico projector: those are NOT eye-safe lasers. It is not difficult, however, to get your hands on eye-safe laser sources suitable for a VRD.
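To get a feel for what "synchronized laser" means in practice, here's a back-of-envelope timing sketch. The numbers (27 kHz resonant fast axis, 60 Hz frame rate, 1280 pixels per line) are illustrative assumptions, not specs from any particular Microvision or Mirrorcle part:

```python
# Back-of-envelope timing for a scanned-beam display.
# All figures below are assumed for illustration, not from a datasheet.
FAST_AXIS_HZ = 27_000     # resonant horizontal mirror frequency (assumed)
FRAME_RATE_HZ = 60        # target frame rate (assumed)
H_PIXELS = 1280           # horizontal pixels per scan line (assumed)

# Each fast-axis period sweeps two lines (one left-to-right, one right-to-left),
# so the line rate is twice the resonant frequency.
lines_per_frame = 2 * FAST_AXIS_HZ // FRAME_RATE_HZ

# The laser pulse addressing one pixel must fit inside the mean pixel dwell time.
pixel_time_s = 1.0 / (2 * FAST_AXIS_HZ * H_PIXELS)

print(f"lines per frame: {lines_per_frame}")                      # 900
print(f"laser pulse budget per pixel: {pixel_time_s * 1e9:.1f} ns")  # ~14.5 ns
```

The point of the exercise: the laser has to be modulated on a ~15 ns timescale just for this modest resolution, which is why pulse width ends up being the resolution limiter mentioned below.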
However, with this simple device your FOV will be limited by the angular range of your MEMS mirror, and your resolution will be limited by the pulse width of your synchronized laser. Most importantly, the exit pupil (eyebox) is very small, so when you rotate your eyeball a few degrees the image starts to disappear. You therefore need redundant point sources to expand the FOV and also to expand the eyebox to permit eye rotation. These point sources must be spaced in a matrix at a pitch of less than 2 mm x 2 mm (the typical minimum dilation of the pupil in bright ambient settings).
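A quick count of how many redundant point sources that pitch implies. The 15 mm eyebox span is my assumption for covering eye rotation plus headset fit tolerance; the 2 mm pitch is from the pupil-dilation constraint above:

```python
import math

# Rough count of redundant point sources for one eye.
# Both figures are assumptions for illustration.
EYEBOX_MM = 15.0   # lateral eyebox span covering eye rotation + fit tolerance (assumed)
PITCH_MM = 2.0     # max grid pitch so at least one source always enters a 2 mm pupil

# Fencepost count: spanning 15 mm at 2 mm intervals needs 8 gaps, hence 9 points.
sources_per_axis = math.ceil(EYEBOX_MM / PITCH_MM) + 1
total_sources = sources_per_axis ** 2

print(total_sources)  # 81 point sources for a 15 mm square eyebox
```

Even this modest eyebox pushes you to dozens of sources per eye, which sets up the packaging problem in the next paragraph.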
That's a lot of point sources, so if you try to just use a bunch of MEMS mirrors, the driving electronics will occlude your natural view of the environment. That's not a problem for video see-through displays, but for optical see-through displays it's a deal-breaker. So you're back to trying to dissociate the light-beam point sources from the scanning-driver electronics. The typical strategy is to use some sort of waveguide: transport the light beam (for a given portion of the image) through a transparent material (e.g. glass), get that beam to outcouple from the waveguide at the precise location, then have it continue to raster scan. Now you need to do that for a few hundred light beams (per eye!) to fully fill the natural human FOV with a composite image at greater than 35 pixels per degree (for text readability).
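Here's where the "few hundred beams" figure comes from, as a rough sketch. The FOV extents and per-scanner pixel budget below are my assumed values; only the 35 pixels-per-degree target comes from the paragraph above:

```python
import math

# How many scanned beams to tile the full human FOV at text-readable resolution.
# FOV extents and per-scanner throughput are assumptions for illustration.
FOV_H_DEG = 150            # approximate horizontal FOV per eye (assumed)
FOV_V_DEG = 120            # approximate vertical FOV per eye (assumed)
PPD = 35                   # pixels per degree for readable text
PIXELS_PER_BEAM = 1280 * 900  # one scanner's pixel budget per frame (assumed)

total_pixels = (FOV_H_DEG * PPD) * (FOV_V_DEG * PPD)
beams_for_coverage = math.ceil(total_pixels / PIXELS_PER_BEAM)

print(total_pixels)        # 22050000 pixels per eye
print(beams_for_coverage)  # 20 beams just for FOV coverage
```

And that ~20 is before eyebox redundancy: multiply by the grid of replicated point sources needed so the image survives eye rotation, and you land in the hundreds of beams per eye.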
All of the above is achievable, but it requires precision optics and alignment to get everything working in harmony. Add the requirement of affordability, and you've got a pretty big engineering challenge on your hands.
I'm sure someone can do it. This is just one viable path to truly transformative AR displays, but I think it's the most promising.