collection of notes on virtual reality (vr) and stereoscopic image recording and viewing.
stereoscopy involves using two distinct images, one for each eye, to replicate the separate perspectives that each eye would naturally perceive. this technique primarily enhances depth perception. various animals employ different methods to perceive depth, such as analyzing light distribution, projection patterns, and parallax effects. a particularly significant method involves the relative distance of objects as projected onto eyes that are slightly apart.

eyes do not capture three-dimensional information; they receive flat, two-dimensional images. this is why stereoscopy is sometimes referred to as "2.5d" rather than true "3d", as it does not provide full volumetric perception.

photographs and videos using one image per eye typically capture only the position and orientation of the camera at the time of recording. consequently, when viewing the result, moving the head will not reveal more of the objects' sides. this is another reason why "stereoscopic" is often preferred over "3d", which can be misleading. if images are dynamically projected from three-dimensional data (e.g. computer graphics) and the display device supports head tracking, stereoscopy can simulate a viewing experience that responds realistically to head movements.
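the parallax cue described above can be made concrete with the standard pinhole relation for a parallel two-camera pair: disparity = focal length × baseline / depth. a minimal sketch, assuming a ~65 mm baseline; all names here are illustrative, not from any specific api:

```python
# depth-to-disparity relation for an idealized parallel stereo pair
# (pinhole camera model); names are illustrative.

def disparity_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """horizontal shift (in pixels) between the two views of a point at depth_m."""
    return focal_px * baseline_m / depth_m

# nearer objects shift more between the two views; that difference is the depth cue
near = disparity_px(focal_px=1000.0, baseline_m=0.065, depth_m=1.0)
far = disparity_px(focal_px=1000.0, baseline_m=0.065, depth_m=10.0)
```

disparity falls off with 1/depth, which is why stereoscopy differentiates depth strongly for nearby objects and barely at all for distant ones.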
beam splitter
  attachment to lenses, or a dual lens
  two horizontally offset openings direct light onto halves of the image sensor
side-by-side
  example: two handheld cameras mounted in portrait position using right-angle brackets
half-mirror
  cameras offset horizontally and vertically; a mirror redirects the image to the second camera
  for cameras too large to mount side-by-side
  mirrors slightly reduce image quality and require careful cleaning
consumer recording devices are still rare in 2024.
box cameras
  panasonic lumix dc-bgh1e
  z cam e2 (youtube: "z cam e2 3d rig")
  cannot be mounted closer than 93 mm lens-center distance
  designed for cable synchronization (genlock in)
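when genlock hardware sync is not available, a common fallback (not specific to these cameras) is aligning the two recordings afterwards by cross-correlating their audio tracks. a minimal numpy sketch, assuming both tracks are mono arrays at the same sample rate; the function name is illustrative:

```python
import numpy as np

def audio_lag_samples(a: np.ndarray, b: np.ndarray) -> int:
    """samples by which a shared sound event occurs later in track b than in
    track a (negative: earlier). trim that many samples from the later track
    to align the recordings."""
    # the peak of the full cross-correlation marks the best alignment
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return (len(b) - 1) - int(np.argmax(corr))
```

this aligns clips to within one audio sample, which at 48 khz is far finer than a video frame; true per-frame shutter sync still requires genlock.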
two images side-by-side at a horizontal distance close to the viewer's interpupillary distance (ipd); ~65 mm is common. same compression and file formats as monoscopic video, but with two images encoded in one larger frame. by placement:
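whatever the placement, the packing itself is just a horizontal concatenation before the frame goes to an ordinary monoscopic encoder. a minimal numpy sketch; the function name is illustrative:

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """pack two equally sized eye images (h, w, c) into one (h, 2w, c) frame,
    left eye in the left half, ready for a normal monoscopic encoder."""
    if left.shape != right.shape:
        raise ValueError("eye images must have identical dimensions")
    return np.concatenate([left, right], axis=1)
```

top-bottom packing would be the same idea with axis=0; the player only needs to know which layout was used.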
the ratio between stereo image separation and the viewer's interpupillary distance strongly affects realism. if the separation does not match the viewer's ipd, perceived depth is distorted, and viewing can feel cross-eyed or become impossible to fuse.
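the mismatch effect follows from simple screen-plane geometry: the eyes, spaced at the ipd, converge on a point whose two on-screen images are separated by some distance. a minimal sketch, assuming a viewer squarely facing a flat screen; all numbers and names are illustrative:

```python
def perceived_depth_mm(sep_mm: float, ipd_mm: float = 65.0,
                       screen_mm: float = 600.0) -> float:
    """depth at which a point drawn with on-screen separation sep_mm appears.
    positive sep = uncrossed (behind the screen), negative = crossed (in
    front of it). separation at or above the ipd would force the eyes to
    diverge, which viewers cannot fuse."""
    if sep_mm >= ipd_mm:
        raise ValueError("separation >= ipd forces eye divergence")
    # similar triangles: (depth - screen) / depth = sep / ipd
    return screen_mm * ipd_mm / (ipd_mm - sep_mm)
```

as the separation approaches the ipd, perceived depth runs to infinity; exceeding it has no geometric solution, which is the "cross-eyed"/unfusable failure mode noted above.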
also called head-mounted displays (hmds). most software assumes two controllers and active tracking.
tracking: continuous 3d position determination to adjust virtual environment view.
  outside-in tracking: external sensors (e.g. base stations or cameras) observe the headset
  inside-out tracking: cameras on the headset observe the surrounding room
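whichever tracking variant is used, its output each frame is a head pose (position plus rotation) that the renderer turns into a camera (view) transform. this is standard graphics math, sketched here with numpy rather than any particular vr api; per-eye views then offset the pose by ±ipd/2 along the head's x axis:

```python
import numpy as np

def view_matrix(head_pos: np.ndarray, head_rot: np.ndarray) -> np.ndarray:
    """4x4 view matrix as the inverse of the tracked head pose (3x3 rotation
    head_rot, 3-vector head_pos): the renderer draws the scene from wherever
    tracking says the head is."""
    view = np.eye(4)
    view[:3, :3] = head_rot.T             # inverse of a rotation matrix is its transpose
    view[:3, 3] = -head_rot.T @ head_pos  # inverse translation
    return view
```

a quick sanity check: a world point at the head's own position maps to the camera origin.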
pcvr
bigscreen beyond
  steamvr support
  4320x2160p, oled
  compact/lightweight (~143×52×49 mm, 127 g)
  exact dimensions are public, useful for 3d-printing accessories
  requires an iphone (€200+) for the face scan used to fit the headset
after comparing vive pro + controllers, hp reverb g2, and valve index, chose the vive pro.
htc vive pro
  corrective lenses still required in vr to avoid blur, as headset displays are fixed-distance and magnified by optics.
standards:
web stereoscopy:
works well: waves, spraying water, underwater scenes, caves, interiors, dense urban areas, product viewing, art installations, sculptures, volumetric objects, 3d animation.
works poorly: fast-moving objects toward the viewer, close unsightly views, empty rooms, foreground obstructions, excessive parallax/focus changes.