9. Approaching the Natural Solution
Li 2006, PNAS
Hasan 2017, Optics Express
https://www.theverge.com/2017/1/29/14403924/smart-glasses-automatic-focus-presbyopia-ces-2017
28. Nitish Padmanaban
Computational Imaging Lab
Stanford University
nitish.me
computationalimaging.org
Nitish Padmanaban, Robert Konrad, Gordon Wetzstein. Autofocals: Evaluating Gaze-contingent Eyeglasses for Presbyopes. Science Advances 2019.
Special thanks to Elias Wu, Jonathan Griffin, and Evan Peng, and the Computational Imaging Lab
Everyone gets it eventually; roughly 20% of the world population, or 1.3 billion people, is the current number
Hasan’s group has the new eye tracking, but no implementation
Add more of their papers?
Add more papers in general?
DeepOptics claims eye tracking, but no product is out
Trufocals went out of business, but it was a manual adjustment on the bridge
Similar triangles
Note that the gaze direction (central ray) doesn’t matter as long as the distance between the gaze points is correct
IPD and calibration screen distance need to be known ahead of time, as well as mapping from gaze points to physical size via screen size
We test vergence at some known distances
But the vergence is off, why?
The estimated distance changes caused by IPD and calibration distance errors aren’t too bad
Note IPD error is irrelevant at the calibration distance (the gaze points have no separation), and its effect is tiny even with a huge 5 mm error
Gaze error can be disastrous if both eyes err inward or outward by just a couple of degrees
Calibration error is 0 at ∞ because eyes point straight forward, making distance to screen irrelevant
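The similar-triangles relation above can be written down directly. A minimal sketch; the default IPD and screen distance below are illustrative placeholders, not the actual calibration values:

```python
def vergence_distance(gaze_sep_m, ipd_m=0.063, screen_dist_m=0.6):
    """Estimate fixation distance from the separation of the two eyes'
    gaze points on the calibration screen, by similar triangles.

    A fixation at distance d puts the gaze points
    s = IPD * (1 - d_screen / d) apart, so d = d_screen * IPD / (IPD - s).
    s = 0 recovers the screen distance (IPD error is irrelevant there);
    s -> IPD sends the estimate to infinity (screen distance irrelevant).
    """
    if gaze_sep_m >= ipd_m:
        return float("inf")  # gaze rays (nearly) parallel: fixation at infinity
    return screen_dist_m * ipd_m / (ipd_m - gaze_sep_m)
```

Note the limiting cases match the bullets above: zero separation gives exactly the screen distance, and separation equal to the IPD gives infinity regardless of screen distance.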
The RealSense is rated for 0.5 m–3.5 m indoors; we push it closer with some success
The depth map has holes, which we need to inpaint
Navier-Stokes inpainting from OpenCV, which is naïve
That erroneous bright spot in the far distance area was fine in this case, but the inpainting can exaggerate the error
It’s worse at near distances
Combine with depth…
but it’s noisy, so clean it up a bit with an exponential filter toward new values; accept a new depth only if it’s within 0.5 D after filtering
But depth is also slower (30 Hz vs 120 Hz), so we want to keep using vergence
Use vergence + error, with exponential moving average error based on depth
Still jittery, so limit it to jumps of ≥0.25 D
This is viewed through the old prototype, so there’s still coma
Distances of the monitors are labeled, letter sizes normalized across monitors
Monitor brightness for all of them was roughly 190 cd/m^2 (nits) for a white pixel
Resolutions for all monitors support at least 20/10 acuity (60 cpd, 0.5 arcmin feature size, -0.3 logMAR, 3 lines below 20/20)
Sloan letters to match the ETDRS chart format, chosen randomly without repetition from the set of 10 standard letters
Each line decreases by 0.1 logMAR (factor of 1.26), 20/20 acuity defined as 0 logMAR
Stop when 3+ letters wrong in a single line
Score = last line attempted – 0.02 * number of letters wrong over the trial
Repeated 3 times per distance, per correction
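The scoring rule above transcribes directly to code; this is a literal transcription, so the sign convention (whether lower or higher scores read as better) follows the definition in the notes:

```python
def trial_score(last_line_logmar, letters_wrong):
    """Score = last line attempted - 0.02 * letters wrong over the trial.

    Lines step by 0.1 logMAR (a factor of 10**0.1, about 1.26, in letter
    size), so each of the 5 letters on a line is worth 0.02 logMAR.
    """
    return last_line_logmar - 0.02 * letters_wrong
```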
Autofocals are better everywhere except at the far distance for progressives wearers
But they don’t have an obvious downward trend
Our near focus may suffer from some vergence-accommodation conflict (VAC) because we assume zero accommodation
Magnification by the lens power may have improved intermediate
14 progressives aged 55–70, 5 monovision aged 52–67, error bars are standard error
Monitors at distances of 6m and 40cm (0.167 and 2.5 D), side by side at eye level
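The diopter labels follow from the reciprocal relation between focal distance and optical power:

```python
def diopters(distance_m):
    # Optical power needed to focus at a given distance: D = 1 / d (d in meters).
    return 1.0 / distance_m
```

So the 6 m monitor sits at about 0.167 D and the 40 cm monitor at 2.5 D, matching the labels above.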
Sloan font alphabet, size 0.1 logMAR = 1 line bigger than 20/20
Letters the same with 50% probability
2 minutes timed
Progressive users faster and more accurate, monovision only faster
14 progressives aged 55–70, 4 monovision aged 52–67, error bars are standard error
37 people aged 50–66, about 1/3 of those that responded had reading glasses, most of the rest had progressives
*, **, and *** are 0.05, 0.01, and 0.001 significance levels