by pg on 8/27/23, 12:51 AM with 209 comments
by Componica on 8/27/23, 4:38 AM
by AndrewKemendo on 8/27/23, 1:44 AM
They have generally struggled to find funding for their eye tracking focused work, and have recently had to pivot away from the really exciting but hard to fund stuff into PTSD screening (which is important too).
I can connect you with the founder if desired via the email in my bio
by justinlloyd on 8/27/23, 4:21 AM
The problem is not the eye-tracking; it is reasonably easy to build robust systems that do that, even with custom hardware, under all sorts of lighting conditions. The hard part is the UX, if you are trying to build something that isn't hampered by current UI paradigms.
Rapid typing and menus of custom actions with just eye movement, though fatiguing, shouldn't be hard to solve; you can then render the output however you want: text, text-to-speech, commands issued to a machine, etc. Making a usable user interface to do anything else, that's where the rubber hits the road.
@pg, which software is your friend using? If it is anything like what I've looked into in the past, it's over-priced accessibility crap with a UI straight out of the 1990s.
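The "menus of custom actions with just eye movement" idea usually reduces to dwell selection: an action fires when gaze holds still long enough. A minimal sketch, assuming gaze samples arrive as (timestamp_ms, x, y) tuples; the radius and dwell thresholds here are made-up illustrative values:

```python
import math

def detect_dwell_clicks(samples, radius=30.0, dwell_ms=800):
    """Given gaze samples as (timestamp_ms, x, y) tuples, emit a 'click' at
    any point where the gaze stays within `radius` pixels for `dwell_ms`.
    A minimal sketch; real systems add smoothing and cooldown periods."""
    clicks = []
    anchor = None   # (t, x, y) where the current dwell started
    fired = False
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > radius:
            anchor = (t, x, y)   # gaze moved: restart the dwell timer
            fired = False
        elif not fired and t - anchor[0] >= dwell_ms:
            clicks.append((anchor[1], anchor[2]))  # held long enough: click
            fired = True
    return clicks
```

The `fired` flag prevents a single long dwell from emitting repeated clicks, which is one of the fatigue-related details the comment above alludes to.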
by dewarrn1 on 8/27/23, 1:43 AM
by blackguardx on 8/27/23, 1:46 AM
by sam_goody on 8/27/23, 11:48 AM
Either:
- Part of the whole world-coin thing was privately trying to get the data to help his friend
- He doesn't want to say "looking to develop eye tracking tech for my world-coin scam", since most devs won't touch that thing. Conveniently found a "friend" with ALS.
Saying, on behalf of a friend, that he doesn't believe PG.
by musesum on 8/27/23, 2:58 AM
Hopefully, whoever takes this on doesn't take the standard Accessibility approach, which is adding an extra layer of complexity on an existing UI.
A good friend, Gordon Fuller, found out he was going blind. So, he co-founded one of the first VR startups in the 90's. Why? For wayfinding.
What we came up with is the concept of Universal design: start over from first principles. Seeing Gordon use an Accessible UI is painful to watch; it takes three times as many steps to navigate and confirm. So what is the factor? 0.3X?
Imagine if we could refactor all apps with an LLM, and then couple it with an auto-complete menu. Within that menu is a personal history of all your past traversals.
What would be the result? A 10X? Would my sister in a wheelchair be able to use it? Would love to find out!
by kubi07 on 8/27/23, 2:20 PM
He is using a Tobii eye tracker. There is a video he made about it. It's in Turkish, but you can see how he uses it.
https://www.youtube.com/watch?v=pzSXyiWN_uw
Here is an article about him in English: https://www.dexerto.com/entertainment/twitch-streamer-with-a...
by fartjetpack on 8/27/23, 1:42 AM
1. https://www.nasa.gov/centers/ames/news/releases/2004/subvoca...
by fastball on 8/27/23, 1:25 AM
by readyplayernull on 8/27/23, 1:49 AM
When I worked for one of the big game engines I got contacted by the makers of the tech that Stephen Hawking used to communicate, which includes an eye tracker:
https://www.businessinsider.com/an-eye-tracking-interface-he...
by lostdog on 8/27/23, 10:38 AM
By my math, 5k people in the US are diagnosed per year, and if your keyboard costs $1k, then your ARR is $5m, and maybe the company valuation is $50m. Numerically, this is pretty far from the goal of a typical YC company.
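Spelled out, the back-of-envelope math works out as stated (all figures are the commenter's assumptions above, not verified data):

```python
# Back-of-envelope market sizing, using the figures from the comment above.
new_patients_per_year = 5_000   # assumed US diagnoses per year
price_per_keyboard = 1_000      # assumed price in dollars
arr = new_patients_per_year * price_per_keyboard
valuation = arr * 10            # rough 10x-revenue multiple
assert arr == 5_000_000 and valuation == 50_000_000
```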
I hate to be so cold-hearted about the calculations, but I've had a few friends get really passionate about assistive tech, and then get crushed by the financial realities. Just from the comments, you can see how many startups went either the military route or got acquired into VR programs.
The worst I've seen, btw, is trying to build a better powered wheelchair. All the tech is out there to make powered wheelchairs less bulky and more functional, but the costs of getting it approved for health insurance to pay the price, combined with any possible risk of them falling over, combined with the tiny market you are addressing makes it nearly impossible to develop and ship an improvement. I do hope that we reach a tipping point in the near future where a new wheelchair makes sense to build, because something more nimble would be a big improvement to people's lives.
by caspar on 8/28/23, 9:22 AM
IMO Talon wins* for that by supporting voice recognition and mouth noises (think lip popping), which are less fatiguing than one-eye blinks for common actions like clicking. The creator is active here sometimes.
(* An alternative is to roll your own sort of thing with https://github.com/dictation-toolbox/dragonfly and other tools as I did, but it's a lot more effort)
by splatcollision on 8/27/23, 1:42 AM
https://en.wikipedia.org/wiki/EyeWriter
https://github.com/eyewriter/eyewriter
by arketyp on 8/27/23, 5:17 AM
by bacon_waffle on 8/27/23, 8:56 AM
https://dasher.acecentre.net/ , source at https://github.com/dasher-project/dasher
---
I remember seeing a program years ago, which used the mouse cursor in a really neat way to enter text. Seems like it would be far better than clicking on keys of a virtual keyboard, but I can't remember the name of this program nor seem to find it...
Will probably get some of this wrong, but just in case it rings a bell (or someone wants to reinvent it - wouldn't be hard):
The interface felt like side-scrolling through a map of characters. Moving left and right controlled speed through the characters; for instance, moving to the left extent would backspace, and moving further to the right would enter more characters per unit time.
Up and down would select the next character; in my memory these are presented as a stack of map-coloured boxes where each box held a letter (or group of letters?), say 'a' to 'z' top-to-bottom, plus a few punctuation marks. The height of each box was proportional to the likelihood that letter would be the next you'd want, so the most likely targets would be easier and quicker to navigate to. Navigating into a box for a character would "type" it. IIRC, at any instant you could see a couple of levels of letters, so if you had entered c-o, maybe 'o' and 'u' would be particularly large, and inside the 'o' box you might see that 'l' and 'k' are bigger, so it's easy to write "cool" or "cook".
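The scheme described above (almost certainly Dasher, linked earlier in the thread) hinges on one step: giving each candidate character a box whose height is proportional to its probability. A minimal sketch of that layout step, assuming a probability table from some hypothetical language model:

```python
def box_layout(next_char_probs, screen_height=1000):
    """Divide the vertical screen space among candidate next characters,
    giving each a box whose height is proportional to its probability --
    the core layout idea behind Dasher-style text entry. `next_char_probs`
    maps characters to probabilities from any language model."""
    total = sum(next_char_probs.values())
    layout, top = [], 0.0
    for ch, p in sorted(next_char_probs.items(), key=lambda kv: -kv[1]):
        height = screen_height * p / total
        layout.append((ch, top, top + height))  # (char, box top, box bottom)
        top += height
    return layout

# After typing "co", a toy model might rank likely continuations like this:
boxes = box_layout({"o": 0.5, "u": 0.3, "n": 0.15, "w": 0.05})
```

Real Dasher nests these boxes recursively, so steering into a box reveals the distribution for the following letter, exactly as described above.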
(I do hardware+firmware in Rust and regularly reference Richard Hamming, Fred Brooks, Donald Norman, Tufte. Could be up for a change)
by maccard on 8/27/23, 10:26 AM
by gwurldz on 8/27/23, 1:32 AM
https://thinksmartbox.com/products/eye-gaze/
I once interviewed at this company. I unfortunately didn't get the job, but I was very impressed nonetheless.
by modeless on 8/27/23, 4:27 AM
[1] https://techcrunch.com/2016/10/24/google-buys-eyefluence-eye...
by sprocket on 8/27/23, 3:49 AM
https://www.optikey.org/
which ran on a < $1k computer. At the time, the other options were much more expensive ($10-15k+), which were sadly out of our budget.
by sailplease on 8/27/23, 2:58 AM
by yyyk on 8/27/23, 10:35 AM
by acyou on 8/27/23, 5:53 AM
I would also recommend Jean-Dominique Bauby's Le Scaphandre et le Papillon (The Diving Bell and the Butterfly) to anyone interested in this topic. In that book, typing via eye movements was done in a slow, inefficient manner. The question one should ask is: was his UI paced at exactly the right speed? I was and still am deeply moved by what the author was able to accomplish and convey. I am unsure whether a faster keyboard would have made a meaningful, positive difference to the author's quality of life in that particular case. I'll need to give that book another read with that question in mind.
Happily, I expect eye tracking to find fascinating, novel and unexpected applications. As others have stated, UI/UX design is an interesting part of this puzzle. For example, you could ask an LLM to output short branches of text and have the writer look at the words he wants to convey. It definitely blurs the line between reading and writing. I find writing to be a tactile exercise, so I think emotional state comes into play; that's what I'm interested in. Can you literally read someone's eyes and tell what they are thinking?
by ZeroCool2u on 8/27/23, 3:38 AM
by zefzefzef on 8/27/23, 1:44 PM
For inspiration, check out the Vocal Eyes Becker Communication System: https://jasonbecker.com/archive/eye_communication.html
A system invented for ALS patient Jason Becker by his dad: https://www.youtube.com/watch?v=wGFDWTC8B8g
Also already mentioned in here, EyeWriter ( https://en.wikipedia.org/wiki/EyeWriter ) and Dasher ( https://en.wikipedia.org/wiki/Dasher_(software) ) are two interesting projects to look into.
by anupamchugh on 9/5/23, 9:39 PM
by Schwolop on 8/28/23, 2:50 AM
@pg - If your friend has not tried adding a mouse-click via something they can activate other than eye-gaze, this would be worth a shot. We have a lot of MND patients who use our combination to great success. If they can twitch an eyebrow, wiggle a toe or a finger, or even flex their abdomen, we can put electrodes there and give them a way forward.
Also, my contact details are in my profile. I'd be happy to put you in touch with our CEO and I'm confident that offers of funding would be of interest. The company is listed on the Australian stock exchange, but could likely go much further with a direct injection of capital to bolster the engineering team.
Cheers, Tom
by mhb on 8/27/23, 1:40 PM
by mercurialsolo on 8/27/23, 5:49 AM
by claytongulick on 8/27/23, 12:08 PM
We built a prototype for roadside sobriety checks. The idea was to take race/subjectivity out of the equation in these traffic stops.
We modified an oculus quest and added IR LEDs and cameras with small PI zero's. I wrote software for the quest that gave instructions and had a series of examinations where you'd follow a 3D ball, the screen would brighten and darken, and several others while I looked for eye jerks (saccades) and pupil dilation. The officer was able to see your pupil (enlarged) on a laptop in real time and we'd mark suspicious times on the video timeline for review.
It was an interesting combination of video decoding, OpenCV and real-time streams with a pretty slick UI. The Pi Zero was easily capable of handling real-time video stream decoding, OpenCV and Node. Where I ran into performance problems, I wrote Node-to-C++ bindings.
We did it all on something silly like a 50k budget. Neat project.
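For the saccade detection mentioned above, a common approach is velocity thresholding (the I-VT scheme): samples whose gaze velocity exceeds a cutoff are labeled saccades, the rest fixations. A sketch of that general technique; this is not necessarily the commenter's actual pipeline, and the 300 deg/s threshold is just a typical textbook value:

```python
import math

def find_saccades(gaze, velocity_threshold=300.0):
    """Split gaze samples into saccade spans by velocity thresholding (I-VT).
    `gaze` is a list of (t_seconds, x_deg, y_deg) samples; any inter-sample
    angular velocity above `velocity_threshold` (deg/s) counts as a saccade.
    Returns a list of (start_time, end_time) spans."""
    saccade_spans, start = [], None
    for i in range(1, len(gaze)):
        (t0, x0, y0), (t1, x1, y1) = gaze[i - 1], gaze[i]
        v = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)  # degrees per second
        if v > velocity_threshold:
            start = t0 if start is None else start  # saccade begins/continues
        elif start is not None:
            saccade_spans.append((start, t0))       # saccade just ended
            start = None
    return saccade_spans
```

In a sobriety-check context like the one described, the interesting signals would be saccade irregularities (e.g. nystagmus) rather than the saccades themselves, but the segmentation step looks the same.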
by archo on 8/27/23, 9:51 AM
by dimask on 8/27/23, 11:53 AM
With my group we are developing an eyetracker for studying developmental and clinical populations, which typically present challenges to conventional eyetrackers. It is a spin-off from our academic work with infants, and we already have a study almost done that uses it. We are still in the very early phase in terms of where this may lead us, but we are interested in contexts where eyetracking may, for various reasons, be more challenging.
by MasterYoda on 8/27/23, 8:59 AM
by imranq on 8/27/23, 1:15 PM
I'm guessing a combination of projection mapping, built-in lighting, and some crowdsourced data will get accuracy to very usable levels.
by user3939382 on 8/27/23, 2:21 AM
Or how about a UI that automatically adapts to your eye movement and access patterns to minimize the amount of eye movement required to complete your most common tasks by rearranging the UI elements.
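The adaptive-layout idea above can be sketched as an assignment problem: give the most-used elements the screen slots with the smallest eye-travel cost. A toy illustration, with made-up element names and distances:

```python
def arrange_by_usage(elements, slot_distances):
    """Assign UI elements to screen slots so the most-used elements get the
    slots closest to the user's resting gaze point, minimizing expected eye
    travel. `elements` maps element name -> access count; `slot_distances`
    gives eye-travel cost per slot index. Greedy matching is optimal here:
    by the rearrangement inequality, pairing the highest frequencies with
    the lowest costs minimizes the weighted sum."""
    by_usage = sorted(elements, key=elements.get, reverse=True)
    by_cost = sorted(range(len(slot_distances)), key=slot_distances.__getitem__)
    placement = dict(zip(by_usage, by_cost))
    expected = sum(elements[n] * slot_distances[s] for n, s in placement.items())
    return placement, expected
```

A real implementation would also have to damp the rearranging (e.g. only re-layout between sessions), since UI elements that move under your gaze defeat the muscle memory the optimization is trying to exploit.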
by ricardobayes on 8/27/23, 10:29 AM
by PBnFlash on 8/27/23, 1:32 AM
by quietthrow on 8/27/23, 4:26 AM
by TheGuyWhoCodes on 8/27/23, 5:22 PM
by peter_retief on 8/27/23, 8:06 AM
by amelius on 8/27/23, 9:42 AM
by tmalsburg2 on 8/27/23, 6:37 AM
by DoingIsLearning on 8/27/23, 5:56 PM
Seems like all the solutions out there are some flavour or variation of this.
by 6stringmerc on 8/27/23, 6:07 AM
by joshm93 on 8/27/23, 6:52 AM
It would be great to hear from paul about how his friend uses the keyboard and what kind of tasks he’d love to do but can’t with current solutions.
It seems like a throughput problem to me. How can you type quickly using only your eyes?
Have people explored using small phonetic alphabets or Morse code style encoding?
Once I got TensorFlow working, I'd start mapping different kinds of UX. Throughput is king.
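On the Morse-code question above: one toy encoding maps two eye gestures to dot and dash, with a long dwell as the letter separator. A sketch, where 'L', 'R', and 'HOLD' are hypothetical gesture tokens from some upstream detector:

```python
# Standard International Morse code for a-z.
MORSE = {"a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".",
         "f": "..-.", "g": "--.", "h": "....", "i": "..", "j": ".---",
         "k": "-.-", "l": ".-..", "m": "--", "n": "-.", "o": "---",
         "p": ".--.", "q": "--.-", "r": ".-.", "s": "...", "t": "-",
         "u": "..-", "v": "...-", "w": ".--", "x": "-..-", "y": "-.--",
         "z": "--.."}
DECODE = {code: ch for ch, code in MORSE.items()}

def gestures_to_text(gestures):
    """Decode a list of eye-gesture tokens into text: 'L' (glance left) is a
    dot, 'R' (glance right) is a dash, and 'HOLD' (a long dwell) ends the
    current letter. A toy sketch of the encoding idea, not a real product."""
    text, current = [], ""
    for g in gestures:
        if g == "HOLD":
            text.append(DECODE.get(current, "?"))  # '?' for unknown codes
            current = ""
        else:
            current += "." if g == "L" else "-"
    return "".join(text)
```

Plain Morse is frequency-optimized for English ('e' and 't' are shortest), which is exactly the throughput property the comment is after; a Huffman code built from the user's own typing history would do even better.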
by jjbcb on 8/27/23, 3:32 AM
by frakkingcylons on 8/27/23, 1:51 AM
by kken on 8/27/23, 2:19 AM
Both Apple and Facebook acquired eye-tracking companies to kickstart their own development.
Here are some top lists:
https://imotions.com/blog/insights/trend/top-eye-tracking-ha... https://valentinazezelj.medium.com/top-10-eye-tracking-compa...
It's also an active research field; this is one of the bigger conferences: https://etra.acm.org/2023/
by quickthrower2 on 8/27/23, 3:09 AM
by jacquesm on 8/27/23, 4:10 AM
by hardwaregeek on 8/27/23, 1:33 AM
by rsync on 8/27/23, 6:48 AM
Go to hell.
Unless, of course, you'd like to commit the funded work to the free commons, unencumbered by patents and copyrights, and free to use by any entity for any purpose.
That's what we'd do for ALS, right ?
by aaron695 on 8/27/23, 1:27 AM
It'd be good to know what rate we need to beat and some other metrics.
by atleastoptimal on 8/27/23, 4:04 AM
by dennis_jeeves1 on 8/27/23, 12:48 PM
As far as I know, mainstream medicine isn't close to solving _any_ chronic condition, except managing it.
by turnsout on 8/27/23, 1:38 AM
by soligern on 8/27/23, 1:24 AM
by morkalork on 8/27/23, 1:46 AM
Whilst it plays an unskippable and unblockable ad (thanks weiapi!)
by FlamingMoe on 8/27/23, 1:39 AM
by adamnemecek on 8/27/23, 2:28 AM
Interested in hearing more?