Yeah, but you have to build a business that makes money first. It's still a step forward. Every time someone solves a problem with voice, it becomes easier for the next person.
I have no real knowledge, but I imagine that gesture interfaces would be massively worse than a keyboard for the visually impaired because of the lack of tactile feedback.
Not necessarily. Successfully thinking about this stuff requires stripping interactions all the way back to their intent. The intent of either a key press or a gesture is to change some state, and the intent of the feedback is to communicate both the change and the new state.
The tactile feedback from a key press is at best only a small proportion of that. You get to know you mushed a key, but not what (if anything) it changed. Assistive tech using keyboard input will generally fill in that gap with voice feedback. Gesture input with haptic and audio/voice feedback should be usable in some scenarios. I'm currently working on a couple of iOS apps in the fitness space that have this as a key part of their UI.
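To make that concrete, here's a rough sketch of the pattern (not from those apps, just an illustration with a made-up mute toggle): a gesture changes some state, haptics confirm the gesture registered, and a VoiceOver announcement carries both what changed and the new state.

```swift
import UIKit

// Hypothetical example: a swipe flips a "muted" state, then feedback is layered
// so a visually impaired user gets both confirmation and the resulting state.
final class MuteToggleViewController: UIViewController {
    private var isMuted = false
    private let haptics = UIImpactFeedbackGenerator(style: .medium)

    override func viewDidLoad() {
        super.viewDidLoad()
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(toggleMute))
        swipe.direction = .up
        view.addGestureRecognizer(swipe)
    }

    @objc private func toggleMute() {
        isMuted.toggle()                 // the state change (the intent of the gesture)
        haptics.impactOccurred()         // tactile: "your gesture registered"
        // audio/voice: communicates the change and the new state
        UIAccessibility.post(notification: .announcement,
                             argument: isMuted ? "Muted" : "Unmuted")
    }
}
```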
You can provide audible feedback when you move your hands. Many gestures don't provide any tactile feedback: you move your hands or fingers and something happens on the screen. Google's new chip could probably help interpret a series of gestures:
Some in the hacker community are working on voice solutions to help program with off the shelf programs, for example:
https://github.com/melling/ErgonomicNotes/blob/master/progra...
Someone could probably use the same solution to help the visually impaired.
Throw in the new motion-gesture technology, like Intel's: http://www.intel.com/content/www/us/en/architecture-and-tech...
And soon we'll all have more natural ways to interact with computers.