I also find that current voice interfaces are terrible. I only use voice commands to set timers or play music.

That said, voice is the original social interface for humans. We learn to speak much earlier than we learn to read/write.

Better voice UIs will be built to make new workflows with AI feel natural. I'm thinking along the lines of a conversational companion, like the "Jarvis" AI in the Iron Man movies.

That doesn't exist right now, but it seems inevitable that real-time, voice-directed AI agent interfaces will be perfected in the coming years. Companies like [Eleven Labs](https://elevenlabs.io/) are already working on the building blocks.
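For what it's worth, the core loop is conceptually simple: capture audio, transcribe it, feed the transcript to a language model, and speak the reply back. Here's a minimal sketch of that pipeline; `transcribe`, `generate_reply`, and `synthesize` are hypothetical placeholders standing in for whatever STT, LLM, and TTS services you'd plug in, not any vendor's actual API:

```python
# Conceptual sketch of a real-time voice agent loop.
# transcribe(), generate_reply(), and synthesize() are hypothetical
# placeholders for speech-to-text, language model, and text-to-speech calls.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder: send audio to a speech-to-text model, return text."""
    return "user said something"

def generate_reply(transcript: str, history: list[str]) -> str:
    """Placeholder: ask a language model for the next conversational turn."""
    return f"reply to: {transcript}"

def synthesize(text: str) -> bytes:
    """Placeholder: turn the reply text back into audio."""
    return text.encode()

def conversation_loop(get_audio, play_audio):
    history: list[str] = []
    while True:
        audio = get_audio()          # capture a chunk from the microphone
        if audio is None:            # caller signals end of conversation
            break
        text = transcribe(audio)
        history.append(text)
        reply = generate_reply(text, history)
        history.append(reply)
        play_audio(synthesize(reply))  # speak the reply back to the user
```

The hard parts are everything this sketch glosses over: low latency, barge-in (interrupting the agent mid-sentence), and turn-taking that feels natural rather than walkie-talkie-like.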



Young people don't even speak to each other on the phone anymore.


For a voice-directed interface to be perfected, speech recognition would need to be perfected first. What makes that development seem inevitable?



