I'm a co-founder of Snips. We are building a private-by-design Voice Assistant platform that lets companies and makers build a smart assistant running 100% on-device.
Why do we do this? We want assistants of the future to respect user privacy, and not stream your voice or your most important questions to servers that you do not control.
With Snips, 100% of the processing runs on the device (the platform ships for Raspberry Pi; more platforms are available for enterprise customers, contact@snips.ai).
We are using state-of-the-art deep-learning Automated Speech Recognition and Natural Language Understanding to let makers plug a voice assistant into their device in 5 minutes.
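To give an idea of what "plugging in" looks like: the platform publishes parsed intents over MQTT on the device, so a maker can react to a voice command with a few lines of Python. The topic and payload field names below are an illustrative sketch rather than the exact schema; see our docs for the precise payloads.

    import json
    import paho.mqtt.client as mqtt

    # When the on-device ASR + NLU pipeline recognizes an intent,
    # it is published as JSON on an MQTT topic (names here are illustrative).
    def on_connect(client, userdata, flags, rc):
        client.subscribe("hermes/intent/#")  # listen for all detected intents

    def on_message(client, userdata, msg):
        payload = json.loads(msg.payload)
        intent = payload["intent"]["intentName"]
        slots = {s["slotName"]: s["rawValue"] for s in payload.get("slots", [])}
        print("Got intent", intent, "with slots", slots)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883)  # the MQTT broker runs on the device itself
    client.loop_forever()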
We actually benchmarked our NLU and outperform most of the commercially available NLU providers: https://medium.com/snips-ai/benchmarking-natural-language-un...