Speech Input API for Android

People love their mobile phones because they can stay in touch wherever they are. That means not just talking, but e-mailing, texting, microblogging, and so on. So, in addition to search by voice and voice shortcuts like "Navigate to", we included a voice-enabled keyboard in Android 2.1, which makes it even easier to stay connected. Now you can dictate your message instead of typing it. Just tap the new microphone button on the keyboard, and you can speak just about anywhere you would normally type.

We believe speech can fundamentally change the mobile experience. We would like to invite every Android application developer to consider integrating speech input capabilities via the Android SDK.

One of my favorite apps in the Market that integrates speech input is Handcent SMS, because you can dictate a reply to any SMS with a quick tap on the SMS popup window.

Speech input integrated into Handcent SMS

The Android SDK makes it easy to integrate speech input directly into your own application—just copy and paste from this sample application to get started. Android is an open platform, so your application can potentially make use of any speech recognition service on the device that's registered to receive a RecognizerIntent. Google's Voice Search application, which is pre-installed on many Android devices, responds to a RecognizerIntent by displaying the "Speak now" dialog and streaming audio to Google's servers—the same servers used when a user taps the microphone button on the search widget or the voice-enabled keyboard. (You can check if Voice Search is installed in Settings ➝ Applications ➝ Manage applications.)
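If you don't have the sample handy, here is a minimal sketch of the same pattern (the activity name, request code, and method names below are just placeholders): fire a RecognizerIntent, then read the candidate transcriptions back in onActivityResult. The sketch also checks up front that a recognition activity is installed, so you can hide your microphone button when none is available.

import android.app.Activity;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import java.util.ArrayList;
import java.util.List;

public class SpeechInputActivity extends Activity {

    private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Check whether a speech recognition activity is present before
        // offering a microphone button in your UI.
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.isEmpty()) {
            // No recognizer installed; hide or disable the speech input UI.
        }
    }

    // Fire an intent to start the speech recognition activity.
    private void startVoiceRecognition() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
    }

    // Handle the results returned by the recognition activity.
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
            // The recognizer returns a list of candidate transcriptions.
            ArrayList<String> matches = data.getStringArrayListExtra(
                    RecognizerIntent.EXTRA_RESULTS);
            // Use the first match, or let the user pick from the list.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}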

One important tip: for speech input to be as accurate as possible, it helps to have an idea of what words are likely to be spoken. A message like "Mom, I'm writing you this message with my voice!" might be appropriate for an email or SMS, while you're more likely to say something like "weather in Mountain View" when using Google Search. You can give your users the best experience by requesting the appropriate language model: "free_form" for dictation, or "web_search" for shorter, search-like phrases. We developed the "free_form" model to improve dictation accuracy for the voice keyboard on the Nexus One, while the "web_search" model is used when users search by voice.
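In code, these correspond to the two language model constants on RecognizerIntent. Assuming an intent built as in the sketch above, you would set one or the other:

// Dictation-style input (e-mail, SMS replies):
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);

// Shorter, search-like phrases:
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);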

Google's servers currently support English, Mandarin Chinese, and Japanese. The "web_search" model is available in all three languages, while "free_form" has so far been optimized primarily for English. As we work to support more models in more languages and to improve the accuracy of the speech recognition technology we use in our products, Android developers who integrate speech capabilities directly into their applications can reap the benefits as well.
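If your application targets a particular locale, you can also hint the recognizer with the optional EXTRA_LANGUAGE extra. A small sketch, again assuming the intent from the example above; the recognizer may fall back to the device default if the requested language isn't supported:

// Optional hint that the user will speak Japanese rather than the device default.
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "ja-JP");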
