Level: Intermediate to Advanced
Your computer can recognize your voice and transcribe dictated speech, but can it truly understand the meaning of what you are saying? Can it analyze your intent and respond accordingly? You don't need a PhD in artificial intelligence to integrate speech and natural language understanding into your projects. Microsoft Cognitive Services (aka "Project Oxford") provides a portfolio of cloud-based REST APIs and SDKs, powered by machine learning, that let you write applications that understand the content within the rapidly growing body of multimedia data. The Cognitive Services APIs will help you understand and interact with audio, text, images, and video.
This session will start with an overview of the available services for speech recognition and speech synthesis. Then, through live demos, you'll explore how to leverage the Language Understanding Intelligent Service (LUIS), which lets you determine intent, detect entities in user utterances, and improve language understanding models so they work more effectively with your users' data. Come learn how your apps can tap into the same active learning services that power the brain of Cortana, and get started writing smart applications that understand what your users are saying.
You will learn:
- The role of speech and natural language understanding in software development projects
- About Microsoft Project Oxford and how to get started with the Language Understanding Intelligent Service
- How to build simple applications that combine speech and natural language understanding and tie them into data-driven scenarios
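As a taste of what the demos cover, the snippet below sketches how a LUIS prediction request is built: the v2.0 endpoint is queried over HTTP with an app ID, a subscription key, and the user's utterance, and it returns JSON describing the top-scoring intent and any detected entities. This is a minimal sketch, not session material; the region, app ID, and key shown are placeholders you would replace with your own values from the Azure portal.

```python
from urllib.parse import urlencode

def build_luis_url(region: str, app_id: str, subscription_key: str, query: str) -> str:
    """Construct a LUIS v2.0 prediction endpoint URL for a simple GET request.

    region, app_id, and subscription_key are placeholders for values
    issued when you create a LUIS application.
    """
    base = f"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
    params = urlencode({"subscription-key": subscription_key, "q": query})
    return f"{base}?{params}"

# A GET to this URL returns JSON shaped roughly like:
# {"query": "...",
#  "topScoringIntent": {"intent": "BookFlight", "score": 0.97},
#  "entities": [...]}
url = build_luis_url("westus", "my-app-id", "my-key", "book a flight to Seattle")
```

From there, your application branches on the returned intent and feeds the extracted entities into whatever data-driven scenario you are building.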