

Instructor: In this lesson, we're going to build a teleprompter web application using the Web Speech API. Specifically, we'll be using the SpeechRecognition interface. The intent is that we'll be able to type up some text in the text area above, click "Start," and then it'll recognize your voice and automatically scroll for you so you don't have to do it manually. On the left, we have the beginnings of a web application, which is handling text input, button interactions, and managing the progress of the teleprompter. However, the teleprompter itself isn't written yet.

The teleprompter component is just a shell of what we'll be building. It currently accepts an array of words, a Boolean indicating whether it should be listening, the current progress, and a callback function for when the progress changes. Right now, we're just looping over the words and displaying them in spans. Eventually, we want to wire up the SpeechRecognition API and auto-scroll the content as the user speaks.

The first thing we're going to do is create an instance of SpeechRecognition. To do this, we'll create a recog reference using useRef(null), and then add a React useEffect. This one we'll execute just once, after the initial render of the component. Here, we will reference either the real SpeechRecognition constructor or the vendor-prefixed webkitSpeechRecognition version. Depending on which one exists, we'll create a new instance of it and assign it to the current property of our recog reference. Then, we're going to set it to continuous mode so that results are continuously captured once we start, instead of capturing just one result, and we'll also set interimResults to true. This will give us access to quicker results, although they aren't final and may not be as accurate as if we had waited a bit longer.

Now, we'll add another React useEffect so that we can toggle whether or not our SpeechRecognition instance should start or stop listening. We'll add our listening prop to this useEffect's dependency array. If we are listening, then we'll tell our recog reference to start. Otherwise, we will stop the recognition instance. At this point, starting or stopping doesn't do anything visible yet.

Next, in one more useEffect, we'll grab the recog reference and add an event listener. We'll listen to the result event and handle it with the handleResult callback, which we haven't defined yet, but will very soon. We'll also want to clean up after ourselves, so we'll return a function that removes the event listener for the result event bound to the handleResult function. Now, let's define the handleResult function that we've wired up to our recog reference.
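Stripped of React, the setup and toggle steps above boil down to a few lines. This is a sketch, not the lesson's actual code: since SpeechRecognition only exists in the browser, it falls back to a small stand-in class (our own, not part of the Web Speech API) so it can run anywhere.

```javascript
// Minimal stand-in so the sketch runs outside a browser; a real
// SpeechRecognition instance has the same properties and methods.
class FakeSpeechRecognition {
  constructor() {
    this.continuous = false;
    this.interimResults = false;
    this.started = false;
  }
  start() { this.started = true; }
  stop() { this.started = false; }
}

// Reference the real constructor or the webkit-prefixed version,
// whichever exists; fall back to the stand-in outside a browser.
const SpeechRecognitionImpl =
  (typeof window !== 'undefined' &&
    (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
  FakeSpeechRecognition;

// Mirrors the first useEffect: create the instance once and configure it.
const recog = new SpeechRecognitionImpl();
recog.continuous = true;      // keep capturing results after the first one
recog.interimResults = true;  // surface quick, non-final results too

// Mirrors the second useEffect: start or stop based on the listening flag.
function toggle(listening) {
  if (listening) {
    recog.start();
  } else {
    recog.stop();
  }
}
```

In the component itself, `toggle`'s body would live inside a `useEffect` with `[listening]` as its dependency array, so it reruns whenever the prop flips.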

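The listener-plus-cleanup pattern can be sketched the same way. A real SpeechRecognition instance is an EventTarget, so a plain EventTarget stands in for it here; the shape of `event.results` (a list of results, each a list of alternatives carrying a transcript) follows the Web Speech API, but the simulated event at the end is ours, just to exercise the wiring.

```javascript
// Stand-in recognizer: real SpeechRecognition instances support the same
// addEventListener/removeEventListener calls.
const recognizer = new EventTarget();

let lastTranscript = '';

// The handleResult callback wired to the result event.
function handleResult(event) {
  lastTranscript = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join(' ');
}

// Mirrors the effect body: attach the listener, and return a cleanup
// function that removes it, exactly what React expects an effect to return.
function subscribe() {
  recognizer.addEventListener('result', handleResult);
  return () => recognizer.removeEventListener('result', handleResult);
}

const cleanup = subscribe();

// Simulate a recognition result roughly the way the browser delivers one.
const fakeEvent = new Event('result');
fakeEvent.results = [[{ transcript: 'hello world' }]];
recognizer.dispatchEvent(fakeEvent);
console.log(lastTranscript); // "hello world"

cleanup(); // after this, further result events are ignored
```

Returning the remover from `subscribe` is the same contract as a `useEffect` cleanup: React calls it when the component unmounts (or before the effect reruns), so the listener never outlives the component.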