SightConsec: Use automatic speech recognition to improve your consecutive interpreting

This month, I interviewed Lilia Pino Blouin - a rockstar member of the techforword insiders community and New York-based freelance interpreter who works from Italian, French and Spanish into English and English into Italian - about her experience with a new hybrid interpreting technique.

In SightConsec, the interpreter uses automatic speech recognition to transcribe speech in real time, then interprets from the transcription when the speaker pauses.

Read on to learn more about Lilia’s experience with this exciting new technique - or check out the video of our interview!


How did you get interested in speech recognition? 

Thanks to you, obviously! I've always thought speech recognition was interesting, but didn't know how to apply it. Then I saw your automatic speech recognition course and figured it would be a great primer. 

You've experimented with different speech recognition tools. Which ones do you like most? 

For English, Otter.ai works shockingly well. I first tried it during the pandemic, for former New York Governor Cuomo’s daily press conferences. I figured: “I’m home and have nothing else to do. Why don't I use this for practice?”

So I used the method from your course and set up Loopback on my MacBook Pro to transcribe what Cuomo was saying. I started practicing “SightConsec” and thought it was brilliant. 

It transcribed everything. All the numbers. All the names. Complicated names of institutions. Even personal names. 

Of course, it's not perfect, and it took a lot of getting used to, but it was awesome. You didn't have to train the tool at all! I just connected it up - which takes about two minutes - then hit a button and it started transcribing. It was perfect. I was so impressed.

What do you use for your other languages? 

Otter.ai is limited to English. For my other languages, I tried several other options and settled on Web Captioner. You can set the language and even the variety, like French from Quebec or France. It works really well with my languages - Italian, French, and Spanish - with no training. Every now and then a word isn’t accurate - let's say 5% of the time. But normally I would interpret with nothing. So having 95% versus nothing is great!

When did you first try SightConsec? 

The first time I tried it was at a book prize ceremony on Zoom. The event was very information-dense. I had my whole setup ready because I’d been using it for practice, so I just gave it a try! I had Web Captioner on and started taking notes, but also sight translated the transcription. It was brilliant. I did it for the entire hour-long event, and it really saved the day. 

Did you take notes? 

Yes. I didn't really trust the technology. I knew it was reliable because I'd done it a million times in practice. But you never know. So I also took notes. At first, I thought I’d rely on my notes and just look at the text if I wasn’t sure about something. But my notes were so much less accurate than the transcription! 

I kept reverting to the transcription. When I realized the tech was working, I started taking very minimal notes - just so I could say something if there was an epic fail. 

I still take notes to be on the safe side. But at the end of the day, my notes are a little useless. 

What gear do you need? 

Just Loopback and my computer. I have the meeting on my MacBook and the transcription and reference material on a 27” external monitor. But you could watch the meeting and transcription on your computer screen. Of course, you use a headset - and a camera, because sometimes you’ll be on video.

So, you just route the sound into Web Captioner or Otter.ai and see the transcription in another window?

Yes. Other than that, everything is normal. Setting this up is the easiest thing in the world. I’m really grateful for your help. Your course is very hands-on and includes a step-by-step guide that I followed to the letter. When it started to work, it seemed magical. I thought, “This can’t be real.” It was so accurate. 

What about confidentiality?

I only use speech recognition for public meetings. Confidentiality is vital, so we can’t use cloud-based transcription. 

Do you use two different tools when you work between two languages?

Yes. I use Otter.ai and Web Captioner. Toggling them on and off would be too much work. When I speak Italian, Otter transcribes mumbo jumbo, and vice versa. But that actually helps me find the right spot quickly. The only challenge is scrolling. With long speeches, you have to scroll back to the beginning, which can take a while. But flipping through your notes in a physical meeting poses the same challenge. 

When you're listening to the speech, do you use a glossary or research anything you hear?

At some events, I had my glossary open in another window. When I heard a technical term, I knew I could stop taking notes and look up the term.

What I really love about this technique is that it gives me time to think about solutions. In regular consecutive, I’m so stressed about having complete notes that I write more than I should. I'm terrified I won’t remember things. When something is complicated, I flag it in my notes and think I’ll have to come up with a really creative solution. But with SightConsec, I can already think about my translation. I feel I do a much better job. 

How do you discuss speech recognition with clients? 

If the event is public and open to anyone, I don’t necessarily feel the need to discuss this with the client. Clients hire me to interpret, and that’s what I do. 

For example, when I use pen and paper, I don’t talk about my note-taking method with clients. Similarly, I don’t think I need to specify that I’m using speech recognition. I’m not hiding it from them. I just don’t think how I work makes any difference to them - unless they feel more relaxed about being able to speak for long periods.

At the end of the day, what matters is the quality of my interpreting - not the tech I use.

Any final comments?

If it were up to me, I would do this all the time. I have a very high-profile assignment next week, and I would give anything to be able to transcribe it!

I hope many colleagues go out and try SightConsec - and then report back and tell us how it's working for them! 


This post originally appeared in the November 2021 edition of the Tool Box Journal.
