Voice-activated technologies like Siri and Cortana facilitate communication between humans and artificial intelligence. It's a simple pattern: the human speaks, the machine processes that speech, and the machine answers.
Language Technologies Institute Professor Alan Black recently spoke to CIO.com about how this current model of artificial intelligence interaction results in stilted communication. He believes great improvements can be made to make the interactions between humans and machines more efficient, natural and personalized.
In the article, Black noted that most technologies employing synthetic voices typically use speech recorded by humans reading direct sentences, which makes machine responses one-sided and unengaging for users. As an alternative approach, Black and his students have been experimenting with voices recorded in human dialog to incorporate the variance in human responses. The idea is that machines would then be able to pick up on the more natural communication cues present in real-life interactions.
"On a higher level, it's a matter of being personalized," Black said. "That can be creepy, but it can also be appropriate. … It's close to what humans expect and makes it easier to have this conversation."
Read more about Black’s thoughts on improving communication with AI on CIO.com.