Investigating an Interactive Computational Framework for Nonverbal Interpersonal Skills Training
Interpersonal skills (IS) such as public speaking are essential assets in many professions and in everyday life. Current IS training involves a combination of textbook learning, practice with expert human role-players, and critiquing videos of trainees' performances. This project will create the computational foundations to automatically assess IS expertise and help people improve their skills using an interactive virtual human (VH) framework. Our computational framework not only builds on recent advances in computer vision, speech signal processing, and VHs, but also defines new probabilistic algorithms that jointly analyze speech, visual gestures, and interpersonal models to create an engaging virtual audience.

Our work pursues three main goals: (1) to develop a probabilistic computational model that learns temporal and multimodal dependencies and infers a speaker's public speaking performance from acoustic and visual nonverbal behavior; (2) to understand the design challenges of developing a VH audience that is interactive, believable, and capable of providing meaningful feedback; and (3) to understand the virtual audience's impact on a speaker's performance and learning outcomes through a comparative study of alternative feedback and training approaches.
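To make goal (1) concrete, the sketch below illustrates one simple way acoustic and visual nonverbal features could be fused into a single performance score. The feature names, weights, and the logistic late-fusion scheme are purely hypothetical placeholders for illustration; the project's actual model learns temporal and multimodal dependencies rather than using fixed weights.

```python
import math

# Hypothetical features and weights, for illustration only.
ACOUSTIC_WEIGHTS = {"pitch_variation": 1.2, "pause_ratio": -0.8}
VISUAL_WEIGHTS = {"gesture_rate": 0.9, "gaze_to_audience": 1.1}
BIAS = -1.0


def score_speaker(acoustic, visual):
    """Return a performance score in (0, 1) by late fusion of
    normalized acoustic and visual nonverbal features."""
    z = BIAS
    for name, weight in ACOUSTIC_WEIGHTS.items():
        z += weight * acoustic[name]
    for name, weight in VISUAL_WEIGHTS.items():
        z += weight * visual[name]
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)


if __name__ == "__main__":
    score = score_speaker(
        {"pitch_variation": 0.7, "pause_ratio": 0.2},
        {"gesture_rate": 0.5, "gaze_to_audience": 0.8},
    )
    print(f"estimated performance score: {score:.3f}")
```

In a learned model the weights would come from training data, and the temporal structure of behavior (e.g., when pauses and gestures co-occur) would be modeled explicitly rather than collapsed into static features as done here.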