[MSc level] Virtual assistants are becoming commonplace, but their ability to gesture naturally and appropriately remains a major research challenge (see the image, which shows a typical assistant displayed from the neck up, without gesturing arms and hands). In this project, you will develop a method that automatically learns structure from speech-aligned motion capture sequences of gestures, head movements, and eye-gaze. The system should take motion capture data of a speaker as input and learn a structure from the motion, which will then be used to generate appropriate non-verbal behaviours for an animated virtual assistant; see the Gesticulator system paper for an example of this kind of approach. Knowledge of computer graphics and machine learning is required.
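
To make the intended pipeline concrete, below is a minimal sketch of a speech-to-motion model in the spirit of this project: speech features in, motion-capture features out. Everything here is an illustrative assumption, not the Gesticulator architecture or a required design; the class name, feature dimensions, and network layout are placeholders chosen for clarity.

```python
# Minimal sketch: regress motion-capture frames from aligned speech features.
# All names, dimensions, and the architecture are illustrative assumptions,
# not the published Gesticulator design or the project's required method.
import torch
import torch.nn as nn

class SpeechToMotion(nn.Module):
    """Maps a window of speech features to a sequence of motion features
    (e.g., joint rotations for arms and head, plus a gaze direction)."""
    def __init__(self, speech_dim=26, motion_dim=45, hidden_dim=256):
        super().__init__()
        # Recurrent encoder over the speech feature sequence.
        self.encoder = nn.GRU(speech_dim, hidden_dim, batch_first=True)
        # Per-frame decoder from the hidden state to motion features.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, motion_dim),
        )

    def forward(self, speech):           # speech: (batch, frames, speech_dim)
        encoded, _ = self.encoder(speech)
        return self.decoder(encoded)     # -> (batch, frames, motion_dim)

# One training step on stand-in data (real data would be MFCC-style speech
# features aligned frame-by-frame with motion capture of the same speaker).
model = SpeechToMotion()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

speech = torch.randn(8, 120, 26)   # placeholder speech features
motion = torch.randn(8, 120, 45)   # placeholder aligned motion capture

loss = loss_fn(model(speech), motion)
loss.backward()
optimiser.step()
```

A simple frame-wise regression like this tends to produce averaged, damped motion, which is exactly why the project asks for learning *structure* from the motion (e.g., probabilistic or autoregressive models) rather than a direct mapping alone.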