Gesture recognition using the ChaLearn dataset
Project using Microsoft’s Platform for Situated Intelligence
Phone-based scanning allows users to take pictures from multiple viewpoints and extract a usable 3D model (e.g. 3D Creator). 3D computer vision and ARKit can be used to develop 2D segmentation of objects for cut-and-paste applications. Logitech are interested in the use of LiDAR to capture a 3D model and …
Working on a computer all day has become the norm for most professionals and students, but there is no clear distinction between “busy work” and productive work. This study aims to use computer vision to detect and report the user’s state of flow. The output of …
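One possible shape for the reporting step can be sketched as follows, assuming a per-frame engagement score has already been produced by some vision model; the score, threshold, smoothing window, and minimum length here are all illustrative, not part of the proposal:

```python
import numpy as np

def flow_segments(engagement, threshold=0.7, min_len=30):
    """Return (start, end) frame-index pairs where a smoothed engagement
    signal stays above `threshold` for at least `min_len` frames.
    `engagement` is a 1-D array of hypothetical per-frame scores in [0, 1]."""
    # Moving-average smoothing to suppress frame-level noise.
    kernel = np.ones(5) / 5
    smooth = np.convolve(np.asarray(engagement, dtype=float), kernel, mode="same")
    above = smooth > threshold
    segments, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        segments.append((start, len(above)))
    return segments
```

Segments could then be summarised per day or per task to separate sustained focus from fragmented “busy work”.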
FYP/MASTERS: This project focuses on detecting and analysing non-verbal communication by means of gestures. It will investigate learning gestures from a gesture database such as https://20bn.com/datasets/jester. The human skeleton can be extracted using a system such as Google’s BlazePose https://google.github.io/mediapipe/solutions/pose#web. The research will be to process the temporal skeleton information …
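Assuming BlazePose landmarks have already been extracted per frame (BlazePose outputs 33 landmarks; only (x, y) is used here), one simple baseline for the temporal step is nearest-neighbour matching on velocity features. The feature choice and gesture labels below are illustrative, not part of the Jester annotation scheme:

```python
import numpy as np

def temporal_features(skeleton_seq):
    """skeleton_seq: (frames, 33, 2) array of BlazePose (x, y) landmarks.
    Returns a fixed-length vector of mean per-coordinate speeds."""
    seq = np.asarray(skeleton_seq, dtype=float)
    velocity = np.diff(seq, axis=0)               # frame-to-frame motion
    return np.abs(velocity).mean(axis=0).ravel()  # shape (66,)

def classify(seq, templates):
    """Nearest-neighbour gesture label from labelled template sequences.
    `templates` maps label -> (frames, 33, 2) example sequence."""
    f = temporal_features(seq)
    return min(templates,
               key=lambda label: np.linalg.norm(f - temporal_features(templates[label])))
```

A real system would replace this baseline with a temporal model (e.g. an RNN or temporal CNN) trained on the gesture database, but the feature-over-time framing is the same.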
FYP LEVEL: Visual guidance of an autonomous drone, using a Tello drone https://www.ryzerobotics.com/tello. This project will use computer vision on the drone to identify waypoints and complete a circuit. The project will use https://gobot.io/. This project would suit a final-year student with an interest in Go and robotics.
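The flight loop itself would be written in Go against gobot’s Tello driver, but the waypoint-detection idea can be sketched language-agnostically. Below is a Python sketch on a raw RGB frame; the red-marker assumption, colour thresholds, and command names are purely illustrative:

```python
import numpy as np

def find_waypoint(frame, min_pixels=50):
    """Locate a bright-red marker in an RGB frame (H, W, 3, uint8).
    Returns the (row, col) centroid of the red blob, or None."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (r > 150) & (g < 80) & (b < 80)   # crude red threshold
    ys, xs = np.nonzero(mask)
    if len(xs) < min_pixels:
        return None
    return ys.mean(), xs.mean()

def steering_command(frame):
    """Map marker position to a steering command for the control loop."""
    hit = find_waypoint(frame)
    if hit is None:
        return "search"
    _, col = hit
    width = frame.shape[1]
    if col < width * 0.4:
        return "left"
    if col > width * 0.6:
        return "right"
    return "forward"
```

The Go side would poll frames from the Tello video stream and translate each command into the driver’s flight calls.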
Masters Level: This project concerns the recognition of non-verbal communication in two-person interaction, e.g. via video conference. It will use the SEWA dataset https://db.sewaproject.eu/ as a reference and learn to analyse interpersonal communication. This project would suit someone interested in non-verbal signalling in human-to-human and human-to-avatar conversations.