Sign Language Recognition (SLR) is a field of Computational Linguistics that sits at the intersection of Computer Vision and Natural Language Processing, aiming to extract salient linguistic features from visual data of sign language users. Recent SLR research has centred on Machine Learning approaches; however, these methods typically rely on large-scale datasets, which are uncommon in the field [1,2]. The imbalanced nature of natural language data poses a further challenge, with many key vocabulary items occurring rarely, if at all. This project will explore various techniques to address this imbalance, with the aim of improving recognition accuracy on minority signs.
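One common family of techniques for addressing such imbalance is cost-sensitive learning, where the training loss weights each class inversely to its frequency so that rare signs contribute more per example. The sketch below is illustrative only (the project's actual methods are not specified here); the function name and toy label distribution are assumptions for the example.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-class weights inversely proportional to class frequency.

    Rare classes receive larger weights, so a weighted loss
    (e.g. weighted cross-entropy) penalises their errors more.
    Illustrative sketch; not the project's confirmed method.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)  # guard against unseen classes
    # Normalised so that the weighted class counts sum to len(labels)
    return counts.sum() / (num_classes * counts)

# Toy skewed distribution: class 0 dominates, class 2 is rare
labels = np.array([0] * 90 + [1] * 9 + [2] * 1)
weights = inverse_frequency_weights(labels, num_classes=3)
```

Here the rarest class receives the largest weight, which can then be passed to a framework's loss function (e.g. the `weight` argument of PyTorch's `CrossEntropyLoss`). Oversampling minority classes or generating synthetic examples, as in [2], are alternative strategies.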
[1] LeCun, Y., Bengio, Y. and Hinton, G., 2015. Deep learning. Nature, 521(7553), pp.436-444.
[2] Fowley, F. and Ventresque, A., 2021, July. Sign Language Fingerspelling Recognition using Synthetic Data. In AICS (pp. 84-95).
[3] Rastgoo, R., Kiani, K. and Escalera, S., 2021. Sign language recognition: A deep survey. Expert Systems with Applications, 164, p.113794.