[TAKEN] Sign Language Recognition

Sign Language Recognition (SLR) is a field of Computational Linguistics that sits at the intersection of Computer Vision and Natural Language Processing, aiming to extract salient linguistic features from visual data of sign language users. Recent SLR research has centered on Machine Learning approaches; however, these methods typically rely on large-scale datasets, which are uncommon in the field [1,2]. While Transfer Learning offers a viable means of addressing this limitation, many existing pre-training strategies are not well aligned with the specific demands of SLR [3]. This project will investigate the trade-off between scale and domain-specificity in pre-training, exploring which offers more benefit to downstream SLR performance.
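To make the envisaged comparison concrete, the sketch below (PyTorch/torchvision, assuming a frame-level sign classification setup) fine-tunes the same ResNet-18 backbone initialised either from large-scale generic pre-training (ImageNet-1k) or from a hypothetical domain-specific checkpoint. The checkpoint path, class count, and dummy batch are illustrative placeholders, not details taken from the cited work.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_SIGN_CLASSES = 50  # placeholder: size of a small isolated-sign vocabulary


def build_backbone(domain_specific: bool = False) -> nn.Module:
    """Return a ResNet-18 whose classifier head is replaced for SLR."""
    if domain_specific:
        # Hypothetical checkpoint from pre-training on sign language data.
        model = models.resnet18(weights=None)
        state = torch.load("slr_pretrained_resnet18.pt", map_location="cpu")
        model.load_state_dict(state, strict=False)
    else:
        # Large-scale but generic pre-training (ImageNet-1k).
        model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    # Swap the final layer for the downstream sign vocabulary.
    model.fc = nn.Linear(model.fc.in_features, NUM_SIGN_CLASSES)
    return model


def fine_tune_step(model: nn.Module, frames: torch.Tensor,
                   labels: torch.Tensor,
                   optimiser: torch.optim.Optimizer) -> float:
    """One supervised fine-tuning step on a batch of sign frames."""
    optimiser.zero_grad()
    loss = nn.functional.cross_entropy(model(frames), labels)
    loss.backward()
    optimiser.step()
    return loss.item()


if __name__ == "__main__":
    model = build_backbone(domain_specific=False)
    optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # Dummy batch standing in for real SLR data: 8 RGB frames, 224x224.
    frames = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_SIGN_CLASSES, (8,))
    print(f"loss: {fine_tune_step(model, frames, labels, optimiser):.4f}")
```

Under this framing, the project's question is which initialisation (generic, large-scale vs. domain-specific, small-scale) yields better downstream performance after identical fine-tuning.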

[1] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

[2] Fowley, F., & Ventresque, A. (2021, July). Sign Language Fingerspelling Recognition using Synthetic Data. In AICS (pp. 84-95).

[3] Holmes, R., Rushe, E., De Coster, M., Bonnaerens, M., Satoh, S. I., Sugimoto, A., & Ventresque, A. (2023). From scarcity to understanding: Transfer learning for the extremely low resource Irish Sign Language. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2008-2017).