Healthcare for the deaf and hard of hearing can be significantly improved by a Sign Language Recognition (SLR) system capable of recognizing medical terms. This paper is an effort to build such a system. SLR can be modelled as a video classification problem, and we have used a vision-based deep learning approach. The dynamic nature of sign language poses an additional challenge to classification. This work explores the use of OpenPose with a convolutional neural network (CNN) to recognize video sequences of 20 dynamic medical terms in Indian Sign Language (ISL). All the videos were recorded with a common smartphone camera. The results show that, even without recurrent neural networks (RNNs) to model the temporal information, the combined system works well, achieving 85% accuracy. This eliminates the need for specialized cameras with depth sensors or for wearables. All training and testing was done on CPU in the Google Colab environment.
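The abstract does not spell out how pose keypoints are fed to the CNN without an RNN, but one common way to do this is to stack the per-frame OpenPose keypoints along the time axis into a fixed-size, image-like array that a 2-D CNN can classify directly. The sketch below illustrates that idea under stated assumptions: the `BODY_25` model's 25 keypoints per frame, a hypothetical fixed length of 40 frames, and random data standing in for real OpenPose output. It is not the paper's exact pipeline.

```python
import numpy as np

N_JOINTS = 25        # OpenPose BODY_25 model returns 25 body keypoints per frame
TARGET_FRAMES = 40   # fixed temporal length for the CNN input (assumed value)

def keypoints_to_cnn_input(frames):
    """Turn a variable-length clip of per-frame keypoints (each an
    (N_JOINTS, 2) array of x/y pixel coords) into a fixed-size
    (TARGET_FRAMES, N_JOINTS * 2) array that a CNN can treat as a
    single-channel image: rows = time, columns = joint coordinates."""
    seq = np.stack(frames).astype(np.float32)            # (T, 25, 2)
    # Normalize per clip so the sign, not the signer's position or
    # distance from the camera, drives classification.
    seq -= seq.mean(axis=(0, 1), keepdims=True)
    scale = np.abs(seq).max()
    if scale > 0:
        seq /= scale
    # Resample the time axis to a fixed length by linear interpolation,
    # so clips of different durations map to the same input shape.
    T = seq.shape[0]
    src = np.linspace(0.0, T - 1, TARGET_FRAMES)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None, None]
    resampled = (1 - w) * seq[lo] + w * seq[hi]          # (TARGET_FRAMES, 25, 2)
    return resampled.reshape(TARGET_FRAMES, N_JOINTS * 2)

# Example: a 30-frame clip of random keypoints stands in for OpenPose output.
clip = [np.random.rand(N_JOINTS, 2) * 640 for _ in range(30)]
x = keypoints_to_cnn_input(clip)
print(x.shape)  # (40, 50)
```

Because the temporal axis is baked into one of the spatial dimensions of the input, an ordinary 2-D CNN can learn motion patterns across rows, which is consistent with the abstract's claim that no RNN is needed for the temporal information.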

Author(s): Aditya U, Smriti Jha