5 September 2019

Research pick: Recognising the signs - "Real time sign language recognition using depth sensor"

Machine recognition of sign languages is on the cards thanks to work by a team in India using the Microsoft Kinect, a motion-sensing game controller. Writing in the International Journal of Computational Vision and Robotics, the team explains how their system uses just 11 of the 20 joints tracked by the Kinect and extracts novel features from each frame based on distances, angles, and velocities of the upper-body joints. The team reports that the algorithm recognizes 35 gestures from Indian Sign Language in real time with almost 90 percent accuracy.
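
The paper's abstract does not spell out which joints or which specific distances, angles, and velocities are used, so the short Python sketch below is purely illustrative of this style of per-frame feature extraction: the joint names, the chosen hand-to-head distances, the elbow angles, and the finite-difference velocities are assumptions, not the authors' actual feature set.

```python
import numpy as np

# Hypothetical subset of upper-body joints; the paper uses 11 of the
# Kinect's 20 tracked joints, but the exact selection is assumed here.
JOINTS = ["head", "shoulder_center", "shoulder_left", "shoulder_right",
          "elbow_left", "elbow_right", "wrist_left", "wrist_right",
          "hand_left", "hand_right", "spine"]

def angle(a, b, c):
    """Angle (radians) at joint b formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def frame_features(joints, prev_joints=None, dt=1.0 / 30):
    """Per-frame features: a few joint distances, elbow angles, and hand speeds.

    joints: dict mapping joint name -> np.ndarray of shape (3,), in metres.
    prev_joints: same structure for the previous frame, or None for the first frame.
    """
    feats = []
    # Distances between the hands and from each hand to the head.
    feats.append(np.linalg.norm(joints["hand_left"] - joints["hand_right"]))
    feats.append(np.linalg.norm(joints["hand_left"] - joints["head"]))
    feats.append(np.linalg.norm(joints["hand_right"] - joints["head"]))
    # Elbow angles (shoulder-elbow-wrist) for both arms.
    feats.append(angle(joints["shoulder_left"], joints["elbow_left"], joints["wrist_left"]))
    feats.append(angle(joints["shoulder_right"], joints["elbow_right"], joints["wrist_right"]))
    # Hand velocities approximated by finite differences between frames.
    if prev_joints is not None:
        for name in ("hand_left", "hand_right"):
            feats.append(np.linalg.norm(joints[name] - prev_joints[name]) / dt)
    else:
        feats.extend([0.0, 0.0])
    return np.array(feats)
```

In a pipeline of this kind, such per-frame feature vectors would then be fed to a gesture classifier; the sketch above only illustrates the feature-extraction step.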

Jayesh Gangrade and Jyoti Bharti of the Maulana Azad National Institute of Technology, in Bhopal, India, explain that many people with a significant hearing deficit rely on gesture-based communication, in which hand movements and orientation, together with facial expressions, are used dynamically to convey meaning in as nuanced and expressive a way as any spoken language.

The development of technology that is itself competent in sign languages would give those who rely on this form of communication a new way to interact with machines and computers. Cameras, digital gloves, and other devices have been investigated previously in this context. However, an inexpensive video game controller such as the Kinect, which can track body movements, could accelerate progress considerably.

The team points out that their approach requires no markers or special clothing fitted with tracking objects, as was necessary in some of the earlier efforts in this area. “We have experimented with a minimal set of features to distinguish between the given signs with practical accuracies,” the team writes. They are now experimenting with the Kinect v2 sensor, which is more accurate and could push the research closer to its ultimate goal.

Gangrade, J. and Bharti, J. (2019) ‘Real time sign language recognition using depth sensor’, Int. J. Computational Vision and Robotics, Vol. 9, No. 4, pp.329–339.
