Hello everybody! I've been really busy lately ￣へ￣, but today I'm bringing you another project!
This time it's not about AI, but a project I did a few years ago. The project is pretty interesting and I hope you can benefit from looking at it. (PS: it actually won first prize in an innovation competition in China, so I'm just going to reuse the presentation I made for it, for simplicity. I won't admit it's really because I'm lazy ¯\_(ツ)_/¯)
Anyway, as you can tell from the title, this is a sign language recognizer. Without further ado, let's get to the main point.
The goal: to help deaf people better communicate with others.
As modern technology advances, people's lives have become more and more convenient. However, some groups of people cannot enjoy the convenience brought by technology, and the deaf are one of them. So I hoped to use today's technology to help deaf people communicate with people who do not understand sign language. Once, I heard my teacher talking about a type of infrared sensor that can precisely capture the position of both hands. I thought: what if I could analyze and compare that data to translate it into commonly used language? So I decided to try to create such software. After countless rounds of debugging and testing, I was able to achieve some exciting results.
How it works:
The infrared sensor used in this project is made by a company called Leap Motion; it tracks hand gestures as mathematical models that developers can build on.
Leap Motion provides an API (application programming interface), so I decided to build this project on top of it: obtain the hand data through the API, then analyze it further and translate it into commonly used language. I know this cannot recognize every gesture, but I believe that with more optimization and debugging I can make it even better!
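To give a feel for the overall flow, here's a minimal sketch in plain Python. `get_frame` is a placeholder standing in for the sensor read (the real project pulls frames through the Leap Motion API; all names here are my own, not the SDK's):

```python
# Minimal sketch of the recognition pipeline: sensor frame -> hand
# features -> label. get_frame() is a stub; the real project reads
# frames from the Leap Motion API instead.

def get_frame():
    # Synthetic frame: a list of per-finger direction vectors
    # (unit 3D vectors). Here, an open hand with all five fingers
    # pointing "up".
    return [(0.0, 1.0, 0.0)] * 5

def count_extended(fingers, palm_up=(0.0, 1.0, 0.0), threshold=0.5):
    """Count fingers whose direction roughly agrees with the palm's
    'up' direction -- a crude stand-in for extension detection."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(1 for f in fingers if dot(f, palm_up) > threshold)

def classify(frame):
    """Map a frame to a label. The real recognizer also compares
    knuckle angles, but the extended-finger count alone already
    separates some of the digits."""
    return str(count_extended(frame))

print(classify(get_frame()))  # open hand -> "5"
```

A real frame would of course carry much more than finger directions (palm position, bone joints, velocities), but the count-then-compare structure is the same.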
Results that it can achieve for now:
The recognizer can handle these gestures for now (more can be added in the future):
1. Numbers 0-9 (10 digits; primarily analyzing the number of extended fingers and the angles between knuckles)
2. Part of the alphabet (primarily using the angles between fingers and the angles between knuckles)
3. Some common expressions (primarily analyzing the vectors or coordinates of the hands)
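The "angles between knuckles" feature boils down to standard vector math: take the direction vectors of two adjacent bones and compute the angle between them with the dot product. A minimal sketch, with no Leap Motion dependency (the vectors are made up for illustration):

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3D vectors, from the dot
    product: cos(theta) = (u . v) / (|u| |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    c = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(c))

# A straight finger: consecutive bones point the same way.
print(angle_between((0, 1, 0), (0, 1, 0)))  # -> 0.0
# A bent knuckle: bones at a right angle.
print(angle_between((0, 1, 0), (1, 0, 0)))  # -> 90.0
```

Comparing these angles against per-gesture thresholds is what separates, say, a curled fist (large knuckle angles) from an open palm (near-zero angles).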
Advantages:
1. It is relatively cheap; it only requires one infrared sensor.
2. It is easy to set up and operate, and does not require complicated steps.
3. The infrared sensor can detect subtle details and recognize the hand's state quickly, causing little to no lag.
4. The project is relatively easy to extend and upgrade; improving its accuracy and adding new gestures is straightforward.
Limitations:
1. Leap Motion cannot capture all of the hand's information at some special angles, which affects precision.
2. The recognition process requires a lot of computation, so CPU usage is a bit high.
3. Some complicated gestures require sophisticated 3D math, which raises the bar for the developer.
4. Dynamic gestures require a lot of code and analysis.
THAT IS A LOT OF TEXT to read O(∩_∩)O HAHA~
Now for the easier, visual part.
Excerpt from the source code: