How do we get the gestures we need?
Generally, in order to detect a hand and extract gestures from a dynamic background, algorithms such as color segmentation and various averaging ('mean'-style) algorithms are necessary. As far as I have learned so far, I found three methods that can effectively extract target gestures from background images: color segmentation, motion detection, and more advanced sensing such as infrared detection combined with motion or color cues. This step is quite easy to achieve because the usable algorithms and hand characteristics are limited: color, motion, and what else? Nothing! Probably temperature, but that is not so universal.
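As a rough sketch of the color-segmentation idea, the snippet below thresholds pixels that fall inside an assumed skin-tone range in HSV space. It is a minimal illustration using NumPy only; the range values are assumptions for the example, and a real system would typically apply `cv2.inRange` to a camera frame converted with `cv2.cvtColor`:

```python
import numpy as np

def skin_mask(hsv_image, lower=(0, 40, 60), upper=(25, 180, 255)):
    """Return a boolean mask of pixels whose H, S, V values all fall
    inside the given range (an assumed, illustrative skin-tone range)."""
    lo = np.array(lower)
    hi = np.array(upper)
    return np.all((hsv_image >= lo) & (hsv_image <= hi), axis=-1)

# Tiny synthetic 2x2 HSV "image": one skin-like pixel, three background pixels.
img = np.array([[[10, 100, 200], [120, 200, 200]],
                [[90,  10,  30], [200,  50, 100]]])
mask = skin_mask(img)
print(mask.astype(int))
# Only the top-left pixel falls inside the assumed skin range.
```

In practice the binary mask would then be cleaned up with morphological operations (erosion/dilation) before extracting the hand contour.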
What characteristics could be used for recognizing gestures?
So, once the gestures are extracted from the complicated background images, the next step is to recognize them. Recognition is not as simple as detection, where you mostly just do some filtering work. It needs several distinctive characteristics from the detected images. Most of the time, these characteristics include: 1) angle – detecting the rotation; 2) size – measuring the area; 3) peaks and valleys – specific to the open hand (fingertips and the gaps between fingers); 4) moving speed and direction.
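Some of these characteristics can be computed directly from the binary hand mask with image moments. The sketch below (an illustration, not a full recognizer; the function name and the moment-based angle formula are my own choices here) derives the area, the centroid, and an orientation angle from second-order central moments:

```python
import numpy as np

def gesture_features(mask):
    """Compute simple characteristics from a binary hand mask:
    area (pixel count), centroid, and orientation angle derived
    from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cx, cy = xs.mean(), ys.mean()
    # Central second moments of the foreground pixels.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Principal-axis orientation, in radians.
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return area, (cx, cy), angle

# A diagonal blob: its principal axis should come out at 45 degrees.
mask = np.eye(5, dtype=bool)
area, centroid, angle = gesture_features(mask)
print(area, centroid, np.degrees(angle))  # 5 pixels, centered at (2, 2), 45 deg
```

Peaks and valleys (fingertips and finger gaps) need more work, typically a contour plus convexity-defect analysis, and moving speed/direction requires tracking the centroid across frames.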