Detecting hand gestures using Haarcascades training

[Images: hand gestures; a recognised hand gesture]

Haarcascades training (haartraining) seems to be a quick way to achieve accurate hand gesture detection and recognition. The face and body detection examples included in OpenCV's installation folders (\opencv\data\haarcascades\) demonstrate how quickly the trained haarcascade files do the job. More information about how to train haarcascade files can be found at sonots.com.
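
For orientation, here is a minimal sketch of how such a trained cascade file is used with the OpenCV 2.1-era C API (an illustrative sketch, not the exact contents of the downloadable .cpp; the image path, window name, and detection parameters are placeholders):

    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    int main()
    {
        // load the trained hand cascade and a test image (paths are placeholders)
        CvHaarClassifierCascade* cascade =
            (CvHaarClassifierCascade*)cvLoad("1256617233-1-haarcascade_hand.xml", 0, 0, 0);
        CvMemStorage* storage = cvCreateMemStorage(0);
        IplImage* img = cvLoadImage("hand.jpg", 1);
        if (!cascade || !storage || !img) { fprintf(stderr, "loading failed\n"); return -1; }

        // run the classifier over the image, as in the OpenCV face detection samples
        CvSeq* hands = cvHaarDetectObjects(img, cascade, storage,
                                           1.2, 2, CV_HAAR_DO_CANNY_PRUNING,
                                           cvSize(40, 40));

        // draw a rectangle around every detection
        for (int i = 0; i < (hands ? hands->total : 0); i++)
        {
            CvRect* r = (CvRect*)cvGetSeqElem(hands, i);
            cvRectangle(img, cvPoint(r->x, r->y),
                        cvPoint(r->x + r->width, r->y + r->height),
                        CV_RGB(0, 255, 0), 2, 8, 0);
        }

        cvNamedWindow("hand detection", 1);
        cvShowImage("hand detection", img);
        cvWaitKey(0);

        cvReleaseImage(&img);
        cvReleaseMemStorage(&storage);
        cvReleaseHaarClassifierCascade(&cascade);
        return 0;
    }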

Many face-image databases are available for haarcascades training, e.g. http://www.face-rec.org/databases/, but far fewer hand-image sets have been contributed; at the least, they are much harder to find on the internet than face images. Many blog readers have also asked me for a trained haarcascades file to support their hand detection or recognition projects.

So here is an example of using the trained haarcascades file 1256617233-1-haarcascade_hand.xml for hand gesture detection.

The source code and cascade files can be downloaded from the links below (or from the download page):

http://download.andol.me/1256617233-1-haarcascade_hand.xml
http://download.andol.me/1256617233-2-haarcascade-hand.xml
http://download.andol.me/haarcascades-based%20hand%20detection.cpp

Author: Andol Li

An HCI researcher, a digital media lecturer, an information product designer, and a Python/PHP/Java coder.

40 Comments on "Detecting hand gestures using Haarcascades training"

  1. Hi Andol, may I ask what software you used to run this program? I used Microsoft Visual Studio 2010 Express and tried to build it as C++, but it failed to build with many errors… thanks ^^

    • @Jerry
      the source code is written in C++, the application development environment is Visual Studio 2008, and the OpenCV version is 2.1. Please be aware of the differences in how the various OpenCV versions are set up in Visual Studio, as they use different header files and linked libraries.
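
      For reference, a sketch of what the OpenCV 2.1 / Visual Studio 2008 setup roughly looks like (the library names below are an assumption based on the standard OpenCV 2.1 Windows build; check them against your own installation):

          // headers used by the OpenCV 2.1 C API samples
          #include <cv.h>
          #include <highgui.h>

          // typical VS2008 linker inputs for OpenCV 2.1 (release build):
          //   cv210.lib  cxcore210.lib  highgui210.lib
          // later OpenCV 2.x releases rename these libraries (opencv_core*.lib,
          // opencv_objdetect*.lib, opencv_highgui*.lib), which is one reason a
          // project configured for one version fails to build with another.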

      • @andol I am also doing my project in C# with Visual Studio 2010. I have downloaded your 1256617233-1-haarcascade_hand.xml, but it did not work in my project. Would you help me find out why? For the other detections, like face or nose, my source code works… this is part of my project:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Runtime.InteropServices;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using Emgu.CV;
        using Emgu.CV.Structure;
        using Emgu.CV.UI;
        using Emgu.CV.GPU;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                public DialogResult res { get; set; }
                public Bitmap image { get; set; }
                public String path;

                private void Form1_Load(object sender, EventArgs e)
                {
                }

                private void pictureBox1_Click(object sender, EventArgs e)
                {
                }

                private void button1_Click_1(object sender, EventArgs e)
                {
                    DialogResult res = openFileDialog1.ShowDialog();
                    if (res == DialogResult.OK)
                    {
                        path = openFileDialog1.FileName;
                        image = (Bitmap)Bitmap.FromFile(openFileDialog1.FileName);
                        pictureBox1.Image = image;
                    }
                }

                private void detect_Click(object sender, EventArgs e)
                {
                    Image<Bgr, Byte> image = new Image<Bgr, Byte>(path); //Read the file as an 8-bit Bgr image
                    Stopwatch watch;

                    //cascade files for each detection mode
                    String bodyFileName = "full.xml";
                    String ubodyFileName = "haarcascade_mcs_upperbody.xml";
                    String lbodyFileName = "low.xml";
                    String faceFileName = "haarcascade_frontalface_alt.xml";
                    String eyeFileName = "haarcascade_eye.xml";
                    String noseFileName = "haarcascade_mcs_nose.xml";
                    String mouthFileName = "mouth.xml";
                    String rearFileName = "haarcascade_mcs_leftear.xml";
                    String learFileName = "haarcascade_mcs_rightear.xml";
                    String handFileName = "hand.xml";

                    //pick the cascade that matches the combo-box selection;
                    //the detection code itself is identical for every case
                    String cascadeFileName;
                    switch (cmbpilih.Text)
                    {
                        case "Full Body Detection": cascadeFileName = bodyFileName; break;
                        case "Upper Body Detection": cascadeFileName = ubodyFileName; break;
                        case "Lower Body Detection": cascadeFileName = lbodyFileName; break;
                        case "Face Detection": cascadeFileName = faceFileName; break;
                        case "Eye Detection": cascadeFileName = eyeFileName; break;
                        case "Nose Detection": cascadeFileName = noseFileName; break;
                        case "Mouth Detection": cascadeFileName = mouthFileName; break;
                        case "Right Ear Detection": cascadeFileName = rearFileName; break;
                        case "Left Ear Detection": cascadeFileName = learFileName; break;
                        case "Hand Detection": cascadeFileName = handFileName; break;
                        default: return;
                    }

                    if (GpuInvoke.HasCuda)
                    {
                        using (GpuCascadeClassifier cascade = new GpuCascadeClassifier(cascadeFileName))
                        {
                            watch = Stopwatch.StartNew();
                            using (GpuImage<Bgr, Byte> gpuImage = new GpuImage<Bgr, Byte>(image))
                            using (GpuImage<Gray, Byte> gpuGray = gpuImage.Convert<Gray, Byte>())
                            {
                                Rectangle[] regions = cascade.DetectMultiScale(gpuGray, 1.1, 10, Size.Empty);
                                foreach (Rectangle f in regions)
                                {
                                    //draw the detected object with blue color
                                    image.Draw(f, new Bgr(Color.Blue), 2);
                                    using (GpuImage<Gray, Byte> objImg = gpuGray.GetSubRect(f))
                                    {
                                        //For some reason a clone is required.
                                        //Might be a bug of GpuCascadeClassifier in opencv
                                        using (GpuImage<Gray, Byte> clone = objImg.Clone())
                                        {
                                        }
                                    }
                                }
                            }
                            watch.Stop();
                        }
                    }
                    else
                    {
                        //Read the HaarCascade object
                        using (HaarCascade cascade = new HaarCascade(cascadeFileName))
                        {
                            watch = Stopwatch.StartNew();
                            using (Image<Gray, Byte> gray = image.Convert<Gray, Byte>()) //Convert it to Grayscale
                            {
                                //normalizes brightness and increases contrast of the image
                                gray._EqualizeHist();

                                //Detect the objects in the gray scale image and store the locations as rectangles
                                MCvAvgComp[] objectsDetected = cascade.Detect(
                                    gray,
                                    1.1,
                                    10,
                                    Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                                    new Size(20, 20));

                                foreach (MCvAvgComp f in objectsDetected)
                                {
                                    //draw the detected object with blue color
                                    image.Draw(f.rect, new Bgr(Color.Blue), 2);

                                    //Set the region of interest on the detected object
                                    gray.ROI = f.rect;
                                }
                            }
                            watch.Stop();
                        }
                    }

                    //display the result
                    pictureBox1.Image = image.ToBitmap();
                    //ImageViewer.Show(image, String.Format("Completed detection using {0} in {1} milliseconds",
                    //    GpuInvoke.HasCuda ? "GPU" : "CPU", watch.ElapsedMilliseconds));
                }
            }
        }

  2. Hi Andol, I am doing my final-year project on human body detection in C# with Visual Studio 2010 and EmguCV (OpenCV in C#). I downloaded your haarcascade for the hand, but it does not work in my project, while the other detections (nose, face, eye) with their haarcascades do work with my source code. Would you give me some suggestions? My source code is the same as posted above.


    • @Jerry
      Hi Jerry, sorry, I am quite busy these days because I have to rush my thesis writing before the Xmas holiday, so the programming work will be on hold until then.
      About the bug in your project: I guess it might be caused by bugs in my source code, by differences in system configuration, or by programming faults. So my suggestion is to write the code on your own, while still referring to the logic in my source code. Also, please be aware of the haar training files, as they are not compatible with every programme.
      Good luck, cheers

  3. Hi Andol,
    I am researching object detection. I downloaded your xml file and used it in the "performance" test of OpenCV haartraining. Unfortunately, your xml does not work for me. I do not fully understand why, but I think it may be because my testing images are not good. Can you tell me about your positive folder? If it is confidential, you could send your answer to my mail.
    Thank you very much

    • @Tien Do
      Hi Tien Do, I will check the haartraining xml files carefully and reply to you later.
      Thanks for your valuable feedback. Would you mind sharing some of your ongoing project?

  4. Hi Andol,
    My project's positive images contain just one gesture, at different angles, on a black background, because my project only detects the hand, not the gesture (which is step 2). I have two problems:
    – How can I subtract the background in the positive images? When I run detection, it only detects the hand on a black background 🙁
    – I only used OpenCV haartraining and I want to rebuild the source code of the "HaarDetectingObject" function, but I really feel confused by their code ^^.
    I will send you a mail with my xml file and a picture of the positive folder. I am not allowed to share the positive images, sorry for that.
    Thank you very much

  5. @andol
    In your post there are already two haarcascades for the hand. One of them works for the left hand, but not very well: in my program only a few hands, in special conditions, are detected. Do you have any other haarcascade for hands?

    • @jerry
      I should still have some haartraining xml files left over from the hand trainings; I will have a quick check and upload them to the download page.
      How is your work going, Jerry?
      cheers

  6. Hi Andol sir,
    I am getting a runtime error…
    "Unhandled exception at 0x00d0186b in andol hand.exe: 0xC0000005: Access violation reading location 0x00000004."
    Does it need a special camera?

    • @Lokesh
      No special camera is needed.
      This may be caused by reading memory after a variable has been removed or released; check the variables before using them.
      The code does come with a bug of this kind.
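
      For illustration, the kind of defensive checks meant here, sketched with the OpenCV C API used in the post (names and the cascade path are placeholders):

          // make sure the camera, the cascade, and the first frame are valid before use
          CvCapture* capture = cvCaptureFromCAM(0);
          if (!capture) { fprintf(stderr, "no camera found\n"); return -1; }

          CvHaarClassifierCascade* cascade =
              (CvHaarClassifierCascade*)cvLoad("1256617233-1-haarcascade_hand.xml", 0, 0, 0);
          if (!cascade) { fprintf(stderr, "cascade file not found or invalid\n"); return -1; }

          IplImage* frame = cvQueryFrame(capture);
          if (!frame) { fprintf(stderr, "no frame grabbed\n"); return -1; }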

  7. Thanks for the reply, sir.
    I am working on a project for mouse control using hand gestures,
    but I am just a beginner. Can you please suggest steps or methods?

    • @Lokesh
      I do not know how much of that you have achieved, but generally there are three steps to do the job:
      1. detect the gestures and project the coordinates to the screen
      2. get a function working to move the mouse and make clicking operations (see the sketch below)
      3. connect these two steps
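
      A minimal sketch of step 2 on Windows, using the Win32 API (the function name is illustrative; mapping gesture coordinates to the screen is step 1 and is not shown):

          #include <windows.h>

          // move the cursor to an absolute screen position and perform a left click
          void moveAndClick(int x, int y)
          {
              SetCursorPos(x, y);                            // move the mouse pointer
              mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0); // press the left button
              mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);   // release it
          }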

  8. Sir,
    detecting the gesture is the major issue for me.
    How do I create the xml file on Windows?
    Will it be suitable for my project on real-time gesture recognition?
    I need 5 gestures (left click, right click, double click, scroll up, scroll down).
    I read about skin-colour based gesture recognition too, but it has poor background subtraction.
    How do I identify these gestures?

  9. Hi sir, I am trying to run your "VERSION: HAND DETECTION 1.0" code, but I am getting this error: "Unhandled exception at 0x77c415de in Test(18_01_2012).exe: 0xC0000005: Access violation reading location 0x00000000."

    I know that you already explained the main cause of this error, but I still do not know how to fix it. What I have observed is that the call stack window shows memory cannot be read at this line: "cvInRangeS(hsv_image, hsv_min, hsv_max, hsv_mask);"

    Could it be related to the OpenCV version being used? I am using OpenCV 2.0.

    Thanks in advance! I am looking forward to your feedback.

  10. Hi Andol,
    From how far away can you detect a hand? The issue I need to solve is increasing the detection range: I only detect well from 1 m to 2.5 m. Can you help me solve this problem or propose something?
    Thank you very much

    • @Tien Do
      So far, the hand detection can detect and recognise simple gestures, such as the palm, the fist, and very clear finger gestures.
      The hand shape extraction from dynamic backgrounds has been quite robust, while the hand gesture recognition, depending on the gesture angles, is still not robust enough for practical use.
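
      For context, a minimal sketch of the kind of HSV skin masking such hand-shape extraction typically relies on (the threshold values are illustrative assumptions, not the exact parameters of the posted code):

          // convert the current frame to HSV and keep only pixels in a rough skin-colour range
          IplImage* hsv_image = cvCreateImage(cvGetSize(frame), 8, 3);
          IplImage* hsv_mask  = cvCreateImage(cvGetSize(frame), 8, 1);
          CvScalar hsv_min = cvScalar(0, 30, 80, 0);     // assumed lower bound
          CvScalar hsv_max = cvScalar(20, 150, 255, 0);  // assumed upper bound

          cvCvtColor(frame, hsv_image, CV_BGR2HSV);
          cvInRangeS(hsv_image, hsv_min, hsv_max, hsv_mask);
          // hsv_mask is now a binary image with the approximate hand region set to 255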

  11. Hi sir, I am doing my project in OpenCV with Visual Studio, so could you send me the scene change detection code, to identify scene changes in video?

    • @Pradeep
      I am afraid I do not have code at hand for scene change detection at the moment, but I guess it is somewhat like background detection. If so, I think I know what may help you.
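
      For illustration, a very simple frame-differencing sketch along those lines (grayCurr/grayPrev are assumed to be consecutive grayscale frames; the thresholds are placeholders):

          // absolute difference between the current and the previous grayscale frame
          IplImage* diff = cvCreateImage(cvGetSize(grayCurr), 8, 1);
          cvAbsDiff(grayCurr, grayPrev, diff);
          cvThreshold(diff, diff, 30, 255, CV_THRESH_BINARY);

          // if a large fraction of the pixels changed, treat it as a scene change
          int changed = cvCountNonZero(diff);
          if (changed > 0.4 * diff->width * diff->height)
              printf("scene change detected\n");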

  12. Hello sir, I have detected a frame which contains a hand image from a live cam. How do I detect that the frame has repeated for about 2 seconds (say, about 10 consecutive frames)?

  13. I am new to image processing, so…
    When I compile your source code it gives a runtime error on this line: //CvSeq *hand = cvHaarDetectObjects(img, cascade, hstorage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(100, 100));//
    How can I fix this?
    In my project I try to move the mouse pointer using the hand and to implement mouse actions using hand gesture recognition… can I use haarcascade hand detection to detect the hand?

    • @Nayanajith
      The answer to whether a haar classifier can be used to detect hand gestures is a big YES.
      However, there are some issues to overcome before using hand gestures to control mouse pointers. The very tricky one is detecting hand gestures in many poses, such as a single finger or a changing palm. Recognising these gestures with a haar classifier requires a huge number of examples (both positive and negative) as a training database. Without this training, the task seems impossible.

    • @Asim
      The work of using hand gestures to control the mouse cursor has been implemented in a preliminary way. Another post relating to mouse control can be found here: http://www.andol.me/hci/1981.htm. Sorry, I cannot share the latest source code for robust hand gesture detection, as some of that code has not yet been cleared for public use. However, as mentioned, the way to recognise hand gestures for further interaction is clear.

  14. Thanks Andol,
    can you share with me your xml file for hand gesture recognition?
    Also, could you point me to a site which can help me with mouse programming in C/C++, not C#?
    Please help me, my FYP is not working.
    Thanks
