In my last video blog for this series, I said I was going to research what I needed to do to get a better capture of my facial features (I took a video of myself). I looked on YouTube and found a series of OpenCV videos for Python by "sentdex". I found sentdex's videos very informative, so I can recommend them to anyone wanting to learn OpenCV, Python, and several other languages and topics (I didn't watch those other videos, but sentdex seems to know what he's talking about in the ones I did watch, so I'm sure the rest are great too). There's a link to my new code at the bottom of this blog. It still needs to handle different head angles (if I'm not looking straight into my webcam with a "frontal" positioning of my head, the facial feature detection is adversely affected), but for now I see the code as a good starting point for testing for "Feelings". (I just reread my own blog and realized I left something out: the key idea I took from sentdex's videos was that I need to create a region of interest for each detected face in a frame, check for each facial feature within that region, and then loop to the next video frame. It seems obvious in retrospect, but I'm new to this type of programming.)
If you look at the code for this script (click the link below for "Streaming Video Script"), you'll see that I'm only using the smile XML file for mouth detection; so far it seems to be great at detecting mouths, but not so great at detecting smiles. My last blog has a link to haarcascade_smile.xml (in case you want to download the file or take a look at it; it comes with the OpenCV package).