In my last blog I increased the angle I searched around the mouth in the vector file. I had already increased the search angle in the previous blog (two blogs back), but I realized I had to do that in the vector file too, or I wouldn't have any smiles at the increased angle to search for. It didn't help (apparently), so I ended the blog on a disappointed note, saying I needed to go back to my earlier efforts at creating an improved haarcascade file to search for smiles and find out where I went wrong. I've done another Raspberry Pi blog since then, Using a Raspberry Pi Model B, Rev 2 as a WebCam, which gave me some distance from this blog series and let me realize I hadn't tried my other idea for finding more smiles: increase the area around the mouth that I searched for smiles. Success!!!
A huge improvement for smile captures. All the models in this video are smiling, so there really isn't a control group. I need to select a video that shows a variety of emotional states and make sure it only captures smiles. ...or, I could use a video of non-smiles and make sure there are no false positives; I already know that smiles are captured. In my links at the bottom of this page, I include a link to video_streamer17Surface.py, the file that plays the original video and creates a video of smile captures (e.g., the video in this blog). I increased the search area by hand (look at the video_streamer file to see what I'm talking about). For those who don't like doing things by hand, there is the cv2.resize() method, or the generic resize() function. I should probably continue work on the haarcascade file for this project by adding some more smiles to it; however, I just as probably should move on to some other emotions (frowns --> sadness).
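To give a feel for what "increasing the search area by hand" looks like, here is a minimal sketch of a helper that grows a detected mouth rectangle before the smile cascade is run over it. This is my own illustration, not code from video_streamer17Surface.py; the function name expand_roi, the scale parameter, and the example coordinates are all assumptions.

```python
# Hypothetical helper: enlarge an (x, y, w, h) search rectangle around a
# detected mouth by `scale`, keeping it centered and clamped to the frame.
# OpenCV detections come back as (x, y, w, h) tuples in pixel coordinates.

def expand_roi(x, y, w, h, scale, frame_w, frame_h):
    """Grow the rectangle by `scale` about its center, clamped to the frame."""
    new_w = int(w * scale)
    new_h = int(h * scale)
    # Shift the origin so the enlarged box stays centered on the old one.
    new_x = max(0, x - (new_w - w) // 2)
    new_y = max(0, y - (new_h - h) // 2)
    # Clamp so the box never runs off the right or bottom edge of the frame.
    new_w = min(new_w, frame_w - new_x)
    new_h = min(new_h, frame_h - new_y)
    return new_x, new_y, new_w, new_h

# Example: a 60x30 mouth box in a 640x480 frame, grown by 1.5x.
print(expand_roi(100, 120, 60, 30, 1.5, 640, 480))  # (85, 113, 90, 45)
```

The enlarged tuple would then be used to slice the grayscale frame (e.g., `gray[y:y+h, x:x+w]`) before calling the smile cascade's detectMultiScale() on that region, which is the same effect as widening the hand-coded bounds in the script.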