My first NoFeelings blog included a picture of Jenny McCarthy. JM is famous for championing the rights of autistic people, so I felt she was an appropriate person to use in that blog. I'm writing a series of blogs about a program I'm working on to detect emotions. People who suffer from severe Alexithymia (look that up on the Internet) are unable to read other people's emotions just by looking at their faces. 85% of Autistic people suffer from Alexithymia, and 50% of Autistic people suffer from severe Alexithymia. Hence the appropriateness of using a picture of Jenny McCarthy in my blog.
My second blog displayed a picture of Daryl Hannah, an actress I've enjoyed watching in films over the last four decades. DH is also well known for having Asperger's; Autism lite. DH has said in interviews that her social awkwardness prevented her from attending film openings and other Hollywood ceremonies that were important to her acting career. Social awkwardness due to Asperger's is attributable to Alexithymia, and hence the appropriateness of Daryl Hannah's picture in that blog.
Last year a psychologist my wife knows told her that her husband (me) appeared to have Asperger's. The psychologist said she would have to run a battery of tests on me to determine whether I actually had Asperger's. I agreed to the tests, and tested positive for Asperger's. A lot of idiosyncrasies I was aware of having turned out to be symptoms of Asperger's, including a difficulty in reading people. Hence the appropriateness of using a picture of me in this blog. Besides, this program is supposed to work with a webcam, or even a smartphone, and I needed to check whether my code worked with a webcam.
That's a picture of me on the left side of this webpage; the input to facial6.py. The picture on the right is the output of facial6.py. As you can see, facial6.py correctly identified my face, eyes, nose, and mouth. In my last blog I mentioned that I had just seen the dentist. My mouth is still swollen on one side, so it's not framed as cleanly as Daryl Hannah's mouth was in my last blog.
Here's the command I ran to process the picture of me:
facial6.py haarcascade_mcs_mouth.xml Nariz.xml
There are other XML files besides the two listed above; see the first blog in this series to get those XML files. In my last blog I stated that I needed to get XML files for the different emotions. I.e., I was planning on taking the smiles XML file (it comes with OpenCV) and running it over the mouth region found by my program; a hit would denote joy. Likewise, I'd need an XML file trained on frowns to determine if somebody was sad, etc. ...but, before doing that, I needed to make sure my Python script worked when receiving its images from a webcam, as a video. Using me again as the webcam model, I got some strange results:
I was expecting green rectangles around my eyes, a purple rectangle around my nose, and a red one around my mouth. All of these rectangles do appear in the correct locations at different points in the video, but not consistently. Plus, I'm getting bogus rectangles showing up at different times in the video, too. The simple solution might be to take single snapshots from the webcam, waiting a few seconds between each one; maybe, maybe not. Well, this is where my program stands now, but obviously I need to research (do my own tests) what needs to be done to proceed.
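One cheap trick I may test, besides spacing out single snapshots, is to require a feature to show up in a majority of the last few frames before drawing its rectangle. A minimal sketch of that idea (the five-frame window is an arbitrary assumption):

```python
from collections import deque

class DetectionSmoother:
    """Track hit/miss flags for one feature over the last n frames and
    report the feature as present only when a strict majority of those
    frames agreed. This damps rectangles that flicker in and out between
    frames and suppresses one-frame bogus detections."""
    def __init__(self, n=5):
        self.history = deque(maxlen=n)

    def update(self, detected):
        self.history.append(bool(detected))
        return sum(self.history) > len(self.history) / 2
```

The usage would be one smoother per feature (eyes, nose, mouth): each frame, feed update() True or False depending on whether that cascade fired, and draw the rectangle only when update() returns True.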