Robertina's first steps towards Empathy!
Currently we are working on making our Robot Swarm perceive humans. The robot's sense of vision is a camera; it sees its environment, but that alone is not enough. The robot needs to learn to differentiate a bit more.
Previously we used Python and could already detect emotions, but to be more flexible and gain more control, we are porting our coding environment to C++.
Since the goal is to respond to human emotions, the first step is to detect faces. To teach Robertina to identify a human face, we use the computer vision and machine learning libraries OpenCV and dlib.
And see what she can already perceive: that's Adam and me, and she can identify two heads and, amazingly, also our so-called facial landmarks, which will play a relevant role in her future ability to actually perceive emotions!
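For anyone curious what such a detection loop can look like in C++, here is a minimal sketch, not our actual code: it combines OpenCV's camera capture with dlib's frontal face detector and the standard 68-point shape predictor (the pre-trained "shape_predictor_68_face_landmarks.dat" model file is assumed to be present), and draws the detected faces and landmarks onto the live image.

```cpp
#include <dlib/opencv.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <opencv2/opencv.hpp>

int main() {
    // Open the default camera (Robertina's "eye").
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    // dlib's HOG-based frontal face detector and the 68-point landmark model
    // (assumed to be downloaded next to the executable).
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor predictor;
    dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> predictor;

    cv::Mat frame;
    while (cap.read(frame)) {
        // Wrap the OpenCV frame so dlib can read it without copying.
        dlib::cv_image<dlib::bgr_pixel> img(frame);

        // Find every face in the frame, then its 68 facial landmarks.
        for (const dlib::rectangle& face : detector(img)) {
            dlib::full_object_detection landmarks = predictor(img, face);

            // Draw each landmark as a small green dot.
            for (unsigned long i = 0; i < landmarks.num_parts(); ++i) {
                cv::circle(frame,
                           cv::Point(landmarks.part(i).x(), landmarks.part(i).y()),
                           2, cv::Scalar(0, 255, 0), -1);
            }
            // Draw the face bounding box in blue.
            cv::rectangle(frame,
                          cv::Point(face.left(), face.top()),
                          cv::Point(face.right(), face.bottom()),
                          cv::Scalar(255, 0, 0), 2);
        }

        cv::imshow("Robertina sees", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

Those landmark points (eyes, eyebrows, nose, mouth, jawline) are exactly the raw material that later makes emotion recognition possible.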