WSEAS Transactions on Information Science and Applications


Print ISSN: 1790-0832
E-ISSN: 2224-3402

Volume 14, 2017




Comparing the Performance of Emotion-Recognition Implementations in OpenCV, Cognitive Services, and Google Vision APIs

AUTHORS: Luis Antonio Beltrán Prieto, Zuzana Komínková Oplatková


ABSTRACT: Emotions reflect how people feel in different situations. Various machine learning algorithms have been developed for detecting emotions in multimedia elements such as images and videos. These techniques can be evaluated by comparing their accuracy on a common dataset in order to determine which algorithm performs best. This paper compares three implementations of emotion recognition in faces, each built with a different technology. OpenCV is an open-source library of functions and packages widely used for computer-vision analysis and applications. Cognitive Services and Google Cloud AI are sets of APIs that provide machine learning and artificial intelligence algorithms for developing smart applications capable of integrating computer-vision, speech, knowledge, and language-processing features. Three Android mobile applications were developed in order to compare the performance of an OpenCV algorithm for emotion recognition, an implementation of the Emotion cognitive service, and a Google Cloud Vision deployment for emotion detection in faces. For this research, one thousand tests were carried out per experiment. Our findings show that the OpenCV implementation achieved the best performance, which can be improved further by increasing the sample size per emotion during the training step.
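The evaluation loop described above — picking the most probable emotion from a recognizer's per-emotion confidence scores and tallying accuracy over many test images — can be sketched as follows. This is a minimal illustration, not the authors' code: the `scores` dictionary mimics the shape of a per-emotion confidence response such as the Emotion cognitive service returns, and the helper names are ours.

```python
# Minimal sketch of the accuracy comparison described in the abstract.
# The score dictionary imitates a per-emotion confidence response
# (values roughly summing to 1); function names are illustrative.

def predict_emotion(scores):
    """Return the emotion label with the highest confidence score."""
    return max(scores, key=scores.get)

def accuracy(results):
    """Fraction of correct predictions; results is a list of
    (predicted_label, true_label) pairs, one per test image."""
    correct = sum(1 for predicted, true in results if predicted == true)
    return correct / len(results)

if __name__ == "__main__":
    response = {"anger": 0.01, "happiness": 0.92,
                "neutral": 0.05, "sadness": 0.02}
    print(predict_emotion(response))  # happiness
    runs = [("happiness", "happiness"),
            ("anger", "sadness"),
            ("neutral", "neutral")]
    print(accuracy(runs))
```

The same `accuracy` tally would be computed once per implementation over its one thousand test runs, making the three technologies directly comparable.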

KEYWORDS: Emotion recognition, OpenCV, Fisherfaces, Cognitive Services, Cloud Vision, face detection
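Unlike a numeric score vector, the Cloud Vision face-detection response expresses emotions as likelihood strings (fields such as `joyLikelihood` with values from `VERY_UNLIKELY` to `VERY_LIKELY`). A hedged sketch of reducing one face annotation to a single dominant emotion — the enum values and field names follow the Vision API, while the rank mapping and function names are our own illustrative choices:

```python
# Sketch: reduce a Cloud Vision faceAnnotation to one dominant emotion.
# The likelihood enum values and *Likelihood field names are those of the
# Vision API; the numeric ranking is an illustrative assumption.

LIKELIHOOD_RANK = {
    "UNKNOWN": 0, "VERY_UNLIKELY": 1, "UNLIKELY": 2,
    "POSSIBLE": 3, "LIKELY": 4, "VERY_LIKELY": 5,
}

EMOTION_FIELDS = {
    "joy": "joyLikelihood", "sorrow": "sorrowLikelihood",
    "anger": "angerLikelihood", "surprise": "surpriseLikelihood",
}

def dominant_emotion(face_annotation):
    """Pick the emotion whose likelihood string ranks highest."""
    ranked = {
        emotion: LIKELIHOOD_RANK.get(face_annotation.get(field, "UNKNOWN"), 0)
        for emotion, field in EMOTION_FIELDS.items()
    }
    return max(ranked, key=ranked.get)

if __name__ == "__main__":
    face = {"joyLikelihood": "VERY_LIKELY", "sorrowLikelihood": "VERY_UNLIKELY",
            "angerLikelihood": "UNLIKELY", "surpriseLikelihood": "POSSIBLE"}
    print(dominant_emotion(face))  # joy
```

Collapsing the likelihoods this way yields a single label per face, which lets the Cloud Vision output be scored against ground-truth emotions in the same manner as the other two implementations.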


WSEAS Transactions on Information Science and Applications, ISSN / E-ISSN: 1790-0832 / 2224-3402, Volume 14, 2017, Art. #20, pp. 184-190


Copyright © 2017. The author(s) retain the copyright of this article, which is published under the terms of the Creative Commons Attribution License 4.0.
