Abstract
The human face is an important part of an individual's body, and it plays a particularly important role in conveying an individual's behavior and emotional state. Manually segregating a list of songs and generating an appropriate playlist based on an individual's emotional features is a tedious, time-consuming, labor-intensive and uphill task. Various algorithms have been proposed and developed to automate this process. However, the existing algorithms are computationally slow, less accurate and sometimes even require additional hardware such as sensors.
The proposed system generates a playlist automatically from the extracted facial expression, thereby reducing the effort and time involved in performing the process manually. The proposed system thus reduces the computational time involved in obtaining results and the overall cost of the designed system, while increasing its overall accuracy. The system is tested on both user-dependent (dynamic) and user-independent (static) datasets. Facial expressions [1] are captured using a camera. The accuracy of the emotion detection algorithm used in the system is around 85-90% for real-time images and around 98-100% for static images. On average, the proposed algorithm takes around 0.95-1.05 seconds to generate an emotion-based music playlist. Thus, it yields better accuracy in terms of performance and computational time, and reduces the design cost, compared to the algorithms reported in the literature survey.
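
The abstract describes a pipeline that captures a facial image from a camera, detects the user's emotion, and generates a matching playlist. The following is a minimal sketch of such a pipeline, assuming OpenCV is available; the `classify_emotion` function and the emotion-to-playlist mapping are hypothetical placeholders, not the paper's actual algorithm.

```python
# Minimal sketch of the capture -> emotion -> playlist pipeline (assumptions noted).
import cv2

# Haar cascade shipped with OpenCV for frontal-face detection.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Hypothetical emotion-to-playlist mapping (placeholder entries).
PLAYLISTS = {
    "happy": ["upbeat_track_1", "upbeat_track_2"],
    "sad": ["calm_track_1", "calm_track_2"],
    "neutral": ["ambient_track_1"],
}


def classify_emotion(face_img) -> str:
    """Placeholder for a trained emotion classifier (assumption, not the paper's model)."""
    # A real implementation would run a trained classifier on the cropped face here.
    return "neutral"


def recommend_from_frame(frame):
    """Detect a face in the frame and return a playlist for its predicted emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in this frame
    x, y, w, h = faces[0]
    emotion = classify_emotion(gray[y:y + h, x:x + w])
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        print(recommend_from_frame(frame))
```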