
Abstract

Visual attention is studied by detecting a salient object in an input image, and it is used in various image-processing applications such as image segmentation, patch-rarity analysis, and pattern recognition. In this paper, saliency measurement is performed using the wavelet transform. The proposed model measures saliency in two color spaces: RGB and Lab. Together, the two color spaces give six channels, and each channel generates a feature map using the wavelet transform. Next, local and global saliency measures are calculated and fused to indicate the saliency of each patch. Local saliency is the distinctiveness of a patch from its surrounding patches; global saliency is the inverse of a patch's probability of occurrence over the entire image. The final saliency map is built by normalizing and fusing the local and global saliency maps of all channels from both color spaces. Experimental evaluation shows that the proposed model gives better results.
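The pipeline summarized above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes a single-level Haar wavelet, a 3×3 neighbourhood for local saliency, a 16-bin histogram for global saliency, and multiplicative fusion per channel; the paper's actual wavelet family, patch size, and fusion rule may differ.

```python
import numpy as np

def haar_detail_map(channel):
    """One-level 2-D Haar decomposition; return the summed detail
    (horizontal + vertical + diagonal) energies as a feature map."""
    h, w = channel.shape[:2]
    c = channel[:h - h % 2, :w - w % 2].astype(float)
    a = c[0::2, 0::2]; b = c[0::2, 1::2]
    d = c[1::2, 0::2]; e = c[1::2, 1::2]
    lh = (a - b + d - e) / 4.0   # horizontal detail
    hl = (a + b - d - e) / 4.0   # vertical detail
    hh = (a - b - d + e) / 4.0   # diagonal detail
    return lh**2 + hl**2 + hh**2

def local_saliency(fmap, k=1):
    """Distinctiveness of each position from the mean of its neighbours."""
    padded = np.pad(fmap, k, mode='edge')
    h, w = fmap.shape
    neigh = np.zeros_like(fmap)
    n = 0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if dy == 0 and dx == 0:
                continue
            neigh += padded[k + dy:k + dy + h, k + dx:k + dx + w]
            n += 1
    return np.abs(fmap - neigh / n)

def global_saliency(fmap, bins=16):
    """Inverse of each value's probability of occurrence over the map."""
    hist, edges = np.histogram(fmap, bins=bins)
    prob = hist / hist.sum()
    idx = np.clip(np.digitize(fmap, edges[1:-1]), 0, bins - 1)
    return 1.0 - prob[idx]       # rare values -> high saliency

def normalize(m):
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency_map(channels):
    """Fuse normalized local and global saliency over all channels
    (e.g. the six channels of RGB and Lab)."""
    fused = None
    for ch in channels:
        fmap = haar_detail_map(ch)
        s = normalize(local_saliency(fmap)) * normalize(global_saliency(fmap))
        fused = s if fused is None else fused + s
    return normalize(fused)

# Usage on a random test image (only RGB channels for brevity):
img = np.random.default_rng(0).random((64, 64, 3))
smap = saliency_map([img[:, :, i] for i in range(3)])
```

For a real image, the Lab channels would be obtained by color-space conversion (e.g. with scikit-image) and appended to the channel list before fusion.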

Keywords: saliency map, wavelet transform, local saliency, global saliency


Author Biographies

A. Srilakshmi, Dr. HS MIC College of Technology, Kanchikacherla, Krishna District, AP

M.Tech Student, Dept. of DECS, DVR

Mr. D. Prabhakar, Dr. HS MIC College of Technology, Kanchikacherla, Krishna District, AP

Associate Professor, Dept. of ECE, DVR
How to Cite
Srilakshmi, A., & Prabhakar, M. D. (2014). A Saliency Detection Model Based on Wavelet Transform Through Fusion of Color Spaces. International Journal of Emerging Trends in Science and Technology, 1(09). Retrieved from https://igmpublication.org/ijetst.in/index.php/ijetst/article/view/419
