PhD Thesis - Visual attention modeling for 3D scenes in Virtual and Mixed Reality
- J. Wang, M. Perreira Da Silva, P. Le Callet, et al. Computational model of stereoscopic 3D visual saliency. IEEE Transactions on Image Processing, 2013, vol. 22, no. 6, pp. 2151-2165.
- P. Lebreton, A. Raake, M. Barkowsky, P. Le Callet. Evaluating depth perception of 3D stereoscopic videos. IEEE Journal of Selected Topics in Signal Processing, 2012, vol. 6, no. 6, pp. 710-720.
- T. Vigier, M. Perreira Da Silva, P. Le Callet. Impact of visual angle on attention deployment and robustness of visual saliency models in videos: From SD to UHD. In: 2016 IEEE International Conference on Image Processing (ICIP), 2016, pp. 689-693.
The proposed PhD position concerns item (3) above. Our goal is to build computational visual attention models that predict both head and eye movements in 3 (and possibly 6) degrees-of-freedom environments, taking into account mesh saliency, rendered-scene saliency, and human visual behavior (perceptual biases, bottom-up and top-down influences, etc.). These visual attention models will also be used to predict user interaction in the virtual environment (translation + zoom in/out). The PhD candidate will benefit from the work already conducted in the LS2N IPI team on 3D visual attention and perceptual biases:
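To make the fusion idea concrete, here is a minimal sketch of combining a rendered-scene saliency map, a mesh-saliency map projected into screen space, and a Gaussian center bias (a well-known perceptual bias in viewing behavior). This is an illustrative assumption, not the team's actual model: the `fuse_saliency` function, the linear weighting scheme, and all weight values are hypothetical placeholders.

```python
import numpy as np

def fuse_saliency(scene_saliency, mesh_saliency,
                  center_bias_sigma=0.3,
                  w_scene=0.5, w_mesh=0.3, w_bias=0.2):
    """Hypothetical linear fusion of saliency cues.

    scene_saliency, mesh_saliency: H x W arrays in [0, 1]
    (the mesh saliency is assumed already projected to screen space).
    Weights are illustrative, not calibrated values.
    """
    h, w = scene_saliency.shape
    # Gaussian center bias: viewers tend to fixate near the view center.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = center_bias_sigma * max(h, w)
    bias = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    # Weighted sum of the three cues, renormalized to [0, 1].
    fused = w_scene * scene_saliency + w_mesh * mesh_saliency + w_bias * bias
    return fused / fused.max()
```

A real model along the lines described above would replace the fixed weights with learned ones and add top-down and temporal (head-motion) terms; the sketch only shows where bottom-up cues and a perceptual bias enter the prediction.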