
Séminaire IPI : « Predicting artificial visual field losses: a gaze-based inference study »

The next IPI seminar will be held on Friday, the 25th of January (2pm-3pm), in room D005 at Polytech.

The speaker will be Erwan David, a PhD student in the IPI team.

Title: Predicting artificial visual field losses: a gaze-based inference study

Visual field defects are a worldwide concern: the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying visual field losses from gaze alone could prove crucial in the future for screening tests. Gaze movements and scanpaths contain a wealth of information (Coutrot, Hsiao, & Chan, 2018). Gaze features related to saccades and fixations have demonstrated their usefulness for identifying mental states, cognitive processes and neuropathologies (Itti, 2015).
Fifty-four participants took part in a free-viewing task over visual scenes while experiencing artificial scotomas (central and peripheral) of varying diameters in a gaze-contingent paradigm (Duchowski, Cournia, & Murphy, 2004). We study the importance of a set of gaze features as predictors to best differentiate between scotoma conditions. We first report effect sizes with Linear Mixed Models (LMMs), then show redundancies in variance with correlation and factorial analyses. We end by implementing Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNNs) as classifiers in order to measure the predictive usefulness of gaze features. We demonstrate that saccade relative angle, amplitude and peak velocity are the best gaze features for distinguishing between artificial scotomas of different types and diameters.
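As an illustration of the kind of gaze features the study relies on, here is a minimal sketch (not the authors' code) that segments a gaze trace into saccades with a simple velocity threshold (I-VT-style detection) and computes amplitude, peak velocity and relative angle per saccade; the threshold value and the segmentation scheme are assumptions chosen for the example.

```python
import numpy as np

def saccade_features(x, y, t, vel_threshold=30.0):
    """Segment a gaze trace into saccades with a velocity threshold and
    return per-saccade (amplitude, peak velocity, relative angle).
    x, y: gaze positions in degrees of visual angle; t: timestamps in seconds.
    """
    x, y, t = map(np.asarray, (x, y, t))
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt          # sample-to-sample speed, deg/s
    is_sacc = vel > vel_threshold                        # super-threshold samples

    # Find contiguous runs of super-threshold samples (one run = one saccade).
    edges = np.flatnonzero(np.diff(np.concatenate(([0], is_sacc.astype(int), [0]))))
    starts, ends = edges[0::2], edges[1::2]

    feats = []
    prev_dir = None
    for s, e in zip(starts, ends):
        dx, dy = x[e] - x[s], y[e] - y[s]
        amplitude = np.hypot(dx, dy)                     # deg
        peak_vel = vel[s:e].max()                        # deg/s
        direction = np.arctan2(dy, dx)
        # Relative angle: signed turn between successive saccade directions,
        # wrapped to (-pi, pi]; undefined (NaN) for the first saccade.
        rel_angle = np.nan if prev_dir is None else np.angle(np.exp(1j * (direction - prev_dir)))
        prev_dir = direction
        feats.append((amplitude, peak_vel, rel_angle))
    return feats
```

Such per-saccade tuples are the kind of input a downstream HMM or RNN classifier would consume.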

4th Mojette Day

We are delighted to invite you to the Fourth Mojette Day which will be held at Polytech Nantes on Thursday, January 17th 2019.
We look forward to welcoming you in the main amphitheater (A1) in the IRESTE building at Polytech Nantes, France.

Mojette Day is our unique annual occasion for a review of ongoing research concerning Discrete Tomography, including the Mojette transform, the Finite Radon Transform and related projective representations.
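For readers unfamiliar with the topic, the Dirac Mojette transform can be sketched in a few lines: a projection along a coprime direction (p, q) sums pixel values into bins indexed by a linear combination of the pixel coordinates. The bin-index convention used below (b = -q·k + p·l for pixel at row k, column l) is one common choice; conventions vary across the literature.

```python
import numpy as np
from math import gcd

def mojette_projection(img, p, q):
    """Dirac-Mojette projection of a 2D array along direction (p, q).
    Bin index b = -q*k + p*l for pixel (k, l); bins are shifted to start at 0.
    """
    assert gcd(p, q) == 1, "(p, q) must be coprime"
    rows, cols = img.shape
    k, l = np.indices((rows, cols))
    b = -q * k + p * l
    b -= b.min()                                    # shift bins to start at 0
    nbins = (rows - 1) * abs(q) + (cols - 1) * abs(p) + 1
    proj = np.zeros(nbins)
    np.add.at(proj, b.ravel(), img.ravel())         # sum pixels falling in each bin
    return proj
```

Note that, unlike the classical Radon transform, the number of bins depends on both the direction and the image size, which is the source of the redundancy properties discussed at the Mojette Day.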

As at the last edition, we shall have (at least) three guest speakers:
1. Rob Tijdeman, from Leiden University, Netherlands, who has worked on Discrete Tomography since 1999 with many colleagues around the world. Rob will present some ongoing research about the redundancy created by the tomographic representation.
2. Imants Svalbe, from Monash University in Melbourne, Australia. Imants works on several aspects of Discrete Tomography, especially the Finite Radon Transform, its strong links with the Mojette transform, and their application as tools to build perfect auto-correlation arrays.
3. Silvia Maria Carla Pagani, from Politecnico di Milano. Silvia has expertise in binary tomography, particularly regarding ghosts or switching elements and the characterization of regions of uniqueness.

8:30 – 9:00 | Coffee at IPI
9:00 | Welcome by Philippe Dépincé, Director of Polytech Nantes
9:10 | Silvia Pagani (Università Cattolica del Sacro Cuore, Brescia, Italy): « A conjecture about hv-convex sets: some recent developments »
9:55 | Robert Tijdeman (Mathematical Institute, Leiden University, Netherlands): « Maximal and near-minimal ghost components »
10:40 | Coffee break at IPI
11:15 | Imants Svalbe (Monash Univ. Melbourne, Australia): « Discrete 1D projections to forge 2D discrete shapes »
12:00 | Apéro + Lunch + coffee at IPI
14:00 | Şuayb Ş. Arslan (MEF University, Istanbul, Turkey): « Asymptotically MDS Array BP-XOR Codes »
14:30 | Nicolas Normand: « Lagrange ghost prints »
15:00 | Didier Féron (Rozo Systems, Nantes / San Francisco): « Optimized Mojette transform implementation for fast storage »
15:30 | Jean-Pierre Guédon: « Generating Mojette projection sets from Halphen series »
16:00 | Open discussion 1: Tools for Discrete Tomography
16:30 | Open discussion 2: Open problems in Discrete Tomography
17:00 | End of the Mojette Day – Pot at IPI

There are no attendance fees, but you must register to attend (to ensure we have sufficient food and Muscadet for the lunch, sponsored by ROZO systems).

Nicolas Normand & Jean-Pierre Guédon, co-organizers, team IPI / Polytech

Séminaire IPI : « Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation »

The next IPI seminar will be held on Friday the 30th of November (3pm-4pm), in room D010 of the IRESTE building at Polytech Nantes. The talk will be given in English.

The speaker will be Dr. Jing Li, a postdoc in the IPI team.
Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation
In this talk, a hybrid active sampling strategy for pairwise preference aggregation is presented, which aims at recovering the underlying ratings of the test candidates from sparse and noisy pairwise labeling. The method employs a Bayesian optimization framework and the Bradley-Terry model to construct the utility function and obtain the Expected Information Gain (EIG) of each pair. For computational efficiency, Gauss-Hermite quadrature is used to estimate the EIG. The proposed hybrid strategy uses either Global Maximum (GM) EIG sampling or Minimum Spanning Tree (MST) sampling in each trial, depending on the test budget. The method has been validated on both simulated and real-world datasets, where it shows higher preference aggregation ability than state-of-the-art methods.
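To make the setting concrete, the Bradley-Terry model mentioned above can be fitted to a matrix of pairwise win counts with the classical MM (minorization-maximization) algorithm. This sketch illustrates only the underlying preference model, not the Hybrid-MST sampling strategy itself; the fixed iteration count is an arbitrary choice for the example.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate Bradley-Terry scores from a pairwise win-count matrix.
    wins[i, j] = number of times item i was preferred over item j.
    Returns log-scores centered at zero (only score differences matter).
    """
    n = wins.shape[0]
    w = wins.sum(axis=1)                   # total wins per item
    pi = np.ones(n)                        # item strengths, initialized uniformly
    for _ in range(n_iter):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    nij = wins[i, j] + wins[j, i]   # comparisons between i and j
                    if nij:
                        denom[i] += nij / (pi[i] + pi[j])
        pi = w / denom                     # MM update (Hunter 2004)
        pi /= pi.sum()                     # fix the arbitrary scale
    scores = np.log(pi)
    return scores - scores.mean()
```

Active sampling strategies such as Hybrid-MST decide which `wins[i, j]` entries to fill next so that these scores converge with as few pairwise judgments as possible.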

A. Coutrot at the Utopiales 2018: « Le corps dans tous ses états »

Invited by the community radio station Le labo des savoirs, Antoine Coutrot, a researcher in the IPI team, took part in a debate on « le corps dans tous ses états » (the body in all its states) at the Utopiales on 4 November 2018, alongside Héloïse Chochois (illustrator), Roland Lehoucq (astrophysicist) and Marc-André Selosse (professor at the Muséum).

He discussed the use of personal data to create playful, personalized and even therapeutic video experiences.

The programme is available as a podcast on the radio station's website [58 min].

PhD defense of Suiyi Ling (IPI team)

Suiyi Ling, a PhD student in the IPI team, will defend his thesis entitled « Perceptual representations of structural and geometric information in images: bio-inspired and machine learning approaches – Application to visual quality assessment of immersive media »

on Monday 29 October from 2pm, at Polytech, in amphitheater A1 of the IRESTE building.

Jury: Patrick Le Callet (thesis supervisor), Frédéric Dufaux (reviewer, L2S), Dragan Kukolj (reviewer, University of Novi Sad, Serbia), Luce Morin (INSA Rennes), Vincent Courboulay (Université La Rochelle), Nathalie Guyader (Université Grenoble Alpes)

This work aims to better evaluate the perceptual quality of images and videos that contain structural and geometric distortions, in the context of immersive multimedia. We propose and explore a hierarchical framework of visual perception for image/video. Inspired by the representation mechanisms of the visual system, low-level (elementary visual features, e.g. edges), mid-level (intermediate visual patterns, e.g. codebooks of edges), and higher-level (abstractions of the visual input, e.g. categories of distorted edges) image/video representations are investigated for quality assessment. The first part of this thesis addresses low-level structure- and texture-related representations. A bilateral filter-based model is first introduced to qualify the respective roles of structure and texture information in various assessment tasks (utility, quality, …). An image/video quality measure is proposed to quantify structural deformation spatially and temporally using a new elastic metric. The second part explores mid-level structure-related representations: a sketch-token based model and a context-tree based model are presented for image and video quality evaluation. The third part explores higher-level structure-related representations, with two machine learning approaches proposed to learn them: one based on convolutional sparse coding and one on generative adversarial networks. Throughout the thesis, experiments and user studies have been conducted on different databases for different applications where specific structure-related distortions are observed (FTV, multi-view rendering, omnidirectional imaging, …).
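The bilateral-filter-based structure/texture separation used in the first part can be illustrated with a brute-force sketch (plain NumPy, not the thesis code): the filtered image acts as the structure layer and the residual as the texture layer. The parameter values below are arbitrary example choices.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter on a 2D grayscale array.
    Weights combine spatial proximity (sigma_s) with intensity
    similarity (sigma_r), smoothing texture while preserving edges.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed spatial kernel
    pad = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))  # range kernel
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

# Structure/texture split: structure = bilateral_filter(img); texture = img - structure
```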



Best oral presentation award of JDOC 2018 goes to Julien Langlois!

The best oral presentation award at the Nantes doctoral-student day of the MathSTIC doctoral school (JDOC 2018) was given to Julien Langlois, a CIFRE PhD student working in the IPI team and at Multitude Technologies (WeDo group, Laval). His presentation was entitled, like his thesis, « Vision industrielle et réseaux de neurones profonds » (industrial vision and deep neural networks).

Séminaire IPI – Pr. Mai XU (Beihang University): « Embracing Intelligence in Video Compression »

The next IPI seminar will be held on Friday, the 14th of September (4pm-5pm), in room D010 at Polytech.
The speaker will be Prof. Mai Xu, an associate professor in the School of Electronic and Information Engineering, Beihang University.

Title of the seminar: « Embracing Intelligence in Video Compression »

Recently, along with the explosion of multimedia content, visual communications have become increasingly prominent in communication networks, affecting the daily lives of billions of citizens and millions of businesses around the world. The amount of data over networks is expected to grow almost 40-fold in the next five years. Given the limited spectrum, video applications have hit a bandwidth-hungry bottleneck. Pioneering research on delivering the content that humans actually perceive is relieving this bandwidth-hungry issue from the perspective of perceptual compression and coding, in which artificial intelligence (AI) techniques, such as computer vision and machine learning, have been actively studied.
In this talk, we mainly focus on perception-inspired video compression, which learns from human intelligence to remove perceptual redundancy from video data. Specifically, the talk first presents our work on data-driven saliency detection, which can be used to explore the perceptual redundancy of video. Based on saliency detection, we then discuss our approaches to perception-inspired video compression for dramatically removing redundancy, such that both bit-rate and complexity can be significantly reduced without any degradation in quality of experience (QoE). Finally, we briefly introduce our latest work on panoramic video (also called 360-degree video) compression, which improves rate-distortion performance by predicting the viewports of panoramic video.
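The core idea of saliency-driven perceptual compression can be sketched generically: spend bits where viewers look by lowering the quantization parameter (QP) in salient blocks and raising it elsewhere. The sketch below is a generic illustration of this bit-allocation principle, not the specific scheme presented in the talk; the linear offset mapping and its range are assumptions.

```python
import numpy as np

def saliency_qp_offsets(saliency, base_qp=32, max_offset=6):
    """Map a per-block saliency map to per-block QPs: salient blocks get a
    lower QP (finer quantization), non-salient blocks a higher one.
    """
    s = saliency.astype(float)
    span = s.max() - s.min()
    s = (s - s.min()) / (span + 1e-12)                       # normalize to [0, 1]
    offsets = np.round(max_offset * (1 - 2 * s)).astype(int) # in [-max_offset, +max_offset]
    return np.clip(base_qp + offsets, 0, 51)                 # H.264/HEVC QP range
```

In a real encoder these per-block QPs would feed the rate-control loop; the saliency map itself would come from a detector like the data-driven models discussed in the talk.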

Mai Xu is an associate professor in the School of Electronic and Information Engineering, Beihang University, and a senior member of IEEE. He received the B.S. degree from Beihang University in 2003, the M.S. degree from Tsinghua University in 2006, and the Ph.D. degree from Imperial College London in 2010. From 2010 to 2012, he was a research fellow with the Electrical Engineering Department, Tsinghua University. Since 2013, he has been with Beihang University as an associate professor. From 2013 to 2014, he was also a visiting researcher at Tsinghua University, and from 2014 to 2015 a visiting researcher at MSRA. He has served on the technical program committees (TPC) of many international conferences, e.g., ACM MM and ICCVE. He has published over 30 technical papers in international journals, including IEEE TPAMI, JSAC, TIP, J-STSP, TMM and TCSVT, and over 40 papers in conference proceedings, including CVPR, ICCV, ECCV, DCC and ACM MM. He received the young research award of IEEE ICCV in 2015, best paper awards at two IEEE conferences, and was a best paper award finalist at IEEE ICME.

Séminaire IPI chercheur invité – Pr. Dietmar Saupe (University of Konstanz) : « KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment »

The next IPI seminar will be held on Friday the 6th of July (2pm-3pm), in room D005 at Polytech.
The speaker is Prof. Dietmar Saupe, a full professor at the University of Konstanz, Germany.

Title of the seminar: « KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment »

Abstract: This talk is about building a large and diverse image quality database via crowdsourcing, and introducing a deep learning approach that can make the best use of it. The main challenge in applying state-of-the-art deep learning methods to predict image quality in the wild is the relatively small size of existing quality-scored datasets. The reason for the lack of larger datasets is the massive resources required to generate diverse and publishable content. We present a new systematic and scalable approach to create large-scale, authentic and diverse image datasets for Image Quality Assessment (IQA). We show how we built an IQA database, KonIQ-10k, consisting of 10,073 images, on which we performed very large-scale crowdsourcing experiments in order to obtain reliable quality ratings from 1,467 crowd workers (1.2 million ratings). We argue for its ecological validity by analyzing the diversity of the dataset, by comparing it to state-of-the-art IQA databases, and by checking the reliability of our user studies. Our novel BIQA method is based on deep learning with convolutional neural networks (CNN); it is trained on full, arbitrarily sized images rather than on small image patches or resized inputs, as usually done in CNNs for image classification and quality assessment. The resolution independence is achieved by pyramid pooling. This work is the first to apply a fine-tuned residual deep learning network (ResNet-101) to BIQA. In contrast to previous methods, we do not train to approximate the MOS directly, but rather use the distributions of scores. Experiments were carried out on three benchmark image quality databases. The results showed clear improvements in the accuracy of the estimated MOS values compared to current state-of-the-art algorithms.
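The resolution independence mentioned above rests on the generic pyramid-pooling idea: a feature map of any spatial size is pooled over fixed grids of regions, yielding a fixed-length vector. The sketch below illustrates that idea in plain NumPy (the pyramid levels are example values; the actual network pools learned CNN features, not raw arrays).

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Pool a C x H x W feature map into a fixed-length vector, independent
    of H and W, by average-pooling over 1x1, 2x2 and 4x4 grids of regions.
    """
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        # Split H and W into n (nearly) equal strips; average each grid cell.
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.mean(axis=(1, 2)))     # C values per cell
    return np.concatenate(pooled)   # length = C * sum(n*n for n in levels)
```

Because the output length depends only on C and the pyramid levels, a quality-regression head on top sees the same input size whatever the image resolution.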

Bio: Since 2002, Dietmar Saupe has been a full professor of Computer Science at the University of Konstanz, Germany, where he heads a research group focusing on multimedia signal processing, including applications in sports science. He previously held positions as professor or lecturer at the universities of Leipzig (1998-2002), Freiburg (1993-98) and Bremen (1987-93), all in Germany, and at the University of California, Santa Cruz, U.S.A. (1985-87). His academic degrees are a diploma, a doctorate and a habilitation, all in (applied) mathematics and from the University of Bremen.
Over the years, Dietmar Saupe's areas of interest have included numerical methods, dynamical systems, scientific visualization, computer graphics, image and video compression, medical image processing, computer vision, 3D models, and sports informatics. He has co-authored several award-winning books on fractals and chaos. Currently, his research group is engaged in two projects, one on image and video quality assessment and the other on modelling and optimizing performance in endurance sports. He is a member of the International Association of Computer Science in Sport and of the German professional associations for mathematics and computer science. Saupe currently advises 5 doctoral students and has previously advised 23 more, three of whom became professors in computer science or electrical engineering. At the University of Konstanz, he has been Chair of the Department of Computer Science and Vice Dean of the Faculty of Natural Sciences for four years.

Copyright : LS2N 2017 - Mentions Légales -