
Attention-Guided Rendering Algorithms

Project partners: MPI, CG-TÜ, CG-KN, VIS-KN, HCI-KN

The immense amount of information that can be displayed makes it necessary to measure the user's focus of attention effectively and to guide it deliberately, again and again, so that important items are not lost from view. This must be done with suitable perception-based methods that adapt the presentation to the user and to the situation at hand. At the MPI, corresponding methods for high-resolution displays are therefore being developed, building on existing research on this topic.

Spatial resolution of the retina of a healthy human eye (left) and an example of a rendering adapted to it (right).

Methods for mobile eye tracking were developed. This requires combining eye-tracking devices with additional body-tracking devices and inferring the direction of gaze from the combined measurements. The methods have been made available to the other project partners (and to the public) in the LibGaze library. In a series of psychophysical experiments, user interactions with these devices were investigated.

Long-term objectives

(1) Technical goal: create a mobile system for accurate, real-time gaze tracking of observers as they move around and freely interact with wall-sized high-resolution displays. (2) Basic research goal: perform psychophysical experiments to develop models of natural human head and eye movements during interaction with the display. (3) Combine (1) and (2) to enable gaze-contingent display modifications.

State of the art

Mobile gaze-tracking. Several commercial eye-tracking systems exist (e.g. Eyelink, Arlington, LC Technologies), but none can track gaze in 3D as the user moves their head or body. Recent research provides basic algorithms for combining a head-mounted eye tracker with a body motion-capture system to calculate the observer's gaze (Johnson et al., 2007; Ronsse et al., 2007). However, these solutions were tied to specific hardware setups and the software is not openly available. Furthermore, the existing systems have not been optimized for use with large display screens.

Gaze-contingent displays (GCDs). Most previous work on GCDs focuses on exploiting the limited resolution of human peripheral vision by adapting rendering quality to the current gaze of the observer (for reviews, see Parkhurst & Niebur, 2002; Duchowski & Çöltekin, 2007). Typical systems vary image resolution, colour (e.g. grayscale instead of three channels), or mesh resolution, aiming to reduce computational load without any visible degradation. Most existing GCDs use small visual displays and a fixed head position. With large screens, however, updating the virtual camera in real time to match the viewer's position allows very compelling 3D visualizations. Gaze-contingent changes to abstract data visualizations remain rare.
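
As a rough illustration of this rendering principle (not the project's actual implementation), the following Python sketch assigns a level of detail to each screen tile from its angular eccentricity relative to the current gaze point; the tile geometry, thresholds, and falloff are hypothetical.

import math

# Hypothetical eccentricity thresholds (degrees) at which rendering quality
# is reduced; real gaze-contingent displays tune these to display and viewer.
LOD_THRESHOLDS_DEG = [2.0, 8.0, 20.0]  # full, half, quarter, then coarsest resolution

def eccentricity_deg(gaze_xy, tile_center_xy, viewing_distance):
    """Angular distance (degrees) between gaze point and tile centre, with all
    positions given in the same physical units on the screen plane."""
    dx = tile_center_xy[0] - gaze_xy[0]
    dy = tile_center_xy[1] - gaze_xy[1]
    return math.degrees(math.atan2(math.hypot(dx, dy), viewing_distance))

def lod_for_tile(gaze_xy, tile_center_xy, viewing_distance):
    """Return 0 (full resolution) up to 3 (coarsest) depending on eccentricity."""
    ecc = eccentricity_deg(gaze_xy, tile_center_xy, viewing_distance)
    for level, threshold in enumerate(LOD_THRESHOLDS_DEG):
        if ecc <= threshold:
            return level
    return len(LOD_THRESHOLDS_DEG)

# Example: a viewer 2 m from the wall gazing at (0, 0); a tile 1 m to the
# right lies at roughly 27 degrees eccentricity and gets the coarsest level.
print(lod_for_tile((0.0, 0.0), (1.0, 0.0), 2.0))  # -> 3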

Eye and head movements. Head movements are important for gaze control. When the head is unrestrained, head movements are initiated for eye movements larger than 10° of visual angle, even when the target is within the oculomotor range (Stahl, 1999). Given that head movements are effortful, they are likely made to bring regions of interest within a preferred, narrower oculomotor range. A recent study has shown that head movements are often planned to orient gaze towards regions where observers expect to find current and future task-relevant information (Oomen, Smith & Stahl, 2004). The statistics of head movements in natural gaze behavior can therefore provide insight into what the user considers informative and might even be used to predict where the user's gaze will be directed next.

Theoretical models that seek to predict gaze behavior typically focus on image statistics and how they might influence where people look. For example, gaze fixations can be drawn to image regions of high luminance contrast and edge density (for a recent example, see Tatler, Baddeley & Gilchrist, 2005). Unfortunately, the predictive value of current, primarily image-based models of gaze behavior tends to be rather weak (correlation of about 0.45; Parkhurst et al., 2002) and short-lived (Itti, 2005). Furthermore, task demands often override gaze behavior that is driven by image statistics (Einhäuser, 2008).
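
To make the idea concrete, the following Python/NumPy sketch computes a very simple low-level "salience" map from local luminance contrast and edge density; it is an illustrative stand-in for the class of image-statistics models discussed above, not any specific published model.

import numpy as np

def saliency_map(image, block=16):
    """Crude low-level salience estimate: per-block luminance contrast
    (standard deviation of intensities) plus mean gradient magnitude
    (edge density). `image` is a 2-D array of luminance values; the
    result contains one value per block, normalized to [0, 1]."""
    h, w = image.shape
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy)
    rows, cols = h // block, w // block
    sal = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r*block:(r+1)*block, c*block:(c+1)*block]
            epatch = edges[r*block:(r+1)*block, c*block:(c+1)*block]
            sal[r, c] = patch.std() + epatch.mean()
    return sal / sal.max() if sal.max() > 0 else sal

Correlating such a map with measured fixation densities typically yields only the modest agreement noted above.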

The figure below provides a clear example of how fixation patterns differ, even for the same individual, depending on task requirements. In addition, individual differences such as prior expertise and motor preferences can influence gaze behavior (Rayner, Li, Williams, Cave & Well, 2007; Fuller, 1992). Hence, task requirements and individual differences should be considered when designing a usable model capable of predicting gaze behavior across a range of well-specified tasks; some work along these lines already exists.

A well-known example of how the spatial distribution of gaze fixations is task-dependent. From left to right: (i) the image used by Yarbus (1967), for which observers' eye movements were recorded while (ii) viewing the image freely and (iii) estimating the ages of the depicted people.

Status of our own work

(1) Mobile Gaze Tracking. The technical component of the project is now largely completed. We have developed a working system that combines off-the-shelf head-mounted eye trackers with an optical tracking system to track gaze continuously. The eye tracker is connected to a small battery-powered PC that the user can easily carry in a backpack; this PC communicates with the framework over a wireless LAN. The core of the system is a software framework (libGaze), developed in C by Sebastian Herholz, which manages the tracking systems and makes their raw data easily accessible. Using an interactive calibration method, libGaze applies a coordinate transformation in real time to map the raw data from each tracking system to the viewer's position and gaze direction in 3D. Figure 6.2 shows the system in use.
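
The underlying geometry can be sketched as follows (a minimal illustration of the principle, not the libGaze code itself): the motion-capture system supplies the head pose in room coordinates, the eye tracker supplies a gaze direction in head coordinates, and the two are combined into a world-space gaze ray. All names and the simple rigid-body model are assumptions for illustration.

import numpy as np

def gaze_ray_world(head_rotation, head_position, eye_offset_head, gaze_dir_head):
    """Combine the head pose (from motion capture) with the eye tracker's
    head-relative gaze direction into a gaze ray in room coordinates.

    head_rotation   : 3x3 numpy rotation matrix, head -> room
    head_position   : 3-vector, head origin in room coordinates
    eye_offset_head : 3-vector, eye position relative to the head origin
    gaze_dir_head   : 3-vector, unit gaze direction in head coordinates
    """
    origin = head_position + head_rotation @ eye_offset_head
    direction = head_rotation @ gaze_dir_head
    return origin, direction / np.linalg.norm(direction)

The interactive calibration step can be thought of as estimating the fixed transformations relating the eye-tracker and motion-capture coordinate frames.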

Due to its modular design, the framework is not bound to a specific hardware setup. The current system uses an "Eyelink2" eye tracker and "Vicon" motion capture, but we have also tested it with a "Chronos" eye tracker and, at Konstanz, with an "A.R.T." motion-capture system. LibGaze runs under Linux, Mac OS and Windows XP. Additionally, APIs for different programming languages, including Python (pyGaze) and Java (JGaze), were developed to make it easy for our project collaborators to use libGaze in their current workflows. A number of other research labs in Germany have already expressed an interest in using the system. To make the libGaze framework widely accessible, we have released it as an open-source project, available from www.sourceforge.net/projects/libgaze.

We have tested the system with two display walls (at MPI and Konstanz) and are currently adapting it to work with non-planar screens as well. We have completed psychophysical experiments to evaluate the accuracy and stability of the system, finding that gaze can be reliably tracked with an error of less than 1° of visual angle. The latency of the system is below 10 ms, enabling real-time applications. A complete description of the system and a report of the psychophysical evaluation are currently being written up for publication.
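
For a planar display wall, the on-screen gaze point follows from intersecting the world-space gaze ray with the screen plane; the sketch below illustrates this (variable names are hypothetical, and the inputs are assumed to be NumPy 3-vectors).

import numpy as np

def gaze_point_on_wall(origin, direction, wall_point, wall_normal):
    """Intersect the world-space gaze ray with the plane of the display wall.
    Returns the 3-D intersection point, or None if the viewer is looking
    away from the wall or parallel to it."""
    denom = np.dot(direction, wall_normal)
    if abs(denom) < 1e-9:
        return None  # gaze parallel to the wall plane
    t = np.dot(wall_point - origin, wall_normal) / denom
    return origin + t * direction if t > 0 else None

Comparing such intersection points against known calibration targets, expressed in degrees of visual angle, is one way to arrive at accuracy figures like the one quoted above.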

(2) Psychophysical Experiments. We are now working on the experimental component of the project. We have created a large database of high-resolution, semantically labelled natural scene images (ca. 1500 images) for use as stimuli in the experiments. These make it possible to relate gaze behaviour to the positions of well-defined objects and thus to develop high-level gaze prediction models that are related to the tasks given to participants, e.g. visual search for visible animals. Thomas Tanner has also completed experiments on the role of eye movements in an attentionally demanding multiple object tracking (MOT) task on a standard desktop screen, which we will subsequently adapt to the large display. We are currently evaluating a model for predicting optimal gaze locations for these data. MOT is valuable for investigating how observers resolve the conflict between moving to a currently optimal gaze position and losing track of the scene due to blindness during the gaze shift.
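
One simple baseline for an "optimal" gaze position in MOT, offered here only as an illustration and not as the model under evaluation, is to fixate the centroid of the tracked targets and monitor the largest resulting eccentricity:

import numpy as np

def centroid_gaze(targets_xy, viewing_distance):
    """Centroid heuristic for multiple object tracking: fixate the mean
    position of the tracked targets and report the largest resulting
    eccentricity in degrees. Positions and viewing distance share the
    same physical units; purely illustrative."""
    targets = np.asarray(targets_xy, dtype=float)
    gaze = targets.mean(axis=0)
    dists = np.linalg.norm(targets - gaze, axis=1)
    max_ecc = np.degrees(np.arctan2(dists.max(), viewing_distance))
    return gaze, max_ecc

A predictive model would additionally weigh such candidate gaze positions against the cost of the gaze shift itself, since the observer is effectively blind while the eyes are moving.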

The combined eye and head tracker, and the system in use with the Konstanz Powerwall. The user is viewing a gaze-contingent multi-resolution display demonstration. High-resolution graphics are rendered only for the image region that the user is gazing at, yet the reduced resolution in the periphery goes unnoticed by the user.

Publications

[in preparation] Herholz, S., Tanner, T., Fleming, R.W. and H. H. Bülthoff. LibGaze: A combined eye- and head-tracking system for freely-moving observers. To be submitted to Journal of Neuroscience Methods.

Other research outputs (patents/talks/exhibitions, etc.)

[Software]: LibGaze is available for download from SourceForge (www.sourceforge.net/projects/libgaze)
8-11-2007 [Talk]: Talk at VMV as part of the BW-FIT colloquium
31-09-2007 [Abstract]: Tanner, T. G., L. H. Canto-Pereira and H. H. Bülthoff. Free vs constrained gaze in a multiple-object-tracking paradigm. Perception 36 (ECVP 2007 Abstract Supplement)
26-09-2007 [Abstract]: Canto-Pereira, L. H., T. G. Tanner, S. Herholz, R. W. Fleming and H. H. Bülthoff. Integrated real-time eye, head, and body tracking in front of a wall-sized display. Perception 36 (ECVP 2007 Abstract Supplement)
20-08-2007 [Abstract]: Herholz, S., T. G. Tanner, L. H. Canto-Pereira, R. W. Fleming and H. H. Bülthoff: Real-time gaze-tracking for freely-moving observers. 14th European Conference on Eye Movements (ECEM2007), Potsdam, Germany

Other activities within the project consortium

We have collaborated extensively with the partners at Konstanz. Frequent exchanges have allowed us to test our system with alternative optical trackers, screens and rendering environments. Additionally, we have tested the applicability of the system in two human-computer interaction scenarios. In one scenario (in collaboration with Werner König), we integrated data from the user's current gaze position with the Konstanz laser-based pointing device to help identify which of two nearby objects a user wishes to select. The laser-pointer data resolves ambiguity when gaze estimation is imprecise, allowing smoother, task-directed gaze-based interaction, as shown in Figure 6.3 (left panel).
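
A minimal sketch of this kind of fusion, with hypothetical names and weights: each candidate object is scored by a weighted combination of its distance to the estimated gaze point and to the laser-pointer position, and the best-scoring object is selected.

import math

def select_target(candidates, gaze_xy, pointer_xy, w_gaze=0.4, w_pointer=0.6):
    """Pick the candidate object whose position best agrees with both the
    (imprecise) gaze estimate and the laser-pointer position.
    `candidates` maps object ids to (x, y) screen positions; the weights
    would in practice reflect the relative reliability of the two inputs."""
    def cost(pos):
        return w_gaze * math.dist(pos, gaze_xy) + w_pointer * math.dist(pos, pointer_xy)
    return min(candidates, key=lambda obj_id: cost(candidates[obj_id]))

# Example: two nearby targets; the pointer breaks the tie left by coarse gaze data.
print(select_target({"a": (10, 10), "b": (30, 12)}, gaze_xy=(25, 10), pointer_xy=(29, 11)))  # -> "b"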

In the second scenario (in collaboration with Joachim Bieg) we have developed a system for navigating a collection of photographs primarily by gaze. The user can zoom and move images presented on the Powerwall by looking at them (gaze), selecting them (mouse button), and physically moving (tracked by the gaze system). The system is implemented in Java and uses OpenGL + Chromium for the distributed rendering. To combine the system with the libGaze framework, the Java API JGaze was developed by Sebastian Herholz and Joachim Bieg. An example is shown in Figure 6.3 (right panel).
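
The zoom interaction can be sketched as scaling the view about the current gaze point so that the content being looked at stays fixed on screen; the function below is a minimal, hypothetical version of that idea (the real system is implemented in Java/OpenGL).

def zoom_about_gaze(view_offset, view_scale, gaze_screen_xy, zoom_factor):
    """Zoom the photo-collection view by `zoom_factor` while keeping the
    content under the gaze point stationary on screen. The view maps content
    coordinates c to screen coordinates s = c * view_scale + view_offset.
    Illustrative only."""
    ox, oy = view_offset
    gx, gy = gaze_screen_xy
    new_scale = view_scale * zoom_factor
    # Keep the content point currently under the gaze fixed on screen.
    new_offset = (gx - (gx - ox) * zoom_factor,
                  gy - (gy - oy) * zoom_factor)
    return new_offset, new_scale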

[left:] The user is selecting between three yellow targets by sight. The estimate of gaze position is shown by the green ring; the laser-based pointer is shown by the red dot and the mouse cursor, which can disambiguate the desired object even without pointing directly at the target. Thus inaccurate pointing and inaccurate gaze tracking can be combined to yield accurate target selection. [right:] Still from a video sequence demonstrating a user interacting with images by gaze.

References

Barth, E., Dorr, M., Böhme, M., Gegenfurtner, K., & Martinetz, T., 2006, In Bernice E Rogowitz, Thrasyvoulos N Pappas, and Scott J Daly, (Eds.), Human Vision and Electronic Imaging, Proceedings of SPIE, 6057, Eye Movements, Visual Search, and Attention: a Tribute to Larry Stark

Duchowski, A.T., & Çöltekin, A., 2007, Foveated gaze-contingent displays for peripheral LOD management, 3D visualization, & stereo imaging, ACM Transactions on Multimedia Computing, Communications, and Applications, 3, 1-21.

Fuller, J.H., 1992, Head movement propensity, Experimental Brain Research, 92, 152-164

Itti, L., 2005, Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes, Visual Cognition, 12, 1093-1123.

Johnson, J.S., Liu, L., Thomas, G., Spencer, J.P., 2007, Calibration algorithm for eyetracking with unrestricted head movement, Behavior Research Methods, 39, 123-132

Oomen, B.S., Smith, R.M., & Stahl, J.S., 2004, The influence of future gaze orientation upon eye-head coupling during saccades, Experimental Brain Research, 155, 9-18

Parkhurst, D.J., & Niebur, E., 2002, Variable-resolution displays: A theoretical, practical and behavioral evaluation, Human Factors, 44, 611-629

Rayner, K., Li, X., Williams, C.C., Cave, K.R., & Well, A.D., 2007, Eye movements during information processing tasks: Individual differences and cultural effects, Vision Research, 47, 2714-2726

Ronsse, R., White, O., & Lefèvre, P., 2007, Computation of gaze orientation under unrestrained head movements, Journal of Neuroscience Methods, 159, 158-169.

Stahl, J.S., 1999, Amplitude of human head movements associated with horizontal saccades, Experimental Brain Research, 126, 41-54

Tatler, B.W., Baddeley, R.J., & Gilchrist, I.D., 2005, Visual correlates of fixation selection: Effects of scale and time, Vision Research, 45, 643-659.

Yarbus, A.L., 1967, Eye movements and vision, New York: Plenum Press.

Coordination:

Prof. Dr. Heinrich Bülthoff (MPI)
Max-Planck Institut für Biologische Kybernetik
Spemannstraße 38, 72076 Tübingen
Tel: 07071/601-601
Fax: 07071/601-616
heinrich.buelthoff@tuebingen.mpg.de
http://www.kyb.mpg.de