Article
Computer Science, Information Systems
Shu-Yu Chen, Yu-Kun Lai, Shihong Xia, Paul L. Rosin, Lin Gao
Summary: With the rapid development of VR technology, the need for bidirectional communication in immersive VR has arisen. This paper introduces a real-time system that captures and reconstructs the 3D faces of users wearing HMDs and effectively recovers their eye gaze. It also proposes a novel method for mapping eye gaze directions into the 3D virtual world, offering a new interactive mode in VR. The effectiveness of the system is demonstrated through comparison with state-of-the-art techniques and live capture.
IEEE TRANSACTIONS ON MULTIMEDIA
(2023)
Article
Chemistry, Analytical
Moayad Mokatren, Tsvi Kuflik, Ilan Shimshoni
Summary: In this paper, a framework for 3D gaze estimation using a corneal imaging system is presented. The framework uses a headset with three cameras: a scene camera, an IR camera, and an RGB camera. Deep learning algorithms detect the pupil in the IR and RGB images, producing a real-time 3D model of the eye. By calibrating the eye tracker transparently, the framework achieves low 3D gaze error and high accuracy in acquiring corneal images. The framework shows promising results in realistic settings and can be used in various mobile scenarios.
Review
Neurosciences
Laura Carelli, Federica Solca, Sofia Tagini, Silvia Torre, Federico Verde, Nicola Ticozzi, Roberta Ferrucci, Gabriella Pravettoni, Edoardo Nicolo Aiello, Vincenzo Silani, Barbara Poletti
Summary: Eye-tracking techniques have the potential to enhance cognitive interventions by providing feedback. This systematic review found that gaze-contingent training using eye tracking was effective in improving cognitive and emotional deficits. However, the heterogeneity of the studied populations limits the generalizability of the results. The integration of eye-tracking protocols with traditional cognitive approaches shows promise.
Article
Computer Science, Information Systems
Taiki Fukiage, Shin'ya Nishida
Summary: This research proposes a new gaze-contingent display technique that addresses the conflict between eye vergence and accommodation by shifting the disparity distribution around the gaze position. It also uses Hidden Stereo to synthesize clear 2D content. With this technology, viewers wearing glasses can enjoy natural 3D content while other viewers without glasses can see clear 2D images.
Article
Chemistry, Analytical
Suparat Yeamkuan, Kosin Chamnongthai
Summary: This study introduces a method for determining 3D points of intention using multimodal fusion of hand pointing and eye gaze, and experimental results demonstrate the accuracy of the method in measuring 3D points of intention at different distances.
Article
Computer Science, Artificial Intelligence
Jie-chun Chen, Pin-qing Yu, Chun-ying Yao, Li-ping Zhao, Yu-yang Qiao
Summary: In this paper, a video-based eye detector that adopts a coarse-to-fine strategy is proposed. The detector consists of three classifiers, and a method for coarse pupil localization is also proposed. Experimental results show that the eye detector achieves a high detection rate and fast localization.
EXPERT SYSTEMS WITH APPLICATIONS
(2024)
Article
Computer Science, Interdisciplinary Applications
Bert Hartfiel, Rainer Stark
Summary: The development of new car interior concepts requires tools that enable subjective experiences and evaluations. Virtual reality technologies are generally suitable for such evaluations, though state-of-the-art driving simulators may be an exception. In a validation study comparing primary driving tasks in two HMD-based simulators with test runs in a real car, participants showed more valid behavior in the dynamic simulator condition than in the static one.
Article
Psychology, Mathematical
Scott A. Stone, Quinn A. Boser, T. Riley Dawson, Albert H. Vette, Jacqueline S. Hebert, Patrick M. Pilarski, Craig S. Chapman
Summary: Assessing gaze behavior during real-world tasks is difficult, but combining eye tracking with motion capture can generate high-quality 3D gaze vectors.
BEHAVIOR RESEARCH METHODS
(2022)
Article
Chemistry, Multidisciplinary
Dongwoo Kang, Hyun Sung Chang
Summary: This study presents a low-complexity pupil-tracking method for drivers with and without sunglasses, whose algorithmic performance makes it suitable for implementation in AR 3D HUD systems.
APPLIED SCIENCES-BASEL
(2021)
Article
Multidisciplinary Sciences
Zihan Yan, Yue Wu, Yifei Shan, Wenqian Chen, Xiangdong Li
Summary: In this research, the ARGaze dataset was constructed to enable the design of a calibration-free eye-tracking device. The dataset includes eye gaze images and augmented reality scene videos, and models trained on it achieved low gaze estimation error, indicating its value for generalizing the use of eye tracking.
Article
Computer Science, Information Systems
Zhenqi Fu, Feng Shao, Qiuping Jiang, Xiangchao Meng, Yo-Sung Ho
Summary: The study aims to propose a quality assessment method for binocular stereoscopic image retargeting, establishing a database with 720 stereoscopic images and introducing an objective metric based on grid deformation and information loss. Experimental results demonstrate the superiority of the proposed metric over existing methods in measuring the quality of stereoscopic retargeted images.
IEEE TRANSACTIONS ON MULTIMEDIA
(2021)
Review
Psychology, Multidisciplinary
Ralf Kredel, Julia Hernandez, Ernst-Joachim Hossner, Stephan Zahno
Summary: This short review provides an update on the progress of eye-tracking technology and natural gaze behavior in sports from 2016 to 2022. Through a systematic review of 31 studies, it was found that there is increased interest in the gaze behavior of officials, but there are still limitations in terms of sample sizes, number of trials, eye-tracking technology, and analysis procedures. However, the review also highlights the potential of automated gaze-cue-allocations (GCA) in mobile eye-tracking studies to enhance objectivity and reduce manual workload.
FRONTIERS IN PSYCHOLOGY
(2023)
Article
Computer Science, Software Engineering
Zhiyong Wang, Jinxiang Chai, Shihong Xia
Summary: This paper proposes a real-time and accurate method for 3D eye gaze tracking using deep convolutional neural networks, which combines training efficiency with tracking accuracy under various conditions, advancing the state of the art in single-RGB-camera eye tracking.
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
(2021)
Article
Computer Science, Cybernetics
Lida Huang, Thomas Westin, Mirjam Palosaari Eladhar, Sindri Magnusson, Hao Chen
Summary: EyeCompass is a system that enables high-fidelity and efficient free-eye drawing through unimodal gaze control. By addressing the challenges of gaze-controlled drawing, it provides value to people with motor disabilities. The system uses brush damping dynamics and a gaze-oriented drawing method to improve drawing performance, as demonstrated in user experiments and a field study.
INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
(2023)
Article
Psychiatry
GyeongSu Jeon, Hyeon-Seok Choi, Do-Un Jung, Sunghyuk Moon, Gwanwoo Kim, Sung-Jin Kim, Jung-Joon Moon, Yeon-Sue Kim, Dong-Wook Jeon
Summary: This study investigated the correlation between gaze avoidance and psychopathology in patients with schizophrenia through eye movement measurements in real-life interpersonal situations. The results showed that psychopathology, particularly positive symptoms, was associated with gaze avoidance and pupil diameter in these patients.
ASIAN JOURNAL OF PSYCHIATRY
(2022)
Article
Computer Science, Software Engineering
Peter Vangorp, Karol Myszkowski, Erich W. Graf, Rafal K. Mantiuk
ACM TRANSACTIONS ON GRAPHICS
(2015)
Article
Computer Science, Software Engineering
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Elmar Eisemann, Hans-Peter Seidel
COMPUTER GRAPHICS FORUM
(2015)
Article
Computer Science, Software Engineering
Krzysztof Templin, Piotr Didyk, Karol Myszkowski, Hans-Peter Seidel
ACM TRANSACTIONS ON GRAPHICS
(2016)
Editorial Material
Computer Science, Software Engineering
Guillaume Lavoue, Hantao Liu, Karol Myszkowski, Weisi Lin
IEEE COMPUTER GRAPHICS AND APPLICATIONS
(2016)
Article
Engineering, Electrical & Electronic
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Hans-Peter Seidel
JOURNAL OF ELECTRONIC IMAGING
(2016)
Article
Computer Science, Software Engineering
Thomas Leimkuehler, Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Hans-Peter Seidel
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
(2018)
Article
Computer Science, Software Engineering
C. Wang, B. Chen, HP. Seidel, K. Myszkowski, A. Serrano
Summary: This paper introduces a learning-based self-supervised tone mapping operator that can be trained for each HDR image without any data labeling. The key novelty of this approach is the carefully designed loss function based on contrast perception, which allows for direct comparison of content in HDR and tone mapped images. Extensive studies and exploration of parameters demonstrate that this solution outperforms existing approaches in both objective and subjective metrics.
COMPUTER GRAPHICS FORUM
(2022)
Article
Computer Science, Software Engineering
Vassillen Chizhov, Iliyan Georgiev, Karol Myszkowski, Gurprit Singh
Summary: This paper proposes a perception-oriented framework to optimize rendering errors by distributing the error as visually pleasing blue noise in image space. It leverages models based on human perception to improve image fidelity and shows substantial improvements over prior state of the art.
ACM TRANSACTIONS ON GRAPHICS
(2022)
Article
Computer Science, Software Engineering
Krzysztof Wolski, Fangcheng Zhong, Karol Myszkowski, Rafal K. Mantiuk
Summary: This paper introduces a model and contrast enhancement algorithm that can improve the accuracy of binocular depth cues under low brightness, allowing high-quality stereoscopic images to be displayed even at very low luminance.
ACM TRANSACTIONS ON GRAPHICS
(2022)
Article
Computer Science, Software Engineering
Luca Surace, Marek Wernikowski, Cara Tursun, Karol Myszkowski, Radoslaw Mantiuk, Piotr Didyk
Summary: This research aims to guide the training of foveated image reconstruction using generative adversarial networks (GANs) to reconstruct visually important image features. By conducting psychophysical experiments, building a dataset, and devising a training procedure, a strategy is proposed to penalize only perceptually important output deviations, resulting in improved image reconstruction quality.
ACM TRANSACTIONS ON APPLIED PERCEPTION
(2023)
Article
Computer Science, Software Engineering
U. Cogalan, M. Bemana, H. P. Seidel, K. Myszkowski
Summary: This study explores the advantages of using dual-exposure sensors to accurately handle HDR scenes with complex motion. By spatially registering sharp short and blurry long exposures and aligning their ends temporally, motion blur provides continuous information on scene motion that, combined with the reference, enables more precise motion sampling. The proposed neural network outperforms existing solutions in video frame interpolation (VFI) tasks and HDR frame reconstruction.
COMPUTER GRAPHICS FORUM
(2023)
Article
Computer Science, Software Engineering
Tobias Rittig, Denis Sumin, Vahid Babaei, Piotr Didyk, Alexey Voloboy, Alexander Wilkie, Bernd Bickel, Karol Myszkowski, Tim Weyrich, Jaroslav Krivanek
Summary: With the rise of full-color 3D printers, the need for color-accurate 3D-print preparation has increased, but the translucency of traditional printing materials poses a challenge in maintaining color texture details. A data-driven approach significantly speeds up the optimization process, achieving results similar in quality to Monte Carlo rendering in a fraction of the time.
COMPUTER GRAPHICS FORUM
(2021)
Proceedings Paper
Computer Science, Artificial Intelligence
Petr Kellnhofer, Thomas Leimkuehler, Tobias Ritschel, Karol Myszkowski, Hans-Peter Seidel
SAP 2015: ACM SIGGRAPH SYMPOSIUM ON APPLIED PERCEPTION
(2016)
Proceedings Paper
Computer Science, Artificial Intelligence
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Hans-Peter Seidel
HUMAN VISION AND ELECTRONIC IMAGING XX
(2015)