Dichoptic Text Highlighting for Focus Support in VR

This project explores dichoptic cues (i.e., rendering different images to each eye) as a tool to help readers maintain focus during reading tasks in virtual reality. You will implement a real-time text highlighting system in VR that subtly manipulates binocular visual input—e.g., contrast, stereo disparity, or ocular singleton techniques—to draw attention to relevant lines or words without overt visual cues.
Importantly, the system should support two modes of control:
- A gaze-contingent mode for gaze-enabled VR headsets (e.g., using eye tracking to highlight where the user is expected to focus)
- A manual control mode for standard VR setups (e.g., highlighting controlled via keyboard or controller input)
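To make the idea concrete, a common way to realize dichoptic presentation in Unity is to render the scene with two cameras (one per eye via `stereoTargetEye`) and place the highlight overlay on a dedicated layer that is culled from one eye's camera, so the cue is seen monocularly. The sketch below is a minimal illustration of this pattern with the manual control mode, not a prescribed implementation; the layer name, object names, and the gaze-mode placeholder are assumptions, and the actual eye-tracking API will depend on the headset vendor's SDK.

```csharp
using UnityEngine;

// Minimal sketch of a dichoptic highlight controller (illustrative names).
// Assumes two cameras with stereoTargetEye set to Left and Right, and a
// dedicated layer ("EyeOnlyHighlight") holding the highlight overlay quad.
// Culling that layer from one eye makes the highlight a monocular cue.
public class DichopticHighlighter : MonoBehaviour
{
    public Camera leftEyeCamera;      // stereoTargetEye = Left
    public Camera rightEyeCamera;     // stereoTargetEye = Right
    public Renderer highlightOverlay; // quad placed behind the target line/word

    int highlightLayer;

    void Start()
    {
        highlightLayer = LayerMask.NameToLayer("EyeOnlyHighlight");
        highlightOverlay.gameObject.layer = highlightLayer;

        // Show the overlay to the left eye only:
        // remove its layer from the right camera's culling mask.
        rightEyeCamera.cullingMask &= ~(1 << highlightLayer);
        leftEyeCamera.cullingMask |= 1 << highlightLayer;
    }

    void Update()
    {
        // Manual control mode: advance the highlight with keyboard/controller input.
        if (Input.GetKeyDown(KeyCode.DownArrow))
            MoveHighlightToNextLine();

        // Gaze-contingent mode would instead query the headset's eye-tracking
        // SDK (vendor-specific API) and position the overlay at the gazed line.
    }

    void MoveHighlightToNextLine()
    {
        // Placeholder: reposition the overlay over the next line of text;
        // the offset would come from the reading interface's text layout.
        highlightOverlay.transform.Translate(0f, -0.05f, 0f, Space.Self);
    }
}
```

The same culling-mask mechanism can carry other dichoptic manipulations (e.g., a per-eye contrast or disparity variant of the overlay material) without changing the control logic.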
In addition, you will conduct a literature review of existing attention control and reading support strategies (e.g., moving windows, preview cues, perceptual enhancements), and implement one or more additional methods based on your findings for comparison.
For further information, please contact virmarie.maquiling(at)tum.de.
Tasks
- Conduct a structured literature review on attention-guiding strategies for reading in VR and traditional media
- Design and implement a VR reading interface in Unity
- Implement a dichoptic text highlighting tool with both gaze-contingent and manual modes
- Implement one or more additional attention-guidance strategies based on the literature
- Conduct pilot tests (e.g., usability or preliminary focus behavior with eye tracking or subjective feedback)
Requirements
- Familiarity with Unity and C#
- Interest in HCI, VR, cognitive/visual perception, and eye tracking
- Prior coursework or experience with eye tracking or human factors is a plus
Deliverables
- Working VR prototype with at least two implemented attention-guidance strategies
- Written report including literature review, implementation details, and pilot evaluation