My work focuses on developing interactive technologies, leveraging advancements in display technology, low-power sensing, wearables, robotics and actuation, soft electronics, interactive textiles, and human-computer interaction. I am specifically interested in techniques that enable new interactions for the augmentation and empowerment of human abilities. This includes augmented reality, ubiquitous computing, mobile devices, 3D user interfaces, interaction techniques, interfaces for accessibility and health, medical imaging, multimodal systems, and software/hardware prototyping.

These projects are from collaborations during my time at different research labs, including Google Research, MIT, Columbia University, University of California, KTH (Royal Institute of Technology), and Microsoft Research. I have taught at Stanford University, Rhode Island School of Design, and KTH.

Alex Olwal, Ph.D.
Sr Research Scientist, Google

Wearable Hearing Accessibility
Wearable Subtitles facilitates communication for deaf and hard-of-hearing individuals by displaying real-time speech-to-text in the user's line of sight on a proof-of-concept eyewear display. Its hybrid, low-power wireless architecture is designed for all-day use, with up to 15 hours of continuous operation.
Wearable Subtitles: Augmenting Spoken Communication with Lightweight Eyewear for All-day Captioning
Olwal, A., Balke, K., Votintcev, D., Starner, T., Conn, P., Chinh, B., and Corda, B.
Proceedings of UIST 2020 (ACM Symposium on User Interface Software and Technology), Virtual Event, Oct 20-23, 2020, pp. 1108-1120.

UIST 2020 - Best Demo Honorable Mention Award
PDF [16MB]