I direct research and development of interaction technologies based on advances in display technology, low-power and high-speed sensing, wearables, actuation, electronic textiles, and human-computer interaction. I am passionate about accelerating innovation and disruption through tools, techniques, and devices that augment and empower human abilities. My research interests include augmented reality, ubiquitous computing, mobile devices, 3D user interfaces, interaction techniques, interfaces for accessibility and health, medical imaging, and software/hardware prototyping.

These projects come from collaborations during my time at research labs including Google Research, MIT, Columbia University, University of California, KTH (Royal Institute of Technology), and Microsoft Research. I have taught at Stanford University, Rhode Island School of Design, and KTH.

Alex Olwal, Ph.D.
Staff Research Scientist, Google
olwal [at] acm.org

Analyzing Gaze and Gestures
MAVEN interprets user intention in AR/VR by fusing speech, gesture, viewpoint, pointing direction, and SenseShapes statistics, improving recognition through multimodal disambiguation (see the sketch after the publications below).
SenseShapes: Using Statistical Geometry for Object Selection in a Multimodal Augmented Reality System
Olwal, A., Benko, H., and Feiner, S.
Proceedings of ISMAR 2003 (IEEE and ACM International Symposium on Mixed and Augmented Reality), Tokyo, Japan, Oct 7-10, 2003, pp. 300-301.

MAVEN: Mutual Disambiguation of 3D Multimodal Interaction in Augmented and Virtual Reality
Kaiser, E., Olwal, A., McGee, D., Benko, H., Corradini, A., Li, X., Feiner, S., and Cohen, P.
Proceedings of ICMI 2003 (International Conference on Multimodal Interfaces), Vancouver, BC, Nov 5-7, 2003, pp. 12-19.

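The fusion idea can be illustrated with a minimal sketch: each modality assigns confidence scores to candidate objects, and a weighted combination resolves references that any single modality would leave ambiguous. The names, weights, and scores below are illustrative assumptions, not the published MAVEN/SenseShapes implementation, which derives its per-object statistics from gaze and pointing volumes over time.

```python
# Hypothetical sketch of mutual disambiguation across modalities.
# All identifiers, weights, and scores are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ModalityScores:
    name: str                 # e.g. "speech", "gaze", "pointing"
    weight: float             # assumed relative reliability of this modality
    scores: dict[str, float]  # candidate object id -> score in [0, 1]


def fuse(modalities: list[ModalityScores]) -> dict[str, float]:
    """Combine per-modality scores with a weighted sum and normalize."""
    fused: dict[str, float] = {}
    for m in modalities:
        for obj, s in m.scores.items():
            fused[obj] = fused.get(obj, 0.0) + m.weight * s
    total = sum(fused.values()) or 1.0
    return {obj: s / total for obj, s in fused.items()}


if __name__ == "__main__":
    # Gaze slightly favors one object and pointing is ambiguous;
    # speech ("the red one") breaks the tie: mutual disambiguation.
    speech = ModalityScores("speech", 0.5, {"red_cup": 0.9, "blue_cup": 0.1})
    gaze = ModalityScores("gaze", 0.3, {"red_cup": 0.4, "blue_cup": 0.6})
    pointing = ModalityScores("pointing", 0.2, {"red_cup": 0.5, "blue_cup": 0.5})

    ranked = sorted(fuse([speech, gaze, pointing]).items(),
                    key=lambda kv: kv[1], reverse=True)
    print(ranked)  # highest-scoring candidate is the resolved referent
```

In this toy example a weighted sum stands in for the fusion step; the papers above describe richer statistics (such as how long and how centrally an object lies within gaze and pointing volumes) feeding the disambiguation.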