I design and develop interactions and technologies that embrace digital and physical experiences. I am specifically interested in tools, techniques and devices that enable new interaction concepts for the augmentation and empowerment of human abilities. This includes 3D user interfaces, interaction techniques, augmented reality, mixed reality, virtual reality, ubiquitous computing, mobile devices, novel interfaces for medical imaging, multimodal systems, touch-screen interaction, and software/hardware prototyping.

These research projects come from exciting times and inspiring collaborations at different research labs and institutions, including MIT, Columbia University, University of California, KTH (Royal Institute of Technology), and Microsoft Research. I teach at Stanford University and have previously taught at Rhode Island School of Design and KTH.

Research Projects » Publications » Google Scholar »
Alex Olwal, Ph.D.
Sr Research Scientist, Google
olwal [at] acm.org

Analyzing Gaze and Gestures
MAVEN interprets user intention in AR/VR by fusing speech, gesture, viewpoint, pointing direction, and SenseShapes statistics to improve recognition through multimodal disambiguation.
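To make the fusion idea concrete, here is a minimal, hypothetical Python sketch of score-level multimodal disambiguation. It illustrates the general principle only, not the actual MAVEN or SenseShapes implementation; all modality names, object names, and weights are invented.

# Illustrative sketch (assumptions only; not the paper's algorithm):
# each modality independently scores candidate objects, and a weighted
# combination lets confident modalities compensate for ambiguous ones.

from collections import defaultdict

def fuse_modalities(modality_scores, weights=None):
    """Combine per-modality candidate scores into one ranking.

    modality_scores: dict mapping modality name -> {object_id: score in [0, 1]}
    weights: optional dict mapping modality name -> relative weight
    Returns a list of (object_id, fused_score), best candidate first.
    """
    weights = weights or {m: 1.0 for m in modality_scores}
    fused = defaultdict(float)
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 1.0)
        for obj, score in scores.items():
            fused[obj] += w * score
    total = sum(weights.values())
    return sorted(((obj, s / total) for obj, s in fused.items()),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical example: speech alone cannot choose between two lamps,
# but gaze and pointing statistics favor "lamp_2".
ranking = fuse_modalities({
    "speech":   {"lamp_1": 0.5, "lamp_2": 0.5},
    "gaze":     {"lamp_1": 0.2, "lamp_2": 0.8},
    "pointing": {"lamp_1": 0.3, "lamp_2": 0.7},
})
print(ranking)  # [('lamp_2', 0.666...), ('lamp_1', 0.333...)]

In this toy example the ambiguous speech scores are resolved by the other modalities, which is the essence of mutual disambiguation.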
SenseShapes: Using Statistical Geometry for Object Selection in a Multimodal Augmented Reality System
Olwal, A., Benko, H., and Feiner, S.
Proceedings of ISMAR 2003 (IEEE and ACM International Symposium on Mixed and Augmented Reality), Tokyo, Japan, Oct 7-10, 2003, pp. 300-301.

ISMAR 2003
PDF
MAVEN: Mutual Disambiguation of 3D Multimodal Interaction in Augmented and Virtual Reality
Kaiser, E., Olwal, A., McGee, D., Benko, H., Corradini, A., Li, X., Feiner, S., and Cohen, P.
Proceedings of ICMI 2003 (International Conference on Multimodal Interfaces), Vancouver, BC, Nov 5-7, 2003, pp. 12-19.

ICMI 2003
PDF