|Augmented Language: Translating Speech in Everyday Glasses
| Augmented Language was featured in the Google I/O 2022 Keynote. Live translation of speech in everyday glasses
has the potential to make language more universally accessible and understandable.
"Let's see what happens when we take our advances in translation and transcription, and deliver them in your line-of-sight in one of the early prototypes that we have been testing." (Sundar Pichai, CEO Google)
|Electronic Textile: Making Soft Materials Interactive with Sensors and Displays
| Hidden Interfaces create extremely bright displays that can appear and disappear in wood, textile, plastic and mirrored surfaces. User interfaces can therefore blend into natural materials and environments without any compromise to their design or aesthetics.
E-Textile Microinteractions and I/O Braid make textiles interactive. We sense the user's proximity, touch and twist, and detect gestures, such as flicks, slides, pinches, grabs and pats, using machine learning and capacitive touch sensing. Fiber optics provide embedded light feedback.
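As a rough illustration of gesture detection from a capacitive signal (not the published pipeline, which uses learned models over a braided sensor matrix), one can reduce a capacitance trace to a few hand-crafted features and match it against gesture templates. The sensor layout, feature set, and gesture templates below are illustrative assumptions.

```python
# Minimal sketch: nearest-centroid gesture classification on a 1-D
# capacitance time series. Features and templates are assumptions,
# not the E-Textile Microinteractions implementation.
import numpy as np

def features(signal):
    """Summarize a trace: mean level, range, and overall slope sign."""
    return np.array([signal.mean(),
                     signal.max() - signal.min(),
                     np.sign(signal[-1] - signal[0])])

t = np.linspace(0, 1, 50)
# Hypothetical templates: "flick" is a brief spike, "slide" a steady
# ramp, "grab" a sustained high reading.
templates = {
    "flick": features(np.exp(-((t - 0.5) ** 2) / 0.002)),
    "slide": features(t),
    "grab":  features(np.full_like(t, 0.9)),
}

def classify(signal):
    f = features(signal)
    return min(templates, key=lambda g: np.linalg.norm(f - templates[g]))

noisy_ramp = t + 0.02 * np.random.default_rng(0).standard_normal(50)
print(classify(noisy_ramp))  # prints "slide"
```

A real deployment would replace the hand-crafted features with a trained classifier over multiple capacitive channels, but the structure, featurize then match, is the same.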
|Ubiquitous Sensing in Everyday Objects and Devices
| Haptics with Input introduces passive and active sensing for the Linear Resonant Actuator (LRA), which is widely used in wearable and mobile devices. We demonstrate new touch and pressure sensing, and how mobile devices can sense which surfaces they are placed on.
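To illustrate the idea of passive sensing with an actuator (a sketch of the concept, not the paper's method): after a drive pulse, an LRA rings down freely, and a touching finger or a contacting surface adds mechanical damping, so the oscillation decays faster. Estimating the decay rate therefore yields a simple touch signal. The signal model, sample rate, and threshold below are assumptions.

```python
# Illustrative sketch: detect touch from how quickly an LRA's
# ring-down oscillation decays. Parameters are assumptions.
import numpy as np

def ring_down(rate, f0=175.0, dt=1e-4, n=400):
    """Synthetic ring-down: damped sine at resonance f0 (Hz)."""
    t = np.arange(n) * dt
    return np.exp(-rate * t) * np.sin(2 * np.pi * f0 * t)

def decay_rate(trace, dt=1e-4):
    """Estimate the exponential decay rate from the ring-down envelope."""
    env = np.abs(trace)
    # Local maxima of |x(t)| track the envelope exp(-rate * t).
    peaks = [i for i in range(1, len(env) - 1)
             if env[i] >= env[i - 1] and env[i] >= env[i + 1]]
    t = np.array(peaks) * dt
    slope, _ = np.polyfit(t, np.log(env[peaks]), 1)
    return -slope

free    = decay_rate(ring_down(rate=30.0))   # actuator ringing in air
touched = decay_rate(ring_down(rate=120.0))  # finger adds damping
print(touched > 2 * free)  # prints True: faster decay => touch
```

The same ring-down signature also shifts with the surface a device rests on, which is the intuition behind surface identification in this line of work.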
Google AI Blog: Haptics with Input ->
|Shape Displays: Spatial Interaction with Dynamic Physical Form
| Physical Telepresence uses shape capture and display to physically enhance interactions with remote people and environments.
inFORM dynamically changes material and form to continuously adapt the physical and virtual interface to user interactions.
Sublimate explores rapid and fluid transitions between physical and visual representations of dynamic digital content.
Jamming User Interfaces enable programmable stiffness, haptic feedback and deformation, for new types of flexible and shape-changing interactions.