2024
Personal Project
People rely on cars to move safely from one point to another. The driving experience needs to be simple and understandable for everyone.
The Problem: There is a lack of consistency in design, usability & interaction across the automotive landscape. Misleading UI, unintuitive affordances, and a lack of clarity compromise driver safety.
** An estimated 324,819 people were injured in distracted driving crashes in 2023.
I studied the way people drive and learned about the common pain points they experience, looking for opportunities to improve driver safety.
Today, phone-pairing in-vehicle systems such as Google’s Android Auto are at the leading edge of delivering safe, hands-free access to necessary apps and features while driving. But I wondered: are Android Auto and Android Automotive services limited by being confined to a screen?
The Concept: Google Movement Operating System, a new Google Automotive product that integrates contextual intelligence and multimodal interactions to maximize driver safety in any vehicle.
The system includes spatial elements such as light, sound, and haptic feedback; core components such as the Home Page, Windows, Flex Bar, and Context Control; and features such as Navigation, Media, HVAC, and Controls.
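To make that structure concrete, here is a minimal sketch of the three layers expressed as data. It assumes a Kotlin-style model; every name below simply mirrors the vocabulary of this concept and is not part of any real Google API.

```kotlin
// Hypothetical sketch: one way to model the three layers of the concept.
// The names mirror the vocabulary of this case study, not a real API.

enum class SpatialElement { LIGHT, SOUND, HAPTIC }

enum class CoreComponent { HOME_PAGE, WINDOWS, FLEX_BAR, CONTEXT_CONTROL }

enum class Feature { NAVIGATION, MEDIA, HVAC, CONTROLS }

// The system ties the layers together: features are presented through
// core components and reinforced by spatial feedback.
data class MovementSystem(
    val spatialElements: Set<SpatialElement>,
    val coreComponents: Set<CoreComponent>,
    val features: Set<Feature>,
)

fun main() {
    val system = MovementSystem(
        spatialElements = SpatialElement.values().toSet(),
        coreComponents = CoreComponent.values().toSet(),
        features = Feature.values().toSet(),
    )
    println(system)
}
```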
The system can be configured for multiple vehicle segments. SUVs & compact cars get the most minimal configuration: moderately sized displays and essential physical controls. Trucks get large displays for cameras and maps, plus an additional switch bank for towing and drive modes. Luxury vehicles get a slim, wide display for an elevated feel, with remote display control to enhance reclining during autonomous driving.
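A rough sketch of how these segment configurations could be captured as data follows; the display sizes, switch counts, and field names are illustrative assumptions, not specifications.

```kotlin
// Hypothetical sketch of the per-segment configurations described above.
// All values are illustrative.

enum class Segment { SUV_COMPACT, TRUCK, LUXURY }

data class VehicleConfig(
    val segment: Segment,
    val displayInches: Double,          // primary display size
    val physicalSwitchBanks: Int,       // banks of physical switches
    val remoteDisplayControl: Boolean,  // control the display while reclined
)

// One possible mapping of the three segments to configurations.
val configs = mapOf(
    Segment.SUV_COMPACT to VehicleConfig(Segment.SUV_COMPACT, 10.0, 1, false),
    Segment.TRUCK       to VehicleConfig(Segment.TRUCK,       14.0, 2, false),
    Segment.LUXURY      to VehicleConfig(Segment.LUXURY,      12.0, 1, true),
)
```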
Light, sound, and haptic feedback enhance driver awareness. For example, the visual cue of a glowing light in the steering wheel confirms interactions with the voice assistant and shows system states, such as autopilot mode, in the driver’s peripheral vision.
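One way to sketch that mapping between system states and the steering-wheel light cue is shown below; the states, colors, and pulsing behavior are assumptions for illustration only.

```kotlin
// Hypothetical sketch: mapping system states to the steering-wheel light cue.
// States and colors are illustrative, not part of any real specification.

enum class SystemState { IDLE, LISTENING, AUTOPILOT_ENGAGED, AUTOPILOT_WARNING }

data class LightCue(val color: String, val pulsing: Boolean)

fun cueFor(state: SystemState): LightCue = when (state) {
    SystemState.IDLE              -> LightCue(color = "off",    pulsing = false)
    SystemState.LISTENING         -> LightCue(color = "blue",   pulsing = true)   // voice assistant active
    SystemState.AUTOPILOT_ENGAGED -> LightCue(color = "green",  pulsing = false)  // steady glow in peripheral vision
    SystemState.AUTOPILOT_WARNING -> LightCue(color = "orange", pulsing = true)   // demands attention
}
```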
The interface layout blends familiarity with contextual features that surface what the driver needs, when they need it. Considered physical controls augment the interface and are placed intuitively, near the relevant digital information.
The Flex Bar and Context Control use data from the driver’s profile to anticipate their needs and surface relevant controls and features.
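A simple sketch of the kind of ranking this implies, assuming a hypothetical scoring function that blends usage frequency from the driver’s profile with live context signals; the weights and signal names are invented for illustration.

```kotlin
// Hypothetical sketch of how the Flex Bar might rank controls to surface,
// combining driver-profile frequency with the current driving context.

data class DrivingContext(val outsideTempC: Int, val isNavigating: Boolean)

data class Control(val name: String, val usageCount: Int)

fun surfaceControls(
    profileControls: List<Control>,
    context: DrivingContext,
    slots: Int = 3,
): List<Control> {
    return profileControls
        .sortedByDescending { control ->
            var score = control.usageCount.toDouble()
            // Boost climate controls in extreme outside temperatures.
            if (control.name == "Climate" && (context.outsideTempC < 5 || context.outsideTempC > 30)) score += 100
            // Boost navigation shortcuts while a route is active.
            if (control.name == "Navigation" && context.isNavigating) score += 50
            score
        }
        .take(slots) // the Flex Bar has a limited number of slots
}
```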
Multi Controls put the driver’s most frequently used controls at their fingertips. Without looking away from the road or moving their hands, the driver can control vehicle autopilot features, media, volume, calls, and the voice assistant via simple D-pad presses.
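A brief sketch of how those D-pad presses could map to actions follows; the button names and bindings are assumed, not a documented layout.

```kotlin
// Hypothetical sketch: mapping steering-wheel D-pad presses to the
// frequently used actions listed above. Bindings are assumptions.

enum class DPad { UP, DOWN, LEFT, RIGHT, CENTER, CENTER_LONG }

enum class Action {
    TOGGLE_AUTOPILOT, VOLUME_UP, VOLUME_DOWN,
    NEXT_TRACK, ANSWER_OR_END_CALL, VOICE_ASSISTANT,
}

val multiControlBindings = mapOf(
    DPad.UP          to Action.VOLUME_UP,
    DPad.DOWN        to Action.VOLUME_DOWN,
    DPad.LEFT        to Action.ANSWER_OR_END_CALL,
    DPad.RIGHT       to Action.NEXT_TRACK,
    DPad.CENTER      to Action.TOGGLE_AUTOPILOT,
    DPad.CENTER_LONG to Action.VOICE_ASSISTANT,
)

fun handlePress(press: DPad): Action? = multiControlBindings[press]
```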
Context Controls reliably surface relevant vehicle controls, such as climate. The virtual controls sit in proximity to multifunction physical switches that make these core features quickly accessible while the driver focuses on the road.