
Intelligent Driving Assistant — Building the Interaction Framework for Tencent’s Digital Cockpit
Background
This project reimagines the digital cockpit experience under Tencent’s Intelligent Driving initiative. As the boundary between software and vehicle dissolves, in-car interaction demands a unified framework that integrates safety, information, and immersion. The work explores the future paradigm of intelligent mobility, creating a scalable interaction system that spans the instrument cluster (IC), central information display (CID), and head-up display (HUD).
What I Led
- Conducted competitive benchmarking and industry research to identify experience gaps across global intelligent cockpit solutions.
- Defined the product strategy and interaction framework for a three-screen ecosystem, ensuring a seamless flow across information hierarchy, driver focus, and visual continuity.
- Developed the system architecture and motion logic, mapping key driver scenarios — from lane-level navigation to vehicle-state visualization — into coherent user journeys.
- Created visual and motion prototypes in Cinema 4D, After Effects, and Blender, collaborating with the visual design team to establish the lane-level HD map style and adaptive 3D motion principles.
- Contributed design concepts that served as creative-direction references for the final production system.
Outcomes & Impact
- Delivered a scalable interaction model that became the foundation of Tencent’s intelligent cockpit UX framework.
- Enabled the mass-production release of Tencent’s lane-level HD map, setting new standards for visual precision and spatial feedback.
- Elevated the UX discipline from interface definition to system-level strategy and cross-domain collaboration.
- Demonstrated how product strategy, design logic, and visual storytelling can converge to define the future of human–machine experience.
This project builds a holistic interaction system that bridges information, motion, and emotion, empowering the digital cockpit to evolve from function delivery to intelligent experience orchestration.

