AI-powered social digital twin technology with traffic data from Pittsburgh

Fujitsu Limited and Carnegie Mellon University have announced the creation of an innovative technology designed to visualise traffic scenarios, including both pedestrians and vehicles.

This development is part of their collaborative research on Social Digital Twin technology, initiated in February 2022. Utilising artificial intelligence, the technology converts a 2D image captured by a single-lens RGB camera into a digitised 3D format. It accurately estimates the 3D shapes and positions of people and objects, allowing for precise visualisation of dynamic 3D scenes. From 22nd February to 31st May 2024, Fujitsu and Carnegie Mellon University will undertake field trials in Pittsburgh, USA, using data from urban intersections to assess the technology’s practicality.

This system employs AI trained through deep learning to identify the shapes of people and objects. It features two primary technologies: 3D Occupancy Estimation Technology, which deduces the 3D space occupied by objects using only a single-lens RGB camera, and 3D Projection Technology, which precisely situates each object within 3D scene models. These advancements enable the dynamic reconstruction of densely populated scenes, such as intersections, in 3D virtual space, providing a valuable tool for advanced traffic analysis and the prevention of potential accidents. For privacy preservation, faces and license plates are anonymised.
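The announcement does not specify how this anonymisation is carried out. As a minimal, purely illustrative sketch, assuming an off-the-shelf OpenCV Haar-cascade face detector rather than the project’s actual pipeline, face regions in each frame could be blurred before any further processing:

```python
# Illustrative anonymisation step: blur detected faces in a video frame.
# The detector and blur settings are assumptions, not the Fujitsu/CMU method.
import cv2

def anonymise_faces(frame):
    """Blur detected face regions in a BGR frame before downstream analysis."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame
```

Licence plates could be handled in the same way, with a plate detector substituted for the face cascade.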

Looking ahead, Fujitsu and Carnegie Mellon University plan to commercialise this technology by FY 2025, exploring its use not only in transportation but also in areas such as smart cities and traffic safety to broaden its application.

Background

The joint research effort between Fujitsu and Carnegie Mellon University, initiated in February 2022, focuses on Social Digital Twin technology, which dynamically replicates complex interactions within urban environments in 3D. The research identified limitations in existing video analysis methods for dynamic 3D reconstruction from video footage; the newly developed technology overcomes these challenges, enabling dynamic 3D scene modelling from images captured by a single stationary monocular RGB camera, without the need for simultaneous multi-camera imaging.

Technology development overview

3D Occupancy Estimation Technology utilises deep learning to convert images of a city captured from various angles into a voxel-based 3D representation, classifying objects such as buildings and people for comprehensive scene analysis.
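The network behind this estimation has not been published. The sketch below is only a minimal PyTorch illustration of the general idea, mapping a single RGB image to a grid of per-voxel class logits; the grid size, class list, and layer choices are assumptions rather than the actual Fujitsu/CMU design:

```python
# Minimal sketch of single-camera 3D occupancy estimation (illustrative only).
import torch
import torch.nn as nn

class OccupancyEstimator(nn.Module):
    """Map one RGB image to a voxel grid of per-class occupancy logits."""

    def __init__(self, classes=("empty", "building", "vehicle", "person"),
                 grid=(32, 32, 8)):
        super().__init__()
        self.classes, self.grid = classes, grid
        self.encoder = nn.Sequential(          # simple convolutional image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, len(classes) * grid[0] * grid[1] * grid[2])

    def forward(self, image):                  # image: (B, 3, H, W)
        features = self.encoder(image).flatten(1)
        logits = self.head(features)
        # (B, num_classes, X, Y, Z): one class distribution per voxel
        return logits.view(-1, len(self.classes), *self.grid)

# Example: classify each voxel of a 32x32x8 grid from a single 640x480 frame.
model = OccupancyEstimator()
voxel_classes = model(torch.rand(1, 3, 480, 640)).argmax(dim=1)  # (1, 32, 32, 8)
```

In practice, occupancy models of this kind are trained on annotated scenes so that each voxel’s predicted class reflects whether it is empty or occupied by a building, vehicle, or person.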

3D Projection Technology builds on the output of the 3D Occupancy Estimation to create a 3D digital twin, applying human behaviour analysis to map movements realistically within 3D spaces and maintaining accuracy even when objects are partially obscured.
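For intuition on the projection step, the sketch below places a detected pedestrian’s foot point into world coordinates, assuming a calibrated stationary camera and a flat ground plane; the intrinsics and camera pose shown are placeholders rather than the trial’s actual calibration:

```python
# Illustrative 3D projection: back-project an image point onto the ground plane.
import numpy as np

def ground_point_from_pixel(u, v, K, R, t):
    """Back-project pixel (u, v), assumed to lie on the ground plane z = 0,
    into world coordinates using intrinsics K and camera pose (R, t),
    where the camera maps world points as x_cam = R @ X_world + t."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    ray_world = R.T @ ray_cam                             # same ray in world frame
    cam_centre = -R.T @ t                                 # camera centre in world frame
    s = -cam_centre[2] / ray_world[2]                     # scale so that z = 0
    return cam_centre + s * ray_world                     # 3D point on the ground

# Placeholder calibration: 1000 px focal length, camera 6 m above a z-up ground
# plane, looking horizontally along the world y-axis.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
t = np.array([0.0, 6.0, 0.0])
print(ground_point_from_pixel(640.0, 500.0, K, R, t))  # e.g. a pedestrian's foot point
```

A full pipeline would combine such geometric placement with the voxel occupancy output and tracking over time, so that partially obscured people and vehicles remain correctly positioned in the digital twin.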

Field trials

Scheduled from 22nd February to 31st May 2024 in Pittsburgh, Pennsylvania, USA, the field trials aim to replicate urban scenes around Carnegie Mellon University in a Social Digital Twin. The trials will evaluate the technology’s effectiveness by analysing crowd and traffic conditions, identifying potential accident spots like blind corners and temporary crowd build-ups, and exploring prevention strategies.

Prof. László A. Jeni, Assistant Research Professor at Carnegie Mellon University, said: “This achievement is the result of collaborative research between Fujitsu’s team, Prof. Sean Qian, Prof. Srinivasa Narasimhan, and my team at CMU. I am delighted to announce it. CMU will continue to advance research on cutting-edge technologies through this collaboration in the future.”

Daiki Masumoto, Fellow and Head of the Converging Technologies Laboratory of Fujitsu Research, Fujitsu Limited, said: “Our purpose is to make the world more sustainable by building trust in society through innovation. The Social Digital Twin technology we are developing aims to address a wide range of societal issues, aligning with this mission. I am thrilled to announce this milestone achieved in collaboration with CMU, marking a significant step towards our goal.”