Advances in edge computing and their impact on driver safety

Any discussion of driver safety should start with a brief analysis of why car crashes happen in the first place, and what measures currently exist to mitigate them. The number one cause of vehicle crashes is driver distraction, and the main source of distraction is mobile phone usage. When we think about driver safety, we generally think about events occurring outside the car. While these are important, if we really want to improve driver safety, we must focus technology both on monitoring events outside the car and on analysing the driver and their attention to the road inside it.

Existing approaches

With the exception of a handful of vehicles (Tesla, for example), cars on the road in 2023 are limited by their architecture. Where they have sensors, these are not connected to a central bus where all data can be processed, and they lack any significant edge compute power. Safety takes the form of point solutions such as lidar, used to estimate distance to other vehicles. These sensors operate independently of each other, and their data is not analysed by a central car “brain”. If the lidar sensor detects that a vehicle is close while the car attempts to execute a turn, an alert is sent to the driver. This is quite effective at reducing certain collision events.
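The standalone alert logic described above can be sketched in a few lines. This is an illustrative simplification, not any manufacturer's actual implementation: the function names and the clearance threshold are hypothetical, and the key point is that the rule looks at one sensor in isolation, with no central fusion of data.

```python
# Minimal sketch of today's standalone proximity-alert rule: a single lidar
# reading is checked in isolation, not fused with other sensor data by a
# central "brain". All names and thresholds here are illustrative.

SAFE_GAP_M = 2.0  # hypothetical minimum side clearance, in metres

def should_alert(lidar_distance_m: float, turn_requested: bool) -> bool:
    """Alert the driver if a turn is attempted while another vehicle is close."""
    return turn_requested and lidar_distance_m < SAFE_GAP_M

print(should_alert(1.2, True))   # close vehicle and turning: alert
print(should_alert(1.2, False))  # close vehicle but driving straight: no alert
```

Because each sensor runs its own rule like this, the car can warn about the specific event the sensor was installed for, but nothing else.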

The vehicle of the future

From 2025/26, cars will evolve very significantly. Companies like Nvidia are designing new vehicle architectures that connect multiple sensors (cameras plus lidar) to a central processing unit with enormous power. All sensor data will be analysed in real time, and the car will be able to alert the driver to numerous risks that currently go unmonitored: distraction, fatigue, intoxication and more. The enabling technology is computer vision. Computer vision, like other “AI” technologies, is undergoing an exponential improvement in its utility, and running state-of-the-art computer vision models requires the latest chips and software frameworks. At Insurevision.ai we are using computer vision to monitor driver safety for vehicle fleets. Using the latest Nvidia Orin chip, which powers our state-of-the-art AI dashcam, we can estimate the full range of driver attention events and intelligently alert the driver to potential dangers, moderating behaviour, preventing crashes and improving safety.
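The shift described above is architectural: instead of one rule per sensor, every feed flows into a central processor that classifies driver state. The sketch below illustrates that shape only; it is not InsureVision's or Nvidia's actual system, and every class name, label and threshold is hypothetical.

```python
# Illustrative sketch of the centralised architecture: multiple sensor feeds
# are fused in one place, a driver-state classifier runs over them, and alerts
# combine in-cab and road context. All names and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    eyes_on_road: bool      # output of a hypothetical in-cab camera model
    blink_rate_hz: float    # fatigue proxy from the same model
    obstacle_dist_m: float  # from forward lidar/camera

def classify_driver_state(frame: SensorFrame) -> str:
    """Map fused sensor data to a coarse driver-state label."""
    if not frame.eyes_on_road:
        return "distracted"
    if frame.blink_rate_hz > 0.5:  # illustrative fatigue threshold
        return "fatigued"
    return "attentive"

def alert(frame: SensorFrame) -> Optional[str]:
    """Raise an alert only when driver state and road context combine badly."""
    state = classify_driver_state(frame)
    if state != "attentive" and frame.obstacle_dist_m < 30.0:
        return f"ALERT: driver {state}, obstacle at {frame.obstacle_dist_m:.0f} m"
    return None

print(alert(SensorFrame(eyes_on_road=False, blink_rate_hz=0.2, obstacle_dist_m=12.0)))
```

The design point is that the alert depends on a combination of signals (driver attention and road context) that no single standalone sensor could evaluate on its own.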

In summary

Advances in chipsets, vehicle architectures and AI computer vision frameworks will allow many more driver events to be monitored. The focus will shift from measuring simple distance and speed events to more complex analysis of attention and focus. Over time, this will significantly improve road safety and reduce accidents.

Mark Miller is a technology entrepreneur and founder of InsureVision. He previously founded and sold Dictate IT.