‘DribbleBot’: a four-legged robotic system that plays football

A team of researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), has created a legged robotic system that can dribble a football under conditions similar to those faced by humans.

‘DribbleBot’ incorporates both onboard sensing and computing to navigate various natural terrains, including sand, gravel, mud, and snow, and to adjust to their differing effects on the ball’s movement. Like a dedicated athlete, the bot can stand back up and recover the ball after a fall.

Researchers have been programming robots to play football for quite some time. This group, however, wanted the robot to learn its dribbling leg movements automatically, so that it could discover skills that are difficult to script by hand and that respond to varied terrains such as sand, gravel, snow, grass, and pavement. That’s where simulation comes into play.

The simulation is a digital twin of the natural world containing a terrain, a ball, and the robot. The researchers load in the bot and other assets, configure the physics parameters, and the simulation handles the forward dynamics from there. Four thousand versions of the robot are simulated in parallel in real time, collecting data 4,000 times faster than a single robot could.
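The payoff of that parallelism can be sketched in a few lines. The toy dynamics, state sizes, and function names below are invented for illustration and are not from the DribbleBot system; the point is simply that one batched step advances all 4,000 robot copies at once.

```python
import numpy as np

N_ENVS = 4000          # parallel robot copies, as in the article
STATE_DIM = 12         # placeholder state size (joint angles, ball position, ...)
ACTION_DIM = 4         # placeholder action size

rng = np.random.default_rng(0)
states = np.zeros((N_ENVS, STATE_DIM))

def step(states, actions):
    """Toy batched forward dynamics: one call advances all 4,000 robots."""
    return states + 0.01 * actions @ rng.standard_normal((ACTION_DIM, STATE_DIM))

transitions = []
for t in range(100):                       # 100 control steps
    actions = rng.standard_normal((N_ENVS, ACTION_DIM))
    next_states = step(states, actions)
    transitions.append((states, actions, next_states))
    states = next_states

# 100 steps x 4,000 environments = 400,000 transitions from one short loop
total = sum(s.shape[0] for s, _, _ in transitions)
print(total)  # 400000
```

A single real robot running the same 100 control steps would have produced only 100 transitions.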

Initially, the robot does not know how to dribble. It learns from positive and negative feedback: a reward for successful attempts and negative reinforcement for mistakes. In effect, it is working out the optimal sequence of leg movements to apply.

“One aspect of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behaviour,” says MIT PhD student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve designed that reward, then it’s practice time for the robot: In real time, it’s a couple of days, and in the simulator, hundreds of days. Over time it learns to get better and better at manipulating the soccer ball to match the desired velocity.”
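A reward of the kind Margolis describes might look something like the sketch below. The terms and weights here are invented for illustration, not taken from the DribbleBot paper; they only show the idea of rewarding the robot when the ball’s velocity matches a commanded velocity while penalising falls and wasted effort.

```python
import numpy as np

def dribble_reward(ball_vel, target_vel, fell, joint_torques,
                   w_track=1.0, w_fall=5.0, w_effort=0.001):
    """Hypothetical dribbling reward: track a desired ball velocity."""
    tracking_error = np.linalg.norm(ball_vel - target_vel)
    reward = w_track * np.exp(-tracking_error)   # 1.0 at a perfect match
    reward -= w_fall * float(fell)               # negative reinforcement for falls
    reward -= w_effort * np.sum(joint_torques ** 2)  # discourage wasted effort
    return reward

# Perfect tracking, no fall, zero torque: maximum reward of 1.0
r = dribble_reward(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                   fell=False, joint_torques=np.zeros(12))
print(r)  # 1.0
```

During training, a reward like this is computed at every simulation step, and the learning algorithm adjusts the policy to increase its long-run total.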

Thanks to a recovery controller integrated into its system, the bot is capable of manoeuvring through unfamiliar terrains and rebounding from falls. This unique controller enables the robot to quickly regain its footing and seamlessly transition back to its dribbling controller, allowing it to effectively address unexpected disruptions and terrains that lie beyond its typical range of operation.
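The hand-off between the two controllers can be pictured as a small state machine. The mode names and trigger signals below are invented placeholders, not the robot’s actual interface; they only illustrate the switch-and-resume behaviour the paragraph describes.

```python
DRIBBLE, RECOVER = "dribble", "recover"

def next_mode(mode, has_fallen, is_upright):
    """Toy two-mode controller switch: dribble normally, recover after a fall."""
    if mode == DRIBBLE and has_fallen:
        return RECOVER                      # fall detected: hand over to recovery
    if mode == RECOVER and is_upright:
        return DRIBBLE                      # back on its feet: resume dribbling
    return mode                             # otherwise, stay in the current mode

mode = DRIBBLE
mode = next_mode(mode, has_fallen=True, is_upright=False)   # fall -> recover
mode = next_mode(mode, has_fallen=False, is_upright=True)   # upright -> dribble
print(mode)  # dribble
```

Keeping recovery as a separate controller means the dribbling policy never has to learn how to stand up; it only has to pick up where it left off.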

“If you look around today, most robots are wheeled. But imagine that there’s a disaster scenario, flooding, or an earthquake, and we want robots to aid humans in the search-and-rescue process. We need the machines to go over terrains that aren’t flat, and wheeled robots can’t traverse those landscapes,” says Pulkit Agrawal, MIT professor, CSAIL Principal Investigator, and Director of Improbable AI Lab. “The whole point of studying legged robots is to go to terrains outside the reach of current robotic systems. Our goal in developing algorithms for legged robots is to provide autonomy in challenging and complex terrains that are currently beyond the reach of robotic systems.”

Compared to walking alone, ‘DribbleBot’ is more constrained in its movement and in the terrains it can navigate while dribbling a football. The robot must adjust its locomotion to apply force to the ball in order to maintain the dribble. And depending on the surface, whether thick grass or pavement, the ball may interact with the environment differently than the robot does.

For example, a football encounters a drag force on grass that is absent on pavement, and on an incline it experiences an accelerating force that pushes it off its nominal trajectory. The robot’s ability to traverse diverse terrains, by contrast, is relatively unaffected by these differences in dynamics, as long as it avoids slipping. The dribbling task therefore exposes variations in terrain that locomotion alone does not reveal.

“Past approaches simplify the dribbling problem, making a modelling assumption of flat, hard ground. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball simultaneously,” says Ji. “That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task which combines aspects of locomotion and dexterous manipulation together.”

The robot is equipped with sensors that enable it to perceive its surroundings, providing a sense of location and visual awareness. It also has actuators that allow it to apply force and manipulate objects. Serving as the intermediary between the sensors and actuators, the computer acts as the ‘brain’ of the robot, interpreting sensor data and executing actions through the motors.
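This sense-compute-act cycle can be sketched as a simple control loop. The sensor and actuator interfaces below are invented placeholders, not the robot’s real API; the structure is what matters.

```python
def control_loop(read_sensors, policy, send_commands, steps):
    """Toy sense -> compute -> act cycle, run at a fixed number of steps."""
    for _ in range(steps):
        observation = read_sensors()        # camera frames, joint encoders, ...
        action = policy(observation)        # the onboard 'brain' decides
        send_commands(action)               # motor commands out to the legs

# Toy wiring: a stand-in policy that doubles its observation.
log = []
control_loop(read_sensors=lambda: 0.5,
             policy=lambda obs: obs * 2,
             send_commands=log.append,
             steps=3)
print(log)  # [1.0, 1.0, 1.0]
```

On a real legged robot this loop runs at a fixed control frequency, with the policy standing between raw sensor data and motor torques.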

As the robot moves on snow, it lacks visual perception of the snowy terrain; however, its motor sensors enable it to sense the texture. Nonetheless, playing football is a more demanding task than walking. To cope with this challenge, the team incorporated cameras onto the robot’s head and body, amplifying its sensory capabilities through vision in addition to motor skills. Thus equipped, the robot can now dribble like a pro.

“Our robot can go in the wild because it carries all its sensors, cameras, and compute on board. That required some innovations in terms of getting the whole controller to fit onto this onboard compute,” says Margolis. “That’s one area where learning helps because we can run a lightweight neural network and train it to process noisy sensor data observed by the moving robot. This is in stark contrast with most robots today: Typically, a robot arm is mounted on a fixed base and sits on a workbench with a giant computer plugged right into it. Neither the computer nor the sensors are in the robotic arm! So, the whole thing is weighty, hard to move around.”
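A “lightweight neural network” of the sort Margolis mentions might be as small as the sketch below. The layer sizes and weights are invented for illustration; the actual DribbleBot policy architecture is not reproduced here. The point is that a small network mapping noisy observations to bounded motor commands can run comfortably on onboard compute.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, HIDDEN, ACT_DIM = 48, 64, 12    # invented sizes

# Randomly initialised weights stand in for a trained policy.
W1 = rng.standard_normal((OBS_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, ACT_DIM)) * 0.1

def policy(obs):
    """Tiny two-layer policy: noisy observation in, bounded commands out."""
    h = np.tanh(obs @ W1)       # one small hidden layer
    return np.tanh(h @ W2)      # tanh keeps motor commands bounded

noisy_obs = rng.standard_normal(OBS_DIM)  # stand-in for real sensor noise
action = policy(noisy_obs)
print(action.shape)  # (12,)
```

A network this size involves only a few thousand multiply-adds per control step, which is trivial for an embedded processor.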

Developing robots that can match the agility of their natural counterparts is still a work in progress. ‘DribbleBot’ faced difficulties traversing certain terrains due to this limitation. At present, the controller lacks training in simulated environments that incorporate slopes or stairs. Furthermore, the robot only estimates the material contact properties of the terrain, such as friction, and does not perceive its geometry. As a result, it can become stuck when faced with an obstacle such as a step-up, preventing it from lifting the ball over the obstacle. The team intends to investigate this area further in the future.

The researchers are enthusiastic about utilising the knowledge gained from creating ‘DribbleBot’ for other activities that require a combination of movement and object handling, such as rapidly transferring various items using either the arms or legs.

“DribbleBot is an impressive demonstration of the feasibility of such a system in a complex problem space that requires dynamic whole-body control,” says Vikash Kumar, a research scientist at Facebook AI Research who was not involved in the work. “What’s impressive about DribbleBot is that all sensorimotor skills are synthesised in real time on a low-cost system using onboard computational resources. While it exhibits remarkable agility and coordination, it’s merely ‘kick-off’ for the next era. Game-on!”

The research is supported by the DARPA Machine Common Sense Program, the MIT-IBM Watson AI Lab, the National Science Foundation Institute of Artificial Intelligence and Fundamental Interactions, the US Air Force Research Laboratory, and the US Air Force Artificial Intelligence Accelerator. A paper on the work will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).

Photo credit: Mike Grimmett/MIT CSAIL