The video below shows the perspective of a NAO as it plays robot soccer. The NAO localizes using known landmarks on the field (the goal posts and field lines) and keeps track of its odometry as it walks. The camera's field of view is very narrow compared to human vision, so each camera image captures only a small portion of the environment. We label the image pixels with known colors (orange for the ball, green for the field, etc.) and perform object recognition on the color-segmented image.
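The color-segmentation step can be sketched as follows. This is a minimal illustration, not the actual CMurfs vision pipeline: the `classify_pixel` thresholds are made-up placeholder values (a real system would use a calibrated color table), and `find_ball` reduces object recognition to finding the centroid of orange pixels.

```python
def classify_pixel(r, g, b):
    """Map an RGB pixel to a symbolic color label.

    The thresholds below are illustrative only; real systems
    calibrate a color table for the venue's lighting.
    """
    if r > 200 and 80 < g < 180 and b < 100:
        return "orange"   # ball
    if g > 120 and r < 100 and b < 100:
        return "green"    # field
    return "unknown"

def find_ball(image):
    """Return the centroid (x, y) of orange pixels, or None if too few.

    `image` is a list of rows, each row a list of (r, g, b) tuples.
    """
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if classify_pixel(r, g, b) == "orange":
                xs.append(x)
                ys.append(y)
    if len(xs) < 3:   # too few pixels to plausibly be the ball
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In practice the classification is done with a precomputed lookup table over the camera's native color space so it runs at frame rate, and blob-forming plus sanity checks (size, shape, position relative to the field) replace the bare centroid used here.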
In order to play a game of robot soccer, the NAOs on the team communicate constantly. They share information such as their global position on the field, their current role, and the ball's position. All of this information is stored in a shared world model. The video below shows two NAOs communicating and sharing the ball's position. In this way, a NAO can orient its head towards the ball's position even when its own view of the ball is blocked by an obstacle. In a robot soccer game, this allows a teammate robot to find the ball more quickly.
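The sharing described above can be sketched as a simple fused world model. This is an illustrative sketch, not the actual CMurfs data structures: the message fields (`pose`, `role`, `ball`, `ball_confidence`) and the confidence-weighted averaging are assumptions standing in for whatever fusion the real codebase performs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotReport:
    """One teammate's broadcast state (fields are illustrative)."""
    robot_id: int
    pose: tuple                   # (x, y, heading) on the field
    role: str                     # e.g. "striker", "defender"
    ball: Optional[tuple]         # global (x, y) ball estimate, or None
    ball_confidence: float        # 0.0 (no idea) .. 1.0 (certain)

class SharedWorldModel:
    """Fuses teammates' reports into one team-wide ball estimate."""

    def __init__(self):
        self.reports = {}

    def update(self, report):
        # Keep only the latest report from each robot.
        self.reports[report.robot_id] = report

    def team_ball(self):
        """Confidence-weighted average of all teammates' ball estimates."""
        seen = [r for r in self.reports.values()
                if r.ball is not None and r.ball_confidence > 0]
        if not seen:
            return None
        total = sum(r.ball_confidence for r in seen)
        x = sum(r.ball[0] * r.ball_confidence for r in seen) / total
        y = sum(r.ball[1] * r.ball_confidence for r in seen) / total
        return (x, y)
```

A robot whose own view of the ball is blocked can then turn its head toward `team_ball()`, which is exactly the behavior shown in the video.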
Below are highlights of our games from 2009 to 2011, showing the team's improvement over the years:
You can download our CMurfs codebase here!