Do Aibos Dream of Electric Corner Kicks?
Dr. Peter Stone and his team of 20 grad students gear up for RoboCup VII
The clock is ticking for Dr. Peter Stone. Can this assistant professor of computer science at the University of Texas lead 20 graduate students and eight soccer-playing robot dogs to victory at the RoboCup VII meet in Padua, Italy, in July?
I wouldn't bet against him.
Stone began playing soccer when he was 7 years old and went on to play on the varsity team at the University of Chicago while an undergraduate student. His interest in artificial intelligence -- and eventually, RoboCup -- came later, when he was a Ph.D. candidate and postdoctoral fellow at Carnegie Mellon University in Pittsburgh, Penn., from 1993 to 1999. He had been working on individual "agents" (what Stone defines as "an entity with perceptions, goals, cognition, actions, and domain knowledge, situated in an environment"), working out how to program them to move the world from its current state toward some goal state.
At the 1995 Artificial Intelligence Conference held in Seattle, Stone witnessed a demonstration of a one-on-one soccer system: two wheeled robots, one on each team. Being a soccer player himself, he quickly concluded that the demonstrators were missing the boat. The essence of soccer, he thought, is to have multiple agents working together as a team, not just one individual agent.
In 1997 he learned of RoboCup, brainchild of the eminent Japanese roboticist Hiroaki Kitano. RoboCup holds annual robotic soccer competitions featuring an international array of players under such divisions as Simulated League, Small-Size League, Middle-Size League, Sony Four-Legged Robot League, and Humanoid League. Kitano's oft-quoted goal is to have autonomous humanoid robots defeat human players by the year 2050.
Shades of Deep Blue beating Kasparov at chess?
Stone points out some interesting contrasts between the two games. "In chess you have a complete picture of the world, whereas in robotic soccer, you have a very partial picture of the world. A robot can only see where it's looking, which might include the ball, or might not include the ball. In chess there is a turn-taking system -- I move, then you get to move -- whereas in soccer, there's nothing but continuous motion. If one robot sits there trying to figure out what the best thing to do might be, the opponents can just come and take away the opportunity to act at all. So it's continuous time, and it is also continuous space. In chess you have fixed positions on the chessboard, but in soccer you can't say that the ball is in a fixed position and then it moves to the next grid. It is moving continuously at various velocities."
And so he delved into multiagent systems and machine learning as his research topics while plunging into the RoboCup competitions, first in the Small-Size League in 1997 and 1998 (with robots Stone describes as being "about the size and shape of Rubik's cubes with wheels playing on a Ping-Pong table"). He then shifted his focus to the Simulated League for the 1999 and 2000 competitions. In simulated soccer, the action takes place on a computer screen instead of a Ping-Pong table. These efforts led him to rack up several RoboCup championships, as well as to edit RoboCup 2000: Robot Soccer World Cup IV and write Layered Learning in Multiagent Systems: A Winning Approach to Robotic Soccer. The book, based on his Ph.D. thesis (completed in the summer of 1998), expands on some of the challenges in multiagent systems and machine learning that can be embodied in the simulated robotic soccer domain.
"Unlike a soccer video game where you have one program that can tell all the players where to go, here each player is controlled by a separate program, so we have to figure out ways to stop them from all going to the ball, to organize the kind of behaviors that lead to a goal. When they should play defense, when they should play offense, where they should pass, where they should go, etc. And the simulation is very detailed. The players get tired if they run too much. They have partial views of the field, they can only see in the direction that they are facing. And so we won this competition in 1999 with this team that is described in Layered Learning in Multiagent Systems that incorporated several aspects of machine learning, or layered learning. And we ended up beating the opponents by a score of 110 to nothing over eight games. It was a dominating team, and it has been emulated and built upon ever since.
"The main idea of multiagent systems is what we call a locker-room agreement, so the idea is the players, before the action begins, can get together and agree upon some preset contingencies and plans and formations so that when the actual play is going on they don't need to do the negotiations. And they can do this again at halftime. And so we have things that allow them to do set plays; basically, when there is a frequently occurring situation, they can plan in advance for it and figure out who is going to play which role and then string together a set of passes. Or they can actually switch positions or switch strategies on the fly because they all knew under what circumstances this would happen. So that was one contribution. The other is layered learning, which is a hierarchical machine-learning paradigm where there is learning done at different levels."
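The locker-room agreement Stone describes can be sketched in a few lines of Python. Because every agent loads the same pre-agreed plan and classifies the game state the same way, each one can compute its own role during play without any runtime negotiation. The triggers, formations, and role names below are invented for illustration; they are not Stone's actual code.

```python
# Toy sketch of a "locker-room agreement": a shared, pre-agreed plan that
# every agent loads before kickoff. The situation triggers and formations
# here are hypothetical examples, not the UT Austin team's real plan.

AGREEMENT = {
    # situation trigger -> formation: player_id -> role
    "losing_late":  {1: "goalie", 2: "forward",  3: "forward",  4: "forward"},
    "winning_late": {1: "goalie", 2: "defender", 3: "defender", 4: "midfield"},
    "default":      {1: "goalie", 2: "defender", 3: "midfield", 4: "forward"},
}

def situation(score_diff, time_left):
    """Classify the game state the same way on every agent."""
    if time_left < 60 and score_diff < 0:
        return "losing_late"
    if time_left < 60 and score_diff > 0:
        return "winning_late"
    return "default"

def my_role(player_id, score_diff, time_left):
    """Each agent computes its own role independently but consistently."""
    return AGREEMENT[situation(score_diff, time_left)][player_id]

# Every player, running this separately, reaches the same assignment:
role = my_role(4, -1, 30)   # down a goal with 30 seconds left -> "forward"
```

Because the lookup is deterministic and the inputs (score, clock) are observable by everyone, the agents stay coordinated without exchanging a single negotiation message -- the point of deciding everything in the locker room.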
But this year Stone has left the domain of simulated soccer and entered into the brave new world of what is called the "four-legged robot" competition, which refers to the Sony Aibo robot dogs. A product of Sony's deep-pocketed robotics division, Aibos were made available to the American public in the year 2000 and have since become an international cultural phenomenon, selling thousands each year around the globe. Aibo enthusiasts, while drawn to these artificial pets because of the lack of household maintenance (Aibos don't eat, defecate, or attract fleas), often become emotionally attached to them. Aibos are also the first mass-marketed robots that have earned the respect of artificial-intelligence scientists, so it was only natural that they would eventually attract the attention of RoboCup.
"Sony has been a major sponsor of RoboCup and until this year only RoboCup players were able to program Aibos to their own specifications," says Dr. Stone. "They market them with these software packages that allow you to do high-level and simple things with them. But RoboCup players have always had access to the low-level source code that allows you to access the cameras, access the joints, and do the kinds of things that we are doing on them. And now they've made it available to everybody. So most people who buy these robots will get prepackaged programs that come out of the box so that they can do cute things with them and have them exhibit personalities. But our intention has always been to have access to everything about them."
Dr. Stone's team of 20 graduate students, who began developing their own software for the Aibos from scratch at the beginning of the semester, is broken into seven subteams. "One subteam focuses on vision, trying to figure out what is red, what is orange, and what is pink. When you look around the room you see a white wall and a blue poster, but an image to a robot is just a bunch of numbers, some of which mean blue and some of which mean orange. And so we need to write the algorithms that can convert those numbers into meaningful colors so that the robot knows where the orange is and where the blue is. Another subteam is working on localization, which is, once you know where these colors are, how you can use them to figure out where you are on the field. Another subteam is working on walking, getting the robots to walk in a particular direction. Another one is working on kicking, getting the Aibos to kick the ball; another one is working on communication; another one is working on fall recovery. These robots have accelerometers in them so that they can detect when they've fallen over, and once they've fallen over they need to be able to get back up. So we've broken down the task into a lot of basic units right now. Once we get through this phase so that they know how to see, know how to walk, etc., we will have to move to the multiagent challenge of how to get them to work as a team."
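The vision subteam's job -- turning "just a bunch of numbers" into meaningful colors -- can be sketched as simple threshold-based classification. Real RoboCup teams typically train color lookup tables from hand-labeled images rather than hard-coding thresholds; the cutoff values below are invented for illustration.

```python
# Hypothetical sketch of pixel color classification, the step the vision
# subteam describes: mapping raw RGB numbers to symbolic colors like
# "orange" (the ball) or "blue" (a goal). The thresholds are illustrative
# only; competition teams usually learn these from labeled training images.

ORANGE, BLUE, UNKNOWN = "orange", "blue", "unknown"

def classify_pixel(r, g, b):
    """Map one RGB pixel to a symbolic color using crude thresholds."""
    if r > 180 and 60 < g < 140 and b < 80:
        return ORANGE          # ball-like hues
    if b > 150 and r < 100:
        return BLUE            # goal-like hues
    return UNKNOWN

def segment(image):
    """Label every pixel; image is a list of rows of (r, g, b) tuples."""
    return [[classify_pixel(*px) for px in row] for row in image]

# A 1x3 toy "image": an orange-ish, a blue-ish, and a grey pixel
labels = segment([[(220, 100, 40), (30, 60, 200), (90, 90, 90)]])
```

The fragility the students run into later in the article -- an Aibo confusing the ball with a student's shirt -- is exactly what happens when thresholds like these admit the wrong hues under the lab's halogen lights.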
So how do these robomutts talk to each other? Stone explains that they are all connected by wireless Ethernet. For example, he demonstrated how moving the leg of one Aibo causes another Aibo to mimic it exactly. So in this way the Aibos can communicate what they see, or where they think they are, or they can tell another Aibo that the ball is beside or behind it.
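What the Aibos exchange over that wireless link can be imagined as small status messages: each robot broadcasts where it thinks it is and where it last saw the ball, and teammates fuse those sightings into a shared estimate. The message fields and JSON encoding below are invented for this sketch; Sony's actual OPEN-R messaging interface works differently.

```python
# Hypothetical sketch of inter-robot messages: each Aibo broadcasts its
# believed position and last ball sighting; a teammate merges the
# sightings. The field names and JSON encoding are illustrative only.
import json

def make_status(robot_id, x, y, ball_x=None, ball_y=None):
    """Serialize one robot's belief about itself and the ball."""
    return json.dumps({"id": robot_id, "pos": [x, y],
                       "ball": None if ball_x is None else [ball_x, ball_y]})

def merge_ball_estimates(messages):
    """Average teammates' ball sightings into one shared estimate."""
    sightings = [m["ball"] for m in map(json.loads, messages) if m["ball"]]
    if not sightings:
        return None
    n = len(sightings)
    return (sum(s[0] for s in sightings) / n, sum(s[1] for s in sightings) / n)

msgs = [make_status(1, 0.5, 1.0, 2.0, 1.0),
        make_status(2, 3.0, 2.0, 2.2, 0.8),
        make_status(3, 1.0, 0.0)]           # robot 3 can't see the ball
est = merge_ball_estimates(msgs)            # roughly (2.1, 0.9)
```

This is the payoff of the wireless link Stone demonstrates: a robot whose camera has lost the ball can still act on a teammate's report that the ball is beside or behind it.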
When queried about whether his eight Aibos have different personalities, Stone says that ideally they shouldn't. But Aibos aren't seamlessly identical. "If you load the same program on two of them, one will walk straight and the other will walk a bit crooked. Whether or not they have personalities, they often appear to. So we might say that the red player is a much better player than the blue one, when really their software is identical, and therefore their behavior should be identical. But it always works out that parts are getting slowed down in some of them, and then you end up with the challenge of heterogeneous robots."
Stone is already looking beyond the four-legged robot competition. He notes that Sony has a humanoid robot, the SDR-4X, that will reach the market in the near future at a cost comparable to that of a luxury automobile. The SDR-4X runs the same operating system as the Aibos, so Stone and his students will have a head start on programming it for future Humanoid League competitions. In fact, 2003 will be only the second year for the Humanoid League in RoboCup. These pint-sized robots with wacky names -- Tao-Pie-Pie, Foot-Prints, GuRoo, HITS Dream, Robo-Erectus, and Kitano's own eerily beautiful Morph3 (which looks like a metallic human pumped up with steroids attached to a birdlike head) -- are nowhere even near being able to take on human players. But progress is being made in that direction.
Kurt Dresner is a first-year Ph.D. student, and this is the second semester he has worked with Stone. "We students are extremely excited about working on the project. It is a lot of fun to work with the robots and program them. We are aspiring to win the competition."
Is there a cash prize?
Stone interjects. "There isn't, because if money were involved then people wouldn't share what their approaches were, and this is an academic endeavor. The benefit of winning is publication opportunities, to write papers and share with your colleagues what you have done, which is really the coin of the realm in academia."
Dresner elaborates. "Sony has a new piece of software that allows the Aibos to find their way back to the recharging station in order to recharge themselves. This is a technology that was developed by RoboCup. I think that it would be really cool for something that I am working on here to end up being a standardized part of Aibo as well."
It's a sunny afternoon on the last Thursday in March, and I'm in Stone's laboratory in Taylor Hall at the University of Texas. The 2.1-meter-by-4.5-meter green-carpeted field has been set up and is surrounded by halogen lights on Home Depot stands. Four of the eight Aibos are on the field (RoboCup designates that only four players on each team can be on the field at the same time) in various positions. One is near the yellow goal practicing its kicking skills, while another meanders aimlessly near the middle of the playing field. Another is ensconced in the blue goal, while the fourth tries to move the ball into that goal. The Aibo currently hanging out in the goal is not a goalie (that position requires a goalie memory stick), and it finally vacates the goal, stumbling toward the sidelines. This gives the fourth Aibo the opportunity to get close enough to push -- but not kick -- the ball into the blue goal on the first try, a good omen. One of the graduate students examines the Aibo that was practicing its kicking skills, and the Aibo seems to get the ball confused with the student's shirt, staying put while its head tracks the student as he walks to and fro. The students mumble something about needing to tweak the Aibo's color-perception algorithm.
Stone and his team have about another month to get the team in shape for the RoboCup VII American Open competition to be held at Dr. Stone's alma mater, Carnegie Mellon University, in Pittsburgh on April 30. And they have their work cut out for them.
"I'm incredibly impressed that the students have gotten to the point where they are now. There were tons and tons of technical challenges that we had to address. Most teams do this over the course of a year or so, and in the course of nine weeks we are ready to begin showing soccer-playing behavior. This has happened, from a robotics perspective, incredibly quickly. So right now the Aibos are beginning to notice that they have lost the ball and start searching for it again, instead of continually going toward the goal once they think they have the ball."
Stone continues. "The Aibo that put the ball into the goal didn't really know that it had the ball, it was just trusting that the ball was underneath it, and apparently it was looking for the goal and decided that it was close enough to try to walk toward it. So this is very early prototype behavior, and the challenge to get them to switch among different behaviors at this point is relatively small."
Stone's team is set for the RoboCup VII American Open on April 30, but they had until April 1 to cut a three- to five-minute demonstration video of the Aibos playing on the field in order to be considered as candidates for the RoboCup VII international competition that takes place in Padua, July 2-11. Stone does not expect to win at the American Open, but he does expect to field one of the more competitive teams, which would guarantee them serious consideration for Padua.
"It's not just about performance, it's also about the research involved with the team. RoboCup is primarily a research initiative. The goal is to advance artificial-intelligence and robotics research. This is not like the Grand Prix where they are just going to take the fastest and the best. They are going to take a balance between the teams that look fast right now and those that have the most research potential and are putting in the effort. Along with the video we are sending in our research interests, our approach, our algorithms, and the effort we have put in it. The fact that we've got to this point in such a short period of time makes it obvious that we're going to be at a competitive level by the time of Padua."
It is not yet known whether Hiroaki Kitano will again employ SIG (the acronym for Symbiotic Intelligence Group), another of his own humanoid robots, as a commentator at RoboCup VII in Padua -- a robotic version of Howard Cosell -- as he has done at previous RoboCups. But RoboCup Rescue -- another Kitano brainwave, which focuses on robots used for landmine clearing and search-and-rescue operations (activities that certainly resonate these days) -- has been scheduled. Perhaps RoboCup is moving us toward a future in which we will all be watched over by variations of these increasingly intelligent machines and, hopefully ... with tender loving care.
For more on RoboCup VII, see www.robocup.org. For more on Peter Stone's Robotic Soccer Page, see www-2.cs.cmu.edu/~pstone/robosoccer.html.