
What Virtual Reality Can Teach a Driverless Car

Self-driving vehicles have taken to the roads in a number of cities. They are also training inside computerized simulations of those communities.
SAN FRANCISCO — As the computers that operate driverless cars digest the rules of the road, some engineers think it might be nice if they could learn from mistakes made in virtual reality rather than on real streets.
Companies like Toyota, Uber and Waymo have discussed at length how they are testing autonomous vehicles on the streets of Mountain View, Calif., Phoenix and other cities. What is not as well known is that they are also testing vehicles inside computer simulations of these same cities. Virtual cars, equipped with the same software as the real thing, spend thousands of hours driving their digital worlds.
Think of it as a way of identifying flaws in the way the cars operate without endangering real people. If a car makes a mistake on a simulated drive, engineers can tweak its software accordingly, laying down new rules of behavior. On Monday, Waymo, the autonomous car company that spun out of Google, is expected to show off its simulator tests when it takes a group of reporters to its secretive testing center in California’s Central Valley.
Researchers are also developing methods that would allow cars to actually learn new behavior from these simulations, gathering skills more quickly than human engineers could ever lay them down with explicit software code. “Simulation is a tremendous thing,” said Gill Pratt, chief executive of the Toyota Research Institute, one of the artificial intelligence labs exploring this kind of virtual training for autonomous vehicles and other robotics.
These methods are part of a sweeping effort to accelerate the development of autonomous cars through so-called machine learning. When Google designed its first self-driving cars nearly a decade ago, engineers built most of the software line by line, carefully coding each tiny piece of behavior. But increasingly, thanks to recent improvements in computing power, autonomous carmakers are embracing complex algorithms that can learn tasks on their own, like identifying pedestrians on the roadways or predicting future events.
“This is why we think we can move fast,” said Luc Vincent, who recently started an autonomous vehicle project at Lyft, Uber’s main rival. “This stuff didn’t exist 10 years ago when Google started.”
There are still questions hanging over this research. Most notably, because these algorithms learn by analyzing more information than any human ever could, it is sometimes difficult to audit their behavior and understand why they make particular decisions. But in the years to come, machine learning will be essential to the continued progress of autonomous vehicles.
Today’s vehicles are not nearly as autonomous as they may seem. After 10 years of research, development and testing, Google’s cars are poised to offer public rides on the streets of Arizona. Waymo, which operates under Google’s parent company, is preparing to start a taxi service near Phoenix, according to a recent report, and unlike other services, it will not put a human behind the wheel as a backup. But its cars will still be on a tight leash.
For now, if it doesn’t carry a backup driver, any autonomous vehicle will probably be limited to a small area with large streets, little precipitation, and relatively few pedestrians. And it will drive at low speeds, often waiting for extended periods before making a left-hand turn or merging into traffic without the help of a stoplight or street sign — if it doesn’t avoid these situations altogether.
At the leading companies, the belief is that these cars can eventually handle more difficult situations with help from continued development and testing, from new sensors that provide a more detailed view of the surrounding world, and from machine learning.
Waymo and many of its rivals have already embraced deep neural networks, complex algorithms that can learn tasks by analyzing data. By analyzing photos of pedestrians, for example, a neural network can learn to identify a pedestrian. These kinds of algorithms are also helping to identify street signs and lane markers, predict what will happen next on the road, and plan routes forward.
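To make the idea concrete, here is a minimal sketch of how such a network learns from labeled camera crops. It uses the open-source PyTorch library rather than any carmaker's proprietary code, and the PedestrianNet name, the 64-by-64 image size and the dataset are invented for illustration.

# A minimal sketch (not Waymo's or Toyota's actual code) of how a deep neural
# network can learn to flag pedestrians in labeled camera crops.
# Assumes a hypothetical dataset of 64x64 RGB images with 0/1 labels.
import torch
import torch.nn as nn

class PedestrianNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)       # pedestrian / not

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_step(model, optimizer, images, labels):
    """One gradient update: the network adjusts its weights so that its
    predictions better match the human-provided labels."""
    logits = model(images)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Repeated over millions of labeled images, steps like this are how the system gradually gets better at telling a pedestrian from a lamppost, with no engineer writing an explicit rule for either.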
The trouble is that this requires enormous amounts of data collected by cameras, radar and other sensors that document real-world objects and situations. And humans must label this data, identifying pedestrians, street signs and the like. Gathering and labeling data describing every conceivable situation is an impossibility. Data on accidents, for instance, is hard to come by. This is where simulations can help.
Recently, Waymo unveiled a roadway simulator it calls Carcraft. The company said the simulator provides a way of testing its cars at a scale that is not possible in the real world: its cars can spend far more time on virtual roads than on real ones. Presumably, like other companies, Waymo is also exploring ways that its algorithms can actually learn new behavior from this kind of simulator.
Mr. Pratt said Toyota is already using images of simulated roadways to train neural networks, and this approach has yielded promising results. In other words, the simulations are similar enough to the physical world to reliably train the systems that operate the cars.
Part of the advantage of a simulator is that researchers have complete control over it. They need not spend time and money labeling images — and potentially making mistakes with those labels. “You have ground truth,” Mr. Pratt explained. “You know where every car is. You know where every pedestrian is. You know where every bicycler is. You know the weather.”
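As an illustration of what that ground truth buys, here is a hypothetical sketch, not any company's actual pipeline, in which labeled training examples are read straight out of the simulator's own state. The SimScene and SimObject structures are invented for this example.

# A minimal sketch of why simulation gives "ground truth" for free: the
# simulator already knows where every object is, so labeled training examples
# can be generated without human annotators.
from dataclasses import dataclass, field

@dataclass
class SimObject:
    kind: str            # "car", "pedestrian", "cyclist"
    x: float             # position in meters, in the simulated world frame
    y: float

@dataclass
class SimScene:
    weather: str
    objects: list = field(default_factory=list)

def labeled_example(scene: SimScene, rendered_image):
    """Pair a rendered camera frame with labels read straight from the
    simulator state -- no hand labeling, and no labeling mistakes."""
    labels = [
        {"kind": obj.kind, "position": (obj.x, obj.y)}
        for obj in scene.objects
    ]
    return {"image": rendered_image, "labels": labels, "weather": scene.weather}

# Usage: a scene with one pedestrian and one car yields a fully labeled sample.
scene = SimScene(weather="clear",
                 objects=[SimObject("pedestrian", 4.2, 1.0),
                          SimObject("car", 12.0, -3.5)])
sample = labeled_example(scene, rendered_image=None)   # image omitted in sketch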
Others are exploring a more complex method called reinforcement learning. This is a major area of research inside many of the world’s top artificial intelligence labs, including DeepMind (the London-based lab owned by Google), the Berkeley AI Research Lab, and OpenAI (the San Francisco-based lab founded by Tesla’s chief executive, Elon Musk, and others). These labs are building algorithms that allow machines to learn tasks inside virtual worlds through intensive trial and error.
DeepMind used this method to build a machine that could play the ancient game Go better than any human. In essence, the machine played thousands upon thousands of Go games against itself, carefully recording which moves proved successful and which didn’t. And now, DeepMind and other leading labs are using similar techniques in building machines that can play complex video games like StarCraft.
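The trial-and-error loop behind reinforcement learning can be shown with a toy example. The sketch below uses an invented five-cell “road” and tabular Q-learning, nothing like the scale of the systems described here, but the underlying idea of recording which moves proved successful and which did not is the same.

# A toy illustration of the reinforcement-learning loop: an agent tries actions
# in a simulated world, is rewarded or penalized, and gradually learns which
# choices pay off. The 5-cell "road" and its rewards are invented for this sketch.
import random

N_STATES, GOAL = 5, 4             # cells 0..4; reaching cell 4 is "success"
ACTIONS = [-1, +1]                # move back or forward along the road
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Simulated environment: +1 for reaching the goal, -1 for backing off
    the road, 0 otherwise."""
    nxt = state + action
    if nxt < 0:
        return 0, -1.0, True      # drove off the start: penalty, episode ends
    if nxt >= GOAL:
        return GOAL, +1.0, True   # reached the goal: reward, episode ends
    return nxt, 0.0, False

for episode in range(500):        # trial and error, hundreds of times over
    state, done = 0, False
    while not done:
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the value estimate toward what was observed.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

print("learned to prefer driving forward:", q[(0, +1)] > q[(0, -1)])

After a few hundred simulated episodes, the table of values favors moving forward from the start of the road; real systems apply the same principle to vastly richer simulated worlds and far more capable learning algorithms.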