Using AI to Beat Traffic Jams
News | Sep 24, 2018 | Original press release from the University of Southern California
Credit: Myriam Thyes, licensed under CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)
Americans spend an average of 25.4 minutes commuting to work each way, according to U.S. Census Bureau data. Over a year of round trips, that adds up to roughly 208 hours spent in traffic, and the average climbs every year.
In Southern California, commutes are double the national average and considered the most stressful in the nation. The main culprit for these high numbers: congestion. The number of cars on highways increases annually, leading to more intense bottlenecks at interchanges, slower speeds on packed roads and a higher frequency of accidents.
Researchers at the USC Viterbi School of Engineering are hoping to reverse that trend by adding a new type of artificial intelligence to speed-forecasting technology, giving drivers predictive information about the fastest route for their commute. Compared with state-of-the-art methods, their model's predictions contain 12 to 15 percent fewer errors. The model was presented at the sixth International Conference on Learning Representations (ICLR), a leading conference on AI and neural network research.
New ways to reduce traffic jams: turning to AI
Yaguang Li, a computer science PhD candidate at USC Viterbi, worked with faculty to create an AI deep-learning model called the Diffusion Convolutional Recurrent Neural Network (DCRNN).
Li’s collaborators include Yan Liu, who holds the Philip and Cayley MacDonald Endowed Early Career Chair; Cyrus Shahabi, chair of USC Viterbi’s Department of Computer Science; and Rose Yu, a recent USC Viterbi PhD graduate who is now an assistant professor at Northeastern University.
The new network draws on both historical and real-time data to predict future speeds along a road. By checking those predictions against incoming real-time observations, it learns which forecasting strategies proved most accurate and refines them accordingly. In other words, the model gets smarter over time.
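To make that idea concrete, here is a deliberately simple sketch in Python of a forecaster that starts from historical averages and corrects itself as real-time readings stream in. Plain exponential smoothing stands in for the recurrent neural network the USC team actually built; the class name, roads, and numbers are all illustrative.

```python
import numpy as np

# Toy stand-in for the article's idea: initialize from historical data,
# then keep correcting the estimate as real-time speeds arrive.
# (Hypothetical example; the USC model is a recurrent neural network.)
class OnlineSpeedForecaster:
    def __init__(self, historical_speeds, alpha=0.3):
        # Start each road's speed estimate at its historical average.
        self.estimate = historical_speeds.mean(axis=0)
        self.alpha = alpha  # how strongly new observations correct the estimate

    def update(self, observed):
        # Blend the running estimate with the latest real-time reading,
        # so the forecaster improves as data streams in.
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * observed

    def predict(self):
        return self.estimate

# Two roads, three days of historical speeds (mph).
history = np.array([[60.0, 30.0], [58.0, 32.0], [61.0, 29.0]])
forecaster = OnlineSpeedForecaster(history)
forecaster.update(np.array([45.0, 33.0]))  # an accident slows road 0
print(forecaster.predict())                # estimate shifts toward live data
```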
The model also learns the patterns that drive changes in traffic speed. It captures the spatial dependency among adjacent roads, using speed changes on one road to sharpen forecasts for its neighbors. Chance events such as accidents or closures are factored in as they happen, allowing the model to adjust its predictions promptly and give drivers a clear picture of future road conditions.
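The published model behind this work, DCRNN, formalizes that spatial dependency as a diffusion over the road graph: each road's forecast mixes in the speeds of roads a few hops upstream. Below is a minimal Python sketch of one such diffusion-convolution filter in a single random-walk direction, with a made-up four-road network and hand-picked weights; the real model learns these parameters from data and also diffuses along the reverse direction.

```python
import numpy as np

# Hypothetical 4-road network: W[i, j] > 0 means road j feeds into road i.
# Weights are illustrative, not from the USC study.
W = np.array([
    [0.0, 0.8, 0.0, 0.0],
    [0.3, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.7],
    [0.0, 0.0, 0.6, 0.0],
])

# Row-normalize to get a random-walk transition matrix over the road graph.
P = np.diag(1.0 / W.sum(axis=1)) @ W

def diffusion_conv(x, P, theta):
    """One diffusion-convolution filter: a weighted sum of K random-walk
    steps, mixing each road's speed with its upstream neighbors'."""
    out = np.zeros_like(x)
    P_k = np.eye(len(x))      # P^0: the road itself
    for theta_k in theta:
        out += theta_k * (P_k @ x)
        P_k = P_k @ P         # advance one more hop along the graph
    return out

speeds = np.array([62.0, 35.0, 28.0, 55.0])  # current mph on each road
theta = np.array([0.6, 0.3, 0.1])            # filter weights (learned in practice)
print(diffusion_conv(speeds, P, theta))
```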
“There are both temporal patterns and spatial patterns for which you have to account,” Shahabi said. “The sweet spot where traffic is light enough is always changing because the variables are changing.”
Shahabi explained that a freeway such as the unpopular 405 may look clear when you map your route on current navigation systems before leaving work, but it can stack up by the time you hit the road.
“Our model is able to predict what the conditions will be when you get there before you get there,” Shahabi said.
Traffic forecasting, self-driving fleets: Ways to reduce traffic jams
The model could also improve how autonomous vehicles drive.
Based on its predictions, it can supply a self-driving car with the answers to questions such as where to turn. This gives the vehicle a kind of anticipation, much as a human driver anticipates conditions by keeping an eye on the road ahead.
But this model isn’t built just to fix traffic woes.
“In the future, it could be used to predict the movements of self-driving fleets and optimize transportation efficiency,” Yu said.
“To predict, we need to have enough data to teach the model what to look for.”
This article has been republished from materials provided by the University of Southern California. Note: material may have been edited for length and content. For further information, please contact the cited source.