Earlier this year, the HumanDrive project sent an autonomous vehicle on a 370-kilometer journey along country roads and busy highways. The consortium behind the project includes Nissan, Cranfield University, the University of Leeds, Atkins, Highways England, Aimsun, HORIBA MIRA, SBD Automotive, Connected Places Catapult, and Hitachi Europe, which developed software that uses AI and machine learning to recognize the external environment and plan safe routes around obstacles, restrictions, and other road users.
The purpose of HumanDrive was to create a smooth, natural-feeling driverless experience, "rather than the jerky, robotic feel of many self-driving features," said Nick Blake, Chief Innovation Strategist at Hitachi Europe. "We want to improve passenger comfort and acceptance."
To that end, the team fused artificial intelligence with human intelligence, teaching the system with road data gathered from professional drivers. "Data was collected from professional test drivers," said Ioannis Souflas, a senior researcher at Hitachi Europe, and used to train the car to drive in a way that is more comfortable for passengers.
But that came with challenges. It was difficult to get enough data on how a good driver reacts in a dangerous situation; after all, you can hardly ask people to take risks. "If you're training an AI model using human driving data, there's a lot of data on good driving in a normal, safe environment, but not a lot of data on how to get out of trouble," said Blake. "There is very little of the data you need to train the model correctly."
Souflas said: “You can’t train AI to turn left or right just by driving straight.”
Simulation fills that gap. "Professional drivers may not be able to capture every edge case, but you can use machine-learning tricks to extend your data and create artificial edge cases," Souflas explains.
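As an illustration of the kind of data extension Souflas describes, the sketch below perturbs recorded driving traces to create artificial edge cases. The trace format, function names, and disturbance model are all hypothetical assumptions for the example, not Hitachi's actual pipeline:

```python
import random

def augment_with_edge_cases(trajectories, n_variants=3, seed=0):
    """Create artificial edge cases by perturbing recorded drives.

    Each trajectory is a list of (steering, speed) samples from a
    professional driver. We inject a sudden disturbance -- an abrupt
    swerve plus hard braking -- that rarely appears in safe real-world
    data, so the model also sees recovery situations.
    """
    rng = random.Random(seed)
    augmented = list(trajectories)           # keep the originals
    for traj in trajectories:
        for _ in range(n_variants):
            variant = list(traj)
            i = rng.randrange(len(variant))  # where the disturbance starts
            steer, speed = variant[i]
            # simulate an evasive swerve and braking at that point
            variant[i] = (steer + rng.choice([-0.5, 0.5]),
                          max(0.0, speed * 0.5))
            augmented.append(variant)
    return augmented

recorded = [[(0.0, 30.0), (0.1, 30.0), (0.0, 31.0)]]
data = augment_with_edge_cases(recorded)
print(len(data))  # 1 original + 3 synthetic variants = 4
```

In a real project the perturbations would come from a physics-based simulator rather than random noise, but the principle is the same: multiply scarce recovery scenarios out of abundant normal driving.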
But humans learn to drive differently than machines do. Most of us learn to drive as teenagers, after 16 or 18 years of visual development. "We have already learned good perception," says Souflas. "Then all we learn is how to control the vehicle."
One solution was to split the AI into four modules. The first handles perception, taking in data from the various sensors and inputs. The second uses that data, now localized, to understand the scene. The next module plans the route, and the last controls the car. Some systems combine all or some of these functions into one, but dividing the AI into separate modules allows Hitachi to add features for different driving environments and styles, and to insert rigorous verification and safety checks throughout the software process.
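The four-module split might be sketched like this. Every name, data shape, and check here is a simplified assumption for illustration, not Hitachi's implementation:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    obstacles: list
    position: tuple

def perceive(sensor_frames):
    """Module 1: fuse raw sensor inputs into detected objects."""
    return [f["object"] for f in sensor_frames if f.get("object")]

def understand(objects, gps_fix):
    """Module 2: localize the vehicle and build a scene model."""
    return Scene(obstacles=objects, position=gps_fix)

def plan(scene, destination):
    """Module 3: choose a route around any obstacles."""
    return "detour" if scene.obstacles else "straight"

def control(route):
    """Module 4: turn the plan into actuator commands."""
    return {"steer": 0.3 if route == "detour" else 0.0, "throttle": 0.5}

def drive_step(sensor_frames, gps_fix, destination):
    """One tick of the pipeline, with a validation gate between modules."""
    objects = perceive(sensor_frames)
    assert all(o is not None for o in objects)  # safety check between stages
    scene = understand(objects, gps_fix)
    route = plan(scene, destination)
    return control(route)
```

Because each stage has a defined input and output, a verification check can sit at every boundary, which is exactly where the article says Hitachi inserts its safety checks.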
Human input was primarily built into the route-planning module, says Souflas. We all perceive things in much the same way, and maps tell us where we are; Blake describes these as "truths" rather than value judgments. "But route planning is the point at which we, as drivers, behave differently," Souflas adds. "This is where there is some variation from person to person."
The idea of modular AI was also welcomed by car makers, who, the project revealed, were not keen on the opaque decisions inherent in generalist, black-box AI systems. If a test drive fails, the manufacturer wants to know where the error occurred. Did the car's sensors fail to spot something on the road, fail to understand what it was, or fail to navigate around it? "Manufacturers are very conservative about using non-transparent technology," says Souflas.
Separating the system by core function makes it easier to identify and fix where a failure occurred. "If they fail at perception, they can fix that problem and don't have to worry about route planning," Souflas explains. Knowing where a failure sits in the pipeline also tells the team how serious it is: the closer the problem is to controlling the car, the more imminent it becomes. "A failure in planning or control is much more critical than one in perception, because it's closer to the action," he adds.
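One way to read the fault-isolation idea in code: run the stages in order and, on failure, report exactly which module broke and rank its severity by proximity to action. This is a hypothetical sketch, not Hitachi's system:

```python
# Pipeline stages ordered by distance from the final control output.
STAGES = ["perception", "scene_understanding", "planning", "control"]

def severity(failed_stage):
    """The closer a failure is to the control output, the more imminent it is."""
    return STAGES.index(failed_stage) + 1   # 1 (perception) .. 4 (control)

def run_pipeline(stages, frame):
    """Run each (name, function) stage in turn; on an exception,
    report exactly where the failure occurred and how severe it is."""
    data = frame
    for name, fn in stages:
        try:
            data = fn(data)
        except Exception as exc:
            return {"failed": name, "severity": severity(name), "error": str(exc)}
    return {"failed": None, "output": data}
```

A planning failure would then surface as `failed="planning"` with a higher severity score than a perception failure, mirroring Souflas's point that problems close to the action matter most.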
Building the AI in a modular format also means it can be reused elsewhere, and for Hitachi this development work reaches far beyond self-driving cars. The same sense-interpret-act systems could automate other vehicles, such as trams, or even a production line. A factory naturally has different control requirements than an automobile, but the sensing systems are likely to remain much the same. "The modularity of the system enables multiple functions or applications of these basic technologies," said Massimiliano Lenardi, head of the Automotive Industry Research Institute at Hitachi Europe.
Hitachi is also using some of this technology in its smart-spaces work, where smart sensors track people through public areas. "A vehicle that can see objects along the road uses the same techniques as video intelligence, which can isolate and track people passing through a station," says Blake. "We use almost the same technique," says Souflas, with one major difference: in automobiles, processing must be much faster.
Modular AI also means the system can be updated piece by piece. That brings the idea back to the core of HumanDrive: reintroducing human thinking to solve problems in self-driving cars. One of the next challenges Hitachi's developers are tackling is navigating urban areas, where GPS can be unreliable and navigation difficult among dense buildings.
One solution being considered is to teach the car's localization system to look around like a human. Most of us don't have to dig out our smartphones and pull up a map to find out where we are; we just look around and recognize our surroundings. "We're creating features that allow localization from visual features," explains Souflas. "For example, when you're at King's Cross, you don't need GPS to tell you you're there. This builds redundancy into the system and brings improvements that combine AI with traditional approaches."
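A toy illustration of that redundancy: match the current view against a database of known places, and fall back to the GPS fix when nothing matches. The descriptors, threshold, and place database are invented for the example; a real system would use learned visual features:

```python
def descriptor_distance(a, b):
    """Toy Euclidean distance between two visual feature descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def localise(current_descriptor, place_db, gps_fix=None, threshold=1.0):
    """Return a place name if the current view matches a known landmark
    closely enough; otherwise fall back to the GPS fix (redundancy)."""
    best = min(place_db,
               key=lambda p: descriptor_distance(current_descriptor, p["desc"]))
    if descriptor_distance(current_descriptor, best["desc"]) < threshold:
        return best["name"]
    return gps_fix

db = [{"name": "King's Cross", "desc": [0.9, 0.1, 0.4]},
      {"name": "Euston", "desc": [0.2, 0.8, 0.6]}]
print(localise([0.88, 0.12, 0.41], db, gps_fix="51.53N,0.12W"))  # King's Cross
```

When the view is unfamiliar, the function quietly returns the GPS position instead, so neither signal is a single point of failure.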
Souflas adds: “To develop this AI and this intelligent software, we need human intelligence.”
Today's world is saturated with data, and new technologies emerge almost daily, but how can these innovations be used to make a real difference? Hitachi believes that this is what social innovation is all about, and that it can find ways to tackle the biggest problems we face today. Visit Social-Innovation.Hitachi to learn how Hitachi Social Innovation is helping drive change around the world.
Teach self-driving cars to drive like humans