Speaking of autonomous driving, we have to mention Tesla. Although it is the hottest self-driving company in the world, it has been involved in dozens of traffic accidents, including a number of fatal ones. Yet none of this has stopped Tesla's share price from soaring; Musk's net worth has risen by more than 30 billion RMB. And it is not only Tesla: Chinese companies such as XPeng, NIO, and Li Auto have also been rising all the way.
This undoubtedly reflects the market's recognition of autonomous driving technology and its future. As the founder of XPeng Motors once said: "Intelligence will be the watershed of the next automobile era."
The most important issue in replacing human drivers with intelligent systems is safety, especially on public roads, where the system is responsible for the lives of its occupants and of others. Recently, some developers have used motion capture technology to teach driverless cars to understand the body language of pedestrians on the street, improving the safety of autonomous driving.
Intelligent driving should predict accidents before they happen
Intelligent driving covers two concepts: automated driving and fully driverless driving, and current technology can only achieve the former. Automated vehicles are graded into five levels, L1 to L5, according to how intelligent they are, and the mainstream systems on the market today are still at L2 and L3. At these levels the driver can take hands and feet off the controls and let the system drive, but must always be ready to take back control of the vehicle. It is a bit like the examiner sitting next to a student during a driving test, there mostly to deal with emergencies.
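The grading described above can be sketched as a small data structure. This is only an illustration of the idea that L2/L3 systems still require a standby human; the flags are a simplification for this article, not an official definition of the levels.

```python
from enum import Enum

class AutomationLevel(Enum):
    """Illustrative sketch of the L1-L5 grading described above."""
    L1 = 1  # driver assistance (e.g. adaptive cruise control)
    L2 = 2  # partial automation: system steers and accelerates, driver monitors
    L3 = 3  # conditional automation: hands/feet off, but driver must stand by
    L4 = 4  # high automation within limited conditions
    L5 = 5  # full automation, no driver needed

def driver_must_be_ready(level: AutomationLevel) -> bool:
    """At L2/L3, like the examiner beside a student driver, the human
    must stay ready to take back control at any moment."""
    return level.value <= 3
```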
Tesla's accidents have also stemmed from drivers placing too much faith in the automated driving system, giving the car too much freedom and paying too little attention to road conditions. In the earlier case of a Tesla hitting a police car in the United States, the driver was watching a movie on his phone while the system drove. The Tesla failed to recognize the police car parked at the side of the road and hit it directly.
The self-driving cars on the market today, whether from Tesla, XPeng, NIO, or Li Auto, mainly rely on vision systems, often supplemented with lidar, to judge road conditions. Besides observing the surroundings outside the car, there is also a vision system that monitors the situation inside the vehicle, judging whether the driver is paying attention and reminding them to drive safely.
But a human reaction to an emergency always comes a step after the accident has begun to unfold. A better approach is to make a prediction before the accident occurs.
Suppose a road is under repair and only two of the original four lanes are open. When a worker stands in front of the warning signs and gestures "move to the other lane" at oncoming vehicles, almost every driver understands; nobody needs to stop and ask for directions to find the correct route. But this is hard for an autonomous driving system to judge. There are no large obstacles ahead and no pedestrians crossing the road, so in a situation like this the autopilot may end up stuck in place: it can understand a signal to stop, but not a gesture giving directions.
Both human drivers and autonomous driving systems face these and even more complex situations every day. For example, when a pedestrian holds out a hand to signal to a driver that they want to cross the road, the driver easily understands the gesture and slows down to let them pass.
A machine learning model for judging pedestrian gestures
Meeting these challenges safely and seamlessly, without interrupting the flow of traffic, requires autonomous driving systems to understand the common gestures used to guide drivers through unexpected situations. This inspired a team of developers to use motion capture technology to teach the autopilot how to drive by learning pedestrians' gestures.
Expressing gestures and body language toward a vehicle that may pose a danger is a response people make without thinking. That same response is a real challenge for a computer system.
The developers rely on machine learning to improve the vehicle's ability to recognize situations and respond to emergencies. They chose to run vehicles every day through some of the most complex terrain in the United States to collect data, and the model learned very quickly.
But there is as yet no standard set of gestures anywhere in the world for communication between pedestrians and cars. The developers had to recognize each situation from different angles and distances and under different lighting conditions, learning from as many combinations of conditions as possible, an approach that could take years of accumulated experience.
To widen the scope of learning and improve its efficiency, the developers found an innovative way to close the data gap: gesture motion capture. Game developers created characters that simulate the real world, providing training datasets for the machine learning models.
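A rough sketch of why synthetic rendering closes the gap: each captured gesture clip can be re-rendered under every combination of viewing angle, distance, and lighting, multiplying one performance into many training samples. The specific values below are assumptions for illustration, not figures from the project.

```python
from itertools import product

# Hypothetical capture conditions (assumed values, for illustration only)
angles_deg = [0, 45, 90, 135, 180]        # camera viewing angle
distances_m = [5, 10, 20, 40]             # distance from camera
lighting = ["day", "dusk", "night", "backlit"]

def render_conditions():
    """Yield every (angle, distance, lighting) combination under which
    one motion-captured gesture clip could be rendered as a sample."""
    yield from product(angles_deg, distances_m, lighting)

combos = list(render_conditions())
# One clip becomes 5 x 4 x 4 = 80 synthetic training variants.
```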
Another challenge the team faced is that an automated driving system may misinterpret certain pedestrian movements as commands to the system, such as a pedestrian waving to a friend across the street or raising an arm to block the sun. To solve this, the developers settled on five key messages conveyed by gestures: stop, forward, turn left, turn right, and "irrelevant".
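The five classes can be illustrated with a toy keypoint-based labeller. This is only a hypothetical sketch: the keypoint names, image-coordinate convention, and thresholds are assumptions, and the real system learns these distinctions from motion-capture data rather than from hand-written rules like these.

```python
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates; y grows downward

def classify_gesture(kp: Dict[str, Point]) -> str:
    """Toy five-class gesture labeller over 2D body keypoints (illustrative)."""
    shoulder_y = (kp["l_shoulder"][1] + kp["r_shoulder"][1]) / 2
    lw, rw = kp["l_wrist"], kp["r_wrist"]
    l_raised = lw[1] < shoulder_y          # wrist above shoulder line
    r_raised = rw[1] < shoulder_y
    if l_raised and r_raised:
        return "stop"                      # both arms up: halt the vehicle
    if l_raised and lw[0] < kp["l_shoulder"][0] - 0.2:
        return "turn_left"                 # one arm extended out to the side
    if r_raised and rw[0] > kp["r_shoulder"][0] + 0.2:
        return "turn_right"
    if l_raised or r_raised:
        return "forward"                   # single arm raised ahead: proceed
    return "irrelevant"                    # e.g. waving at a friend, shading the sun
```

A learned model replaces these brittle thresholds precisely because real gestures vary in angle, amplitude, and lighting, as the text describes.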
There are still many challenges for autonomous driving on the road
Autonomous driving technology is bound to bring disruptive change to the automobile market, but technology still at the L2 and L3 levels remains a long way from truly driverless operation.
Self-driving cars trained on motion capture data can adapt better to city driving. But training them to understand gestures is only the beginning: these systems need to detect more than basic human movements. The gestures people use to convey information are not uniform, so the training datasets cover not only the movements collected so far; many more datasets are being trained on and continuously added. The team is also training the system to understand the concept of a human carrying or pushing another object, such as a person pushing a bicycle, since someone pushing or riding a bicycle usually moves quite differently from a pedestrian on foot.
The development team also recruited five volunteers with different physical characteristics to calibrate the system, testing gestures from multiple angles and varying their amplitude, strength, and whether they were made with one hand or both.
Within ten years, China is expected to become the world's largest autonomous driving market
From the steady stream of autonomous vehicle deliveries, it is clear that the self-driving market is turning into a vast red ocean.
In July, Tesla reported second-quarter sales revenue of $1.4 billion in China, up nearly 103% year-on-year. Alongside it, automakers such as XPeng, Great Wall Motor, and Li Auto have also gradually begun upgrading toward intelligent manufacturing. The reason XPeng, whose delivery numbers are not eye-catching, soared 60% as soon as the U.S. stock market opened lies in its clear roadmap in the smart-car field.
At the end of 2019, McKinsey forecast that China would become the world's largest autonomous driving market, and that by 2030 the revenue from sales of new autonomous-driving-related vehicles and mobility services would exceed $500 billion. Driverless cars will change the way we live, and creative technology will make the autonomous car a better fit for the city.