NIO Machine Learning Expert Explains Autonomous Driving

1/22/2018 1:58 PM
A few days ago, GCP Silicon Valley invited a machine learning expert from NIO's North America R&D department to share a professional perspective on current automotive architecture, the basic components of autonomous driving, and the role machine learning plays in it. Below is an edited transcript of the talk.

Crown is NIO's first machine learning engineer and has been involved in designing all of NIO's current machine learning projects, developing their main components. Previously a senior data scientist at LinkedIn, he was among the first to introduce machine learning into LinkedIn Business Analytics and helped grow that team from 3 to 70 people. Dr. Wang received his Ph.D. from UIC, studying data mining and machine learning under Philip Yu, and has published more than ten papers at conferences such as KDD, ICDM, WWW, ICDE, and CIKM.

In-car computers are mainly built around CPUs. As deep learning becomes more involved in in-vehicle computing, dedicated units and chips such as GPUs, FPGAs, and ASICs are also used to meet computing needs while accommodating the car's own constraints. Unlike the computers we use every day, automotive CPUs and GPUs face far stricter requirements on heat dissipation and power consumption, and additional testing is required to meet automotive-grade specifications. The car also has a data bus to carry data messages between components. In addition, on-board memory in next-generation cars will need much higher performance: it is estimated that all of a car's sensors together will generate 1-2 TB of data per hour.
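The 1-2 TB/hour figure implies a demanding sustained write bandwidth for on-board storage. A quick back-of-envelope check (assuming decimal units, i.e. 1 TB = 10^6 MB):

```python
# Back-of-envelope: sustained bandwidth implied by 1-2 TB of sensor
# data per hour (decimal units assumed: 1 TB = 1e6 MB).
def sustained_bandwidth_mb_s(tb_per_hour: float) -> float:
    """Convert TB/hour to MB/s (1 hour = 3600 s)."""
    return tb_per_hour * 1e6 / 3600

low = sustained_bandwidth_mb_s(1.0)
high = sustained_bandwidth_mb_s(2.0)
print(f"{low:.0f}-{high:.0f} MB/s sustained")  # roughly 278-556 MB/s
```

That is on the order of a fast NVMe drive running continuously, which helps explain why next-generation on-board memory and storage need to be much higher performance.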

Current chip hardware development needs to be matched with advances in deep learning, adapting and optimizing for its computing needs so that models can be baked into silicon. Researchers then reuse these development boards for testing, which is a big step beyond Udacity's test car. Car makers and chip companies have also formed a collaborative development loop: the car carries the chip on the road to collect data, which is fed back to the chip or algorithm company, helping the technology upgrade and iterate more effectively. This will be a long process.

Automotive cameras are not integrated packages; they are just a lens and some optics, so the chip and memory handling must be programmed in-house. Cameras face a variety of performance requirements: in strong daylight the exposure cannot be too high, at night night vision is needed, and sometimes a wide-angle fisheye camera is required. As a result several cameras are needed, each possibly from a different supplier. These differing functions mean the car maker itself must handle the integration and connect the cameras to the on-board computer for real-time processing. The engineering volume is huge and the software development requirements are very high. There are suppliers that offer complete solutions, but in that case the customization cycle is very long.

At present, whether in a high-end Mercedes-Benz or a lower-end Toyota or Honda, the in-vehicle network structure is very simple, roughly at the level of computer networks in the sixties and seventies. The first problem is insecurity: vehicle network data is not encrypted, so any issued instruction (such as a window-lift command) can be received by every other controller. The second is that the network is slow: its low bandwidth simply cannot satisfy the data flows autonomous driving requires. The third is fault tolerance (reliability): damage to one communication node should not affect the whole network. Autonomous driving therefore requires a more secure network structure that supports high data rates and connects reliably to the cloud.
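The insecurity point follows from the broadcast design of a classic CAN bus: every frame is visible to every node, with no encryption or sender authentication. A simplified sketch of packing such a frame (the arbitration ID and payload here are hypothetical, purely for illustration; real CAN controllers handle arbitration and CRC in hardware):

```python
import struct

# Simplified sketch of a classic CAN 2.0A data frame: 11-bit arbitration
# ID, 1-byte data length code (DLC), up to 8 data bytes. The ID 0x1A0 and
# payload below are invented for illustration -- on a real unencrypted bus,
# every node sees this frame and may act on it.
def pack_can_frame(arbitration_id: int, data: bytes) -> bytes:
    if not 0 <= arbitration_id < 2**11:
        raise ValueError("standard CAN uses 11-bit IDs")
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    # "<IB": little-endian 4-byte ID field plus 1-byte DLC, then the
    # payload zero-padded to 8 bytes.
    return struct.pack("<IB", arbitration_id, len(data)) + data.ljust(8, b"\x00")

frame = pack_can_frame(0x1A0, b"\x01")  # e.g. a made-up "window down" command
print(frame.hex())
```

The 8-byte payload limit also illustrates the bandwidth point: classic CAN tops out around 1 Mbit/s, nowhere near what camera and lidar streams need.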

Vehicle control itself is relatively mature: setting the steering to a given angle, for example, can already be done well through digital and analog signals. Waymo places great emphasis on this and designs two redundant subsystems so the car can safely rescue itself if one is damaged. In one Waymo demo the speaker attended, engineers cut a power-supply line during autonomous driving; the car judged that the system was abnormal, started its emergency mechanism, and automatically and safely pulled over to the curb. In the smarter cars of the future, this kind of redundancy is very important.
Positioning, simply put, solves the "where am I" problem. It relies on lidar and camera sensors repeatedly collecting data on the road to build high-precision maps. There are problems here too: lidar is still expensive, and the point clouds it produces are sparse, making target objects hard to distinguish and identify. One remedy is to drive the same road many times until the accumulated point cloud is dense enough; another is to pair lidar with cameras, using the camera's object-recognition ability to selectively emit and collect point cloud data on objects of interest.

Camera calibration is also a big problem. Cameras generally require both intrinsic and extrinsic calibration. Intrinsic parameters, such as focal length, are generally set at the factory. Extrinsic calibration means accurately determining the camera's mounting position and orientation on the car, which is harder: installation errors are difficult to avoid, so the actual pose does not match the intended axes and the car's perception deviates accordingly. There are now automatic calibration techniques that let the camera self-correct to some extent, such as visual odometry combined with filters (Kalman filter, particle filter, etc.), but they still fall short of requirements. Most production vehicles therefore use semi-automatic correction procedures, which hurts mass-production efficiency.

Given a high-precision map and calibrated sensors, positioning can be solved very well. You can also generate a map of your route from your own positioning and historical data; this is what SLAM is meant to do, but the algorithms have not yet reached the requirements of autonomous driving, and the GPS accuracy demands are also high. So "making maps" and "driving with maps" are two different projects, though they complement each other. Machine learning, especially the perception technology discussed below, is an integral part of both.
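The Kalman filter mentioned among the self-correction techniques is a recursive estimator that fuses noisy measurements over time. A minimal one-dimensional sketch, here estimating a single constant offset (think of a camera yaw misalignment) from noisy readings; all values are illustrative:

```python
# Minimal 1-D Kalman filter: estimate a constant offset from noisy
# measurements. q is process noise, r is measurement noise; the numbers
# below are illustrative, not from any real calibration pipeline.
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: offset assumed constant, add process noise
        k = p / (p + r)           # Kalman gain: how much to trust the new reading
        x = x + k * (z - x)       # update estimate with the measurement residual
        p = (1 - k) * p           # shrink uncertainty after the update
        estimates.append(x)
    return estimates

noisy = [0.52, 0.48, 0.55, 0.47, 0.51, 0.49]  # noisy readings of a ~0.5 rad offset
print(kalman_1d(noisy)[-1])
```

Real camera auto-calibration estimates a full 6-DoF pose rather than one scalar, and often uses a particle filter when the error model is non-Gaussian, but the predict/update loop is the same.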

In perception, deep learning is used mostly for object recognition and detection. But current perception is still shallow: actual detection accuracy is only 70-80%. For example, most systems only recognize that a target is a car, without understanding that different vehicle types should affect our driving judgment. A fire truck, a van, or a police car, especially with its lights on, should change our driving behavior, and autonomous driving algorithms do not do this yet.

After positioning and perception, the car needs to plan its next driving behavior. Planning divides into several levels. Route planning sets the driving route at the macro level, like entering a start and end point in a phone's map app; this technology is now very mature and achieves millisecond-level response. Behavior planning predicts, based on the perceived surroundings, whether to steer, accelerate, or decelerate. Motion planning is finer-grained still, planning the car's steering angle and acceleration changes over a short horizon. For path planning within 1-2 seconds, the vehicle can use RRT (rapidly-exploring random tree) or CC-RRT (chance-constrained RRT, which adds probabilistic collision checks): the machine learning system detects surrounding objects and predicts the probabilities of their future positions, and the car plans an immediate path through tree search to reduce the chance of collision. One of the best labs working on CC-RRT is at MIT; the author of CC-RRT became the head of path planning for Google's self-driving project after graduating from MIT, so Waymo's system today likely works this way. However, the above only determines behavior 1-2 seconds ahead; planning 30 seconds in advance is currently unsolved and in its infancy.
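The basic RRT loop named above is simple to sketch: repeatedly sample a point, find the nearest tree node, and step toward the sample. This toy 2-D version omits obstacles and the probabilistic collision checks that CC-RRT adds; it only illustrates the tree-growth idea:

```python
import math
import random

# Toy 2-D RRT in a 10x10 world: grow a tree from start toward goal by
# sampling, finding the nearest node, and stepping toward the sample.
# No obstacles or chance constraints -- illustration only.
def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        # Occasionally bias sampling toward the goal to speed convergence.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parents[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
print("waypoints:", len(path) if path else None)
```

CC-RRT extends this by checking, at each candidate extension, the probability of collision against the predicted distributions of other objects' future positions, and rejecting extensions that exceed a risk threshold.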

Many car manufacturers still base path planning on hand-written rule systems, setting thousands of expected driving rules in advance to avoid common accidents; but traffic scenarios are ever-changing, and no fixed rule set can cover every scene. Machine learning covers many aspects of autonomous driving, including many details of positioning, perception, and decision making. An important part to share here is decision planning: a lot of research has made good progress, but artificial intelligence cannot yet independently control driving. The image data accumulated from driving is used for machine learning: the target objects, drivable area, and driving-route changes in each frame must be labeled, which supports training for perception, prediction, and behavior planning so that the machine can later judge unprocessed images autonomously. There are currently several model families for training such systems. Behavior cloning uses CNNs (convolutional neural networks) and LSTMs (long short-term memory networks) to learn from past driving data and handle similar scenarios later, but such models struggle with scenes they have never encountered. Another approach is to feed human driving data into a GAN (generative adversarial network) so it learns to generate behavioral data comparable to human driving, then feed that into an LSTM that outputs likely future driving behaviors given past behavior, giving the system predictive ability. There is a lot of such lab work at Stanford, but the research is still in its infancy and there is a long way to go before it reaches mass-production vehicles.
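At its core, behavior cloning is supervised learning from logged human driving: observations in, the human's control commands out as labels. A real system would feed camera frames through a CNN (plus an LSTM over time); in this minimal sketch a linear model on a two-feature "observation" stands in, and the synthetic "human" steering labels are invented for illustration:

```python
import numpy as np

# Minimal behavior-cloning sketch: fit a policy to logged (observation,
# steering) pairs by minimizing mean squared error. The 2-feature
# observation [lane_offset, heading_error] and the "human" policy weights
# are synthetic stand-ins for real camera-frame inputs and driver logs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))            # [lane_offset, heading_error]
true_w = np.array([-0.8, -1.2])                  # hypothetical human policy
y = X @ true_w + rng.normal(0, 0.01, size=500)   # logged steering commands

w = np.zeros(2)
for _ in range(2000):                            # plain gradient descent on MSE
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

print(np.round(w, 2))                            # recovers roughly [-0.8, -1.2]
```

The failure mode described above also shows up here: the fitted policy is only trustworthy on inputs resembling the training distribution, which is exactly why behavior cloning struggles with scenes the human logs never covered.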

Using machine learning systems to directly drive the car is still at a very early stage. What works relatively well today is machine learning for object recognition, perception, and prediction, combined with robotics techniques (such as CC-RRT) for search and path planning.
