Avnet: From L3 to L5, How to Travel the Road to Autonomous Driving

Autonomous driving is, without question, a tantalizing promise the auto industry has made to its customers. Being able to take your hands off the wheel would turn driving, a tedious and genuinely risky chore, into something to enjoy. The appeal is hard to resist.

The reality, however, is that after years of hype we still seem a long way from true autonomous driving. Ask anyone in the industry and they can recite a list of obstacles, from technology and safety to business models and regulation, to explain why the road ahead is so long. But however many reasons there are, the trend is clear. Facing a goal the whole industry is racing toward, the attitude has to be: press ahead where the conditions exist, and create the conditions where they do not. How to travel that road, and travel it smoothly, calls for a sensible plan.


Technically speaking, realizing autonomous driving has always posed a scalability problem, because the end goal is reached in stages rather than in one leap. How to build, over this long process, a scalable architecture that can meet the computing-power and safety requirements of every level of automation has therefore become a crucial question. A scalable architecture also makes it easier to create differentiated high-, mid- and low-end products along the way, serving different user markets and allowing technology investment to be made at the right time.

Classification of autonomous driving

To answer this question properly, we have to go back to how autonomous driving is classified. The SAE J3016 standard from SAE International (originally the Society of Automotive Engineers) defines six levels of driving automation, L0 through L5. Setting aside L0 (no automation), the five levels L1 to L5 correspond to driver assistance, partial automation, conditional automation, high automation, and full automation.

[Figure] Classification of autonomous driving (SAE levels L1 to L5)

As the figure shows, the levels are distinguished by who holds driving control: the lower the level of automation, the more control the driver retains. L1, for example, covers features such as adaptive cruise control, automatic braking and lane keeping; these let the vehicle automatically control either its speed (acceleration and braking) or its steering, but not both at once. The driver keeps absolute control of the vehicle, personally observing the environment and making every judgment and decision. At L5, by contrast, the vehicle is fully automated and requires no driver intervention; in most cases the driver no longer even has a "say" in how the vehicle is driven.
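The level taxonomy above can be captured in a few lines of code. The sketch below is purely illustrative (the enum names and the fallback rule are paraphrased from the SAE J3016 level descriptions, not from any published code):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels as discussed in the text (L0 omitted)."""
    L1_DRIVER_ASSISTANCE = 1       # speed OR steering assistance, never both at once
    L2_PARTIAL_AUTOMATION = 2      # speed AND steering, but the driver supervises
    L3_CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over on request
    L4_HIGH_AUTOMATION = 4         # no driver fallback needed within the design domain
    L5_FULL_AUTOMATION = 5         # no driver needed at all

def driver_is_fallback(level: SAELevel) -> bool:
    """Up to and including L3, a human remains the fallback; from L4 the system is."""
    return level <= SAELevel.L3_CONDITIONAL_AUTOMATION
```

The `driver_is_fallback` boundary sits exactly at the L3/L4 "step" discussed below: it is the point where responsibility shifts from person to machine.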

This grading also reveals a very steep "step" between L3 and L4. From L1 to L3 the system is still a driver-oriented product whose core premise is that a person controls the car; at L4 and L5 the car is essentially a robot, operating autonomously and, most of the time, cut off from any human input. Put another way, however mysterious the marketing slogans sound, everything from L1 to L3 is still ADAS; only at L4 and L5 does a vehicle truly enter the realm of autonomous driving.

This span from L1 to L5 makes the scalability of the technical architecture mentioned above an even greater challenge.

Scalable technical architecture

To solve this problem, you first have to simplify it on the basis of a thorough understanding. The currently mainstream view in the industry is that autonomous-driving decision-making (the "think" stage) can be divided into two domains: perception and modeling (Perception and Modeling), and safe computing (Safe Computing).

Specifically, perception and modeling performs feature extraction, classification, recognition and tracking on the data from the vehicle's sensors, producing information such as what each target is, its XYZ coordinates, and its speed and heading, and outputs an occupancy grid map. That output then becomes the input to the safe-computing domain, which fuses the target grid map with environmental information, plans the best route, and dynamically predicts what may happen over the next few seconds. The result is output as two control signals, acceleration/deceleration and steering. Repeating this computation cycle produces coherent automated driving behavior.
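The two-domain flow just described can be sketched as two functions with a typed hand-off between them. This is a minimal illustration of the data flow only; the `Track` fields and the toy braking rule in `safe_compute` are invented for the example, not part of any real stack:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Track:
    """One perceived target: class label, XYZ position (m), speed (m/s), heading (deg)."""
    label: str
    position: Tuple[float, float, float]
    speed: float
    heading_deg: float

def perceive_and_model(raw_frames: List[dict]) -> List[Track]:
    """Perception-and-modeling domain (stand-in): in a real system, feature
    extraction, classification and tracking across sensors would happen here."""
    return [Track(f["label"], f["xyz"], f["speed"], f["heading"]) for f in raw_frames]

def safe_compute(tracks: List[Track]) -> Tuple[float, float]:
    """Safe-computing domain: fuse tracks with environment data, plan, predict,
    and emit the two control signals (acceleration, steering angle).
    Toy rule for illustration: brake for any slow object ahead of the vehicle."""
    accel, steer = 1.0, 0.0
    for t in tracks:
        if t.position[0] > 0 and t.speed < 1.0:  # slow obstacle ahead (x axis = forward)
            accel = -2.0                          # decelerate
    return accel, steer
```

Note that `safe_compute` consumes only `Track` objects, never raw sensor data; that clean interface is what later lets the two domains live on separate processors.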

Because the perception-and-modeling and safe-computing domains perform different functions, their specific technical requirements also differ, chiefly in functional safety and computational efficiency.

For perception and modeling, the front-end input comes from multiple sensors, typically cameras, millimeter-wave radar and lidar. To cope with complex application scenarios, at least two sensor types are needed for comprehensive and accurate data acquisition. This sensor diversity and redundancy means that each individual sensor's perception-and-modeling chain only needs to meet the ASIL-B functional-safety requirement, while the system as a whole can still reach ASIL-D. In terms of computing power, fixed-point arithmetic covers most of the data-processing needs of perception and modeling.
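The intuition behind that decomposition is that two independent ASIL-B channels observing the same scene back each other up. A hypothetical sketch (this OR-fusion rule is an illustration of redundancy, not an ASIL-certified design):

```python
from typing import List

def fuse_occupancy(camera_grid: List[List[bool]],
                   radar_grid: List[List[bool]]) -> List[List[bool]]:
    """Conservative fusion of two redundant occupancy grids: a cell counts as
    occupied if EITHER independent channel reports it, so a miss by one
    channel cannot hide an obstacle from the planner."""
    return [[c or r for c, r in zip(cam_row, rad_row)]
            for cam_row, rad_row in zip(camera_grid, radar_grid)]
```

Because each input grid comes from an independent sensing chain, a fault in one channel degrades rather than defeats the combined output; that is the basis for claiming a higher safety level for the whole than for any single part.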

Safe computing is a different story. After sensor fusion there is no longer any data diversity or redundancy, so the safe-computing processor itself must meet the ASIL-D functional-safety requirement. Its high computational complexity also demands both fixed-point and floating-point arithmetic, with floating point mainly accelerating vector and linear-algebra operations. And from a safety standpoint, deterministic algorithms must be used; a neural network, whose decisions cannot be traced back, is not suitable here. All of these efficiency requirements call for a matching computing architecture.

Now imagine using a single computing architecture to handle both perception-and-modeling and safe computing. It is clearly uneconomical, and it sacrifices flexibility: expanding the number or type of sensors would mean replacing the entire processor design. The idea behind a scalable architecture is therefore to design a separate processor chip for each of the two domains, so that subsequent system expansion and upgrades become much easier.
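One way to picture the two-chip split is as an interface boundary: perception units can be added or swapped per sensor without touching the safe-computing side. The class names below are invented for illustration; real automotive platforms define these boundaries in hardware and AUTOSAR-style software, not ad-hoc Python:

```python
from abc import ABC, abstractmethod
from typing import List

class PerceptionUnit(ABC):
    """One per sensor chain (ASIL-B side in the text's decomposition)."""
    @abstractmethod
    def process(self, raw_frame: bytes) -> dict:
        """Turn a raw sensor frame into target/grid data."""

class SafeComputeProcessor:
    """ASIL-D side: consumes fused target data regardless of how many
    perception units feed it."""
    def __init__(self, units: List[PerceptionUnit]):
        self.units = list(units)

    def add_sensor(self, unit: PerceptionUnit) -> None:
        # Scaling up means adding a perception unit,
        # not replacing this processor.
        self.units.append(unit)
```

The point of the sketch is the asymmetry: adding a lidar or a second camera touches only the perception side of the boundary, which is exactly the flexibility the single-architecture approach gives up.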

In this way, one architecture can meet the technical requirements of every level of autonomous driving from L1 to L5. Whether developers are exploring future-oriented technology or building products for today's market, they can move in either direction with ease. With this understanding, and the technology to back it up, the industry can climb the steps toward autonomous driving with a steadier stride.
