Top 10 technology trends in the global semiconductor industry in 2022

The global chip shortage that began in the fall of 2020 persisted throughout 2021. While the semiconductor industry expands production capacity, it is also actively upgrading processes to raise output. Meanwhile, the novel coronavirus continues to mutate, and the ongoing pandemic still weighs on the entire semiconductor industry. The habits formed around remote work, online meetings, and online education have accelerated the digital transformation of many industries and, in turn, driven technology upgrades in network communications, AI, storage, and cloud services.

Over the past year, the AspenCore global analyst team spoke with industry experts and manufacturers and, after review and analysis, selected ten major technology trends that will emerge or develop rapidly in the global semiconductor industry in 2022.


 1. 3nm processes reach mass production, while the 2nm race grows more uncertain

On the leading edge of semiconductor manufacturing, Samsung’s foundry business moved in 2020 to treat 4LPE as a full process node for the time being, meaning the 4nm process will be the focus of Samsung’s promotion for some time to come. In addition, TSMC’s announcements in October 2021 made it fairly clear that the N3 process will slip slightly, so 2022 may well become the year of 4nm; the iPhone 14 has almost no chance of catching the 3nm process.

That said, it is fairly clear that although chips built on TSMC’s N3 process may not ship until the first quarter of 2023 at the earliest, N3 mass production is firmly planned for the fourth quarter of 2022.

At the same time, we believe Samsung’s 3nm GAA process may arrive a little later than TSMC’s N3. Samsung made GAA transistors the centerpiece of its 3nm node, but it has not advanced as planned, and based on the data Samsung has published so far, its earliest 3nm process may carry greater technical uncertainty.

As for Intel 3, even by Intel’s own plan it will not make it in 2022 at all. We believe TSMC’s N3 will retain its dominant market position and, for now, holds a significant lead over its two rivals. But easing off the pace on N3 plants a hidden risk for the arrival of the 2nm era.

On one hand, Intel’s 20A process is expected to arrive in the first half of 2024 and Intel 18A may appear in the second half of 2025, and Intel’s determination to reclaim technology leadership at these two nodes is considerable. On the other hand, Samsung’s 2nm process, expected to reach mass production in the second half of 2025, will use its third-generation GAA transistors; in other words, although its 3nm process will struggle to win a market advantage, it will provide a strong technical foundation for 2nm. All of this adds uncertainty to the coming competition at the 2nm node.

 2. DDR5 standard memory enters mass production and commercial use

On July 15, 2020, to address the performance and power challenges facing applications ranging from client systems to high-performance servers, the JEDEC Solid State Technology Association officially released the final specification of the next-generation mainstream memory standard, DDR5 SDRAM (JESD79-5), opening a new era in computer memory technology. JEDEC describes DDR5 as a “revolutionary” memory architecture and believes its arrival marks the industry’s transition to DDR5 server dual in-line memory modules (DIMMs).

Market research firm Omdia notes that demand for DDR5 began to emerge in 2020; by 2022 DDR5 is expected to take 10% of the overall DRAM market, expanding to 43% in 2024. Yole Développement predicts that widespread adoption of DDR5 will begin in the server market in 2022, and that in 2023 mainstream markets such as mobile phones, laptops, and PCs will adopt DDR5, with shipments significantly surpassing DDR4 and a rapid transition between the two technologies completed.

The fundamental motivation behind DDR5 is that memory bandwidth has fallen far behind the growth in processor performance. Unlike previous generations, which focused on reducing power consumption and treated the PC as the priority application, the industry generally expects DDR5 to follow DDR4’s path and arrive first in the data center.

The most eye-catching aspect of DDR5 is that it is even faster than the already speedy DDR4. Where DDR4 tops out at a 3.2Gbps transfer rate at a 1.6GHz clock, DDR5 reaches a maximum transfer rate of 6.4Gbps, while the supply voltage drops from DDR4’s 1.2V to 1.1V, further improving the memory’s energy efficiency.
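
As a quick sanity check on those figures, the sketch below converts the quoted per-pin transfer rates into peak bandwidth for a standard 64-bit module; the formula and module width are common assumptions, and real-world throughput will be lower once refresh and protocol overheads are included.

```python
# A minimal sketch (not vendor data): peak bandwidth of one 64-bit DIMM from the
# per-pin transfer rates quoted above. Real modules lose some of this to
# refresh, bus turnaround, and other protocol overheads.

def peak_bandwidth_gb_s(transfer_rate_gbps: float, bus_width_bits: int = 64) -> float:
    """Peak bandwidth (GB/s) = per-pin rate (Gb/s) * bus width (bits) / 8 bits per byte."""
    return transfer_rate_gbps * bus_width_bits / 8

print(f"DDR4-3200: {peak_bandwidth_gb_s(3.2):.1f} GB/s")  # ~25.6 GB/s
print(f"DDR5-6400: {peak_bandwidth_gb_s(6.4):.1f} GB/s")  # ~51.2 GB/s
```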

Global memory giants Samsung, SK Hynix, and Micron have all announced their DDR5 mass production and commercialization timetables. The rollout of DDR5 will not happen overnight, however; it needs strong support from an ecosystem that includes system and chipset vendors, channel partners, cloud service providers, and original equipment manufacturers.

 3. The DPU market pie keeps growing and is set to take off

The DPU name has grown steadily louder since the end of 2020. Two market moves popularized the term: first, after acquiring the Israeli company Mellanox, Nvidia coined “DPU” the following year; second, the start-up Fungible promoted the name in the same year.

The “D” in DPU stands for data. Nvidia’s Jensen Huang is, it must be said, a marketing genius: the SmartNIC was rebranded as a “data processing unit,” and in short order dozens of DPU start-ups appeared.

The DPU is essentially an evolution of the SmartNIC, but its popularity makes plain the data center’s strong appetite for processors dedicated to data handling, as well as the push to make that role more fixed and standardized in form.

Years ago the data center world coined the phrase “data center tax”: servers ship with many-core CPUs, yet by default a portion of those cores is “swallowed” before the actual business workload runs, because processor resources must be spent on virtual networking, security, storage, virtualization, and so on. As these tasks grew ever more complex, the DPU emerged. Just as the GPU serves graphics computing and the NPU serves AI computing, the DPU is a product of this era’s rise of domain-specific computing.

Broadly speaking, a DPU’s work includes: first, offloading tasks such as OVS, storage, and security services that previously ran on the CPU; second, providing isolation and implementing virtualization in cooperation with the hypervisor; and third, further accelerating data processing across nodes in various ways, as sketched below.
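
To make the offload idea concrete, here is a minimal toy model of the “data center tax” described above; the infrastructure task list and per-task core costs are invented for illustration, not measured figures.

```python
# Toy model of the "data center tax": how many CPU cores remain for tenant
# workloads before and after infrastructure tasks are offloaded to a DPU.
# The task list and per-task core costs are hypothetical, for illustration only.

INFRA_TASKS = {
    "virtual_switching_ovs": 4,   # hypothetical cores spent on OVS / virtual networking
    "storage_services": 3,        # hypothetical cores spent on storage stacks
    "security_crypto": 2,         # hypothetical cores spent on encryption / firewalling
    "hypervisor_overhead": 1,     # hypothetical cores spent on virtualization management
}

def cores_for_business(total_cores: int, offloaded: set) -> int:
    """Cores left for business workloads; tasks not offloaded stay on the CPU."""
    tax = sum(cost for task, cost in INFRA_TASKS.items() if task not in offloaded)
    return total_cores - tax

print(cores_for_business(64, offloaded=set()))              # 54: ten cores paid as "tax"
print(cores_for_business(64, offloaded=set(INFRA_TASKS)))   # 64: the tax moves to the DPU
```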

It is therefore easy to see why the DPU is becoming standard equipment in the data center. Note, however, that different DPU implementations should not be judged on the same stage, because their roles differ. Intel’s IPU, for example, is also a DPU, yet its responsibilities and emphasis differ from Nvidia’s DPU. The DPU market may therefore segment, and data center system companies are developing their own, better-fitting DPUs in-house, which adds uncertainty to the DPU market.

 4. Compute-in-memory overcomes the “memory wall” and “power wall”

The concept of processing in memory (PIM) can be traced back to the 1970s, but constrained by chip design complexity and manufacturing cost, and lacking a killer big-data application to drive it, it long remained lukewarm.

With advances in chip manufacturing and the growth of artificial intelligence (AI) applications in recent years, processors have gained ever more compute power, higher speed, and larger storage capacity. Facing this torrent of data, slow data movement and high energy consumption have become the computing bottleneck: fetching data from memory outside the processing unit often takes hundreds or thousands of times longer than the computation itself, and data movement can account for roughly 60%-90% of total energy consumption, leaving energy efficiency very low.

Meanwhile, Moore’s Law is approaching its limits, and the von Neumann architecture, constrained by the memory wall, can no longer deliver the compute-power gains this era demands. Among the non-von Neumann approaches attempting to deal with the “memory wall” and “power wall”, including low-voltage subthreshold digital logic ASICs, neuromorphic computing, and analog computing, compute-in-memory is the most direct and efficient.

Compute-in-memory can be understood as embedding computation in the memory itself so that storage cells gain computing capability. Rather than optimizing traditional logic compute units, this new architecture performs two- and three-dimensional matrix multiplication directly in the array. In theory it eliminates the latency and power of data movement, can improve AI computing efficiency by a factor of hundreds, and lowers cost, making it especially well suited to neural networks.
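
The following sketch models, in plain NumPy, the kind of in-array matrix-vector multiply a compute-in-memory macro performs, treating the weight matrix as stored conductances and adding simple quantization and read-noise terms; the array size, level count, and noise scale are illustrative assumptions only.

```python
import numpy as np

# Conceptual model of an analog compute-in-memory crossbar: the weight matrix is
# "stored" in the array (as conductances), inputs are applied as voltages, and
# each column accumulates a multiply-accumulate result in place. All sizes and
# noise/quantization figures below are illustrative assumptions.

rng = np.random.default_rng(0)

weights = rng.uniform(-1.0, 1.0, size=(128, 64))   # values held in the memory array
x = rng.uniform(0.0, 1.0, size=128)                # input activations (drive voltages)

# Quantize weights to the limited number of conductance levels a cell can store.
levels = 16
w_q = np.round(weights * (levels / 2)) / (levels / 2)

ideal = x @ w_q                                    # in-array matrix-vector product
noise = rng.normal(0.0, 0.01 * np.abs(ideal).max(), size=ideal.shape)  # analog read noise
measured = ideal + noise

print("relative error vs. ideal:", np.linalg.norm(measured - ideal) / np.linalg.norm(ideal))
```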

A large number of compute-in-memory chip companies at home and abroad have now surfaced through funding announcements, with rounds starting from 100 million yuan, which shows that in the post-Moore era heterogeneous computing and new architectures are winning the favor of capital. Companies pursue different technical directions depending on the storage medium: some use memristors, others SRAM, DRAM, or Flash. As 3D stacking matures and new non-volatile memory devices improve, compute-in-memory will come into its own.

 5. The focus of 5G construction shifts to standalone (SA) networking and millimeter wave

With fiber-like speeds, ultra-low latency, and large network capacity, 5G is exerting an influence comparable to electricity, transforming every industry.

As a powerful complement to the sub-6GHz bands, 5G millimeter wave offers abundant bandwidth, combines readily with beamforming, and delivers ultra-low latency, advantages that favor the growth of the industrial internet, AR/VR, cloud gaming, real-time computing, and other industries. Millimeter wave also supports dense-area deployment, high-precision positioning, and high equipment integration, which helps shrink base stations and terminals.

According to the GSMA “Millimeter Wave Application Value” report, by 2035 5G millimeter wave is expected to create 565 billion U.S. dollars of global GDP and generate 152 billion U.S. dollars in tax revenue, accounting for 25% of the total value created by 5G. Another report, “5G Millimeter Wave in China”, estimates that by 2034 the economic benefit of using millimeter wave bands in China will reach 104 billion U.S. dollars, with manufacturing and utilities (such as water and electricity) contributing 62% of the vertical-industry total, professional and financial services 12%, and information, communications, and trade 10%.

At present, 186 operators in 48 countries are planning to deploy 5G in the 26-28GHz, 37-40GHz, and 47-48GHz millimeter wave bands, and 134 operators in 23 countries already hold licenses for millimeter wave deployment, with North America, Europe, and Asia accounting for 75% of all spectrum deployments. The 26-28GHz band has seen the most deployments and licenses, followed by 37-40GHz.

Not every application scenario requires millimeter wave coverage, however. In July 2021, ten departments including China’s Ministry of Industry and Information Technology jointly issued the “5G Application ‘Sailing’ Action Plan (2021-2023)”, proposing to deepen 5G deployment in nine scenarios including the industrial internet of things, internet of vehicles, logistics, ports, power, and agriculture. These scenarios place very high demands on bandwidth and latency, where millimeter wave can readily play to its strengths.

 6. EDA tools begin to use AI to design chips

Smartphones, connected vehicles, IoT devices, and other terminals are placing ever higher demands on the power, performance, and area (PPA) of systems-on-chip (SoCs). Facing chip designs with tens of billions of transistors, along with new packaging directions such as heterogeneous integration, system-in-package, and chiplets, engineers relying on existing design methods alone, without help from machine learning (ML) and artificial intelligence, will face increasingly severe challenges.

To take AI-assisted design from concept to practice, both the EDA industry and academia have begun to act, whether through “AI Inside”, applying AI algorithms within EDA tools to empower chip design, or “AI Outside”, focusing on how to build EDA tools that help design AI chips efficiently. At the national strategic level, the U.S. Defense Advanced Research Projects Agency (DARPA) has made the Intelligent Design of Electronic Assets (IDEA) program a flagship effort, targeting breakthroughs in optimization algorithms, design support below 7nm, routing and device automation, and other key technical problems.

Using AI in chip design is, in fact, nothing new. Google applied AI techniques when designing its TPU chips; Samsung has folded AI into its chip design flow and reportedly exceeded previously achievable PPA results; Nvidia is using AI algorithms to optimize the design of 5nm and 3nm chips…

In general, the back end of chip design (physical implementation), especially placement and routing, which consume a huge share of engineering effort, is where AI delivers the most. Rapid modeling, circuit simulation, and improving VLSI quality of results (QoR) are also directions in which EDA is applying AI. For now, AI’s strength lies in executing large-scale computation and in extracting or enhancing certain capabilities; the “zero to one” creative stage and decision-making still require human engineers. Even so, AI looks like the ultimate direction of EDA’s evolution and the key to improving chip design efficiency over the next few years.
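
As a rough illustration of what “AI Inside” design-space exploration looks like in spirit, the sketch below runs a simple random search over a few hypothetical physical-design knobs against a made-up PPA cost function; commercial tools use far richer models (for example, reinforcement learning over placement), and none of the knobs, weights, or ranges here come from a real flow.

```python
import random

# Sketch of black-box design-space exploration: random search over a few
# physical-design "knobs" against a toy PPA cost function. The knobs, weights,
# and cost model are invented for illustration only.

random.seed(42)

def toy_ppa_cost(utilization: float, aspect_ratio: float, clock_ghz: float) -> float:
    """Lower is better: crude stand-ins for area, timing, and power trade-offs."""
    area = 1.0 / utilization                                     # denser placement -> less area
    timing = max(0.0, clock_ghz - 2.0) ** 2 * utilization ** 2   # congestion hurts high clocks
    power = 0.3 * clock_ghz + 0.1 * utilization
    floorplan = abs(aspect_ratio - 1.0) * 0.2                    # penalize skewed floorplans
    return area + timing + power + floorplan

best_cost, best_knobs = float("inf"), None
for _ in range(5000):
    knobs = (random.uniform(0.5, 0.95),   # placement utilization
             random.uniform(0.6, 1.6),    # floorplan aspect ratio
             random.uniform(1.0, 3.0))    # target clock in GHz
    cost = toy_ppa_cost(*knobs)
    if cost < best_cost:
        best_cost, best_knobs = cost, knobs

print("best cost:", round(best_cost, 3),
      "knobs:", tuple(round(k, 2) for k in best_knobs))
```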

 7. Matter will promote the unification of IoT and smart home interconnection standards

The Connectivity Standards Alliance (formerly the Zigbee Alliance) and smart home manufacturers including Amazon, Apple, and Google have developed Matter, a standardized interconnection protocol built on the original Project Connected Home over IP (CHIP). It aims to make IoT devices from different manufacturers, using various wireless connectivity standards, interoperable and compatible, giving consumers a better installation and operating experience while simplifying the development of connected devices for manufacturers and developers.

As an application layer, Matter can unify devices that use different IP protocols and interconnection standards and support cross-platform communication. Matter-certified products are compatible with smart home ecosystems such as Amazon Alexa, Apple HomeKit, and Google Home. The Matter protocol currently runs over three underlying communication protocols, Ethernet, Wi-Fi, and Thread, and uses Bluetooth Low Energy (BLE) for pairing. Matter will not replace any existing wireless IoT protocol; it is an architecture that runs on top of existing protocols, and it will support more of them in the future, including Zigbee and Z-Wave.
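
The snippet below is a highly simplified data model of that layering, using only the transport names mentioned above; the device structure and the check itself are assumptions made for illustration and are not drawn from the Matter specification.

```python
from dataclasses import dataclass, field

# Simplified model of the layering described above: Matter rides on IP
# transports (Ethernet, Wi-Fi, Thread) and uses BLE only for commissioning.
# This is an illustrative sketch, not the real Matter specification.

MATTER_IP_TRANSPORTS = {"ethernet", "wifi", "thread"}
COMMISSIONING_ONLY = {"ble"}

@dataclass
class Device:
    name: str
    radios: set = field(default_factory=set)

def can_operate_on_matter(device: Device) -> bool:
    """A device needs at least one supported IP transport to carry Matter traffic."""
    return bool(device.radios & MATTER_IP_TRANSPORTS)

door_sensor = Device("door_sensor", {"thread", "ble"})
print(can_operate_on_matter(door_sensor))   # True: Thread carries Matter; BLE only pairs
```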

Backed by internet giants (Amazon, Apple, and Google) and chip suppliers (Silicon Labs, NXP, and Espressif), the Matter standard is expected to grow and spread rapidly worldwide starting in 2022, becoming a unified interconnection standard for the IoT and the smart home.

 8. RISC-V architecture processors enter the field of high-performance computing applications

RISC-V, born at UC Berkeley ten years ago, has become a mainstream microprocessor instruction set architecture (ISA), but its applications remain largely confined to embedded systems and microcontrollers (MCUs), especially the emerging IoT market. Can this open-source, royalty-free architecture carry the same weight as x86 and Arm in high-performance computing (HPC)? From chip giants and fabless start-ups to processor core IP developers, everyone is trying to push RISC-V into high-performance applications such as data centers, AI, 5G, and servers. RISC-V has the potential to split the market three ways with x86 and Arm.

SiFive’s Performance series is its highest-performance family of RISC-V cores, designed for networking, edge computing, autonomous machines, 5G base stations, and virtual/augmented reality. The latest P550 implements the RISC-V RV64GBC ISA with a 13-stage-pipeline, triple-issue, out-of-order microarchitecture, supports a quad-core cluster sharing 4MB of L3 cache, and runs at 2.4GHz. The P550 scores 8.65 per GHz on SPECint2006; compared with the Arm Cortex-A75 it delivers higher performance on the SPECint2006 and SPECfp2006 integer and floating-point benchmarks while occupying a much smaller footprint, with a quad-core P550 cluster taking roughly the same area as a single Cortex-A75.
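
For readers who prefer an absolute number, the quick calculation below multiplies the quoted per-GHz figure by the 2.4GHz clock; it is only an estimate, since published SPEC results also depend on the memory subsystem and compiler.

```python
# Back-of-the-envelope conversion of the per-GHz figure into an absolute score;
# an estimate only, since real SPEC results also depend on memory and compiler.

specint2006_per_ghz = 8.65
clock_ghz = 2.4
print(f"estimated SPECint2006 at {clock_ghz} GHz: {specint2006_per_ghz * clock_ghz:.1f}")  # ~20.8
```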

Intel will use the P550 core in its 7nm Horse Creek platform. By combining Intel interface IP such as DDR and PCIe with SiFive’s highest-performance processors, Horse Creek will provide a valuable, scalable development vehicle for high-end RISC-V applications.

Esperanto, a Silicon Valley IC design start-up, has launched ET-SoC-1, an AI accelerator chip integrating more than 1,000 RISC-V cores and built specifically for data center AI inference. Fabricated on TSMC’s 7nm process with 24 billion transistors, ET-SoC-1 contains 1,088 high-performance ET-Minion 64-bit RISC-V in-order cores, each with its own vector/tensor unit; four high-performance ET-Maxion 64-bit RISC-V out-of-order cores; more than 160MB of on-chip SRAM; external interfaces for large-capacity LPDDR4x DRAM and eMMC flash; and PCIe x8 Gen4 and other general-purpose I/O. The chip’s peak compute performance is 100-200 TOPS for ML inference, at an operating power below 20W.
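
Dividing the quoted peak throughput by the stated power budget gives a rough efficiency range, as the short calculation below shows; actual efficiency depends on utilization and workload mix.

```python
# Rough energy-efficiency range implied by the quoted figures (peak TOPS over the
# stated ~20W operating power); a simplification that ignores real utilization.

power_w = 20
for peak_tops in (100, 200):
    print(f"{peak_tops} TOPS / {power_w} W = {peak_tops / power_w:.0f} TOPS/W")
```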

Alibaba T-Head’s Xuantie 910 RISC-V processor is built on a 12nm process with 16 cores, clocks up to 2.5GHz, and delivers up to 7.1 CoreMark/MHz. This high-performance processor IP can be used to design chips for 5G, artificial intelligence, network communications, and autonomous driving. RVB-ICE, a RISC-V development board from T-Head built around the Xuantie 910, supports basic Android functionality, runs at up to 1.2GHz, integrates Wi-Fi and GMAC network interfaces, and provides 16GB of eMMC storage. Developers can use the board to take part in building out the RISC-V and Android ecosystem.

 9. Advanced packaging technology becomes the “new Moore’s Law”

For decades, Moore’s Law has guided the semiconductor industry like a beacon. But as physical limits and manufacturing costs bite, and leading-edge processes reach 5nm, 3nm, and even 2nm, the logic of extracting ever more economic value from transistor scaling is gradually breaking down.

From a market perspective, data computing has grown more over the past decade than in the previous four decades combined; cloud computing, big data analytics, artificial intelligence, AI inference, mobile computing, and even autonomous vehicles all demand massive compute. To keep computing power growing, beyond continuing to increase density through CMOS scaling, heterogeneous computing, which combines hardware with different processes and architectures, different instruction sets, and different functions, has become one of the important approaches.

As a result, an IC technology roadmap that is no longer a straight line, together with the market’s demand for innovative solutions, has pushed packaging, especially advanced packaging, to the forefront of innovation.

Recent survey data show the advanced packaging market growing at a compound annual rate of about 7.9% from 2020 to 2026, with revenue exceeding 42 billion U.S. dollars by 2025, far outpacing the expected growth rate (2.2%) of the traditional packaging market. Within it, 2.5D/3D stacked ICs, embedded die packaging (ED), and fan-out packaging (FO) are the fastest-growing technology platforms, with compound annual growth rates of 21%, 18%, and 16%, respectively.

OSAT houses, wafer foundries, IDMs, fabless companies, and EDA tool vendors have all joined the competition in advanced packaging and are investing heavily. Broadly, for the foreseeable future, 2.5D/3D packaging will be the core of “advanced packaging”, with higher interconnect density and chiplet-based design as the two technology paths driving it forward. Realizing the full value of advanced packaging will require coordination across the entire industry chain.

10. Automotive domain controllers and the car’s “brain”

As the automotive industry continues to evolve toward the “new four modernizations” (electrification, connectivity, intelligence, and sharing), the entire automotive electrical/electronic architecture is transitioning from the traditional distributed architecture, to a domain-controller-based centralized architecture (DCU-based centralized), and on to a zonal architecture based on domain fusion (DCU-fusion-based zonal).

At present, automotive electrical/electronic architectures at home and abroad mostly take the form of a three-domain control architecture: smart cockpit, smart computing, and smart driving. After 2030, as autonomous driving technology routes mature, the high-performance autonomous driving chip is expected to merge with the cockpit main control chip into a central computing chip, further improving compute efficiency and cutting cost through integration.

This means today’s cars need a very powerful “brain”: one that can serve as the hardware hub while also providing the substantial compute needed to meet the new software and hardware requirements that arise during this transformation.

For autonomous driving systems, the industry generally regards the progressive route from L2+ assisted driving to L4/L5 autonomy as the most feasible path. This requires the central computing platform to be highly scalable, support smooth evolution of system development, satisfy the differing compute and power requirements of each level of automation, and improve the development efficiency of partners such as OEMs.

Of course, automotive “brain” chips cannot chase peak compute alone; they must strike a balance across information security, functional safety, heterogeneous architecture design, handling of different data types, thermal management, and more. And since “software-defined vehicles” has become an industry consensus, designs must reserve enough headroom to cope with continual changes in vehicle architectures and AI algorithms.

In the future, the car will undoubtedly become an intelligent mechatronic device, and consolidating existing subsystems as far as possible will be the trend. As hardware development bottlenecks are broken through, an excellent, software-led user experience is becoming a key selling point for cars.

  
