Industry News

NVIDIA has made large upfront payments to SK Hynix and Micron to secure HBM3 memory supply, sources say

Update time : 2024-01-02 14:50:03
        Dec. 31 (Bloomberg) -- South Korea's Chosun Biz reports that, in addition to booking a large amount of TSMC capacity, NVIDIA has also paid heavily to lock in contracts for HBM3 memory supply.
 
 
        The company has pre-ordered between 700 billion and 1 trillion won (roughly 3.84 billion to 5.49 billion yuan) of HBM3 memory from Micron and SK Hynix, the sources said. Although the specific use of these payments has not been disclosed, the industry widely believes the goal is to secure a stable HBM supply in 2024, so that the launch of the new generation of AI and HPC GPUs is not derailed by memory shortages.
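The yuan figures above follow directly from the won amounts at a fixed exchange rate; a minimal sketch, assuming a rate of about 0.00549 yuan per won (the rate implied by the article's own conversion of 1 trillion won to about 5.49 billion yuan):

```python
# Convert a won amount to yuan at an assumed late-2023 exchange rate.
# The rate below (0.00549 CNY per KRW) is inferred from the article's
# own conversion of 1 trillion won to about 5.49 billion yuan; it is
# an illustrative assumption, not an official rate.
KRW_TO_CNY = 0.00549

def krw_to_cny(amount_krw: float) -> float:
    """Convert South Korean won to Chinese yuan at the assumed rate."""
    return amount_krw * KRW_TO_CNY

# The upper end of the reported prepayment range: 1 trillion won.
print(f"{krw_to_cny(1e12) / 1e9:.2f} billion yuan")  # prints "5.49 billion yuan"
```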
        In addition, industry sources reveal that next year's HBM capacity at the three major memory makers (Samsung Electronics, SK Hynix and Micron) has already sold out completely. The industry expects the HBM market to grow rapidly over the next two years, driven by fierce competition among semiconductor companies in the AI business. According to available information, NVIDIA is preparing to launch two products equipped with HBM3E memory: the H200 GPU, which carries 141 GB of HBM3E, and the GH200 superchip, hence its need for large volumes of HBM. The H200 is currently the world's most powerful AI chip and the first GPU to ship with HBM3E; it is based on NVIDIA's Hopper architecture, is compatible with the H100, and delivers 4.8 TB/s of memory bandwidth.
        On AI performance, NVIDIA says the HGX H200 doubles inference speed on Llama 2 (a 70-billion-parameter LLM) compared with the H100. The HGX H200 will be offered in 4-way and 8-way configurations that are compatible with the software and hardware of H100 systems. It will be available for every type of data center (on-premises, cloud, hybrid cloud and edge), will be deployed by Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure, among others, and is due to ship in the second quarter of 2024.