Industry News

With more than 4,000 chips strung together, Google says its supercomputers are faster and more energy efficient than Nvidia's

Update time : 2023-04-18 12:00:50
        Alphabet Inc.'s Google on Tuesday unveiled new details about the supercomputers it uses to train artificial intelligence models, saying the systems are faster and more power-efficient than comparable Nvidia systems.
        Google designs its own chips, called Tensor Processing Units (TPUs), for training AI models; the chips handle more than 90 percent of the company's AI training work, on tasks such as answering questions in natural language or generating images. Google's TPUs are now in their fourth generation. In a scientific paper published on Tuesday, Google detailed how it uses custom-developed optical switches to link more than 4,000 of the chips together into a single supercomputer.
 
        Improving these connections has become a key point of competition among companies building AI supercomputers, because the so-called large language models that power technologies such as Google's Bard or OpenAI's ChatGPT have grown too large to fit on a single chip.
        Instead, a model must be partitioned across thousands of chips, which then have to work in tandem for weeks or more to train it. Google's PaLM model - its largest publicly disclosed language model to date - was trained over 50 days by spreading it across two 4,000-chip supercomputers.
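The partitioning described above can be sketched in miniature. The `shard_parameters` helper below is purely illustrative (not Google's actual method): it assigns a model's parameter tensors to chips round-robin, whereas real systems use far more sophisticated placement and inter-chip communication strategies.

```python
# Toy sketch of model partitioning: a model too big for one chip is split
# so that each chip holds only a slice of the parameters.

def shard_parameters(params, num_chips):
    """Assign each parameter tensor to a chip in round-robin order."""
    shards = {chip: [] for chip in range(num_chips)}
    for i, p in enumerate(params):
        shards[i % num_chips].append(p)
    return shards

# A hypothetical model with 10 parameter tensors spread over 4 chips.
model_params = [f"layer_{i}_weights" for i in range(10)]
placement = shard_parameters(model_params, num_chips=4)
for chip, tensors in placement.items():
    print(f"chip {chip}: {tensors}")
```

During training, every chip computes with only its own slice and exchanges intermediate results with the others, which is why the speed of the interconnect between chips matters so much.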
        Google says its supercomputers can easily reconfigure the connections between the chips in real time, helping to avoid problems and improve performance. 
        In a blog post about the system, Google researcher Norm Jouppi and Google Distinguished Engineer David Patterson wrote: "Circuit switching made it easy for us to bypass faulty components. This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of ML (machine learning) models." 
        Although Google is only now announcing details of the supercomputer, the system has been running internally since 2020, in a data centre in Mayes County, Oklahoma (USA). Google said the startup Midjourney used the system to train its model, which generates images from text prompts.
        In its paper, Google said its supercomputer was 1.7 times faster and 1.9 times more energy-efficient than a comparably sized system built on Nvidia's A100 chip. Google said it did not compare the fourth-generation TPU with Nvidia's current flagship H100 chip, because the H100 came to market after Google's chip and was built with newer technology. Google hinted that it may be working on a new TPU to compete with the H100.


 