Moore's Law has come to its end, says Nvidia CEO Jensen Huang
Jean Chu, Taipei; Willis Ke, DIGITIMES [Wednesday 27 September 2017]

Nvidia founder and CEO Jensen Huang has said that, with the emergence of GPU computing following the decline of the CPU era, Moore's Law has come to an end, stressing that his company's GPU-centered ecosystem has won support from China's top-five AI (artificial intelligence) players.

Huang made the statement while delivering a keynote speech, titled "AI: Trends, Challenges and Opportunities," at the GPU Technology Conference (GTC) China 2017, held on September 26 in Beijing. This made him the first head of a major semiconductor company to say publicly what academics have been suggesting for some time: Moore's Law is dead. Moore's Law, named after Intel co-founder Gordon Moore, reflects his observation that the number of transistors in a dense integrated circuit doubles approximately every two years.

Huang's remarks stand in sharp contrast to Intel's earlier statement that Moore's Law will not fail, made at the company's Technology and Manufacturing Day, also held in Beijing, on September 19, when it provided updates on its 10nm process.

Huang said the industry has now moved beyond Moore's Law, which has become outdated. He stressed that both GPU computing capability and neural network performance are advancing at a faster pace than that set by Moore's Law. He continued that while the number of CPU transistors has grown at an annual pace of 50%, CPU performance has improved by only 10%, adding that designers can hardly devise more advanced parallel instruction-set architectures for the CPU, and that the GPU will therefore soon replace it.

GPU most ideal for AI applications

Nvidia has been rivaling Intel in AI chip development, with the former promoting the GPU as the future choice and the latter maintaining that the CPU offers better performance. Huang said that Nvidia's GPUs can make up for the CPU's deficiencies, as their strength in high-intensity computation makes them the most suitable solution for AI application scenarios.

He also disclosed that Alibaba, Baidu, Tencent, JD.com and iFLYTEK, now China's top-five e-commerce and AI players, have adopted the Nvidia Volta GPU architecture to support their cloud services, while Huawei, Inspur and Lenovo have also deployed the firm's HGX-based GPU servers.

At the conference, Nvidia also showcased its latest product, TensorRT 3, a programmable inference accelerator that can sharply boost the performance and slash the cost of inferencing from the cloud to edge devices, including self-driving cars and robots. With TensorRT 3, Huang said, customers will need far less data center capacity, as the accelerator serves diverse applications and thus cuts costs substantially.

Nvidia is now teaming up with Huawei, Inspur and Lenovo to develop the Tesla V100-based HGX-1 accelerator dedicated to AI applications. Huang said that a single HGX server can outperform 150 traditional CPU servers in handling voice, speech and image recognition and inferencing operations, while the cost of using an HGX-1 server for AI training and inferencing is only one fifth and one tenth, respectively, of that of using a traditional CPU server.
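For readers who want to see how the rates Huang cites diverge over time, the short Python sketch below (not part of the article, purely illustrative) compounds the three figures mentioned above: a Moore's Law doubling every two years, roughly 50% annual growth in CPU transistor count, and roughly 10% annual growth in CPU performance.

```python
# Back-of-the-envelope comparison of the growth rates cited in the article.
# These are illustrative assumptions, not measurements: Moore's Law is taken
# as a doubling every two years (~41.4% per year), transistor count growth
# as 50% per year, and CPU performance growth as 10% per year.

def compound(annual_rate: float, years: int) -> float:
    """Cumulative growth factor after `years` at a fixed annual rate."""
    return (1 + annual_rate) ** years

moores_law_rate = 2 ** 0.5 - 1   # doubling every 2 years ~= 41.4% per year
transistor_rate = 0.50           # CPU transistor growth cited by Huang
cpu_perf_rate = 0.10             # CPU performance growth cited by Huang

for years in (2, 5, 10):
    print(f"After {years} years: "
          f"Moore's Law x{compound(moores_law_rate, years):.1f}, "
          f"transistors x{compound(transistor_rate, years):.1f}, "
          f"CPU performance x{compound(cpu_perf_rate, years):.1f}")
```

Compounded over a decade under these assumptions, transistor count grows about 58x while CPU performance grows only about 2.6x, which illustrates the widening gap behind Huang's argument that CPUs can no longer translate additional transistors into proportional performance.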