Google's Tensor Processing Unit Could Advance Moore's Law 7 Years Into The Future

Image Credits: Google Cloud Platform

Google says its Tensor Processing Unit (TPU) advances machine learning capability by roughly three generations of Moore's Law. "TPUs deliver an order of magnitude higher performance per watt than all commercially available GPUs and FPGAs," said Google CEO Sundar Pichai during the company's I/O developer conference on Wednesday.

The chips powered AlphaGo, the computer that beat Lee Sedol, a world champion of the game Go. "We've been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)," Google said in a blog post.
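
The seven-year figure lines up with the usual Moore's Law cadence: performance roughly doubles every two years, so three doublings give about an 8x gain, close to an order of magnitude, over six to seven years. Here is a back-of-envelope sketch of that arithmetic (my own reading of the claim, not Google's published math):

```python
# Rough arithmetic behind "an order of magnitude ~= three generations ~= seven years".
# The two-year cadence is an assumption; the commonly cited range is 18-24 months.
GENERATIONS = 3
YEARS_PER_GENERATION = 2

speedup = 2 ** GENERATIONS                  # each generation roughly doubles performance -> ~8x
years = GENERATIONS * YEARS_PER_GENERATION  # ~6 years, i.e. "about seven years" of progress

print(f"~{speedup}x gain, about {years} years of progress at this cadence")
```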

"TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly."
