
Google's Tensor Processing Unit explained

When Google unveiled its Tensor Processing Unit (TPU) at this year's Google I/O conference in Mountain View, California, it finally clicked for this editor in particular that machine learning may well be the future of computing hardware.

Of course, the TPU is only part of the firm's mission to drive machine learning, the practice that powers chatbots, Siri and the like, forward. (It's also the chip that recently defeated the world Go champion.) Google also offers TensorFlow, its open source library of machine intelligence software.

And sure, the chips we find in our laptops and smartphones will still get faster and more flexible. But it seems as if we've already seen the extent of the computing experiences these processors can offer, limited as they are by the devices they power.

Now it's the TPU, a meticulous amalgamation of silicon built for one purpose, and other specialized processors both already here (like Apple's M9 coprocessor) and still to come, that stand to push the advancement of computing power, and in turn our devices' capabilities, further and faster than ever before.

So, we wanted to find out more about this new kind of chip: how exactly it's different, precisely how powerful it is and how it was made. While Google Distinguished Hardware Engineer Norm Jouppi wouldn't disclose much about the chip's construction (it's apparently that secret to Google), he enlightened us over email on exactly what the TPU is capable of and its potential for the future of machine learning.

TechRadar: What's the chip exactly?

Norm Jouppi: [The] Tensor Processing Unit (TPU) is our first custom accelerator ASIC [application-specific integrated circuit] for machine learning [ML], and it fits in the same footprint as a hard drive. It is customized to give high performance and power efficiency when running TensorFlow.

Great software shines even brighter with great hardware beneath it.

What makes the TPU different from your standard processor, specifically?

TPUs are customized for machine learning applications using TensorFlow. Note that we still use CPUs [central processing units] and GPUs [graphics processing units] for ML.

How does the chip operate any differently from regular CPUs?

Our custom TPU is unique in that it uses fewer computational bits. It only fires up the bits that you need, when you need them. This enables more operations per second using the same amount of silicon.
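Jouppi's "fewer bits" point is the idea behind reduced-precision arithmetic: storing neural network values as 8-bit integers rather than 32-bit floats, trading a little accuracy for far more operations per unit of silicon. Here's a minimal sketch in Python with NumPy of one common symmetric quantization scheme; this is purely illustrative, and not a description of the TPU's actual (undisclosed) arithmetic.

```python
import numpy as np

def quantize_int8(x):
    """Map a float32 array onto 8-bit integers with a single scale factor.

    Symmetric linear quantization: the largest magnitude maps to +/-127.
    A simplified illustration, not the TPU's real design.
    """
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)  # 8 bits per value instead of 32
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.25, 0.03, 2.0], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# The representation is 4x smaller; each value is off by at most ~scale/2.
print(q.dtype)                            # int8
print(np.max(np.abs(weights - approx)))  # small rounding error
```

Because the quantized values fit in a quarter of the memory and integer math is cheaper than floating point, the same silicon area can deliver many more multiply-accumulate operations per second, which is exactly the trade the TPU exploits.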
