

While Google unveiled its Tensor Processing Unit (TPU) at this year's Google I/O conference in Mountain View, California, it finally clicked for this editor in particular that machine learning is the future of computing hardware.


Naturally, the TPU is just one part of the firm's mission to push machine learning – the engine that powers chat bots, Siri and the like – forward. (It's also the chip that powered the system that recently beat the world Go champion.) Google also offers TensorFlow, its open source library of machine intelligence software.

And sure, the chips we find in our laptops and smartphones will continue to get faster and more capable. But it seems as if we've already seen the extent of the computing experiences these processors can offer, if only limited by the devices they power.

Now, it's the TPU – a meticulous amalgamation of silicon built for a single purpose – and other specialized processors both already here (like Apple's M9 co-processor) and yet to come, that stand to push the advancement of mankind's processing power – and therefore our devices' capabilities – further and faster than ever.

So, we wanted to learn more about this new breed of chip: how exactly it's different, how powerful it is and how it came to be. While Google Distinguished Hardware Engineer Norm Jouppi wouldn't disclose much about the chip's construction (it's apparently that special to Google), he enlightened us over email on exactly what the TPU is capable of and its potential for the future of machine learning.

TechRadar: What is the chip, exactly?

Norm Jouppi: [The] Tensor Processing Unit (TPU) is our first custom accelerator ASIC [application-specific integrated circuit] for machine learning [ML], and it fits into the same footprint as a hard drive. It is customized to deliver top performance and power efficiency when running TensorFlow.

Great software shines even brighter with great hardware beneath it.

What makes the TPU different from a standard processor, specifically?

TPUs are customized for machine learning applications using TensorFlow. Note that we continue to use CPUs [central processing units] and GPUs [graphics processing units] for ML as well.

How does the chip operate differently from standard CPUs?

Our custom TPU is unique in that it uses fewer computational bits. It only fires up the bits you need, when you need them. This yields more operations per second from the same amount of silicon.
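To get a feel for what "fewer computational bits" means in practice, here is a minimal, illustrative sketch of reduced-precision arithmetic: mapping 32-bit floating-point values onto 8-bit integers and back. The function names and the scale factor are our own assumptions for illustration, not details of Google's actual hardware design.

```python
def quantize(values, scale):
    """Map floats onto 8-bit integers in the range -128..127."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Recover approximate floats from the 8-bit representation."""
    return [q * scale for q in qvalues]

# Hypothetical neural-network weights, assumed to lie roughly in [-1, 1]
weights = [0.91, -0.42, 0.07, -1.10]
SCALE = 1.0 / 127

q = quantize(weights, SCALE)        # each value now fits in a single byte
approx = dequantize(q, SCALE)       # close to the originals, small error
```

Each 8-bit value needs a quarter of the storage and far less circuitry per operation than a 32-bit float, which is the trade-off Jouppi describes: machine learning tolerates the small rounding error, and the freed-up silicon does more operations per second.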
