Think of a startup founded less than two years ago. Its creators came up with a novel processor idea for machine learning, and in three funding rounds the startup raised $110 million. I am speaking of Graphcore, maker of the Intelligent Processing Unit (IPU).
The Bristol-based semiconductor startup Graphcore was launched in 2016 and, not long after, secured $30 million in Series A funding led by Robert Bosch Venture Capital. In the next round, it raised another $30 million led by Atomico. Finally, in November 2017, Sequoia Capital invested $50 million in Graphcore. That was a big deal. Not because of the money itself, but because of the investor.
Sequoia is the venture capital firm that backed Google, Apple and WhatsApp. Another important point: Sequoia is extremely picky when it comes to European startups. Thus, having Sequoia on your investor list means more than tens of millions of dollars.
So, how could a hardware startup without a ready-to-ship product raise more than $100 million in less than two years? If you are familiar with the difficulties of the startup deal-making process, you should be asking this question.
Let me explain: Graphcore's CEO, Nigel Toon, and CTO, Simon Knowles, are well-known industry professionals who founded Icera in 2002. Never heard of Icera before? It is a British semiconductor company that was acquired by Nvidia for $367 million in 2011.
(Another interesting detail on Icera: Stan Boland, CEO of the Bristol-based self-driving tech startup FiveAI, was a co-founder and the CEO of Icera. It looks like there is a tight-knit startup ecosystem in Bristol.)
That is to say, the founders have a proven track record in the processor market. Moreover, they hold something even more precious: a bright idea and a solid reputation in the semiconductor industry. Therefore, investors could not simply ignore Graphcore.
Graphcore’s Alexnet Deep Neural Network Illustration (Image taken from Graphcore.ai Blog)
Before we go any further, we need a better understanding of Graphcore's technology. Let's start with these two crucial questions:
- What is an IPU?
The Intelligent Processing Unit (IPU) is a processor designed by Graphcore that promises 10x to 100x more processing power than today's systems. Currently, people mainly use Graphics Processing Units (GPUs) or other modified massively parallel processors for Machine Learning (ML) and Artificial Intelligence (AI) workloads. However, the architectures of these CPUs and GPUs were not designed for ML and AI applications, so they struggle with real-time parallel processing of today's complex models. Graphcore offers more than a processor to solve this problem: a System-on-Chip (SoC) together with a software framework that includes tools, drivers, libraries, and C++, Python and MXNet (an open-source deep learning framework) interfaces. Simply put, the IPU aims to be faster and more power-efficient than every other processor in the market.
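To see why massively parallel hardware matters so much here, it helps to look at the core ML workload: large matrix multiplications, where every output element can be computed independently. This is a generic NumPy sketch of that workload, purely for illustration; the layer sizes are made-up and nothing here is Graphcore's actual toolchain:

```python
import numpy as np

# A single dense neural-network layer boils down to one big matrix
# multiplication: millions of multiply-adds, each independent of the
# others, which is exactly what massively parallel chips accelerate.
batch, in_features, out_features = 64, 1024, 1024  # illustrative sizes

x = np.random.rand(batch, in_features).astype(np.float32)         # activations
w = np.random.rand(in_features, out_features).astype(np.float32)  # weights

y = x @ w  # ~67 million multiply-adds, all parallelisable

print(y.shape)  # (64, 1024)
```

A deep model chains hundreds of such layers, which is why purpose-built parallel silicon (rather than a general-purpose CPU) pays off.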
- Why is it important for autonomous driving technology?
These lines are from a Wired article titled Self-driving cars use crazy amounts of power, and it’s becoming a problem: “A production car you can buy today, with just cameras and radar, generates something like 6 gigabytes of data every 30 seconds. It’s even more for a self-driver, with additional sensors like lidar. All the data needs to be combined, sorted, and turned into a robot-friendly picture of the world, with instructions on how to move through it. That takes huge computing power, which means huge electricity demands. Prototypes use around 2,500 watts, enough to light 40 incandescent light bulbs.” Obviously, the power consumption of autonomous vehicle (AV) driving systems will be an issue for car manufacturers in the near future, and Graphcore offers a solid product to address it.
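The quoted figures are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch (the 60 W bulb rating is my assumption; Wired does not specify one):

```python
# Back-of-the-envelope check of the Wired figures quoted above.

data_gb, window_s = 6, 30        # ~6 GB of sensor data every 30 seconds
data_rate = data_gb / window_s   # sustained data rate in GB per second
print(f"Sensor data rate: {data_rate:.1f} GB/s")  # 0.2 GB/s

prototype_watts = 2500           # quoted power draw of an AV prototype's compute
bulb_watts = 60                  # typical incandescent bulb (assumed rating)
print(f"Equivalent bulbs: {prototype_watts / bulb_watts:.0f}")  # 42
```

At roughly 42 sixty-watt bulbs, the arithmetic lines up with Wired's "40 incandescent light bulbs" figure.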
The fact is, the self-driving tech market already has big chipmakers like Intel, Nvidia and ARM, and they have their own ways of doing massively parallel processing. For example, Nvidia announced its monster SoC, Xavier, in September 2016: a system with an 8-core CPU and a 512-core GPU. Yes, that is brute force. Yet it is still the old way of doing things.
Nvidia’s Xavier: it will replace the Drive PX2 platform (Image taken from Nvidia Tech Blog)
Graphcore is the new kid in town, and it has its own way of tackling complex machine learning problems. The IPU is optimised for massively parallel, mixed-precision floating-point computation. The startup says its product has over 100x more memory bandwidth than existing solutions, and thanks to its architecture, the IPU holds the complete machine learning model in its memory, much like a human brain. Sounds promising, right?
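"Mixed precision" deserves a quick illustration. The idea is to store values at half precision (float16) to cut memory footprint and bandwidth, while carrying out long accumulations at single precision (float32) so rounding error does not pile up. A minimal NumPy sketch of the trade-off, purely illustrative and not Graphcore's API:

```python
import numpy as np

# 100,000 values stored at half precision, as a mixed-precision chip would.
x = np.full(100_000, 0.1, dtype=np.float16)

naive = x.sum(dtype=np.float16)   # accumulate in float16: rounding error piles up
mixed = x.sum(dtype=np.float32)   # accumulate in float32: stays accurate

ref = x.astype(np.float64).sum()  # high-precision reference sum
err_naive = abs(float(naive) - ref)
err_mixed = abs(float(mixed) - ref)
print(err_mixed < err_naive)  # True: float32 accumulation is far more accurate
```

Storing in float16 halves the memory traffic per value, which matters when, as the bandwidth claim above suggests, moving data on and off the chip is the bottleneck.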
The IPU could be a game changer in the whole machine learning market, and it could also ease the development of self-driving cars: reducing the power consumption of AVs would let engineers build cheaper and more reliable systems. This potential seems to be appreciated by others as well, as there are strong hints of solid engagement once the IPU becomes real. Graphcore CTO Simon Knowles has stated that they are in touch with important AI players. Fingers crossed!
We still do not know whether Graphcore's novel approach will evolve into a ground-breaking product. However, I think Graphcore has a great chance of making a breakthrough with its IPU, especially in the AV processor market. What do you think? Have your say.