Motley Fool

NVIDIA vs. Alphabet in the World of AI Technology

NVIDIA (NASDAQ: NVDA) and Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) have turned out to be the unlikeliest of rivals in a slugfest for a greater share of the artificial intelligence (AI) market. So far, Alphabet has been using NVIDIA's GPUs (graphics processing units) to power AI applications on the Google Cloud Platform, though it looks like the search giant has now decided to go it alone in this lucrative space.

Let's take a closer look at NVIDIA and Google's AI feud and the potential implications for both companies.

Training the machines

Alphabet revealed its plans for its own AI chip -- the Tensor Processing Unit (TPU) -- at last year's Google I/O conference. The chip had already been deployed in Alphabet's data centers across a variety of applications, including optimizing search results and speech recognition.

At the same time, Google was sourcing NVIDIA's Tesla GPUs for its cloud computing platform and helping customers train AI models. For instance, in November 2016, NVIDIA announced that Google had selected its Tesla P100 GPUs and K80 accelerators to provide AI services to Google Compute Engine and Google Cloud Machine Learning users.

But Google shocked the tech world at May's I/O conference when it revealed that its second-generation TPU would be made available to its cloud customers. Google had said in an April blog post that its TPU is 15 to 30 times faster than current-generation CPUs (central processing units) and GPUs.

The first-generation TPU was capable only of inference (applying a trained model to new data), but the second generation adds AI training capabilities (teaching a machine to make those inferences). What's more, Google customers can connect multiple stand-alone TPUs over a custom high-speed network to build machine learning supercomputers with enormous computing power.

In fact, Google says it has cut the time needed to train certain AI models to just a few hours -- a job that used to take a full day on commercially available GPUs. This could spell the end of NVIDIA's AI-chip relationship with Google once the TPU rolls out by the end of the year.

NVIDIA counters

NVIDIA is contesting Alphabet's claims about TPU speed, pointing out that the TPU was benchmarked against its older Kepler GPUs rather than its newer Pascal GPUs. What's more, the graphics specialist isn't going to sit idle and let Google take the lead in this space: it claims that its upcoming Volta-based GPUs will be even better at deep learning applications.

NVIDIA CEO Jensen Huang. Image Source: NVIDIA.

In fact, NVIDIA aims to become more than just a hardware provider: it is working on a new offering known as NVIDIA GPU Cloud (NGC), which will bundle a GPU, such as the Volta-based Tesla V100, with the company's deep learning libraries. The chipmaker is positioning its AI computing as a platform-as-a-service, a different approach from Google's.

Google isn't going to sell its AI chips externally; rather, it will use them to power its own Google Compute platform. NVIDIA's worries therefore remain limited for now, as it risks losing only the Google account. Meanwhile, the Volta GPU platform has already started gaining traction: Amazon has committed to buying V100 chips as soon as they become commercially available later this year.

Another advantage for NVIDIA is that Google hasn't yet decided to open-source its AI framework, so users of its TPUs will remain locked into Google's platform. NVIDIA's GPUs, by contrast, support a wide range of cloud computing platforms, including those of Amazon, Microsoft, Google, and IBM, giving users the freedom to choose among cloud service providers (CSPs).


Additionally, NVIDIA CEO Jensen Huang took a jab at Google when he wrote in a blog post that the company is open-sourcing its deep learning accelerator, which performs the functions of a TPU. "No one else needs to invest in building an inferencing TPU. We have one for free -- designed by some of the best chip designers in the world," he wrote.

The relative openness of the graphics specialist's AI platform, compared with Google's constraints, has made NVIDIA's GPUs the chips of choice for the largest CSPs, including Amazon and Microsoft. Meanwhile, Google's cloud platform isn't as successful as Amazon's and Microsoft's, which could restrict the growth of its AI chips.


Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Harsh Chauhan has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, and Nvidia. The Motley Fool has a disclosure policy.