Shares of chip maker Nvidia (NVDA) are up $10.23, or 6%, at $179.64, after Evercore ISI’s C.J. Muse this morning reiterated an Outperform rating on the shares, and raised his price target to $250 from $180, based on his belief the company is creating “the industry standard” in artificial intelligence computing.
Muse relates having gone on a road show “the last few days” in New York and Boston with Nvidia’s CEO, Jen-Hsun Huang, CFO, Colette Kress, and the company’s head of investor relations, for meetings with investors.
He says the “major takeaway” of those meetings was that “management believes that investors still severely underestimates [sic] the impact of A.I. and the size of the potential market.”
For his part, Muse concludes that with a “first-mover advantage, its unified GPU architecture and a system level approach (including an extensive CUDA software ecosystem supported by $10B+ in historical R&D dollars),” the company has built “an industry standard for AI systems that will be nearly impossible to replicate.”
“NVDA dominates Training today, and looks to be the leader in Inference tomorrow,” he writes, referring to the two primary components of machine learning.
Muse digs into Nvidia’s “CUDA” programming environment:
Since its introduction in 2006, CUDA has been widely deployed through thousands of applications and published research papers, and supported by an installed base of hundreds of millions of CUDA-enabled GPUs. Nvidia has created an ecosystem over the past 10+ years and a business model that leverages Nvidia’s software and algorithm expertise above and beyond its leading-edge silicon designs. NVIDIA GPUs have been the dominant choice in AI Training due to NVIDIA’s CUDA library support for major AI frameworks, efficient compilation attributes, a cuDNN library structure that abstracts away underlying hardware to data scientists, and an engaged developer base that has grown over time on NVIDIA’s CUDA architecture. The company believes its powerful compiler technology remains a competitive differentiator with the TensorRT compiler enabling a powerful solution for Deep Learning Inference (allows graph optimizations for vertical and horizontal layer fusion with GPU-specific optimizations allowing developers to import models from Caffe and TensorFlow). Add it all up and we believe NVDA, with its system-based approach to solving AI, has built the industry standard for AI computing […] The company believes its total R&D spend of 10s of billions of dollars has created a pre-emptive technology lead making it harder for others to replicate its success.
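For readers unfamiliar with the "vertical layer fusion" Muse cites, the idea is that an inference compiler such as TensorRT can merge consecutive network layers (say, a matrix multiply, a bias add, and a ReLU activation) into a single operation, so intermediate results never round-trip through GPU memory. Below is a toy NumPy sketch of the concept only; it is not TensorRT code, and the function names are illustrative:

```python
import numpy as np

def unfused(x, w, b):
    # Three separate steps: each one materializes a full intermediate array,
    # analogous to launching three separate GPU kernels.
    y = x @ w                  # linear layer
    y = y + b                  # bias add
    return np.maximum(y, 0.0)  # ReLU activation

def fused(x, w, b):
    # "Vertically fused" version: the same math in one expression, so the
    # intermediates are never written out as separate buffers. On a GPU,
    # this saves kernel-launch overhead and memory bandwidth.
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features
w = rng.standard_normal((8, 3))   # layer weights
b = rng.standard_normal(3)        # layer bias

# Fusion changes performance characteristics, not the result.
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

"Horizontal" fusion is the analogous trick applied across layers that read the same input, batching them into one wider operation.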
The next plateau, writes Muse, is the potential for Nvidia to play in inferencing, where Intel (INTC) CPUs dominate. Muse muses that Nvidia could conceivably license CUDA to others for that function:
With Deep Learning, edge compute is increasingly becoming an AI Inference problem, with a long-tailed opportunity offering sustainable long-term growth in areas like factory robots, commercial drones, smart cameras, etc. […] Moreover, as we think about the more commoditized parts of the IoT market at the edge that will want to be part of the AI ecosystem, we do wonder if there is an opportunity for NVDA to offer CUDA as a licensing model over time. When asked, the company suggested they have thought about it but haven’t come up with a commercial solution yet. Stay tuned here…