MADISON, Wis. — Nvidia pushed the computational performance battle for autonomous cars to a new level, unveiling a new member of its Drive PX family, codenamed Pegasus, at its own GPU Technology Conference in Munich.
Noting that Pegasus can compute 320 trillion operations per second, CEO Jensen Huang boasted, “Our new DRIVE PX Pegasus AI computer — roughly the size of a license plate — can replace the entire trunk full of computing equipment used in today’s Level 5 autonomous prototypes… DRIVE PX Pegasus has the AI performance of a 100-server data center.”
Nvidia CEO Jensen Huang at GTC in Munich (Photo: Nvidia)
Nvidia’s Pegasus reportedly comes with a four-chip architecture featuring the equivalent of two Xavier units, plus two next-generation discrete GPUs.
Although computational power alone can’t solve all the challenges still posed by Level 5 autonomous cars, Nvidia appears to have edged ahead of its rivals.
Luca De Ambroggi, principal analyst for automotive electronics and semiconductors at IHS Markit, noted, “Processing power is a very important point and with Pegasus we are getting close to the POPS (Peta operations per second), which I expect to be the minimum requirement for L5 vehicle.”
In De Ambroggi’s opinion, the Pegasus platform is likely to be ready for “Geo-fenced L5” self-driving cars — Robo-taxis — but not for the mass market. “We will probably see a few more generations of ICs (such as Nvidia’s Xavier 3, 4, 5 and Intel/Mobileye’s EyeQ 5, 6, 7)” to improve performance, De Ambroggi said.
Mike Demler, a senior analyst at the Linley Group, cautioned that Nvidia is “now pre-announcing chips more than one year before we see first samples.” But he, too, acknowledged that “the combination of Nvidia’s more open software platform and the GPU-compute architecture” position Nvidia well to address deep learning.
While industry analysts aren’t declaring Nvidia the sole winner of the autonomous car race yet, they aren’t disputing the clear leadership role Nvidia has seized.
Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), said, “Nvidia is developing and learning just like everyone else.” However, he added, Nvidia has “put a lot into developing automated vehicle technologies and their efforts are beginning to pay off. They have essentially democratized AI in automotive, which is to be commended considering the auto industry’s position on AI up until now.”
No commercially available vehicle today exceeds Level 2 autonomy. The future of higher-level automation still hangs in the balance but, at least, Nvidia is “the first to offer a complete A/V stack for L4/L5,” noted Magney.
Describing Pegasus as “a production ready platform to support L4/L5 automation,” Magney added, “It is very robust and has lots of redundancies and fall back methods. It will run on QNX which is an ASIL D embedded operating system.”
Other functional safety measures Nvidia has installed in Pegasus, according to Magney, include, “Decomposing the neural network components and validating the AI libraries.” He added, “Nvidia says they can examine the performance of the network layers by isolating certain tasks such as perception.”
Asked about sensor fusion on Pegasus, Magney, who was at Nvidia’s GTC in Munich this week, said, “Nothing different… Nvidia advocates fusing raw data despite the high capacity physical layer. Nvidia claims their architecture can handle massive amounts of data so no need to process outside of the domain controller.” Pegasus “supports more sensors – up to 16,” he added.
How does Pegasus stack up?
Before going into competitive analysis, how does Pegasus stack up against Nvidia’s own earlier Drive PX chips?
The Linley Group’s Demler said Pegasus is not fundamentally different [from other Drive PX chips]. “In fact, architecturally Nvidia is actually reversing course from their initial positioning of Xavier,” he observed.
When Nvidia unveiled Xavier a year ago, Demler recalls, “At first they said Xavier on its own would deliver Level 4/5 fully-autonomous driving.”
Nvidia then touted Xavier as “a complete system-on-chip (SoC), integrating a new GPU architecture called Volta, a custom 8 core CPU architecture, and a new computer vision accelerator.” Xavier “delivers 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power,” the company said at the time.
Demler added, “Earlier this year at GTC-Silicon Valley, it was 30 TOPS at 30W. Now, Pegasus goes back to being up to a four-chip system, including up to two Xaviers with up to two ‘next-generation’ GPUs. That’s the same architecture as Drive PX2, but now at 10 times the total compute power (320 TOPS).”
Comparison with Intel/Mobileye
At 320 TOPS, Pegasus’ computational performance seems to blow Nvidia’s competitors out of the water. Right?
Demler noted, “Mobileye has said their next-generation EyeQ5 will deliver 12 TOPs, so, as it was, it would take at least two of those chips to match one Xavier.”
He added, “With Pegasus and the 10x boost from the next-gen GPU, you now have a greater than 20:1 difference in compute power.”
That means that “Intel would need to add some very powerful inference-engine accelerators to EyeQ to come close to that, but they don’t have anything to match that power-performance for in-vehicle use,” Demler observed.
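The ratios Demler cites follow from the vendors’ headline figures. As a back-of-envelope sketch (using only the peak TOPS claims quoted in this article, which say nothing about sustained throughput or real-world utilization):

```python
# Peak-compute comparison, in TOPS (trillion operations per second).
# All figures are the vendors' published claims as quoted in this article.
XAVIER_TOPS = 30    # Nvidia Xavier, per GTC Silicon Valley 2017
PEGASUS_TOPS = 320  # Drive PX Pegasus: two Xaviers + two next-gen discrete GPUs
EYEQ5_TOPS = 12     # Mobileye's stated target for EyeQ5

# How many EyeQ5 chips would it take to match one Xavier?
chips_per_xavier = XAVIER_TOPS / EYEQ5_TOPS
print(chips_per_xavier)            # 2.5 -> "at least two of those chips"

# Pegasus versus a single EyeQ5
ratio = PEGASUS_TOPS / EYEQ5_TOPS
print(round(ratio, 1))             # 26.7 -> "greater than 20:1"
```

Again, these are peak claims at unspecified precisions, so the comparison is indicative rather than a benchmark.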
Asked about the competition, Magney told us that Pegasus appears more tightly integrated than alternative solutions such as Intel/Mobileye. “Nvidia has a more complete HW/SW stack than most other A/V stacks being offered,” Magney said.
IHS Markit’s De Ambroggi agreed. “I think that Nvidia is ahead of the competition at this point of time.” But he cautioned that “all other players in this space, from traditional to new suppliers, are expected pretty soon to announce their solutions.” He noted, “Intel first (being in the same category with Nvidia in term of performance and targets), and more traditional automotive players like NXP or Renesas will be announcing soon their alternatives…and more to come.”
De Ambroggi added, “One advantage that Nvidia has, however, and that makes me feel they are in a better position, is that Nvidia has already a capillary presence into the automotive supply chain from OEMs down to tiers. Nvidia also offers broad product offering (including software).” Looking at the market as a whole from server to cloud, De Ambroggi said the only company that can match Nvidia might be Intel.
Fleets, delivery trucks and robotaxis
Nvidia also announced that Deutsche Post DHL Group (DPDHL), the world’s largest mail and logistics company, and ZF, a large automotive Tier One, have partnered to deploy a test fleet of autonomous delivery trucks, starting in 2018.
DPDHL will outfit electric light trucks with the ZF ProAI self-driving system, based on Nvidia’s Drive PX technology, for automating package transportation and delivery, including the “last mile” of deliveries, the companies said.
Magney told EE Times, “I was impressed with the DHL announcement. This will heighten the urgency to develop last mile delivery services. ZF wins on this too, since they have created a hardened commercial solution that will appeal to other commercial operators that want to get going as well.”
Where in the market will the first autonomous vehicles appear? Cabs? A fleet of trucks? Golf carts?
Demler said, “The first deployments are on private roads and campuses, as we’ve already seen. As soon as you put a vehicle on public roads, you need to address the safety issues, regardless of whether the vehicle is only carrying just freight or not. It would still need to interact with passenger vehicles.”
That said, Demler added, “Nevertheless, trucking fleets, delivery trucks, and robo-taxis are three different things. Autonomous-trucking fleets only apply to so called ‘over-the-road’ heavy freight, so their usage would be on specifically designated highway routes. Delivery vehicles could also be limited to designated routes, either on private or public roads. Robotaxis will need to navigate urban environments and safely transport human passengers, so that raises many more issues.”
Still in a huge hype cycle
After all is said and done, however, it’s important to note, as Demler pointed out, “We’re still in a huge hype cycle regarding full autonomy. It’s not an exaggeration to say nobody has ‘solved’ anything regarding Level 5.”
He explained, “We barely have the first (extremely limited) Level 3 vehicles just reaching production, so we are a long way from having vehicles that can drive anywhere and everywhere under all conditions that a human driver can do today, and at the same time safely interact with those human-driven vehicles.”
— Junko Yoshida, Chief International Correspondent, EE Times