During its presentation at Microsoft Ignite, NVIDIA unveiled its Artificial Intelligence announcements, from Omniverse Cloud Services to an AI foundry service, all in partnership with the Redmond giant.
TudoCelular followed the company's announcements and rounds up the highlights below.
Omniverse Cloud Services on Microsoft Azure
One of the launches is the arrival of Omniverse Cloud Services on Microsoft Azure. Hosted on Microsoft’s cloud, the services provide a virtual factory simulation engine and an autonomous vehicle (AV) simulation engine to speed up the design, construction and operation of cars.
The first offers a set of customizable applications and services for connecting large-scale industrial data sets with real-time analytics, while the second provides physically based sensor simulation so AV and robotics developers can run autonomous systems in a closed-loop virtual environment.
These two additions will enable automakers around the world to unify digitalization across their core products and business processes. Companies will also be able to achieve faster and more efficient production, as well as improve sustainability initiatives.
“Through Omniverse Cloud, the automotive sector will be able to develop its processes with more agility, optimizing time and quality in workflows. This has a positive impact on automakers and, consequently, on the end consumer.”
Marcio Aguiar
Director of NVIDIA Enterprise Division for Latin America
The factory simulation engine is now available to customers through an Omniverse Cloud enterprise private offering in the Azure Marketplace, while the sensor simulation engine is coming soon.
Generative AI Foundry on Microsoft Azure
Microsoft Azure also now features NVIDIA’s generative AI foundry service, aimed at accelerating the development and tuning of custom generative AI applications for enterprises and startups.
The service brings together three elements: a collection of NVIDIA AI Foundation models, the NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services.
Enterprises will have access to NVIDIA AI Enterprise software to deploy their custom models and power generative AI applications such as intelligent search, summarization and content generation. Among the first companies to create custom LLMs with the service are SAP, Amdocs and Getty Images.
Models available from the NVIDIA AI Foundation include the NVIDIA-optimized Llama 2 and a new family of NVIDIA Nemotron-3 8B models, both hosted in the Azure Machine Learning catalog. NVIDIA DGX Cloud AI supercomputing can also be found in the Azure Marketplace.
NVIDIA H100 Tensor Core GPU on Microsoft Azure
Microsoft Azure also now features NVIDIA H100 Tensor Core GPU-based instances, with a focus on accelerating the toughest Artificial Intelligence workloads.
The partnership between NVIDIA and Microsoft aims to build next-generation infrastructure with confidential computing and two new instance types that let customers expand their generative AI capabilities.
Azure announced the new NC H100 v5 VM series, featuring the industry’s first cloud instances powered by a pair of H100 GPUs, connected via NVIDIA NVLink. The hardware delivers almost 4 petaflops of computing and 188 GB of HBM3 memory.
In addition, Microsoft also revealed the ND H200 v5 virtual machine series, an AI-optimized VM featuring the recently introduced NVIDIA H200 Tensor Core GPU, which brings considerable increases in memory capacity and bandwidth thanks to state-of-the-art HBM3e memory.
According to the Redmond giant, Azure Confidential VMs in the NC H100 v5 VM series will be launched soon.
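As a quick illustration of what these instances expose to software, the sketch below (assuming only that PyTorch with CUDA support is installed on the VM) enumerates the GPUs visible inside an NC H100 v5 machine and their memory.

```python
# Minimal sketch: enumerate the GPUs visible inside an NC H100 v5 VM.
# Assumes PyTorch with CUDA support is installed on the instance.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU visible to PyTorch.")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_gb = props.total_memory / 1024**3
    # On an NC H100 v5 instance this should list two H100 devices, which
    # together account for the roughly 188 GB of HBM3 mentioned above.
    print(f"GPU {i}: {props.name}, {total_gb:.0f} GB")
```

The NVLink connection between the two GPUs can also be inspected from the command line with nvidia-smi topo -m.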
NVIDIA AI Foundation in the browser
Developers can now try the new NVIDIA AI Foundation models directly from a browser on a Windows system and then customize them with their own business data.
The lineup includes leading models such as Llama 2, Stable Diffusion XL and Mistral, all packaged in formats that help professionals simplify customization with proprietary data.
The models have also been optimized with NVIDIA TensorRT-LLM to deliver higher throughput and lower latency, and they can run at scale on any NVIDIA GPU-accelerated stack.
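For context on what TensorRT-LLM inference looks like in practice, the sketch below uses the library's high-level Python API; it assumes a recent TensorRT-LLM release, a CUDA-capable GPU and access to the (gated) Llama 2 checkpoint on Hugging Face, and is an illustrative example rather than the exact workflow NVIDIA showed at Ignite.

```python
# Illustrative sketch of running an LLM through TensorRT-LLM's high-level
# Python API. Assumes a recent tensorrt_llm release on a CUDA-capable GPU;
# the model ID is an example and requires accepting Meta's license.
from tensorrt_llm import LLM, SamplingParams

# Builds (or loads) a TensorRT-LLM engine for the given Hugging Face model.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

params = SamplingParams(temperature=0.2, top_p=0.95)
outputs = llm.generate(["Summarize NVIDIA's announcements at Microsoft Ignite."], params)

for out in outputs:
    print(out.outputs[0].text)
```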
The NVIDIA Nemotron-3 8B family also appears on the list, with support for creating chat and business Q&A applications in sectors such as healthcare, telecommunications and financial services.
NVIDIA AI Foundation models are now available for free in the NVIDIA NGC catalog and Hugging Face, in addition to the Microsoft Azure AI model portfolio.
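As an example of the Hugging Face route, the snippet below is a minimal sketch that loads one of the listed models with the transformers library; the Mistral checkpoint name is used for illustration, and downloading it may require accepting the model's license on the hub.

```python
# Minimal sketch: pulling one of the listed open models from Hugging Face
# with the transformers library (accelerate is needed for device_map="auto").
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",  # illustrative checkpoint
    device_map="auto",  # place the model on an available GPU if present
)

result = generator("Explain what an AI foundry service is.", max_new_tokens=100)
print(result[0]["generated_text"])
```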
NVIDIA and Amdocs
NVIDIA and Amdocs have formed a new partnership to build custom generative AI models for the $1.7 trillion global telecommunications industry. The collaboration includes Microsoft and enables service providers to deploy applications in secure and reliable environments, whether on-premises or in the cloud.
Amdocs will use the AI foundry service to optimize enterprise-grade large language models for the telecom segment. These LLMs will run on NVIDIA accelerated computing as part of the software company’s amAIz framework.
By training models on proprietary data, telcos will be able to deliver customized solutions to produce more accurate results in their use cases.
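To make the idea concrete, the sketch below shows one common way of adapting an open LLM to proprietary text with LoRA adapters via the Hugging Face peft and transformers libraries; it is a generic illustration of this kind of customization, not the NeMo-based foundry workflow described above, and the model name and hyperparameters are placeholders.

```python
# Generic illustration of adapting an open LLM to proprietary data with
# LoRA adapters (peft + transformers). This is NOT the NVIDIA/Amdocs
# foundry pipeline; the base checkpoint and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # illustrative base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach small trainable LoRA matrices to the attention projections so the
# frozen base weights stay untouched while the adapter learns from
# domain-specific (e.g. telecom) text.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, a standard Trainer or SFT loop over the proprietary dataset
# would fine-tune only the adapter weights.
```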
Did you like NVIDIA’s news for the AI segment? Share your opinion with us!
Tags: NVIDIA, Omniverse Cloud Services, H100 GPU, Microsoft Azure