Microsoft and Nvidia announce major new integrations, breakthroughs and more at GTC


Presented by Microsoft


Microsoft’s announcements of new collaborations with long-standing partner Nvidia put the company at the forefront of this year’s Nvidia GTC AI conference in San Jose, March 18 – 21.

The week’s round of AI innovation news ran the gamut from AI infrastructure and service advances to new platform integrations, industry breakthroughs and more. Plus, Nidhi Chappell, VP of Azure Generative AI and HPC Platform at Microsoft, sat down for an exclusive one-on-one conversation with VentureBeat Senior Writer Sharon Goldman to talk about Microsoft’s partnership with both OpenAI and Nvidia, where the market is headed and more.

“If you look at what got us to here, partnership is really at the center of everything we do. When you’re training a large foundational model, you have to have infrastructure at large scale that can run for a long period of time,” Chappell said. “We’ve invested a lot of time and effort with Nvidia to make sure we can deliver performance, we can do it reliably, and we can do it globally around the world so that [using our Azure OpenAI service] enterprise customers can seamlessly integrate that in their existing flows or they can start their new work on our platform.”

Watch the full interview below, Live from GTC: A Conversation with Microsoft | NVIDIA On-Demand, read on for a look at the biggest conference announcements and don’t miss Microsoft’s in-depth series of panels and talks, all free to watch on demand.

AI infrastructure levels up with major new integrations

Workloads are getting more sophisticated and require more heavy lifting, which means hardware innovation has to step in. Announcements to that end: first, Microsoft is among the first organizations to use the Nvidia GB200 Grace Blackwell Superchip and Nvidia Quantum-X800 InfiniBand networking, integrating these into Azure. Plus, the Azure NC H100 v5 virtual machine (VM) series is now available to organizations of every size.


The Nvidia GB200 Grace Blackwell Superchip is specifically designed to handle the heavy lifting of increasingly complex AI workloads, high-performance computing workloads and data processing. New Azure instances based on the latest GB200 and the recently announced Nvidia Quantum-X800 InfiniBand networking will help accelerate frontier and foundational models for natural language processing, computer vision, speech recognition and more. The superchip features up to 16 TB/s of memory bandwidth and up to an estimated 45 times better inference on trillion-parameter models than the previous generation. The Nvidia Quantum-X800 InfiniBand networking platform extends the GB200’s parallel computing tasks into massive GPU scale.

Learn more about the Nvidia and Microsoft integrations here.

The Azure NC H100 v5 VM series, built for mid-range training, inferencing and high-performance computing (HPC) simulations, is now available to organizations of every size. The VM series is based on the Nvidia H100 NVL platform, which is available with one or two Nvidia H100 94GB PCIe Tensor Core GPUs connected by NVLink with 600 GB/s of bandwidth.

It supports 128 GB/s bi-directional communication between the host processor and the GPU to reduce data transfer latency and overhead, making AI and HPC applications faster and more scalable. With Nvidia Multi-Instance GPU (MIG) technology support, customers can also partition each GPU into as many as seven instances.
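For teams evaluating these VMs, a quick way to confirm what the GPUs expose is to query them through NVML. Below is a minimal sketch using the pynvml Python bindings (installed via the nvidia-ml-py package); it simply reports each GPU’s name, memory and MIG state, and it assumes those bindings are available on the VM image.

```python
# Minimal sketch: list GPUs on the VM and report whether MIG is enabled.
# Assumes the nvidia-ml-py (pynvml) bindings are installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"
        print(f"GPU {i}: {name}, {mem.total // (1024 ** 3)} GiB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```

Whether MIG is actually enabled, and how many partitions make sense, depends on how the specific VM image and workload are configured.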

See what customers are achieving now.

Major breakthroughs in healthcare and life sciences

AI has been a major catalyst for fast-paced innovation in medicine and the life sciences, from research to drug discovery and patient care. The expanded collaboration pairs Microsoft Azure with Nvidia DGX Cloud and the Nvidia Clara suite of microservices to give healthcare providers, pharmaceutical and biotechnology companies and medical device developers the ability to fast-track innovation in clinical research, drug discovery and patient care.


The list of organizations already leveraging cloud computing and AI includes Sanofi, the Broad Institute of MIT and Harvard, Flywheel and Sophia Genetics; academic medical centers like the University of Wisconsin School of Medicine and Public Health; and health systems like Mass General Brigham. They’re driving transformative changes in healthcare, improving patient care and democratizing AI for healthcare professionals and more.

Learn how AI is transforming the healthcare industry.

Industrial digital twins gaining traction with Omniverse APIs on Azure

Nvidia Omniverse Cloud APIs are coming to Microsoft Azure, extending the Omniverse platform’s reach. Developers can now integrate core Omniverse technologies directly into existing design and automation software applications for digital twins, or into their simulation workflows for testing and validating autonomous machines like robots or self-driving cars.

Microsoft demonstrated a preview of what’s possible using Omniverse Cloud APIs on Azure. For instance, factory operators can see real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can speed up production.

In his GTC keynote, Nvidia CEO Jensen Huang showed Teamcenter X connected to Omniverse APIs, giving the software the ability to connect design data to Nvidia generative AI APIs and use Omniverse RTX rendering directly inside the app.
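The digital twins these APIs work with are described in OpenUSD, the scene format at the heart of Omniverse. The sketch below is not an Omniverse Cloud API call itself; it only illustrates the kind of USD content those APIs operate on, using the open-source pxr bindings, with a made-up factory prim and telemetry attribute as placeholders.

```python
# Minimal OpenUSD sketch of a toy digital-twin scene (requires the usd-core
# package); the stage path, prims and telemetry attribute are illustrative
# placeholders, not part of the Omniverse Cloud APIs themselves.
from pxr import Sdf, Usd, UsdGeom

# Create a new USD stage representing a simplified factory floor.
stage = Usd.Stage.CreateNew("factory_twin.usda")

# Define a root transform for the facility and a cube standing in for a machine.
UsdGeom.Xform.Define(stage, "/Factory")
machine = UsdGeom.Cube.Define(stage, "/Factory/Machine01")
machine.GetSizeAttr().Set(2.0)

# Attach a custom attribute where live telemetry (for example, spindle RPM)
# could be written by a data pipeline and visualized in an Omniverse viewport.
rpm = machine.GetPrim().CreateAttribute("telemetry:spindleRPM", Sdf.ValueTypeNames.Float)
rpm.Set(1200.0)

stage.GetRootLayer().Save()
```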

Learn more about the ways organizations are deploying Omniverse Cloud APIs in Azure.

Enhancing real-time contextualized intelligence

Copilot for Microsoft 365, soon available as a dedicated physical keyboard key on Windows 11 PCs, combines the power of large language models with proprietary enterprise data. Nvidia GPUs and Nvidia Triton Inference Server power AI inference predictions for real-time, contextualized intelligence, enabling users to enhance their creativity, productivity and skills.


Turbocharging AI training and AI deployment

Nvidia NIM inference microservices, part of the Nvidia AI Enterprise software platform, provide cloud-native microservices for optimized inference on more than two dozen popular foundation models. For deployment, the microservices deliver prebuilt, run-anywhere containers powered by Nvidia AI Enterprise inference software, including Triton Inference Server, TensorRT and TensorRT-LLM, to help developers speed time to market of performance-optimized production AI applications.
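In practice, a deployed NIM container is typically reached over an OpenAI-compatible HTTP endpoint. The sketch below assumes a container running locally on port 8000 and an example model id; both are placeholders, so check your own deployment for the actual endpoint, credentials and model name.

```python
# Minimal sketch of calling a locally deployed NIM container through its
# OpenAI-compatible API; the URL, API key handling and model id are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-for-local-nim",   # placeholder; your deployment may require a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example model id; use the one your container serves
    messages=[{"role": "user", "content": "Summarize the GB200 announcement in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```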

Integration of Nvidia DGX Cloud with Microsoft Fabric gets deeper

Microsoft and Nvidia are pairing up to ensure Microsoft Fabric, the all-in-one analytics solution for enterprises, is further integrated with Nvidia DGX Cloud compute. That means Nvidia’s workload-specific optimized runtimes, LLMs and machine learning will work seamlessly with Microsoft Fabric. With Fabric OneLake as the underlying data storage, developers can apply data-intensive use cases like digital twins and weather forecasting. The integration also gives customers the option to use DGX Cloud to accelerate their Fabric data science and data engineering workloads.
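Because OneLake exposes the same APIs as Azure Data Lake Storage Gen2, data prepared in Fabric can also be read programmatically by external compute. Here is a minimal sketch under that assumption; the workspace, lakehouse and file path are placeholders, and the announced DGX Cloud integration may surface this access differently.

```python
# Minimal sketch of reading a file from Fabric OneLake via the ADLS Gen2 APIs
# it exposes; workspace, lakehouse and file names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake is addressable through the standard Data Lake Storage endpoint.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the Fabric workspace and paths start at the lakehouse.
fs = service.get_file_system_client("MyWorkspace")  # placeholder workspace name
file_client = fs.get_file_client("MyLakehouse.Lakehouse/Files/telemetry/readings.csv")  # placeholder path

data = file_client.download_file().readall()
print(f"Downloaded {len(data)} bytes from OneLake")
```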

See what you missed at GTC 2024

Microsoft dove into the powerful potential of all its collaborations with Nvidia and demonstrated why Azure is a critical component of a successful AI strategy for organizations of every size. Watch all of Microsoft’s panels and talks here, free to stream on demand.

Learn more about Microsoft and NVIDIA AI solutions:


VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and it’s always clearly marked. For more information, contact sales@venturebeat.com.

