Run:ai Completes Proof of Concept with NVIDIA to Maximize GPU Workload Flexibility on Any Cloud

Run:ai deployed on NVIDIA VMIs enables multi-cloud scaling as well as 'lift & shift' cloud deployments

TEL AVIV, Israel, March 24, 2022 — (PRNewswire) — Run:ai, the company simplifying AI infrastructure orchestration and management, today announced details of a completed proof of concept (POC) that enables multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. NVIDIA's software suite includes virtual machine images (VMIs) that are optimized for NVIDIA GPUs running in clouds such as Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud. Run:ai software deployed on NVIDIA VMIs enables cloud customers to move AI workloads from one cloud to another, as well as to run different AI workloads on multiple clouds simultaneously, with zero code changes.

Run:ai's workload-aware orchestration ensures that every type of AI workload gets the right amount of compute resources when needed, and provides deep integration into NVIDIA GPUs to achieve optimal utilization of these resources. Run:ai's Kubernetes-based Atlas platform and NVIDIA VMIs were used together in the POC to support 'lift & shift' as well as multi-node scaling in the cloud. NVIDIA customers and partners can de-risk their AI cloud deployments with a streamlined and portable solution for cloud AI infrastructure from Run:ai. Customers looking to cost-optimize their cloud computing resources can choose among supported cloud providers for the best-fit configuration. They can also manage AI workloads on multiple clouds with a single control plane.

NVIDIA VMIs are available on each of the major public cloud providers. NVIDIA publishes these with regular updates to both OS and drivers. The VMIs are optimized for performance on the latest generations of NVIDIA GPUs and allow for easy and fast deployment of GPU-accelerated instances on the public cloud.

"By combining accelerated computing power from NVIDIA with Run:ai's Atlas platform, organizations have a stellar AI foundation that enables them to successfully deliver on their AI initiatives," said Omri Geller, CEO and co-founder of Run:ai. "We appreciate the close relationship we have with the NVIDIA cloud team and their commitment to support NVIDIA accelerated computing customers everywhere."

"From innovative startups to world-leading enterprises, NVIDIA-accelerated cloud computing provides customers with flexible options for powering their most demanding workloads," said Paresh Kharya, senior director, Accelerated Computing at NVIDIA. "Paired with NVIDIA-accelerated instances from leading cloud service providers, the Run:ai Atlas platform helps customers maximize the efficiency and value of AI workload operations."

The Run:ai Atlas platform brings simplicity to GPU management by providing researchers with on-demand access to pooled resources for any AI workload. It also has built-in integration with NVIDIA Triton Inference Server, NVIDIA's open-source inference serving software, which lets teams deploy trained AI models from any framework on GPU or CPU infrastructure.

As an innovative cloud-native operating system that includes a workload-aware scheduler and a GPU abstraction layer, the platform helps IT managers simplify AI implementation, increase team productivity, and achieve full utilization of GPUs. Run:ai now offers a simple solution for teams with a multi-cloud AI infrastructure strategy. The solution is available in beta; reach out to partners@run.ai to learn more.

Additionally, Run:ai and NVIDIA are further expanding their collaboration to support customers who are operationalizing AI development. Run:ai is among the NVIDIA DGX-Ready Software partners joining the NVIDIA AI Accelerated program, which offers customers validated, enterprise-grade workflow and cluster management, scheduling and orchestration solutions for a variety of NVIDIA accelerated systems.

View original content: https://www.prnewswire.com/news-releases/runai-completes-proof-of-concept-with-nvidia-to-maximize-gpu-workload-flexibility-on-any-cloud-301510320.html

SOURCE Run:ai

Contact:
Company Name: Run:ai
Lazer Cohen
+1 347-753-8256
