Gcore, Graphcore and UbiOps Collaborate to Boost ML and AI Workloads

New Partnership Offers AI Teams Access to Advanced Hardware for AI and ML Workloads

Luxembourg: July 26, 2023 — Gcore, a European provider of high-performance, low-latency international cloud and edge solutions, has partnered with UbiOps and Graphcore to offer powerful computing resources on demand, designed specifically for the growing requirements of modern AI workloads.

By partnering with Graphcore and UbiOps, Gcore Cloud is taking a significant step forward in empowering AI teams with a unique service offering that combines Graphcore’s Intelligence Processing Unit (IPU) hardware, the UbiOps machine learning operations (MLOps) platform, and Gcore’s cloud infrastructure.

Andre Reitenbach, CEO at Gcore, said: “The collaboration between Gcore, Graphcore, and UbiOps brings a seamless experience for AI teams. This enables effortless utilization of Gcore’s cloud infrastructure with Graphcore’s IPUs on the UbiOps platform. This means that users can take advantage of the exceptional computational capabilities of IPUs for their specific AI tasks. Also, users can leverage UbiOps’ out-of-the-box MLOps features such as model versioning, governance, and monitoring.

“These features help teams accelerate time to market with AI solutions, save on computing resource costs, and use those resources efficiently with on-demand hardware scaling. We’re thrilled about this partnership’s potential to enable AI projects to succeed and reach their goals.”

To demonstrate the benefits of IPUs compared with other devices, Gcore benchmarked the same workload on three types of compute resource: CPU, GPU, and IPU. Gcore trained a Convolutional Neural Network (CNN), a model designed for image analysis, on the CIFAR-10 dataset of 60,000 labeled images on each of the three devices, then compared how quickly training completed for different batch sizes.
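The benchmark code itself is not part of this announcement; the sketch below is only a minimal illustration of the kind of measurement described, assuming PyTorch and torchvision. It trains a small CNN (a hypothetical SmallCNN, not Gcore’s actual model) on CIFAR-10 and records the wall-clock time of one training epoch for several batch sizes on CPU or GPU; running the same model on Graphcore IPUs would typically go through Graphcore’s PopTorch wrapper instead, which is omitted here.

```python
# Minimal sketch of a CIFAR-10 training-time benchmark (illustrative only).
# Assumes PyTorch and torchvision are installed; covers CPU and CUDA GPUs.
import time

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader


class SmallCNN(nn.Module):
    """A small convolutional network for 32x32 CIFAR-10 images (hypothetical)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))


def time_one_epoch(batch_size: int, device: torch.device) -> float:
    """Train for one epoch on CIFAR-10 and return wall-clock seconds."""
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True,
        transform=transforms.ToTensor(),
    )
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=2)

    model = SmallCNN().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    model.train()
    start = time.perf_counter()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    return time.perf_counter() - start


if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for batch_size in (8, 50, 80, 500):
        seconds = time_one_epoch(batch_size, device)
        print(f"{device.type}, batch size {batch_size}: {seconds:.1f} s per epoch")
```

The batch sizes in the loop mirror those used in the comparison below; actual timings will depend on the specific hardware, drivers, and data-loading configuration.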

By measuring training speeds for different batch sizes, Gcore found that training on CPUs was slow, even for a relatively simple CNN and small dataset. At the same time, IPUs and GPUs significantly accelerated the process. With minimal optimization, an even shorter training time could be achieved on IPU versus GPU.

| Type | Effective batch size* | Graph compilation | Training duration | Time per epoch | Unit cost ($/h) |
|---|---|---|---|---|---|
| IPU-POD4 | 50 | ~180 s | 472 s | 8.1 s | From $2.5 |
| IPU-POD4 | 8 | ~180 s | 1420 s | 26.0 s | From $2.5 |
| GPU | 500 |  | 443 s | 8.6 s | From $4 |
| GPU | 80 |  | 2616 s | 51.7 s | From $4 |
| CPU | 40 |  | 10+ hours | 10+ minutes | From $1.3 |
| CPU | 500 |  | ~5 hours | 330 s | From $1.3 |

Thanks to the collaboration between Gcore Cloud, Graphcore, and UbiOps, AI teams can now easily access powerful hardware designed specifically for demanding AI and ML workloads. The integration of Gcore Cloud, Graphcore’s IPUs, and the UbiOps MLOps platform helps teams work more efficiently and cost-effectively, enabling more AI projects to succeed and achieve their goals.

About Gcore

Gcore is an international leader in public cloud and edge computing, content delivery, hosting, and security solutions. Gcore is headquartered in Luxembourg and has offices in Germany, Poland, Lithuania, Cyprus, and Georgia. It provides infrastructure to global leaders in an array of industries, including TEDx, Saber Interactive, Bandai Namco, Wargaming, and Avast. Gcore manages its own global IT infrastructure across six continents. Its network consists of 140+ points of presence around the world in reliable Tier IV and Tier III data centers. Andre Reitenbach has been the CEO at Gcore since 2014.

Website | Twitter | Facebook | LinkedIn

About Graphcore

Graphcore is the leading company in the development of IPU hardware. These specialized chips are designed to meet the demanding requirements of modern AI tasks. IPUs primarily leverage model parallelization to speed up computational tasks, compared to the data parallelization offered by GPUs.

Website | Twitter | Facebook | LinkedIn

About UbiOps

UbiOps is the developer of a powerful machine learning operations (MLOps) platform that simplifies AI model deployment, orchestration, and management. UbiOps helps businesses efficiently run AI models and workflows in a range of (cloud) computing environments.

Website | Twitter | LinkedIn
