Win up to €1,000,000 in GPU hours
Time's up
Thank you for your interest.
Our team will review the applications, and we will contact all participants by the end of July 2025 to announce the results.
Scaleway remains committed to helping you break free, scale fast, and dream bigger.

Successful projects powered by Scaleway's infrastructure
Holo1 from H company
H company has launched several agents in recent months:
- Surfer H to navigate the web
- Runner H to move from running instructions to performing sequences of actions
- Tester H to verify the results of automated tasks on corporate websites.
What is behind Surfer H? Holo1, an open-source action vision-language model (VLM) designed for deep web UI understanding and precise localization. Shared on Hugging Face in June 2025, this family of models was trained on Scaleway's sovereign GPU infrastructure.
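As an illustration only, the sketch below shows one common way to load a Hugging Face vision-language model with the transformers library and ask it a UI-localization question. The model identifier, model class, and generation settings are assumptions rather than instructions from this page; check the Holo1 model card for the recommended usage.
```python
# Illustrative sketch: loading an action VLM from Hugging Face for web UI understanding.
# "Hcompany/Holo1-7B" and the exact classes are assumptions; verify against the model card.
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image

model_id = "Hcompany/Holo1-7B"  # assumed identifier, confirm on Hugging Face

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# A web-UI screenshot plus a localization-style instruction.
image = Image.open("screenshot.png")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Locate the 'Sign in' button on this page."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```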
Moshi from Kyutai
Moshi, Kyutai's revolutionary AI voice assistant, brings unprecedented vocal capabilities. Trained on Scaleway's high-performance cluster and served on L4 GPU Instances, Moshi excels at conveying emotions and accents with 300x codec compression. This setup enabled Moshi to process 70 different emotions and accents with ultra-low latency, allowing for seamless, human-like conversations. This high-performance environment is what made Kyutai's breakthrough possible.
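For a sense of scale, here is a back-of-the-envelope reading of the 300x figure. The 16-bit, 24 kHz PCM baseline is an illustrative assumption, not a Kyutai specification; only the 300x ratio comes from the text above.
```python
# Back-of-the-envelope: what a 300x codec compression ratio implies for bitrate.
# The 24 kHz / 16-bit PCM baseline is an illustrative assumption, not a Kyutai spec.
sample_rate_hz = 24_000
bits_per_sample = 16
compression_ratio = 300

raw_kbps = sample_rate_hz * bits_per_sample / 1000   # ~384 kbit/s uncompressed
compressed_kbps = raw_kbps / compression_ratio       # ~1.28 kbit/s after compression
print(f"raw: {raw_kbps:.0f} kbit/s -> compressed: {compressed_kbps:.2f} kbit/s")
```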
Mixtral from Mistral AI
Mistral AI used Nabu, a custom-built cluster, to train its Mixtral model (Mixtral 8x7B; Jiang et al., 2024), a highly efficient mixture-of-experts model. At its release, Mixtral outperformed existing closed- and open-weight models across most benchmarks, offering superior performance with fewer active parameters, making it a major innovation in the field of AI. The collaboration with Scaleway enabled Mistral AI to scale its training efficiently, allowing Mixtral to achieve groundbreaking results in record time.
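To make the "fewer active parameters" point concrete: in a mixture-of-experts layer, a router selects a small subset of expert networks per token, so only those experts' weights are used in the forward pass (the Mixtral paper reports roughly 13B active out of about 47B total parameters per token). The sketch below is a minimal, generic top-2 MoE layer in PyTorch, not Mistral AI's implementation; all names and sizes are illustrative.
```python
# Minimal, generic top-2 mixture-of-experts layer (illustrative, not Mixtral's code).
# Only 2 of the 8 experts run for each token, so active parameters << total parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique():       # run each selected expert once on its tokens
                mask = idx[:, k] == e
                out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[e](x[mask])
        return out

tokens = torch.randn(16, 64)
print(ToyMoELayer()(tokens).shape)  # torch.Size([16, 64])
```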
Why Scaleway for AI?
Integrated, AI-Ready Infrastructure
GPU compute is just the beginning. Build, scale, and optimize your AI pipeline end to end.
- NVIDIA GPUs (H100, GH200, L40S, P100 & more) — Choose the right power for your needs.
- Preconfigured Docker AI Images — Faster deployment with PyTorch, TensorFlow, Jupyter & more.
- Object Storage — Stream data directly into training and inference pipelines (see the sketch after this list).
- Kubernetes Kapsule with GPU Support — Scale containerized AI workloads with ease.
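As a hedged illustration of the Object Storage bullet above: Scaleway Object Storage is S3-compatible, so standard S3 tooling can stream training data straight into a pipeline instead of staging it on disk. The bucket name, prefix, endpoint, and environment variables below are placeholders; check the Scaleway documentation for the exact endpoint and credentials for your region.
```python
# Illustrative sketch: streaming training data from S3-compatible Object Storage.
# Bucket name, endpoint, and credential sources are placeholders/assumptions.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",  # assumed regional endpoint, verify in the docs
    region_name="fr-par",
    aws_access_key_id=os.environ["SCW_ACCESS_KEY"],
    aws_secret_access_key=os.environ["SCW_SECRET_KEY"],
)

def iter_training_records(bucket: str, prefix: str, chunk_size: int = 1 << 20):
    """Stream objects chunk by chunk instead of downloading everything up front."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            for chunk in iter(lambda: body.read(chunk_size), b""):
                yield obj["Key"], chunk

for key, chunk in iter_training_records("my-training-bucket", "datasets/"):
    ...  # feed chunks into preprocessing or a data loader
```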
Built in Europe
Keep sensitive data in Europe
- Sovereign cloud infrastructure across Europe
- Powered by 100% renewable energy
- No vendor lock-in — open standards and flexibility by design