GPU Compute

Empower your cloud experience with high-efficiency, cost-effective GPU compute


In a world where AI is becoming increasingly important, and models are becoming larger & more complex, GPU compute has become a sought-after resource that cloud providers often charge a premium for. Our GPU compute instances are powered by NVIDIA Tesla GPUs, providing high-efficiency & cost-effective GPU compute for AI inference, machine learning, and other graphics-accelerated workloads.

We offer access to the NVIDIA Tesla T4, featuring 16 GB of GDDR6 memory, 2560 CUDA cores, and 8.1 TFLOPS of FP32 performance (65.1 TFLOPS FP16). The T4 also features 320 Tensor Cores, designed to accelerate AI inference workloads.
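The 8.1 TFLOPS figure can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes two FP32 operations per CUDA core per clock (a fused multiply-add) and the T4's published boost clock of roughly 1590 MHz; neither assumption comes from this page.

```python
# Rough check of the T4's quoted 8.1 TFLOPS FP32 figure.
cuda_cores = 2560               # from the spec above
ops_per_core_per_clock = 2      # one FMA counts as 2 floating-point ops (assumption)
boost_clock_hz = 1.590e9        # assumed published boost clock, ~1590 MHz

tflops_fp32 = cuda_cores * ops_per_core_per_clock * boost_clock_hz / 1e12
print(f"{tflops_fp32:.1f} TFLOPS")  # -> 8.1 TFLOPS
```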

All GPU compute instances run on our AMD1 compute instance lineup, which is powered by dedicated Ryzen 9 5950X cores and paired with fast DDR4 ECC RAM. We utilise CPU pinning to directly allocate a physical core to your instance, ensuring you are the sole tenant of that core, and allowing you to leverage its full potential. Block storage is provided by our high-performance NVMe SSDs, offering up to 1 GB/s of read/write performance, and up to 100,000 IOPS.

Our instances are hosted on the London Internet Exchange (LINX LON1), and offer first-class connectivity typically reserved for large enterprises that build out their own datacentres. We can provide you with a BGP session so you can bring your own IPs - completely free of charge. In addition, we can offer up to 10G fibre networking for your instances upon request. All instances have a burst bandwidth of 1 Gbps (unmetered), and DDoS protection.

Try now →
Name           Cores  Memory  GPU        Bandwidth  Price/hour  Price/month*
nvidia1.large  8      32 GB   Tesla T4   800 Mbps   £0.1301     £95.00
nvidia2.large  8      32 GB   A2000 12G  800 Mbps   £0.1301     £95.00

* Assuming an average of 730 hours in a month

Prices are VAT exclusive where applicable
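The monthly figure in the table follows directly from the hourly rate and the footnote's stated average of 730 hours per month, as this short sketch shows:

```python
# Derive the monthly price from the hourly rate, per the footnote above.
hourly_rate_gbp = 0.1301
hours_per_month = 730            # average stated in the footnote

monthly_gbp = hourly_rate_gbp * hours_per_month
print(f"£{monthly_gbp:.2f}")     # -> £94.97, listed as £95.00
```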


© 2024 Lagrange Cloud

Lagrange Cloud Technologies Limited is a company registered in England and Wales with company number 13466318