H100 vs L40
Explore a head-to-head comparison of specifications, performance, and pricing.
H100
The NVIDIA H100 is a Hopper-based GPU that provides exceptional performance, scalability, and economics for AI, deep learning, and HPC workloads.
Manufacturer: NVIDIA
GPU Architecture: Hopper
Average Price: $10.09/hr
GPU VRAM: 80 GB
Cloud Availability: 13 clouds
System Memory: 1920 GB
CPU Cores: 252
Storage: 31.3 TB
L40
The NVIDIA L40 delivers high-performance computing capabilities for AI, machine learning, and data science applications.
Manufacturer: NVIDIA
GPU Architecture: Ada Lovelace
Average Price: $3.42/hr
GPU VRAM: 48 GB
Cloud Availability: 2 clouds
System Memory: 768 GB
CPU Cores: 252
Storage: 6.6 TB
See how the H100 and L40 compare
Compare detailed hardware specifications and average on-demand pricing for the H100 and L40.
Compare Hardware Specifications
| Specification | H100 | L40 |
|---|---|---|
| GPU Type | H100 | L40 |
| VRAM per GPU | 80 GB | 48 GB |
| Manufacturer | NVIDIA | NVIDIA |
| Architecture | Hopper | Ada Lovelace |
| Interconnect | PCIe Gen5 or SXM5 | PCIe Gen4 |
| Memory Bandwidth | 3.35 TB/s | 864 GB/s |
| FP16 TFLOPS (ratio to FP32) | 267.6 TFLOPS (4:1) | 90.52 TFLOPS (1:1) |
| CUDA Cores | 16896 | 18176 |
| Tensor Cores | 528 (4th Gen) | 568 (4th Gen) |
| RT Cores | N/A | 142 (3rd Gen) |
| Base Clock | 1365 MHz | 735 MHz |
| Boost Clock | 1785 MHz | 2490 MHz |
| TDP | 350-700W | 300W |
| Process Node | TSMC 4N | TSMC 4N |
| Data Formats | FP8, INT8, BF16, FP16, TF32, FP32, FP64 | FP8, INT8, BF16, FP16, TF32, FP32 |
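Taken together, the H100's higher raw FP16 throughput comes with a proportionally higher average hourly price. A minimal sketch, using only the FP16 TFLOPS and average $/hr figures listed on this page (spec-sheet values, not measured benchmarks), shows how to compare price-performance:

```python
# Price-performance sketch from this page's listed figures.
# FP16 TFLOPS and average $/hr are the values quoted above,
# not results of any benchmark run.

specs = {
    "H100": {"fp16_tflops": 267.6, "avg_price_hr": 10.09},
    "L40":  {"fp16_tflops": 90.52, "avg_price_hr": 3.42},
}

for gpu, s in specs.items():
    tflops_per_dollar = s["fp16_tflops"] / s["avg_price_hr"]
    print(f"{gpu}: {tflops_per_dollar:.1f} FP16 TFLOPS per $/hr")
```

At these listed average prices, the two GPUs come out roughly even on paper FP16 throughput per dollar (about 26.5 TFLOPS per $/hr each), so the choice often hinges on memory capacity, bandwidth, and FP64 support rather than headline compute per dollar.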
Compare Average On-Demand Pricing
| GPU Count | H100 | L40 |
|---|---|---|
| 1 GPU | $2.84 /hr | $0.99 /hr |
| 2 GPUs | $5.19 /hr | $1.99 /hr |
| 4 GPUs | $9.79 /hr | $4.98 /hr |
| 8 GPUs | $19.23 /hr | $8.00 /hr |
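The on-demand rates above translate directly into job cost estimates. A small sketch, using the table's average rates (actual provider pricing varies, so treat the output as illustrative):

```python
# Hypothetical job-cost estimate from the average on-demand rates
# in the pricing table above; real provider rates will differ.

hourly_rates = {  # $/hr per instance size, as listed in the table
    "H100": {1: 2.84, 2: 5.19, 4: 9.79, 8: 19.23},
    "L40":  {1: 0.99, 2: 1.99, 4: 4.98, 8: 8.00},
}

def job_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Total on-demand cost for a job at the table's listed rate."""
    return hourly_rates[gpu][num_gpus] * hours

# Example: a 24-hour run on an 8-GPU instance.
print(f"H100 x8, 24h: ${job_cost('H100', 8, 24):.2f}")  # $461.52
print(f"L40  x8, 24h: ${job_cost('L40', 8, 24):.2f}")   # $192.00
```

At these rates, an 8x L40 instance costs well under half as much per hour as an 8x H100 instance, which can make the L40 attractive for workloads that fit in 48 GB of VRAM and do not need FP64.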
Explore H100 & L40 Instances
Browse available instances with H100 and L40 GPUs. Filter by provider, availability, and more to find the perfect instance for your needs.