When I try to create a Compute Engine instance with a GPU, I run into quota errors. Here are four configurations I tried and the errors they produced:
Example1:
- us-east1-c
- 8_vcpu
- 30gb_ram
- 4xTESLA_T4
- Deep Learning Image: PyTorch 1.1.0 and fastai m25 CUDA 10.0 with 30gb
- both firewall choices (http, https) are selected
Error1: Quota 'NVIDIA_T4_GPUS' exceeded. Limit: 1.0 in region us-east1.
Example2:
- us-east1-c
- 8_vcpu
- 30gb_ram
- 1xTESLA_T4
- Deep Learning Image: PyTorch 1.1.0 and fastai m25 CUDA 10.0 with 30gb
- both firewall choices (http, https) are selected
Error2: Quota 'GPUS_ALL_REGIONS' exceeded. Limit: 0.0 globally.
Example3:
- us-central1-a
- 8_vcpu
- 30gb_ram
- 2xTESLA_V100
- Deep Learning Image: PyTorch 1.1.0 and fastai m25 CUDA 10.0 with 30gb
- both firewall choices (http, https) are selected
Error3: Quota 'NVIDIA_V100_GPUS' exceeded. Limit: 1.0 in region us-central1.
Example4:
- us-west1-b
- 8_vcpu
- 30gb_ram
- 1xTESLA_V100
- Deep Learning Image: PyTorch 1.1.0 and fastai m25 CUDA 10.0 with 30gb
- both firewall choices (http, https) are selected
Error4: Quota 'GPUS_ALL_REGIONS' exceeded. Limit: 0.0 globally.
Is there any other way to get a Compute Engine instance with a GPU?
Best Answer
To summarize the existing answer and its comments, with supporting documentation:
GPUs are not included in the 'Always Free' tier of GCP resources, as explained in the linked documentation.
If you're using the '12-month, $300 free trial', the 'Program coverage' row of the table in the linked documentation states that GPUs cannot be added to Compute Engine instances. This is why your global 'GPUS_ALL_REGIONS' quota is 0: free-trial accounts must be upgraded to paid accounts before a GPU quota increase can be requested.
This documentation lists the GPU models available for compute and graphics workloads, as well as the zones in which each model is offered.
Their respective pricing is documented here.
For more background on GPU quotas and how to request an increase, refer to this section of the Compute Engine documentation.
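As a quick sketch (assuming the gcloud CLI is installed and authenticated against your project; substitute your own region), you can inspect the quotas behind these errors from the command line before filing an increase request in the Cloud Console:

```shell
# Per-region quotas for us-east1, including NVIDIA_T4_GPUS
# (the quota from Error1)
gcloud compute regions describe us-east1 --format="yaml(quotas)"

# Project-wide quotas, including GPUS_ALL_REGIONS
# (the quota from Error2 and Error4)
gcloud compute project-info describe --format="yaml(quotas)"
```

Each entry in the output shows a `metric`, a `limit`, and current `usage`; on a free-trial project you should see `GPUS_ALL_REGIONS` with `limit: 0.0`. The increase itself is requested through IAM & Admin > Quotas in the Cloud Console, not through these commands.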