Since the FAQ says DeepVariant does not support multi-GPU training (Can model_train be run on multiple GPUs?), I am pretty curious about the statement "We have tested training with 1 and 2 GPUs and observed the following runtimes:" in the training case study.
Specifically, how was the training with 2 GPUs tested?
Thank you!
Regards : )
It is actually possible to train DeepVariant on multiple GPUs, using tf.distribute.MirroredStrategy. You can find the TensorFlow documentation here: link.
It looks like we need to update the FAQ to reflect that. Thanks for bringing it to our attention! The training case study is up to date and already applies the mirrored strategy, so feel free to continue to reference it.
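For reference, the general pattern MirroredStrategy uses is: build and compile the model inside `strategy.scope()` so variables are replicated on every visible GPU, then call `fit()` as usual and each global batch is split across the replicas with gradients all-reduced before each update. The sketch below shows that pattern with a toy Keras model, not DeepVariant's actual network or its model_train code:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates variables across all visible GPUs;
# with one (or zero) GPUs it simply runs on a single device.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored on every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# fit() splits each global batch across replicas; gradients are
# all-reduced before the shared variables are updated.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```

With 2 GPUs, `num_replicas_in_sync` reports 2 and each batch of 8 is split into two shards of 4, which is how a near-linear runtime improvement between the 1- and 2-GPU runs in the case study would arise.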