Today we hit a "CUDNN_STATUS_INTERNAL_ERROR" when moving our NMT code to TensorFlow 2.0 stable. Here is a brief note on the fix.
The problem is related to GPU memory: enabling dynamic GPU memory growth is enough to fix it.
In TensorFlow 2.0, the GPU configuration options (the old `tf.ConfigProto` / `tf.GPUOptions` approach from 1.x) were moved under tf.config.experimental.
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
tf.config.experimental.set_memory_growth(physical_devices[0], True)

Note that set_memory_growth expects a single PhysicalDevice, not the whole list, so it is called on physical_devices[0] here.
Just put this before the network is initialized.
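On a multi-GPU machine you would want memory growth on every visible device, not just the first. A minimal sketch of that variant (same tf.config.experimental API as above; the loop simply avoids the single-device assumption):

```python
import tensorflow as tf

# Enable dynamic memory growth on every visible GPU, so TensorFlow
# allocates GPU memory on demand instead of grabbing it all up front.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

Run this before any model or layer is created; once the GPUs have been initialized, changing the memory-growth setting raises an error.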
Related GitHub issues