Dynamic graphs and static graphs
At present, neural network frameworks fall into two camps: static-graph frameworks and dynamic-graph frameworks. The biggest difference between PyTorch and frameworks such as TensorFlow and Caffe is the form of their computational graphs. TensorFlow uses a static graph: the computation graph is defined once up front and then reused on every run. In PyTorch, a new computation graph is built each time the code executes. In this lesson we will look at the advantages and disadvantages of each approach.
For users, the two forms of computation graph feel very different, and each has its own strengths. Dynamic graphs are more convenient to debug: you can step through the code with whatever tools you like, and the execution is very intuitive. A static graph is defined first and run afterwards; because the graph does not need to be rebuilt on subsequent runs, it can be faster than a dynamic graph.
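To make the debugging point concrete, here is a small sketch (the tensor values and the gradient check are illustrative, not from the original text): because PyTorch builds the graph as ordinary Python executes, a plain `print` (or a `pdb` breakpoint) can inspect any intermediate value mid-computation.

```python
import torch

# In a dynamic-graph framework the graph is built while normal Python
# runs, so ordinary debugging tools work inside the computation itself.
x = torch.tensor([2.0], requires_grad=True)
y = x * 3
print(y.item())        # inspect an intermediate value directly: 6.0
z = y ** 2             # z = (3x)^2 = 9x^2
z.backward()
print(x.grad.item())   # dz/dx = 18x, so 36.0 at x = 2
```

In a static-graph framework like TensorFlow 1.x, `y` at this point would only be a symbolic node; its value could not be printed until the whole graph was run in a session.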
```python
# tensorflow
import tensorflow as tf

first_counter = tf.constant(0)
second_counter = tf.constant(10)

def cond(first_counter, second_counter, *args):
    return first_counter < second_counter

def body(first_counter, second_counter):
    first_counter = tf.add(first_counter, 2)
    second_counter = tf.add(second_counter, 1)
    return first_counter, second_counter

c1, c2 = tf.while_loop(cond, body, [first_counter, second_counter])

with tf.Session() as sess:
    counter_1_res, counter_2_res = sess.run([c1, c2])

print(counter_1_res)
print(counter_2_res)
```
As you can see, TensorFlow requires the whole graph to be constructed statically: the graph is identical on every run and cannot change. Python's native while loop therefore cannot be used directly; instead, the helper function tf.while_loop must be used to express the loop in TensorFlow's own form.
```python
# pytorch
import torch

# initialize the counters with actual values; an empty torch.Tensor()
# cannot be compared in the loop condition
first_counter = torch.Tensor([0])
second_counter = torch.Tensor([10])

while (first_counter < second_counter):
    first_counter += 2
    second_counter += 1

print(first_counter)
print(second_counter)
```
As you can see, the PyTorch version is written exactly like ordinary Python, with no extra learning cost.
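The deeper consequence of rebuilding the graph on every run is that the graph can depend on the data. Here is a minimal sketch of that idea (the function, its inputs, and the iteration counts are illustrative assumptions, not part of the original example): the loop runs a different number of times for different inputs, so each call effectively traces a different graph.

```python
import torch

def double_until_large(x):
    # The number of loop iterations depends on the runtime value of x,
    # so a different computation graph is built on every call.
    count = 0
    while x.sum() < 100:
        x = x * 2
        count += 1
    return x, count

x1, n1 = double_until_large(torch.tensor([1.0]))   # 1 -> 128 in 7 doublings
x2, n2 = double_until_large(torch.tensor([30.0]))  # 30 -> 120 in 2 doublings
print(n1, n2)
```

Expressing this in a static graph would again require a construct like tf.while_loop, because the graph itself is fixed before any data is seen.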