PyTorch learning: examples of dynamic and static graphs

Time: 2021-7-30

Dynamic graphs and static graphs

At present, neural network frameworks fall into two camps: static graph frameworks and dynamic graph frameworks. The biggest difference between PyTorch and frameworks such as TensorFlow and Caffe is the form of their computational graphs. TensorFlow uses a static graph: the computational graph is defined once and then run repeatedly. In PyTorch, a new computational graph is rebuilt on every forward pass. Through this tutorial, we will come to understand the advantages and disadvantages of static and dynamic graphs.

For users, the two forms of computational graphs feel very different, and each has its own strengths. Dynamic graphs are more convenient for debugging: users can debug in whatever way they like, with ordinary Python tools, and the code is very intuitive. Static graphs are defined first and then run; on subsequent runs the graph does not need to be rebuilt, so they can be faster than dynamic graphs.
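To make the debugging point concrete, here is a minimal sketch (the tensor names and values are illustrative, not from the original article): because PyTorch executes each operation eagerly, any intermediate result can be inspected with a plain print statement or a standard debugger breakpoint, right in the middle of the computation.


# pytorch: eager execution lets us inspect intermediate values directly
import torch

x = torch.randn(3)               # illustrative input
h = x * 2                        # intermediate result, already computed here
print(h)                         # ordinary Python debugging works mid-computation
# import pdb; pdb.set_trace()    # a normal breakpoint would also work here
y = h.sum()
print(y)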


# tensorflow
import tensorflow as tf

# define the graph: two counter nodes plus the loop condition and body
first_counter = tf.constant(0)
second_counter = tf.constant(10)

def cond(first_counter, second_counter, *args):
  # keep looping while the first counter is behind the second
  return first_counter < second_counter

def body(first_counter, second_counter):
  # each iteration adds 2 to the first counter and 1 to the second
  first_counter = tf.add(first_counter, 2)
  second_counter = tf.add(second_counter, 1)
  return first_counter, second_counter

# the loop itself must be expressed as a graph op, not a Python while
c1, c2 = tf.while_loop(cond, body, [first_counter, second_counter])

# run the graph in a session to get concrete values (TensorFlow 1.x style)
with tf.Session() as sess:
  counter_1_res, counter_2_res = sess.run([c1, c2])
print(counter_1_res)
print(counter_2_res)

As you can see, TensorFlow requires the whole graph to be constructed statically. In other words, the graph is identical on every run and cannot change, so Python's native while statement cannot be used directly; the helper function tf.while_loop must be used to express the loop in TensorFlow's internal form.


# pytorch
import torch

first_counter = torch.Tensor([0])
second_counter = torch.Tensor([10])

# an ordinary Python while loop works: the graph is rebuilt as the ops execute
while (first_counter < second_counter)[0]:
  first_counter += 2
  second_counter += 1

print(first_counter)
print(second_counter)

As you can see, PyTorch code is written exactly like ordinary Python, with no additional learning cost.
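Because the graph is rebuilt on every pass, control flow can even depend on the data or on runtime state. The following is a minimal sketch (the module and its layer sizes are hypothetical, not from the original article) of a forward pass whose number of loop iterations changes from call to call; the dynamic graph simply records whatever actually executed.


# pytorch: data-dependent control flow inside a model's forward pass
import random
import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)  # hypothetical layer sizes

    def forward(self, x):
        # a different number of iterations on each call; plain Python
        # control flow is recorded as part of this pass's graph
        for _ in range(random.randint(1, 3)):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
print(net(torch.randn(1, 4)))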

The above PyTorch examples of dynamic and static graphs are all the content the editor has to share. I hope it provides a useful reference, and thank you for supporting developeppaer.
