（1）. tf.train.Saver()
(1). tf.train.Saver() is used to save a TensorFlow training model. By default, all parameters are saved.
(2). It is also used to restore parameters. Note: only the trainable parameters stored in the .data file, such as weights and biases, are loaded; everything else, including the graph stored in the .meta file, is not.
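A minimal save/restore sketch of the two points above. It is written against the tf.compat.v1 API so that it also runs under TensorFlow 2; with TensorFlow 1.x (the versions discussed in this post) you would simply `import tensorflow as tf`. The checkpoint path "./ckpt_saver/model" and the variable name "w" are illustrative choices, not from the original post.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# One trainable parameter; Saver() saves all variables by default.
w = tf.Variable([[1.0, 2.0]], name="w")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes the .data, .index and .meta files of the checkpoint.
    saver.save(sess, "./ckpt_saver/model")

# Restoring loads only the parameter values; the variables must
# already be defined in the current graph, exactly as before.
with tf.Session() as sess:
    saver.restore(sess, "./ckpt_saver/model")
    print(sess.run(w))  # the value of w that was saved
```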
（2）. tf.train.import_meta_graph
(1). Used to load the graph from the .meta file together with the node parameters defined on it, including trainable parameters such as weights and biases, as well as intermediate tensors produced during training. All of them are retrieved through the graph interface get_tensor_by_name(name="name used during training").
（3）. Summary
(1). Save with tf.train.Saver().
(2). Loading with tf.train.import_meta_graph(".meta file") lets you obtain the required parameters directly by their training-time names, but you must know those names in advance, so you should understand TensorFlow's naming rules.
(3). The disadvantage of loading via tf.train.Saver().restore(sess, "../checkpoints directory/") is that only the trained parameters are loaded, and every variable must be defined exactly as it was during training (same shape, dtype and op type; for example, a tf.placeholder can only be restored as a tf.placeholder). When you want to obtain intermediate training tensors, you have to rebuild the same network as in training.
Supplement: tf.train.import_meta_graph reports a KeyError
When restoring the model, executing tf.train.import_meta_graph raised this error. It turned out that my model had been trained on a server running TensorFlow 1.11.0, while the local machine where I ran tf.train.import_meta_graph had TensorFlow 1.5.0. Updating the local TensorFlow to 1.11.0 solved the problem.
The above is my personal experience; I hope it can serve as a useful reference.