
How to Install and Use TensorFlow

This article walks through installing TensorFlow and some common workflows for inspecting, freezing, optimizing, and quantizing models, plus configuring runtime logging. It is intended as a practical reference.


Installation

(1) Package installation: pip install tensorflow==1.14 -i https://pypi.douban.com/simple

virtualenv -p /usr/bin/python2.7 venv-python2.7-tf1.14.0
source ./venv-python2.7-tf1.14.0/bin/activate
pip list
python
pip install numpy==1.16.5 opt-einsum==2.3.2 future -i https://pypi.douban.com/simple
pip install tensorflow==1.14.0 -i https://pypi.douban.com/simple
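After installing, a quick sanity check confirms the interpreter sees the expected version. A minimal sketch (the helper name is our own; it degrades gracefully when TensorFlow is not importable in the current environment):

```python
import importlib.util

def tf_installed_version():
    """Return the installed TensorFlow version string, or None if absent."""
    if importlib.util.find_spec("tensorflow") is None:
        return None
    import tensorflow as tf
    return tf.__version__

print(tf_installed_version())  # e.g. '1.14.0' in the virtualenv above
```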

(2) Build and install from source: https://tensorflow.google.cn/install/source

Install bazel-0.25.2:

# wget https://github.com/bazelbuild/bazel/releases/download/0.25.2/bazel-0.25.2-linux-x86_64
# chmod u+x bazel-0.25.2-linux-x86_64
# ln -s /path/bazel-0.25.2-linux-x86_64 /usr/bin/bazel
# bazel version
Build label: 0.25.2

Install tensorflow-1.14.0:

# git clone https://github.com/tensorflow/tensorflow.git
# cd tensorflow
# git checkout v1.14.0
# ./configure   # /usr/bin/python3, others are default
# bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
# ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# mv /tmp/tensorflow_pkg/tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl ./
# python3 -m pip install ./tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl

Install tensorflow-1.14.0 with MKL and a patch:

# git clone https://github.com/tensorflow/tensorflow.git
# cd tensorflow/
# git checkout v1.14.0
# patch -p0 < /path/tf-mkl.patch
# ./configure   # /usr/bin/python3, others are default
# bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
# ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# mv /tmp/tensorflow_pkg/tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl ./
# python3 -m pip install ./tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl
# python3 -m pip list
tensorboard (1.14.0)
tensorflow (1.14.0)
tensorflow-estimator (1.14.0)

Usage

Model optimization

(1) Inspect the inputs and outputs of a saved_model

# bazel build tensorflow/python/tools:saved_model_cli
# saved_model_cli show --dir detection/ --all
or:
# python3 /usr/local/lib/python3.6/site-packages/tensorflow/python/tools/saved_model_cli.py show --dir detection/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: image:0
    inputs['true_image_shape'] tensor_info:
        dtype: DT_INT32
        shape: (1, 3)
        name: true_image_shape:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, -1, 4)
        name: ChangeCoordToOriginalImage/stack:0
    outputs['detection_classes'] tensor_info:
        dtype: DT_INT32
        shape: (1, -1)
        name: add:0
    outputs['detection_keypoints'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, -1, 4, 2)
        name: TextKeypointPostProcess/Reshape_2:0
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, -1)
        name: strided_slice_3:0
    outputs['num_detections'] tensor_info:
        dtype: DT_INT32
        shape: (1)
        name: BatchMultiClassNonMaxSuppression/stack_8:0
  Method name is: tensorflow/serving/predict
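The `name:` fields in this listing are what the later freezing and optimization steps need. A small pure-Python sketch (our own helper, not part of TensorFlow) that pulls the tensor names out of saved_model_cli's text output:

```python
import re

# Excerpt of the saved_model_cli listing shown above.
CLI_OUTPUT = """
inputs['image'] tensor_info:
    dtype: DT_UINT8
    shape: (1, -1, -1, 3)
    name: image:0
outputs['num_detections'] tensor_info:
    dtype: DT_INT32
    shape: (1)
    name: BatchMultiClassNonMaxSuppression/stack_8:0
"""

def tensor_names(cli_text):
    """Collect every 'name: <tensor>' field from the CLI output."""
    return re.findall(r"name:\s+(\S+)", cli_text)

print(tensor_names(CLI_OUTPUT))
# ['image:0', 'BatchMultiClassNonMaxSuppression/stack_8:0']
```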

(2) Save a TF saved_model as a frozen_model

# bazel build tensorflow/python/tools:freeze_graph
# freeze_graph --input_saved_model_dir detection/ --output_graph detection_frozen_model.pb --output_node_names ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8
or:
# python3 /usr/local/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py --input_saved_model_dir detection/ --output_graph detection_frozen_model.pb --output_node_names ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8
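Note that --output_node_names takes node names, which are the signature's output tensor names with the `:0` output index stripped. A one-line sketch of that mapping:

```python
def node_name(tensor_name):
    """Strip the output index from a tensor name, e.g. 'add:0' -> 'add'."""
    return tensor_name.split(":")[0]

# Output tensor names from the saved_model_cli listing for this model.
outputs = [
    "ChangeCoordToOriginalImage/stack:0",
    "add:0",
    "TextKeypointPostProcess/Reshape_2:0",
    "strided_slice_3:0",
    "BatchMultiClassNonMaxSuppression/stack_8:0",
]
print(",".join(node_name(t) for t in outputs))
# ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8
```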

(3) Optimize the frozen_model into an optimized_model

# bazel build tensorflow/python/tools:optimize_for_inference   # output: bazel-bin/tensorflow/python/tools/optimize_for_inference
# optimize_for_inference --input detection_frozen_model.pb --output detection_optimized_model.pb --input_names image,true_image_shape --output_names ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8 --frozen_graph true --placeholder_type_enum 4,3,1,3,1,1,3
or:
# python3 /usr/local/lib/python3.6/site-packages/tensorflow/python/tools/optimize_for_inference.py --input detection_frozen_model.pb --output detection_optimized_model.pb --input_names image,true_image_shape --output_names ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8 --frozen_graph true --placeholder_type_enum 4,3,1,3,1,1,3

The placeholder_type_enum values are defined in: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/types.proto
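To make the flag concrete: the numbers are DataType enum values from types.proto (DT_FLOAT=1, DT_INT32=3, DT_UINT8=4), listed in order for the inputs and then the outputs, matching the dtypes shown by saved_model_cli for this detection model:

```python
# DataType enum values from tensorflow/core/framework/types.proto.
DTYPE_ENUM = {1: "DT_FLOAT", 3: "DT_INT32", 4: "DT_UINT8"}

# --placeholder_type_enum 4,3,1,3,1,1,3 : two inputs followed by five outputs.
flags = [4, 3, 1, 3, 1, 1, 3]
print([DTYPE_ENUM[v] for v in flags])
# ['DT_UINT8', 'DT_INT32', 'DT_FLOAT', 'DT_INT32', 'DT_FLOAT', 'DT_FLOAT', 'DT_INT32']
```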

(4) Export a pb model as a TensorFlow visualization graph

# bazel build tensorflow/python/tools:import_pb_to_tensorboard
# import_pb_to_tensorboard --model_dir ./recognition_frozen_model.pb --log_dir ./recognition_log
or:
# python3 /usr/local/lib/python3.6/site-packages/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir ./recognition_frozen_model.pb --log_dir ./recognition_frozen_model.graph
# nohup tensorboard --logdir=./recognition_frozen_model.graph --port=6006 2>&1 &

Usage of the TensorBoard visualization tool: https://blog.csdn.net/gg_18826075157/article/details/78440766

(5) Quantize, freeze, and optimize a pb model. Official manual: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms ; Intel quantization manual: https://github.com/IntelAI/tools/tree/master/tensorflow_quantization

# bazel build tensorflow/tools/graph_transforms:transform_graph
# bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
    --in_graph="./detection_frozen_model.pb" \
    --out_graph="./detection_transformed_model.pb" \
    --inputs="image,true_image_shape" \
    --outputs="ChangeCoordToOriginalImage/stack,add,TextKeypointPostProcess/Reshape_2,strided_slice_3,BatchMultiClassNonMaxSuppression/stack_8" \
    --transforms='
      add_default_attributes
      strip_unused_nodes()
      remove_nodes(op=Identity, op=CheckNumerics)
      fold_constants(ignore_errors=true)
      fold_batch_norms
      fold_old_batch_norms
      quantize_weights'
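The quantize_weights transform stores weights as 8-bit codes plus a float range. Conceptually it is an affine mapping like the following toy pure-Python sketch (an illustration of the idea, not the actual graph_transforms implementation):

```python
def quantize(weights):
    """Map float weights onto integer codes 0..255 over their [min, max] range."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # avoid div-by-zero for constant weights
    codes = [round((w - lo) / scale) for w in weights]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Recover approximate float weights from the 8-bit codes."""
    return [lo + c * scale for c in codes]

codes, lo, scale = quantize([-1.0, 0.0, 0.5, 1.0])
approx = dequantize(codes, lo, scale)  # each value within one scale step of the original
```

The storage saving is 4x (8 bits instead of 32 per weight), at the cost of a reconstruction error of at most one quantization step.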

PS: model optimization reference: https://blog.csdn.net/qq_14845119/article/details/78846372 ; model quantization: https://www.jianshu.com/p/d2637646cda1

Configuring TF log and vlog output

There are two similarly named environment variables with somewhat different semantics.

TF_CPP_MIN_LOG_LEVEL controls the basic log levels; lower numbers mean more messages:

0 outputs Info, Warning, Error, and Fatal (default)
1 outputs Warning and above
2 outputs Error and above
3 outputs Fatal only

TF_CPP_MIN_VLOG_LEVEL enables a great deal of extra informational output, really for debugging only; here lower numbers mean fewer messages:

0 outputs nothing extra (default)
1 outputs a little extra
2 outputs more
3 outputs lots and lots of stuff
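For example, to silence Info and Warning messages while keeping Error and Fatal, set the variable before TensorFlow is imported (shown here from Python; `export TF_CPP_MIN_LOG_LEVEL=2` in the shell works the same way):

```python
import os

# Must be set before `import tensorflow` for the C++ runtime to pick it up.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"  # 2 = Error and above only
```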


