AI Neural Systems (for Training AI)
Source: WeChat article
·TensorFlow: Developed by Google, TensorFlow is a powerful and flexible open-source machine learning framework used for a wide range of tasks, including deep learning and neural networks.
https://github.com/tensorflow/tensorflow
·PyTorch: Created by Facebook's AI Research lab, PyTorch is another popular deep learning framework that is widely used for research and development due to its ease of use and dynamic computation graph.
https://github.com/pytorch/pytorch
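The "dynamic computation graph" mentioned above means PyTorch records operations as ordinary Python code runs, so gradients can be taken of whatever was just executed. A minimal sketch (assuming PyTorch is installed; the values are illustrative):

```python
import torch

# The graph is built dynamically: each operation is recorded as it executes.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # y = x^2 + 3x

# Backpropagate through the recorded graph: dy/dx = 2x + 3 = 7 at x = 2.
y.backward()
print(x.grad)
```

Because the graph is rebuilt on every forward pass, control flow like Python `if`/`for` statements can change the model's structure from one input to the next.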
·Scikit-learn: This is a simple and efficient tool for data mining and data analysis built on Python. It provides a range of supervised and unsupervised learning algorithms.
https://github.com/scikit-learn/scikit-learn
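All scikit-learn estimators share the same fit/predict interface, which is what makes the library simple to use. A quick illustration on a hypothetical toy dataset (not from the article):

```python
from sklearn.linear_model import LogisticRegression

# Toy 1-D dataset: the label is 1 when the feature is >= 2.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

# fit() trains the model; predict() classifies new points.
clf = LogisticRegression()
clf.fit(X, y)
preds = clf.predict([[0.5], [2.5]])
print(preds)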
·Apache MXNet: An open-source deep learning framework designed for both efficiency and flexibility, MXNet supports multiple programming languages and is used for training and deploying deep learning models.
https://github.com/apache/mxnet
·OpenCV: Originally developed by Intel, OpenCV is a library of programming functions mainly aimed at real-time computer vision.
https://github.com/opencv/opencv
·Keras: A high-level neural networks API, Keras can run on top of TensorFlow, Theano, or CNTK and is known for its user-friendliness and modularity.
https://github.com/keras-team/keras
·H2O.ai: This open-source machine learning platform is designed for business analysts and data scientists to build machine learning models quickly and efficiently.
https://github.com/h2oai
Deploying a TensorFlow model to a server involves several steps; the guide below walks through the process. First, install TensorFlow:
pip install tensorflow
Set Up TensorFlow Serving: TensorFlow Serving is a flexible, high-performance serving system for machine learning models. The easiest and most straightforward way to install it is with Docker. Pull the TensorFlow Serving image (the container itself is started in the "Serve the Model" step below, since it needs a model to serve):
docker pull tensorflow/serving
Save Your Model: Save your trained TensorFlow model in a format that TensorFlow Serving can use. Typically, you'll export it as a SavedModel. Note that TensorFlow Serving expects the SavedModel to sit inside a numeric version subdirectory (e.g. 1/) under the model base path:
model.save('/path_to_saved_model/1')
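A sketch of the on-disk layout TensorFlow Serving expects, using /tmp/my_model as an illustrative base path (the actual saved_model.pb and variables/ contents are produced by model.save):

```shell
# Illustrative base path; model.save('/tmp/my_model/1') would populate it.
mkdir -p /tmp/my_model/1

# TensorFlow Serving scans the mounted base path for numeric version
# subdirectories and serves the highest one, e.g.:
# /tmp/my_model/
# └── 1/
#     ├── saved_model.pb
#     └── variables/
ls /tmp/my_model
```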
Serve the Model: Place your SavedModel in a directory that TensorFlow Serving can access. Then start TensorFlow Serving, mounting the model directory into the container and setting the model name (without MODEL_NAME, the server looks for a model named "model" by default):
docker run -p 8501:8501 --name=tf_serving -v /path_to_saved_model:/models/my_model -e MODEL_NAME=my_model -t tensorflow/serving
Test the Deployment: Once TensorFlow Serving is running, you can test the deployment by sending a request to the REST API endpoint (replace the instances payload with inputs matching your model's input shape):
curl -X POST http://localhost:8501/v1/models/my_model:predict -d '{"instances": [[1.0, 2.0]]}'
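The same REST call can be made from Python using only the standard library. A sketch: building the request runs offline, the commented send step assumes the server from the previous step is running, and the example instances are placeholders for your model's real inputs:

```python
import json
import urllib.request

# Build a predict request for the REST API exposed on port 8501.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = json.dumps({"instances": [[1.0, 2.0]]}).encode("utf-8")
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
print(payload.decode())

# With TensorFlow Serving running, send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["predictions"])
```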