---
comments: true
description: Learn how to deploy YOLOv8 models on Amazon SageMaker Endpoints. This guide covers the essentials of AWS environment setup, model preparation, and deployment using AWS CloudFormation and the AWS Cloud Development Kit (CDK).
keywords: YOLOv8, Ultralytics, Amazon SageMaker, AWS, CloudFormation, AWS CDK, PyTorch, Model Deployment, Machine Learning, Computer Vision
---

# A Guide to Deploying YOLOv8 on Amazon SageMaker Endpoints

Deploying advanced computer vision models like [Ultralytics’ YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.

This guide will take you through the process of deploying YOLOv8 PyTorch models on Amazon SageMaker Endpoints step by step. You'll learn the essentials of preparing your AWS environment, configuring the model appropriately, and using tools like AWS CloudFormation and the AWS Cloud Development Kit (CDK) for deployment.

## Amazon SageMaker

<p align="center">
  <img width="640" src="https://d1.awsstatic.com/sagemaker/Amazon-SageMaker-Studio%402x.aa0572ebf4ea9237571644c7f853c914c1d0c985.png" alt="Amazon SageMaker Overview">
</p>

[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a machine learning service from Amazon Web Services (AWS) that simplifies the process of building, training, and deploying machine learning models. It provides a broad range of tools for handling various aspects of machine learning workflows. This includes automated features for tuning models, options for training models at scale, and straightforward methods for deploying models into production. SageMaker supports popular machine learning frameworks, offering the flexibility needed for diverse projects. Its features also cover data labeling, workflow management, and performance analysis.

## Deploying YOLOv8 on Amazon SageMaker Endpoints

Deploying YOLOv8 on Amazon SageMaker lets you use its managed environment for real-time inference and take advantage of features like autoscaling. Take a look at the AWS architecture below.

<p align="center">
  <img width="640" src="https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2023/02/28/ML13353_AWSArchitecture-1024x605.png" alt="AWS Architecture">
</p>

### Step 1: Set Up Your AWS Environment

First, ensure you have the following prerequisites in place:

- An AWS Account: If you don't already have one, sign up for an AWS account.

- Configured IAM Roles: You’ll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.

- AWS CLI: If not already installed, download and install the AWS Command Line Interface (CLI) and configure it with your account details. Follow [the AWS CLI instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for installation. Once configured, you can verify your setup with the quick check shown after this list.

- AWS CDK: If not already installed, install the AWS Cloud Development Kit (CDK), which will be used for scripting the deployment. Follow [the AWS CDK instructions](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) for installation.

- Adequate Service Quota: Confirm that you have sufficient quotas for two separate resources in Amazon SageMaker: one for `ml.m5.4xlarge` for endpoint usage and another for `ml.m5.4xlarge` for notebook instance usage. Each of these requires a minimum of one quota value. If your current quotas are below this requirement, it's important to request an increase for each. You can request a quota increase by following the detailed instructions in the [AWS Service Quotas documentation](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html#quota-console-increase).
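
As a quick sanity check, you can confirm that your credentials and default region are visible to the AWS SDK. This minimal sketch uses `boto3` (installable with `pip install boto3`); it is an optional verification step, not part of the deployment repository:

```python
import boto3

# Print the region and identity that boto3 resolves from your AWS CLI configuration
session = boto3.session.Session()
print("Region:", session.region_name)
print("Identity:", boto3.client("sts").get_caller_identity()["Arn"])
```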

### Step 2: Clone the YOLOv8 SageMaker Repository

The next step is to clone the specific AWS repository that contains the resources for deploying YOLOv8 on SageMaker. This repository, hosted on GitHub, includes the necessary CDK scripts and configuration files.

- Clone the GitHub Repository: Execute the following command in your terminal to clone the host-yolov8-on-sagemaker-endpoint repository:

    ```bash
    git clone https://github.com/aws-samples/host-yolov8-on-sagemaker-endpoint.git
    ```

- Navigate to the Cloned Directory: Change your directory to the cloned repository:

    ```bash
    cd host-yolov8-on-sagemaker-endpoint/yolov8-pytorch-cdk
    ```

### Step 3: Set Up the CDK Environment

Now that you have the necessary code, set up your environment for deploying with AWS CDK.

- Create a Python Virtual Environment: This isolates your Python environment and dependencies. Run:

    ```bash
    python3 -m venv .venv
    ```

- Activate the Virtual Environment:

    ```bash
    source .venv/bin/activate
    ```

- Install Dependencies: Install the required Python dependencies for the project:

    ```bash
    pip3 install -r requirements.txt
    ```

- Upgrade AWS CDK Library: Ensure you have the latest version of the AWS CDK library:

    ```bash
    pip install --upgrade aws-cdk-lib
    ```

### Step 4: Create the AWS CloudFormation Stack

- Synthesize the CDK Application: Generate the AWS CloudFormation template from your CDK code:

    ```bash
    cdk synth
    ```

- Bootstrap the CDK Application: Prepare your AWS environment for CDK deployment:

    ```bash
    cdk bootstrap
    ```

- Deploy the Stack: This will create the necessary AWS resources and deploy your model:

    ```bash
    cdk deploy
    ```

### Step 5: Deploy the YOLOv8 Model

Before diving into the deployment instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

After creating the AWS CloudFormation Stack, the next step is to deploy YOLOv8.

- Open the Notebook Instance: Go to the AWS Console and navigate to the Amazon SageMaker service. Select "Notebook Instances" from the dashboard, then locate the notebook instance that was created by your CDK deployment script. Open the notebook instance to access the Jupyter environment.

- Access and Modify `inference.py`: After opening the SageMaker notebook instance in Jupyter, locate the `inference.py` file. Edit the `output_fn` function in `inference.py` as shown below and save your changes to the script, ensuring that there are no syntax errors.

    ```python
    import json


    def output_fn(prediction_output, content_type):
        print("Executing output_fn from inference.py ...")
        infer = {}
        for result in prediction_output:
            if result.boxes is not None:
                infer['boxes'] = result.boxes.numpy().data.tolist()
            if result.masks is not None:
                infer['masks'] = result.masks.numpy().data.tolist()
            if result.keypoints is not None:
                infer['keypoints'] = result.keypoints.numpy().data.tolist()
            if result.obb is not None:
                infer['obb'] = result.obb.numpy().data.tolist()
            if result.probs is not None:
                infer['probs'] = result.probs.numpy().data.tolist()
        return json.dumps(infer)
    ```

- Deploy the Endpoint Using `1_DeployEndpoint.ipynb`: In the Jupyter environment, open the `1_DeployEndpoint.ipynb` notebook located in the `sm-notebook` directory. Follow the instructions in the notebook and run the cells to download the YOLOv8 model, package it with the updated inference code, and upload it to an Amazon S3 bucket. The notebook will guide you through creating and deploying a SageMaker endpoint for the YOLOv8 model.

### Step 6: Testing Your Deployment

Now that your YOLOv8 model is deployed, it's important to test its performance and functionality.

- Open the Test Notebook: In the same Jupyter environment, locate and open the `2_TestEndpoint.ipynb` notebook, also in the `sm-notebook` directory.

- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inferences. Then, you’ll plot the output to visualize the model’s performance and accuracy, as shown below. You can also invoke the endpoint programmatically, as in the sketch after this list.

<p align="center">
  <img width="640" src="https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2023/02/28/ML13353_InferenceOutput.png" alt="Testing Results YOLOv8">
</p>

- Clean-Up Resources: The test notebook will also guide you through the process of cleaning up the endpoint and the hosted model. This is an important step to manage costs and resources effectively, especially if you do not plan to use the deployed model immediately.
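
If you prefer to test the endpoint outside the notebook, the sketch below shows one way to invoke it with `boto3`. The endpoint name and the `image/jpeg` content type are assumptions here; use the actual endpoint name created by the deployment notebook and the content type its inference script accepts:

```python
import json

import boto3

ENDPOINT_NAME = "yolov8-pytorch-endpoint"  # hypothetical; copy the real name from 1_DeployEndpoint.ipynb

runtime = boto3.client("sagemaker-runtime")
with open("bus.jpg", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="image/jpeg",  # assumes the inference script accepts raw JPEG bytes
    Body=payload,
)

# output_fn above returns a JSON string, so the body decodes to a dict of results
result = json.loads(response["Body"].read())
print(result.keys())
```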

### Step 7: Monitoring and Management

After testing, continuous monitoring and management of your deployed model are essential.

- Monitor with Amazon CloudWatch: Regularly check the performance and health of your SageMaker endpoint using [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/).

- Manage the Endpoint: Use the SageMaker console for ongoing management of the endpoint. This includes scaling, updating, or redeploying the model as required.

By completing these steps, you will have successfully deployed and tested a YOLOv8 model on Amazon SageMaker Endpoints. This process not only equips you with practical experience in using AWS services for machine learning deployment but also lays the foundation for deploying other advanced models in the future.

## Summary

This guide took you step by step through deploying YOLOv8 on Amazon SageMaker Endpoints using AWS CloudFormation and the AWS Cloud Development Kit (CDK). The process includes cloning the necessary GitHub repository, setting up the CDK environment, deploying the model using AWS services, and testing its performance on SageMaker.

For more technical details, refer to [this article](https://aws.amazon.com/blogs/machine-learning/hosting-yolov8-pytorch-model-on-amazon-sagemaker-endpoints/) on the AWS Machine Learning Blog. You can also check out the official [Amazon SageMaker Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) for more insights into various features and functionalities.

Are you interested in learning more about different YOLOv8 integrations? Visit the [Ultralytics integrations guide page](../integrations/index.md) to discover additional tools and capabilities that can enhance your machine-learning projects.

---
comments: true
description: Learn how to streamline and optimize your YOLOv8 model training with ClearML. This guide provides insights into integrating ClearML's MLOps tools for efficient model training, from initial setup to advanced experiment tracking and model management.
keywords: Ultralytics, YOLOv8, Object Detection, ClearML, Model Training, MLOps, Experiment Tracking, Workflow Optimization
---

# Training YOLOv8 with ClearML: Streamlining Your MLOps Workflow

MLOps bridges the gap between creating and deploying machine learning models in real-world settings. It focuses on efficient deployment, scalability, and ongoing management to ensure models perform well in practical applications.

[Ultralytics YOLOv8](https://ultralytics.com) effortlessly integrates with ClearML, streamlining and enhancing your object detection model's training and management. This guide will walk you through the integration process, detailing how to set up ClearML, manage experiments, automate model management, and collaborate effectively.

## ClearML

<p align="center">
  <img width="100%" src="https://clear.ml/wp-content/uploads/2023/06/DataOps@2x-1.png" alt="ClearML Overview">
</p>

[ClearML](https://clear.ml/) is an innovative open-source MLOps platform that is skillfully designed to automate, monitor, and orchestrate machine learning workflows. Its key features include automated logging of all training and inference data for full experiment reproducibility, an intuitive web UI for easy data visualization and analysis, advanced hyperparameter optimization algorithms, and robust model management for efficient deployment across various platforms.

## YOLOv8 Training with ClearML

Integrating YOLOv8 with ClearML brings automation and efficiency to your machine learning workflow by improving your training process.

## Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required packages for YOLOv8 and ClearML
        pip install ultralytics clearml
        ```

For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

## Configuring ClearML

Once you have installed the necessary packages, the next step is to initialize and configure your ClearML SDK. This involves setting up your ClearML account and obtaining the necessary credentials for a seamless connection between your development environment and the ClearML server.

Begin by initializing the ClearML SDK in your environment. The `clearml-init` command starts the setup process and prompts you for the necessary credentials.

!!! Tip "Initial SDK Setup"

    === "CLI"

        ```bash
        # Initialize your ClearML SDK setup process
        clearml-init
        ```

After executing this command, visit the [ClearML Settings page](https://app.clear.ml/settings/workspace-configuration). Navigate to the top right corner and select "Settings." Go to the "Workspace" section and click on "Create new credentials." Use the credentials provided in the "Create Credentials" pop-up to complete the setup as instructed, depending on whether you are configuring ClearML in a Jupyter Notebook or a local Python environment.

## Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from clearml import Task
        from ultralytics import YOLO

        # Step 1: Creating a ClearML Task
        task = Task.init(project_name="my_project", task_name="my_yolov8_task")

        # Step 2: Selecting the YOLOv8 Model
        model_variant = "yolov8n"
        task.set_parameter("model_variant", model_variant)

        # Step 3: Loading the YOLOv8 Model
        model = YOLO(f'{model_variant}.pt')

        # Step 4: Setting Up Training Arguments
        args = dict(data="coco128.yaml", epochs=16)
        task.connect(args)

        # Step 5: Initiating Model Training
        results = model.train(**args)
        ```

### Understanding the Code

Let’s understand the steps showcased in the usage code snippet above.

**Step 1: Creating a ClearML Task**: A new task is initialized in ClearML, specifying your project and task names. This task will track and manage your model's training.

**Step 2: Selecting the YOLOv8 Model**: The `model_variant` variable is set to 'yolov8n', one of the YOLOv8 models. This variant is then logged in ClearML for tracking.

**Step 3: Loading the YOLOv8 Model**: The selected YOLOv8 model is loaded using Ultralytics' YOLO class, preparing it for training.

**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco128.yaml`) and the number of epochs (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).

**Step 5: Initiating Model Training**: The model training is started with the specified arguments. The results of the training process are captured in the `results` variable.

### Understanding the Output

Upon running the usage code snippet above, you can expect the following output:

- A confirmation message indicating the creation of a new ClearML task, along with its unique ID.
- An informational message about the script code being stored, indicating that the code execution is being tracked by ClearML.
- A URL link to the ClearML results page where you can monitor the training progress and view detailed logs.
- Download progress for the YOLOv8 model and the specified dataset, followed by a summary of the model architecture and training configuration.
- Initialization messages for various training components like TensorBoard, Automatic Mixed Precision (AMP), and dataset preparation.
- Finally, the training process starts, with progress updates as the model trains on the specified dataset. For an in-depth understanding of the performance metrics used during training, read [our guide on performance metrics](../guides/yolo-performance-metrics.md).

### Viewing the ClearML Results Page

By clicking on the URL link to the ClearML results page in the output of the usage code snippet, you can access a comprehensive view of your model's training process.

#### Key Features of the ClearML Results Page

- **Real-Time Metrics Tracking**

    - Track critical metrics like loss, accuracy, and validation scores as they occur.
    - Provides immediate feedback for timely model performance adjustments.

- **Experiment Comparison**

    - Compare different training runs side-by-side.
    - Essential for hyperparameter tuning and identifying the most effective models.

- **Detailed Logs and Outputs**

    - Access comprehensive logs, graphical representations of metrics, and console outputs.
    - Gain a deeper understanding of model behavior and issue resolution.

- **Resource Utilization Monitoring**

    - Monitor the utilization of computational resources, including CPU, GPU, and memory.
    - Key to optimizing training efficiency and costs.

- **Model Artifacts Management**

    - View, download, and share model artifacts like trained models and checkpoints.
    - Enhances collaboration and streamlines model deployment and sharing.

For a visual walkthrough of what the ClearML Results Page looks like, watch the video below:

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/iLcC7m3bCes?si=oSEAoZbrg8inCg_2" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> YOLOv8 MLOps Integration using ClearML
</p>

### Advanced Features in ClearML

ClearML offers several advanced features to enhance your MLOps experience.

#### Remote Execution

ClearML's remote execution feature facilitates the reproduction and manipulation of experiments on different machines. It logs essential details like installed packages and uncommitted changes. When a task is enqueued, the ClearML Agent pulls it, recreates the environment, and runs the experiment, reporting back with detailed results.

Deploying a ClearML Agent is straightforward and can be done on various machines using the following command:

```bash
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```

This setup is applicable to cloud VMs, local GPUs, or laptops. ClearML Autoscalers help manage cloud workloads on platforms like AWS, GCP, and Azure, automating the deployment of agents and adjusting resources based on your resource budget.

#### Cloning, Editing, and Enqueuing

ClearML's user-friendly interface allows easy cloning, editing, and enqueuing of tasks. Users can clone an existing experiment, adjust parameters or other details through the UI, and enqueue the task for execution. This streamlined process ensures that the ClearML Agent executing the task uses updated configurations, making it ideal for iterative experimentation and model fine-tuning. The same flow can also be scripted, as in the sketch below.

<p align="center"><br>
  <img width="100%" src="https://clear.ml/docs/latest/assets/images/integrations_yolov5-2483adea91df4d41bfdf1a37d28864d4.gif" alt="Cloning, Editing, and Enqueuing with ClearML">
</p>
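
As a rough sketch, the clone-edit-enqueue flow can also be driven from the ClearML SDK. The project, task, and queue names below are placeholders taken from the usage example above; adapt them to your own workspace:

```python
from clearml import Task

# Look up the experiment created by the usage example above
template = Task.get_task(project_name="my_project", task_name="my_yolov8_task")

# Clone it, change a parameter, and enqueue the clone for a ClearML Agent to run
cloned = Task.clone(source_task=template, name="my_yolov8_task_tuned")
cloned.set_parameter("model_variant", "yolov8s")
Task.enqueue(cloned, queue_name="default")
```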

## Summary

This guide has led you through the process of integrating ClearML with Ultralytics' YOLOv8. Covering everything from initial setup to advanced model management, you've discovered how to leverage ClearML for efficient training, experiment tracking, and workflow optimization in your machine learning projects.

For further details on usage, visit [ClearML's official documentation](https://clear.ml/docs/latest/docs/integrations/yolov8/).

Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a treasure trove of resources and insights.

---
comments: true
description: Discover how to track and enhance YOLOv8 model training with Comet ML's logging tools, from setup to monitoring key metrics and managing experiments for in-depth analysis.
keywords: Ultralytics, YOLOv8, Object Detection, Comet ML, Model Training, Model Metrics Logging, Experiment Tracking, Offline Experiment Management
---

# Elevating YOLOv8 Training: Simplify Your Logging Process with Comet ML

Logging key training details such as parameters, metrics, image predictions, and model checkpoints is essential in machine learning—it keeps your project transparent, your progress measurable, and your results repeatable.

[Ultralytics YOLOv8](https://ultralytics.com) seamlessly integrates with Comet ML, efficiently capturing and optimizing every aspect of your YOLOv8 object detection model's training process. In this guide, we'll cover the installation process, Comet ML setup, real-time insights, custom logging, and offline usage, ensuring that your YOLOv8 training is thoroughly documented and fine-tuned for outstanding results.

## Comet ML

<p align="center">
  <img width="640" src="https://www.comet.com/docs/v2/img/landing/home-hero.svg" alt="Comet ML Overview">
</p>

[Comet ML](https://www.comet.ml/) is a platform for tracking, comparing, explaining, and optimizing machine learning models and experiments. It allows you to log metrics, parameters, media, and more during your model training and monitor your experiments through an aesthetically pleasing web interface. Comet ML helps data scientists iterate more rapidly, enhances transparency and reproducibility, and aids in the development of production models.

## Harnessing the Power of YOLOv8 and Comet ML

By combining Ultralytics YOLOv8 with Comet ML, you unlock a range of benefits. These include simplified experiment management, real-time insights for quick adjustments, flexible and tailored logging options, and the ability to log experiments offline when internet access is limited. This integration empowers you to make data-driven decisions, analyze performance metrics, and achieve exceptional results.

## Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required packages for YOLOv8 and Comet ML
        pip install ultralytics comet_ml torch torchvision
        ```

## Configuring Comet ML

After installing the required packages, you’ll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.

!!! Tip "Configuring Comet ML"

    === "CLI"

        ```bash
        # Set your Comet API key
        export COMET_API_KEY=<Your API Key>
        ```
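
If you are working in a notebook or prefer to keep the configuration in Python, the same environment variable can be set before initialization; this is simply an alternative to the shell export above:

```python
import os

# Equivalent to the shell export above; set this before calling comet_ml.init()
os.environ["COMET_API_KEY"] = "<Your API Key>"
```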

Then, you can initialize your Comet project. Comet will automatically detect the API key and proceed with the setup.

```python
import comet_ml

comet_ml.init(project_name="comet-example-yolov8-coco128")
```

If you are using a Google Colab notebook, the code above will prompt you to enter your API key for initialization.

## Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n.pt")

        # Train the model
        results = model.train(
            data="coco128.yaml",
            project="comet-example-yolov8-coco128",
            batch=32,
            save_period=1,
            save_json=True,
            epochs=3,
        )
        ```

After running the training code, Comet ML will create an experiment in your Comet workspace to track the run automatically. You will then be provided with a link to view the detailed logging of your [YOLOv8 model's training](../modes/train.md) process.

Comet automatically logs the following data with no additional configuration: metrics such as mAP and loss, hyperparameters, model checkpoints, interactive confusion matrix, and image bounding box predictions.

## Understanding Your Model's Performance with Comet ML Visualizations

Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here’s a quick tour:

**Experiment Panels**

The experiment panels section of the Comet ML dashboard organizes and presents the different runs and their metrics, such as segment mask loss, class loss, precision, and mean average precision.

<p align="center">
  <img width="640" src="https://www.comet.com/site/wp-content/uploads/2023/07/1_I20ts7j995-D86-BvtWYaw.png" alt="Comet ML Experiment Panels">
</p>

**Metrics**

In the metrics section, you have the option to examine the metrics in a tabular format as well, which is displayed in a dedicated pane as illustrated here.

<p align="center">
  <img width="640" src="https://www.comet.com/site/wp-content/uploads/2023/07/1_FNAkQKq9o02wRRSCJh4gDw.png" alt="Comet ML Metrics">
</p>

**Interactive Confusion Matrix**

The confusion matrix, found in the Confusion Matrix tab, provides an interactive way to assess the model's classification accuracy. It details the correct and incorrect predictions, allowing you to understand the model's strengths and weaknesses.

<p align="center">
  <img width="640" src="https://www.comet.com/site/wp-content/uploads/2023/07/1_h-Nf-tCm8HbsvVK0d6rTng-1500x768.png" alt="Comet ML Confusion Matrix">
</p>

**System Metrics**

Comet ML logs system metrics to help identify any bottlenecks in the training process. It includes metrics such as GPU utilization, GPU memory usage, CPU utilization, and RAM usage. These are essential for monitoring the efficiency of resource usage during model training.

<p align="center">
  <img width="640" src="https://www.comet.com/site/wp-content/uploads/2023/07/1_B7dmqqUMyOtyH9XsVMr58Q.png" alt="Comet ML System Metrics">
</p>

## Customizing Comet ML Logging

Comet ML offers the flexibility to customize its logging behavior by setting environment variables. These configurations allow you to tailor Comet ML to your specific needs and preferences. Here are some helpful customization options:

### Logging Image Predictions

You can control the number of image predictions that Comet ML logs during your experiments. By default, Comet ML logs 100 image predictions from the validation set. However, you can change this number to better suit your requirements. For example, to log 200 image predictions, use the following code:

```python
import os

os.environ["COMET_MAX_IMAGE_PREDICTIONS"] = "200"
```

### Batch Logging Interval

Comet ML allows you to specify how often batches of image predictions are logged. The `COMET_EVAL_BATCH_LOGGING_INTERVAL` environment variable controls this frequency. The default setting is 1, which logs predictions from every validation batch. You can adjust this value to log predictions at a different interval. For instance, setting it to 4 will log predictions from every fourth batch.

```python
import os

os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
```

### Disabling Confusion Matrix Logging

In some cases, you may not want to log the confusion matrix from your validation set after every epoch. You can disable this feature by setting the `COMET_EVAL_LOG_CONFUSION_MATRIX` environment variable to "false." The confusion matrix will only be logged once, after the training is completed.

```python
import os

os.environ["COMET_EVAL_LOG_CONFUSION_MATRIX"] = "false"
```

### Offline Logging

If you find yourself in a situation where internet access is limited, Comet ML provides an offline logging option. You can set the `COMET_MODE` environment variable to "offline" to enable this feature. Your experiment data will be saved locally in a directory that you can later upload to Comet ML when internet connectivity is available.

```python
import os

os.environ["COMET_MODE"] = "offline"
```

## Summary

This guide has walked you through integrating Comet ML with Ultralytics' YOLOv8. From installation to customization, you've learned to streamline experiment management, gain real-time insights, and adapt logging to your project's needs.

Explore [Comet ML's official documentation](https://www.comet.com/docs/v2/integrations/third-party-tools/yolov8/) for more insights on integrating with YOLOv8.

Furthermore, if you're looking to dive deeper into the practical applications of YOLOv8, specifically for image segmentation tasks, this detailed guide on [fine-tuning YOLOv8 with Comet ML](https://www.comet.com/site/blog/fine-tuning-yolov8-for-image-segmentation-with-comet/) offers valuable insights and step-by-step instructions to enhance your model's performance.

Additionally, to explore other exciting integrations with Ultralytics, check out the [integration guide page](../integrations/index.md), which offers a wealth of resources and information.

---
comments: true
description: Explore the process of exporting Ultralytics YOLOv8 models to CoreML format, enabling efficient object detection capabilities for iOS and macOS applications on Apple devices.
keywords: Ultralytics, YOLOv8, CoreML Export, Model Deployment, Apple Devices, Object Detection, Machine Learning
---

# CoreML Export for YOLOv8 Models

Deploying computer vision models on Apple devices like iPhones and Macs requires a format that ensures seamless performance.

The CoreML export format allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for efficient object detection in iOS and macOS applications. In this guide, we'll walk you through the steps for converting your models to the CoreML format, making it easier for your models to perform well on Apple devices.

## CoreML

<p align="center">
  <img width="100%" src="https://github.com/RizwanMunawar/ultralytics/assets/62513924/0c757e32-3a9f-422e-9526-efde5f663ccd" alt="CoreML Overview">
</p>

[CoreML](https://developer.apple.com/documentation/coreml) is Apple's foundational machine learning framework that builds upon Accelerate, BNNS, and Metal Performance Shaders. It provides a machine-learning model format that seamlessly integrates into iOS applications and supports tasks such as image analysis, natural language processing, audio-to-text conversion, and sound analysis.

Applications can take advantage of Core ML without needing a network connection or API calls, because the Core ML framework uses on-device computing. This means model inference can be performed locally on the user's device.

## Key Features of CoreML Models

Apple's CoreML framework offers robust features for on-device machine learning. Here are the key features that make CoreML a powerful tool for developers:

- **Comprehensive Model Support**: Converts and runs models from popular frameworks like TensorFlow, PyTorch, scikit-learn, XGBoost, and LibSVM.

<p align="center">
  <img width="100%" src="https://apple.github.io/coremltools/docs-guides/_images/introduction-coremltools.png" alt="CoreML Supported Models">
</p>

- **On-device Machine Learning**: Ensures data privacy and swift processing by executing models directly on the user's device, eliminating the need for network connectivity.

- **Performance and Optimization**: Uses the device's CPU, GPU, and Neural Engine for optimal performance with minimal power and memory usage. Offers tools for model compression and optimization while maintaining accuracy.

- **Ease of Integration**: Provides a unified format for various model types and a user-friendly API for seamless integration into apps. Supports domain-specific tasks through frameworks like Vision and Natural Language.

- **Advanced Features**: Includes on-device training capabilities for personalized experiences, asynchronous predictions for interactive ML experiences, and model inspection and validation tools.

## CoreML Deployment Options

Before we look at the code for exporting YOLOv8 models to the CoreML format, let’s understand where CoreML models are usually used.

CoreML offers various deployment options for machine learning models, including:

- **On-Device Deployment**: This method directly integrates CoreML models into your iOS app. It's particularly advantageous for ensuring low latency, enhanced privacy (since data remains on the device), and offline functionality. This approach, however, may be limited by the device's hardware capabilities, especially for larger and more complex models. On-device deployment can be executed in the following two ways.

    - **Embedded Models**: These models are included in the app bundle and are immediately accessible. They are ideal for small models that do not require frequent updates.

    - **Downloaded Models**: These models are fetched from a server as needed. This approach is suitable for larger models or those needing regular updates. It helps keep the app bundle size smaller.

- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It’s ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.

## Exporting YOLOv8 Models to CoreML

Exporting YOLOv8 to CoreML enables optimized, on-device machine learning performance within Apple's ecosystem, offering benefits in terms of efficiency, security, and seamless integration with iOS, macOS, watchOS, and tvOS platforms.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to CoreML format
        model.export(format='coreml')  # creates 'yolov8n.mlpackage'

        # Load the exported CoreML model
        coreml_model = YOLO('yolov8n.mlpackage')

        # Run inference
        results = coreml_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to CoreML format
        yolo export model=yolov8n.pt format=coreml  # creates 'yolov8n.mlpackage'

        # Run inference with the exported model
        yolo predict model=yolov8n.mlpackage source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).

## Deploying Exported YOLOv8 CoreML Models

Having successfully exported your Ultralytics YOLOv8 models to CoreML, the next critical phase is deploying these models effectively. For detailed guidance on deploying CoreML models in various environments, check out these resources:

- **[CoreML Tools](https://apple.github.io/coremltools/docs-guides/)**: This guide includes instructions and examples to convert models from TensorFlow, PyTorch, and other libraries to Core ML.

- **[ML and Vision](https://developer.apple.com/videos/ml-vision)**: A collection of comprehensive videos that cover various aspects of using and implementing CoreML models.

- **[Integrating a Core ML Model into Your App](https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app)**: A comprehensive guide on integrating a CoreML model into an iOS application, detailing steps from preparing the model to implementing it in the app for various functionalities.
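
Before wiring the model into an app, it can be useful to inspect the exported package's inputs and outputs. Here is a minimal sketch with `coremltools`, assuming the `yolov8n.mlpackage` produced by the export step (loading and running CoreML models requires macOS):

```python
import coremltools as ct

# Load the exported package and print its input/output description
mlmodel = ct.models.MLModel("yolov8n.mlpackage")
print(mlmodel.get_spec().description)
```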

## Summary

In this guide, we went over how to export Ultralytics YOLOv8 models to CoreML format. By following the steps outlined in this guide, you can ensure maximum compatibility and performance when exporting YOLOv8 models to CoreML.

For further details on usage, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).

Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.

---
comments: true
description: This guide provides a step-by-step approach to integrating DVCLive with Ultralytics YOLOv8 for advanced experiment tracking. Learn how to set up your environment, run experiments with varied configurations, and analyze results using DVCLive's powerful tracking and visualization tools.
keywords: DVCLive, Ultralytics, YOLOv8, Machine Learning, Experiment Tracking, Data Version Control, ML Workflows, Model Training, Hyperparameter Tuning
---

# Advanced YOLOv8 Experiment Tracking with DVCLive

Experiment tracking in machine learning is critical to model development and evaluation. It involves recording and analyzing various parameters, metrics, and outcomes from numerous training runs. This process is essential for understanding model performance and making data-driven decisions to refine and optimize models.

Integrating DVCLive with [Ultralytics YOLOv8](https://ultralytics.com) transforms the way experiments are tracked and managed. This integration offers a seamless solution for automatically logging key experiment details, comparing results across different runs, and visualizing data for in-depth analysis. In this guide, we'll understand how DVCLive can be used to streamline the process.

## DVCLive

<p align="center">
  <img width="640" src="https://dvc.org/static/6daeb07124bab895bea3f4930e3116e9/aa619/dvclive-studio.webp" alt="DVCLive Overview">
</p>

[DVCLive](https://dvc.org/doc/dvclive), developed by DVC, is an innovative open-source tool for experiment tracking in machine learning. Integrating seamlessly with Git and DVC, it automates the logging of crucial experiment data like model parameters and training metrics. Designed for simplicity, DVCLive enables effortless comparison and analysis of multiple runs, enhancing the efficiency of machine learning projects with intuitive data visualization and analysis tools.

## YOLOv8 Training with DVCLive

YOLOv8 training sessions can be effectively monitored with DVCLive. Additionally, DVC provides integral features for visualizing these experiments, including the generation of a report that enables the comparison of metric plots across all tracked experiments, offering a comprehensive view of the training process.

## Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required packages for YOLOv8 and DVCLive
        pip install ultralytics dvclive
        ```

For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

## Configuring DVCLive

Once you have installed the necessary packages, the next step is to set up and configure your environment with the necessary credentials. This setup ensures a smooth integration of DVCLive into your existing workflow.

Begin by initializing a Git repository, as Git plays a crucial role in version control for both your code and DVCLive configurations.

!!! Tip "Initial Environment Setup"

    === "CLI"

        ```bash
        # Initialize a Git repository
        git init -q

        # Configure Git with your details
        git config --local user.email "you@example.com"
        git config --local user.name "Your Name"

        # Initialize DVCLive in your project
        dvc init -q

        # Commit the DVCLive setup to your Git repository
        git commit -m "DVC init"
        ```

In these commands, be sure to replace "you@example.com" with the email address associated with your Git account, and "Your Name" with your Git account username.

## Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

### Training YOLOv8 Models with DVCLive

Start by running your YOLOv8 training sessions. You can use different model configurations and training parameters to suit your project needs. For instance:

```bash
# Example training commands for YOLOv8 with varying configurations
yolo train model=yolov8n.pt data=coco8.yaml epochs=5 imgsz=512
yolo train model=yolov8n.pt data=coco8.yaml epochs=5 imgsz=640
```
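
The same sweep can be run from Python if that fits your workflow better; here is a minimal sketch equivalent to the commands above:

```python
from ultralytics import YOLO

# Run the same two experiments as the CLI commands above
for imgsz in (512, 640):
    model = YOLO("yolov8n.pt")
    model.train(data="coco8.yaml", epochs=5, imgsz=imgsz)
```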

Adjust the model, data, epochs, and imgsz parameters according to your specific requirements. For a detailed understanding of the model training process and best practices, refer to our [YOLOv8 Model Training guide](../modes/train.md).

### Monitoring Experiments with DVCLive

DVCLive enhances the training process by enabling the tracking and visualization of key metrics. When installed, Ultralytics YOLOv8 automatically integrates with DVCLive for experiment tracking, which you can later analyze for performance insights. For a comprehensive understanding of the specific performance metrics used during training, be sure to explore [our detailed guide on performance metrics](../guides/yolo-performance-metrics.md).

### Analyzing Results

After your YOLOv8 training sessions are complete, you can leverage DVCLive's powerful visualization tools for in-depth analysis of the results. DVCLive's integration ensures that all training metrics are systematically logged, facilitating a comprehensive evaluation of your model's performance.

To start the analysis, you can extract the experiment data using DVC's API and process it with Pandas for easier handling and visualization:

```python
import dvc.api
import pandas as pd

# Define the columns of interest
columns = ["Experiment", "epochs", "imgsz", "model", "metrics.mAP50-95(B)"]

# Retrieve experiment data
df = pd.DataFrame(dvc.api.exp_show(), columns=columns)

# Clean the data
df.dropna(inplace=True)
df.reset_index(drop=True, inplace=True)

# Display the DataFrame
print(df)
```

The output of the code snippet above provides a clear tabular view of the different experiments conducted with YOLOv8 models. Each row represents a different training run, detailing the experiment's name, the number of epochs, image size (imgsz), the specific model used, and the mAP50-95(B) metric. This metric is crucial for evaluating the model's accuracy, with higher values indicating better performance.

#### Visualizing Results with Plotly

For a more interactive and visual analysis of your experiment results, you can use Plotly's parallel coordinates plot. This type of plot is particularly useful for understanding the relationships and trade-offs between different parameters and metrics.

```python
from plotly.express import parallel_coordinates

# Create a parallel coordinates plot
fig = parallel_coordinates(df, columns, color="metrics.mAP50-95(B)")

# Display the plot
fig.show()
```

The output of the code snippet above generates a plot that visually represents the relationships between epochs, image size, model type, and their corresponding mAP50-95(B) scores, enabling you to spot trends and patterns in your experiment data.

#### Generating Comparative Visualizations with DVC

DVC provides a useful command to generate comparative plots for your experiments. This can be especially helpful to compare the performance of different models over various training runs.

```bash
# Generate DVC comparative plots
dvc plots diff $(dvc exp list --names-only)
```

After executing this command, DVC generates plots comparing the metrics across different experiments, which are saved as HTML files. Below is an example image illustrating typical plots generated by this process. The image showcases various graphs, including those representing mAP, recall, precision, loss values, and more, providing a visual overview of key performance metrics:

<p align="center">
  <img width="640" src="https://dvc.org/0f1243f5a0c5ea940a080478de267cba/yolo-studio-plots.gif" alt="DVCLive Plots">
</p>

### Displaying DVC Plots

If you are using a Jupyter Notebook and you want to display the generated DVC plots, you can use the IPython display functionality.

```python
from IPython.display import HTML

# Display the DVC plots as HTML
HTML(filename='./dvc_plots/index.html')
```

This code will render the HTML file containing the DVC plots directly in your Jupyter Notebook, providing an easy and convenient way to analyze the visualized experiment data.

### Making Data-Driven Decisions

Use the insights gained from these visualizations to make informed decisions about model optimizations, hyperparameter tuning, and other modifications to enhance your model's performance.

### Iterating on Experiments

Based on your analysis, iterate on your experiments. Adjust model configurations, training parameters, or even the data inputs, and repeat the training and analysis process. This iterative approach is key to refining your model for the best possible performance.

## Summary

This guide has led you through the process of integrating DVCLive with Ultralytics' YOLOv8. You have learned how to harness the power of DVCLive for detailed experiment monitoring, effective visualization, and insightful analysis in your machine learning endeavors.

For further details on usage, visit [DVCLive’s official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).

Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a collection of great resources and insights.
|
||||
---
|
||||
comments: true
|
||||
description: Discover how to uplift your Ultralytics YOLOv8 model's overall performance with the TFLite Edge TPU export format, which is perfect for mobile and embedded devices.
|
||||
keywords: Ultralytics, YOLOv8, TFLite edge TPU format, Export YOLOv8, Model Deployment, Flexible Deployment
|
||||
---
|
||||
|
||||
# Learn to Export to TFLite Edge TPU Format From YOLOv8 Model
|
||||
|
||||
Deploying computer vision models on devices with limited computational power, such as mobile or embedded systems, can be tricky. Using a model format that is optimized for faster performance simplifies the process. The [TensorFlow Lite](https://www.tensorflow.org/lite) [Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) or TFLite Edge TPU model format is designed to use minimal power while delivering fast performance for neural networks.
|
||||
|
||||
The export to TFLite Edge TPU format feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and low-power inferencing. In this guide, we'll walk you through converting your models to the TFLite Edge TPU format, making it easier for your models to perform well on various mobile and embedded devices.
|
||||
|
||||
## Why Should You Export to TFLite Edge TPU?
|
||||
|
||||
Exporting models to TensorFlow Edge TPU makes machine learning tasks fast and efficient. This technology suits applications with limited power, computing resources, and connectivity. The Edge TPU is a hardware accelerator by Google. It speeds up TensorFlow Lite models on edge devices. The image below shows an example of the process involved.
|
||||
|
||||
<p align="center">
|
||||
<img width="100%" src="https://coral.ai/static/docs/images/edgetpu/compile-workflow.png" alt="TFLite Edge TPU">
|
||||
</p>
|
||||
|
||||
The Edge TPU works with quantized models. Quantization makes models smaller and faster without losing much accuracy. It is ideal for the limited resources of edge computing, allowing applications to respond quickly by reducing latency and allowing for quick data processing locally, without cloud dependency. Local processing also keeps user data private and secure since it's not sent to a remote server.
## Key Features of TFLite Edge TPU

Here are the key features that make TFLite Edge TPU a great model format choice for developers:

- **Optimized Performance on Edge Devices**: The TFLite Edge TPU achieves high-speed neural network performance through quantization, model optimization, hardware acceleration, and compiler optimization. Its minimalistic architecture contributes to its smaller size and cost-efficiency.

- **High Computational Throughput**: TFLite Edge TPU combines specialized hardware acceleration and efficient runtime execution to achieve high computational throughput. It is well-suited for deploying machine learning models with stringent performance requirements on edge devices.

- **Efficient Matrix Computations**: The TensorFlow Edge TPU is optimized for matrix operations, which are crucial for neural network computations. This efficiency is key in machine learning models, particularly those requiring numerous and complex matrix multiplications and transformations.

## Deployment Options with TFLite Edge TPU

Before we jump into how to export YOLOv8 models to the TFLite Edge TPU format, let's understand where TFLite Edge TPU models are usually used.

TFLite Edge TPU offers various deployment options for machine learning models, including:

- **On-Device Deployment**: TensorFlow Edge TPU models can be directly deployed on mobile and embedded devices. On-device deployment allows the models to execute directly on the hardware, eliminating the need for cloud connectivity.

- **Edge Computing with Cloud TensorFlow TPUs**: In scenarios where edge devices have limited processing capabilities, TensorFlow Edge TPUs can offload inference tasks to cloud servers equipped with TPUs.

- **Hybrid Deployment**: A hybrid approach combines on-device and cloud deployment and offers a versatile and scalable solution for deploying machine learning models. Advantages include on-device processing for quick responses and cloud computing for more complex computations.
## Exporting YOLOv8 Models to TFLite Edge TPU

You can expand model compatibility and deployment flexibility by converting YOLOv8 models to TFLite Edge TPU format.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TFLite Edge TPU format
        model.export(format='edgetpu')  # creates 'yolov8n_full_integer_quant_edgetpu.tflite'

        # Load the exported TFLite Edge TPU model
        edgetpu_model = YOLO('yolov8n_full_integer_quant_edgetpu.tflite')

        # Run inference
        results = edgetpu_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TFLite Edge TPU format
        yolo export model=yolov8n.pt format=edgetpu  # creates 'yolov8n_full_integer_quant_edgetpu.tflite'

        # Run inference with the exported model
        yolo predict model=yolov8n_full_integer_quant_edgetpu.tflite source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).

## Deploying Exported YOLOv8 TFLite Edge TPU Models

After successfully exporting your Ultralytics YOLOv8 models to TFLite Edge TPU format, you can deploy them. The primary and recommended first step for running a TFLite Edge TPU model is to use the `YOLO("model_edgetpu.tflite")` method, as outlined in the previous usage code snippet.

However, for in-depth instructions on deploying your TFLite Edge TPU models, take a look at the following resources:

- **[Coral Edge TPU on a Raspberry Pi with Ultralytics YOLOv8](../guides/coral-edge-tpu-on-raspberry-pi.md)**: Discover how to integrate Coral Edge TPUs with Raspberry Pi for enhanced machine learning capabilities.

- **[Code Examples](https://coral.ai/docs/edgetpu/compiler/)**: Access practical TensorFlow Edge TPU deployment examples to kickstart your projects.

- **[Run Inference on the Edge TPU with Python](https://coral.ai/docs/edgetpu/tflite-python/#overview)**: Explore how to use the TensorFlow Lite Python API for Edge TPU applications, including setup and usage guidelines.
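For reference, here is a minimal sketch of loading the exported model with the TensorFlow Lite Python API and the Edge TPU delegate, in the spirit of the Coral resources above. The model filename and the delegate library name (`libedgetpu.so.1` on Linux) are assumptions about your setup, and decoding the raw YOLOv8 output tensors is left to the application:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the exported model and attach the Edge TPU delegate
interpreter = Interpreter(
    model_path="yolov8n_full_integer_quant_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

# Feed a dummy input matching the model's expected shape and dtype
input_details = interpreter.get_input_details()
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# Raw (still quantized) predictions
output_details = interpreter.get_output_details()
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```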
## Summary

In this guide, we've learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power efficiency of your computer vision applications.

For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/edge-tpu).

Also, for more information on other Ultralytics YOLOv8 integrations, please visit our [integration guide page](index.md). There, you'll discover valuable resources and insights.
106
docs/en/integrations/gradio.md
Normal file
@ -0,0 +1,106 @@
---
comments: true
description: Learn to use Gradio and Ultralytics YOLOv8 for interactive object detection. Upload images and adjust detection parameters in real-time.
keywords: Gradio, Ultralytics YOLOv8, object detection, interactive AI, Python
---

# Interactive Object Detection: Gradio & Ultralytics YOLOv8 🚀

## Introduction to Interactive Object Detection

This Gradio interface provides an easy and interactive way to perform object detection using the [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics/) model. Users can upload images and adjust parameters like confidence threshold and intersection-over-union (IoU) threshold to get real-time detection results.

## Why Use Gradio for Object Detection?

* **User-Friendly Interface:** Gradio offers a straightforward platform for users to upload images and visualize detection results without any coding requirement.
* **Real-Time Adjustments:** Parameters such as confidence and IoU thresholds can be adjusted on the fly, allowing for immediate feedback and optimization of detection results.
* **Broad Accessibility:** The Gradio web interface can be accessed by anyone, making it an excellent tool for demonstrations, educational purposes, and quick experiments.

<p align="center">
  <img width="800" alt="Gradio example screenshot" src="https://github.com/RizwanMunawar/ultralytics/assets/26833433/52ee3cd2-ac59-4c27-9084-0fd05c6c33be">
</p>

## How to Install Gradio

```bash
pip install gradio
```

## How to Use the Interface

1. **Upload Image:** Click on 'Upload Image' to choose an image file for object detection.
2. **Adjust Parameters:**
    * **Confidence Threshold:** Slider to set the minimum confidence level for detecting objects.
    * **IoU Threshold:** Slider to set the IoU threshold for distinguishing different objects.
3. **View Results:** The processed image with detected objects and their labels will be displayed.
## Example Use Cases

* **Sample Image 1:** Bus detection with default thresholds.
* **Sample Image 2:** Detection on a sports image with default thresholds.

## Usage Example

This section provides the Python code used to create the Gradio interface with the Ultralytics YOLOv8 model. It supports classification, detection, segmentation, and keypoint tasks.

```python
import gradio as gr
import PIL.Image as Image

from ultralytics import ASSETS, YOLO

model = YOLO("yolov8n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    """Run YOLOv8 inference on an image and return the annotated result."""
    results = model.predict(
        source=img,
        conf=conf_threshold,
        iou=iou_threshold,
        show_labels=True,
        show_conf=True,
        imgsz=640,
    )

    # Plot the result and convert BGR (OpenCV) to RGB (PIL)
    for r in results:
        im_array = r.plot()
        im = Image.fromarray(im_array[..., ::-1])

    return im


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio",
    description="Upload images for inference. The Ultralytics YOLOv8n model is used by default.",
    examples=[
        [ASSETS / "bus.jpg", 0.25, 0.45],
        [ASSETS / "zidane.jpg", 0.25, 0.45],
    ],
)

if __name__ == "__main__":
    iface.launch()
```
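If you want to share the demo beyond your machine, Gradio can serve it on a public URL. A minimal sketch (the `share`, `server_name`, and `server_port` arguments are standard Gradio launch options; the values shown are assumptions about your environment):

```python
# Launch with a temporary public link on a custom host/port
iface.launch(share=True, server_name="0.0.0.0", server_port=7860)
```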
## Parameters Explanation

| Parameter Name   | Type    | Description                                               |
|------------------|---------|-----------------------------------------------------------|
| `img`            | `Image` | The image on which object detection will be performed.    |
| `conf_threshold` | `float` | Confidence threshold for detecting objects.               |
| `iou_threshold`  | `float` | Intersection-over-union threshold for object separation.  |

### Gradio Interface Components

| Component    | Description                              |
|--------------|------------------------------------------|
| Image Input  | To upload the image for detection.       |
| Sliders      | To adjust confidence and IoU thresholds. |
| Image Output | To display the detection results.        |
108
docs/en/integrations/index.md
Normal file
@ -0,0 +1,108 @@
---
comments: true
description: Explore Ultralytics integrations with tools for dataset management, model optimization, ML workflows automation, experiment tracking, version control, and more. Learn about our support for various model export formats for deployment.
keywords: Ultralytics integrations, Roboflow, Neural Magic, ClearML, Comet ML, DVC, Ultralytics HUB, MLFlow, Neptune, Ray Tune, TensorBoard, W&B, model export formats, PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, CoreML, TF SavedModel, TF GraphDef, TF Lite, TF Edge TPU, TF.js, PaddlePaddle, NCNN
---

# Ultralytics Integrations

Welcome to the Ultralytics Integrations page! This page provides an overview of our partnerships with various tools and platforms, designed to streamline your machine learning workflows, enhance dataset management, simplify model training, and facilitate efficient deployment.

<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png" alt="Ultralytics YOLO ecosystem and integrations">

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/ZzUSXQkLbNw"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> Ultralytics YOLOv8 Deployment and Integrations
</p>

## Datasets Integrations

- [Roboflow](roboflow.md): Facilitate seamless dataset management for Ultralytics models, offering robust annotation, preprocessing, and augmentation capabilities.

## Training Integrations

- [ClearML](clearml.md): Automate your Ultralytics ML workflows, monitor experiments, and foster team collaboration.

- [Comet ML](comet.md): Enhance your model development with Ultralytics by tracking, comparing, and optimizing your machine learning experiments.

- [DVC](dvc.md): Implement version control for your Ultralytics machine learning projects, synchronizing data, code, and models effectively.

- [MLFlow](mlflow.md): Streamline the entire ML lifecycle of Ultralytics models, from experimentation and reproducibility to deployment.

- [Ultralytics HUB](https://hub.ultralytics.com): Access and contribute to a community of pre-trained Ultralytics models.

- [Neptune](https://neptune.ai/): Maintain a comprehensive log of your ML experiments with Ultralytics in this metadata store designed for MLOps.

- [Ray Tune](ray-tune.md): Optimize the hyperparameters of your Ultralytics models at any scale.

- [TensorBoard](tensorboard.md): Visualize your Ultralytics ML workflows, monitor model metrics, and foster team collaboration.

- [Weights & Biases (W&B)](weights-biases.md): Monitor experiments, visualize metrics, and foster reproducibility and collaboration on Ultralytics projects.

- [Amazon SageMaker](amazon-sagemaker.md): Leverage Amazon SageMaker to efficiently build, train, and deploy Ultralytics models, providing an all-in-one platform for the ML lifecycle.

## Deployment Integrations

- [Neural Magic](neural-magic.md): Leverage Quantization Aware Training (QAT) and pruning techniques to optimize Ultralytics models for superior performance and leaner size.

- [Gradio](gradio.md) 🚀 NEW: Deploy Ultralytics models with Gradio for real-time, interactive object detection demos.

- [TorchScript](torchscript.md): Developed as part of the [PyTorch](https://pytorch.org/) framework, TorchScript enables efficient execution and deployment of machine learning models in various production environments without the need for Python dependencies.

- [ONNX](onnx.md): An open-source format created by [Microsoft](https://www.microsoft.com) for facilitating the transfer of AI models between various frameworks, enhancing the versatility and deployment flexibility of Ultralytics models.

- [OpenVINO](openvino.md): Intel's toolkit for optimizing and deploying computer vision models efficiently across various Intel CPU and GPU platforms.

- [TensorRT](tensorrt.md): Developed by [NVIDIA](https://www.nvidia.com/), this high-performance deep learning inference framework and model format optimizes AI models for accelerated speed and efficiency on NVIDIA GPUs, ensuring streamlined deployment.

- [CoreML](coreml.md): CoreML, developed by [Apple](https://www.apple.com/), is a framework designed for efficiently integrating machine learning models into applications across iOS, macOS, watchOS, and tvOS, using Apple's hardware for effective and secure model deployment.

- [TF SavedModel](tf-savedmodel.md): Developed by [Google](https://www.google.com), TF SavedModel is a universal serialization format for TensorFlow models, enabling easy sharing and deployment across a wide range of platforms, from servers to edge devices.

- [TF GraphDef](tf-graphdef.md): Developed by [Google](https://www.google.com), GraphDef is TensorFlow's format for representing computation graphs, enabling optimized execution of machine learning models across diverse hardware.

- [TFLite](tflite.md): Developed by [Google](https://www.google.com), TFLite is a lightweight framework for deploying machine learning models on mobile and edge devices, ensuring fast, efficient inference with minimal memory footprint.

- [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient edge computing.

- [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications.

- [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient neural network inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.

### Export Formats

We also support a variety of model export formats for deployment in different environments. Here are the available formats:
| Format                                                             | `format` Argument | Model                     | Metadata | Arguments                                           |
|--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|
| [PyTorch](https://pytorch.org/)                                    | -                 | `yolov8n.pt`              | ✅        | -                                                   |
| [TorchScript](https://pytorch.org/docs/stable/jit.html)            | `torchscript`     | `yolov8n.torchscript`     | ✅        | `imgsz`, `optimize`                                 |
| [ONNX](onnx.md)                                                    | `onnx`            | `yolov8n.onnx`            | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `opset`     |
| [OpenVINO](openvino.md)                                            | `openvino`        | `yolov8n_openvino_model/` | ✅        | `imgsz`, `half`, `int8`                             |
| [TensorRT](https://developer.nvidia.com/tensorrt)                  | `engine`          | `yolov8n.engine`          | ✅        | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools)                     | `coreml`          | `yolov8n.mlpackage`       | ✅        | `imgsz`, `half`, `int8`, `nms`                      |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model)      | `saved_model`     | `yolov8n_saved_model/`    | ✅        | `imgsz`, `keras`, `int8`                            |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb`              | `yolov8n.pb`              | ❌        | `imgsz`                                             |
| [TF Lite](https://www.tensorflow.org/lite)                         | `tflite`          | `yolov8n.tflite`          | ✅        | `imgsz`, `half`, `int8`                             |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/)         | `edgetpu`         | `yolov8n_edgetpu.tflite`  | ✅        | `imgsz`                                             |
| [TF.js](https://www.tensorflow.org/js)                             | `tfjs`            | `yolov8n_web_model/`      | ✅        | `imgsz`, `half`, `int8`                             |
| [PaddlePaddle](https://github.com/PaddlePaddle)                    | `paddle`          | `yolov8n_paddle_model/`   | ✅        | `imgsz`                                             |
| [NCNN](https://github.com/Tencent/ncnn)                            | `ncnn`            | `yolov8n_ncnn_model/`     | ✅        | `imgsz`, `half`                                     |
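As a quick illustration of the `format` argument from the table above, the sketch below exports one checkpoint to several of these targets in a loop. The format strings come straight from the table; the try/except is an assumption, since some formats need extra system dependencies:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export the same checkpoint to several formats from the table above
for fmt in ("torchscript", "onnx", "openvino", "tflite"):
    try:
        path = model.export(format=fmt)  # returns the path of the exported model
        print(f"{fmt}: exported to {path}")
    except Exception as err:  # some formats require additional packages
        print(f"{fmt}: export failed ({err})")
```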
Explore the links to learn more about each integration and how to get the most out of them with Ultralytics.

## Contribute to Our Integrations

We're always excited to see how the community integrates Ultralytics YOLO with other technologies, tools, and platforms! If you have successfully integrated YOLO with a new system or have valuable insights to share, consider contributing to our Integrations Docs.

By writing a guide or tutorial, you can help expand our documentation and provide real-world examples that benefit the community. It's an excellent way to contribute to the growing ecosystem around Ultralytics YOLO.

To contribute, please check out our [Contributing Guide](https://docs.ultralytics.com/help/contributing) for instructions on how to submit a Pull Request (PR) 🛠️. We eagerly await your contributions!

Let's collaborate to make the Ultralytics YOLO ecosystem more expansive and feature-rich 🙏!
117
docs/en/integrations/mlflow.md
Normal file
@ -0,0 +1,117 @@
---
comments: true
description: Uncover the utility of MLflow for effective experiment logging in your Ultralytics YOLO projects.
keywords: ultralytics docs, YOLO, MLflow, experiment logging, metrics tracking, parameter logging, artifact logging
---

# MLflow Integration for Ultralytics YOLO

<img width="1024" src="https://user-images.githubusercontent.com/26833433/274929143-05e37e72-c355-44be-a842-b358592340b7.png" alt="MLflow ecosystem">

## Introduction

Experiment logging is a crucial aspect of machine learning workflows that enables tracking of various metrics, parameters, and artifacts. It helps to enhance model reproducibility, debug issues, and improve model performance. [Ultralytics](https://ultralytics.com) YOLO, known for its real-time object detection capabilities, now offers integration with [MLflow](https://mlflow.org/), an open-source platform for complete machine learning lifecycle management.

This documentation page is a comprehensive guide to setting up and utilizing the MLflow logging capabilities for your Ultralytics YOLO project.

## What is MLflow?

[MLflow](https://mlflow.org/) is an open-source platform developed by [Databricks](https://www.databricks.com/) for managing the end-to-end machine learning lifecycle. It includes tools for tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow is designed to work with any machine learning library and programming language.

## Features

- **Metrics Logging**: Logs metrics at the end of each epoch and at the end of the training.
- **Parameter Logging**: Logs all the parameters used in the training.
- **Artifacts Logging**: Logs model artifacts, including weights and configuration files, at the end of the training.

## Setup and Prerequisites

Ensure MLflow is installed. If not, install it using pip:

```bash
pip install mlflow
```

Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this is controlled by the `mlflow` settings key. See the [settings](https://docs.ultralytics.com/quickstart/#ultralytics-settings) page for more info.

!!! Example "Update Ultralytics MLflow Settings"

    === "Python"

        Within the Python environment, call the `update` method on the `settings` object to change your settings:

        ```python
        from ultralytics import settings

        # Update a setting
        settings.update({'mlflow': True})

        # Reset settings to default values
        settings.reset()
        ```

    === "CLI"

        If you prefer using the command-line interface, the following commands will allow you to modify your settings:

        ```bash
        # Update a setting
        yolo settings mlflow=True

        # Reset settings to default values
        yolo settings reset
        ```
## How to Use

### Commands

1. **Set a Project Name**: You can set the project name via an environment variable:

    ```bash
    export MLFLOW_EXPERIMENT_NAME=<your_experiment_name>
    ```

    Or use the `project=<project>` argument when training a YOLO model, i.e. `yolo train project=my_project`.

2. **Set a Run Name**: Similar to setting a project name, you can set the run name via an environment variable:

    ```bash
    export MLFLOW_RUN=<your_run_name>
    ```

    Or use the `name=<name>` argument when training a YOLO model, i.e. `yolo train project=my_project name=my_name`.

3. **Start Local MLflow Server**: To start tracking, use:

    ```bash
    mlflow server --backend-store-uri runs/mlflow
    ```

    This will start a local server at http://127.0.0.1:5000 by default and save all MLflow logs to the 'runs/mlflow' directory. To specify a different URI, set the `MLFLOW_TRACKING_URI` environment variable.

4. **Kill MLflow Server Instances**: To stop all running MLflow instances, run:

    ```bash
    ps aux | grep 'mlflow' | grep -v 'grep' | awk '{print $2}' | xargs kill -9
    ```

### Logging

The logging is taken care of by the `on_pretrain_routine_end`, `on_fit_epoch_end`, and `on_train_end` callback functions. These functions are automatically called during the respective stages of the training process, and they handle the logging of parameters, metrics, and artifacts.

## Examples

1. **Logging Custom Metrics**: You can add custom metrics to be logged by modifying the `trainer.metrics` dictionary before `on_fit_epoch_end` is called (see the sketch after this list).

2. **View Experiment**: To view your logs, navigate to your MLflow server (usually http://127.0.0.1:5000) and select your experiment and run. <img width="1024" src="https://user-images.githubusercontent.com/26833433/274933329-3127aa8c-4491-48ea-81df-ed09a5837f2a.png" alt="YOLO MLflow Experiment">

3. **View Run**: Runs are individual models inside an experiment. Click on a Run and see the Run details, including uploaded artifacts and model weights. <img width="1024" src="https://user-images.githubusercontent.com/26833433/274933337-ac61371c-2867-4099-a733-147a2583b3de.png" alt="YOLO MLflow Run">
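Here is a minimal sketch of the custom-metric idea from example 1, using the standard `add_callback` mechanism. The metric name and value are placeholders, and whether this callback fires before the MLflow logger depends on registration order in your version, so verify the logged keys in the MLflow UI:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")


def add_custom_metric(trainer):
    """Inject a placeholder metric into trainer.metrics for the MLflow logger to pick up."""
    trainer.metrics["custom/my_metric"] = 0.5


model.add_callback("on_fit_epoch_end", add_custom_metric)
model.train(data="coco8.yaml", epochs=1)
```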
## Disabling MLflow

To turn off MLflow logging:

```bash
yolo settings mlflow=False
```

## Conclusion

MLflow logging integration with Ultralytics YOLO offers a streamlined way to keep track of your machine learning experiments. It empowers you to monitor performance metrics and manage artifacts effectively, thus aiding in robust model development and deployment. For further details please visit the MLflow [official documentation](https://mlflow.org/docs/latest/index.html).
120
docs/en/integrations/ncnn.md
Normal file
@ -0,0 +1,120 @@
---
comments: true
description: Uncover how to improve your Ultralytics YOLOv8 model's performance using the NCNN export format that is suitable for devices with limited computation resources.
keywords: Ultralytics, YOLOv8, NCNN Export, Export YOLOv8, Model Deployment
---

# How to Export to NCNN from YOLOv8 for Smooth Deployment

Deploying computer vision models on devices with limited computational power, such as mobile or embedded systems, can be tricky. Using a model format optimized for fast performance makes sure that even devices with limited processing power can handle advanced computer vision tasks well.

The export to NCNN format feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for lightweight device-based applications. In this guide, we'll walk you through how to convert your models to the NCNN format, making it easier for your models to perform well on various mobile and embedded devices.

## Why should you export to NCNN?

<p align="center">
  <img width="100%" src="https://repository-images.githubusercontent.com/494294418/207a2e12-dc16-41a6-a39e-eae26e662638" alt="NCNN overview">
</p>

The [NCNN](https://github.com/Tencent/ncnn) framework, developed by Tencent, is a high-performance neural network inference computing framework optimized specifically for mobile platforms, including mobile phones, embedded devices, and IoT devices. NCNN is compatible with a wide range of platforms, including Linux, Android, iOS, and macOS.

NCNN is known for its fast processing speed on mobile CPUs and enables rapid deployment of deep learning models to mobile platforms. This makes it easier to build smart apps, putting the power of AI right at your fingertips.

## Key Features of NCNN Models

NCNN models offer a wide range of key features that enable on-device machine learning by helping developers run their models on mobile, embedded, and edge devices:

- **Efficient and High-Performance**: NCNN models are made to be efficient and lightweight, optimized for running on mobile and embedded devices like Raspberry Pi with limited resources. They can also achieve high performance with high accuracy on various computer vision-based tasks.

- **Quantization**: NCNN models often support quantization, which is a technique that reduces the precision of the model's weights and activations. This leads to further improvements in performance and reduces memory footprint.

- **Compatibility**: NCNN models are compatible with popular deep learning frameworks like [TensorFlow](https://www.tensorflow.org/), [Caffe](https://caffe.berkeleyvision.org/), and [ONNX](https://onnx.ai/). This compatibility allows developers to use existing models and workflows easily.

- **Easy to Use**: NCNN models are designed for easy integration into various applications, thanks to their compatibility with popular deep learning frameworks. Additionally, NCNN offers user-friendly tools for converting models between different formats, ensuring smooth interoperability across the development landscape.

## Deployment Options with NCNN

Before we look at the code for exporting YOLOv8 models to the NCNN format, let's understand how NCNN models are normally used.

NCNN models, designed for efficiency and performance, are compatible with a variety of deployment platforms:

- **Mobile Deployment**: Specifically optimized for Android and iOS, allowing for seamless integration into mobile applications for efficient on-device inference.

- **Embedded Systems and IoT Devices**: If you find that running inference on a Raspberry Pi with the [Ultralytics Guide](../guides/raspberry-pi.md) isn't fast enough, switching to an NCNN exported model could help speed things up. NCNN is great for devices like Raspberry Pi and NVIDIA Jetson, especially in situations where you need quick processing right on the device.

- **Desktop and Server Deployment**: Capable of being deployed in desktop and server environments across Linux, Windows, and macOS, supporting development, training, and evaluation with higher computational capacities.

## Export to NCNN: Converting Your YOLOv8 Model

You can expand model compatibility and deployment flexibility by converting YOLOv8 models to NCNN format.
### Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to NCNN format
        model.export(format='ncnn')  # creates 'yolov8n_ncnn_model'

        # Load the exported NCNN model
        ncnn_model = YOLO('./yolov8n_ncnn_model')

        # Run inference
        results = ncnn_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to NCNN format
        yolo export model=yolov8n.pt format=ncnn  # creates 'yolov8n_ncnn_model'

        # Run inference with the exported model
        yolo predict model='./yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).

## Deploying Exported YOLOv8 NCNN Models

After successfully exporting your Ultralytics YOLOv8 models to NCNN format, you can now deploy them. The primary and recommended first step for running an NCNN model is to utilize the `YOLO("./model_ncnn_model")` method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your NCNN models in various other settings, take a look at the following resources:

- **[Android](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-android)**: This page explains how to use NCNN models for performing tasks like object detection through Android applications.

- **[macOS](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos)**: Understand how to use NCNN models for performing tasks through macOS.

- **[Linux](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-linux)**: Explore this page to learn how to deploy NCNN models on limited-resource devices like Raspberry Pi and other similar devices.

- **[Windows x64 using VS2017](https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-windows-x64-using-visual-studio-community-2017)**: Explore this page to learn how to deploy NCNN models on Windows x64 using Visual Studio Community 2017.

## Summary

In this guide, we've gone over exporting Ultralytics YOLOv8 models to the NCNN format. This conversion step is crucial for improving the efficiency and speed of YOLOv8 models, making them more effective and suitable for limited-resource computing environments.

For detailed instructions on usage, please refer to the [official NCNN documentation](https://ncnn.readthedocs.io/en/latest/index.html).

Also, if you're interested in exploring other integration options for Ultralytics YOLOv8, be sure to visit our [integration guide page](index.md) for further insights and information.
165
docs/en/integrations/neural-magic.md
Normal file
@ -0,0 +1,165 @@
---
comments: true
description: Learn how to deploy your YOLOv8 models rapidly using Neural Magic's DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
keywords: YOLOv8, DeepSparse Engine, Ultralytics, CPU Inference, Neural Network Sparsity, Object Detection, Model Optimization
---

# Optimizing YOLOv8 Inferences with Neural Magic's DeepSparse Engine

When deploying object detection models like [Ultralytics YOLOv8](https://ultralytics.com) on various hardware, you can bump into unique issues like optimization. This is where YOLOv8's integration with Neural Magic's DeepSparse Engine steps in. It transforms the way YOLOv8 models are executed and enables GPU-level performance directly on CPUs.

This guide shows you how to deploy YOLOv8 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.

## Neural Magic's DeepSparse

<p align="center">
  <img width="640" src="https://docs.neuralmagic.com/assets/images/nm-flows-55d56c0695a30bf9ecb716ea98977a95.png" alt="Neural Magic's DeepSparse Overview">
</p>

[Neural Magic's DeepSparse](https://neuralmagic.com/deepsparse/) is an inference runtime designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.

## Benefits of Integrating Neural Magic's DeepSparse with YOLOv8

Before diving into how to deploy YOLOv8 using DeepSparse, let's understand the benefits of using DeepSparse. Some key advantages include:

- **Enhanced Inference Speed**: Achieves up to 525 FPS (on YOLOv8n), significantly speeding up YOLOv8's inference capabilities compared to traditional methods.

<p align="center">
  <img width="640" src="https://neuralmagic.com/wp-content/uploads/2023/04/image1.png" alt="Enhanced Inference Speed">
</p>

- **Optimized Model Efficiency**: Uses pruning and quantization to enhance YOLOv8's efficiency, reducing model size and computational requirements while maintaining accuracy.

<p align="center">
  <img width="640" src="https://neuralmagic.com/wp-content/uploads/2023/04/YOLOv8-charts-Page-1.drawio.png" alt="Optimized Model Efficiency">
</p>

- **High Performance on Standard CPUs**: Delivers GPU-like performance on CPUs, providing a more accessible and cost-effective option for various applications.

- **Streamlined Integration and Deployment**: Offers user-friendly tools for easy integration of YOLOv8 into applications, including image and video annotation features.

- **Support for Various Model Types**: Compatible with both standard and sparsity-optimized YOLOv8 models, adding deployment flexibility.

- **Cost-Effective and Scalable Solution**: Reduces operational expenses and offers scalable deployment of advanced object detection models.

## How Does Neural Magic's DeepSparse Technology Work?

Neural Magic's DeepSparse technology is inspired by the human brain's efficiency in neural network computation. It adopts two key principles from the brain as follows:

- **Sparsity**: The process of sparsification involves pruning redundant information from deep learning networks, leading to smaller and faster models without compromising accuracy. This technique reduces the network's size and computational needs significantly.

- **Locality of Reference**: DeepSparse uses a unique execution method, breaking the network into Tensor Columns. These columns are executed depth-wise, fitting entirely within the CPU's cache. This approach mimics the brain's efficiency, minimizing data movement and maximizing the CPU's cache use.

<p align="center">
  <img width="640" src="https://neuralmagic.com/wp-content/uploads/2021/03/Screen-Shot-2021-03-16-at-11.09.45-AM.png" alt="How Neural Magic's DeepSparse Technology Works">
</p>

For more details on how Neural Magic's DeepSparse technology works, check out [their blog post](https://neuralmagic.com/blog/how-neural-magics-deep-sparse-technology-works/).

## Creating A Sparse Version of YOLOv8 Trained on a Custom Dataset

SparseZoo, an open-source model repository by Neural Magic, offers [a collection of pre-sparsified YOLOv8 model checkpoints](https://sparsezoo.neuralmagic.com/?modelSet=computer_vision&searchModels=yolo). With SparseML, seamlessly integrated with Ultralytics, users can effortlessly fine-tune these sparse checkpoints on their specific datasets using a straightforward command-line interface.

Check out [Neural Magic's SparseML YOLOv8 documentation](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov8) for more details.

## Usage: Deploying YOLOv8 using DeepSparse

Deploying YOLOv8 with Neural Magic's DeepSparse involves a few straightforward steps. Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements. Here's how you can get started.
### Step 1: Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required packages
        pip install deepsparse[yolov8]
        ```

### Step 2: Exporting YOLOv8 to ONNX Format

The DeepSparse Engine requires YOLOv8 models in ONNX format. Exporting your model to this format is essential for compatibility with DeepSparse. Use the following command to export YOLOv8 models:

!!! Tip "Model Export"

    === "CLI"

        ```bash
        # Export YOLOv8 model to ONNX format
        yolo task=detect mode=export model=yolov8n.pt format=onnx opset=13
        ```

This command will save the `yolov8n.onnx` model to your disk.

### Step 3: Deploying and Running Inferences

With your YOLOv8 model in ONNX format, you can deploy and run inferences using DeepSparse. This can be done easily with their intuitive Python API:

!!! Tip "Deploying and Running Inferences"

    === "Python"

        ```python
        from deepsparse import Pipeline

        # Specify the path to your YOLOv8 ONNX model
        model_path = "path/to/yolov8n.onnx"

        # Set up the DeepSparse Pipeline
        yolo_pipeline = Pipeline.create(
            task="yolov8",
            model_path=model_path
        )

        # Run the model on your images
        images = ["path/to/image.jpg"]
        pipeline_outputs = yolo_pipeline(images=images)
        ```
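The pipeline returns a schema object rather than raw arrays. The field names below (`boxes`, `scores`, `labels`) match DeepSparse's YOLO output schema at the time of writing, but treat them as assumptions and inspect the object in your installed version:

```python
# Inspect detections for the first image (field names may differ across DeepSparse versions)
print(pipeline_outputs.boxes[0])   # bounding boxes, one list per image
print(pipeline_outputs.scores[0])  # confidence scores
print(pipeline_outputs.labels[0])  # predicted class labels
```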
### Step 4: Benchmarking Performance

It's important to check that your YOLOv8 model is performing optimally on DeepSparse. You can benchmark your model's performance to analyze throughput and latency:

!!! Tip "Benchmarking"

    === "CLI"

        ```bash
        # Benchmark performance
        deepsparse.benchmark model_path="path/to/yolov8n.onnx" --scenario=sync --input_shapes="[1,3,640,640]"
        ```

### Step 5: Additional Features

DeepSparse provides additional features for practical integration of YOLOv8 in applications, such as image annotation and dataset evaluation.

!!! Tip "Additional Features"

    === "CLI"

        ```bash
        # For image annotation
        deepsparse.yolov8.annotate --source "path/to/image.jpg" --model_filepath "path/to/yolov8n.onnx"

        # For evaluating model performance on a dataset
        deepsparse.yolov8.eval --model_path "path/to/yolov8n.onnx"
        ```

Running the annotate command processes your specified image, detects objects, and saves the annotated image with bounding boxes and classifications. The annotated image will be stored in an `annotation-results` folder. This helps provide a visual representation of the model's detection capabilities.

<p align="center">
  <img width="640" src="https://user-images.githubusercontent.com/3195154/211942937-1d32193a-6dda-473d-a7ad-e2162bbb42e9.png" alt="Image Annotation Feature">
</p>

After running the eval command, you will receive detailed output metrics such as precision, recall, and mAP (mean Average Precision). This provides a comprehensive view of your model's performance on the dataset. This functionality is particularly useful for fine-tuning and optimizing your YOLOv8 models for specific use cases, ensuring high accuracy and efficiency.

## Summary

This guide explored integrating Ultralytics' YOLOv8 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLOv8's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.

For more detailed information and advanced usage, visit [Neural Magic's DeepSparse documentation](https://docs.neuralmagic.com/products/deepsparse/). Also, check out Neural Magic's documentation on the integration with YOLOv8 [here](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/yolov8#yolov8-inference-pipelines) and watch a great session on it [here](https://www.youtube.com/watch?v=qtJ7bdt52x8).

Additionally, for a broader understanding of various YOLOv8 integrations, visit the [Ultralytics integration guide page](../integrations/index.md), where you can discover a range of other exciting integration possibilities.
134
docs/en/integrations/onnx.md
Normal file
@ -0,0 +1,134 @@
---
comments: true
description: Explore how to improve your Ultralytics YOLOv8 model's performance and interoperability using the ONNX (Open Neural Network Exchange) export format that is suitable for diverse hardware and software environments.
keywords: Ultralytics, YOLOv8, ONNX Format, Export YOLOv8, CUDA Support, Model Deployment
---

# ONNX Export for YOLOv8 Models

Often, when deploying computer vision models, you'll need a model format that's both flexible and compatible with multiple platforms.

Exporting [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to ONNX format streamlines deployment and ensures optimal performance across various environments. This guide will show you how to easily convert your YOLOv8 models to ONNX and enhance their scalability and effectiveness in real-world applications.

## ONNX and ONNX Runtime

[ONNX](https://onnx.ai/), which stands for Open Neural Network Exchange, is a community project that Facebook and Microsoft initially developed. The ongoing development of ONNX is a collaborative effort supported by various organizations like IBM, Amazon (through AWS), and Google. The project aims to create an open file format designed to represent machine learning models in a way that allows them to be used across different AI frameworks and hardware.

ONNX models can be used to transition between different frameworks seamlessly. For instance, a deep learning model trained in PyTorch can be exported to ONNX format and then easily imported into TensorFlow.

<p align="center">
  <img width="100%" src="https://www.aurigait.com/wp-content/uploads/2023/01/1_unnamed.png" alt="ONNX">
</p>

Alternatively, ONNX models can be used with ONNX Runtime. [ONNX Runtime](https://onnxruntime.ai/) is a versatile cross-platform accelerator for machine learning models that is compatible with frameworks like PyTorch, TensorFlow, TFLite, scikit-learn, etc.

ONNX Runtime optimizes the execution of ONNX models by leveraging hardware-specific capabilities. This optimization allows the models to run efficiently and with high performance on various hardware platforms, including CPUs, GPUs, and specialized accelerators.

<p align="center">
  <img width="100%" src="https://www.aurigait.com/wp-content/uploads/2023/01/unnamed-1.png" alt="ONNX with ONNX Runtime">
</p>

Whether used independently or in tandem with ONNX Runtime, ONNX provides a flexible solution for machine learning model deployment and compatibility.

## Key Features of ONNX Models

The ability of ONNX to handle various formats can be attributed to the following key features:

- **Common Model Representation**: ONNX defines a common set of operators (like convolutions, layers, etc.) and a standard data format. When a model is converted to ONNX format, its architecture and weights are translated into this common representation. This uniformity ensures that the model can be understood by any framework that supports ONNX.

- **Versioning and Backward Compatibility**: ONNX maintains a versioning system for its operators. This ensures that even as the standard evolves, models created in older versions remain usable. Backward compatibility is a crucial feature that prevents models from becoming obsolete quickly.

- **Graph-based Model Representation**: ONNX represents models as computational graphs. This graph-based structure is a universal way of representing machine learning models, where nodes represent operations or computations, and edges represent the tensors flowing between them. This format is easily adaptable to various frameworks which also represent models as graphs (see the sketch after this list).

- **Tools and Ecosystem**: There is a rich ecosystem of tools around ONNX that assist in model conversion, visualization, and optimization. These tools make it easier for developers to work with ONNX models and to convert models between different frameworks seamlessly.
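To make the graph-based representation above concrete, here is a minimal sketch that loads an exported model with the `onnx` Python package and prints the first few nodes of its graph (the filename assumes you have already exported `yolov8n.onnx` as shown later in this guide):

```python
import onnx

model = onnx.load("yolov8n.onnx")

# Each node is an operator; its inputs/outputs are the tensors (edges) of the graph
for node in model.graph.node[:5]:
    print(node.op_type, list(node.input), "->", list(node.output))
```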
## Common Usage of ONNX

Before we jump into how to export YOLOv8 models to the ONNX format, let's take a look at where ONNX models are usually used.

### CPU Deployment

ONNX models are often deployed on CPUs due to their compatibility with ONNX Runtime. This runtime is optimized for CPU execution. It significantly improves inference speed and makes real-time CPU deployments feasible.

### Supported Deployment Options

While ONNX models are commonly used on CPUs, they can also be deployed on the following platforms:

- **GPU Acceleration**: ONNX fully supports GPU acceleration, particularly NVIDIA CUDA. This enables efficient execution on NVIDIA GPUs for tasks that demand high computational power.

- **Edge and Mobile Devices**: ONNX extends to edge and mobile devices, perfect for on-device and real-time inference scenarios. It's lightweight and compatible with edge hardware.

- **Web Browsers**: ONNX can run directly in web browsers, powering interactive and dynamic web-based AI applications.

## Exporting YOLOv8 Models to ONNX

You can expand model compatibility and deployment flexibility by converting YOLOv8 models to ONNX format.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to ONNX format
        model.export(format='onnx')  # creates 'yolov8n.onnx'

        # Load the exported ONNX model
        onnx_model = YOLO('yolov8n.onnx')

        # Run inference
        results = onnx_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to ONNX format
        yolo export model=yolov8n.pt format=onnx  # creates 'yolov8n.onnx'

        # Run inference with the exported model
        yolo predict model=yolov8n.onnx source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).

## Deploying Exported YOLOv8 ONNX Models

Once you've successfully exported your Ultralytics YOLOv8 models to ONNX format, the next step is deploying these models in various environments. For detailed instructions on deploying your ONNX models, take a look at the following resources (a minimal inference sketch follows this list):

- **[ONNX Runtime Python API Documentation](https://onnxruntime.ai/docs/api/python/api_summary.html)**: This guide provides essential information for loading and running ONNX models using ONNX Runtime.

- **[Deploying on Edge Devices](https://onnxruntime.ai/docs/tutorials/iot-edge/)**: Check out this docs page for different examples of deploying ONNX models on edge.

- **[ONNX Tutorials on GitHub](https://github.com/onnx/tutorials)**: A collection of comprehensive tutorials that cover various aspects of using and implementing ONNX models in different scenarios.
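As a complement to the ONNX Runtime resources above, here is a minimal inference sketch using the `onnxruntime` Python API. The input shape assumes the default 640x640 export, and real deployments would replace the dummy tensor with a preprocessed image and decode the outputs:

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; swap in CUDAExecutionProvider for NVIDIA GPUs
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])

# YOLOv8 ONNX models expect a float32 NCHW tensor, e.g. (1, 3, 640, 640)
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```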
## Summary

In this guide, you've learned how to export Ultralytics YOLOv8 models to ONNX format to increase their interoperability and performance across various platforms. You were also introduced to the ONNX Runtime and ONNX deployment options.

For further details on usage, visit the [ONNX official documentation](https://onnx.ai/onnx/intro/).

Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
284
docs/en/integrations/openvino.md
Normal file
@ -0,0 +1,284 @@
---
comments: true
description: Discover the power of deploying your Ultralytics YOLOv8 model using OpenVINO format for up to 10x speedup vs PyTorch.
keywords: ultralytics docs, YOLOv8, export YOLOv8, YOLOv8 model deployment, exporting YOLOv8, OpenVINO, OpenVINO format
---

# Intel OpenVINO Export

<img width="1024" src="https://github.com/RizwanMunawar/RizwanMunawar/assets/62513924/2b181f68-aa91-4514-ba09-497cc3c83b00" alt="OpenVINO Ecosystem">

In this guide, we cover exporting YOLOv8 models to the [OpenVINO](https://docs.openvino.ai/) format, which can provide up to 3x [CPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html) speedup, as well as accelerating YOLO inference on Intel [GPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html) and [NPU](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.html) hardware.

OpenVINO, short for Open Visual Inference & Neural Network Optimization toolkit, is a comprehensive toolkit for optimizing and deploying AI inference models. Even though the name contains Visual, OpenVINO also supports various additional tasks including language, audio, time series, etc.

<p align="center">
  <br>
  <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/kONm9nE5_Fk?si=kzquuBrxjSbntHoU"
    title="YouTube video player" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
  <strong>Watch:</strong> How To Export and Optimize an Ultralytics YOLOv8 Model for Inference with OpenVINO.
</p>

## Usage Examples

Export a YOLOv8n model to OpenVINO format and run inference with the exported model.

!!! Example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a YOLOv8n PyTorch model
        model = YOLO('yolov8n.pt')

        # Export the model
        model.export(format='openvino')  # creates 'yolov8n_openvino_model/'

        # Load the exported OpenVINO model
        ov_model = YOLO('yolov8n_openvino_model/')

        # Run inference
        results = ov_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to OpenVINO format
        yolo export model=yolov8n.pt format=openvino  # creates 'yolov8n_openvino_model/'

        # Run inference with the exported model
        yolo predict model=yolov8n_openvino_model source='https://ultralytics.com/images/bus.jpg'
        ```

## Arguments

| Key      | Value        | Description                                          |
|----------|--------------|------------------------------------------------------|
| `format` | `'openvino'` | format to export to                                  |
| `imgsz`  | `640`        | image size as scalar or (h, w) list, i.e. (640, 480) |
| `half`   | `False`      | FP16 quantization                                    |
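Combining the arguments from the table, a short sketch of a customized export (the values here are illustrative choices, not recommendations):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# FP16 export at the default 640 image size, per the arguments table above
model.export(format='openvino', imgsz=640, half=True)
```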
|
||||
|
||||
## Benefits of OpenVINO
|
||||
|
||||
1. **Performance**: OpenVINO delivers high-performance inference by utilizing the power of Intel CPUs, integrated and discrete GPUs, and FPGAs.
|
||||
2. **Support for Heterogeneous Execution**: OpenVINO provides an API to write once and deploy on any supported Intel hardware (CPU, GPU, FPGA, VPU, etc.).
|
||||
3. **Model Optimizer**: OpenVINO provides a Model Optimizer that imports, converts, and optimizes models from popular deep learning frameworks such as PyTorch, TensorFlow, TensorFlow Lite, Keras, ONNX, PaddlePaddle, and Caffe.
|
||||
4. **Ease of Use**: The toolkit comes with more than [80 tutorial notebooks](https://github.com/openvinotoolkit/openvino_notebooks) (including [YOLOv8 optimization](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/230-yolov8-optimization)) teaching different aspects of the toolkit.
|
||||
|
||||
## OpenVINO Export Structure

When you export a model to OpenVINO format, it results in a directory containing the following:

1. **XML file**: Describes the network topology.
2. **BIN file**: Contains the weights and biases binary data.
3. **Mapping file**: Holds the mapping of original model output tensors to OpenVINO tensor names.
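For the `yolov8n` example above, the exported directory might look like the sketch below (file names are assumed from the model name):

```plaintext
yolov8n_openvino_model/
├── yolov8n.xml       # network topology
├── yolov8n.bin       # weights and biases
└── yolov8n.mapping   # original-to-OpenVINO tensor name mapping
```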
You can use these files to run inference with the OpenVINO Inference Engine.
## Using OpenVINO Export in Deployment

Once you have the OpenVINO files, you can use the OpenVINO Runtime to run the model. The Runtime provides a unified API for running inference across all supported Intel hardware. It also provides advanced capabilities like load balancing across Intel hardware and asynchronous execution. For more information on running the inference, refer to the [Inference with OpenVINO Runtime Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html).

Remember, you'll need the XML and BIN files as well as any application-specific settings like input size, scale factor for normalization, etc., to correctly set up and use the model with the Runtime.

In your deployment application, you would typically do the following steps (a sketch follows this list):

1. Initialize OpenVINO by creating `core = Core()`.
2. Load the model using the `core.read_model()` method.
3. Compile the model using the `core.compile_model()` function.
4. Prepare the input (image, text, audio, etc.).
5. Run inference using `compiled_model(input_data)`.
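A minimal sketch of those steps in Python, assuming the `yolov8n_openvino_model/` directory produced by the export above (the XML file name is assumed from the `yolov8n` example) and a dummy 640x640 input in place of a preprocessed image:

```python
import numpy as np
from openvino.runtime import Core

# 1. Initialize OpenVINO
core = Core()

# 2. Read the exported model (the BIN file is found next to the XML automatically)
model = core.read_model("yolov8n_openvino_model/yolov8n.xml")

# 3. Compile for a target device: "CPU", "GPU", or "AUTO"
compiled_model = core.compile_model(model, "AUTO")

# 4. Prepare an NCHW float32 input matching the export image size
input_data = np.random.rand(1, 3, 640, 640).astype(np.float32)

# 5. Run inference; results are keyed by output tensor
results = compiled_model(input_data)
print(results[compiled_model.output(0)].shape)
```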
For more detailed steps and code snippets, refer to the [OpenVINO documentation](https://docs.openvino.ai/) or [API tutorial](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/002-openvino-api/002-openvino-api.ipynb).
## OpenVINO YOLOv8 Benchmarks

The YOLOv8 benchmarks below were run by the Ultralytics team on 4 different model formats, measuring speed and accuracy: PyTorch, TorchScript, ONNX and OpenVINO. Benchmarks were run on Intel Flex and Arc GPUs, and on Intel Xeon CPUs at FP32 precision (with the `half=False` argument).

!!! Note

    The benchmarking results below are for reference and might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run.

All benchmarks were run with `openvino` Python package version [2023.0.1](https://pypi.org/project/openvino/2023.0.1/).
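To reproduce a comparable environment, you could pin that same version (a sketch):

```bash
# Pin the OpenVINO version used for these benchmarks
pip install openvino==2023.0.1
```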
### Intel Flex GPU

The Intel® Data Center GPU Flex Series is a versatile and robust solution designed for the intelligent visual cloud. This GPU supports a wide array of workloads including media streaming, cloud gaming, AI visual inference, and virtual desktop infrastructure workloads. It stands out for its open architecture and built-in support for AV1 encoding, providing a standards-based software stack for high-performance, cross-architecture applications. The Flex Series GPU is optimized for density and quality, offering high reliability, availability, and scalability.

The benchmarks below were run on an Intel® Data Center GPU Flex 170 at FP32 precision.

<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741543-62659bf8-1765-4d0b-b71c-8a4f9885506a.jpg" alt="Flex GPU benchmarks">
</div>

| Model   | Format      | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
|---------|-------------|--------|-----------|---------------------|------------------------|
| YOLOv8n | PyTorch     | ✅      | 6.2       | 0.3709              | 21.79                  |
| YOLOv8n | TorchScript | ✅      | 12.4      | 0.3704              | 23.24                  |
| YOLOv8n | ONNX        | ✅      | 12.2      | 0.3704              | 37.22                  |
| YOLOv8n | OpenVINO    | ✅      | 12.3      | 0.3703              | 3.29                   |
| YOLOv8s | PyTorch     | ✅      | 21.5      | 0.4471              | 31.89                  |
| YOLOv8s | TorchScript | ✅      | 42.9      | 0.4472              | 32.71                  |
| YOLOv8s | ONNX        | ✅      | 42.8      | 0.4472              | 43.42                  |
| YOLOv8s | OpenVINO    | ✅      | 42.9      | 0.4470              | 3.92                   |
| YOLOv8m | PyTorch     | ✅      | 49.7      | 0.5013              | 50.75                  |
| YOLOv8m | TorchScript | ✅      | 99.2      | 0.4999              | 47.90                  |
| YOLOv8m | ONNX        | ✅      | 99.0      | 0.4999              | 63.16                  |
| YOLOv8m | OpenVINO    | ✅      | 49.8      | 0.4997              | 7.11                   |
| YOLOv8l | PyTorch     | ✅      | 83.7      | 0.5293              | 77.45                  |
| YOLOv8l | TorchScript | ✅      | 167.2     | 0.5268              | 85.71                  |
| YOLOv8l | ONNX        | ✅      | 166.8     | 0.5268              | 88.94                  |
| YOLOv8l | OpenVINO    | ✅      | 167.0     | 0.5264              | 9.37                   |
| YOLOv8x | PyTorch     | ✅      | 130.5     | 0.5404              | 100.09                 |
| YOLOv8x | TorchScript | ✅      | 260.7     | 0.5371              | 114.64                 |
| YOLOv8x | ONNX        | ✅      | 260.4     | 0.5371              | 110.32                 |
| YOLOv8x | OpenVINO    | ✅      | 260.6     | 0.5367              | 15.02                  |

This table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across four different formats (PyTorch, TorchScript, ONNX, OpenVINO), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.
### Intel Arc GPU

Intel® Arc™ represents Intel's foray into the dedicated GPU market. The Arc™ series, designed to compete with leading GPU manufacturers like AMD and Nvidia, caters to both the laptop and desktop markets. The series includes mobile versions for compact devices like laptops, and larger, more powerful versions for desktop computers.

The Arc™ series is divided into three categories: Arc™ 3, Arc™ 5, and Arc™ 7, with each number indicating the performance level. Each category includes several models, and the 'M' in the GPU model name signifies a mobile, integrated variant.

Early reviews have praised the Arc™ series, particularly the integrated A770M GPU, for its impressive graphics performance. The availability of the Arc™ series varies by region, and additional models are expected to be released soon. Intel® Arc™ GPUs offer high-performance solutions for a range of computing needs, from gaming to content creation.

The benchmarks below were run on an Intel® Arc™ A770 GPU at FP32 precision.

<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741545-8530388f-8fd1-44f7-a4ae-f875d59dc282.jpg" alt="Arc GPU benchmarks">
</div>

| Model   | Format      | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
|---------|-------------|--------|-----------|---------------------|------------------------|
| YOLOv8n | PyTorch     | ✅      | 6.2       | 0.3709              | 88.79                  |
| YOLOv8n | TorchScript | ✅      | 12.4      | 0.3704              | 102.66                 |
| YOLOv8n | ONNX        | ✅      | 12.2      | 0.3704              | 57.98                  |
| YOLOv8n | OpenVINO    | ✅      | 12.3      | 0.3703              | 8.52                   |
| YOLOv8s | PyTorch     | ✅      | 21.5      | 0.4471              | 189.83                 |
| YOLOv8s | TorchScript | ✅      | 42.9      | 0.4472              | 227.58                 |
| YOLOv8s | ONNX        | ✅      | 42.7      | 0.4472              | 142.03                 |
| YOLOv8s | OpenVINO    | ✅      | 42.9      | 0.4469              | 9.19                   |
| YOLOv8m | PyTorch     | ✅      | 49.7      | 0.5013              | 411.64                 |
| YOLOv8m | TorchScript | ✅      | 99.2      | 0.4999              | 517.12                 |
| YOLOv8m | ONNX        | ✅      | 98.9      | 0.4999              | 298.68                 |
| YOLOv8m | OpenVINO    | ✅      | 99.1      | 0.4996              | 12.55                  |
| YOLOv8l | PyTorch     | ✅      | 83.7      | 0.5293              | 725.73                 |
| YOLOv8l | TorchScript | ✅      | 167.1     | 0.5268              | 892.83                 |
| YOLOv8l | ONNX        | ✅      | 166.8     | 0.5268              | 576.11                 |
| YOLOv8l | OpenVINO    | ✅      | 167.0     | 0.5262              | 17.62                  |
| YOLOv8x | PyTorch     | ✅      | 130.5     | 0.5404              | 988.92                 |
| YOLOv8x | TorchScript | ✅      | 260.7     | 0.5371              | 1186.42                |
| YOLOv8x | ONNX        | ✅      | 260.4     | 0.5371              | 768.90                 |
| YOLOv8x | OpenVINO    | ✅      | 260.6     | 0.5367              | 19                     |
### Intel Xeon CPU

The Intel® Xeon® CPU is a high-performance, server-grade processor designed for complex and demanding workloads. From high-end cloud computing and virtualization to artificial intelligence and machine learning applications, Xeon® CPUs provide the power, reliability, and flexibility required for today's data centers.

Notably, Xeon® CPUs deliver high compute density and scalability, making them ideal for both small businesses and large enterprises. By choosing Intel® Xeon® CPUs, organizations can confidently handle their most demanding computing tasks and foster innovation while maintaining cost-effectiveness and operational efficiency.

The benchmarks below were run on a 4th Gen Intel® Xeon® Scalable CPU at FP32 precision.

<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/253741546-dcd8e52a-fc38-424f-b87e-c8365b6f28dc.jpg" alt="Xeon CPU benchmarks">
</div>

| Model   | Format      | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
|---------|-------------|--------|-----------|---------------------|------------------------|
| YOLOv8n | PyTorch     | ✅      | 6.2       | 0.3709              | 24.36                  |
| YOLOv8n | TorchScript | ✅      | 12.4      | 0.3704              | 23.93                  |
| YOLOv8n | ONNX        | ✅      | 12.2      | 0.3704              | 39.86                  |
| YOLOv8n | OpenVINO    | ✅      | 12.3      | 0.3704              | 11.34                  |
| YOLOv8s | PyTorch     | ✅      | 21.5      | 0.4471              | 33.77                  |
| YOLOv8s | TorchScript | ✅      | 42.9      | 0.4472              | 34.84                  |
| YOLOv8s | ONNX        | ✅      | 42.8      | 0.4472              | 43.23                  |
| YOLOv8s | OpenVINO    | ✅      | 42.9      | 0.4471              | 13.86                  |
| YOLOv8m | PyTorch     | ✅      | 49.7      | 0.5013              | 53.91                  |
| YOLOv8m | TorchScript | ✅      | 99.2      | 0.4999              | 53.51                  |
| YOLOv8m | ONNX        | ✅      | 99.0      | 0.4999              | 64.16                  |
| YOLOv8m | OpenVINO    | ✅      | 99.1      | 0.4996              | 28.79                  |
| YOLOv8l | PyTorch     | ✅      | 83.7      | 0.5293              | 75.78                  |
| YOLOv8l | TorchScript | ✅      | 167.2     | 0.5268              | 79.13                  |
| YOLOv8l | ONNX        | ✅      | 166.8     | 0.5268              | 88.45                  |
| YOLOv8l | OpenVINO    | ✅      | 167.0     | 0.5263              | 56.23                  |
| YOLOv8x | PyTorch     | ✅      | 130.5     | 0.5404              | 96.60                  |
| YOLOv8x | TorchScript | ✅      | 260.7     | 0.5371              | 114.28                 |
| YOLOv8x | ONNX        | ✅      | 260.4     | 0.5371              | 111.02                 |
| YOLOv8x | OpenVINO    | ✅      | 260.6     | 0.5371              | 83.28                  |
### Intel Core CPU

The Intel® Core™ series is a range of high-performance processors by Intel. The lineup includes Core i3 (entry-level), Core i5 (mid-range), Core i7 (high-end), and Core i9 (extreme performance). Each series caters to different computing needs and budgets, from everyday tasks to demanding professional workloads. With each new generation, improvements are made to performance, energy efficiency, and features.

The benchmarks below were run on a 13th Gen Intel® Core™ i7-13700H CPU at FP32 precision.

<div align="center">
<img width="800" src="https://user-images.githubusercontent.com/26833433/254559985-727bfa43-93fa-4fec-a417-800f869f3f9e.jpg" alt="Core CPU benchmarks">
</div>

| Model   | Format      | Status | Size (MB) | metrics/mAP50-95(B) | Inference time (ms/im) |
|---------|-------------|--------|-----------|---------------------|------------------------|
| YOLOv8n | PyTorch     | ✅      | 6.2       | 0.4478              | 104.61                 |
| YOLOv8n | TorchScript | ✅      | 12.4      | 0.4525              | 112.39                 |
| YOLOv8n | ONNX        | ✅      | 12.2      | 0.4525              | 28.02                  |
| YOLOv8n | OpenVINO    | ✅      | 12.3      | 0.4504              | 23.53                  |
| YOLOv8s | PyTorch     | ✅      | 21.5      | 0.5885              | 194.83                 |
| YOLOv8s | TorchScript | ✅      | 43.0      | 0.5962              | 202.01                 |
| YOLOv8s | ONNX        | ✅      | 42.8      | 0.5962              | 65.74                  |
| YOLOv8s | OpenVINO    | ✅      | 42.9      | 0.5966              | 38.66                  |
| YOLOv8m | PyTorch     | ✅      | 49.7      | 0.6101              | 355.23                 |
| YOLOv8m | TorchScript | ✅      | 99.2      | 0.6120              | 424.78                 |
| YOLOv8m | ONNX        | ✅      | 99.0      | 0.6120              | 173.39                 |
| YOLOv8m | OpenVINO    | ✅      | 99.1      | 0.6091              | 69.80                  |
| YOLOv8l | PyTorch     | ✅      | 83.7      | 0.6591              | 593.00                 |
| YOLOv8l | TorchScript | ✅      | 167.2     | 0.6580              | 697.54                 |
| YOLOv8l | ONNX        | ✅      | 166.8     | 0.6580              | 342.15                 |
| YOLOv8l | OpenVINO    | ✅      | 167.0     | 0.0708              | 117.69                 |
| YOLOv8x | PyTorch     | ✅      | 130.5     | 0.6651              | 804.65                 |
| YOLOv8x | TorchScript | ✅      | 260.8     | 0.6650              | 921.46                 |
| YOLOv8x | ONNX        | ✅      | 260.4     | 0.6650              | 526.66                 |
| YOLOv8x | OpenVINO    | ✅      | 260.6     | 0.6619              | 158.73                 |
## Reproduce Our Results

To reproduce the Ultralytics benchmarks above on all export [formats](../modes/export.md), run this code:

!!! Example

    === "Python"

        ```python
        from ultralytics.utils.benchmarks import benchmark

        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
        results = benchmark(model='yolov8n.pt', data='coco128.yaml')
        ```

    === "CLI"

        ```bash
        # Benchmark YOLOv8n speed and accuracy on the COCO128 dataset for all export formats
        yolo benchmark model=yolov8n.pt data=coco128.yaml
        ```

Note that benchmarking results might vary based on the exact hardware and software configuration of a system, as well as the current workload of the system at the time the benchmarks are run. For the most reliable results use a dataset with a large number of images, i.e. `data='coco128.yaml'` (128 val images), or `data='coco.yaml'` (5000 val images).
## Conclusion

The benchmarking results clearly demonstrate the benefits of exporting the YOLOv8 model to the OpenVINO format. Across different models and hardware platforms, the OpenVINO format consistently outperforms other formats in terms of inference speed while maintaining comparable accuracy.

For the Intel® Data Center GPU Flex Series, the OpenVINO format delivered inference roughly 7-8x faster than the original PyTorch format, and over 10x faster than ONNX. On the Xeon CPU, the OpenVINO format was up to roughly twice as fast as the PyTorch format. The accuracy of the models remained nearly identical across the different formats.

The benchmarks underline the effectiveness of OpenVINO as a tool for deploying deep learning models. By converting models to the OpenVINO format, developers can achieve significant performance improvements, making it easier to deploy these models in real-world applications.

For more detailed information and instructions on using OpenVINO, refer to the [official OpenVINO documentation](https://docs.openvino.ai/).
122
docs/en/integrations/paddlepaddle.md
Normal file
@@ -0,0 +1,122 @@
---
comments: true
description: This guide explains how to export Ultralytics YOLOv8 models to the PaddlePaddle format for wide device support and harnessing the power of Baidu's ML framework.
keywords: Ultralytics, YOLOv8, PaddlePaddle Export, Model Deployment, Flexible Deployment, Industrial-Grade Deep Learning, Baidu, Cross-Platform Compatibility
---

# How to Export YOLOv8 Models to PaddlePaddle Format

Bridging the gap between developing and deploying computer vision models in real-world scenarios with varying conditions can be difficult. PaddlePaddle makes this process easier with its focus on flexibility, performance, and its capability for parallel processing in distributed environments. This means you can use your YOLOv8 computer vision models on a wide variety of devices and platforms, from smartphones to cloud-based servers.

The ability to export to PaddlePaddle model format allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for use within the PaddlePaddle framework. PaddlePaddle is known for facilitating industrial deployments and is a good choice for deploying computer vision applications in real-world settings across various domains.

## Why should you export to PaddlePaddle?

<p align="center">
  <img width="75%" src="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/imgs/logo.png?raw=true" alt="PaddlePaddle Logo">
</p>

Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PA**rallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.

It offers tools and resources similar to popular frameworks like TensorFlow and PyTorch, making it accessible for developers of all experience levels. From farming and factories to service businesses, PaddlePaddle's developer community of over 4.77 million is helping create and deploy AI applications.

By exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can tap into PaddlePaddle's strengths in performance optimization. PaddlePaddle prioritizes efficient model execution and reduced memory usage. As a result, your YOLOv8 models can potentially achieve even better performance, delivering top-notch results in practical scenarios.
## Key Features of PaddlePaddle Models

PaddlePaddle models offer a range of key features that contribute to their flexibility, performance, and scalability across diverse deployment scenarios:

- **Dynamic-to-Static Graph**: PaddlePaddle supports [dynamic-to-static compilation](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/jit/index_en.html), where models can be translated into a static computational graph. This enables optimizations that reduce runtime overhead and boost inference performance.

- **Operator Fusion**: PaddlePaddle, like TensorRT, uses [operator fusion](https://developer.nvidia.com/gtc/2020/video/s21436-vid) to streamline computation and reduce overhead. The framework minimizes memory transfers and computational steps by merging compatible operations, resulting in faster inference.

- **Quantization**: PaddlePaddle supports [quantization techniques](https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/quantization/PTQ_en.html), including post-training quantization and quantization-aware training. These techniques allow for the use of lower-precision data representations, effectively boosting performance and reducing model size.
## Deployment Options in PaddlePaddle

Before diving into the code for exporting YOLOv8 models to PaddlePaddle, let's take a look at the different deployment scenarios in which PaddlePaddle models excel.

PaddlePaddle provides a range of options, each offering a distinct balance of ease of use, flexibility, and performance:

- **Paddle Serving**: This framework simplifies the deployment of PaddlePaddle models as high-performance RESTful APIs. Paddle Serving is ideal for production environments, providing features like model versioning, online A/B testing, and scalability for handling large volumes of requests.

- **Paddle Inference API**: The Paddle Inference API gives you low-level control over model execution. This option is well-suited for scenarios where you need to integrate the model tightly within a custom application or optimize performance for specific hardware (see the sketch after this list).

- **Paddle Lite**: Paddle Lite is designed for deployment on mobile and embedded devices where resources are limited. It optimizes models for smaller sizes and faster inference on ARM CPUs, GPUs, and other specialized hardware.

- **Paddle.js**: Paddle.js enables you to deploy PaddlePaddle models directly within web browsers. Paddle.js can either load a pre-trained model or transform a model from [paddle-hub](https://github.com/PaddlePaddle/PaddleHub) with the model transforming tools provided by Paddle.js. It can run in browsers that support WebGL/WebGPU/WebAssembly.
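As a minimal, hypothetical sketch of the Paddle Inference API (the `model.pdmodel`/`model.pdiparams` file names and the input shape are assumptions based on a typical exported detector, not confirmed by this guide):

```python
import numpy as np
import paddle.inference as paddle_infer

# Point the config at the exported model files (file names assumed)
config = paddle_infer.Config("yolov8n_paddle_model/model.pdmodel",
                             "yolov8n_paddle_model/model.pdiparams")
predictor = paddle_infer.create_predictor(config)

# Copy a preprocessed NCHW float32 image into the first input
input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
input_handle.copy_from_cpu(np.random.rand(1, 3, 640, 640).astype(np.float32))

# Run inference and fetch the first output
predictor.run()
output_name = predictor.get_output_names()[0]
output = predictor.get_output_handle(output_name).copy_to_cpu()
print(output.shape)
```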
## Export to PaddlePaddle: Converting Your YOLOv8 Model

Converting YOLOv8 models to the PaddlePaddle format can improve execution flexibility and optimize performance for various deployment scenarios.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
### Usage

Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can verify that the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to PaddlePaddle format
        model.export(format='paddle')  # creates '/yolov8n_paddle_model'

        # Load the exported PaddlePaddle model
        paddle_model = YOLO('./yolov8n_paddle_model')

        # Run inference
        results = paddle_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to PaddlePaddle format
        yolo export model=yolov8n.pt format=paddle  # creates '/yolov8n_paddle_model'

        # Run inference with the exported model
        yolo predict model='./yolov8n_paddle_model' source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).
## Deploying Exported YOLOv8 PaddlePaddle Models

After successfully exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can now deploy them. The primary and recommended first step for running a PaddlePaddle model is to use the `YOLO("./yolov8n_paddle_model")` method, as outlined in the previous usage code snippet.

However, for in-depth instructions on deploying your PaddlePaddle models in various other settings, take a look at the following resources:

- **[Paddle Serving](https://github.com/PaddlePaddle/Serving/blob/v0.9.0/README_CN.md)**: Learn how to deploy your PaddlePaddle models as performant services using Paddle Serving.

- **[Paddle Lite](https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/README_en.md)**: Explore how to optimize and deploy models on mobile and embedded devices using Paddle Lite.

- **[Paddle.js](https://github.com/PaddlePaddle/Paddle.js)**: Discover how to run PaddlePaddle models in web browsers for client-side AI using Paddle.js.
## Summary

In this guide, we explored the process of exporting Ultralytics YOLOv8 models to the PaddlePaddle format. By following these steps, you can leverage PaddlePaddle's strengths in diverse deployment scenarios, optimizing your models for different hardware and software environments.

For further details on usage, visit the [PaddlePaddle official documentation](https://www.paddlepaddle.org.cn/documentation/docs/en/guides/index_en.html).

Want to explore more ways to integrate your Ultralytics YOLOv8 models? Our [integration guide page](index.md) explores various options, equipping you with valuable resources and insights.
179
docs/en/integrations/ray-tune.md
Normal file
@@ -0,0 +1,179 @@
---
comments: true
description: Discover how to streamline hyperparameter tuning for YOLOv8 models with Ray Tune. Learn to accelerate tuning, integrate with Weights & Biases, and analyze results.
keywords: Ultralytics, YOLOv8, Ray Tune, hyperparameter tuning, machine learning optimization, Weights & Biases integration, result analysis
---

# Efficient Hyperparameter Tuning with Ray Tune and YOLOv8

Hyperparameter tuning is vital in achieving peak model performance by discovering the optimal set of hyperparameters. This involves running trials with different hyperparameters and evaluating each trial's performance.

## Accelerate Tuning with Ultralytics YOLOv8 and Ray Tune

[Ultralytics YOLOv8](https://ultralytics.com) incorporates Ray Tune for hyperparameter tuning, streamlining the optimization of YOLOv8 model hyperparameters. With Ray Tune, you can utilize advanced search strategies, parallelism, and early stopping to expedite the tuning process.

### Ray Tune

<p align="center">
  <img width="640" src="https://docs.ray.io/en/latest/_images/tune_overview.png" alt="Ray Tune Overview">
</p>

[Ray Tune](https://docs.ray.io/en/latest/tune/index.html) is a hyperparameter tuning library designed for efficiency and flexibility. It supports various search strategies, parallelism, and early stopping strategies, and seamlessly integrates with popular machine learning frameworks, including Ultralytics YOLOv8.

### Integration with Weights & Biases

YOLOv8 also allows optional integration with [Weights & Biases](https://wandb.ai/site) for monitoring the tuning process.
## Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install and update Ultralytics and Ray Tune packages
        pip install -U ultralytics "ray[tune]<=2.9.3"

        # Optionally install W&B for logging
        pip install wandb
        ```
## Usage

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a YOLOv8n model
        model = YOLO('yolov8n.pt')

        # Start tuning hyperparameters for YOLOv8n training on the COCO8 dataset
        result_grid = model.tune(data='coco8.yaml', use_ray=True)
        ```
## `tune()` Method Parameters

The `tune()` method in YOLOv8 provides an easy-to-use interface for hyperparameter tuning with Ray Tune. It accepts several arguments that allow you to customize the tuning process. Below is a detailed explanation of each parameter:

| Parameter       | Type             | Description | Default Value |
|-----------------|------------------|-------------|---------------|
| `data`          | `str`            | The dataset configuration file (in YAML format) to run the tuner on. This file should specify the training and validation data paths, as well as other dataset-specific settings. | |
| `space`         | `dict, optional` | A dictionary defining the hyperparameter search space for Ray Tune. Each key corresponds to a hyperparameter name, and the value specifies the range of values to explore during tuning. If not provided, YOLOv8 uses a default search space with various hyperparameters. | |
| `grace_period`  | `int, optional`  | The grace period in epochs for the [ASHA scheduler](https://docs.ray.io/en/latest/tune/api/schedulers.html) in Ray Tune. The scheduler will not terminate any trial before this number of epochs, allowing the model to have some minimum training before making a decision on early stopping. | 10 |
| `gpu_per_trial` | `int, optional`  | The number of GPUs to allocate per trial during tuning. This helps manage GPU usage, particularly in multi-GPU environments. If not provided, the tuner will use all available GPUs. | None |
| `iterations`    | `int, optional`  | The maximum number of trials to run during tuning. This parameter helps control the total number of hyperparameter combinations tested, ensuring the tuning process does not run indefinitely. | 10 |
| `**train_args`  | `dict, optional` | Additional arguments to pass to the `train()` method during tuning. These arguments can include settings like the number of training epochs, batch size, and other training-specific configurations. | {} |

By customizing these parameters, you can fine-tune the hyperparameter optimization process to suit your specific needs and available computational resources.
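For example, a tuning run that caps the number of trials and per-trial GPU usage might look like this (a sketch; the values shown are illustrative, not recommendations):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# 20 trials, 1 GPU each, no early stopping before epoch 5,
# with `epochs` forwarded to train() as an extra training argument
result_grid = model.tune(data='coco8.yaml',
                         use_ray=True,
                         iterations=20,
                         gpu_per_trial=1,
                         grace_period=5,
                         epochs=30)
```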
## Default Search Space Description

The following table lists the default search space parameters for hyperparameter tuning in YOLOv8 with Ray Tune. Each parameter has a specific value range defined by `tune.uniform()`.

| Parameter         | Value Range                | Description                              |
|-------------------|----------------------------|------------------------------------------|
| `lr0`             | `tune.uniform(1e-5, 1e-1)` | Initial learning rate                    |
| `lrf`             | `tune.uniform(0.01, 1.0)`  | Final learning rate factor               |
| `momentum`        | `tune.uniform(0.6, 0.98)`  | Momentum                                 |
| `weight_decay`    | `tune.uniform(0.0, 0.001)` | Weight decay                             |
| `warmup_epochs`   | `tune.uniform(0.0, 5.0)`   | Warmup epochs                            |
| `warmup_momentum` | `tune.uniform(0.0, 0.95)`  | Warmup momentum                          |
| `box`             | `tune.uniform(0.02, 0.2)`  | Box loss weight                          |
| `cls`             | `tune.uniform(0.2, 4.0)`   | Class loss weight                        |
| `hsv_h`           | `tune.uniform(0.0, 0.1)`   | Hue augmentation range                   |
| `hsv_s`           | `tune.uniform(0.0, 0.9)`   | Saturation augmentation range            |
| `hsv_v`           | `tune.uniform(0.0, 0.9)`   | Value (brightness) augmentation range    |
| `degrees`         | `tune.uniform(0.0, 45.0)`  | Rotation augmentation range (degrees)    |
| `translate`       | `tune.uniform(0.0, 0.9)`   | Translation augmentation range           |
| `scale`           | `tune.uniform(0.0, 0.9)`   | Scaling augmentation range               |
| `shear`           | `tune.uniform(0.0, 10.0)`  | Shear augmentation range (degrees)       |
| `perspective`     | `tune.uniform(0.0, 0.001)` | Perspective augmentation range           |
| `flipud`          | `tune.uniform(0.0, 1.0)`   | Vertical flip augmentation probability   |
| `fliplr`          | `tune.uniform(0.0, 1.0)`   | Horizontal flip augmentation probability |
| `mosaic`          | `tune.uniform(0.0, 1.0)`   | Mosaic augmentation probability          |
| `mixup`           | `tune.uniform(0.0, 1.0)`   | Mixup augmentation probability           |
| `copy_paste`      | `tune.uniform(0.0, 1.0)`   | Copy-paste augmentation probability      |
## Custom Search Space Example

In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest.

!!! Example "Usage"

    ```python
    from ray import tune

    from ultralytics import YOLO

    # Define a YOLO model
    model = YOLO("yolov8n.pt")

    # Run Ray Tune on the model with a custom search space
    result_grid = model.tune(data="coco128.yaml",
                             space={"lr0": tune.uniform(1e-5, 1e-1)},
                             epochs=50,
                             use_ray=True)
    ```

In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco128.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, directly to the tune method as `epochs=50`.
## Processing Ray Tune Results

After running a hyperparameter tuning experiment with Ray Tune, you might want to perform various analyses on the obtained results. This guide will take you through common workflows for processing and analyzing these results.

### Loading Tune Experiment Results from a Directory

After running the tuning experiment with `tuner.fit()`, you can load the results from a directory. This is useful, especially if you're performing the analysis after the initial training script has exited.

```python
from ray import tune

# Path where the tuning experiment saved its results
experiment_path = f"{storage_path}/{exp_name}"
print(f"Loading results from {experiment_path}...")

# `trainable` must be the same trainable that was used to run the experiment
restored_tuner = tune.Tuner.restore(experiment_path, trainable=trainable)
result_grid = restored_tuner.get_results()
```
### Basic Experiment-Level Analysis

Get an overview of how trials performed. You can quickly check if there were any errors during the trials.

```python
if result_grid.errors:
    print("One or more trials failed!")
else:
    print("No errors!")
```
### Basic Trial-Level Analysis

Access individual trial hyperparameter configurations and the last reported metrics.

```python
for i, result in enumerate(result_grid):
    print(f"Trial #{i}: Configuration: {result.config}, Last Reported Metrics: {result.metrics}")
```
### Plotting the Entire History of Reported Metrics for a Trial

You can plot the history of reported metrics for each trial to see how the metrics evolved over time.

```python
import matplotlib.pyplot as plt

# Plot the metric history of every trial on one figure
for i, result in enumerate(result_grid):
    plt.plot(result.metrics_dataframe["training_iteration"],
             result.metrics_dataframe["mean_accuracy"],
             label=f"Trial {i}")

plt.xlabel('Training Iterations')
plt.ylabel('Mean Accuracy')
plt.legend()
plt.show()
```
## Summary

In this documentation, we covered common workflows to analyze the results of experiments run with Ray Tune using Ultralytics. The key steps include loading the experiment results from a directory, performing basic experiment-level and trial-level analysis, and plotting metrics.

Explore further by looking into Ray Tune's [Analyze Results](https://docs.ray.io/en/latest/tune/examples/tune_analyze_results.html) docs page to get the most out of your hyperparameter tuning experiments.
243
docs/en/integrations/roboflow.md
Normal file
@@ -0,0 +1,243 @@
---
comments: true
description: Learn how to use Roboflow with Ultralytics for labeling and managing images for use in training, and for evaluating model performance.
keywords: Ultralytics, YOLOv8, Roboflow, vector analysis, confusion matrix, data management, image labeling
---

# Roboflow

[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you're in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.

!!! Question "Licensing"

    Ultralytics offers two licensing options:

    - The [AGPL-3.0 License](https://github.com/ultralytics/ultralytics/blob/main/LICENSE), an [OSI-approved](https://opensource.org/licenses/) open-source license ideal for students and enthusiasts.
    - The [Enterprise License](https://ultralytics.com/license) for businesses seeking to incorporate our AI models into their products and services.

    For more details see [Ultralytics Licensing](https://ultralytics.com/license).

In this guide, we are going to showcase how to find, label, and organize data for use in training a custom Ultralytics YOLOv8 model. Use the table of contents below to jump directly to a specific section:

- Gather data for training a custom YOLOv8 model
- Upload, convert and label data for YOLOv8 format
- Pre-process and augment data for model robustness
- Dataset management for [YOLOv8](https://docs.ultralytics.com/models/yolov8/)
- Export data in 40+ formats for model training
- Upload custom YOLOv8 model weights for testing and deployment

## Gather Data for Training a Custom YOLOv8 Model
Roboflow provides two services that can help you collect data for YOLOv8 models: [Universe](https://universe.roboflow.com/?ref=ultralytics) and [Collect](https://roboflow.com/collect?ref=ultralytics).

Universe is an online repository with over 250,000 vision datasets totaling over 100 million images.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_universe.png" alt="Roboflow Universe" width="800">
</p>

With a [free Roboflow account](https://app.roboflow.com/?ref=ultralytics), you can export any dataset available on Universe. To export a dataset, click the "Download this Dataset" button on any dataset.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_dataset.png" alt="Roboflow Universe dataset export" width="800">
</p>

For YOLOv8, select "YOLOv8" as the export format:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_data_format.png" alt="Roboflow Universe dataset export" width="800">
</p>

Universe also has a page that aggregates all [public fine-tuned YOLOv8 models uploaded to Roboflow](https://universe.roboflow.com/search?q=model:yolov8). You can use this page to explore pre-trained models you can use for testing, [for automated data labeling](https://docs.roboflow.com/annotate/use-roboflow-annotate/model-assisted-labeling), or to prototype with [Roboflow inference](https://roboflow.com/inference?ref=ultralytics).

If you want to gather images yourself, try [Collect](https://github.com/roboflow/roboflow-collect), an open source project that allows you to automatically gather images using a webcam on the edge. You can use text or image prompts with Collect to instruct what data should be collected, allowing you to capture only the useful data you need to build your vision model.
## Upload, Convert and Label Data for YOLOv8 Format

[Roboflow Annotate](https://docs.roboflow.com/annotate/use-roboflow-annotate) is an online annotation tool for use in labeling images for object detection, classification, and segmentation.

To label data for a YOLOv8 object detection, instance segmentation, or classification model, first create a project in Roboflow.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_create_project.png" alt="Create a Roboflow project" width="400">
</p>

Next, upload your images, and any pre-existing annotations you have from other tools ([using one of the 40+ supported import formats](https://roboflow.com/formats?ref=ultralytics)), into Roboflow.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_upload_data.png" alt="Upload images to Roboflow" width="800">
</p>

After uploading, you will be taken to the Annotate page. Select the batch of images you uploaded, then click "Start Annotating" to label images.

To label with bounding boxes, press the `B` key on your keyboard or click the box icon in the sidebar. Click on a point where you want to start your bounding box, then drag to create the box:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_annotate.png" alt="Annotating an image in Roboflow" width="800">
</p>

Once you have created an annotation, a pop-up will appear asking you to select a class for it.

To label with polygons, press the `P` key on your keyboard, or click the polygon icon in the sidebar. With the polygon annotation tool enabled, click on individual points in the image to draw a polygon.

Roboflow offers a SAM-based label assistant with which you can label images faster than ever. SAM (Segment Anything Model) is a state-of-the-art computer vision model that can precisely label images. With SAM, you can significantly speed up the image labeling process. Annotating images with polygons becomes as simple as a few clicks, rather than the tedious process of precisely clicking points around an object.

To use the label assistant, click the cursor icon in the sidebar, and SAM will be loaded for use in your project.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_annotate_interactive.png" alt="Annotating an image in Roboflow with SAM-powered label assist" width="800">
</p>

Hover over any object in the image and SAM will recommend an annotation. You can hover to find the right place to annotate, then click to create your annotation. To amend your annotation to be more or less specific, you can click inside or outside the annotation SAM has created on the document.

You can also add tags to images from the Tags panel in the sidebar. You can apply tags to data from a particular area, taken from a specific camera, and more. You can then use these tags to search through data for images matching a tag and generate versions of a dataset with images that contain a particular tag or set of tags.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_tags.png" alt="Adding tags to an image in Roboflow" width="300">
</p>

Models hosted on Roboflow can be used with Label Assist, an automated annotation tool that uses your YOLOv8 model to recommend annotations. To use Label Assist, first upload a YOLOv8 model to Roboflow (see instructions later in the guide). Then, click the magic wand icon in the left sidebar and select your model for use in Label Assist.

Choose a model, then click "Continue" to enable Label Assist:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_label_assist.png" alt="Enabling Label Assist" width="800">
</p>

When you open new images for annotation, Label Assist will trigger and recommend annotations.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_label_assist.png" alt="Label Assist recommending an annotation" width="800">
</p>
## Dataset Management for YOLOv8

Roboflow provides a suite of tools for understanding computer vision datasets.

First, you can use dataset search to find images that meet a semantic text description (i.e. find all images that contain people), or that meet a specified label (i.e. the image is associated with a specific tag). To use dataset search, click "Dataset" in the sidebar. Then, input a search query using the search bar and associated filters at the top of the page.

For example, the following text query finds images that contain people in a dataset:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_dataset_management.png" alt="Searching for an image" width="800">
</p>

You can narrow your search to images with a particular tag using the "Tags" selector:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_filter_by_tag.png" alt="Filter images by tag" width="350">
</p>

Before you start training a model with your dataset, we recommend using Roboflow [Health Check](https://docs.roboflow.com/datasets/dataset-health-check), a web tool that provides insight into your dataset and how you can improve it prior to training a vision model.

To use Health Check, click the "Health Check" sidebar link. A list of statistics will appear that show the average size of images in your dataset, class balance, a heatmap of where annotations are in your images, and more.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_dataset_health_check.png" alt="Roboflow Health Check analysis" width="800">
</p>

Health Check may recommend changes to help enhance dataset performance. For example, the class balance feature may show that there is an imbalance in labels that, if solved, may boost the performance of your model.
## Export Data in 40+ Formats for Model Training

To export your data, you will need a dataset version. A version is a state of your dataset frozen in time. To create a version, first click "Versions" in the sidebar. Then, click the "Create New Version" button. On this page, you will be able to choose augmentations and preprocessing steps to apply to your dataset:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_generate_dataset.png" alt="Creating a dataset version on Roboflow" width="800">
</p>

For each augmentation you select, a pop-up will appear allowing you to tune the augmentation to your needs. Here is an example of tuning a brightness augmentation within specified parameters:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_augmentations.png" alt="Applying augmentations to a dataset" width="800">
</p>

When your dataset version has been generated, you can export your data into a range of formats. Click the "Export Dataset" button on your dataset version page to export your data:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_export_data.png" alt="Exporting a dataset" width="800">
</p>
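Alternatively, you can pull a generated dataset version straight into your training environment with the Roboflow Python package (a sketch; the workspace, project, and version values are placeholders for your own):

```python
import roboflow  # install with 'pip install roboflow'

roboflow.login()

rf = roboflow.Roboflow()

# Placeholders: substitute your own workspace, project, and version IDs
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")

print(dataset.location)  # local folder containing the YOLOv8-format data
```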
You are now ready to train YOLOv8 on a custom dataset. Follow this [written guide](https://blog.roboflow.com/how-to-train-yolov8-on-a-custom-dataset/) and [YouTube video](https://www.youtube.com/watch?v=wuZtUMEiKWY) for step-by-step instructions or refer to the [Ultralytics documentation](https://docs.ultralytics.com/modes/train/).
## Upload Custom YOLOv8 Model Weights for Testing and Deployment

Roboflow offers an infinitely scalable API for deployed models and SDKs for use with NVIDIA Jetsons, Luxonis OAKs, Raspberry Pis, GPU-based devices, and more.

You can deploy YOLOv8 models by uploading YOLOv8 weights to Roboflow. You can do this in a few lines of Python code. Create a new Python file and add the following code:

```python
import roboflow  # install with 'pip install roboflow'

roboflow.login()

rf = roboflow.Roboflow()

# WORKSPACE_ID, VERSION, and HOME are placeholders for your own values
project = rf.workspace(WORKSPACE_ID).project("football-players-detection-3zvbc")
dataset = project.version(VERSION).download("yolov8")

# Upload your trained weights to create a hosted API for your project
project.version(dataset.version).deploy(model_type="yolov8", model_path=f"{HOME}/runs/detect/train/")
```

In this code, replace the project ID and version ID with the values for your account and project. [Learn how to retrieve your Roboflow API key](https://docs.roboflow.com/api-reference/authentication#retrieve-an-api-key).

When you run the code above, you will be asked to authenticate. Then, your model will be uploaded and an API will be created for your project. This process can take up to 30 minutes to complete.

To test your model and find deployment instructions for supported SDKs, go to the "Deploy" tab in the Roboflow sidebar. At the top of this page, a widget will appear with which you can test your model. You can use your webcam for live testing or upload images or videos.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_test_project.png" alt="Running inference on an example image" width="800">
</p>

You can also use your uploaded model as a [labeling assistant](https://docs.roboflow.com/annotate/use-roboflow-annotate/model-assisted-labeling). This feature uses your trained model to recommend annotations on images uploaded to Roboflow.
## How to Evaluate YOLOv8 Models

Roboflow provides a range of features for use in evaluating models.

Once you have uploaded a model to Roboflow, you can access our model evaluation tool, which provides a confusion matrix showing the performance of your model as well as an interactive vector analysis plot. These features can help you find opportunities to improve your model.

To access a confusion matrix, go to your model page on the Roboflow dashboard, then click "View Detailed Evaluation":

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_model_eval.png" alt="Start a Roboflow model evaluation" width="800">
</p>

A pop-up will appear showing a confusion matrix:

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_confusion_matrix.png" alt="A confusion matrix" width="800">
</p>

Hover over a box on the confusion matrix to see the value associated with the box. Click on a box to see images in the respective category. Click on an image to view the model predictions and ground truth data associated with that image.

For more insights, click Vector Analysis. This will show a scatter plot of the images in your dataset, calculated using CLIP. The closer images are in the plot, the more similar they are, semantically. Each image is represented as a dot with a color between white and red. The redder the dot, the worse the model performed.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_vector_analysis.png" alt="A vector analysis plot" width="800">
</p>

You can use Vector Analysis to:

- Find clusters of images;
- Identify clusters where the model performs poorly; and
- Visualize commonalities between images on which the model performs poorly.
## Learning Resources

Want to learn more about using Roboflow for creating YOLOv8 models? The following resources may be helpful in your work.

- [Train YOLOv8 on a Custom Dataset](https://github.com/roboflow/notebooks/blob/main/notebooks/train-yolov8-object-detection-on-custom-dataset.ipynb): Follow our interactive notebook that shows you how to train a YOLOv8 model on a custom dataset.
- [Autodistill](https://autodistill.github.io/autodistill/): Use large foundation vision models to label data for specific models. You can label images for use in training YOLOv8 classification, detection, and segmentation models with Autodistill.
- [Supervision](https://roboflow.github.io/supervision/): A Python package with helpful utilities for use in working with computer vision models. You can use supervision to filter detections, compute confusion matrices, and more, all in a few lines of Python code.
- [Roboflow Blog](https://blog.roboflow.com/): The Roboflow Blog features over 500 articles on computer vision, covering topics from how to train a YOLOv8 model to annotation best practices.
- [Roboflow YouTube channel](https://www.youtube.com/@Roboflow): Browse dozens of in-depth computer vision guides on our YouTube channel, covering topics from training YOLOv8 models to automated image labeling.
## Project Showcase

Below are a few of the many pieces of feedback we have received for using YOLOv8 and Roboflow together to create computer vision models.

<p align="center">
  <img src="https://media.roboflow.com/ultralytics/rf_showcase_1.png" alt="Showcase image" width="500">
  <img src="https://media.roboflow.com/ultralytics/rf_showcase_2.png" alt="Showcase image" width="500">
  <img src="https://media.roboflow.com/ultralytics/rf_showcase_3.png" alt="Showcase image" width="500">
</p>
153
docs/en/integrations/tensorboard.md
Normal file
@@ -0,0 +1,153 @@
---
comments: true
description: Walk through the integration of YOLOv8 with TensorBoard to be able to use TensorFlow's visualization toolkit for enhanced model training analysis, offering capabilities like metric tracking, model graph visualization, and more.
keywords: TensorBoard, YOLOv8, Visualization, TensorFlow, Training Analysis, Metric Tracking, Model Graphs, Experimentation, Ultralytics
---

# Gain Visual Insights with YOLOv8's Integration with TensorBoard

Understanding and fine-tuning computer vision models like [Ultralytics' YOLOv8](https://ultralytics.com) becomes more straightforward when you take a closer look at their training processes. Model training visualization helps with getting insights into the model's learning patterns, performance metrics, and overall behavior. YOLOv8's integration with TensorBoard makes this process of visualization and analysis easier and enables more efficient and informed adjustments to the model.

This guide covers how to use TensorBoard with YOLOv8. You'll learn about various visualizations, from tracking metrics to analyzing model graphs. These tools will help you understand your YOLOv8 model's performance better.

## TensorBoard

<p align="center">
  <img width="640" src="https://www.tensorflow.org/static/tensorboard/images/tensorboard.gif" alt="TensorBoard Overview">
</p>

[TensorBoard](https://www.tensorflow.org/tensorboard), TensorFlow's visualization toolkit, is essential for machine learning experimentation. TensorBoard features a range of visualization tools, crucial for monitoring machine learning models. These tools include tracking key metrics like loss and accuracy, visualizing model graphs, and viewing histograms of weights and biases over time. It also provides capabilities for projecting embeddings to lower-dimensional spaces and displaying multimedia data.
## YOLOv8 Training with TensorBoard

Using TensorBoard while training YOLOv8 models is straightforward and offers significant benefits.

## Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8 and TensorBoard
        pip install ultralytics
        ```

TensorBoard is conveniently pre-installed with YOLOv8, eliminating the need for additional setup for visualization purposes.

For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
## Configuring TensorBoard for Google Colab

When using Google Colab, it's important to set up TensorBoard before starting your training code:

!!! Example "Configure TensorBoard for Google Colab"

    === "Python"

        ```python
        %load_ext tensorboard
        %tensorboard --logdir path/to/runs
        ```
## Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a pre-trained model
        model = YOLO('yolov8n.pt')

        # Train the model
        results = model.train(data='coco128.yaml', epochs=100, imgsz=640)
        ```

Upon running the usage code snippet above, you can expect the following output:

```plaintext
TensorBoard: Start with 'tensorboard --logdir path_to_your_tensorboard_logs', view at http://localhost:6006/
```

This output indicates that TensorBoard is now actively monitoring your YOLOv8 training session. You can access the TensorBoard dashboard by visiting the provided URL (http://localhost:6006/) to view real-time training metrics and model performance. For users working in Google Colab, the TensorBoard will be displayed in the same cell where you executed the TensorBoard configuration commands.
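For example, on a local machine you could launch the dashboard against the default Ultralytics save directory like this (a sketch; adjust the path to your own training run):

```bash
# Point TensorBoard at the training run's log directory
tensorboard --logdir runs/detect/train
```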
For more information related to the model training process, be sure to check our [YOLOv8 Model Training guide](../modes/train.md). If you are interested in learning more about logging, checkpoints, plotting, and file management, read our [usage guide on configuration](../usage/cfg.md).

## Understanding Your TensorBoard for YOLOv8 Training

Now, let’s focus on understanding the various features and components of TensorBoard in the context of YOLOv8 training. The three key sections of the TensorBoard are Time Series, Scalars, and Graphs.

### Time Series

The Time Series feature in the TensorBoard offers a dynamic and detailed perspective of various training metrics over time for YOLOv8 models. It focuses on the progression and trends of metrics across training epochs. Here's an example of what you can expect to see.

![image](https://github.com/ultralytics/ultralytics/assets/25847604/20b3e038-0356-465e-a37e-1ea232c68354)

#### Key Features of Time Series in TensorBoard

- **Filter Tags and Pinned Cards**: This functionality allows users to filter specific metrics and pin cards for quick comparison and access. It's particularly useful for focusing on specific aspects of the training process.

- **Detailed Metric Cards**: Time Series divides metrics into different categories like learning rate (lr), training (train), and validation (val) metrics, each represented by individual cards.

- **Graphical Display**: Each card in the Time Series section shows a detailed graph of a specific metric over the course of training. This visual representation aids in identifying trends, patterns, or anomalies in the training process.

- **In-Depth Analysis**: Time Series provides an in-depth analysis of each metric. For instance, different learning rate segments are shown, offering insights into how adjustments in learning rate impact the model's learning curve.

#### Importance of Time Series in YOLOv8 Training

The Time Series section is essential for a thorough analysis of the YOLOv8 model's training progress. It lets you track the metrics in real time to promptly identify and solve issues. It also offers a detailed view of each metric's progression, which is crucial for fine-tuning the model and enhancing its performance.

### Scalars

Scalars in the TensorBoard are crucial for plotting and analyzing simple metrics like loss and accuracy during the training of YOLOv8 models. They offer a clear and concise view of how these metrics evolve with each training epoch, providing insights into the model's learning effectiveness and stability. Here's an example of what you can expect to see.

![image](https://github.com/ultralytics/ultralytics/assets/25847604/f9228193-13e9-40ed-8711-f0e9f1e4db20)

#### Key Features of Scalars in TensorBoard

- **Learning Rate (lr) Tags**: These tags show the variations in the learning rate across different segments (e.g., `pg0`, `pg1`, `pg2`). This helps you understand the impact of learning rate adjustments on the training process.

- **Metrics Tags**: Scalars include performance indicators such as:

    - `mAP50 (B)`: Mean Average Precision at 50% Intersection over Union (IoU), crucial for assessing object detection accuracy.

    - `mAP50-95 (B)`: Mean Average Precision calculated over a range of IoU thresholds, offering a more comprehensive evaluation of accuracy.

    - `Precision (B)`: Indicates the ratio of correctly predicted positive observations, key to understanding prediction accuracy.

    - `Recall (B)`: Important for models where missing a detection is significant, this metric measures the ability to detect all relevant instances.

    - To learn more about the different metrics, read our guide on [performance metrics](../guides/yolo-performance-metrics.md).

- **Training and Validation Tags (`train`, `val`)**: These tags display metrics specifically for the training and validation datasets, allowing for a comparative analysis of model performance across different data sets.

#### Importance of Monitoring Scalars

Observing scalar metrics is crucial for fine-tuning the YOLOv8 model. Variations in these metrics, such as spikes or irregular patterns in loss graphs, can highlight potential issues such as overfitting, underfitting, or inappropriate learning rate settings. By closely monitoring these scalars, you can make informed decisions to optimize the training process, ensuring that the model learns effectively and achieves the desired performance.
### Difference Between Scalars and Time Series

While both Scalars and Time Series in TensorBoard are used for tracking metrics, they serve slightly different purposes. Scalars focus on plotting simple metrics such as loss and accuracy as scalar values, providing a high-level overview of how these metrics change with each training epoch. The Time Series section, meanwhile, offers a more detailed timeline view of various metrics. It is particularly useful for monitoring the progression and trends of metrics over time, providing a deeper dive into the specifics of the training process.

### Graphs

The Graphs section of the TensorBoard visualizes the computational graph of the YOLOv8 model, showing how operations and data flow within the model. It's a powerful tool for understanding the model's structure, ensuring that all layers are connected correctly, and for identifying any potential bottlenecks in data flow. Here's an example of what you can expect to see.

![image](https://github.com/ultralytics/ultralytics/assets/25847604/039028e0-4ab3-4170-bfa8-f93ce483f615)

Graphs are particularly useful for debugging the model, especially in complex architectures typical in deep learning models like YOLOv8. They help in verifying layer connections and the overall design of the model.

## Summary

This guide aims to help you use TensorBoard with YOLOv8 for visualization and analysis of machine learning model training. It focuses on explaining how key TensorBoard features can provide insights into training metrics and model performance during YOLOv8 training sessions.

For a more detailed exploration of these features and effective utilization strategies, you can refer to TensorFlow’s official [TensorBoard documentation](https://www.tensorflow.org/tensorboard/get_started) and their [GitHub repository](https://github.com/tensorflow/tensorboard).

Want to learn more about the various integrations of Ultralytics? Check out the [Ultralytics integrations guide page](../integrations/index.md) to see what other exciting capabilities are waiting to be discovered!
128
docs/en/integrations/tensorrt.md
Normal file
@ -0,0 +1,128 @@
---
comments: true
description: Discover the power and flexibility of exporting Ultralytics YOLOv8 models to TensorRT format for enhanced performance and efficiency on NVIDIA GPUs.
keywords: Ultralytics, YOLOv8, TensorRT Export, Model Deployment, GPU Acceleration, NVIDIA Support, CUDA Deployment
---

# TensorRT Export for YOLOv8 Models

Deploying computer vision models in high-performance environments can require a format that maximizes speed and efficiency. This is especially true when you are deploying your model on NVIDIA GPUs.

By using the TensorRT export format, you can enhance your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for swift and efficient inference on NVIDIA hardware. This guide will give you easy-to-follow steps for the conversion process and help you make the most of NVIDIA's advanced technology in your deep learning projects.

## TensorRT

<p align="center">
  <img width="100%" src="https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-601/tensorrt-developer-guide/graphics/whatistrt2.png" alt="TensorRT Overview">
</p>

[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It’s well-suited for real-time applications like object detection.

This toolkit optimizes deep learning models for NVIDIA GPUs and results in faster and more efficient operations. TensorRT models undergo TensorRT optimization, which includes techniques like layer fusion, precision calibration (INT8 and FP16), dynamic tensor memory management, and kernel auto-tuning. Converting deep learning models into the TensorRT format allows developers to realize the potential of NVIDIA GPUs fully.

TensorRT is known for its compatibility with various model formats, including TensorFlow, PyTorch, and ONNX, providing developers with a flexible solution for integrating and optimizing models from different frameworks. This versatility enables efficient model deployment across diverse hardware and software environments.

## Key Features of TensorRT Models

TensorRT models offer a range of key features that contribute to their efficiency and effectiveness in high-speed deep learning inference:

- **Precision Calibration**: TensorRT supports precision calibration, allowing models to be fine-tuned for specific accuracy requirements. This includes support for reduced precision formats like INT8 and FP16, which can further boost inference speed while maintaining acceptable accuracy levels (see the export sketch after this list).

- **Layer Fusion**: The TensorRT optimization process includes layer fusion, where multiple layers of a neural network are combined into a single operation. This reduces computational overhead and improves inference speed by minimizing memory access and computation.

<p align="center">
  <img width="100%" src="https://developer-blogs.nvidia.com/wp-content/uploads/2017/12/pasted-image-0-3.png" alt="TensorRT Layer Fusion">
</p>

- **Dynamic Tensor Memory Management**: TensorRT efficiently manages tensor memory usage during inference, reducing memory overhead and optimizing memory allocation. This results in more efficient GPU memory utilization.

- **Automatic Kernel Tuning**: TensorRT applies automatic kernel tuning to select the most optimized GPU kernel for each layer of the model. This adaptive approach ensures that the model takes full advantage of the GPU's computational power.
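To make the precision-calibration point concrete, here is a minimal sketch of requesting a reduced-precision engine at export time. It assumes an NVIDIA GPU with TensorRT available; the `half=True` flag requests FP16, and the accuracy/speed trade-off should be validated on your own data.

```python
# Minimal sketch: export a YOLOv8 model to a reduced-precision TensorRT engine.
# Assumes a CUDA-capable GPU and a working TensorRT installation.
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(format='engine', half=True)  # FP16 engine, e.g. 'yolov8n.engine'
```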
## Deployment Options in TensorRT

Before we look at the code for exporting YOLOv8 models to the TensorRT format, let’s understand where TensorRT models are normally used.

TensorRT offers several deployment options, and each option balances ease of integration, performance optimization, and flexibility differently:

- **Deploying within TensorFlow**: This method integrates TensorRT into TensorFlow, allowing optimized models to run in a familiar TensorFlow environment. It's useful for models with a mix of supported and unsupported layers, as TF-TRT can handle these efficiently.

<p align="center">
  <img width="100%" src="https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/graphics/tf-trt-workflow.png" alt="TensorRT Overview">
</p>

- **Standalone TensorRT Runtime API**: Offers granular control, ideal for performance-critical applications. It's more complex but allows for custom implementation of unsupported operators.

- **NVIDIA Triton Inference Server**: An option that supports models from various frameworks. Particularly suited for cloud or edge inference, it provides features like concurrent model execution and model analysis.

## Exporting YOLOv8 Models to TensorRT

You can improve execution efficiency and optimize performance by converting YOLOv8 models to TensorRT format.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TensorRT format
        model.export(format='engine')  # creates 'yolov8n.engine'

        # Load the exported TensorRT model
        tensorrt_model = YOLO('yolov8n.engine')

        # Run inference
        results = tensorrt_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TensorRT format
        yolo export model=yolov8n.pt format=engine  # creates 'yolov8n.engine'

        # Run inference with the exported model
        yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
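If you want to verify the speed-up on your own hardware, a rough side-by-side timing of the PyTorch weights and the exported engine takes only a few lines. This is an illustrative sketch, not a benchmark: it assumes both model files exist locally along with a test image named `bus.jpg`, and the numbers depend entirely on your GPU and image size.

```python
# Rough latency comparison between PyTorch and TensorRT YOLOv8 models.
# Illustrative sketch: 'bus.jpg' is an assumed local test image.
import time

from ultralytics import YOLO

for weights in ('yolov8n.pt', 'yolov8n.engine'):
    model = YOLO(weights)
    model('bus.jpg', verbose=False)  # warm-up run
    start = time.perf_counter()
    for _ in range(20):
        model('bus.jpg', verbose=False)
    avg_ms = (time.perf_counter() - start) / 20 * 1000
    print(f'{weights}: {avg_ms:.1f} ms/image')
```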
## Deploying Exported YOLOv8 TensorRT Models

Having successfully exported your Ultralytics YOLOv8 models to TensorRT format, you're now ready to deploy them. For in-depth instructions on deploying your TensorRT models in various settings, take a look at the following resources:

- **[Deploying Deep Neural Networks with NVIDIA TensorRT](https://developer.nvidia.com/blog/deploying-deep-learning-nvidia-tensorrt/)**: This article explains how to use NVIDIA TensorRT to deploy deep neural networks on GPU-based deployment platforms efficiently.

- **[End-to-End AI for NVIDIA-Based PCs: NVIDIA TensorRT Deployment](https://developer.nvidia.com/blog/end-to-end-ai-for-nvidia-based-pcs-nvidia-tensorrt-deployment/)**: This blog post explains the use of NVIDIA TensorRT for optimizing and deploying AI models on NVIDIA-based PCs.

- **[GitHub Repository for NVIDIA TensorRT](https://github.com/NVIDIA/TensorRT)**: This is the official GitHub repository that contains the source code and documentation for NVIDIA TensorRT.

## Summary

In this guide, we focused on converting Ultralytics YOLOv8 models to NVIDIA's TensorRT model format. This conversion step is crucial for improving the efficiency and speed of YOLOv8 models, making them more effective and suitable for diverse deployment environments.

For more information on usage details, take a look at the [TensorRT official documentation](https://docs.nvidia.com/deeplearning/tensorrt/).

If you're curious about additional Ultralytics YOLOv8 integrations, our [integration guide page](../integrations/index.md) provides an extensive selection of informative resources and insights.
122
docs/en/integrations/tf-graphdef.md
Normal file
@ -0,0 +1,122 @@
---
comments: true
description: A guide that walks you step-by-step through how to export Ultralytics YOLOv8 models to TF GraphDef format for smooth deployment and efficient model performance.
keywords: Ultralytics, YOLOv8, TF GraphDef Export, Model Deployment, TensorFlow Ecosystem, Cross-Platform Compatibility, Performance Optimization
---

# How to Export to TF GraphDef from YOLOv8 for Deployment

When you are deploying cutting-edge computer vision models, like YOLOv8, in different environments, you might run into compatibility issues. Google's TensorFlow GraphDef, or TF GraphDef, offers a solution by providing a serialized, platform-independent representation of your model. Using the TF GraphDef model format, you can deploy your YOLOv8 model in environments where the complete TensorFlow ecosystem may not be available, such as mobile devices or specialized hardware.

In this guide, we'll walk you step by step through how to export your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to the TF GraphDef model format. By converting your model, you can streamline deployment and use YOLOv8's computer vision capabilities in a broader range of applications and platforms.

## Why Should You Export to TF GraphDef?

TF GraphDef is a powerful component of the TensorFlow ecosystem that was developed by Google. It can be used to optimize and deploy models like YOLOv8. Exporting to TF GraphDef lets us move models from research to real-world applications. It allows models to run in environments without the full TensorFlow framework.

The GraphDef format represents the model as a serialized computation graph. This enables various optimization techniques like constant folding, quantization, and graph transformations. These optimizations ensure efficient execution, reduced memory usage, and faster inference speeds.

GraphDef models can use hardware accelerators such as GPUs, TPUs, and AI chips, unlocking significant performance gains for the YOLOv8 inference pipeline. The TF GraphDef format creates a self-contained package with the model and its dependencies, simplifying deployment and integration into diverse systems.

## Key Features of TF GraphDef Models

TF GraphDef offers distinct features for streamlining model deployment and optimization.

Here's a look at its key characteristics:

- **Model Serialization**: TF GraphDef provides a way to serialize and store TensorFlow models in a platform-independent format. This serialized representation allows you to load and execute your models without the original Python codebase, making deployment easier.

- **Graph Optimization**: TF GraphDef enables the optimization of computational graphs. These optimizations can boost performance by streamlining execution flow, reducing redundancies, and tailoring operations to suit specific hardware.

- **Deployment Flexibility**: Models exported to the GraphDef format can be used in various environments, including resource-constrained devices, web browsers, and systems with specialized hardware. This opens up possibilities for wider deployment of your TensorFlow models.

- **Production Focus**: GraphDef is designed for production deployment. It supports efficient execution, serialization features, and optimizations that align with real-world use cases.

## Deployment Options with TF GraphDef

Before we dive into the process of exporting YOLOv8 models to TF GraphDef, let's take a look at some typical deployment situations where this format is used.

Here's how you can deploy with TF GraphDef efficiently across various platforms.

- **TensorFlow Serving:** This framework is designed to deploy TensorFlow models in production environments. TensorFlow Serving offers model management, versioning, and the infrastructure for efficient model serving at scale. It's a seamless way to integrate your GraphDef-based models into production web services or APIs.

- **Mobile and Embedded Devices:** With tools like TensorFlow Lite, you can convert TF GraphDef models into formats optimized for smartphones, tablets, and various embedded devices. Your models can then be used for on-device inference, where execution is done locally, often providing performance gains and offline capabilities.

- **Web Browsers:** TensorFlow.js enables the deployment of TF GraphDef models directly within web browsers. It paves the way for real-time object detection applications running on the client side, using the capabilities of YOLOv8 through JavaScript.

- **Specialized Hardware:** TF GraphDef's platform-agnostic nature allows it to target custom hardware, such as accelerators and TPUs (Tensor Processing Units). These devices can provide performance advantages for computationally intensive models.

## Exporting YOLOv8 Models to TF GraphDef

You can convert your YOLOv8 object detection model to the TF GraphDef format, which is compatible with various systems, to improve its performance across platforms.
### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TF GraphDef format
        model.export(format='pb')  # creates 'yolov8n.pb'

        # Load the exported TF GraphDef model
        tf_graphdef_model = YOLO('yolov8n.pb')

        # Run inference
        results = tf_graphdef_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TF GraphDef format
        yolo export model=yolov8n.pt format=pb  # creates 'yolov8n.pb'

        # Run inference with the exported model
        yolo predict model='yolov8n.pb' source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).

## Deploying Exported YOLOv8 TF GraphDef Models

Once you’ve exported your YOLOv8 model to the TF GraphDef format, the next step is deployment. The primary and recommended first step for running a TF GraphDef model is to use the `YOLO("model.pb")` method, as previously shown in the usage code snippet.
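If you need to work with the exported file outside the Ultralytics API, you can also load the raw GraphDef with TensorFlow itself, for example to confirm the graph deserializes correctly. This is a minimal sketch assuming TensorFlow is installed and the export produced `yolov8n.pb` as shown above.

```python
# Minimal sketch: parse the exported GraphDef directly with TensorFlow.
# Useful as a sanity check that the serialized graph loads outside Ultralytics.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('yolov8n.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

print(f'Graph contains {len(graph_def.node)} nodes')
```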
However, for more information on deploying your TF GraphDef models, take a look at the following resources:

- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: A guide on TensorFlow Serving that teaches how to deploy and serve machine learning models efficiently in production environments.

- **[TensorFlow Lite](https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter)**: This page describes how to convert machine learning models into a format optimized for on-device inference with TensorFlow Lite.

- **[TensorFlow.js](https://www.tensorflow.org/js/guide/conversion)**: A guide on model conversion that teaches how to convert TensorFlow or Keras models into TensorFlow.js format for use in web applications.

## Summary

In this guide, we explored how to export Ultralytics YOLOv8 models to the TF GraphDef format. By doing this, you can flexibly deploy your optimized YOLOv8 models in different environments.

For further details on usage, visit the [TF GraphDef official documentation](https://www.tensorflow.org/api_docs/python/tf/Graph).

For more information on integrating Ultralytics YOLOv8 with other platforms and frameworks, don't forget to check out our [integration guide page](index.md). It has great resources and insights to help you make the most of YOLOv8 in your projects.
121
docs/en/integrations/tf-savedmodel.md
Normal file
@ -0,0 +1,121 @@
---
comments: true
description: A guide that goes through exporting from Ultralytics YOLOv8 models to TensorFlow SavedModel format for streamlined deployments and optimized model performance.
keywords: Ultralytics YOLOv8, TensorFlow SavedModel, Model Deployment, TensorFlow Serving, TensorFlow Lite, Model Optimization, Computer Vision, Performance Optimization
---

# Understand How to Export to TF SavedModel Format From YOLOv8

Deploying machine learning models can be challenging. However, using an efficient and flexible model format can make your job easier. TF SavedModel is the serialization format TensorFlow uses to save and load machine-learning models in a consistent way. It is like a suitcase for TensorFlow models, making them easy to carry and use on different devices and systems.

Learning how to export to TF SavedModel from [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models can help you deploy models easily across different platforms and environments. In this guide, we'll walk through how to convert your models to the TF SavedModel format, simplifying the process of running inferences with your models on different devices.

## Why Should You Export to TF SavedModel?

The TensorFlow SavedModel format is a part of the TensorFlow ecosystem developed by Google, as shown below. It is designed to save and serialize TensorFlow models seamlessly. It encapsulates the complete details of models like the architecture, weights, and even compilation information. This makes it straightforward to share, deploy, and continue training across different environments.

<p align="center">
  <img width="100%" src="https://sdtimes.com/wp-content/uploads/2019/10/0_C7GCWYlsMrhUYRYi.png" alt="TF SavedModel">
</p>

The TF SavedModel has a key advantage: its compatibility. It works well with TensorFlow Serving, TensorFlow Lite, and TensorFlow.js. This compatibility makes it easier to share and deploy models across various platforms, including web and mobile applications. The TF SavedModel format is useful both for research and production. It provides a unified way to manage your models, ensuring they are ready for any application.

## Key Features of TF SavedModels

Here are the key features that make TF SavedModel a great option for AI developers:

- **Portability**: TF SavedModel provides a language-neutral, recoverable, hermetic serialization format. SavedModels enable higher-level systems and tools to produce, consume, and transform TensorFlow models, and they can be easily shared and deployed across different platforms and environments.

- **Ease of Deployment**: TF SavedModel bundles the computational graph, trained parameters, and necessary metadata into a single package. The package can be easily loaded and used for inference without requiring the original code that built the model. This makes the deployment of TensorFlow models straightforward and efficient in various production environments.

- **Asset Management**: TF SavedModel supports the inclusion of external assets such as vocabularies, embeddings, or lookup tables. These assets are stored alongside the graph definition and variables, ensuring they are available when the model is loaded. This feature simplifies the management and distribution of models that rely on external resources.

## Deployment Options with TF SavedModel

Before we dive into the process of exporting YOLOv8 models to the TF SavedModel format, let's explore some typical deployment scenarios where this format is used.

TF SavedModel provides a range of options to deploy your machine learning models:

- **TensorFlow Serving:** TensorFlow Serving is a flexible, high-performance serving system designed for production environments. It natively supports TF SavedModels, making it easy to deploy and serve your models on cloud platforms, on-premises servers, or edge devices.

- **Cloud Platforms:** Major cloud providers like Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure offer services for deploying and running TensorFlow models, including TF SavedModels. These services provide scalable and managed infrastructure, allowing you to deploy and scale your models easily.

- **Mobile and Embedded Devices:** TensorFlow Lite, a lightweight solution for running machine learning models on mobile, embedded, and IoT devices, supports converting TF SavedModels to the TensorFlow Lite format. This allows you to deploy your models on a wide range of devices, from smartphones and tablets to microcontrollers and edge devices.

- **TensorFlow Runtime:** TensorFlow Runtime (tfrt) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.

## Exporting YOLOv8 Models to TF SavedModel

By exporting YOLOv8 models to the TF SavedModel format, you enhance their adaptability and ease of deployment across various platforms.
### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TF SavedModel format
        model.export(format='saved_model')  # creates '/yolov8n_saved_model'

        # Load the exported TF SavedModel model
        tf_savedmodel_model = YOLO('./yolov8n_saved_model')

        # Run inference
        results = tf_savedmodel_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TF SavedModel format
        yolo export model=yolov8n.pt format=saved_model  # creates '/yolov8n_saved_model'

        # Run inference with the exported model
        yolo predict model='./yolov8n_saved_model' source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).

## Deploying Exported YOLOv8 TF SavedModel Models

Now that you have exported your YOLOv8 model to the TF SavedModel format, the next step is to deploy it. The primary and recommended first step for running a TF SavedModel is to use the `YOLO("./yolov8n_saved_model")` method, as previously shown in the usage code snippet.
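You can also open the exported directory with TensorFlow's native SavedModel loader, which is handy when handing the model to a TensorFlow-centric serving stack. This is a minimal sketch assuming TensorFlow is installed and the export created `yolov8n_saved_model` as shown above.

```python
# Minimal sketch: load the exported SavedModel directly with TensorFlow
# and list its serving signatures before wiring it into a serving stack.
import tensorflow as tf

loaded = tf.saved_model.load('yolov8n_saved_model')
print(list(loaded.signatures.keys()))  # typically includes 'serving_default'
```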
However, for in-depth instructions on deploying your TF SavedModel models, take a look at the following resources:

- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: Here’s the developer documentation for how to deploy your TF SavedModel models using TensorFlow Serving.

- **[Run a TensorFlow SavedModel in Node.js](https://blog.tensorflow.org/2020/01/run-tensorflow-savedmodel-in-nodejs-directly-without-conversion.html)**: A TensorFlow blog post on running a TensorFlow SavedModel in Node.js directly without conversion.

- **[Deploying on Cloud](https://blog.tensorflow.org/2020/04/how-to-deploy-tensorflow-2-models-on-cloud-ai-platform.html)**: A TensorFlow blog post on deploying a TensorFlow SavedModel model on the Cloud AI Platform.

## Summary

In this guide, we explored how to export Ultralytics YOLOv8 models to the TF SavedModel format. By exporting to TF SavedModel, you gain the flexibility to optimize, deploy, and scale your YOLOv8 models on a wide range of platforms.

For further details on usage, visit the [TF SavedModel official documentation](https://www.tensorflow.org/guide/saved_model).

For more information on integrating Ultralytics YOLOv8 with other platforms and frameworks, don't forget to check out our [integration guide page](index.md). It's packed with great resources to help you make the most of YOLOv8 in your projects.
122
docs/en/integrations/tflite.md
Normal file
@ -0,0 +1,122 @@
---
comments: true
description: Explore how to improve your Ultralytics YOLOv8 model's performance and interoperability using the TFLite export format suitable for edge computing environments.
keywords: Ultralytics, YOLOv8, TFLite Export, Export YOLOv8, Model Deployment
---

# A Guide on YOLOv8 Model Export to TFLite for Deployment

<p align="center">
  <img width="75%" src="https://github.com/ultralytics/ultralytics/assets/26833433/6ecf34b9-9187-4d6f-815c-72394290a4d3" alt="TFLite Logo">
</p>

Deploying computer vision models on edge devices or embedded devices requires a format that can ensure seamless performance.

The TensorFlow Lite or TFLite export format allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for tasks like object detection and image classification in edge device-based applications. In this guide, we'll walk through the steps for converting your models to the TFLite format, making it easier for your models to perform well on various edge devices.

## Why should you export to TFLite?

Introduced by Google in May 2017 as part of their TensorFlow framework, [TensorFlow Lite](https://www.tensorflow.org/lite/guide), or TFLite for short, is an open-source deep learning framework designed for on-device inference, also known as edge computing. It gives developers the necessary tools to execute their trained models on mobile, embedded, and IoT devices, as well as traditional computers.

TensorFlow Lite is compatible with a wide range of platforms, including embedded Linux, Android, iOS, and MCUs. Exporting your model to TFLite makes your applications faster, more reliable, and capable of running offline.

## Key Features of TFLite Models

TFLite models offer a wide range of key features that enable on-device machine learning by helping developers run their models on mobile, embedded, and edge devices:

- **On-device Optimization**: TFLite optimizes for on-device ML, reducing latency by processing data locally, enhancing privacy by not transmitting personal data, and minimizing model size to save space.

- **Multiple Platform Support**: TFLite offers extensive platform compatibility, supporting Android, iOS, embedded Linux, and microcontrollers.

- **Diverse Language Support**: TFLite is compatible with various programming languages, including Java, Swift, Objective-C, C++, and Python.

- **High Performance**: Achieves superior performance through hardware acceleration and model optimization.

## Deployment Options in TFLite

Before we look at the code for exporting YOLOv8 models to the TFLite format, let’s understand how TFLite models are normally used.

TFLite offers various on-device deployment options for machine learning models, including:

- **Deploying with Android and iOS**: Both Android and iOS applications with TFLite can analyze edge-based camera feeds and sensors to detect and identify objects. TFLite also offers native iOS libraries written in [Swift](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/swift) and [Objective-C](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/objc). The architecture diagram below shows the process of deploying a trained model onto Android and iOS platforms using TensorFlow Lite.

<p align="center">
  <img width="75%" src="https://1.bp.blogspot.com/-6fS9FD8KD7g/XhJ1l8y2S4I/AAAAAAAACKw/MW9MQZ8gtiYmUe0naRdN0n2FwkT1l4trACLcBGAsYHQ/s1600/architecture.png" alt="Architecture">
</p>

- **Implementing with Embedded Linux**: If running inferences on a [Raspberry Pi](https://www.raspberrypi.org/) using the [Ultralytics Guide](../guides/raspberry-pi.md) does not meet the speed requirements for your use case, you can use an exported TFLite model to accelerate inference times. Additionally, it's possible to further improve performance by utilizing a [Coral Edge TPU device](https://coral.withgoogle.com/).

- **Deploying with Microcontrollers**: TFLite models can also be deployed on microcontrollers and other devices with only a few kilobytes of memory. The core runtime fits in just 16 KB on an Arm Cortex M3 and can run many basic models. It doesn't require operating system support, any standard C or C++ libraries, or dynamic memory allocation.

## Export to TFLite: Converting Your YOLOv8 Model

You can improve on-device model execution efficiency and optimize performance by converting your YOLOv8 models to the TFLite format.
### Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TFLite format
        model.export(format='tflite')  # creates 'yolov8n_float32.tflite'

        # Load the exported TFLite model
        tflite_model = YOLO('yolov8n_float32.tflite')

        # Run inference
        results = tflite_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TFLite format
        yolo export model=yolov8n.pt format=tflite  # creates 'yolov8n_float32.tflite'

        # Run inference with the exported model
        yolo predict model='yolov8n_float32.tflite' source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
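As a quick sanity check after export, you can open the `.tflite` file directly with TensorFlow's interpreter to confirm it loads and to inspect its input tensor. This is a minimal sketch assuming TensorFlow is installed and the export produced `yolov8n_float32.tflite` as shown above.

```python
# Minimal sketch: load the exported TFLite model and inspect its input tensor.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='yolov8n_float32.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
print(input_details[0]['shape'])  # e.g. [1, 640, 640, 3] in NHWC layout
```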
## Deploying Exported YOLOv8 TFLite Models

After successfully exporting your Ultralytics YOLOv8 models to TFLite format, you can now deploy them. The primary and recommended first step for running a TFLite model is to utilize the `YOLO("model.tflite")` method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your TFLite models in various other settings, take a look at the following resources:

- **[Android](https://www.tensorflow.org/lite/android/quickstart)**: A quick start guide for integrating TensorFlow Lite into Android applications, providing easy-to-follow steps for setting up and running machine learning models.

- **[iOS](https://www.tensorflow.org/lite/guide/ios)**: Check out this detailed guide for developers on integrating and deploying TensorFlow Lite models in iOS applications, offering step-by-step instructions and resources.

- **[End-To-End Examples](https://www.tensorflow.org/lite/examples)**: This page provides an overview of various TensorFlow Lite examples, showcasing practical applications and tutorials designed to help developers implement TensorFlow Lite in their machine learning projects on mobile and edge devices.

## Summary

In this guide, we focused on how to export to TFLite format. By converting your Ultralytics YOLOv8 models to TFLite model format, you can improve the efficiency and speed of YOLOv8 models, making them more effective and suitable for edge computing environments.

For further details on usage, visit [TFLite’s official documentation](https://www.tensorflow.org/lite/guide).

Also, if you're curious about other Ultralytics YOLOv8 integrations, make sure to check out our [integration guide page](../integrations/index.md). You'll find tons of helpful info and insights waiting for you there.
126
docs/en/integrations/torchscript.md
Normal file
@ -0,0 +1,126 @@
---
comments: true
description: Learn to export your Ultralytics YOLOv8 models to TorchScript format for deployment through platforms like embedded systems, web browsers, and C++ applications.
keywords: Ultralytics, YOLOv8, Export to Torchscript, Model Optimization, Deployment, PyTorch, C++, Faster Inference
---

# YOLOv8 Model Export to TorchScript for Quick Deployment

Deploying computer vision models across different environments, including embedded systems, web browsers, or platforms with limited Python support, requires a flexible and portable solution. TorchScript focuses on portability and the ability to run models in environments where the entire Python framework is unavailable. This makes it ideal for scenarios where you need to deploy your computer vision capabilities across various devices or platforms.

Export to TorchScript to serialize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for cross-platform compatibility and streamlined deployment. In this guide, we'll show you how to export your YOLOv8 models to the TorchScript format, making it easier for you to use them across a wider range of applications.

## Why should you export to TorchScript?

![Torchscript Overview](https://github.com/ultralytics/ultralytics/assets/26833433/6873349d-c2f6-4620-b3cc-7b26b0698d0b)

Developed by the creators of PyTorch, TorchScript is a powerful tool for optimizing and deploying PyTorch models across a variety of platforms. Exporting YOLOv8 models to [TorchScript](https://pytorch.org/docs/stable/jit.html) is crucial for moving from research to real-world applications. TorchScript, part of the PyTorch framework, helps make this transition smoother by allowing PyTorch models to be used in environments that don't support Python.

The process involves two techniques: tracing and scripting. Tracing records operations during model execution, while scripting allows for the definition of models using a subset of Python. These techniques ensure that models like YOLOv8 can still work their magic even outside their usual Python environment.
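To illustrate the difference between the two techniques on a toy module (not YOLOv8 itself, which Ultralytics handles for you during export), consider this minimal sketch:

```python
# Minimal sketch contrasting tracing and scripting on a toy module.
import torch


class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)


net = TinyNet()

# Tracing: records the ops executed for one example input.
traced = torch.jit.trace(net, torch.randn(1, 3))

# Scripting: compiles the module's Python source directly,
# preserving control flow that tracing would miss.
scripted = torch.jit.script(net)
```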
![TorchScript Script and Trace](https://github.com/ultralytics/ultralytics/assets/26833433/ea9ea24f-a3a9-44f9-abc8-bc363ab7970d)

TorchScript models can also be optimized through techniques such as operator fusion and refinements in memory usage, ensuring efficient execution. Another advantage of exporting to TorchScript is its potential to accelerate model execution across various hardware platforms. It creates a standalone, production-ready representation of your PyTorch model that can be integrated into C++ environments, embedded systems, or deployed in web or mobile applications.

## Key Features of TorchScript Models

TorchScript, a key part of the PyTorch ecosystem, provides powerful features for optimizing and deploying deep learning models.

![TorchScript Features](https://github.com/ultralytics/ultralytics/assets/26833433/44c7c5e3-1264-4cf1-97da-af0126ce098a)

Here are the key features that make TorchScript a valuable tool for developers:

- **Static Graph Execution**: TorchScript uses a static graph representation of the model’s computation, which is different from PyTorch’s dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before the actual execution, resulting in improved performance during inference.

- **Model Serialization**: TorchScript allows you to serialize PyTorch models into a platform-independent format. Serialized models can be loaded without requiring the original Python code, enabling deployment in different runtime environments.

- **JIT Compilation**: TorchScript uses Just-In-Time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model’s computational graph, enabling efficient execution on target devices.

- **Cross-Language Integration**: With TorchScript, you can export PyTorch models to other languages such as C++, Java, and JavaScript. This makes it easier to integrate PyTorch models into existing software systems written in different languages.

- **Gradual Conversion**: TorchScript provides a gradual conversion approach, allowing you to incrementally convert parts of your PyTorch model into TorchScript. This flexibility is particularly useful when dealing with complex models or when you want to optimize specific portions of the code.

## Deployment Options in TorchScript

Before we look at the code for exporting YOLOv8 models to the TorchScript format, let’s understand where TorchScript models are normally used.

TorchScript offers various deployment options for machine learning models, such as:

- **C++ API**: The most common use case for TorchScript is its C++ API, which allows you to load and execute optimized TorchScript models directly within C++ applications. This is ideal for production environments where Python may not be suitable or available. The C++ API offers low-overhead and efficient execution of TorchScript models, maximizing performance potential.

- **Mobile Deployment**: TorchScript offers tools for converting models into formats readily deployable on mobile devices. PyTorch Mobile provides a runtime for executing these models within iOS and Android apps. This enables low-latency, offline inference capabilities, enhancing user experience and data privacy.

- **Cloud Deployment**: TorchScript models can be deployed to cloud-based servers using solutions like TorchServe. It provides features like model versioning, batching, and metrics monitoring for scalable deployment in production environments. Cloud deployment with TorchScript can make your models accessible via APIs or other web services.

## Export to TorchScript: Converting Your YOLOv8 Model

Exporting YOLOv8 models to TorchScript makes it easier to use them in different places and helps them run faster and more efficiently. This is great for anyone looking to use deep learning models more effectively in real-world applications.

### Installation

To install the required package, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required package for YOLOv8
        pip install ultralytics
        ```

For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

### Usage

Before diving into the usage instructions, it's important to note that while all [Ultralytics YOLOv8 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).

!!! Example "Usage"

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load the YOLOv8 model
        model = YOLO('yolov8n.pt')

        # Export the model to TorchScript format
        model.export(format='torchscript')  # creates 'yolov8n.torchscript'

        # Load the exported TorchScript model
        torchscript_model = YOLO('yolov8n.torchscript')

        # Run inference
        results = torchscript_model('https://ultralytics.com/images/bus.jpg')
        ```

    === "CLI"

        ```bash
        # Export a YOLOv8n PyTorch model to TorchScript format
        yolo export model=yolov8n.pt format=torchscript  # creates 'yolov8n.torchscript'

        # Run inference with the exported model
        yolo predict model=yolov8n.torchscript source='https://ultralytics.com/images/bus.jpg'
        ```

For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
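Because the exported file is a plain TorchScript archive, you can also load it with `torch.jit.load` and call it on a raw tensor, which is how a C++ or custom Python deployment would consume it. This is a minimal sketch; the 640×640 input size matches the default export image size, and the raw output is unpostprocessed predictions rather than the parsed results the Ultralytics API returns.

```python
# Minimal sketch: consume the exported TorchScript archive without
# the Ultralytics API, as a standalone runtime would.
import torch

ts_model = torch.jit.load('yolov8n.torchscript')
ts_model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # NCHW input at the default 640 imgsz
with torch.no_grad():
    raw_preds = ts_model(dummy)  # raw predictions; post-processing (e.g. NMS) is up to you
```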
## Deploying Exported YOLOv8 TorchScript Models

After successfully exporting your Ultralytics YOLOv8 models to TorchScript format, you can now deploy them. The primary and recommended first step for running a TorchScript model is to utilize the `YOLO("model.torchscript")` method, as outlined in the previous usage code snippet. However, for in-depth instructions on deploying your TorchScript models in various other settings, take a look at the following resources:

- **[Explore Mobile Deployment](https://pytorch.org/mobile/home/)**: The PyTorch Mobile Documentation provides comprehensive guidelines for deploying models on mobile devices, ensuring your applications are efficient and responsive.

- **[Master Server-Side Deployment](https://pytorch.org/serve/getting_started.html)**: Learn how to deploy models server-side with TorchServe, offering a step-by-step tutorial for scalable, efficient model serving.

- **[Implement C++ Deployment](https://pytorch.org/tutorials/advanced/cpp_export.html)**: Dive into the Tutorial on Loading a TorchScript Model in C++, facilitating the integration of your TorchScript models into C++ applications for enhanced performance and versatility.

## Summary

In this guide, we explored the process of exporting Ultralytics YOLOv8 models to the TorchScript format. By following the provided instructions, you can optimize YOLOv8 models for performance and gain the flexibility to deploy them across various platforms and environments.

For further details on usage, visit [TorchScript’s official documentation](https://pytorch.org/docs/stable/jit.html).

Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
156
docs/en/integrations/weights-biases.md
Normal file
@ -0,0 +1,156 @@
---
comments: true
description: Discover how to train your YOLOv8 models efficiently with Weights & Biases. This guide walks through integrating Weights & Biases with YOLOv8 to enable seamless experiment tracking, result visualization, and model explainability.
keywords: Ultralytics, YOLOv8, Object Detection, Weights & Biases, Model Training, Experiment Tracking, Visualizing Results
---

# Enhancing YOLOv8 Experiment Tracking and Visualization with Weights & Biases

Object detection models like [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) have become integral to many computer vision applications. However, training, evaluating, and deploying these complex models introduces several challenges. Tracking key training metrics, comparing model variants, analyzing model behavior, and detecting issues require substantial instrumentation and experiment management.

This guide showcases the Ultralytics YOLOv8 integration with Weights & Biases for enhanced experiment tracking, model checkpointing, and visualization of model performance. It also includes instructions for setting up the integration, training, fine-tuning, and visualizing results using Weights & Biases’ interactive features.

## Weights & Biases

<p align="center">
  <img width="640" src="https://docs.wandb.ai/assets/images/wandb_demo_experiments-4797af7fe7236d6c5c42adbdc93deb4c.gif" alt="Weights & Biases Overview">
</p>

[Weights & Biases](https://wandb.ai/site) is a cutting-edge MLOps platform designed for tracking, visualizing, and managing machine learning experiments. It features automatic logging of training metrics for full experiment reproducibility, an interactive UI for streamlined data analysis, and efficient model management tools for deploying across various environments.

## YOLOv8 Training with Weights & Biases

You can use Weights & Biases to bring efficiency and automation to your YOLOv8 training process.

## Installation

To install the required packages, run:

!!! Tip "Installation"

    === "CLI"

        ```bash
        # Install the required packages for YOLOv8 and Weights & Biases
        pip install --upgrade ultralytics==8.0.186 wandb
        ```

For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

## Configuring Weights & Biases

After installing the necessary packages, the next step is to set up your Weights & Biases environment. This includes creating a Weights & Biases account and obtaining the necessary API key for a smooth connection between your development environment and the W&B platform.

Start by initializing the Weights & Biases environment in your workspace. You can do this by running the following command and following the prompted instructions.
!!! Tip "Initial SDK Setup"

    === "CLI"

        ```bash
        # Initialize your Weights & Biases environment
        wandb login
        ```

Navigate to the Weights & Biases authorization page to create and retrieve your API key. Use this key to authenticate your environment with W&B.
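For non-interactive environments such as CI pipelines, you can authenticate from Python instead. This is a minimal sketch; the key value is a placeholder, and in practice you would supply it through a secret rather than hard-coding it.

```python
# Minimal sketch: authenticate with W&B non-interactively.
# "<your-api-key>" is a placeholder; inject the real key via a secret.
import os

import wandb

os.environ["WANDB_API_KEY"] = "<your-api-key>"
wandb.login()
```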
|
||||
|
||||
## Usage: Training YOLOv8 with Weights & Biases

Before diving into the usage instructions for YOLOv8 model training with Weights & Biases, be sure to check out the range of [YOLOv8 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.

!!! Example "Usage: Training YOLOv8 with Weights & Biases"

    === "Python"

        ```python
        from ultralytics import YOLO
        from wandb.integration.ultralytics import add_wandb_callback

        import wandb

        # Step 1: Initialize a Weights & Biases run
        wandb.init(project="ultralytics", job_type="training")

        # Step 2: Define the YOLOv8 Model and Dataset
        model_name = "yolov8n"
        dataset_name = "coco128.yaml"
        model = YOLO(f"{model_name}.pt")

        # Step 3: Add W&B Callback for Ultralytics
        add_wandb_callback(model, enable_model_checkpointing=True)

        # Step 4: Train and Fine-Tune the Model
        model.train(project="ultralytics", data=dataset_name, epochs=5, imgsz=640)

        # Step 5: Validate the Model
        model.val()

        # Step 6: Perform Inference and Log Results
        model(["path/to/image1", "path/to/image2"])

        # Step 7: Finalize the W&B Run
        wandb.finish()
        ```
### Understanding the Code

Let’s understand the steps showcased in the usage code snippet above.

- **Step 1: Initialize a Weights & Biases Run**: Start by initializing a Weights & Biases run, specifying the project name and the job type. This run will track and manage the training and validation processes of your model.

- **Step 2: Define the YOLOv8 Model and Dataset**: Specify the model variant and the dataset you wish to use. The YOLO model is then initialized with the specified model file.

- **Step 3: Add Weights & Biases Callback for Ultralytics**: This step is crucial as it enables the automatic logging of training metrics and validation results to Weights & Biases, providing a detailed view of the model's performance.

- **Step 4: Train and Fine-Tune the Model**: Begin training the model with the specified dataset, number of epochs, and image size. The training process includes logging of metrics and predictions at the end of each epoch, offering a comprehensive view of the model's learning progress.

- **Step 5: Validate the Model**: After training, the model is validated. This step is crucial for assessing the model's performance on unseen data and ensuring its generalizability. The validation call also returns a metrics object you can inspect programmatically, as shown in the sketch after this list.

- **Step 6: Perform Inference and Log Results**: The model performs predictions on the specified images. These predictions, along with visual overlays and insights, are automatically logged in a W&B Table for interactive exploration.

- **Step 7: Finalize the W&B Run**: This step marks the end of data logging and saves the final state of your model's training and validation process in the W&B dashboard.
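
As a small illustration of Step 5, the sketch below reads aggregate detection scores from the object returned by `model.val()`. The `box.map` and `box.map50` attributes reflect the current Ultralytics metrics API; treat the exact attribute names as version-dependent.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Validate on an explicit dataset; returns a metrics object
metrics = model.val(data="coco128.yaml")

# `map` is mAP averaged over IoU 0.50-0.95; `map50` is mAP at IoU 0.50
print(f"mAP50-95: {metrics.box.map:.3f}")
print(f"mAP50:    {metrics.box.map50:.3f}")
```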
### Understanding the Output

Upon running the usage code snippet above, you can expect the following key outputs:

- The setup of a new run with its unique ID, indicating the start of the training process.
- A concise summary of the model’s structure, including the number of layers and parameters.
- Regular updates on important metrics such as box loss, cls loss, dfl loss, precision, recall, and mAP scores during each training epoch.
- At the end of training, detailed metrics including the model's inference speed and overall accuracy are displayed.
- Links to the Weights & Biases dashboard for in-depth analysis and visualization of the training process, along with information on local log file locations.
### Viewing the Weights & Biases Dashboard

After running the usage code snippet, you can access the Weights & Biases (W&B) dashboard through the link provided in the output. This dashboard offers a comprehensive view of your model's training process with YOLOv8.

## Key Features of the Weights & Biases Dashboard

- **Real-Time Metrics Tracking**: Observe metrics like loss, accuracy, and validation scores as they evolve during training, offering immediate insights for model tuning.

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/TB76U9O"><a href="//imgur.com/D6NVnmN">Take a look at how the experiments are tracked using Weights & Biases.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>

- **Hyperparameter Optimization**: Weights & Biases aids in fine-tuning critical parameters such as learning rate, batch size, and more, enhancing the performance of YOLOv8 (see the sweep sketch after this list).

- **Comparative Analysis**: The platform allows side-by-side comparisons of different training runs, essential for assessing the impact of various model configurations.

- **Visualization of Training Progress**: Graphical representations of key metrics provide an intuitive understanding of the model's performance across epochs.

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/kU5h7W4" data-context="false" ><a href="//imgur.com/a/kU5h7W4">Take a look at how Weights & Biases helps you visualize validation results.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>

- **Resource Monitoring**: Keep track of CPU, GPU, and memory usage to optimize the efficiency of the training process.

- **Model Artifacts Management**: Access and share model checkpoints, facilitating easy deployment and collaboration (see the checkpoint-retrieval sketch at the end of this section).

- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases’ image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays).

<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/UTSiufs" data-context="false" ><a href="//imgur.com/a/UTSiufs">Take a look at how Weights & Biases’ image overlays help visualize model inferences.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>
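
To make the hyperparameter-optimization point concrete, here is a minimal W&B sweep sketch built on `wandb.sweep` and `wandb.agent`. The searched parameters (`lr0`, `batch`) are standard YOLOv8 `train()` arguments; the metric key `metrics/mAP50-95(B)` is an assumption based on what the Ultralytics integration typically logs, so verify it against the chart names in your own dashboard.

```python
import wandb
from ultralytics import YOLO

# Search space over two YOLOv8 training arguments; "random" samples
# configurations at random rather than exhaustively
sweep_config = {
    "method": "random",
    "metric": {"name": "metrics/mAP50-95(B)", "goal": "maximize"},  # assumed logged key
    "parameters": {
        "lr0": {"min": 0.0001, "max": 0.01},
        "batch": {"values": [8, 16, 32]},
    },
}


def train_run():
    # Each agent invocation starts a run with one sampled configuration
    with wandb.init() as run:
        model = YOLO("yolov8n.pt")
        model.train(data="coco128.yaml", epochs=5, imgsz=640, lr0=run.config.lr0, batch=run.config.batch)


sweep_id = wandb.sweep(sweep_config, project="ultralytics")
wandb.agent(sweep_id, function=train_run, count=3)  # run three trials
```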
By using these features, you can effectively track, analyze, and optimize your YOLOv8 model's training, ensuring the best possible performance and efficiency.
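
Finally, because `add_wandb_callback(..., enable_model_checkpointing=True)` logs checkpoints as W&B artifacts, you can pull a saved checkpoint back down for deployment or collaboration. The artifact path below is a placeholder, not a guaranteed naming convention; substitute the entity, project, and artifact name shown in the Artifacts tab of your run.

```python
import wandb

# Placeholder artifact path: replace <entity>, <project>, and <artifact-name>
# with the values shown in your run's Artifacts tab
api = wandb.Api()
artifact = api.artifact("<entity>/<project>/<artifact-name>:latest")
checkpoint_dir = artifact.download()  # downloads the files and returns the local directory
print(f"Checkpoint files saved to {checkpoint_dir}")
```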
## Summary

This guide walked you through the Ultralytics YOLOv8 integration with Weights & Biases, showing how the integration can efficiently track and visualize model training and prediction results.

For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics).

Also, be sure to check out the [Ultralytics integration guide page](../integrations/index.md) to learn more about different exciting integrations.