Mirror of https://gitee.com/nanjing-yimao-information/ieemoo-ai-gift.git (synced 2025-08-23 23:50:25 +00:00)
Commit: update

**docs/en/yolov5/environments/aws_quickstart_tutorial.md** (new file, 95 lines)
---
comments: true
description: Follow this comprehensive guide to set up and operate YOLOv5 on an AWS Deep Learning instance for object detection tasks. Get started with model training and deployment.
keywords: YOLOv5, AWS Deep Learning AMIs, object detection, machine learning, AI, model training, instance setup, Ultralytics
---

# YOLOv5 🚀 on AWS Deep Learning Instance: Your Complete Guide

Setting up a high-performance deep learning environment can be daunting for newcomers, but fear not! 🛠️ This guide walks you through getting YOLOv5 up and running on an AWS Deep Learning instance. By leveraging the power of Amazon Web Services (AWS), even those new to machine learning can get started quickly and cost-effectively, and the platform's scalability suits both experimentation and production deployment.

Other quickstart options for YOLOv5 include our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>, [GCP Deep Learning VM](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial), and our Docker image at [Docker Hub](https://hub.docker.com/r/ultralytics/yolov5) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>.

## Step 1: AWS Console Sign-In

Start by creating an account or signing in to the AWS console at [https://aws.amazon.com/console/](https://aws.amazon.com/console/). Once logged in, select the **EC2** service to manage and set up your instances.

![Console](https://user-images.githubusercontent.com/26833433/106323804-debddd00-622c-11eb-997f-b8217dc0e975.png)

## Step 2: Launch Your Instance

In the EC2 dashboard, click the **Launch Instance** button to create a new virtual server.

![Launch](https://user-images.githubusercontent.com/26833433/106323950-204e8800-622d-11eb-915d-5c90406973ea.png)

### Selecting the Right Amazon Machine Image (AMI)

Here you choose the operating system and software stack for your instance. Type 'Deep Learning' into the search field and select the latest Ubuntu-based Deep Learning AMI, unless your needs dictate otherwise. Amazon's Deep Learning AMIs come pre-installed with popular frameworks and GPU drivers, which streamlines your setup process.

![Choose AMI](https://user-images.githubusercontent.com/26833433/106326107-c9e34880-6230-11eb-97c9-3b5fc2f4e2ff.png)

### Picking an Instance Type

For deep learning tasks, a GPU instance type is generally recommended, as it can vastly accelerate model training.

**Note:** Your model's memory requirements should factor into instance selection. If the model exceeds the instance's available RAM, choose an instance type with enough memory for your application.

For a list of available GPU instance types, visit [EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/), specifically under Accelerated Computing.

![Choose Type](https://user-images.githubusercontent.com/26833433/106324624-52141e80-622e-11eb-9662-1a2ea6c505ec.png)
For more information on GPU monitoring and optimization, see [GPU Monitoring and Optimization](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-gpu.html). For pricing, see [On-Demand Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) and [Spot Pricing](https://aws.amazon.com/ec2/spot/pricing/).

### Configuring Your Instance

Amazon EC2 Spot Instances offer a cost-effective way to run applications, as they allow you to bid for unused capacity at a fraction of the standard cost. For a persistent experience that retains data even when the Spot Instance goes down, opt for a persistent request.

![Spot Request](https://user-images.githubusercontent.com/26833433/106324835-ac14e400-622e-11eb-8853-df5ec9b16dfc.png)

Remember to adjust the rest of your instance settings and security configurations as needed in Steps 4-7 before launching.

## Step 3: Connect to Your Instance

Once your instance is running, select its checkbox and click Connect to access the SSH information. Use the displayed SSH command in your preferred terminal to establish a connection to your instance.

![Connect](https://user-images.githubusercontent.com/26833433/106325530-cf8c6400-622f-11eb-9f64-5b313a9d57a1.png)
## Step 4: Running YOLOv5

Once logged into your instance, you're ready to clone the YOLOv5 repository and install dependencies within a Python 3.8 or later environment. YOLOv5's models and datasets will automatically download from the latest [release](https://github.com/ultralytics/yolov5/releases).

```bash
git clone https://github.com/ultralytics/yolov5  # clone repository
cd yolov5
pip install -r requirements.txt  # install dependencies
```

With your environment set up, you can begin training, validating, performing inference, and exporting your YOLOv5 models:

```bash
# Train a model on your data
python train.py

# Validate the trained model for Precision, Recall, and mAP
python val.py --weights yolov5s.pt

# Run inference using the trained model on your images or videos
python detect.py --weights yolov5s.pt --source path/to/images

# Export the trained model to other formats for deployment
python export.py --weights yolov5s.pt --include onnx coreml tflite
```
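`val.py` reports Precision, Recall, and mAP. As a reminder of what the first two measure, here is a toy calculation from detection counts (the numbers are made up for illustration):

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of predictions that are correct.
    Recall: fraction of ground-truth objects that were found."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall


# e.g. 80 correct detections, 20 false alarms, 40 missed objects
p, r = precision_recall(80, 20, 40)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.67
```

mAP extends this idea by averaging precision over recall thresholds (and over classes), so a single number summarizes the whole precision-recall trade-off.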
## Optional Extras

To add more swap memory, which can help with large datasets, run:

```bash
sudo fallocate -l 64G /swapfile  # allocate 64GB swap file
sudo chmod 600 /swapfile  # modify permissions
sudo mkswap /swapfile  # set up a Linux swap area
sudo swapon /swapfile  # activate swap file
free -h  # verify swap memory
```
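Besides `free -h`, you can confirm the swap is active programmatically. A small sketch that reads the kernel's `/proc/meminfo` (Linux only; the helper name is ours, not part of any library):

```python
def swap_total_kb(meminfo_path="/proc/meminfo"):
    """Return SwapTotal in kB as reported by the kernel, or 0 if absent."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("SwapTotal:"):
                return int(line.split()[1])  # e.g. "SwapTotal: 67108860 kB"
    return 0


print(f"SwapTotal: {swap_total_kb()} kB")
```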
And that's it! 🎉 You've successfully created an AWS Deep Learning instance and run YOLOv5. Whether you're just starting with object detection or scaling up for production, this setup can help you achieve your machine learning goals. Happy training, validating, and deploying! If you encounter any hiccups along the way, the robust AWS documentation and the active Ultralytics community are here to support you.
**docs/en/yolov5/environments/azureml_quickstart_tutorial.md** (new file, 94 lines)
---
comments: true
description: Azure Machine Learning YOLOv5 quickstart
keywords: Ultralytics, YOLO, Deep Learning, Object detection, quickstart, Azure, AzureML
---

# YOLOv5 🚀 on AzureML

This guide provides a quickstart for using YOLOv5 from an AzureML compute instance.

Note that this guide is intended for quick trials. To unlock the full power of AzureML, see the documentation to:

- [Create a data asset](https://learn.microsoft.com/azure/machine-learning/how-to-create-data-assets)
- [Create an AzureML job](https://learn.microsoft.com/azure/machine-learning/how-to-train-model)
- [Register a model](https://learn.microsoft.com/azure/machine-learning/how-to-manage-models)

## Prerequisites

You need an [AzureML workspace](https://learn.microsoft.com/azure/machine-learning/concept-workspace?view=azureml-api-2).

## Create a compute instance

From your AzureML workspace, select Compute > Compute instances > New, then select an instance with the resources you need.

<img width="1741" alt="create-compute-arrow" src="https://github.com/ouphi/ultralytics/assets/17216799/3e92fcc0-a08e-41a4-af81-d289cfe3b8f2">

## Open a Terminal

Now, from the Notebooks view, open a Terminal and select your compute.

![open-terminal](https://github.com/ouphi/ultralytics/assets/17216799/635152f1-f4a3-4261-b111-d416cb5ef357)

## Setup and run YOLOv5

First, create a virtual environment:

```bash
conda create --name yolov5env -y
conda activate yolov5env
conda install pip -y
```
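To confirm that subsequent commands run inside the new environment (the name `yolov5env` comes from the step above), a quick check from Python:

```python
import sys

# With yolov5env activated, the interpreter path contains the env name
print(sys.executable)
print("yolov5env active:", "yolov5env" in sys.executable)
```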
Clone the YOLOv5 repository with its submodules:

```bash
git clone https://github.com/ultralytics/yolov5
cd yolov5
git submodule update --init --recursive  # if git asks you to add the folder as a safe.directory, copy the recommended command
```

Install the required dependencies:

```bash
pip install -r requirements.txt
pip install "onnx>=1.10.0"  # quote the requirement so the shell doesn't treat > as a redirect
```

Train the YOLOv5 model:

```bash
python train.py
```

Validate the model for Precision, Recall, and mAP:

```bash
python val.py --weights yolov5s.pt
```

Run inference on images and videos:

```bash
python detect.py --weights yolov5s.pt --source path/to/images
```

Export models to other formats:

```bash
python export.py --weights yolov5s.pt --include onnx coreml tflite
```

## Notes on using a notebook

Note that if you want to run these commands from a Notebook, you need to [create a new Kernel](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-terminal?view=azureml-api-2#add-new-kernels) and select your new Kernel at the top of your Notebook.

Python cells will automatically use your custom environment, but in bash cells you need to run `source activate <your-env>` in each cell to make sure the cell uses your custom environment.

For example:

```bash
%%bash
source activate newenv
python val.py --weights yolov5s.pt
```
**(new file, 71 lines: Docker quickstart)**
---
comments: true
description: Learn how to set up and run YOLOv5 in a Docker container. This tutorial includes the prerequisites and step-by-step instructions.
keywords: YOLOv5, Docker, Ultralytics, Image Detection, YOLOv5 Docker Image, Docker Container, Machine Learning, AI
---

# Get Started with YOLOv5 🚀 in Docker

This tutorial will guide you through the process of setting up and running YOLOv5 in a Docker container.

You can also explore other quickstart options for YOLOv5, such as our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>, [GCP Deep Learning VM](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial), and [Amazon AWS](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial).

## Prerequisites

1. **Nvidia Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
2. **Nvidia-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [Nvidia-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
3. **Docker Engine - CE**: Version 19.03 or higher. Download and installation instructions can be found on the [Docker website](https://docs.docker.com/install/).

## Step 1: Pull the YOLOv5 Docker Image

The Ultralytics YOLOv5 DockerHub repository is available at [https://hub.docker.com/r/ultralytics/yolov5](https://hub.docker.com/r/ultralytics/yolov5). Docker Autobuild ensures that the `ultralytics/yolov5:latest` image is always in sync with the most recent repository commit. To pull the latest image, run the following command:

```bash
sudo docker pull ultralytics/yolov5:latest
```

## Step 2: Run the Docker Container

### Basic container:

Run an interactive instance of the YOLOv5 Docker image (called a "container") using the `-it` flag:

```bash
sudo docker run --ipc=host -it ultralytics/yolov5:latest
```

### Container with local file access:

To run a container with access to local files (e.g., COCO training data in `/datasets`), use the `-v` flag:

```bash
sudo docker run --ipc=host -it -v "$(pwd)"/datasets:/usr/src/datasets ultralytics/yolov5:latest
```
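The `-v host:container` flag makes the host directory appear inside the container at the given path. Purely to illustrate that mapping (the helper function and example paths below are ours, not part of Docker):

```python
from pathlib import PurePosixPath


def to_container_path(host_file, host_root, container_root):
    """Translate a host path under the bind mount to its in-container path."""
    rel = PurePosixPath(host_file).relative_to(host_root)
    return str(PurePosixPath(container_root) / rel)


# With -v /home/me/datasets:/usr/src/datasets, a host file maps to:
print(to_container_path("/home/me/datasets/coco/img.jpg",
                        "/home/me/datasets", "/usr/src/datasets"))
# /usr/src/datasets/coco/img.jpg
```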
### Container with GPU access:

To run a container with GPU access, use the `--gpus all` flag:

```bash
sudo docker run --ipc=host -it --gpus all ultralytics/yolov5:latest
```

## Step 3: Use YOLOv5 🚀 within the Docker Container

Now you can train, test, detect, and export YOLOv5 models within the running Docker container:

```bash
# Train a model on your data
python train.py

# Validate the trained model for Precision, Recall, and mAP
python val.py --weights yolov5s.pt

# Run inference using the trained model on your images or videos
python detect.py --weights yolov5s.pt --source path/to/images

# Export the trained model to other formats for deployment
python export.py --weights yolov5s.pt --include onnx coreml tflite
```

<p align="center"><img width="1000" src="https://user-images.githubusercontent.com/26833433/142224770-6e57caaf-ac01-4719-987f-c37d1b6f401f.png" alt="GCP running Docker"></p>
**(new file, 87 lines: GCP quickstart)**
---
comments: true
description: Discover how to deploy YOLOv5 on a GCP Deep Learning VM for seamless object detection. Ideal for ML beginners and cloud learners. Get started with our easy-to-follow tutorial!
keywords: YOLOv5, Google Cloud Platform, GCP, Deep Learning VM, ML model training, object detection, AI tutorial, cloud-based AI, machine learning setup
---

# Mastering YOLOv5 🚀 Deployment on Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM) ⭐

Embarking on the journey of artificial intelligence and machine learning can be exhilarating, especially when you leverage the power and flexibility of a cloud platform. Google Cloud Platform (GCP) offers robust tools tailored for machine learning enthusiasts and professionals alike. One such tool is the Deep Learning VM, which is preconfigured for data science and ML tasks. In this tutorial, we will navigate the process of setting up YOLOv5 on a GCP Deep Learning VM. Whether you're taking your first steps in ML or you're a seasoned practitioner, this guide provides a clear pathway to implementing object detection models powered by YOLOv5.

🆓 Plus, if you're a fresh GCP user, you're in luck with a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial) to kickstart your projects.

In addition to GCP, explore other accessible quickstart options for YOLOv5, like our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> for a browser-based experience, or the scalability of [Amazon AWS](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial). Furthermore, container aficionados can utilize our official Docker image at [Docker Hub](https://hub.docker.com/r/ultralytics/yolov5) <img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"> for an encapsulated environment.

## Step 1: Create and Configure Your Deep Learning VM

Let's begin by creating a virtual machine that's tuned for deep learning:

1. Head over to the [GCP marketplace](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) and select the **Deep Learning VM**.
2. Opt for an **n1-standard-8** instance; it offers a balance of 8 vCPUs and 30 GB of memory, well suited to our needs.
3. Next, select a GPU. This depends on your workload; even a basic one like the Tesla T4 will markedly accelerate your model training.
4. Tick the box for 'Install NVIDIA GPU driver automatically on first startup?' for hassle-free setup.
5. Allocate a 300 GB SSD Persistent Disk to ensure you don't bottleneck on I/O operations.
6. Hit 'Deploy' and let GCP provision your custom Deep Learning VM.

This VM comes loaded with a treasure trove of preinstalled tools and frameworks, including the [Anaconda](https://www.anaconda.com/) Python distribution, which conveniently bundles all the necessary dependencies for YOLOv5.

![GCP Marketplace illustration of setting up a Deep Learning VM](https://user-images.githubusercontent.com/26833433/105811495-95863880-5f61-11eb-841d-c2f2a5aa0ffe.png)

## Step 2: Ready the VM for YOLOv5

Following the environment setup, let's get YOLOv5 up and running:

```bash
# Clone the YOLOv5 repository
git clone https://github.com/ultralytics/yolov5

# Change the directory to the cloned repository
cd yolov5

# Install the necessary Python packages from requirements.txt
pip install -r requirements.txt
```

This setup process ensures you're working with Python 3.8.0 or newer and PyTorch 1.8 or above. Our scripts smoothly download [models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases), making it hassle-free to start model training.
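A quick sanity check you can run on the VM before training. The PyTorch lookup is wrapped in a try/except because this snippet may be run from an interpreter that doesn't have the preinstalled frameworks on its path:

```python
import sys

# YOLOv5 requires Python 3.8 or newer
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version}"
print("Python OK:", sys.version.split()[0])

try:
    import torch
    print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed in this interpreter")
```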
## Step 3: Train and Deploy Your YOLOv5 Models 🌐

With the setup complete, you're ready to delve into training and inference with YOLOv5 on your GCP VM:

```bash
# Train a model on your data
python train.py

# Validate the trained model for Precision, Recall, and mAP
python val.py --weights yolov5s.pt

# Run inference using the trained model on your images or videos
python detect.py --weights yolov5s.pt --source path/to/images

# Export the trained model to other formats for deployment
python export.py --weights yolov5s.pt --include onnx coreml tflite
```

With just a few commands, YOLOv5 allows you to train custom object detection models tailored to your specific needs or utilize pre-trained weights for quick results on a variety of tasks.

![Terminal command image illustrating model training on a GCP Deep Learning VM](https://user-images.githubusercontent.com/26833433/142223900-275e5c9e-e2b5-43f7-a21c-35c4ca7de87c.png)

## Allocate Swap Space (optional)

For those dealing with hefty datasets, consider amplifying your GCP instance with an additional 64 GB of swap memory:

```bash
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h  # confirm the memory increment
```

### Concluding Thoughts

Congratulations! You are now empowered to harness the capabilities of YOLOv5 with the computational prowess of Google Cloud Platform. This combination provides scalability, efficiency, and versatility for your object detection tasks. Whether for personal projects, academic research, or industrial applications, you have taken a pivotal step into the world of AI and machine learning on the cloud.

Do remember to document your journey, share insights with the Ultralytics community, and leverage collaborative arenas such as [GitHub discussions](https://github.com/ultralytics/yolov5/discussions) to grow further. Now, go forth and innovate with YOLOv5 and GCP! 🌟

Want to keep improving your ML skills and knowledge? Dive into our [documentation and tutorials](https://docs.ultralytics.com/) for more resources. Let your AI adventure continue!