# Ultralytics

## Overview

The Ultralytics Python package provides a user-friendly interface for training, evaluating, and deploying deep learning models. It
supports both custom dataset training and pre-trained model inference, making it accessible to beginners while offering advanced
customization options for experienced developers.

Ultralytics is best known for spearheading the development of YOLO models, including recent versions such as YOLOv8, YOLOv10,
and YOLOv11. The package also supports other architectures, such as SAM and RT-DETR. You can read more in the [official
documentation](https://www.ultralytics.com/).

Key Features:

 * Ready-to-use pre-trained models for quick deployment.
 * Custom model training with minimal configuration.
 * Support for multiple export formats, including ONNX, TensorRT, and OpenVINO.

### Introduction to YOLO Models

YOLO (You Only Look Once) is a family of real-time object detection models known for their speed and accuracy. Unlike traditional
detection methods that analyze images in multiple steps, YOLO models frame detection as a single-pass regression problem,
predicting bounding boxes and class probabilities directly from an entire image in one evaluation.
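To make "single-pass regression" concrete: each prediction is a small vector of box coordinates plus class scores, and turning a normalized center-format box into pixel corners is plain arithmetic. A minimal sketch (the center-format convention here is illustrative; exact output layouts vary between YOLO versions):

```python
def decode_box(cx, cy, w, h, img_w, img_h):
    """Convert a normalized (cx, cy, w, h) prediction to pixel (x1, y1, x2, y2)."""
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2

# A box centered at the image midpoint, covering half of each dimension:
print(decode_box(0.5, 0.5, 0.5, 0.5, 640, 480))  # → (160.0, 120.0, 480.0, 360.0)
```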

Initially, YOLO models were designed purely for object detection, but newer versions add task-specific heads for pose
estimation, instance segmentation, and more.

## Usage

We’ve made it easy to deploy YOLO models trained with Ultralytics on Luxonis devices. To help you get started, we provide
end-to-end tutorials, categorized by task:

Object Detection

 * [YOLOv5 Tutorial](https://github.com/luxonis/ai-tutorials/blob/main/training/others/object-detection/YoloV5_training.ipynb)
 * [YOLOv7 Tutorial](https://github.com/luxonis/ai-tutorials/blob/main/training/others/object-detection/YoloV7_training.ipynb)
 * [YOLOv8 Tutorial](https://github.com/luxonis/ai-tutorials/blob/main/training/others/object-detection/YoloV8_training.ipynb)

Pose Estimation

 * [YOLOv11
   Tutorial](https://github.com/luxonis/ai-tutorials/blob/main/training/others/pose-estimation/yolo11_pose_estimation_training.ipynb)

Instance Segmentation

 * [YOLOv11
   Tutorial](https://github.com/luxonis/ai-tutorials/blob/main/training/others/instance-segmentation/yolo11_instance_segmentation_training.ipynb)

If you already have pre-trained weights and simply need to convert them for Luxonis devices, follow these steps:

 1. Convert your model using [YOLO ONNX
    Conversion](https://docs.luxonis.com/software-v3/ai-inference/conversion/onnx-conversion.md).
 2. Then, complete the conversion process by following [RVC
    Conversion](https://docs.luxonis.com/software-v3/ai-inference/conversion/rvc-conversion.md).

### DepthAI Integration

YOLO models require specialized post-processing, including:

 * Decoding raw outputs into bounding boxes (and keypoints or instance masks for certain models).
 * Non-Maximum Suppression (NMS) to filter overlapping detections.
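For intuition on the second step: greedy NMS repeatedly keeps the highest-scoring remaining box and discards any box that overlaps it beyond an IoU threshold. A minimal pure-Python sketch (production pipelines use optimized implementations, but the logic is the same):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of boxes kept after greedy non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep

# Two heavily overlapping boxes and one separate box:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```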

Once your object detection model is converted into an RVC-ready format, you can integrate it using the [DetectionNetwork
node](https://docs.luxonis.com/software-v3/depthai/depthai-components/nodes/detection_network.md), which handles all necessary
post-processing.

For instance keypoint detection or instance segmentation models, refer to the YOLO Models section on the
[Inference](https://docs.luxonis.com/software-v3/ai-inference/inference.md) page. In these cases, use the DepthAI Nodes package
and set `YOLOExtendedParser` as the head parser in the [NNArchive](https://docs.luxonis.com/software-v3/ai-inference/nn-archive.md).
