Training

Overview

Start your model training with LuxonisTrain using either the CLI or the Python API. Below are examples of how to launch training and what the configuration looks like.

CLI

To train your model, use the following command:
Command Line
luxonis_train train --config configs/detection_light_model.yaml
You can also customize configuration parameters directly from the command line if needed:
Command Line
luxonis_train train \
  --config configs/detection_light_model.yaml \
  loader.params.dataset_dir "roboflow://team-roboflow/coco-128/2/coco"
This command overrides the loader.params.dataset_dir parameter from the configuration file.

Python API

Alternatively, you can use the Python API for more programmatic control:
Python
from luxonis_train import LuxonisModel

model = LuxonisModel(
    "configs/detection_light_model.yaml",
    {"loader.params.dataset_dir": "roboflow://team-roboflow/coco-128/2/coco"},
)
model.train()

Configuration File Example

Below is a sample configuration file, detection_light_model.yaml, used for training. For more information about the model, see the Detection Model details. The trainer section heavily influences the training process; to learn more about the trainer hyperparameters, refer to the LuxonisTrain Trainer section. In this example, the dataset will be parsed into a LuxonisDataset. If you are using a custom dataset, ensure it is structured according to the LuxonisDataset format. For detailed guidance on preparing your data, refer to Luxonis ML.
Yaml
model:
  name: model_name

  predefined_model:
    name: DetectionModel
    params:
      variant: light

loader:
  params:
    dataset_name: "coco_test"
    dataset_dir: "roboflow://team-roboflow/coco-128/2/coco"

trainer:
  batch_size: 8
  epochs: 200
  n_workers: 8
  validation_interval: 10

  preprocessing:
    train_image_size: [384, 384]
    normalize:
      active: true
    augmentations:
      - name: Defocus
      - name: Sharpen
      - name: Flip

  callbacks:
    - name: ExportOnTrainEnd
    - name: ArchiveOnTrainEnd
    - name: TestOnTrainEnd

  optimizer:
    name: SGD
    params:
      lr: 0.02

  scheduler:
    name: ConstantLR
Important aspects of training include hyperparameter tuning and tracking your training process.

Hyperparameter Tuning

If you are not using one of the predefined models with their provided hyperparameters, it is advisable to perform hyperparameter tuning before tackling training on a larger dataset. Hyperparameter tuning is essential for achieving optimal model performance. LuxonisTrain leverages Optuna to help you experiment with different hyperparameter settings, improving model accuracy and efficiency. For more information, see the Tuner section of the LuxonisTrain documentation.

Configuration Example

Include a tuner section in your configuration file:
Yaml
tuner:
  study_name: det_study
  n_trials: 10
  storage:
    storage_type: local
  params:
    trainer.optimizer.name_categorical: ["Adam", "SGD"]
    trainer.optimizer.params.lr_float: [0.0001, 0.001]
    trainer.batch_size_int: [4, 16, 4]
The suffix on each parameter key (_categorical, _float, _int) determines how the tuner samples values for that parameter.

CLI

To start the hyperparameter tuning process, use the following command:
Command Line
luxonis_train tune --config configs/detection_light_model.yaml

Python API

You can also use the Python API to start the tuning process:
Python
from luxonis_train import LuxonisModel

model = LuxonisModel("configs/example_tuning.yaml")
model.tune()

Tracking

Tracking the training process is crucial for monitoring model performance, diagnosing issues, and making informed improvements. LuxonisTrain supports popular tracking tools and remote database storage to facilitate effective logging and tracking throughout your training workflow. For more information, see the Tracker section of the LuxonisTrain documentation.

Supported Tracking Tools

  • MLFlow: To use MLFlow for tracking, the following environment variables are required:
    • MLFLOW_S3_BUCKET
    • MLFLOW_S3_ENDPOINT_URL
    • MLFLOW_TRACKING_URI
  • WandB (Weights & Biases): To use WandB, you will need:
    • WANDB_API_KEY
  • TensorBoard: LuxonisTrain also supports TensorBoard for visualizing your training process.
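Before launching a run with MLFlow or WandB tracking enabled, export the required variables in your shell. The variable names are as listed above; the values below (bucket name, endpoint, tracking URI, and API key) are placeholders for illustration — substitute the details of your own deployment:

```shell
# MLFlow: S3-backed artifact store and tracking server (placeholder values)
export MLFLOW_S3_BUCKET="mlflow-artifacts"
export MLFLOW_S3_ENDPOINT_URL="https://s3.example.com"
export MLFLOW_TRACKING_URI="https://mlflow.example.com"

# WandB: API key from your Weights & Biases account settings (placeholder value)
export WANDB_API_KEY="your-wandb-api-key"
```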

Remote Database Storage

For remote database storage, PostgreSQL is supported via the following environment variables:
  • POSTGRES_PASSWORD
  • POSTGRES_HOST
  • POSTGRES_PORT
  • POSTGRES_DB
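As a sketch, the PostgreSQL connection could be configured like this before starting a run; the host, port, database name, and password below are placeholders for your own instance:

```shell
# PostgreSQL connection settings for remote database storage (placeholder values)
export POSTGRES_PASSWORD="change-me"
export POSTGRES_HOST="db.example.com"
export POSTGRES_PORT="5432"
export POSTGRES_DB="luxonis_train"
```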

Configuration Example

Here's an example of how to configure tracking in your detection_light_model.yaml file:
Yaml
tracker:
  project_name: coco_test
  save_directory: output
  is_tensorboard: true
  is_wandb: false
  is_mlflow: false