Concepts
Overview
Before diving into data preparation and model training, it's crucial to understand the core concepts that make LuxonisTrain highly flexible and customizable. This section will walk you through the fundamentals of Configuration and Customization.
Configuration
LuxonisTrain uses YAML configuration files to define every aspect of your training pipeline, from model architecture to optimization settings. This page guides you through the available configuration options so you can tailor the tool to your needs. For a complete list of all parameters, their possible values, and default settings, please visit the LuxonisTrain documentation.
Configuration Components
- Model: Defines the architecture of your model, including nodes, losses, visualizers and metrics.
- Loader: Configures data loading by selecting the appropriate dataset. For more details on how to create a dataset, refer to the Data Preparation section.
- Trainer: Specifies training parameters, including batch size, epochs, preprocessing (normalization, augmentations), optimizer, scheduler, callbacks, and more. For more details, refer to the Training section.
- Tracker: Monitors and logs training metrics for analysis. For more details, refer to the Training section.
- Exporter: Defines export settings for your trained model to ensure compatibility and performance on deployment platforms. For more details, refer to the Exporting section.
- Tuner: Tunes hyperparameters to optimize model performance. For more details, refer to the Training section.
Example Configuration File
Here’s a sample YAML configuration file to give you an idea of how to structure your LuxonisTrain setup:
Yaml
model:
  name: model_name

  # Use a predefined detection model instead of manually defining the architecture
  predefined_model:
    name: DetectionModel
    params:
      variant: light

# Dataset configuration: Download and parse the COCO dataset from RoboFlow
loader:
  params:
    dataset_name: coco_test
    dataset_dir: "roboflow://team-roboflow/coco-128/2/coco"

trainer:
  batch_size: 8
  epochs: 200
  n_workers: 8
  validation_interval: 10

  preprocessing:
    train_image_size: [384, 384]

    # Image normalization using ImageNet standards
    normalize:
      active: true

    # Image augmentations using Albumentations library
    augmentations:
      - name: Defocus
      - name: Sharpen
      - name: Flip

  callbacks:
    - name: ExportOnTrainEnd
    - name: ArchiveOnTrainEnd
    - name: TestOnTrainEnd

  optimizer:
    name: SGD
    params:
      lr: 0.02

  scheduler:
    name: ConstantLR
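The sample above leaves the tracker, exporter, and tuner sections at their defaults. When you want to configure them explicitly, they live at the same top level as model, loader, and trainer. The snippet below is only a rough sketch, and the field names (project_name, opset_version, n_trials, and the _float suffix for tunable parameters) are assumptions here; check the configuration reference for the exact schema and defaults.
Yaml
tracker:
  project_name: coco_test        # experiment name shown in the tracking UI
  save_directory: output         # where local logs and checkpoints are written
  is_tensorboard: true           # log metrics to TensorBoard

exporter:
  onnx:
    opset_version: 11            # ONNX opset used when exporting the model

tuner:
  n_trials: 10                   # number of hyperparameter search trials
  params:
    # search the optimizer learning rate in the range [0.0001, 0.1]
    trainer.optimizer.params.lr_float: [0.0001, 0.1]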
Predefined Models
You don't need to design an architecture from scratch; you can use a predefined model based on the task you want to solve. These models are well tested to be fast and accurate. You can check out all of them here.
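For example, switching to a different task is often just a matter of changing the predefined model's name and variant. The snippet below assumes a SegmentationModel predefined model with a light variant; the available model names and variants are listed in the predefined models documentation.
Yaml
model:
  name: segmentation_example
  # Swap the predefined model to target a different task
  predefined_model:
    name: SegmentationModel
    params:
      variant: light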
Customization
LuxonisTrain's modular architecture allows you to customize and extend the framework to fit your specific needs. You can customize components such as Loaders, Nodes, Losses, Metrics, Visualizers, Callbacks, Optimizers, and Schedulers. The training strategy can also be fully customized, allowing you to set different learning rates for different parameter groups, apply different schedulers to different weights, and create complex warmup phases. This granular control enables sophisticated optimization strategies that can significantly improve model convergence and performance.
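As a quick illustration of this flexibility, a custom optimizer can be registered and then referenced by name from the YAML configuration. The sketch below assumes an OPTIMIZERS registry in luxonis_train.utils.registry, analogous to the CALLBACKS registry used in the examples that follow; treat it as a starting point rather than a definitive API reference.
Python
import torch

# Assumption: an OPTIMIZERS registry exists alongside CALLBACKS in
# luxonis_train.utils.registry; verify against your installed version.
from luxonis_train.utils.registry import OPTIMIZERS


@OPTIMIZERS.register()
class CustomAdamW(torch.optim.AdamW):
    # A thin wrapper around AdamW with project-specific defaults.
    # Once registered, reference it in the config as:
    #   optimizer:
    #     name: CustomAdamW
    def __init__(self, params, lr: float = 1e-3, weight_decay: float = 0.05, **kwargs):
        super().__init__(params, lr=lr, weight_decay=weight_decay, **kwargs)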
Custom Components
To implement a custom component, subclass the respective base class and register it. Here are some examples:
Example: Custom Callback
Python
import lightning.pytorch as pl

from luxonis_train import LuxonisLightningModule
from luxonis_train.utils.registry import CALLBACKS


@CALLBACKS.register()
class CustomCallback(pl.Callback):
    def __init__(self, message: str, **kwargs):
        super().__init__(**kwargs)
        self.message = message

    # Will be called at the end of each training epoch.
    # Consult the PyTorch Lightning documentation for more callback methods.
    def on_train_epoch_end(
        self,
        trainer: pl.Trainer,
        pl_module: LuxonisLightningModule,
    ) -> None:
        print(self.message)
Example: Custom Loss Function
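A custom loss follows the same pattern: subclass the loss base class, register it, and implement the forward computation. The sketch below is a minimal example that assumes a BaseLoss base class exported from luxonis_train and a forward method taking predictions and targets; check the LuxonisTrain documentation for the exact base-class interface (for example, supported tasks and input preparation). It pairs with the CustomLoss entry used in the configuration example below.
Python
import torch.nn.functional as F
from torch import Tensor

# Assumption: BaseLoss is exported from the top-level package, like
# LuxonisLightningModule above. If subclasses are not registered
# automatically, register explicitly via the LOSSES registry in
# luxonis_train.utils.registry.
from luxonis_train import BaseLoss


class CustomLoss(BaseLoss):
    def __init__(self, smoothing: float = 0.0, **kwargs):
        super().__init__(**kwargs)
        self.smoothing = smoothing

    # Cross-entropy with label smoothing, driven by the `smoothing`
    # parameter set in the YAML configuration.
    def forward(self, predictions: Tensor, target: Tensor) -> Tensor:
        return F.cross_entropy(predictions, target, label_smoothing=self.smoothing)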
Using Custom Components in Configuration
To use custom components in your training configuration, specify the component name and parameters in the YAML file. Here's an example:
Yaml
model:
  nodes:
    - name: SegmentationHead
      losses:
        - name: CustomLoss
          params:
            smoothing: 0.0001

trainer:
  callbacks:
    - name: CustomCallback
      params:
        message: "Hello from the custom callback!"
Including Custom Components in Training
After creating custom components, you need to import them before starting training. There are two ways to accomplish this:
Using the CLI
Command Line
luxonis_train --source custom_components.py train --config config.yaml
Using the Python API
Python
# Import your custom components first
from custom_components import *
from luxonis_train import LuxonisModel

model = LuxonisModel("config.yaml")
model.train()