
[Apollo] This is the implementation of the yolox3d first stage: camera-based 2D object detection for autonomous driving perception.

ApolloAuto/apollo-model-yolox


Introduction

YOLOX is an anchor-free version of YOLO with a simpler design but better performance! It aims to bridge the gap between the research and industrial communities. For more details, please refer to our report on arXiv.

This repo is a PyTorch implementation of YOLOX; there is also a MegEngine implementation.

Updates!!

  • 【2023/10/23】 We employ YOLOX (commit id ac58e0a5e68e57454b7b9ac822aced493b553c53) as the first stage in the Apollo camera_detection_multi_stage component.
  • 【2023/02/28】 We support an assignment visualization tool; see the doc here.
  • 【2022/04/14】 We support jit compile op.
  • 【2021/08/19】 We optimize the training process with 2x faster training and ~1% higher performance! See notes for more details.
  • 【2021/08/05】 We release MegEngine version YOLOX.
  • 【2021/07/28】 We fix a fatal memory leak.
  • 【2021/07/26】 We now support MegEngine deployment.
  • 【2021/07/20】 We have released our technical report on arXiv.

Quick Start

Installation

Step1. Install YOLOX from source.

# clone code
git clone [email protected]:ApolloAuto/apollo-model-yolox.git

cd apollo-model-yolox

# create conda env
conda create -n apollo_yolox python=3.8
conda activate apollo_yolox

# install requirements
pip3 install -r requirements.txt
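
To sanity-check the install, run a quick import from the repo root (a minimal check; the code is used in place rather than installed as a package):

python3 -c "import yolox; print('yolox import OK')"
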
Demo

Step1. Download a pretrained model from the benchmark table.

Model        size  Params (M)  Dataset  Classes  Weights
YOLOX-voc-s  640   8.9         KITTI    6        link
YOLOX-voc-s  640   8.9         L4       8        link

Step2. Run the demo. For example, here we use the best_L4_ckpt model:

python tools/demo.py image -n yolox-s -c /path/to/your/best_L4_ckpt.pth --path sample/ --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]

Then you will find the results under YOLOX_outputs/yolox_s/.

Reproduce our results on KITTI

Step1. Prepare the KITTI dataset

cd <YOLOX_HOME>
ln -s /path/to/your/KITTI ./datasets/KITTI

Step2. Convert the dataset. We provide tools for KITTI-type datasets that help convert them to VOC format: see the readme.

Step3. Change the KITTI configs (see the sketch after this list):

  1. Change yolox_voc_s.py:
     • class number: 8 to 6
     • data_dir=os.path.join(get_yolox_datadir(), "CUSTOMER") to "KITTI"
  2. Change voc_classes.py to the KITTI classes.
  3. Complete the TODO items in yolox_voc_s.py.
  4. Modify voc.py line 119 to change jpg to png:
     • self._imgpath = os.path.join("%s", "JPEGImages", "%s.jpg")  # change .jpg to .png
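
What those edits might look like — a minimal sketch, not the exact diff; the six class names are placeholders, since the actual list depends on how you mapped KITTI labels during conversion:

# yolox/data/datasets/voc_classes.py (sketch; placeholder class names)
VOC_CLASSES = (
    "car",
    "van",
    "truck",
    "pedestrian",
    "cyclist",
    "misc",
)

# exps/example/yolox_voc/yolox_voc_s.py (sketch of the two changed lines)
self.num_classes = 6  # was 8
data_dir = os.path.join(get_yolox_datadir(), "KITTI")  # was "CUSTOMER"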

Step4. Reproduce our results on KITTI:

python3 tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 16

or resume training from a checkpoint:

python3 tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 16 -c /path/to/your/latest_ckpt.pth --resume
  • -d: number of gpu devices
  • -b: total batch size, the recommended number for -b is num-gpu * 8
  • --fp16: mixed precision training
  • --cache: cache images into RAM to accelerate training; requires a large amount of system RAM.
  • -c: checkpoint file path
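
For example, a single-GPU run combining the mixed-precision and caching flags documented above:

python3 tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 16 --fp16 --cache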

If you want to visualize your results, please refer to the Visualization guides.

Export

We support exporting a trained checkpoint to ONNX:

python tools/export_onnx.py --input data -n yolox-s -c YOLOX_outputs/yolox_voc_s/latest_ckpt.pth --output-name yolox.onnx
  • --input: ONNX model input blob name.
  • -c: path of the checkpoint.
  • --output-name: the file name of the converted model.
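
To sanity-check the exported file, you can load and run it with onnxruntime. This is a minimal sketch: it assumes the input blob name "data" and the 640x640 size used in the commands above, and feeds a dummy tensor only to verify that the graph loads and executes.

# pip install onnxruntime if it is not already present
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolox.onnx", providers=["CPUExecutionProvider"])
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # NCHW, batch of one
outputs = session.run(None, {"data": dummy})
print([o.shape for o in outputs])  # raw prediction shapes, before decoding/NMS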

Multi Machine Training

We also support multi-node training. Just add the following args:

  • --num_machines: the total number of training nodes
  • --machine_rank: the rank of each node

Suppose you want to train YOLOX on 2 machines, and your master machine's IP is 123.123.123.123 with port 12312 over TCP.

On master machine, run

python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 0

On the second machine, run

python tools/train.py -n yolox-s -b 128 --dist-url tcp://123.123.123.123:12312 --num_machines 2 --machine_rank 1

Logging to Weights & Biases

To log metrics, predictions, and model checkpoints to W&B, use the command-line argument --logger wandb and the prefix "wandb-" to specify arguments for initializing the wandb run.

python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o [--cache] --logger wandb wandb-project <project name>
                         yolox-m
                         yolox-l
                         yolox-x

An example wandb dashboard is available here.

Others

See more information with the following command:

python -m yolox.tools.train --help

Evaluation

We support batch testing for fast evaluation:

python -m yolox.tools.eval -n  yolox-s -c yolox_s.pth -b 64 --exp_file exps/example/yolox_voc/yolox_voc_s.py -d 8 --conf 0.001 [--fp16] [--fuse]
                               yolox-m
                               yolox-l
                               yolox-x
  • --fuse: fuse conv and bn
  • -d: number of GPUs used for evaluation. DEFAULT: All GPUs available will be used.
  • -b: total batch size across all GPUs

To reproduce the speed test, we use the following command:

python -m yolox.tools.eval -n  yolox-s -c yolox_s.pth -b 1 --exp_file exps/example/yolox_voc/yolox_voc_s.py -d 1 --conf 0.001 --fp16 --fuse
                               yolox-m
                               yolox-l
                               yolox-x

Tutorials

Deployment

  1. MegEngine in C++ and Python
  2. ONNX export and an ONNXRuntime
  3. TensorRT in C++ and Python
  4. ncnn in C++ and Java
  5. OpenVINO in C++ and Python
  6. Accelerate YOLOX inference with nebullvm in Python

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
