ODTFormer: Efficient Obstacle Detection and Tracking with Stereo Cameras Based on Transformer

1Northeastern University, 2Brown University

ODTFormer detects and tracks voxel occupancies from temporal sequences of stereo pairs.

Abstract

Obstacle detection and tracking are critical components of autonomous robot navigation. In this paper, we propose ODTFormer, a Transformer-based model that addresses both Obstacle Detection and Tracking. For the detection task, our approach leverages deformable attention to construct a 3D cost volume, which is decoded progressively into voxel occupancy grids. We then track obstacles by matching voxels between consecutive frames. The entire model can be optimized end-to-end. Through extensive experiments on the DrivingStereo and KITTI benchmarks, our model achieves state-of-the-art performance on the obstacle detection task. We also report accuracy comparable to state-of-the-art obstacle tracking models while requiring only a fraction of their computation cost, typically ten to twenty times less.
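The tracking step described above, matching voxels between consecutive frames, can be sketched roughly as a feature-similarity matching over occupied voxels. The snippet below is a simplified illustration, not the paper's actual implementation: the function name `match_voxels`, the grid/feature shapes, and the argmax cosine-similarity matching are all assumptions for demonstration.

```python
import numpy as np

def match_voxels(feat_t0, occ_t0, feat_t1, occ_t1):
    """Illustrative sketch: match occupied voxels across two frames
    by nearest-neighbor cosine similarity of per-voxel features.

    feat_*: (D, H, W, C) per-voxel feature embeddings (hypothetical shapes)
    occ_*:  (D, H, W) binary occupancy grids
    Returns (coords_t0, coords_t1): matched voxel indices, each (N, 3).
    """
    # Coordinates of occupied voxels in each frame
    p0 = np.argwhere(occ_t0 > 0)                 # (N0, 3)
    p1 = np.argwhere(occ_t1 > 0)                 # (N1, 3)
    f0 = feat_t0[p0[:, 0], p0[:, 1], p0[:, 2]]   # (N0, C)
    f1 = feat_t1[p1[:, 0], p1[:, 1], p1[:, 2]]   # (N1, C)
    # L2-normalize so the dot product is cosine similarity
    f0 = f0 / (np.linalg.norm(f0, axis=1, keepdims=True) + 1e-8)
    f1 = f1 / (np.linalg.norm(f1, axis=1, keepdims=True) + 1e-8)
    sim = f0 @ f1.T                              # (N0, N1) similarity matrix
    best = sim.argmax(axis=1)                    # best frame-t1 match per t0 voxel
    return p0, p1[best]
```

Given the matched coordinate pairs, `coords_t1 - coords_t0` yields a per-voxel displacement, i.e. a coarse motion estimate for each tracked obstacle voxel.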

I also wrote a blog post reflecting on the major technical difficulties we encountered in this project and how we overcame them: Why Cost Volume Construction Can Be a Non-Trivial Yet Interesting Problem in Transformer-Based Models?

If you like what we've delivered in this project, I hope you find the post interesting as well.

Video

Detection Results

Tracking Results

BibTeX


@article{ding2024odtformer,
  title={ODTFormer: Efficient Obstacle Detection and Tracking with Stereo Cameras Based on Transformer},
  author={Ding, Tianye and Li, Hongyu and Jiang, Huaizu},
  journal={arXiv preprint arXiv:2403.14626},
  year={2024}
}