Downloads - DAVIS 2017

Evaluation code, dataset and results

Downloads

Semi-supervised

The official metrics are computed on the images and annotations at 480p resolution, but feel free to use the full-resolution versions (4K, 1080p, etc.) at any stage of your research.
TrainVal - Images and Annotations
Test-Dev 2017 - Images and First-Frame Annotations
Test-Challenge 2017 - Images and First-Frame Annotations
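Since the metrics are computed at 480p, any masks you produce at full resolution need to be downsampled before evaluation. Label masks must be resized with nearest-neighbor sampling (interpolation would blend object ids into invalid values). A minimal pure-Python sketch of this idea, with an illustrative function name and a toy mask (not the DAVIS file format itself):

```python
def resize_mask_nearest(mask, out_h, out_w):
    """Downsample a label mask (list of lists of int object ids)
    with nearest-neighbor sampling, which keeps ids intact;
    bilinear interpolation would invent non-existent ids."""
    in_h, in_w = len(mask), len(mask[0])
    return [
        [mask[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Toy 4x4 mask with two objects (ids 1 and 2) on background 0.
full = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
]
half = resize_mask_nearest(full, 2, 2)
print(half)  # [[0, 1], [2, 0]]
```

In practice you would apply the same nearest-neighbor policy through your image library of choice when scaling between the 480p and full-resolution sets.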

Unsupervised

The TrainVal video sequences are the same as in the Semi-supervised set, but the annotations are different.
The video sequences in Test-Dev 2019 and Test-Challenge 2019 are new.
The official metrics are computed on the images and annotations at 480p resolution, but feel free to use the full-resolution versions (4K, 1080p, etc.) at any stage of your research.
TrainVal - Images and Annotations
Test-Dev 2019 - Images
Test-Challenge 2019 - Images

Object categories

Contains the semantic masks for all publicly available frames in the Semi-supervised sets, a JSON file with the category of each object, and another JSON file with the id and super-category of each category.
TrainVal, Test-Dev, Test-Challenge
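The two JSON files described above can be joined to map each annotated object to its super-category. The snippet below is a sketch under assumed key names and layout (the exact file names and keys in the download may differ); the sample data is hypothetical:

```python
import json

# Hypothetical snippets mirroring the described layout: one file maps
# each sequence's object ids to a category, the other maps each
# category to an id and a super-category. Check the actual keys in
# the downloaded files before relying on this structure.
objects_json = json.loads("""
{"bike-packing": {"1": "person", "2": "bike"},
 "blackswan":    {"1": "swan"}}
""")
categories_json = json.loads("""
{"person": {"id": 1, "super_category": "human"},
 "bike":   {"id": 2, "super_category": "vehicle"},
 "swan":   {"id": 3, "super_category": "animal"}}
""")

# Join the two files: (sequence, object id) -> super-category.
super_of = {
    (seq, int(obj_id)): categories_json[cat]["super_category"]
    for seq, objs in objects_json.items()
    for obj_id, cat in objs.items()
}
print(super_of[("blackswan", 1)])  # animal
```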

Scribbles

Contains the three human-annotated scribbles for each object in the TrainVal Semi-supervised set.
TrainVal - Annotated scribbles
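Scribbles are stored as per-frame polylines rather than dense masks. The sketch below assumes a record with normalized (x, y) "path" coordinates and an "object_id" per scribble, modeled on the interactive track's format; the sample record and the helper name are illustrative, so verify the keys against the files you download:

```python
# Hypothetical scribble record: per-frame lists of polyline "path"s
# with coordinates normalized to [0, 1] and an "object_id".
scribble = {
    "sequence": "bear",
    "scribbles": [
        [  # frame 0
            {"object_id": 1,
             "path": [[0.10, 0.20], [0.12, 0.25], [0.15, 0.30]]}
        ]
    ],
}

def scribble_points_to_pixels(frame_scribbles, height=480, width=854):
    """Convert one frame's normalized path points into integer pixel
    coordinates at the given resolution, grouped by object id."""
    points = {}
    for s in frame_scribbles:
        px = [(round(x * (width - 1)), round(y * (height - 1)))
              for x, y in s["path"]]
        points.setdefault(s["object_id"], []).extend(px)
    return points

pts = scribble_points_to_pixels(scribble["scribbles"][0])
print(pts[1][0])  # first annotated point of object 1, in 480p pixels
```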

Publications

Please cite this paper if you use the Unsupervised code or dataset.
arXiv

The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation
S. Caelles, J. Pont-Tuset, F. Perazzi, A. Montes, K.-K. Maninis, and L. Van Gool
arXiv:1905.00737, 2019
[PDF] [BibTex]

@article{Caelles_arXiv_2019,
  author  = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
  title   = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
  journal = {arXiv:1905.00737},
  year    = {2019}
}
Please cite this paper if you use the Interactive code or the scribbles.
arXiv

The 2018 DAVIS Challenge on Video Object Segmentation
S. Caelles, A. Montes, K.-K. Maninis, Y. Chen, L. Van Gool, F. Perazzi, and J. Pont-Tuset
arXiv:1803.00557, 2018
[PDF] [BibTex]

@article{Caelles_arXiv_2018,
  author  = {Sergi Caelles and Alberto Montes and Kevis-Kokitsi Maninis and Yuhua Chen and Luc {Van Gool} and Federico Perazzi and Jordi Pont-Tuset},
  title   = {The 2018 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1803.00557},
  year    = {2018}
}
Please cite this paper if you use the Semi-supervised code or dataset.
arXiv

The 2017 DAVIS Challenge on Video Object Segmentation
J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool
arXiv:1704.00675, 2017
[PDF] [BibTex]

@article{Pont-Tuset_arXiv_2017,
  author  = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title   = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year    = {2017}
}
Please also consider citing the following paper, since some of the sequences are borrowed from it.
CVPR

A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung
Computer Vision and Pattern Recognition (CVPR) 2016
[PDF] [Supplemental] [BibTex]

@inproceedings{Perazzi2016,
  author    = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title     = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year      = {2016}
}