DAVIS Challenge on Video Object Segmentation 2019

Workshop in conjunction with CVPR 2019, Long Beach, California


The unsupervised scenario assumes the user does not interact with the algorithm to obtain the segmentation masks. Methods should provide a set of object candidates, with no overlapping pixels, that span the whole video sequence.
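The non-overlap constraint can be checked before submitting. The sketch below is a minimal validity check, assuming candidates are represented as per-frame boolean masks (a hypothetical in-memory representation; the actual submission format uses indexed PNGs, which are non-overlapping by construction):

```python
import numpy as np

def masks_overlap(masks):
    """Return True if any pixel is claimed by more than one candidate mask.

    `masks` is a list of boolean arrays of identical shape, one per object
    candidate in a single frame.
    """
    coverage = np.zeros(masks[0].shape, dtype=np.int32)
    for m in masks:
        coverage += m.astype(np.int32)
    return bool((coverage > 1).any())

# Two disjoint candidates: a valid frame.
a = np.zeros((4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4), dtype=bool); b[2:] = True
print(masks_overlap([a, b]))  # False
```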

Dates and Phases

- Test-Dev 2019: Open-ended (no closing date).
- Test-Challenge 2019: 12th May 2019 23:59 UTC - 24th May 2019 23:59 UTC.


- Right after the Challenge closes (24th May), we will invite all participants to submit a short abstract (400 words maximum) describing their method (deadline 29th May, 23:59 UTC).
- Based on the abstracts and the results obtained, we will decide which teams are accepted to the workshop. Notification date: June 3rd.
- Accepted teams will be able to submit a paper describing their approach (deadline 12th June, 23:59 UTC). The paper template is the same as CVPR's, but the length is limited to 4 pages, including references.
- Accepted papers will be presented at the workshop as an oral presentation or a poster.
- Accepted papers will be self-published on the challenge website (not in the official proceedings, although they have the same value).

Datasets (Download here, 480p resolution)

- Train 2017 + Val 2017: 90 sequences, the 50 original DAVIS 2016 sequences (reannotated with multiple objects) plus 40 new sequences.
- Test-Dev 2017: 30 new sequences. Ground truth not publicly available, unlimited number of submissions.
- Test-Challenge 2017: 30 new sequences. Ground truth not publicly available, limited to 5 submissions in total.
Feel free to train or pre-train your algorithms on any other dataset apart from DAVIS (YouTube-VOS, MS COCO, PASCAL, etc.), or to use the full-resolution DAVIS annotations and images.
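DAVIS annotations are distributed as indexed PNGs in which pixel value 0 is the background and each positive value is one object ID. A minimal sketch of splitting such an annotation into per-object masks (the synthetic array below stands in for an annotation frame loaded with e.g. `PIL.Image.open` followed by `np.array`; the file paths and object layout are illustrative, not part of the dataset spec):

```python
import numpy as np

def split_annotation(ann):
    """Split a DAVIS-style annotation (2-D integer array, 0 = background,
    each positive value = one object ID) into per-object boolean masks."""
    return {int(i): ann == i for i in np.unique(ann) if i != 0}

# Synthetic 480p-sized annotation with two objects.
ann = np.zeros((480, 854), dtype=np.uint8)
ann[100:200, 100:300] = 1
ann[300:400, 400:600] = 2
masks = split_annotation(ann)
print(sorted(masks))  # [1, 2]
```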


More information coming soon!


This year we are trying to increase the prize pool, so stay tuned!



Other considerations

- Each entry must be associated with a team, which must provide its affiliation.
- The best entry of each team will be publicly visible in the leaderboard at all times.
- We will only consider the "unsupervised" scenario: no human input of any kind is allowed when testing a sequence. Although we have no way to check this at the challenge stage, we will do our best to detect violations a posteriori, before the workshop.
- We reserve the right to remove an entry from the competition when it has a high technical similarity to methods published in previous conferences or workshops. We do so in order to keep the workshop interesting and to push the state of the art forward.
- The new annotations in this dataset belong to the organizers of the challenge and are licensed under a Creative Commons Attribution 4.0 License.



The 2018 DAVIS Challenge on Video Object Segmentation
S. Caelles, A. Montes, K.-K. Maninis, Y. Chen, L. Van Gool, F. Perazzi, and J. Pont-Tuset
arXiv:1803.00557, 2018
[PDF] [BibTex]

@article{Caelles_arXiv_2018,
  author  = {Sergi Caelles and Alberto Montes and Kevis-Kokitsi Maninis and Yuhua Chen and Luc {Van Gool} and Federico Perazzi and Jordi Pont-Tuset},
  title   = {The 2018 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1803.00557},
  year    = {2018}
}

The 2017 DAVIS Challenge on Video Object Segmentation
J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool
arXiv:1704.00675, 2017
[PDF] [BibTex]

@article{Pont-Tuset_arXiv_2017,
  author  = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title   = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year    = {2017}
}

A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung
Computer Vision and Pattern Recognition (CVPR) 2016
[PDF] [Supplemental] [BibTex]

@inproceedings{Perazzi_CVPR_2016,
  author    = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title     = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year      = {2016}
}
Please cite these papers in your publications if DAVIS helps your research.


If you have any further questions, contact us!