DAVIS Challenge on Video Object Segmentation 2017

Workshop in conjunction with CVPR 2017, Honolulu, Hawaii

Dates and Phases

- Test-Dev 2017: April 2017 - open-ended.
- Test-Challenge 2017: 18th June 2017 23:59 UTC - 30th June 2017 23:59 UTC.


- Right after the challenge closes (1st July), we will invite all participants to submit a short abstract (400 words maximum) describing their method (deadline: 4th July, 23:59 UTC).
- Based on the abstracts and the results obtained, we will decide which teams are accepted to the workshop. Notification date: July 6th.
- Accepted teams will be able to submit a paper describing their approach (deadline: 16th July, 23:59 UTC). The paper template is the same as CVPR's, but the length is limited to 6 pages, including references.
- Accepted papers will be presented at the workshop as an oral presentation or a poster.
- Accepted papers will be self-published on the challenge website (not in the official proceedings, although they carry the same value).

Datasets (Download here, 480p resolution)

- Train 2017 + Val 2017: 90 sequences: the 50 original DAVIS 2016 sequences (re-annotated with multiple objects) plus 40 new sequences. The sequences and ground truth have been publicly available since the beginning of April 2017.
- Test-Dev 2017: 30 new sequences, available since the beginning of April 2017. Ground truth is not publicly available; unlimited number of submissions.
- Test-Challenge 2017: 30 new sequences, available at the start of the Test-Challenge phase. Ground truth is not publicly available; limited to 5 submissions in total.
- Feel free to train or pre-train your algorithms on any dataset other than DAVIS (MS COCO, Pascal, etc.), or to use the full-resolution DAVIS annotations and images.


Submissions

Submissions to all phases must be made through the Codalab site of the challenge.
Please register on the site and refer to the instructions on how to submit your results.
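As a rough illustration only: assuming your results are stored as per-frame PNG masks in one folder per sequence (this layout is an assumption on our part; always follow the official Codalab instructions for the exact submission format), a minimal packaging sketch could look like this:

```python
# Hypothetical packaging helper: collects all per-sequence PNG masks under a
# results directory into a single zip archive for upload. The folder layout
# (one subfolder per sequence, one PNG per frame) is an assumption, not the
# official submission specification.
import zipfile
from pathlib import Path

def package_results(results_dir, out_zip="submission.zip"):
    results_dir = Path(results_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store each mask with a path relative to the results directory,
        # e.g. "bike-packing/00000.png".
        for png in sorted(results_dir.rglob("*.png")):
            zf.write(png, str(png.relative_to(results_dir)))
    return out_zip
```

The archive then contains one entry per frame mask, keyed by sequence name and frame index.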


Prize

The winning team will receive an NVIDIA Titan Xp GPU as a prize.


Evaluation

- The per-object measures are those described in the original DAVIS CVPR 2016 paper: region Jaccard (J) and boundary F measure (F).
- The overall ranking measure is computed as the mean of J and F, both averaged over all objects. Precise definitions are available in the paper.
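To make the ranking measure concrete, here is a minimal sketch of the region Jaccard (intersection over union of binary masks) and of the overall measure as the mean of per-object J and F scores. The boundary F measure requires boundary matching and is omitted here; refer to the official evaluation code for the precise definitions.

```python
import numpy as np

def region_jaccard(pred, gt):
    """Region similarity J: intersection over union of two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, gt).sum() / union

def overall_measure(j_per_object, f_per_object):
    """Overall ranking measure: mean of J and F, each averaged over objects."""
    return 0.5 * (np.mean(j_per_object) + np.mean(f_per_object))
```

For example, a prediction covering two pixels of which one overlaps a one-pixel ground-truth object has J = 1/2.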



Citation

The 2017 DAVIS Challenge on Video Object Segmentation
J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool
arXiv:1704.00675, 2017
[PDF] [BibTex]

@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}

A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung
Computer Vision and Pattern Recognition (CVPR) 2016
[PDF] [Supplemental] [BibTex]

@inproceedings{Perazzi2016,
  author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year = {2016}
}

Please cite both papers in your publications if DAVIS helps your research.

Other considerations

- Each entry must be associated with a team, and its affiliation must be provided.
- The best entry of each team will be publicly visible on the leaderboard at all times.
- We will only consider the "semi-supervised" scenario: the segmentation mask of the first frame is given, and no human interaction or refinement is allowed. Although we have no way to verify the latter during the challenge, we will do our best to detect violations a posteriori, before the workshop.
- The new annotations in this dataset belong to the organizers of the challenge and are licensed under a Creative Commons Attribution 4.0 License.


If you have any further questions, contact us!