DAVIS Challenge on Video Object Segmentation 2018

Workshop in conjunction with CVPR 2018, Salt Lake City, Utah

Dates and Phases

- Test-Dev 2018: Open-ended (no closing date).
- Test-Challenge 2018: 13th May 2018 23:59 UTC - 25th May 2018 23:59 UTC.

Papers

- Right after the challenge closes (26th May), we will invite all participants to submit a short abstract (400 words maximum) describing their method (deadline: 29th May, 23:59 UTC).
- Based on the abstracts and the results obtained, we will decide which teams are accepted to the workshop. Notification date: June 1st.
- Accepted teams will be able to submit a paper describing their approach (deadline: 13th June, 23:59 UTC). The paper template is the same as CVPR, but the length is limited to 4 pages including references.
- Accepted papers will also be presented at the workshop, either as an oral presentation or as a poster.
- Accepted papers will be self-published on the challenge website (not in the official proceedings, although they carry the same value).

Datasets (download from the challenge website, 480p resolution)

- Train 2017 + Val 2017: 90 sequences, the 50 original DAVIS 2016 sequences (re-annotated with multiple objects) plus 40 new sequences. The sequences and ground truth have been publicly available since the beginning of April 2017 (an annotation-reading sketch follows this list).
- Test-Dev 2017: 30 new sequences, available since the beginning of April 2017. Ground truth not publicly available; unlimited number of submissions.
- Test-Challenge 2017: 30 new sequences, available at the start of the Test-Challenge phase. Ground truth not publicly available; limited to 5 submissions in total.
- Feel free to train or pre-train your algorithms on any dataset other than DAVIS (MS COCO, Pascal, etc.), or to use the full-resolution DAVIS annotations and images.
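
As a reference for working with the multi-object ground truth, below is a minimal Python sketch that loads one annotated frame and splits it into per-object masks. It assumes the annotations are palette-indexed PNGs in which pixel value 0 is background and values 1..N identify the objects; the file path is only an example.

import numpy as np
from PIL import Image

def load_object_masks(annotation_path):
    """Return a dict {object_id: boolean mask} for one annotated frame."""
    # Opening without conversion keeps the palette indices (the object ids).
    ann = np.array(Image.open(annotation_path))
    object_ids = [i for i in np.unique(ann) if i != 0]  # 0 is background
    return {obj_id: ann == obj_id for obj_id in object_ids}

# Example usage (hypothetical local path to a 480p annotation frame):
masks = load_object_masks("DAVIS/Annotations/480p/bike-packing/00000.png")
for obj_id, mask in masks.items():
    print(f"object {obj_id}: {mask.sum()} foreground pixels")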

Submission

Submissions to all phases will be done through the Codalab site of the challenge.
Please register to the site and refer to the instructions on how to submit your results.
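
As an illustration only, the sketch below packages a local results folder into a zip archive for upload; it assumes the layout mirrors the annotation folders (one indexed PNG mask per frame, grouped by sequence), but the exact structure and file names that Codalab expects must be taken from the submission instructions on the site.

import shutil
from pathlib import Path

# Hypothetical local folder containing one sub-folder per sequence, with one
# indexed PNG mask per frame (same layout as the provided annotations).
results_dir = Path("results/semi-supervised")
assert results_dir.is_dir(), "write your <sequence>/<frame>.png masks first"

# Create results.zip with the sequence folders at the archive root.
shutil.make_archive("results", "zip", root_dir=results_dir)
print("wrote results.zip, ready to upload on the Codalab site")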

Prizes

The winning team will receive an NVIDIA Titan Xp GPU as a prize. The top five submissions will receive a one-year subscription to Adobe Creative Cloud.

Evaluation

- The per-object measures are those described in the original DAVIS CVPR 2016 paper: region Jaccard (J) and boundary F-measure (F).
- The overall ranking measure is computed as the mean of J and F, both averaged over all objects. Precise definitions are available in the paper; a toy computation sketch follows below.
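
For intuition, here is a small Python sketch of the two per-object measures and their mean. The boundary matching below is a simplified dilation-based approximation of the bipartite matching used in the paper, so the official evaluation code should be used for any reported numbers.

import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def region_jaccard(pred, gt):
    # J: intersection-over-union of the two binary masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union

def mask_boundary(mask):
    # One-pixel-wide boundary of a binary mask.
    return np.logical_xor(mask, binary_erosion(mask))

def boundary_f(pred, gt, tol=2):
    # F: precision/recall of boundary pixels, matched within `tol` pixels.
    pb, gb = mask_boundary(pred), mask_boundary(gt)
    precision = np.logical_and(pb, binary_dilation(gb, iterations=tol)).sum() / max(pb.sum(), 1)
    recall = np.logical_and(gb, binary_dilation(pb, iterations=tol)).sum() / max(gb.sum(), 1)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def j_and_f(pred, gt):
    # Per-object ranking measure: the mean of J and F.
    return 0.5 * (region_jaccard(pred, gt) + boundary_f(pred, gt))

# Tiny example on synthetic 480p-sized masks:
pred = np.zeros((480, 854), dtype=bool); pred[100:200, 100:300] = True
gt = np.zeros((480, 854), dtype=bool); gt[110:210, 100:300] = True
print(region_jaccard(pred, gt), boundary_f(pred, gt), j_and_f(pred, gt))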

Test-Challenge 2018 Leaderboard

Citation

arXiv

The 2018 DAVIS Challenge on Video Object Segmentation
S. Caelles, A. Montes, K.-K. Maninis, Y. Chen, L. Van Gool, F. Perazzi, and J. Pont-Tuset
arXiv:1803.00557, 2018

@article{Caelles_arXiv_2018,
  author = {Sergi Caelles and Alberto Montes and Kevis-Kokitsi Maninis and Yuhua Chen and Luc {Van Gool} and Federico Perazzi and Jordi Pont-Tuset},
  title = {The 2018 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1803.00557},
  year = {2018}
}
arXiv

The 2017 DAVIS Challenge on Video Object Segmentation
J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool
arXiv:1704.00675, 2017

@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}
CVPR

A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation
F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung
Computer Vision and Pattern Recognition (CVPR) 2016

@inproceedings{Perazzi2016,
  author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year = {2016}
}
Please cite these papers in your publications if DAVIS helps your research.

Other considerations

- Each entry must be associated with a team and provide its affiliation.
- The best entry of each team will be publicly visible in the leaderboard at all times.
- We will only consider the "semi-supervised" scenario: the mask of the first frame is given, and no human interaction or refinement is allowed. Although we have no way to check the latter during the challenge phase, we will do our best to detect it a posteriori before the workshop.
- The new annotations in this dataset belong to the organizers of the challenge and are licensed under a Creative Commons Attribution 4.0 License.

Contact

If you have any further questions, contact us!