CVPR 2023

Workshop on Autonomous Driving

Sunday, June 18, 2023

Vancouver, Canada

About The Workshop

The CVPR 2023 Workshop on Autonomous Driving (WAD) aims to gather researchers and engineers from academia and industry to discuss the latest advances in perception for autonomous driving. In this full-day workshop, we will host invited speakers as well as technical benchmark challenges to present the current state of the art, limitations, and future directions of the field - arguably one of the most promising applications of computer vision and artificial intelligence. Previous editions of this workshop attracted hundreds of researchers. This year, multiple industry sponsors are also joining our organizing efforts to push the workshop to a new level.

Schedule

Sunday, June 18
East Ballroom C, Vancouver Convention Centre
Local time, Vancouver (PDT)

Recordings are now available on our YouTube channel and are also linked from the schedule below.

09:15am - 09:30am   Opening Remarks
09:30am - 10:00am
10:00am - 10:30am
10:30am - 11:30am
11:30am - 01:00pm   Lunch Break & Poster Session
01:00pm - 01:30pm
01:30pm - 02:00pm
02:00pm - 02:30pm
02:30pm - 03:00pm
03:00pm - 03:15pm   Coffee Break
03:15pm - 03:45pm
03:45pm - 04:45pm
04:45pm - 05:00pm   Coffee Break
05:00pm - 05:30pm
05:30pm - 06:00pm
06:00pm - 06:15pm   Closing Remarks

News
  • [June 29] Workshop recordings are available now on the CVPR virtual site, our YouTube channel, and also linked in the schedule.
  • [June 18] Our workshop has concluded. Thanks to everyone involved! Recordings will be posted here within the next few weeks.
  • [June 17] You can find our posters at boards #23-#38 in the West Exhibit Hall from 11:30am to 1:00pm.
  • [June 16] The Waymo Open Dataset Challenge reports are available now and can be found below.
  • [June 12] Please note an update to the schedule in the early afternoon slot.
  • [April 5] The Argoverse Challenges are also open now! Find details below.
  • [April 3] We sent out author notifications for submitted papers.
  • [Mar 30] The BDD100K Challenges are also open now! Find details below.
  • [Mar 16] The Waymo Open Dataset Challenges are open now! Find details below.
  • [Mar 6] Submissions to our paper track have now closed. Stay tuned for the dataset challenges which will be announced soon!
  • [Feb 28] We extended the paper submission deadline to March 6, 2023.
  • [Feb 24] We are now accepting submissions for our paper track! Please refer to our Call for Papers for more details.
  • [Jan 8] Our workshop was accepted.
Challenges

We will host three challenges to promote research in computer vision and motion prediction for autonomous driving. Waymo, Berkeley DeepDrive, and Argoverse have prepared large-scale benchmark datasets with high-quality ground-truth annotations. We invite researchers around the world to develop new algorithms that tackle a range of challenging, realistic autonomous driving tasks.


Challenge 1: Waymo Open Dataset Challenges

The 2023 Waymo Open Dataset Challenges are now live! We invite researchers to participate in four challenges:

  • 2D Video Panoptic Segmentation Challenge: Given a panoramic video of camera images, produce panoptic segmentation labels for each pixel of each image, where each object in the scene is tracked across cameras and over time.
  • Pose Estimation Challenge: Given one or more lidar range images and associated camera images, predict 3D keypoints for pedestrians and cyclists in the scene, up to 25m from the autonomous vehicle (ADV).
  • Motion Prediction Challenge: Predict the future positions of multiple agents in the scene given 1 second of past history. We have made lidar sensor data available as a model input. (A sketch of a typical displacement-error metric follows this list.)
  • Sim Agents Challenge: Produce sim agent models that control agents in the scene and are evaluated on how human-like their behavior is. This is the first competition on simulated agents, a fundamental but relatively underexplored topic in autonomous driving now opened to the research community.
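
To make the evaluation style concrete, here is a minimal sketch of the minADE / minFDE displacement metrics commonly used to score multi-modal motion prediction. This is illustrative only, not the official Waymo evaluation code; the shapes (K candidate trajectories over T future steps) and the function name are assumptions for the example.

    # Illustrative sketch: generic minADE / minFDE for multi-modal motion
    # prediction. Not the official Waymo Open Dataset metrics code.
    import numpy as np

    def min_ade_fde(pred, gt):
        """pred: (K, T, 2) array of K candidate (x, y) trajectories over T steps.
        gt: (T, 2) ground-truth future trajectory.
        Returns (minADE, minFDE) over the K candidates."""
        dists = np.linalg.norm(pred - gt[None], axis=-1)  # per-step error, (K, T)
        ade = dists.mean(axis=1)   # average displacement of each candidate
        fde = dists[:, -1]         # final-step displacement of each candidate
        return ade.min(), fde.min()

    # Example: 6 candidate trajectories over an 8-step horizon
    pred = np.random.randn(6, 8, 2)
    gt = np.random.randn(8, 2)
    print(min_ade_fde(pred, gt))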

First-place winners in each of the four Challenges will receive $10,000 in Google Cloud credits. Additionally, teams with the best performing or noteworthy submissions may be invited to present their work (here!) at the Workshop on Autonomous Driving at CVPR. The 2023 Waymo Open Dataset Challenges close at 11:59 PM Pacific on May 23, 2023, but the leaderboards will remain open for future submissions. The Challenges’ rules and eligibility criteria are available here.


Challenge 2: BDD100K Challenges

Object tracking is one of the most fundamental problems in computer vision, with direct applications in autonomous driving. Accurate tracking in the perception system enables more accurate prediction and planning downstream. Although large amounts of driving video are available, the popular multi-object tracking challenges and benchmarks provide only tens of image sequences for training and testing. We are therefore hosting large-scale tracking challenges that invite computer vision researchers and practitioners to study multi-object tracking in a more realistic setup. We hope these challenges will lead to new advances in computer vision and new pragmatic algorithms for self-driving cars.

  • Multiple Object Tracking (MOT): Given a video sequence of camera images, predict 2D bounding boxes for each object and their association across frames. (A sketch of the standard MOTA score follows this list.)
  • Multiple Object Tracking and Segmentation (MOTS): In addition to MOT, also predict a segmentation mask for each object.
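
As a concrete reference, the sketch below shows how the standard CLEAR-MOT summary score, MOTA, is formed from sequence-level error counts. It is illustrative only; the BDD100K challenges ship their own evaluation toolkit, and the argument names here are assumptions for the example.

    # Illustrative sketch: MOTA (Multiple Object Tracking Accuracy) from
    # totals accumulated over a sequence. Not the BDD100K evaluation toolkit.
    def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
        """num_gt_boxes is the total number of ground-truth boxes."""
        errors = false_negatives + false_positives + id_switches
        return 1.0 - errors / num_gt_boxes

    # Example: a sequence with 1000 ground-truth boxes
    print(mota(false_negatives=80, false_positives=40, id_switches=5,
               num_gt_boxes=1000))  # -> 0.875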

The challenges will end at 5 PM GMT on June 7, 2023. More than $3,500 in cash prizes is available, and the winners will have a chance to present their findings alongside the invited speakers at this workshop. For more information, please check out our challenge website here.


Challenge 3: Argoverse Challenges

Argoverse is hosting four competitions this spring.

Top performing teams will be highlighted at the workshop.

Call for Papers

Important Dates

  • Workshop paper submission deadline: March 6, 2023 (23:59 PST), extended from March 3, 2023
  • Notification to authors: March 31, 2023
  • Camera ready deadline: April 7, 2023

Topics Covered

Topics of the papers include but are not limited to:

  • Autonomous navigation and exploration
  • Vision-based advanced driver assistance systems, driver monitoring, and advanced interfaces
  • Vision systems for unmanned aerial and underwater vehicles
  • Deep learning, machine learning, and image analysis techniques in vehicle technology
  • Performance evaluation of vehicular applications
  • On-board calibration of acquisition systems (e.g., cameras, radars, lidars)
  • 3D reconstruction and understanding
  • Vision-based localization (e.g., place recognition, visual odometry, SLAM)

Presentation Guidelines

All accepted papers will be presented as posters. The guidelines for the posters are the same as at the main conference.

Submission Guidelines

  • We solicit short papers on autonomous vehicle topics
  • Submitted manuscripts should follow the CVPR 2023 paper template
  • The page limit is 8 pages (excluding references)
  • We do not accept dual submissions
  • Submissions will be rejected without review if they:
    • contain more than 8 pages (excluding references)
    • violate the double-blind policy or the dual-submission policy
  • Accepted papers will be linked on the workshop webpage and, if the authors agree, also included in the main conference proceedings
  • Papers will be peer reviewed under a double-blind policy and must be submitted online through the CMT submission system at: https://cmt3.research.microsoft.com/WAD2023
Accepted Papers

The poster session will be held from 11:30am to 1:00pm in the West Exhibit Hall at poster boards #23-#38.

Does Image Anonymization Impact Computer Vision Training?

Authors: Håkon Hukkelås; Frank Lindseth
The paper can be found here: openaccess.thecvf.com

DPPD: Deformable Polar Polygon Object Detection

Authors: Yang Zheng; Oles Andrienko; Yonglei Zhao; Minwoo Park; Trung Pham
The paper can be found here: openaccess.thecvf.com

EGA-Depth: Efficient Guided Attention for Self-Supervised Multi-Camera Depth Estimation

Authors: Yunxiao Shi; Hong Cai; Amin Ansari; Fatih Porikli
The paper can be found here: openaccess.thecvf.com

Exploiting the Complementarity of 2D and 3D Networks to Address Domain-Shift in 3D Semantic Segmentation

Authors: Adriano Cardace; Pierluigi Zama Ramirez; Samuele Salti; Luigi Di Stefano
The paper can be found here: openaccess.thecvf.com

FUTR3D: A Unified Sensor Fusion Framework for 3D Detection

Authors: Xuanyao Chen; Tianyuan Zhang; Yue Wang; Yilun Wang; Hang Zhao
The paper can be found here: openaccess.thecvf.com

HazardNet: Road Debris Detection by Augmentation of Synthetic Models

Authors: Tae Eun Choe; Jane Wu; Xiaolin Lin; Karen Kwon; Minwoo Park
The paper can be found here: openaccess.thecvf.com

Improvements to Image Reconstruction-Based Performance Prediction for Semantic Segmentation in Highly Automated Driving

Authors: Andreas Bär; Daniel Kusuma; Tim Fingscheidt
The paper can be found here: openaccess.thecvf.com

Improving Rare Classes on nuScenes LiDAR Segmentation Through Targeted Domain Adaptation

Authors: Vickram Rajendran; Chuck B Tang; Frits H van Paasschen
The paper can be found here: openaccess.thecvf.com

Joint Camera and LiDAR Risk Analysis

Authors: Oliver Zendel; Johannes Huemer; Markus Murschitz; Gustavo Fernandez Dominguez; Amadeus-Cosimo Lobe
The paper can be found here: openaccess.thecvf.com

LiDAR-Based Localization on Highways Using Raw Data and Pole-Like Object Features

Authors: Sheng-Cheng Lee; Victor Lu; Chieh-Chih Wang; Wen-Chieh Lin
The paper can be found here: openaccess.thecvf.com

MobileDeRainGAN: An Efficient Semi-Supervised Approach to Single Image Rain Removal for Task-Driven Applications

Authors: Ruphan Swaminathan; Pradyot Korupolu
The paper can be found here: openaccess.thecvf.com

MotionTrack: End-to-End Transformer-based Multi-Object Tracking with LiDAR-Camera Fusion

Authors: Ce Zhang; Chengjie Zhang; Yiluan Guo; Lingji Chen; Michael K Happold
The paper can be found here: openaccess.thecvf.com

RadarGNN: Transformation Invariant Graph Neural Network for Radar-based Perception

Authors: Felix S Fent; Philipp Bauerschmidt; Markus Lienkamp
The paper can be found here: openaccess.thecvf.com

TorchSparse++: Efficient Point Cloud Engine

Authors: Haotian Tang; Shang Yang; Zhijian Liu; Ke Hong; Zhongming Yu; Xiuyu Li; Guohao Dai; Yu Wang; Song Han
The paper can be found here: openaccess.thecvf.com

Training Strategies for Vision Transformers for Object Detection

Authors: Apoorv Singh
The paper can be found here: openaccess.thecvf.com

Ultra-Sonic Sensor based Object Detection for Autonomous Vehicles

Authors: Tommaso Nesti; Santhosh Boddana; Burhaneddin Yaman
The paper can be found here: openaccess.thecvf.com

Waymo Open Dataset Challenge Reports

Motion Prediction Challenge

MTR++_Ens - Shaoshuai Shi*, Li Jiang*, Dengxin Dai, Bernt Schiele - Max Planck Institute for Informatics (Report)

IAIR+ - Miao Kang, Liushuai Shi, Jinpeng Dong, Yuhao Huang, Ke Ye, Yufeng Hu, Junjie Zhang, Yonghao Dong, Yizhe Li, Sanping Zhou - Xi'an Jiaotong University (Report)

GTR-R36 - Haochen Liu, Xiaoyu Mo, Zhiyu Huang, Chen Lv - Nanyang Technological University (Report)

DM - Ting Yu, Lingxin Jiang, Wei Li - Inceptio Technology (Report)

Sim Agents Challenge

MVTE - Yu Wang, Tiebiao Zhao, Fan Yi - Pegasus (Report)

MTR+++ - Cheng Qian, Minghao Tian, Di Xiu - DiDi Autonomous Driving, Northeastern University of China, University of Chinese Academy of Sciences (Report)

CAD - Hsu-kuang Chiu, Stephen F. Smith - Carnegie Mellon University (Report)

multipath - Wenxi Wang, Haotian Zhen - DiDi Voyager, Xi'an Jiaotong University (Report)

Interactive Autoregression ("sim_agents_tutorial" on the leaderboard) - Xiaoyu Mo, Haochen Liu, Zhiyu Huang, Chen Lv - Nanyang Technological University (Report)

2D Video Panoptic Segmentation Challenge

AmapNet-v_230523181741 - Biye Jiang, Xiang Wu, Feng Xiong - Alibaba (Report)

Clip KMax with Video Stitching - Venkata Pradeep Kadubandi, Aniket Murarka (Report)

Pose Estimation Challenge

Squirtle ‡ - Dongqiangzi Ye*, Yufei Xie*, Weijia Chen*, Zixiang Zhou*, Hassan Foroosh (Report)

‡ Disqualified from the Challenge
* Equal contribution