INTERPRET: INTERACTION-Dataset-Based PREdicTion Challenge

ICCV 2021 Competition

Introduction

In the field of autonomous driving, it is widely agreed that behavior prediction (e.g., of trajectories and intentions) is one of the most challenging problems blocking the realization of full autonomy. The problem cannot be solved without real-world motion data containing highly interactive behavior, as well as proper task settings and evaluation. From the data perspective, we constructed the INTERnational, Adversarial and Cooperative moTION dataset (INTERACTION dataset), which accurately recovers large amounts of highly interactive motions of road users (e.g., vehicles, pedestrians) in a variety of driving scenarios from different countries.
 
Most prediction tasks today only evaluate the prediction performance for a single agent in each case, but multi-modal joint prediction of multiple agents is required in complex scenarios. Moreover, downstream planning modules require conditional prediction given the future motion of the ego vehicle in highly interactive scenarios. Therefore, we construct four tracks, 1) single-agent, 2) conditional single-agent, 3) multi-agent, and 4) conditional multi-agent prediction, and provide corresponding training, validation, and test data to support the new task settings and evaluation. With these four tracks, we present the INTERACTION-Dataset-based PREdicTion (INTERPRET) Challenge to address under-explored task settings and evaluation for prediction. The winners will be announced at ICCV 2021.

Dataset Introduction

The INTERACTION dataset contains naturalistic motions of various traffic participants in a variety of highly interactive driving scenarios from different countries. The dataset can serve many behavior-related research areas, such as
 
1) intention/behavior/motion prediction,
 
2) behavior cloning and imitation learning,
 
3) behavior analysis and modeling,
 
4) motion pattern and representation learning,
 
5) interactive behavior extraction and categorization,
 
6) social and human-like behavior generation,
 
7) decision-making and planning algorithm development and verification,
 
8) driving scenario/case generation, etc.
 

Sign Up

Sign up

Download Dataset

Fill in the request form, then download the dataset

Design Model

Design prediction models/algorithms to generate CSV files containing the predicted trajectories
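
As a minimal sketch of this step, the Python snippet below writes predicted trajectories into a CSV file with the standard csv module. The column names (case_id, track_id, frame_id, x, y), the 3 s horizon at an assumed 10 Hz, and the constant-velocity placeholder prediction are illustrative assumptions only; the official submission format is defined in the challenge instructions.

```python
# A constant-velocity placeholder that dumps predicted trajectories to a CSV
# file. Column names and layout are assumptions, not the official format.
import csv

FIELDS = ["case_id", "track_id", "frame_id", "x", "y"]  # assumed columns

def write_predictions(path, rows):
    """Write an iterable of per-frame prediction dicts to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)

# Example: one agent, 3 s horizon at an assumed 10 Hz (30 future frames),
# moving at a constant 0.5 m per frame along x.
rows = [
    {"case_id": 1, "track_id": 3, "frame_id": t, "x": 10.0 + 0.5 * t, "y": 2.0}
    for t in range(1, 31)
]
write_predictions("predictions.csv", rows)
```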

Submit File

Submit the CSV files or Docker images and get assessments

Leaderboard

See your rank on the leaderboards

Tracks

Track | Model Input | Output of the Task
Single-Agent Track | 1 s historical states of all agents | 3 s future states of a single target agent
Conditional Single-Agent Track | 1 s historical states of all agents + 3 s future states of the ego agent | 3 s future states of a single target agent
Multi-Agent Track | 1 s historical states of all agents | 3 s future states of all fully observable (1 s + 3 s) agents
Conditional Multi-Agent Track (shown in the video) | 1 s historical states of all agents + 3 s future states of the ego agent | 3 s future states of multiple selected agents interacting with the ego agent
The video illustrates exemplar cases from the test set of the Conditional Multi-Agent Track. The task is to jointly predict multiple target agents (green) interacting with the ego agent (red), given the ego vehicle's future trajectory. As shown in the exemplar cases, the test set of this track contains large amounts of strong interactions among the ego and target agents in a variety of complex scenarios (all without traffic lights), some of which are highly critical with near-collision situations. The prediction task is extremely challenging yet practical, and is built on high-quality trajectory data (accurate, smooth, and complete). The other three tracks simplify the Conditional Multi-Agent Track along the "number of agents" and "conditional" dimensions. The Conditional Single-Agent Track only requires prediction of a single agent interacting with the ego agent, given the ego future trajectory. The Multi-Agent Track requires joint prediction of all agents in the scene without the ego vehicle's future motion. The Single-Agent Track requires prediction of only a single agent without the ego future motion, similar to most existing prediction benchmarks.
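
To make the track table above concrete, here is a minimal sketch of the input/output interface of the Conditional Multi-Agent Track, expressed as NumPy array shapes. The 10 Hz sampling rate (1 s history = 10 frames, 3 s future = 30 frames), the per-frame state layout (x, y, vx, vy, psi), and the six prediction modes are assumptions made for illustration; the dataset documentation and the challenge instructions define the exact fields and the required number of modes.

```python
# Sketch of the Conditional Multi-Agent Track interface. Shapes assume
# 10 Hz sampling (1 s history = 10 frames, 3 s future = 30 frames) and a
# per-frame state of (x, y, vx, vy, psi); both are illustrative assumptions.
import numpy as np

def predict_conditional_multi_agent(history, ego_future, target_ids, num_modes=6):
    """Jointly predict the future x/y of the selected target agents.

    history:    (N, 10, 5) array, 1 s of past states for all N observed agents
    ego_future: (30, 5) array, 3 s of future states of the ego agent (the condition)
    target_ids: indices of the target agents interacting with the ego agent
    returns:    (num_modes, len(target_ids), 30, 2) array of joint predictions
    """
    # Placeholder "model": hold each target's last observed position for 30 frames.
    last_xy = history[target_ids, -1, :2]                             # (T, 2)
    return np.tile(last_xy[None, :, None, :], (num_modes, 1, 30, 1))  # (K, T, 30, 2)

# Example call with dummy data: 5 observed agents, targets 1 and 3 to be predicted.
preds = predict_conditional_multi_agent(
    np.zeros((5, 10, 5)), np.zeros((30, 5)), [1, 3]
)
print(preds.shape)  # (6, 2, 30, 2)
```

The other tracks follow from this interface by dropping the ego_future argument (unconditional tracks) or restricting target_ids to a single agent (single-agent tracks).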

Organizers

All from Mechanical Systems Control Laboratory (MSC Lab) at UC Berkeley
 

Wei Zhan

Postdoc
wzhan@berkeley.edu

Liting Sun

Postdoc
litingsun@berkeley.edu

Hengbo Ma

PhD Student

Chenran Li

PhD Student

Xiaosong Jia

PhD Student

Masayoshi Tomizuka

Professor
tomizuka@berkeley.edu

Collaborators of the INTERACTION Dataset

All from Mechanical Systems Control Laboratory (MSC Lab) at UC Berkeley
 

Di Wang

Visiting PhD Student

Haojie Shi

Exchange Student

Aubrey Clausse

PhD Candidate

Arnaud de La Fortelle

Professor

Maximilian Naumann

PhD Candidate

Julius Kümmerle

PhD Candidate

Hendrik Königshof

PhD Candidate

Christoph Stiller

Professor

MSC Lab Introduction

The Mechanical Systems Control Laboratory (MSC Lab) at UC Berkeley is led by Professor Masayoshi Tomizuka. He joined the Mechanical Engineering Department of UC Berkeley in 1974 and currently holds the Cheryl and John Neerhout, Jr. Distinguished Professorship. The MSC Lab has been working on vehicle automation and control for over 30 years, including participating in Demo '97 in collaboration with California PATH.
In the past decade, the research of the MSC Lab has focused on intelligent/autonomous systems and their interaction with humans, from manufacturing (industrial robots) to transportation (autonomous vehicles). The synergy between model-based control and machine learning has been emphasized, and the research program covers safe and efficient planning/control, interactive prediction/decision-making, robust perception/localization, as well as simulation/testing/datasets, in collaboration with many industrial partners on autonomous driving. Several recent publications on autonomous driving by laboratory members were recognized as Best Student Paper or Best Paper Finalist at flagship conferences on robotics and intelligent vehicles/transportation, such as IROS, ITSC, and IV.
Professor Tomizuka received the AACC Richard E. Bellman Control Heritage Award (the highest lifetime achievement award for US control systems engineers and scientists). He also received the Soichiro Honda Medal from ASME "for pioneering and sustained contributions in applying modern systems and control theory to the comprehensive analysis and control algorithm development of automated vehicle lateral guidance, which has inspired further developments in the field".

Sponsorship

In-kind donations or solutions
 
