HumOmni: The 1st Human-Centric Omni-Model Evaluation

Benchmarking contextualized affective speech generation and proactive multimodal interaction


🌟 News: Training data are now available on each track's details page 🌟

Call Overview

HumOmni-2026 features two evaluation tracks: EmpathyEval, which benchmarks omni-models on empathetic speech understanding and generation, and ProactivEval, which benchmarks proactive assistance in streaming video scenarios. Both tracks emphasize human-centric evaluation and realistic multimodal interaction.

Evaluation Tracks

Track 1: EmpathyEval

Evaluates how multimodal systems understand human context and paralinguistic cues, and produce appropriate affective spoken responses.

  • The benchmark includes Context-Variant and Tone-Variant settings.
  • Inputs include textual context, one speech utterance, and candidate response audio.
  • The evaluation metric is accuracy; a minimal scoring sketch is shown below.
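
As a rough illustration of how the accuracy metric could be computed, the sketch below assumes a multiple-choice protocol (the model picks one candidate response per item) and uses hypothetical JSONL file names and field names ("item_id", "prediction", "label"); the official submission format is defined on the Track 1 details page.

```python
# Minimal scoring sketch for Track 1 (EmpathyEval), assuming a multiple-choice
# setup: for each item the model selects one candidate response audio given the
# textual context and the input speech utterance. The field names and file
# layout below are illustrative, not the official schema.
import json

def empathyeval_accuracy(pred_path: str, gold_path: str) -> float:
    """Fraction of items whose predicted candidate matches the reference label."""
    with open(pred_path) as f:
        preds = {r["item_id"]: r["prediction"] for r in map(json.loads, f)}
    with open(gold_path) as f:
        golds = {r["item_id"]: r["label"] for r in map(json.loads, f)}
    correct = sum(preds.get(item_id) == label for item_id, label in golds.items())
    return correct / len(golds)

if __name__ == "__main__":
    # hypothetical file names; replace with the actual prediction/reference files
    print(f"accuracy = {empathyeval_accuracy('preds.jsonl', 'gold.jsonl'):.4f}")
```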

View Track 1 Details

Track 2: ProactivEval

Evaluates proactive multimodal systems that must decide when to respond and what to say during streaming video understanding.

  • Inputs include videos and user instructions.
  • The evaluation metrics are PAUC and Duplicate, which focus on timeliness, correctness, and response redundancy; an illustrative interaction loop is shown below.
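
To make the streaming setting concrete, here is a minimal, assumed interaction loop in which the system is polled once per frame and may either stay silent or emit a response. The ProactiveModel interface, the frame/instruction handling, and the duplicate-skipping check are all assumptions for illustration; PAUC and Duplicate are computed by the organizers as specified on the Track 2 details page.

```python
# Illustrative streaming loop for Track 2 (ProactivEval): at each timestamp the
# system decides whether to respond and, if so, what to say. The interfaces and
# types here are hypothetical; this sketch does not compute PAUC or Duplicate.
from dataclasses import dataclass
from typing import Iterable, List, Optional, Protocol

@dataclass
class TimedResponse:
    timestamp: float  # seconds into the video when the response is emitted
    text: str         # response content

class ProactiveModel(Protocol):
    def step(self, frame, timestamp: float, instruction: str) -> Optional[str]:
        """Return a response to emit now, or None to stay silent."""

def run_stream(model: ProactiveModel, frames: Iterable, fps: float,
               instruction: str) -> List[TimedResponse]:
    responses: List[TimedResponse] = []
    for i, frame in enumerate(frames):
        t = i / fps
        text = model.step(frame, t, instruction)
        # skip exact repeats of the previous response to limit the redundancy
        # that the Duplicate metric penalizes
        if text is not None and (not responses or text != responses[-1].text):
            responses.append(TimedResponse(t, text))
    return responses
```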

View Track 2 Details

Awards

  • Participants with the most successful and innovative entries will be invited to present at the workshop and receive awards sponsored by Huawei.
  • For each track, the first-place team will receive a $1,000 USD cash prize.
  • The second- and third-place teams will receive $800 USD and $600 USD, respectively.

Important Dates

The shared timeline is currently aligned across both tracks, with milestones shown as completed, ongoing, or upcoming.

Time | Agenda | Note
April 2026 to June 10, 2026 | Registration | Registration open
April 30, 2026 | Phase 1 | Training set release
May 15, 2026 | Phase 1 | Test set release
June 1 to June 30, 2026 | Phase 1 | Public evaluation and leaderboard refresh
TBD (one day between July 2 and July 9, 2026) | Phase 2 | Unified online testing for the top 10 teams
July 1 to July 9, 2026 | Phase 2 | Final model submission
July 10 to July 30, 2026 | Phase 2 | Organizer-side internal testing
August 1, 2026 | Awards | Winning teams announced
August 2026 (TBD) | Awards | On-site award ceremony

General Rules

  • To ensure fairness, the top 10 teams are required to submit a technical report for reproducibility verification.
  • Each entry must be associated with one team and its affiliation, and all members of one team must register together.
  • Using multiple accounts to increase the number of submissions is strictly prohibited.
  • Results must follow the required format and submission instructions; otherwise, they will be considered invalid.
  • The best entry of each team will remain public on the leaderboard at all times.
  • The organizers reserve the absolute right to disqualify entries that are incomplete, illegible, late, or in violation of the rules.
  • In case of any inconsistency between the English and Chinese rules, the English content shall prevail.

Organizers

  • Geng Wang (Huawei)
  • Hong Lanqing (Huawei)
  • Huang Yuqi (CUHK)
  • Lee Tsz Fung (PolyU)
  • Li Jing (PolyU)
  • Li Piji (NUAA)
  • Luo Xuan (PolyU)
  • Wu Jibin (PolyU)
  • Zhao Libo (PolyU)

Advisors

Contact

For workshop-related inquiries, please contact humomni2026@googlegroups.com.